WO2022135071A1 - Playback control method and apparatus for headphones, electronic device and storage medium

Info

Publication number
WO2022135071A1
WO2022135071A1 (PCT/CN2021/134003; CN2021134003W)
Authority
WO
WIPO (PCT)
Prior art keywords
scene
parameter
controlled
headset
noise reduction
Prior art date
Application number
PCT/CN2021/134003
Other languages
English (en)
Chinese (zh)
Inventor
汶晨光
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2022135071A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/162Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor; Earphones; Monophonic headphones

Definitions

  • the present application relates to the field of computers, for example, to a headphone playback control method, apparatus, electronic device, and storage medium.
  • Users increasingly rely on terminal devices such as laptops, tablet computers, mobile phones and even wearable devices for entertainment anytime, anywhere.
  • For example, users can listen to music and watch videos through terminal devices to relax.
  • In many situations users choose to wear headphones. Headphones can effectively isolate noise in the external environment and thereby provide a better audio playback experience; in particular, noise-cancelling headphones with a noise reduction function can reduce ambient noise in real time and make the most of their own audio output, producing the best listening effect.
  • However, earphones, and especially noise-cancelling earphones, isolate not only the ambient noise but also sounds in the environment that matter to the user, so the user can easily miss important sound information in the environment.
  • For example, a noise-cancelling earphone may prevent the user from hearing a public transport station announcement, which causes great inconvenience in some scenarios.
  • The headphone playback control method, apparatus, electronic device and storage medium provided by the present application mainly solve the technical problem that external environmental sound isolated by the headphone in a specific scene cannot be perceived by the user.
  • The present application provides a headphone playback control method, including: detecting real-time scene parameters of the scene where the headset is currently located; matching the real-time scene parameters with to-be-controlled scene parameters in a parameter library, where at least one to-be-controlled scene parameter in the parameter library is a scene parameter of the scene where the headset is located, learned when it is detected that a first anti-noise reduction operation is performed on the headset; and performing a second anti-noise reduction operation after the real-time scene parameters are successfully matched with any to-be-controlled scene parameter in the parameter library.
  • the present application also provides a headphone playback control device, including:
  • a detection unit, configured to detect real-time scene parameters of the scene where the headset is currently located; a matching unit, configured to match the real-time scene parameters with to-be-controlled scene parameters in a parameter library, where at least one to-be-controlled scene parameter in the parameter library is a scene parameter of the scene where the headset is located, learned when a first anti-noise reduction operation performed on the headset is detected; and an anti-noise reduction unit, configured to perform a second anti-noise reduction operation after the real-time scene parameters are successfully matched with any to-be-controlled scene parameter in the parameter library.
  • The present application also provides an electronic device including a processor, a memory and a communication bus;
  • the communication bus is configured to connect the processor and the memory, and the processor is configured to execute one or more computer programs stored in the memory so as to implement the above headphone playback control method.
  • the present application also provides a storage medium, where one or more computer programs are stored in the storage medium, and the one or more computer programs can be executed by one or more processors, so as to implement the above-mentioned headphone playback control method.
  • FIG. 1 is a flowchart of a headphone playback control method according to an embodiment of the present application;
  • FIG. 2 is a flowchart of identifying a scene to be controlled and storing the to-be-controlled scene parameters according to an embodiment of the present application;
  • FIG. 3 is a flowchart of verifying whether an associated scene parameter is suitable as a to-be-controlled scene parameter according to an embodiment of the present application;
  • FIG. 4 is a control flowchart, according to an embodiment of the present application, for the case where the second anti-noise reduction operation is implemented by automatically reducing the output sound energy of the earphone;
  • FIG. 5 is a schematic structural diagram of a headphone playback control apparatus according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a hardware structure of an earphone according to an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of a headphone playback control system according to an embodiment of the present application.
  • The main practice in the related art is to preset certain sounds before the headphones leave the factory and store the audio data of these sounds in the memory of the headphones.
  • During playback, an audio acquisition unit collects the ambient sound signal and matches it with the preset audio data; if the matching succeeds, the collected ambient sound signal is played to the user through the headset, so that the user can hear the ambient sound.
  • This embodiment provides a headphone playback control method.
  • The headphone playback control method can be implemented by a single electronic device, or by two or more electronic devices in cooperation; refer to the flow shown in FIG. 1. S102 Detect real-time scene parameters of the scene where the headset is currently located.
  • the so-called real-time scene parameters refer to the scene parameters of the current scene.
  • the real-time scene parameters of the headset can represent the scene where the headset is currently located.
  • The sound in the environment differs between scenarios. For example, when the user is working in an office there will be keyboard tapping sounds; when the user is on public transportation there will be station announcement sounds; when the user is walking along a road there will generally be engine and horn sounds. Therefore, in some examples, the scene in which the headset is located can be characterized by detecting ambient sound signals.
  • The scene in which the headset is located may also be reflected by its state information, for example whether the headset is being worn, and whether it is in motion or at rest. If the headset is worn and in motion at the same time, the user is presumably moving while wearing the headset.
  • In addition to at least one of wearing state information and motion state information, the state information of the headset may include other state information such as temperature state information and humidity state information; the motion state information may even be refined into multiple levels according to the motion speed.
  • The ambient sound signal can be collected by an audio collection unit. The audio collection unit can be arranged on the earphone or on a terminal communicatively connected to the earphone, or can exist independently of the earphone and the terminal.
  • Although the audio collection unit can be located on either the terminal or the earphone, the ambient sound signal collected by the terminal, especially a mobile terminal such as a mobile phone or wearable device, may in some scenarios not match the ambient sound actually around the user. For example, when a mobile phone is placed in the user's clothes pocket it rubs against the pocket; this friction sound is picked up by the phone's audio collection unit, and because the pocket blocks external sound, the friction sound becomes the main ambient sound collected by the phone.
  • In such a case, the ambient sound of the environment where the user is located is quite different from the ambient sound of the environment where the mobile phone is located.
  • The ambient sound signal collected by the audio collection unit on the mobile phone therefore cannot be used as the sound signal of the environment where the user is located.
  • Accordingly, the electronic device can use the audio collection unit on the earphone to collect the ambient sound signal.
  • the earphone state information can be collected by sensors, such as at least one of a pressure sensor, a temperature sensor, an infrared sensor, a gravity sensor (gyroscope), and the like.
  • A sensor configured to detect the wearing state information of the headset is arranged on the headset.
  • A sensor configured to detect the motion state information of the headset may also be located on the headset.
  • Alternatively, it may be located on the mobile terminal: since the headset moves with the user and the mobile terminal is usually carried by the user, a sensor deployed on the mobile terminal can also be used to detect the motion state information of the earphone.
  • In some examples, sensors configured to collect motion state information of the headset are deployed on both the headset and the terminal, and the electronic device collects the state information of the headset through the sensors on the headset and on the terminal jointly.
  • In some examples the real-time scene parameters of the headset may include only the ambient sound signal, and in other examples only the status information of the headset.
  • In still other examples, the real-time scene parameters of the earphone may include both the ambient sound signal and the state information of the earphone; one possible representation is sketched below.
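  • The following Python sketch (not part of the application) illustrates one possible way to represent such a real-time scene parameter; the class name SceneParameters, the field names and the simple band-energy summary of the ambient sound are illustrative assumptions rather than anything specified by the embodiments.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SceneParameters:
    """Illustrative real-time scene parameter: ambient-sound features plus headset state."""
    audio_features: np.ndarray  # e.g. normalised band energies of the ambient sound signal
    wearing: bool               # wearing-state information
    moving: bool                # motion-state information


def ambient_features(pcm: np.ndarray, bands: int = 16) -> np.ndarray:
    """Summarise one frame of ambient sound by normalised energies in equal-width frequency bands."""
    spectrum = np.abs(np.fft.rfft(pcm)) ** 2
    edges = np.linspace(0, len(spectrum), bands + 1, dtype=int)
    energies = np.array([spectrum[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])
    total = energies.sum()
    return energies / total if total > 0 else energies


# Example: build a scene parameter from one second of (synthetic) 16 kHz ambient audio.
pcm = np.random.randn(16000)
scene = SceneParameters(audio_features=ambient_features(pcm), wearing=True, moving=False)
```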
  • S104 Match the real-time scene parameters with the to-be-controlled scene parameters in the parameter library.
  • The electronic device can match the real-time scene parameters with the to-be-controlled scene parameters pre-stored in the parameter library, so as to determine whether the real-time scene parameters match any to-be-controlled scene parameter in the parameter library.
  • A scene to be controlled refers to a scene in which the probability of the ambient sound being heard needs to be improved; a to-be-controlled scene parameter is therefore the real-time scene parameter corresponding to a scene in which the headset is to be controlled.
  • One or more scene parameters to be controlled can be stored in the parameter library, and each scene parameter to be controlled corresponds to a scene to be controlled.
  • At least one scene to be controlled is identified automatically by the electronic device, and correspondingly the parameter of at least one scene to be controlled in the parameter library is also determined automatically by the electronic device, without manual setting by programmers or users: the at least one to-be-controlled scene parameter pre-stored in the parameter library is the scene parameter of the scene where the headset is located, learned when it is detected that a first anti-noise reduction operation is performed on the headset. A minimal matching sketch is given below.
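  • Continuing the SceneParameters sketch above, the matching step (S104) and the automatic operation that follows a successful match (S106) might look as follows; the cosine-similarity threshold and the helper names are assumptions, since the embodiments do not prescribe a particular matching rule.

```python
import numpy as np


def parameters_match(a: SceneParameters, b: SceneParameters, threshold: float = 0.9) -> bool:
    """Assumed matching rule: state flags equal and audio features sufficiently similar."""
    if a.wearing != b.wearing or a.moving != b.moving:
        return False
    denom = np.linalg.norm(a.audio_features) * np.linalg.norm(b.audio_features)
    if denom == 0:
        return False
    return float(np.dot(a.audio_features, b.audio_features) / denom) >= threshold


def playback_control_step(realtime, parameter_library, perform_second_anti_noise_reduction):
    """S104/S106: match the real-time scene parameter against the parameter library and,
    on the first successful match, perform the second anti-noise reduction operation."""
    for to_be_controlled in parameter_library:
        if parameters_match(realtime, to_be_controlled):
            perform_second_anti_noise_reduction()
            return True
    return False
```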
  • A "noise reduction operation" is an operation that reduces the probability that the user hears the ambient sound.
  • An "anti-noise reduction operation" is an operation that increases the probability that the user hears the ambient sound.
  • At least two approaches can be adopted for an anti-noise reduction operation: first, reducing the sound energy output by the earphone to the user's ear; second, collecting the ambient sound signal and outputting it through the earphone.
  • The "first anti-noise reduction operation" refers to an operation in which the probability of the ambient sound being heard is increased manually.
  • There is also a second anti-noise reduction operation, which refers to an anti-noise reduction operation performed automatically, without a manual operation by the user.
  • The first anti-noise reduction operation may include a manual sound energy limiting operation and an operation of manually controlling the earphone to output an ambient sound signal.
  • The manual sound energy limiting operation is a user operation performed in order to reduce the sound energy from the earphones received by the user's ears.
  • The manual sound energy limiting operation can be the user manually reducing the output volume of the earphone, or the user manually controlling the audio playback to stop; in some other examples, it may be the user taking off an earphone that is in the wearing state while the earphone is playing.
  • The manual sound energy limiting operation may also be a combination of the above three operations: for example, the user manually reduces the output volume of the earphone and at the same time removes the earphone that was in the wearing state, which is regarded as a manual sound energy limiting operation; in other examples, the user manually controls the audio playback to stop and then immediately takes off the headphones that were in the wearing state, which is likewise regarded as a manual sound energy limiting operation.
  • The operation of manually controlling the earphone to output the ambient sound signal may include the user issuing an instruction that instructs the electronic device to collect the ambient sound signal and then play it through the earphone.
  • For example, an ambient sound collection control button may be provided on the earphone; if the user wants to hear the ambient sound, he can press the button, and after receiving the user's instruction the earphone collects the ambient sound signal and outputs it to the user.
  • The ambient sound signal may also be collected, according to the user's instruction, by the terminal communicatively connected to the earphone. One way of recognising these manual operations is sketched below.
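  • Purely as an illustration of how the manual operations listed above might be recognised, the following sketch enumerates them as events; all event names are assumptions, not terms from the application.

```python
from enum import Enum, auto


class UserEvent(Enum):
    VOLUME_DOWN = auto()             # user manually reduces the output volume
    PLAYBACK_STOPPED = auto()        # user manually stops audio playback
    HEADSET_REMOVED = auto()         # user takes off the headset while it is playing
    AMBIENT_PASSTHROUGH_ON = auto()  # user presses the ambient-sound collection button


FIRST_ANTI_NOISE_REDUCTION_EVENTS = {
    UserEvent.VOLUME_DOWN,
    UserEvent.PLAYBACK_STOPPED,
    UserEvent.HEADSET_REMOVED,
    UserEvent.AMBIENT_PASSTHROUGH_ON,
}


def is_first_anti_noise_reduction(event: UserEvent) -> bool:
    """True when a manual operation raises the probability of the ambient sound being heard."""
    return event in FIRST_ANTI_NOISE_REDUCTION_EVENTS
```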
  • S202 Detect real-time scene parameters of the scene where the headset is currently located, and monitor the first anti-noise reduction operation for the headset.
  • the electronic device may detect scene parameters of the scene where the earphone is located in real time, and monitor the first anti-noise reduction operation for the earphone. Both the process of detecting the real-time scene parameter and the process of detecting the first anti-noise reduction operation may be performed periodically, but the detection periods of the two detection processes may be the same or different.
  • S204 Determine whether the first anti-noise reduction operation is detected.
  • S206 Determine the scene parameter corresponding in time to the first anti-noise reduction operation as the associated scene parameter of the first anti-noise reduction operation.
  • When the first anti-noise reduction operation is detected, the electronic device needs to determine the real-time scene parameters corresponding to it. Usually the user performs the first anti-noise reduction operation because a sound that is important to him has appeared in the environment, so when the electronic device determines the real-time scene parameters corresponding to the detected first anti-noise reduction operation, it is in effect looking back at the scene parameters of the headset before and at the moment the first anti-noise reduction operation occurred. Therefore, after detecting the first anti-noise reduction operation, the electronic device can take the scene parameter corresponding in time to the first anti-noise reduction operation as the associated scene parameter of that operation.
  • After determining the associated scene parameters of the first anti-noise reduction operation, the electronic device can directly store them as to-be-controlled scene parameters: the user has just been in the scene corresponding to the associated scene parameters and has had the need to limit the sound energy of the headphones, so the user is very likely to have the same need in that scene in the future. The scene corresponding to the associated scene parameter can therefore be used as a scene to be controlled, and on this basis the associated scene parameter becomes a to-be-controlled scene parameter.
  • In other words, the electronic device can detect the first anti-noise reduction operation and, based on the detection of the real-time scene parameters, determine the associated scene parameter corresponding to that operation; a way of looking the parameter up is sketched below.
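  • One plausible way to implement this looking back is to keep a short history of recently detected scene parameters and read the latest one when the operation is observed, as in the sketch below; the buffer length and method names are assumptions.

```python
from collections import deque


class SceneHistory:
    """Keeps the most recent real-time scene parameters so that the parameter in effect
    when a first anti-noise reduction operation occurred can be looked up afterwards."""

    def __init__(self, max_entries: int = 10):
        self._entries = deque(maxlen=max_entries)

    def record(self, scene) -> None:
        # Called each time a new real-time scene parameter is detected (S202).
        self._entries.append(scene)

    def associated_scene_parameter(self):
        # S206: the scene parameter corresponding in time to the detected operation.
        return self._entries[-1] if self._entries else None
```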
  • In some cases, however, the associated scene parameter is not suitable as a to-be-controlled scene parameter. For example, if the user took off the headphones only because someone stopped him to ask for directions, that will not happen every time the same everyday scene occurs; yet if the electronic device stores the associated scene parameter as a to-be-controlled scene parameter, later occurrences of that scene, which contain no ambient sound that is important to the user, will still be identified as scenes to be controlled whenever the real-time scene parameters of the headset match the stored parameter.
  • To this end, this embodiment provides a process of learning the scene parameters of the scene where the headset is located and obtaining the to-be-controlled scene parameters, as shown in FIG. 3:
  • After detecting the first anti-noise reduction operation, the electronic device determines the real-time scene parameters corresponding to it; the determined real-time scene parameter is the associated scene parameter corresponding to the first anti-noise reduction operation.
  • S304 Continue to detect real-time scene parameters of the scene where the headset is located, and match the detected real-time scene parameters with associated scene parameters.
  • Since the first anti-noise reduction operation performed by the user in the scene corresponding to the associated scene parameter may be incidental, and to avoid taking the scene parameter of an incidental first anti-noise reduction operation as a to-be-controlled scene parameter, in this embodiment the electronic device continues, after obtaining the associated scene parameters, to detect the real-time scene parameters of the scene where the headset is located in order to verify the associated scene parameters. Verifying an associated scene parameter amounts to determining whether the user performs the first anti-noise reduction operation again when the scene corresponding to that parameter reappears.
  • If the user performs the first anti-noise reduction operation again, the user's first anti-noise reduction operation in this scene was not accidental and the user does have a high probability of needing to reduce the sound energy of the earphones in this scene; the corresponding associated scene parameter can therefore be used as a to-be-controlled scene parameter. Conversely, if the user does not perform the first anti-noise reduction operation again, the previous first anti-noise reduction operation was accidental and the corresponding associated scene parameter should not be stored as a to-be-controlled scene parameter.
  • By obtaining the real-time scene parameters of the headset and matching them with the associated scene parameters, the electronic device determines whether the scene corresponding to the associated scene parameters has reappeared.
  • After the electronic device detects the first anti-noise reduction operation and determines the corresponding associated scene parameter, it is uncertain when that scene will be detected again.
  • The associated scene parameter may therefore be stored first, but it needs to be stored separately from the to-be-controlled scene parameters in the electronic device.
  • If the match succeeds, the electronic device monitors for the first anti-noise reduction operation on the earphone within a first preset duration.
  • In monitoring for the first anti-noise reduction operation, the electronic device is in fact monitoring whether, in the reappeared associated scene, the user again manually reduces the sound energy output by the earphone to the ear or manually controls the earphone to output the ambient sound signal.
  • The first preset duration may be counted from the moment the electronic device determines that the real-time scene parameter and the associated scene parameter are successfully matched.
  • the first preset duration can be set by the programmer, or can be defined by the user.
  • If the electronic device detects the user's first anti-noise reduction operation within the first preset duration, performing the first anti-noise reduction operation in the same associated scene is not accidental; the associated scene should therefore be identified as a scene to be controlled, and the corresponding associated scene parameter should be used as a to-be-controlled scene parameter. In this case the electronic device stores the associated scene parameter as a to-be-controlled scene parameter.
  • Note that the first anti-noise reduction operation performed by the user the first time in the associated scene may differ from the one performed when the scene reappears. For example, the first time the user may take off the headphones he is wearing, thereby performing the first anti-noise reduction operation, while in the reappeared associated scene the user may complete the first anti-noise reduction operation by controlling the audio playback to stop.
  • If the electronic device fails to detect the user's first anti-noise reduction operation within the first preset duration, the user previously performed the first anti-noise reduction operation in the associated scene by accident; the associated scene should not be identified as a scene to be controlled, and the electronic device can discard the corresponding associated scene parameter. This verification flow is sketched below.
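  • The verification flow of FIG. 3 can be summarised by the following sketch, in which an associated scene parameter is kept apart from the parameter library until it is confirmed or discarded; the timer handling and the default first preset duration are assumptions.

```python
import time


class CandidateVerifier:
    """Verifies an associated scene parameter: it is promoted to a to-be-controlled scene
    parameter only if the user repeats the first anti-noise reduction operation within the
    first preset duration after the corresponding scene reappears; otherwise it is discarded."""

    def __init__(self, first_preset_duration: float = 60.0):
        self.first_preset_duration = first_preset_duration
        self.candidates = []            # associated scene parameters, stored apart from the library
        self._active_candidate = None
        self._window_started_at = None

    def on_realtime_scene(self, realtime, match_fn) -> None:
        # S304: a candidate's scene has reappeared, so start the monitoring window.
        for candidate in self.candidates:
            if match_fn(realtime, candidate):
                self._active_candidate = candidate
                self._window_started_at = time.monotonic()
                return

    def on_first_anti_noise_reduction(self, parameter_library: list) -> None:
        # A repetition inside the window confirms the candidate as a to-be-controlled parameter.
        if self._active_candidate is None:
            return
        if time.monotonic() - self._window_started_at <= self.first_preset_duration:
            parameter_library.append(self._active_candidate)
            self.candidates = [c for c in self.candidates if c is not self._active_candidate]
        self._reset()

    def expire_if_needed(self) -> None:
        # If the window lapses with no repeated operation, the candidate is discarded.
        if (self._window_started_at is not None
                and time.monotonic() - self._window_started_at > self.first_preset_duration):
            self.candidates = [c for c in self.candidates if c is not self._active_candidate]
            self._reset()

    def _reset(self) -> None:
        self._active_candidate = None
        self._window_started_at = None
```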
  • When the real-time scene parameters are successfully matched with a to-be-controlled scene parameter, the electronic device may automatically perform the second anti-noise reduction operation.
  • the second anti-noise reduction operation may reduce the output sound energy of the earphone, for example, reduce the output volume of the earphone, or pause audio playback. Because the output sound energy of the earphone itself is reduced, the output sound energy of the earphone received by the user's ear is reduced. In this case, the user's ear can hear more other sounds, including sounds in the external environment.
  • the second anti-noise reduction operation may be implemented by the electronic device collecting the ambient sound signal, and then outputting the collected ambient sound signal to the user's ear through the earphone.
  • the output sound energy of the earphone itself does not need to be reduced, but because the content played by the earphone itself contains the ambient sound, the probability of the user hearing the ambient sound is greatly improved.
  • When the headset plays the ambient sound signal to the user, the audio that was originally playing can be paused, or the two can be played together. If the earphone superimposes the audio originally being played with the ambient sound signal, the volume of the original audio can be appropriately reduced; both options are shown in the sketch below.
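  • Assuming a player object with set_volume(), pause() and play_ambient_passthrough() methods (an interface invented here for illustration only), the two forms of the second anti-noise reduction operation described above could be sketched as follows.

```python
def perform_second_anti_noise_reduction(player,
                                        mode: str = "duck_and_passthrough",
                                        duck_factor: float = 0.3) -> None:
    """Illustrative second anti-noise reduction operation.

    'reduce_energy'        - lower the headset output volume (or pause playback);
    'duck_and_passthrough' - keep playing at reduced volume and mix in the ambient signal.
    The `player` object and its methods are assumptions, not an interface from the application.
    """
    if mode == "reduce_energy":
        player.set_volume(player.volume * duck_factor)   # or player.pause()
    elif mode == "duck_and_passthrough":
        player.set_volume(player.volume * duck_factor)   # appropriately reduce the original audio
        player.play_ambient_passthrough(enabled=True)    # output the collected ambient sound
    else:
        raise ValueError(f"unknown mode: {mode}")
```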
  • In this way, the headset playback control method can automatically learn and determine, from the user's historical operations and without relying on manually configured scenes or audio, the scenes in which the user needs to pay attention to the ambient sound; it then identifies these scenes from the real-time scene data of the headset and automatically limits the sound energy of the headphones in them, preventing the user from missing important sounds in the environment and improving the user experience.
  • By detecting the first anti-noise reduction operation in advance and obtaining the scene parameters of the headset at the moment the operation is detected, the method can identify the scenes in which the user needs the probability of the ambient sound being heard to be improved, that is, the scenes in which the user needs to listen to the ambient sound; it takes these scenes as scenes to be controlled and stores the corresponding scene parameters in the parameter library as to-be-controlled scene parameters.
  • Subsequently, the real-time scene parameters of the scene where the headset is currently located are detected and matched with the to-be-controlled scene parameters pre-stored in the parameter library, so as to determine whether the current scene is one in which hearing of the ambient sound needs to be improved, that is, a scene to be controlled in which the second anti-noise reduction operation needs to be performed on the earphone.
  • If so, the second anti-noise reduction operation is performed automatically, increasing the probability that the user hears the ambient sound.
  • The scenes in which the user performs the first anti-noise reduction operation on the headphones are usually scenes in which the user needs to listen to the ambient sound. The electronic device can therefore automatically learn and summarize, from the user's manual operations combined with the real-time scene parameters of the headphones, the scenes to be controlled in which the user needs to listen to the ambient sound, so that in the subsequent process it can automatically and accurately identify the to-be-controlled scenes requiring sound energy limitation and automatically perform the second anti-noise reduction operation on the headphones in these scenes, preventing the user from missing important sounds in the environment.
  • Because the induction of the scenes to be controlled is based on the user's historical operations, it conforms to the user's habits and is equivalent to "personalized scene customization" for the user, thereby improving the intelligence of the playback control of the headset and allowing external environmental sounds that would otherwise be isolated by the headset in a specific scene to be heard.
  • Embodiment 2:
  • The user's habits may keep changing.
  • After the scene parameters of a scene have been stored by the electronic device as to-be-controlled scene parameters, it does not follow that they remain valid whenever matching real-time scene parameters are detected in the future.
  • In particular, the effect of the second anti-noise reduction operation may be cancelled manually by the user.
  • For example, when the second anti-noise reduction operation performed by the electronic device is an operation that limits the output sound energy of the headphones, the user may perform a first sound energy boosting operation, which controls the headset to boost the sound energy it outputs to the user's ears.
  • This situation indicates that the user's habits may have changed and that the corresponding to-be-controlled scene parameters may have become invalid. The electronic device therefore needs to determine, based on such detections, whether the user's habits have changed and whether the pre-stored to-be-controlled scene parameters need to be adjusted.
  • S402 Monitor the first sound energy boosting operation for the earphone within a second preset time period.
  • The "first sound energy boosting operation" is an operation, detected by a sensor or an input unit, in which the user manually boosts the sound energy output from the earphone to the ear.
  • the first sound energy boosting operation may include: putting on the earphone that was originally taken off during the earphone playing, manually increasing the output volume of the earphone, and manually controlling the start of audio playback.
  • the second preset duration may be calculated from the time when the electronic device automatically performs the second anti-noise reduction operation.
  • the second preset duration can be set by the programmer, or can be defined by the user.
  • S404 Determine whether the first sound energy boosting operation is detected within the second preset time period.
  • S406 Record the cumulative number of times the first sound energy boosting operation is performed for the target scene parameter to be controlled.
  • The "target to-be-controlled scene parameter" referred to here is the to-be-controlled scene parameter, among those pre-stored, that was previously matched by the real-time scene parameters.
  • For example, assume the parameter library stores to-be-controlled scene parameters a, b and c. If the real-time scene parameter obtained by the electronic device at some moment is successfully matched with to-be-controlled scene parameter a, then a is the target to-be-controlled scene parameter. After automatically performing the second anti-noise reduction operation according to the matching result, the electronic device determines whether the first sound energy boosting operation is detected within the second preset duration; if so, the electronic device increments by 1 the cumulative number of first sound energy boosting operations recorded for to-be-controlled scene parameter a.
  • S408 Determine whether the accumulated number of times corresponding to the target to-be-controlled scene parameter exceeds the preset number of times.
  • the preset number of times can be set by the programmer, or can be set by the user.
  • The size of the preset number of times reflects how hard it is for the electronic device to determine that a to-be-controlled scene parameter is invalid: the larger the value of the preset number of times, the harder it is for the to-be-controlled scene parameter to be determined invalid.
  • If it is determined that, under the target to-be-controlled scene parameter, the number of times the user has manually boosted the sound energy after the electronic device performed the second anti-noise reduction operation exceeds the preset number of times, the target to-be-controlled scene parameter has become invalid and the electronic device can delete it.
  • S412 Determine whether the real-time scene parameters of the headset match the target scene parameters to be controlled.
  • S414 Perform a second sound energy boosting operation to boost the output sound energy of the earphone.
  • When the real-time scene parameters no longer match the target to-be-controlled scene parameter, the electronic device can automatically perform a sound energy boosting operation, that is, the second sound energy boosting operation, to raise the output sound energy of the earphone. For example, in some examples of this embodiment, when the electronic device performs the second sound energy boosting operation, the headphone output sound energy can be restored to the level it had before the second anti-noise reduction operation was performed.
  • For example, suppose the headset is originally playing audio at output volume k1. When the electronic device determines that the scene parameters corresponding to the headphones are successfully matched with to-be-controlled scene parameter b, it reduces the volume of the headphones, for example to output volume k2; after the electronic device determines that the scene parameters corresponding to the headset no longer match to-be-controlled scene parameter b, it can control the headset to play audio at output volume k1 again.
  • Alternatively, if, when the scene parameters corresponding to the headset successfully match to-be-controlled scene parameter b, the electronic device controls the audio playback to stop, then after determining that the scene parameters no longer match parameter b it can control the headset to resume audio playback at the previous output volume k1. A sketch of this validity-monitoring and restore logic follows.
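  • A compact sketch of the validity monitoring of Embodiment 2 and of the volume restore in the k1/k2 example follows; the counter store, the default durations and the player interface are assumptions.

```python
class ValidityMonitor:
    """Counts first sound energy boosting operations that follow an automatic second
    anti-noise reduction operation; a target to-be-controlled scene parameter whose
    cumulative count exceeds the preset number of times is treated as invalid and deleted."""

    def __init__(self, second_preset_duration: float = 30.0, preset_times: int = 3):
        self.second_preset_duration = second_preset_duration
        self.preset_times = preset_times
        self._boost_counts = {}     # id(parameter) -> cumulative number of boosting operations

    def on_first_sound_energy_boost(self, target_parameter, boosted_at: float,
                                    second_op_at: float, parameter_library: list) -> None:
        # S404/S406: only boosts inside the second preset duration are counted.
        if boosted_at - second_op_at > self.second_preset_duration:
            return
        key = id(target_parameter)
        self._boost_counts[key] = self._boost_counts.get(key, 0) + 1
        # S408: once the preset number of times is exceeded, the parameter is deleted.
        if self._boost_counts[key] > self.preset_times:
            parameter_library[:] = [p for p in parameter_library if p is not target_parameter]


def restore_output_volume(player, previous_volume: float) -> None:
    """S412/S414: once the scene no longer matches, restore the volume (k2 back to k1)."""
    player.set_volume(previous_volume)
```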
  • In some examples, the electronic device may also pre-store the user's voice feature information, collect the ambient sound signal, and identify, based on the pre-stored voice feature information, whether the ambient sound signal contains the user's voice; if the user's voice is recognized, the electronic device may automatically perform the second anti-noise reduction operation. A rough sketch of such a check is given below.
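  • The following rough sketch only illustrates the control flow of such a check, reusing the ambient_features helper from the earlier sketch as a stand-in for whatever voiceprint comparison the device actually uses; the threshold is an assumption.

```python
import numpy as np


def contains_user_voice(ambient_pcm: np.ndarray,
                        stored_voice_features: np.ndarray,
                        threshold: float = 0.85) -> bool:
    """Placeholder check: compare band-energy features of the collected ambient sound with
    the pre-stored voice feature information of the user (a real device would use a proper
    voiceprint model; this only illustrates the control flow)."""
    features = ambient_features(ambient_pcm)   # reuses the helper from the earlier sketch
    denom = np.linalg.norm(features) * np.linalg.norm(stored_voice_features)
    if denom == 0:
        return False
    return float(np.dot(features, stored_voice_features) / denom) >= threshold
```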
  • the electronic device in this embodiment may be the headset itself, a terminal connected to the headset, or even other devices independent of the headset and the terminal.
  • For example, the headset is communicatively connected to a mobile phone, while the electronic device is a wearable device that is connected to the headset and the mobile phone, respectively.
  • the functions of the aforementioned electronic devices can also be jointly implemented by two or even more physical devices.
  • part of the process of the aforementioned headset playback control method can be implemented by the headset, and another part of the process can be implemented by a terminal communicatively connected to the headset.
  • the headset may detect real-time scene parameters of the scene where the headset is currently located. Then, the collected real-time scene parameters are transmitted to the terminal. After receiving the real-time scene parameters, the terminal matches the real-time scene parameters with the pre-stored parameters of the scene to be controlled.
  • If the matching succeeds, the second anti-noise reduction operation is performed, increasing the probability of the user hearing ambient sounds.
  • In addition, the electronic device can monitor the validity of the pre-stored to-be-controlled scene parameters based on user operations and promptly clear those that have become invalid, so as to avoid erroneously restricting the sound energy of the headset based on expired parameters; this improves the accuracy of recognizing scenes to be controlled and helps enhance the user experience.
  • As shown in FIG. 5, the headphone playback control apparatus 50 includes a detection unit 502, a matching unit 504 and an anti-noise reduction unit 506. The detection unit 502 is configured to detect the real-time scene parameters of the scene where the headphones are currently located; the matching unit 504 is configured to match the real-time scene parameters with the to-be-controlled scene parameters in the parameter library, where at least one to-be-controlled scene parameter in the parameter library is a scene parameter of the scene where the headset is located, learned when a first anti-noise reduction operation performed on the headset is detected; the anti-noise reduction unit 506 is configured to perform a second anti-noise reduction operation after the real-time scene parameters are successfully matched with any to-be-controlled scene parameter in the parameter library. The first anti-noise reduction operation and the second anti-noise reduction operation are both used to increase the probability of the ambient sound being heard.
  • the headphone playback control device 50 can implement the flow of the foregoing headphone playback control method, and the details of executing the headphone playback control method may refer to the introduction of the foregoing embodiments, which will not be repeated here.
  • the headset playback control device 50 may be deployed on the headset, or may be deployed on a terminal that is communicatively connected to the headset as an audio source for the headset. Even in some other examples, the headset playback control apparatus 50 may also be deployed on a device independent of the headset and the terminal, and the device may be communicatively connected with the headset and the terminal.
  • The present embodiments also provide a storage medium, which may include volatile or non-volatile, removable or non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, computer program modules or other data.
  • Storage media include, but are not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital video disc (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be configured to store the desired information and that can be accessed by a computer.
  • the storage medium may store one or more computer programs that can be read, compiled and executed by one or more processors, and in this embodiment, the storage medium may store a headphone playback control program.
  • the headphone playback control program can be used by one or more processors to execute any flow of headphone playback control described in the foregoing embodiments.
  • This embodiment also provides a computer program product, including a computer-readable device, where the computer program as shown above is stored on the computer-readable device.
  • the computer-readable device may include the computer-readable storage medium as described above.
  • the computer program product includes an electronic device.
  • This embodiment also provides an electronic device, as shown in FIG. 6: the electronic device 6 includes a processor 61, a memory 62, and a communication bus 63 configured to connect the processor 61 and the memory 62, where the memory 62 may be the aforementioned storage medium storing the earphone playback control program.
  • The processor 61 may read the headphone playback control program, compile it, and execute it to implement the headphone playback control method described in the foregoing embodiments.
  • This embodiment also provides an earphone and a terminal.
  • the earphone and the terminal cooperate with each other to implement the earphone playback control method provided by this embodiment:
  • The headset 70 shown in FIG. 7 includes a detection unit 71, an audio output unit 72, a first communication unit 73 and a controller 74, where the detection unit 71, the audio output unit 72 and the first communication unit 73 are all communicatively connected with the controller 74.
  • the earphone 70 is communicatively connected to the terminal through the first communication unit 73 .
  • the communication connection between the earphone 70 and the terminal may be a wired connection or a wireless connection.
  • the audio output unit 72 is configured to, under the control of the controller 74, output the audio data received by the first communication unit 73 from the terminal, such as the sound of playing a song or a video.
  • the detection unit 71 is configured to detect real-time scene parameters of the scene where the headset 70 is currently located under the control of the controller 74 .
  • The detection unit 71 may include at least one of a sound detection unit and a state detection unit, and in some examples includes both. The sound detection unit is configured to collect ambient sound signals of the environment where the earphone 70 is located.
  • the sound detection unit may be a microphone deployed on the earphone 70 .
  • the state detection unit may include multiple types of sensors deployed on the headset, including at least one of pressure sensors, temperature sensors, humidity sensors, infrared sensors, gravity sensors, acceleration sensors, and the like.
  • the first communication unit 73 can also transmit the real-time scene parameters collected by the detection unit 71 to the terminal, so that the terminal can match the real-time scene parameters with the pre-stored scene parameters to be controlled.
  • As shown in FIG. 8, the terminal 80 includes a second communication unit 81, a processor 82 and a memory 83; the processor 82 is communicatively connected to the second communication unit 81 and the memory 83, and the terminal 80 is communicatively connected to the headset through the second communication unit 81.
  • The second communication unit 81 is configured to transmit audio data to the earphone under the control of the processor 82, realizing audio playback through the earphone; in addition, the second communication unit 81 is also configured to receive the real-time scene parameters transmitted by the earphone that represent the scene where the earphone is currently located.
  • The processor 82 is configured to match the real-time scene parameters with the to-be-controlled scene parameters pre-stored in the memory 83.
  • The to-be-controlled scene parameters are the scene parameters corresponding to the earphones when the first anti-noise reduction operation was previously detected, the first anti-noise reduction operation being the manual operation described in the foregoing embodiments.
  • the real-time scene parameters acquired by the processor 82 may be partly from the headset and partly from the acquisition unit of the terminal 80 itself.
  • In some examples, the terminal 80 is also provided with an audio acquisition unit, such as a microphone, and with multiple types of sensors.
  • The processor 82 can combine the real-time scene parameters collected by the acquisition unit on the terminal 80 with the real-time scene parameters received from the headset to obtain comprehensive scene parameters; combining real-time scene parameters from two different sources makes it possible to determine more accurately the scene the headset is currently in. One possible combination rule is sketched below.
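  • A minimal sketch of one assumed combination rule, again reusing the SceneParameters class from the earlier sketch, is given below.

```python
def combine_scene_parameters(from_headset, from_terminal):
    """Assumed combination rule: average the audio features from the two sources, trust the
    headset for the wearing state, and mark the headset as moving if either source says so."""
    return SceneParameters(
        audio_features=(from_headset.audio_features + from_terminal.audio_features) / 2.0,
        wearing=from_headset.wearing,
        moving=from_headset.moving or from_terminal.moving,
    )
```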
  • The processor 82 may monitor for the first anti-noise reduction operation on the headset. When the first anti-noise reduction operation is detected, the processor 82 determines the scene parameter corresponding in time to the first anti-noise reduction operation as the associated scene parameter of that operation, and then stores the associated scene parameter in the memory 83 as a to-be-controlled scene parameter.
  • Alternatively, the processor 82 may first match the real-time scene parameters with the associated scene parameters. If they are successfully matched, the processor 82 monitors for the first anti-noise reduction operation on the earphone within the first preset duration; if the first anti-noise reduction operation is detected within the first preset duration, the processor 82 stores the associated scene parameter in the memory 83 as a to-be-controlled scene parameter.
  • In some examples, the memory 83 also stores the voice feature information of the user, and the processor 82 can collect the ambient sound signal through the microphone of the terminal 80 or the microphone on the earphone and then, according to the voice feature information stored in the memory 83, identify the user's voice in the collected ambient sound signal. After recognizing the user's voice, the processor 82 automatically performs the second anti-noise reduction operation.
  • When the processor 82 automatically performs the second anti-noise reduction operation, it may transmit a control instruction to the earphone through the second communication unit 81 instructing the earphone to reduce its output sound energy, for example instructing the headset to lower its output volume.
  • Alternatively, the processor 82 may directly control the audio to stop playing, so that the second communication unit 81 temporarily stops transmitting audio data to the earphone.
  • The processor 82 may also collect the ambient sound signal through the microphone of the terminal 80 itself or the microphone on the earphone, and control the second communication unit 81 to transmit the collected ambient sound signal to the earphone, so that the earphone outputs the collected ambient sound signal.
  • The manual sound energy boosting operation is an operation of manually increasing the sound energy output by the earphone to the user's ear.
  • If the processor 82 detects the manual sound energy boosting operation within the second preset duration, it records the cumulative number of times the manual sound energy boosting operation has been performed for the target to-be-controlled scene parameter; if the accumulated number of times corresponding to the target to-be-controlled scene parameter exceeds the preset number of times, the processor 82 deletes the target to-be-controlled scene parameter from the memory 83.
  • In addition, when the real-time scene parameters no longer match the target to-be-controlled scene parameter, the processor 82 automatically restores the output sound energy of the earphones to the level it had before the second anti-noise reduction operation.
  • the aforementioned headset 70 and the terminal 80 can cooperate with each other to implement the headset playback control method in the aforementioned embodiment.
  • the details of the headset playback control method implemented by the headset 70 and the terminal 80 please refer to the introduction of the aforementioned embodiment, which will not be repeated here.
  • As shown in FIG. 9, the earphone playback control system 9 includes an earphone 70 and a terminal 80 communicatively connected to the earphone 70, where the earphone 70 is the earphone of FIG. 7 and the terminal 80 is the terminal of FIG. 8.
  • FIG. 9 is only used to illustrate the connection between the earphone 70 and the terminal 80.
  • The earphone 70 is not limited to the wired earphone shown in FIG. 9;
  • the earphone 70 may be a wireless earphone, such as a Bluetooth earphone.
  • The headphone playback control system, headphone, terminal, electronic device and storage medium provided in this embodiment can automatically summarize, based on the user's manual operations combined with the real-time scene parameters of the headphone, the to-be-controlled scenes in which the user needs to listen to the ambient sound, so that in the subsequent process the to-be-controlled scenes requiring sound energy limitation can be identified automatically and accurately, and the sound energy of the headphones can be limited automatically in these scenes, preventing the user from missing important sounds in the environment.
  • Because the induction of the scenes to be controlled is based on the user's historical operations, it conforms to the user's habits and amounts to "personalized customization of the scenes to be controlled" for the user, thereby improving the intelligence of the headset playback control and enhancing the user experience.
  • The functional modules/units in the system and the device described above can be implemented as software (for example, computer program code executable by a computing device), firmware, hardware, or an appropriate combination thereof.
  • The division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components.
  • Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit.
  • As is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, computer program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and can include any information delivery medium. Therefore, the present application is not limited to any particular combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Headphones And Earphones (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)

Abstract

The invention relates to a playback control method and apparatus for headphones, an electronic device and a storage medium. The headphone playback control method comprises the steps of: detecting a real-time scene parameter of the scene in which the headphones are currently located (S102); matching the real-time scene parameter with to-be-controlled scene parameters in a parameter library (S104), at least one to-be-controlled scene parameter in the parameter library being a scene parameter of the scene in which the headphones are located, learned when a first anti-noise reduction operation is performed on the headphones; and, after the real-time scene parameter has been successfully matched with any of the to-be-controlled scene parameters in the parameter library, performing a second anti-noise reduction operation (S106).
PCT/CN2021/134003 2020-12-21 2021-11-29 Procédé et appareil de commande de lecture pour écouteurs, dispositif électronique et support de stockage WO2022135071A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011515433.2A CN114647397A (zh) 2020-12-21 2020-12-21 耳机播放控制方法、装置、电子设备及存储介质
CN202011515433.2 2020-12-21

Publications (1)

Publication Number Publication Date
WO2022135071A1 true WO2022135071A1 (fr) 2022-06-30

Family

ID=81991552

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134003 WO2022135071A1 (fr) 2020-12-21 2021-11-29 Procédé et appareil de commande de lecture pour écouteurs, dispositif électronique et support de stockage

Country Status (2)

Country Link
CN (1) CN114647397A (fr)
WO (1) WO2022135071A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117041803B (zh) * 2023-08-30 2024-03-22 江西瑞声电子有限公司 耳机播放的控制方法、电子设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103796125A (zh) * 2013-11-21 2014-05-14 广州视源电子科技股份有限公司 一种基于耳机播放的声音调节方法
US20160192073A1 (en) * 2014-12-27 2016-06-30 Intel Corporation Binaural recording for processing audio signals to enable alerts
CN108429955A (zh) * 2018-04-19 2018-08-21 南京拙达科创加速器有限公司 释放环境声音进入耳机的智能装置及方法
CN110708625A (zh) * 2019-09-25 2020-01-17 华东师范大学 基于智能终端的环境声抑制与增强可调节耳机系统与方法
CN111935583A (zh) * 2020-08-24 2020-11-13 Oppo(重庆)智能科技有限公司 耳机模式控制方法、装置、终端设备、系统以及存储介质

Also Published As

Publication number Publication date
CN114647397A (zh) 2022-06-21

Similar Documents

Publication Publication Date Title
US11705878B2 (en) Intelligent audio output devices
WO2020228815A1 (fr) Procédé et dispositif de réveil vocal
CN107509153B (zh) 声音播放器件的检测方法、装置、存储介质及终端
JP2019117623A (ja) 音声対話方法、装置、デバイス及び記憶媒体
US9479883B2 (en) Audio signal processing apparatus, audio signal processing method, and program
JP2020031444A (ja) ウェアラブルデバイスの状態に基づいたコンパニオン通信デバイスの動作の変更
US8909537B2 (en) Device capable of playing music and method for controlling music playing in electronic device
WO2019033987A1 (fr) Procédé et appareil d'invite, support d'informations et terminal
EP3001422A1 (fr) Commande automatisée de lecteur multimédia sur la base des paramètres physiologiques détectés d'un utilisateur
CN105451111A (zh) 耳机播放控制方法、装置及终端
WO2020042498A1 (fr) Procédé de traitement d'anomalie d'écouteur, écouteur, système et support d'informations
CN107493500A (zh) 多媒体资源播放方法及装置
CN113038337B (zh) 一种音频播放方法、无线耳机和计算机可读存储介质
CN109067965B (zh) 翻译方法、翻译装置、可穿戴装置及存储介质
JP2023542968A (ja) 定位されたフィードバックによる聴力増強及びウェアラブルシステム
WO2022227655A1 (fr) Procédé et appareil de lecture de sons, dispositif électronique et support de stockage lisible
WO2019013820A1 (fr) Alerte d'utilisateurs concernant des événements
WO2022135071A1 (fr) Procédé et appareil de commande de lecture pour écouteurs, dispositif électronique et support de stockage
CN111988704B (zh) 声音信号处理方法、装置以及存储介质
CN113473304A (zh) 啸叫声抑制方法、装置、耳机及存储介质
WO2021043414A1 (fr) Commande de détection de blocage de microphone
WO2023159717A1 (fr) Procédé de commande de fonctionnement d'écouteurs-boutons, écouteurs-boutons en anneau et support de stockage
CN109151195B (zh) 一种跑步运动时耳机控制方法及其控制系统
CN112102829A (zh) 一种基于语音识别的播放器控制系统及其方法
JP2020127071A (ja) 電子機器及びその制御方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21909070

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06.11.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21909070

Country of ref document: EP

Kind code of ref document: A1