WO2021000817A1 - Method and device for processing ambient sound - Google Patents

Method and device for processing ambient sound

Info

Publication number
WO2021000817A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
head-mounted device
electronic device
scene
Prior art date
Application number
PCT/CN2020/098733
Other languages
English (en)
Chinese (zh)
Inventor
Wang Dawei (王大伟)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021000817A1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/18 - Status alarms
    • G08B 21/24 - Reminder alarms, e.g. anti-loss alarms
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 1/00 - Substation equipment, e.g. for use by subscribers
    • H04M 1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/725 - Cordless telephones
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00 - Details of transducers, loudspeakers or microphones
    • H04R 1/10 - Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00 - Details of transducers, loudspeakers or microphones
    • H04R 1/10 - Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R 1/1091 - Details not provided for in groups H04R1/1008 - H04R1/1083
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 2250/00 - Details of telephonic subscriber devices
    • H04M 2250/12 - Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion

Definitions

  • This application relates to the field of terminals and communication technologies, and in particular to environmental sound processing methods and related devices.
  • Bluetooth headsets and noise-cancelling headsets are becoming more and more popular due to their good sound effects and portability.
  • Users cannot hear outside sounds clearly when wearing headphones, especially noise-cancelling headphones, which cancel out all sounds in the external environment, including car horns and alarm sounds intended to alert users. This exposes users to a certain degree of danger.
  • After the earphone shields the ambient sound, the user cannot perceive the external environment, which is not always what the user wants.
  • When a user wears a headset, he or she still needs to perceive the external environment and respond accordingly.
  • In different scenarios, users have different perception requirements for the external environment. For example, when a user wears a headset while walking on a road, he needs to know whether there is a vehicle on the road and the direction of the vehicle, to avoid possible safety hazards. As another example, when the user wears headphones in the living room, he needs to know whether there are visitors. How to meet the user's need to perceive the external environment in different scenarios is a technical problem that currently needs to be solved.
  • the present application provides an environmental sound processing method and related devices, which can meet the user's perception needs of the external environment in different scenarios, so that the user can respond accordingly.
  • In a first aspect, the present application provides an environmental sound processing method, which may include: a head-mounted device determining the scene where a user is currently located; the head-mounted device collecting a sound signal of the external environment according to the scene where the user is currently located; and, when the collected sound signal matches the preset environmental sound corresponding to the current scene, the head-mounted device outputting the sound signal of the external environment or outputting prompt information.
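The claimed flow can be illustrated with a minimal sketch. All function names and the scene/sound tables below are hypothetical, invented for demonstration; they are not taken from the application. The device looks up the preset environmental sounds for the current scene and reacts only when a captured sound matches one of them.

```python
# Hypothetical sketch of the first-aspect method: determine the scene,
# select the preset environmental sounds for that scene, and react when
# a captured sound matches one of them.

def process_ambient_sound(current_scene, captured_labels, scene_presets):
    """Return the device action given the sound labels detected in the
    external environment (captured_labels)."""
    presets = scene_presets.get(current_scene, set())
    matched = presets & set(captured_labels)
    if matched:
        # Pass the external sound through, or emit prompt information.
        return ("prompt", sorted(matched))
    return ("ignore", [])

# Illustrative scene-to-sound table (not from the claims).
scene_presets = {
    "road": {"car_horn", "siren"},
    "living_room": {"doorbell", "knock"},
}

print(process_ambient_sound("road", ["music", "car_horn"], scene_presets))
# ('prompt', ['car_horn'])
```

A horn heard in the "living_room" scene would be ignored, which is the point of making the preset sounds scene-dependent.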
  • The technical solution described in the first aspect can output the sound signal of the external environment or output prompt information when the user needs to perceive the external environment, so that the user can pay attention to the external environment and make a corresponding response.
  • In some embodiments, the head-mounted device determining the user's current scene includes one or more of the following: the head-mounted device determines the user's current scene according to the user's current location information; or, the head-mounted device determines the user's current scene based on the user's behavioral and physical sign information; or, the head-mounted device determines the user's current scene based on recognized voice content.
  • In this way, the head-mounted device can automatically determine the user's current scene based on the acquired data, avoiding manual settings and improving the user experience.
  • the user's location information can be determined by at least one of the global navigation satellite system, base station positioning technology, wireless indoor positioning technology, or data collected by sensors;
  • the user's behavioral sign information can be determined from data sensed by at least one of an acceleration sensor, a temperature sensor, a heart rate sensor, or a pulse sensor;
  • the voice content can be collected by a sound pickup device such as a microphone.
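As a rough illustration of how these three inputs (location, behavioral/physical signs, recognized speech) might be combined into one scene decision, here is a rule-based sketch. The rules and thresholds are invented for demonstration and are not part of the disclosure; a real device could equally use a learned classifier.

```python
# Illustrative scene inference from the three signal sources named above.
# All rules and thresholds are assumptions made for this sketch.

def infer_scene(location_type=None, speed_mps=None, speech_keywords=()):
    if location_type == "home":
        return "living_room"
    if location_type == "outdoors" and speed_mps is not None and speed_mps > 0.5:
        return "road"                # user is moving outdoors, e.g. walking
    if "boarding" in speech_keywords:
        return "airport"             # scene hinted by recognized speech
    return "unknown"

print(infer_scene(location_type="outdoors", speed_mps=1.4))  # road
```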
  • In other embodiments, the head-mounted device determines the scene in which the user is located according to a received user operation. Specifically, the "Settings" application starts a "manual mode" according to the received user operation, and then monitors the user's operations to set the corresponding scene.
  • the preset environmental sound is obtained from the corresponding relationship between the scene and the preset environmental sound.
  • the corresponding relationship between the scene and the preset environmental sound is preset, or is obtained through a machine learning algorithm.
  • In this way, the preset environmental sound corresponding to the scene can be obtained according to the scene; that is, the preset environmental sound can change as the scene changes, which better satisfies the user's need to perceive the external environment in different scenes.
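A minimal sketch of the scene-to-preset-sound correspondence described above. The table below is assumed for illustration; per the text, such a table could be preset or produced by a machine-learning algorithm rather than written by hand.

```python
# Illustrative correspondence between scenes and preset environmental sounds.
# Entries are made up for demonstration, not taken from the application.

PRESET_SOUNDS = {
    "road": ["car_horn", "siren", "bicycle_bell"],
    "living_room": ["doorbell", "knock", "phone_ring"],
    "office": ["name_called", "fire_alarm"],
}

def presets_for(scene):
    # Changing the scene changes which environmental sounds are watched for.
    return PRESET_SOUNDS.get(scene, [])

print(presets_for("road"))  # ['car_horn', 'siren', 'bicycle_bell']
```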
  • In other embodiments, the preset environmental sound is set by the user. Specifically, the "Settings" application starts a "manual mode" according to the received user operation, and then monitors the user's operations to set the corresponding scenes and the preset environmental sound corresponding to each scene.
  • In some embodiments, when the collected sound signal includes the preset environmental sound, it is determined that the collected sound signal matches the preset environmental sound corresponding to the current scene. Alternatively, when the collected sound signal includes the preset environmental sound and the preset environmental sound reaches a preset threshold, it is determined that the collected sound signal matches the preset environmental sound corresponding to the current scene. In this way, the user is reminded only when the user needs to perceive the external environment, which avoids disturbing the user with excessive reminders.
  • the threshold includes at least one of a loudness threshold, a duration threshold, or a repetition threshold of the preset environmental sound.
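The threshold check might look like the sketch below. Combining the three thresholds with a logical AND is a design choice made for this illustration (the text only says the threshold includes at least one of them), and the field names and default values are assumptions.

```python
# Hedged sketch of the threshold check: a detected preset sound only counts
# as a match once its loudness, duration, and repetition count reach the
# configured thresholds. Defaults are illustrative.

def meets_threshold(detection, loudness_db=60.0, min_duration_s=0.5, min_repeats=1):
    return (detection["loudness_db"] >= loudness_db
            and detection["duration_s"] >= min_duration_s
            and detection["repeats"] >= min_repeats)

horn = {"loudness_db": 75.0, "duration_s": 1.2, "repeats": 2}
print(meets_threshold(horn))  # True: loud and long enough to warrant a prompt
```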
  • the prompt information includes one or more of the following: voice, vibration feedback, or light feedback.
  • the voice includes any one of the following: voice content matching the preset environmental sound, or preset specific voice prompt content.
  • In some embodiments, the head-mounted device pauses playing its current audio data when outputting the prompt voice. When the prompt voice stops, the head-mounted device can detect the duration for which the prompt voice has been stopped, and when that duration reaches a preset duration threshold, the current audio data continues to be played. In this way, playback of the current audio data resumes only after a preset time has passed since the prompt voice finished, which leaves time for the user to respond to the specific sound signal.
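The pause/resume behavior can be modeled as a small state machine: playback pauses while the prompt voice plays, and resumes only after a preset interval has elapsed since the prompt stopped. Class and method names here are invented for illustration.

```python
# Illustrative state machine for the pause/resume timing described above.

class PromptPlayer:
    def __init__(self, resume_delay_s=3.0):
        self.resume_delay_s = resume_delay_s   # preset duration threshold
        self.state = "playing"
        self.prompt_stopped_at = None

    def start_prompt(self, now):
        self.state = "prompting"               # current audio is paused here

    def stop_prompt(self, now):
        self.state = "waiting"
        self.prompt_stopped_at = now           # start timing the quiet interval

    def tick(self, now):
        # Resume once the time since the prompt stopped reaches the threshold.
        if self.state == "waiting" and now - self.prompt_stopped_at >= self.resume_delay_s:
            self.state = "playing"
        return self.state

p = PromptPlayer(resume_delay_s=3.0)
p.start_prompt(now=0.0)
p.stop_prompt(now=2.0)
print(p.tick(now=4.0))  # still "waiting": only 2 s have elapsed
print(p.tick(now=5.0))  # "playing": the 3 s quiet interval has been reached
```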
  • In some embodiments, the method further includes stopping the output of the prompt information in any one of the following ways: when the duration of the prompt information reaches a preset time, stopping the prompt information; or, when the detected sound signal of the external environment no longer includes the preset environmental sound, stopping the prompt information; or, in response to a user operation, stopping the prompt information. In this way, disturbance caused by excessive reminders can be avoided.
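The three stop conditions combine with a logical OR: any one of them ends the prompt. Argument names in this sketch are assumptions, not claim language.

```python
# Illustrative check of the three prompt-stop conditions listed above.

def should_stop_prompt(elapsed_s, max_duration_s, sound_still_present, user_dismissed):
    return (elapsed_s >= max_duration_s      # prompt has lasted long enough
            or not sound_still_present       # preset sound no longer detected
            or user_dismissed)               # user operation dismissed it

print(should_stop_prompt(2.0, 5.0, sound_still_present=False, user_dismissed=False))
# True: the horn stopped, so the reminder stops too
```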
  • In a second aspect, the present application also provides a head-mounted device, including: one or more processors, a memory, and a communication module. The memory is coupled with the one or more processors and is used to store computer program code; the computer program code includes computer instructions, and the one or more processors invoke the computer instructions to cause the head-mounted device to execute:
  • when the collected sound signal matches the preset environmental sound corresponding to the current scene, the head-mounted device outputs the sound signal of the external environment or outputs prompt information.
  • The technical solution described in the second aspect can output the sound signal of the external environment or output prompt information when the user needs to perceive the external environment, so that the user can pay attention to the external environment and make a corresponding response.
  • In some embodiments, determining the user's current scene includes one or more of the following: determining the current scene according to the user's current location information; or, determining the current scene according to the user's behavioral and physical sign information; or, determining the current scene according to recognized voice content.
  • The head-mounted device uses data collected by at least one of a global navigation satellite system, base station positioning technology, wireless indoor positioning technology, or sensors to determine the user's location information; the head-mounted device uses data sensed by at least one of an acceleration sensor, a temperature sensor, a heart rate sensor, or a pulse sensor to determine the user's behavioral sign information; and the head-mounted device collects voice content through a sound pickup device such as a microphone.
  • the preset environmental sound is obtained from the corresponding relationship between the scene and the preset environmental sound.
  • the corresponding relationship between the scene and the preset environmental sound is preset, or is obtained through a machine learning algorithm.
  • the corresponding preset environmental sound in the scene can be obtained according to the scene, that is, the preset environmental sound can be changed according to the change of the scene, which more satisfies the user's needs for perception of the external environment in different scenes.
  • In other embodiments, the preset environmental sound is set by the user. Specifically, the "Settings" application starts a "manual mode" according to the received user operation, and then monitors the user's operations to set the corresponding scenes and the preset environmental sound corresponding to each scene.
  • In some embodiments, when the collected sound signal includes the preset environmental sound, it is determined that the collected sound signal matches the preset environmental sound corresponding to the current scene. Alternatively, when the collected sound signal includes the preset environmental sound and the preset environmental sound reaches a preset threshold, it is determined that the collected sound signal matches the preset environmental sound corresponding to the current scene. In this way, the user is reminded only when the user needs to perceive the external environment, which avoids disturbing the user with excessive reminders.
  • the threshold includes at least one of a loudness threshold, a duration threshold, or a repetition threshold of the preset environmental sound.
  • the prompt information includes one or more of the following: voice, vibration feedback, or light feedback.
  • the voice includes any one of the following: voice content matching the preset environmental sound, or preset specific voice prompt content.
  • In some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the head-mounted device to: pause playing the current audio data when outputting the prompt voice; detect the duration for which the prompt voice has been stopped; and continue to play the current audio data when that duration reaches the preset duration threshold. In this way, playback of the current audio data resumes only after a preset time has passed since the prompt voice finished, which leaves time for the user to respond to the specific sound signal.
  • In some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the head-mounted device to execute any of the following: when the duration of the prompt information reaches a preset time, terminating the prompt information; or, when the detected sound signal of the external environment no longer includes the preset environmental sound, terminating the prompt information; or, in response to a user operation, terminating the prompt information. In this way, disturbance caused by excessive reminders can be avoided.
  • the present application also provides an audio playback system.
  • the audio playback system may include an electronic device and a head-mounted device.
  • the head-mounted device may be the head-mounted device of the second aspect or of any possible implementation manner of the second aspect.
  • the present application provides a computer program product containing instructions.
  • When the computer program product runs on an electronic device, the electronic device is caused to perform the method described in the first aspect or any possible implementation manner of the first aspect.
  • the present application provides a computer-readable storage medium, including instructions.
  • When the instructions are executed on an electronic device, the electronic device is caused to perform the method described in the first aspect or any possible implementation manner of the first aspect.
  • Fig. 1 is a schematic structural diagram of an audio playback system provided by an embodiment of the application.
  • FIG. 2 is a schematic diagram of the structure of an electronic device provided by an embodiment of the application.
  • FIG. 3 is a schematic diagram of the structure of a head-mounted device provided by an embodiment of the application.
  • FIGS. 4A-4B are schematic diagrams of enabling the "reminder function" provided by an embodiment of this application.
  • FIG. 5 is a schematic flowchart of an environmental sound processing method provided by an embodiment of the application.
  • FIGS. 6A-6D are schematic diagrams of an interface for manually setting environmental sound content and thresholds according to an embodiment of the application.
  • In this embodiment of the application, the electronic device or head-mounted device can determine, based on the environmental sound, whether the user needs to perceive the external environment in the current scene. If so, the electronic device or head-mounted device can output prompt information or the sound signal of the external environment, so that the user can perceive the external environment and respond accordingly.
  • In this way, the electronic device or head-mounted device can automatically recognize the scene the user is in and prompt the user according to that scene, which in turn can meet the user's need to perceive the external environment in different scenarios.
  • the “reminder function” may be a service or function provided by the electronic device, and may be installed in the electronic device in the form of an APP.
  • the “reminder function” can support the electronic device to prompt the user when the user uses the head-mounted device.
  • The electronic device prompting the user when the user uses the head-mounted device means that, when the user uses the head-mounted device, the electronic device or the head-mounted device can determine, according to the preset environmental sound, whether the user needs to perceive the external environment in the current scene. When it is determined that the user needs to perceive the external environment, the electronic device or head-mounted device sends out prompt information or the sound signal of the external environment to remind the user to pay attention to the external environment in the current scene.
  • the "reminder function” supports the reminder function provided by the electronic device when the user uses the head-mounted device, refer to the related description of the subsequent embodiments, and will not be repeated here.
  • the electronic device or the head-mounted device can determine whether the user needs to perceive the external environment in the current scene according to the environmental sound. If so, the electronic device or the head-mounted device can output prompt information or sound signals of the external environment, so that the user can perceive the external environment, so as to make a corresponding response, thereby improving the safety of the user using the head-mounted device.
  • When the "reminder function" is turned off, the electronic device and the head-mounted device will not determine whether the user needs to perceive the external environment, and will not send out prompt messages.
  • The user can choose whether to activate the reminder function according to his own needs. For example, when a user is resting at home and only wants to listen to music through a head-mounted device without being disturbed by the external environment, he can choose to turn off the "reminder function". When the user is walking on the road and using the head-mounted device to make a phone call, and wants to hear the horns of vehicles on the road to improve walking safety, he can choose to turn on the "reminder function". In this way, the needs of users can be better met and the user experience improved.
  • "Reminder function" is merely a term used in this embodiment; its meaning has been described in this embodiment, and its name does not constitute any limitation on this embodiment.
  • FIG. 1 shows a schematic structural diagram of an audio playback system 1000 provided by an embodiment of the present application.
  • the audio playback system 1000 may include: an electronic device 100 and a head-mounted device 300.
  • the electronic device 100 is used to send audio signals to the head-mounted device 300 for playback.
  • the electronic device 100 may be a portable electronic device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), and a wearable device.
  • portable electronic devices include but are not limited to portable electronic devices equipped with iOS, Android, Microsoft or other operating systems.
  • the aforementioned portable electronic device may also be other portable electronic devices, such as a laptop computer with a touch-sensitive surface (such as a touch panel).
  • the electronic device 100 may not be a portable electronic device, but a desktop computer or a vehicle-mounted device with a touch-sensitive surface (such as a touch panel).
  • the electronic device 100 may obtain the current scene, determine the preset environmental sound in the current scene, collect the sound signal of the external environment, match the preset environmental sound, and prompt the user.
  • the electronic device may determine the current scene of the user according to at least one of the user's current location information, the user's behavior sign information, and the voice content of the external environment.
  • the head-mounted device 300 is used to convert the audio signal provided by the electronic device 100 into a sound signal for the user to listen to.
  • In some embodiments, the head-mounted device 300 is a headset.
  • The headset 300 may be an in-ear earphone, an over-ear headphone, or an earbud earphone.
  • the head-mounted device 300 may be a wired head-mounted device or a wireless head-mounted device.
  • the head-mounted device 300 when the head-mounted device 300 is a wired head-mounted device, it can communicate with the electronic device 100 in a plug-in manner.
  • When the head-mounted device 300 is a wireless head-mounted device, it may establish a communication connection with the electronic device 100 through a cellular network, a WiFi network, a Bluetooth network, or other wireless communication methods.
  • When the head-mounted device 300 is in communication with the electronic device 100, the head-mounted device 300 converts the electrical signal emitted by the media player in the electronic device 100 into a sound signal and plays it through the speaker close to the ear, so that the user can listen to various audio signals without affecting others.
  • the media player refers to an application program used in the electronic device 100 to play multimedia files, for example, "Kugou Music", “Youku Video", and "QQ Music”.
  • the head-mounted device 300 may also obtain the current scene, determine the preset environmental sound in the current scene, collect the sound signal of the external environment, match the preset environmental sound, and prompt the user.
  • the head-mounted device 300 may also determine the current scene of the user according to at least one of the user's current location information, the user's behavior sign information, and the voice content of the external environment.
  • FIG. 2 shows a schematic diagram of the structure of the electronic device 100.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the electronic device 100 may also include one or more processors 110.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use those instructions or data again, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves the efficiency of the electronic device 100.
  • the processor 110 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to realize the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
• the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and so on.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
  • the wireless communication module 160 may include a Bluetooth module, a Wi-Fi module, and the like.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
• the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
• the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 can implement a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
• the display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats.
  • the electronic device 100 may include 1 or N cameras 193, and N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
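The Fourier-transform step for frequency-point selection can be pictured with a short sketch. This is a pure-Python illustration of computing the energy at one frequency point via a naive DFT, not the device's actual DSP code:

```python
import cmath
import math

def energy_at_bin(samples, k):
    """Energy of DFT bin k for a real-valued sample block (naive DFT)."""
    n = len(samples)
    acc = sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
              for i, x in enumerate(samples))
    return abs(acc) ** 2

# A pure tone at bin 3 concentrates its energy in that bin.
block = [math.sin(2 * math.pi * 3 * i / 64) for i in range(64)]
assert energy_at_bin(block, 3) > energy_at_bin(block, 5)
```

A real DSP would use a hardware FFT rather than this O(n) per-bin sum; the sketch only shows what "energy of a frequency point" means.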
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent cognition of the electronic device 100, such as image recognition, face recognition, voice recognition, text understanding, and so on.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, photos, videos and other data in an external memory card.
  • the internal memory 121 may be used to store one or more computer programs, and the one or more computer programs include instructions.
  • the processor 110 can run the above-mentioned instructions stored in the internal memory 121 to enable the electronic device 100 to execute the environmental sound processing methods provided in some embodiments of the present application, as well as various functional applications and data processing.
  • the internal memory 121 may include a storage program area and a storage data area. Among them, the storage program area can store the operating system; the storage program area can also store one or more application programs (such as a gallery, contacts, etc.) and so on.
  • the data storage area can store data (such as photos, contacts, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the internal memory 121 may be used to store multiple preset scenes, the preset environmental sounds in each scene, and the association relationship between each scene and the corresponding preset environmental sounds in the scene.
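The stored association between preset scenes and their preset environmental sounds can be pictured as a simple lookup table. The following is a hypothetical sketch; the scene names and sound names are illustrative, not taken from the patent:

```python
# Hypothetical scene -> preset environmental sounds association table,
# as might be kept in the internal memory 121.
PRESET_SCENES = {
    "road": ["car horn", "siren"],
    "railway station": ["train announcement", "whistle"],
    "home": ["doorbell", "alarm clock"],
}

def sounds_for_scene(scene):
    """Return the preset environmental sounds associated with a scene."""
    return PRESET_SCENES.get(scene, [])

assert "car horn" in sounds_for_scene("road")
assert sounds_for_scene("office") == []
```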
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
• the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
• the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
• when the electronic device 100 answers a call or receives a voice message, the user can hear the voice by bringing the receiver 170B close to the ear.
• the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • the user can approach the microphone 170C through the mouth to make a sound, and input the sound signal to the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions. In some embodiments of the present application, the electronic device 100 may collect sound signals of the external environment through the microphone 170C.
  • the earphone interface 170D is used to connect wired earphones.
• the earphone interface 170D may be the USB interface 130, a 3.5mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
• touch operations that act on the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
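The pressure-threshold dispatch described above can be sketched as follows. This is a minimal illustration; the threshold value and the action names are assumptions, not values from the patent:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized touch intensity

def handle_sms_icon_touch(intensity):
    """Map touch intensity on the short-message icon to an instruction."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view short message"
    return "create new short message"

assert handle_sms_icon_touch(0.2) == "view short message"
assert handle_sms_icon_touch(0.8) == "create new short message"
```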
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
• in some embodiments, the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
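Altitude can be estimated from the measured air pressure with the standard international barometric formula. This is a common approximation used for positioning assistance, not necessarily the exact formula the device implements:

```python
def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """International barometric formula: estimated altitude in meters."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

# At standard sea-level pressure the estimated altitude is (near) zero;
# 900 hPa corresponds to roughly 1 km above sea level.
assert abs(altitude_m(1013.25)) < 1e-6
assert 800.0 < altitude_m(900.0) < 1200.0
```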
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can use the magnetic sensor 180D to detect the opening and closing of the flip holster.
• when the electronic device 100 is a flip phone, it can detect the opening and closing of the flip according to the magnetic sensor 180D, and then set features such as automatic unlocking of the flip cover based on the detected opening and closing state of the holster.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and used in applications such as horizontal and vertical screen switching, pedometers, etc. In the embodiment of the present application, the electronic device 100 may determine the user's motion behavior through the acceleration sensor 180E.
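Determining the user's motion behavior from the acceleration sensor can be sketched as a simple threshold classifier. The thresholds and class names below are hypothetical; a real implementation would use richer features over a window of samples:

```python
import math

GRAVITY = 9.81  # m/s^2

def classify_motion(ax, ay, az):
    """Crude motion classification from one acceleration sample (m/s^2)."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    deviation = abs(magnitude - GRAVITY)  # deviation from gravity alone
    if deviation < 0.5:
        return "stationary"
    if deviation < 3.0:
        return "walking"
    return "running"

assert classify_motion(0.0, 0.0, 9.81) == "stationary"
assert classify_motion(0.0, 0.0, 12.0) == "walking"
assert classify_motion(0.0, 0.0, 15.0) == "running"
```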
• the distance sensor 180F is used to measure distance. The electronic device 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
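The reflected-light decision described above reduces to a threshold comparison. The following sketch makes that explicit; the threshold value and units are assumptions:

```python
REFLECTION_THRESHOLD = 100  # assumed ADC counts from the photodiode

def object_nearby(reflected_light):
    """True if enough infrared reflected light is detected."""
    return reflected_light >= REFLECTION_THRESHOLD

assert object_nearby(250) is True   # sufficient reflection: object near
assert object_nearby(10) is False   # insufficient reflection: nothing near
```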
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
• the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, etc.
  • the temperature sensor 180J is used to detect temperature.
• the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
• in other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by low temperature.
• in some other embodiments, when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
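The temperature processing strategy can be summarized as a small policy table. The threshold values and action names below are illustrative assumptions:

```python
OVERHEAT_C = 45.0    # assumed upper threshold
COLD_C = 0.0         # assumed lower threshold (heat the battery)
VERY_COLD_C = -10.0  # assumed second lower threshold (boost voltage)

def thermal_policy(temp_c):
    """Choose an action for the temperature reported by the sensor."""
    if temp_c > OVERHEAT_C:
        return "throttle nearby processor"
    if temp_c < VERY_COLD_C:
        return "boost battery output voltage"
    if temp_c < COLD_C:
        return "heat battery"
    return "normal operation"

assert thermal_policy(50.0) == "throttle nearby processor"
assert thermal_policy(-5.0) == "heat battery"
assert thermal_policy(-20.0) == "boost battery output voltage"
```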
  • the touch sensor 180K can also be called a touch panel or a touch-sensitive surface.
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
• the bone conduction sensor 180M can obtain the vibration signal of the bone mass that vibrates when a person speaks.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the button 190 includes a power button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the electronic device 100 can receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations applied to different applications can correspond to different vibration feedback effects.
• for touch operations acting on different areas of the display screen 194, the motor 191 can also produce different vibration feedback effects.
• different application scenarios (for example: time reminding, receiving information, alarm clock, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the electronic device 100 exemplarily shown in FIG. 2 may display various user interfaces described in the following embodiments through a display screen 194.
• the electronic device 100 can detect touch operations in various user interfaces through the touch sensor 180K, such as a click operation in each user interface (for example, a touch operation or a double-click operation on an icon), and, for example, a swipe up or down in each user interface, or a circle-drawing gesture, etc.
  • the electronic device 100 may detect the motion gesture performed by the user holding the electronic device 100, such as shaking the electronic device, through the gyroscope sensor 180B, the acceleration sensor 180E, and the like.
  • the electronic device 100 can detect non-touch gesture operations through the camera 193 (such as a 3D camera, a depth camera).
  • FIG. 3 shows a schematic diagram of the structure of the earphone 300.
  • the headset 300 may include: a processor 310, a memory 320, a wireless communication processing module 330, a power management module 340, a wired communication module 350, a speaker 360, a microphone 370, a button control module 380, and a sensor module 390.
  • the processor 310 may be used to read and execute computer readable instructions.
  • the processor 310 may mainly include a controller, an arithmetic unit, and a register.
  • the controller is mainly responsible for instruction decoding, and sends out control signals for the operation corresponding to the instruction.
  • the arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations and logical operations, etc., and can also perform address operations and conversions.
  • the register is mainly responsible for storing the register operands and intermediate operation results temporarily stored during the execution of the instruction.
  • the hardware architecture of the processor 310 may be an application specific integrated circuit (ASIC) architecture, MIPS architecture, ARM architecture, or NP architecture, and so on.
• the processor 310 may be used to analyze signals received by the wireless communication processing module 330 and/or the wired communication module 350, such as a detection request broadcast by the electronic device 100, a connection request sent by the electronic device 100, audio data sent by the electronic device 100, and so on.
  • the processor 310 may be configured to perform corresponding processing operations according to the analysis result, such as generating a detection response, or controlling the speaker module 360 to play corresponding sound signals according to the audio data, and so on.
• the processor 310 may also be used to generate signals sent by the wireless communication processing module 330 and/or the wired communication module 350, such as Bluetooth broadcast signals and beacon signals, and, for example, signals sent to the electronic device 100 to feed back the connection status (such as connection success, connection failure, etc.).
  • the memory 320 is coupled with the processor 310, and is used to store various software programs and/or multiple sets of instructions.
  • the memory 320 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 320 may store an operating system, such as embedded operating systems such as uCOS, VxWorks, and RTLinux.
  • the memory 320 may also store a communication program, which may be used to communicate with the electronic device 100, one or more servers, or additional devices.
  • the memory 320 may be used to store multiple preset scenes, preset environmental sounds in each scene, and an association relationship between each scene and the corresponding preset environmental sounds in the scene.
  • the wireless communication processing module 330 may include one or more of a Bluetooth (BT) communication processing module 330A and a Wi-Fi communication processing module 330B.
• one or more of the Bluetooth (BT) communication processing module and the Wi-Fi communication processing module are used to establish a communication connection with the electronic device 100, receive audio signals sent by the electronic device 100 after the communication connection is established, and send the received audio signals to the processor 310 for processing.
  • the wireless communication processing module 330 may also include a cellular mobile communication processing module (not shown).
  • the cellular mobile communication processing module can communicate with other devices (such as servers) through cellular mobile communication technology.
  • the power management module 340 is used to receive charging input from the charger to charge the battery (not shown). Among them, the charger can be a wireless charger or a wired charger. The power management module 340 is also used to connect to the processor 310. The power management module 340 receives input from the battery and/or charger, and supplies power to the processor 310, the memory 320, the wireless communication module 330, and the like.
• the speaker 360, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the head-mounted device 300 includes a left speaker and a right speaker, so as to provide sounds to the user's left and right ears, respectively.
• the microphone 370, also called a "mic", is used to convert sound signals into electrical signals.
  • the head-mounted device 300 can pick up ambient sounds around the user through a microphone. When the user wears the head-mounted device 300 to make a call or send voice information, the user can approach the microphone 370 through the mouth to make a sound and input the sound signal into the microphone 370.
  • the microphone 370 converts the sound signal into an electric signal and then transmits it to the electronic device 100 through the wireless communication processing module 330 or the wired communication module 350.
  • the head-mounted device 300 may include a left pickup microphone and a right pickup microphone. In some embodiments of the present application, the head-mounted device 300 may collect sound signals of the external environment through the microphone 370.
  • the key control module 380 includes keys and peripheral circuits for realizing key functions.
  • the buttons include power-on button, volume button and so on. The functions of turning on, adjusting the volume and answering calls can be realized through the buttons.
• the keys may be mechanical keys or touch keys.
  • the sensor module 390 includes one or more of a heart rate sensor, a temperature sensor, and a pulse sensor.
  • the sensor module 390 may collect vital signs information of the user.
  • the sensor module 390 includes a temperature sensor to detect the body temperature of the user.
  • the sensor module 390 includes a heart rate sensor, it can detect the user's heart rate.
  • the sensor module 390 may also include motion sensors such as acceleration sensors.
  • the head-mounted device 300 may determine the vital signs of the user through at least one of a heart rate sensor, a temperature sensor, and a pulse sensor. The head-mounted device 300 can also determine the user's motion behavior through an acceleration sensor.
  • the structure illustrated in FIG. 3 does not constitute a specific limitation on the head-mounted device 300.
  • the head-mounted device 300 may include more or fewer components than those shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
• the controls of a graphical user interface (GUI) can include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and Widgets.
  • FIG. 4A exemplarily shows an exemplary user interface 21 on the electronic device 100 for displaying application programs installed by the electronic device 100.
  • the user interface 21 may include: a status bar 201, a calendar indicator 202, a weather indicator 203, a tray 204 with commonly used application icons, a navigation bar 205, and other application icons. among them:
  • the status bar 201 may include: one or more signal strength indicators 201A of mobile communication signals (also called cellular signals), the name of the operator (for example, "China Mobile") 201B, one or more signal strength indicators 201C of wireless fidelity (Wi-Fi) signals, a battery status indicator 201D, and a time indicator 201E.
  • the calendar indicator 202 can be used to indicate the current time, such as date, day of the week, hour and minute information, etc.
  • the weather indicator 203 can be used to indicate the type of weather, such as cloudy to clear, light rain, etc., and can also be used to indicate information such as temperature.
  • the tray 204 with icons of commonly used application programs can display: a phone icon 204A, a contact icon 204B, a short message icon 204C, and a camera icon 204D.
  • the navigation bar 205 may include system navigation keys such as a return key 205A, a home screen key 205B, and a multi-task key 205C.
  • when an operation on the return key 205A is detected, the electronic device 100 may display the previous page of the current page.
  • when an operation on the home screen key 205B is detected, the electronic device 100 may display the home interface.
  • when an operation on the multi-task key 205C is detected, the electronic device 100 may display the tasks recently opened by the user.
  • the navigation keys may also have other names, which is not limited in this application. Moreover, the navigation keys in the navigation bar 205 are not limited to virtual keys and can also be implemented as physical keys.
  • other application icons can be, for example: the WeChat icon 206, the QQ icon 207, the Twitter icon 208, the Facebook icon 209, the mailbox icon 210, the cloud sharing icon 211, the memo icon 212, the Alipay icon 213, the gallery icon 214, and the settings icon 215.
  • the user interface 21 may also include a page indicator 216.
  • other application icons may be distributed across multiple pages, and the page indicator 216 may be used to indicate which page of applications the user is currently browsing. The user can slide the area of the other application icons left and right to browse application icons on other pages.
  • the user interface 21 exemplarily shown in FIG. 4A may be a home screen (Home screen).
  • FIG. 4A only exemplarily shows the user interface on the electronic device 100, and should not constitute a limitation to the embodiment of the present application.
  • FIGS. 4A and 4B exemplarily show an operation of turning on the "reminder function" on the electronic device 100.
  • when the electronic device 100 detects a downward sliding gesture on the status bar 201, in response to the gesture, the electronic device 100 may display a window 217 on the user interface 21. As shown in FIG. 4B, the window 217 may display a switch control 217A of the "reminder function", and may also display switch controls of other functions (such as Wi-Fi, Bluetooth, and flashlight). When an operation on the switch control 217A in the window 217 (such as a touch operation on the switch control 217A) is detected, in response to the operation, the electronic device 100 can turn on the "reminder function".
  • the user can make a downward sliding gesture on the status bar 201 to open the window 217, and can click the switch control 217A of the "reminder function” in the window 217 to conveniently turn on the "reminder function".
  • the expression form of the switch control 217A of the "reminder function” may be text information or icons.
  • the "reminder function” can also be turned on by the user's settings in the "Settings” application.
  • setting the "reminder function" setting item from "OFF" to "ON" can turn on the "reminder function" of the electronic device 100.
  • the "Settings" application is an application installed on electronic devices such as smart phones and tablet computers for setting various functions of the electronic device; the name of the application is not limited in the embodiments of this application.
  • the "reminder function" can also be turned on by inputting a specific gesture operation on the display screen 194 of the electronic device 100.
  • the user turns on the "reminder function” by inputting a gesture operation of drawing a circle or a star on the display 194.
  • the "reminder function" of the electronic device 100 can also be turned on through voice input. For example, when the user says “enable the reminder function” or "please turn on the reminder function" to the electronic device 100, the electronic device 100 can turn on the "reminder function".
  • the "reminder function" may be enabled by default. Alternatively, after the head-mounted device 300 is connected to the electronic device 100, a prompt box may be displayed on the display screen of the electronic device 100 to ask the user whether the "reminder function" needs to be turned on. For example, when the head-mounted device 300 is connected to the electronic device 100, the display screen of the electronic device 100 displays a prompt asking whether to turn on the "reminder function". When the user selects "Yes", the "reminder function" is turned on; when the user selects "No", the "reminder function" is not turned on.
  • when the head-mounted device 300 is a wired head-mounted device, "the head-mounted device 300 is connected to the electronic device 100" means that the head-mounted device 300 is inserted into the headset interface of the electronic device 100.
  • when the head-mounted device 300 is a wireless head-mounted device, "the head-mounted device 300 is connected to the electronic device 100" means that the head-mounted device 300 establishes a communication connection with the electronic device 100.
  • the electronic device 100 or the head-mounted device 300 can determine, based on the preset environmental sound, whether the user needs to perceive the external environment in the current scene. If so, the electronic device 100 or the head-mounted device 300 can output prompt information so that the user can perceive the external environment and respond accordingly.
  • the following describes in detail the process of the electronic device 100 or the head mounted device 300 prompting the user according to the user's scene.
  • FIG. 5 shows a schematic flow chart of a method for the electronic device 100 or the head-mounted device 300 provided in an embodiment of the present application to process environmental sounds according to the scene where the user is located.
  • the method may include the following steps:
  • Step S101 Determine the current scene of the user.
  • the following describes in detail how the electronic device 100 or the head-mounted device 300 determines the scene in which the user is located.
  • the electronic device 100 or the head-mounted device 300 can determine the scene where the user is currently located according to the user's current location information. For example, when the user's location information is a commercial plaza, it can be determined that the user is in a shopping mall scene; when the user's location information is a street, it can be determined that the user is in a road scene; when the user's location information is a residential area, it can be determined that the user is in a home scene; when the user's location information is a certain park, it can be determined that the user is in a park scene. Specifically, the electronic device 100 or the head-mounted device 300 may obtain the user's location information in any of the following ways or a combination of them:
  • the electronic device 100 or the head-mounted device 300 obtains the longitude and latitude coordinates or the geographic location name through a global navigation satellite system, such as GPS, GLONASS, BDS, QZSS, or SBAS.
  • the electronic device 100 or the head-mounted device 300 acquires the latitude and longitude coordinates by means of base station positioning.
  • the electronic device 100 or the head-mounted device 300 adopts wireless indoor positioning technology to obtain the longitude and latitude coordinates or geographic location name.
  • Wireless indoor positioning technologies may include short-range wireless technologies such as Wi-Fi, radio frequency identification (RFID), Bluetooth, infrared, and ultrasonic.
  • the electronic device 100 or the head-mounted device 300 obtains location information through the data collected by the sensor.
  • the electronic device 100 may measure the air pressure value through the air pressure sensor 170B, and calculate the altitude based on the measured air pressure value.
  • the electronic device 100 can also measure the speed and acceleration through the acceleration sensor 170C, and calculate the latitude and longitude coordinates of the current time point based on the measured speed, acceleration, and the latitude and longitude coordinates of a previous time point.
  • the electronic device 100 can also measure the distance to an object through the distance sensor 170D, and calculate its own longitude and latitude coordinates based on the measured distance and the longitude and latitude coordinates of the object.
  • some sensors of the electronic device 100 can work continuously for a long time to collect data, so as to obtain the user's location information.
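The barometric-altitude calculation mentioned above can be sketched as follows. This is a minimal illustration using the standard international barometric formula; the constants and function name are assumptions for illustration, not part of this application.

```python
SEA_LEVEL_PA = 101325.0  # standard sea-level pressure, used as the reference

def altitude_m(pressure_pa: float) -> float:
    """Estimate altitude in meters from a measured air pressure in pascals,
    using the international barometric formula."""
    return 44330.0 * (1.0 - (pressure_pa / SEA_LEVEL_PA) ** (1.0 / 5.255))

print(altitude_m(101325.0))  # 0.0 at the reference pressure
```

A real implementation would use a local reference pressure rather than the standard atmosphere, since weather changes shift the sea-level baseline.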
  • since both the electronic device 100 and the head-mounted device 300 are carried or worn by the user, the location information acquired by either device is the user's location information. Therefore, in the embodiments of this application, the user's location information can be acquired through either the electronic device 100 or the head-mounted device 300. That is, in practical applications, the components required for acquiring location information can be set in the electronic device 100 or the head-mounted device 300 according to specific needs, which is not limited here.
  • the electronic device 100 or the head-mounted device 300 can determine the scene where the user is currently located according to the user's behavioral sign information.
  • the behavioral sign information includes motion behavior and vital sign information. For example, when it is detected that the user is running, it can be determined that the user is in a gym scene or a park scene. Since body temperature drops when a person is asleep or resting lightly, when it is detected that the user's body temperature is low, it can be determined that the user is in a home scene. In addition, the heart rate increases when the user is exercising, so when an increase in the user's heart rate is detected, it can be determined that the user is in an exercise state, and therefore in a gym scene or a park scene.
  • the electronic device 100 or the head-mounted device 300 can obtain the user's behavioral sign information through any one of the following methods or a combination of the following methods:
  • the electronic device 100 or the head-mounted device 300 measures the speed and acceleration through an acceleration sensor to determine the user's motion behavior.
  • the electronic device 100 or the head-mounted device 300 measures the vital signs of the user through a temperature sensor, a heart rate sensor, or a pulse sensor.
  • the temperature sensor is used to measure the user's body temperature
  • the heart rate sensor is used to measure the user's heart rate
  • the heart rate sensor can be a photoelectric heart rate sensor, a vibrating heart rate sensor, etc.
  • the pulse sensor is used to measure the user's pulse.
  • since vital sign sensors need to be close to the user's skin to take measurements, and the electronic device 100 is often carried at some distance from the user, it is easier to measure the user's vital signs when these sensors are set on the head-mounted device 300 worn on the user's ears.
  • the heart rate sensor fits the skin of the user's ears, and the photoelectric heart rate sensor can convert the heart rate signal into a corresponding electrical signal and output it to the processor 310.
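A simple rule-based sketch of how motion and vital-sign readings could map to a user state (and hence a candidate scene); all thresholds, names, and labels below are illustrative assumptions, not values given in this application:

```python
def infer_user_state(heart_rate_bpm: float, accel_g: float,
                     body_temp_c: float) -> str:
    """Classify the user's state from sensor readings (illustrative thresholds)."""
    if heart_rate_bpm > 120 or accel_g > 1.5:
        return "exercising"   # suggests a gym or park scene
    if body_temp_c < 36.0 and heart_rate_bpm < 60:
        return "resting"      # suggests a home scene (sleep or light rest)
    return "normal"

print(infer_user_state(140, 2.0, 36.8))  # exercising
```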
  • the electronic device 100 or the head-mounted device 300 can determine the current scene of the user according to the recognized voice content. Specifically, the electronic device 100 or the head-mounted device 300 collects voice content through a microphone, and when it is recognized that the voice content contains specific voice content, the scene where the user is located can be determined. For example, when it is recognized that the voice content includes: “Start meeting", “Good morning everyone", etc., it can be determined that the user is currently in a meeting scene. It is understandable that specific voice content may be stored in the memory of the electronic device 100 or the head-mounted device 300 in advance.
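The voice-content matching described above can be sketched as a keyword lookup. The phrase table and function are assumptions based on the examples in the text; a real system would match against speech recognized from the microphone:

```python
from typing import Optional

# Specific voice content pre-stored per scene (illustrative).
SCENE_KEYWORDS = {
    "meeting scene": ["start meeting", "good morning everyone"],
}

def scene_from_speech(transcript: str) -> Optional[str]:
    """Return the scene whose trigger phrase occurs in the recognized speech."""
    text = transcript.lower()
    for scene, phrases in SCENE_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return scene
    return None

print(scene_from_speech("OK everyone, let's start meeting"))  # meeting scene
```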
  • the electronic device 100 or the head-mounted device 300 may determine the current scene of the user according to the received user input or selected scene.
  • the user can set the current scene as a road scene or a home scene.
  • For setting the scene please refer to the description of the subsequent embodiments, which will not be repeated here.
  • the electronic device 100 or the head-mounted device 300 can automatically determine the scene where the user is located, avoiding the manual setting process, and is convenient.
  • the methods for determining the scene where the user is located in the above-mentioned first to third embodiments may be used in combination or alone, which is not limited here.
  • Step S102 Determine a preset environmental sound corresponding to the scene according to the scene where the user is.
  • the electronic device 100 or the head-mounted device 300 may determine the preset environmental sound in the current scene according to the correspondence between the pre-stored scene and the preset environmental sound. That is, when determining the scene the user is currently in, the preset environmental sound can be determined according to the corresponding relationship.
  • the electronic device 100 or the head-mounted device 300 can learn the preset environmental sounds in the different scenes where the user may be located. The various scenes and the preset environmental sounds in each scene can be preset according to empirical data. For example, R&D personnel can determine the preset environmental sounds suitable for different scenes through research and other methods, and preset them in the electronic device 100, the head-mounted device 300, or a cloud server (not shown). When the pre-stored scenes, the pre-stored preset environmental sounds, and the correspondence between scenes and preset environmental sounds are stored in the cloud server, the electronic device 100 or the head-mounted device 300 needs to obtain the stored data first.
  • the electronic device 100 or the head-mounted device 300 needs to determine the content of the corresponding preset environmental sound in the scene according to the scene in which the user is located. For example, please refer to Table 1. Table 1 shows the contents of preset environmental sounds corresponding to several scenarios.
  • Table 1 Contents of preset environmental sounds corresponding to different scenarios
  | Scene | Content of the preset ambient sound |
  | --- | --- |
  | Home scene | Knocking at the door, home appliance alarm, alarm clock, children crying |
  | Park scene | Thunderstorm, shouting, dog barking, greetings (Hi, hello, etc.) |
  | Mall scene | Greetings, alarms, user's name |
  | Road scene | Whistle, bicycle bell, stop announcement, alarm |
  | Meeting scene | User's name, greetings, door opening, cell phone ringtone |
  • Table 1 is only an example, and the preset environmental sound content in each scene in Table 1 may be partially the same.
  • both the park scene and the meeting scene include "hello.”
  • the types of scenes are not limited to the various scenes shown in Table 1, and the content of the preset environmental sounds in each scene is not limited to the examples shown in Table 1 above.
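The correspondence in Table 1 can be stored as a simple lookup, as sketched below; the storage format is an assumption for illustration, as the application only requires that the correspondence be pre-stored:

```python
# Scene -> preset environmental sounds, transcribed from Table 1.
PRESET_SOUNDS = {
    "home scene":    ["knocking at the door", "home appliance alarm",
                      "alarm clock", "children crying"],
    "park scene":    ["thunderstorm", "shouting", "dog barking", "greeting"],
    "mall scene":    ["greeting", "alarm", "user name"],
    "road scene":    ["whistle", "bicycle bell", "stop announcement", "alarm"],
    "meeting scene": ["user name", "greeting", "door opening",
                      "cell phone ringtone"],
}

def preset_sounds_for(scene: str) -> list:
    """Return the preset environmental sounds for a scene (empty if unknown)."""
    return PRESET_SOUNDS.get(scene, [])

print(preset_sounds_for("road scene"))
```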
  • the electronic device 100 or the head-mounted device 300 needs to determine the content of the corresponding preset environmental sounds in the scene and the threshold corresponding to each preset environmental sound according to the scene where the user is located.
  • Table 2 shows the preset environmental sound content corresponding to several scenarios and the threshold corresponding to each preset environmental sound.
  • the threshold includes at least one of a loudness threshold, a duration threshold, and a repetition-count threshold.
  • Table 2 The content of the preset environmental sounds corresponding to different scenarios and the corresponding thresholds
  • Table 2 is only an example.
  • in Table 2, the loudness thresholds of different preset environmental sounds in each scene are the same. It can be understood that, in practical applications, different preset environmental sounds in the same scene can also be set to different loudness thresholds. The types of scenes are not limited to those shown in Table 2, and the content of the preset ambient sounds in each scene and the corresponding thresholds are not limited to the examples shown in Table 2 above.
  • the loudness threshold is a threshold of sound loudness, which can be preset according to different scenes. For example, for the sound information of danger signals such as car horns, bicycle bells, and alarms in a road scene, a higher loudness threshold (such as 75 decibels) can be adopted so that the user can avoid danger in time upon hearing the sound signal. As another example, in a meeting scene, a lower loudness threshold (for example, 45 decibels) can be adopted for the user's name being called by others, so as to pick up the sound sensitively and avoid the situation of being addressed but not hearing it.
  • setting different loudness thresholds according to different scenes can not only avoid picking up environmental signals too sensitively and interfering with the normal operation of the head-mounted device 300, thereby ensuring its normal operation, but also pick up sound sensitively enough to effectively inform the user of valid external sound signals.
  • the electronic device 100 or the head-mounted device 300 can determine the preset ambient sound in the current scene according to the received user input or selection.
  • the electronic device 100 is taken as an example for description.
  • when the icon of the "Settings" application (215 in FIG. 4A) receives a user's operation (such as a touch operation), the setting interface shown in FIG. 6A can be entered.
  • the user interface 10 may include a status bar 101, a title bar 102, a switch control 103 for "manual mode", and prompt information 104.
  • the status bar 101 can refer to the status bar 201 in the user interface 21 shown in FIG. 4A, which will not be repeated here.
  • the title bar 102 may include current page indicators 102A and 102B.
  • the current page indicators 102A and 102B can be used to indicate the current page.
  • the text information "settings" and "reminder function" can be used to indicate that the current page displays the content of the reminder function setting items. The current page indicators 102A and 102B are not limited to text information and may also be icons.
  • the switch control 103 is used to monitor the operation (for example, touch operation) of turning on/off the "manual mode". As shown in FIG. 6B, when an operation on the switch control 103 (such as a touch operation on the switch control 103) is detected, in response to the operation, the electronic device 100 can turn on the "manual mode".
  • the form of the switch control 103 can be text information or icons.
  • the prompt message 104 can be used to introduce the "manual mode" and prompt the user about the function of the "manual mode".
  • the presentation form of the prompt information 104 may be text information or an icon.
  • the electronic device 100 may determine the current scene according to the content manually input by the user. For example, if the user manually enters "road scene” in the "please select or input scene” area, the words “road scene” are displayed in this area (refer to FIG. 6C). In another embodiment, the electronic device 100 may also determine the scene the user is currently in according to the scene selected by the user.
  • the electronic device 100 may display a list including various scenes, and when the user selects a road scene, the scene is displayed in the "Please select or enter Scene” area (see Figure 6C). After the scene is set, the electronic device 100 also determines the preset environmental sound in the scene according to the user's input or selection.
  • the setting method of the preset environmental sound can refer to the setting method of the scene, which will not be repeated here.
  • the content of the preset environmental sound in the scene in the automatic mode can also be corrected according to the content of the environmental sound set by the user.
  • the preset environmental sound threshold in the automatic mode may be corrected according to the threshold set by the user. In this way, the accuracy of determining the preset environmental sound in the automatic mode can be improved, and the user experience can be better.
  • preset environmental sound content and corresponding thresholds in different scenarios may also be stored in a cloud server, and the electronic device 100 or the head-mounted device 300 obtains the content from the cloud server 500. Since the cloud server communicates with different head-mounted devices 300 and electronic devices 100, as more and more electronic devices 100 or head-mounted devices 300 are put into use, more and more abundant data will be generated.
  • the cloud server can obtain these data and perform statistics and analysis on the preset environmental sound content and corresponding thresholds in different scenarios according to the big data statistical algorithm, and then can periodically check the preset environmental sound content in different scenarios based on the statistics and analysis results And the corresponding threshold value is revised and updated, so that it can be closer to the needs of users and improve user experience.
  • for example, if statistical analysis shows that in a home scene the environmental sound content set by users is mostly alarms and door knocks, and the loudness threshold is mostly set to 45 decibels, the preset environmental sound content and loudness threshold can be adjusted based on this result.
  • the electronic device 100 may also determine the preset environmental sound according to the received user input or selection.
  • in some embodiments, the corresponding preset environmental sound content, or the preset environmental sound content and threshold, are the same in all scenes. That is, the preset environmental sound content, or the content and threshold, do not change as the scene changes.
  • the user sets the preset environmental sounds as a whistle sound and an alarm sound.
  • the content of the preset environmental sound is a whistle sound and an alarm sound. In this way, the special needs of different special users can be met, making the electronic device 100 more practical.
  • Step S103 Collect sound signals of the external environment.
  • the external environment sound signal refers to all the sound signals that can be picked up in the external environment, including car horns, pedestrian voices, bicycle bells, music played nearby, dog barking, alarm bells, etc.
  • a microphone on the electronic device 100 or the head-mounted device 300 may be used to collect sound signals in the external environment.
  • at least one ordinary omnidirectional microphone can be used to pick up the sound; the sound signal in the external environment can also be collected through a sound sensor or another device with a sound pickup function, which is not limited here.
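Before comparison with a loudness threshold, the picked-up signal must be converted to a decibel figure. A rough sketch for a block of 16-bit PCM samples follows; mapping the RMS level to an absolute dB SPL value requires microphone calibration, so the calibration constant here is a placeholder assumption:

```python
import math

CALIBRATION_DB = 94.0  # assumed: dB SPL corresponding to a full-scale signal

def loudness_db(samples: list) -> float:
    """Estimate loudness in dB for 16-bit PCM samples (0.0 for silence)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return 0.0
    return CALIBRATION_DB + 20.0 * math.log10(rms / 32768.0)

print(loudness_db([0] * 100))  # 0.0 (silence)
```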
  • Step S104 When the collected sound signal matches the preset environmental sound corresponding to the current scene, output the sound signal of the external environment or output prompt information.
  • when the collected sound signal includes the preset environmental sound, it can be determined that the collected sound signal matches the preset environmental sound corresponding to the current scene. For example, when it is determined that the user's current scene is a road scene, the corresponding environmental sound content is a whistle, a bicycle bell, and a stop announcement. After the collected sound signals of the external environment are recognized, it is found that the current environmental sound signal includes a car whistle; at this time, it can be determined that the sound signal of the external environment matches the preset environmental sound.
  • the threshold includes at least one of a loudness threshold, a duration threshold, and a repetition threshold. For example, when the collected sound signal includes a preset ambient sound and the loudness of the sound signal is greater than a preset loudness threshold (such as 55 decibels), it can be determined that the collected sound signal matches the preset ambient sound corresponding to the current scene .
  • when the collected sound signal includes a preset ambient sound, the loudness of the sound signal is greater than the preset loudness threshold (such as 55 decibels), and the duration is greater than the preset duration threshold (such as 5 s), it can be determined that the collected sound signal matches the preset environmental sound corresponding to the current scene.
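The matching check in step S104 can be sketched as one predicate combining sound content with the loudness and duration thresholds (55 dB and 5 s are the example values used above); the names and signature are illustrative assumptions:

```python
def matches_preset(detected_sound: str, loudness_db: float, duration_s: float,
                   preset_sounds: list,
                   loudness_threshold: float = 55.0,
                   duration_threshold: float = 5.0) -> bool:
    """True when the detected sound is a preset sound and exceeds both thresholds."""
    return (detected_sound in preset_sounds
            and loudness_db > loudness_threshold
            and duration_s > duration_threshold)

road_presets = ["whistle", "bicycle bell", "stop announcement", "alarm"]
print(matches_preset("whistle", 70.0, 6.0, road_presets))  # True
print(matches_preset("whistle", 50.0, 6.0, road_presets))  # False: too quiet
```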
  • the electronic device 100 or the head-mounted device 300 may send out prompt information so that the user perceives the external environment.
  • the prompt information may include, but is not limited to: voice prompt information, vibration prompt information, light prompt information, and display prompt information.
  • the electronic device 100 or the head-mounted device 300 may send out voice prompt information to remind the user.
  • the electronic device 100 or the head-mounted device 300 may send out light prompt information to remind the user.
  • the electronic device 100 or the head-mounted device 300 may send out vibration prompt information to remind the user.
  • the electronic device 100 or the head-mounted device 300 may prompt the user through any one of the following methods or a combination of the following methods:
  • the current audio data of the head-mounted device 300 may be paused.
  • the electronic device 100 can pause the video or music that the user is watching, that is, pause the playback of the current audio data of the head-mounted device 300, and play a voice through the head-mounted device 300 to remind the user that there may be danger or a sound signal that needs to be responded to.
  • after the prompt voice has been played, the electronic device 100 or the head-mounted device 300 can detect how long the prompt voice has been stopped, and when this duration reaches a preset duration threshold (for example, 2 s), it can continue to play the current audio data. In this way, playback of the current audio data resumes a preset time after the prompt voice finishes, which leaves time for the user to respond to the specific sound signal.
  • for example, when the user is listening to music in a road scene, the electronic device 100 or the head-mounted device 300 can detect that the user is currently in a road scene; if the sound signal of the external environment detected by the microphone matches the preset whistle sound, the current music playback is paused and a prompt voice is output through the head-mounted device 300. After the prompt voice has been played, the time since the prompt voice stopped is measured, and when it reaches the preset duration threshold, the music the user was listening to continues to play. In this way, time is reserved for the user to check the location of the vehicle and dodge it, and after the user dodges the vehicle, the music the user was listening to resumes automatically.
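The pause/prompt/resume flow described above can be sketched as follows; the controller class and event strings are hypothetical stand-ins for the real audio path:

```python
import time

class PlaybackController:
    """Pause playback, play a prompt, wait a preset interval, then resume."""

    def __init__(self, resume_delay_s: float = 2.0):
        self.resume_delay_s = resume_delay_s  # 2 s in the example above
        self.events = []

    def handle_matched_sound(self):
        self.events.append("pause")        # pause the current audio data
        self.events.append("play_prompt")  # prompt voice via the headset
        time.sleep(self.resume_delay_s)    # leave time for the user to react
        self.events.append("resume")       # continue the current audio data

ctrl = PlaybackController(resume_delay_s=0.01)  # short delay for illustration
ctrl.handle_matched_sound()
print(ctrl.events)  # ['pause', 'play_prompt', 'resume']
```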
  • the prompt voice can be implemented in the following ways.
  • the prompt voice can be sound content that matches the preset environmental sound. For example, when the collected sound signal of the external environment matches the horn sound in the preset environmental sound content, the horn sound is played through the head-mounted device 300; and when the collected sound signal of the external environment matches the alarm sound in the preset environmental sound content, the head-mounted device 300 plays the alarm sound.
  • the prompt voice can be a preset specific voice prompt content.
  • the electronic device 100 or the head-mounted device 300 may pre-store specific voice prompt content for reminding the user.
  • the pre-stored specific voice prompt content can be played through the head-mounted device 300.
  • the specific voice prompt content "A vehicle is approaching, please avoid it" can be played; the specific voice prompt "Someone is knocking on the door, please open the door" can also be played.
  • the specific voice broadcast content in different scenarios can be different, and it can be set when the content of the preset environmental sound is set.
  • the LED (light-emitting diode) indicator of the electronic device 100 flashes, or a button light flashes, etc., to remind the user to pay attention to the external environment.
  • the prompt content can be any of pictures, text, or symbols.
  • for example, a "bell" symbol can be displayed on the display screen 194 of the electronic device 100 to prompt the user.
  • the prompt message can be terminated after a preset duration. For example, when the duration of the prompt message reaches 30 s, the prompt message is terminated.
  • the prompt message can also be terminated when the matched environmental sound disappears. For example, after a car passes by, the whistle sound may have disappeared, so when there is no whistle sound in the ambient sound, the prompt message can be terminated.
  • if the first method is selected to send the prompt message, then when the prompt message is terminated, the current audio data can continue to be played through the head-mounted device 300. In this way, manual operation by the user can be avoided and the user experience can be improved.
  • the prompt message can also be terminated in response to a received user operation.
  • the volume key or the power key can be assigned the function of terminating the prompt information.
  • the user can control the electronic device 100 to stop the vibration by operating the volume key.
  • all steps can be performed by the electronic device 100, or all by the head-mounted device 300, or jointly by the electronic device 100 and the head-mounted device 300.
  • when the steps are performed jointly, after one subject completes a step, it can send a trigger signal to remind the other subject to complete the corresponding step.
  • for example, the electronic device 100 sends a trigger signal to the head-mounted device 300 to remind the head-mounted device 300 to perform the corresponding steps.
  • the trigger signal may be a high-level signal or a low-level signal, which is not limited here.
  • the electronic device 100 or the head-mounted device 300 can determine the preset environmental sounds according to the user's current scene and, when the collected sound signal of the external environment matches a preset environmental sound, issue a prompt message. In this way, while using the head-mounted device 300, the user can learn about the external environment in different scenarios and respond accordingly, improving the safety of using the head-mounted device 300.
  • the electronic device 100 can send a control command to the head-mounted device 300, according to whether the "prompt function" is turned on, to inform the head-mounted device 300 whether it needs to perform the above method.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk).
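The prompt lifecycle described above — issue a prompt when an ambient sound matches a preset sound for the user's current scene, then terminate the prompt on a timeout, when the triggering sound disappears, or when the user presses an assigned key — can be sketched as follows. This is an illustrative sketch only: the scene names, sound labels, function names, and the 30 s timeout are assumptions drawn from the examples in the description, not the patent's actual implementation.

```python
import time

# Hypothetical scene -> preset environmental sound mapping. In the described
# method, the actual scenes and sound signatures are configured on the device.
PRESET_SOUNDS = {
    "street": {"car_horn", "siren"},
    "home": {"doorbell", "smoke_alarm"},
}

PROMPT_TIMEOUT_S = 30  # the description gives 30 s as an example duration


def matches_preset(detected_sound: str, scene: str) -> bool:
    """Return True if a classified ambient sound is preset for the current scene."""
    return detected_sound in PRESET_SOUNDS.get(scene, set())


def run_prompt(sound_still_present, user_pressed_key, now=time.monotonic):
    """Keep a prompt active until one of the three termination conditions
    from the description holds, and report which one fired:
      - "timeout":    the prompt has lasted PROMPT_TIMEOUT_S seconds
      - "sound_gone": the triggering sound is no longer in the ambient sound
      - "user_key":   the user pressed the key assigned to end the prompt
    (A real implementation would poll at an interval rather than busy-wait.)
    """
    start = now()
    while True:
        if now() - start >= PROMPT_TIMEOUT_S:
            return "timeout"
        if not sound_still_present():
            return "sound_gone"
        if user_pressed_key():
            return "user_key"
```

After the prompt ends, the description notes that playback on the head-mounted device 300 can resume automatically, so the caller of `run_prompt` would restart the paused audio regardless of which condition terminated the prompt.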

Abstract

The present invention relates to an ambient sound processing method and a related device. In the method, when a user wears a head-mounted device (300), an electronic device (100) or the head-mounted device (300) can determine the user's current scene and, according to that scene, determine a preset environmental sound for it; if a collected sound signal of the external environment matches the preset environmental sound for the scene, the electronic device (100) or the head-mounted device (300) can output the sound signal of the external environment or output prompt information, so that the user can perceive the external environment and respond accordingly, thereby improving the user's safety when using the head-mounted device (300).
PCT/CN2020/098733 2019-06-29 2020-06-29 Procédé et dispositif de traitement de son ambiant WO2021000817A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910581121.2 2019-06-29
CN201910581121.2A CN112150778A (zh) 2019-06-29 2019-06-29 环境音处理方法及相关装置

Publications (1)

Publication Number Publication Date
WO2021000817A1 true WO2021000817A1 (fr) 2021-01-07

Family ID=73891276

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098733 WO2021000817A1 (fr) 2019-06-29 2020-06-29 Procédé et dispositif de traitement de son ambiant

Country Status (2)

Country Link
CN (1) CN112150778A (fr)
WO (1) WO2021000817A1 (fr)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113496694A (zh) * 2020-03-19 2021-10-12 上汽通用汽车有限公司 一种车辆声学系统、车辆用座椅以及车辆
CN116594511B (zh) * 2023-07-17 2023-11-07 天安星控(北京)科技有限责任公司 基于虚拟现实的场景体验方法、装置、计算机设备和介质

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007007916A1 (fr) * 2005-07-14 2007-01-18 Matsushita Electric Industrial Co., Ltd. Appareil de transmission et procede permettant de generer une alerte dependant de types de sons
CN101790000A (zh) * 2010-02-20 2010-07-28 华为终端有限公司 一种环境声音提醒方法和移动终端
CN101840700A (zh) * 2010-04-28 2010-09-22 宇龙计算机通信科技(深圳)有限公司 基于移动终端的声音识别方法及移动终端
CN105263078A (zh) * 2015-10-26 2016-01-20 无锡智感星际科技有限公司 一种识别多种音源并提供多样化提示预警机制的智能耳机系统及方法
CN107147795A (zh) * 2017-05-24 2017-09-08 上海与德科技有限公司 一种提示方法及移动终端
WO2017171137A1 (fr) * 2016-03-28 2017-10-05 삼성전자(주) Aide auditive, dispositif portatif et procédé de commande associé
CN107613113A (zh) * 2017-09-05 2018-01-19 深圳天珑无线科技有限公司 一种耳机模式控制方法、装置及计算机可读存储介质
CN107863110A (zh) * 2017-12-14 2018-03-30 西安Tcl软件开发有限公司 基于智能耳机的安全提醒方法、智能耳机及存储介质
CN107948801A (zh) * 2017-12-21 2018-04-20 广东小天才科技有限公司 一种耳机的控制方法及耳机
CN109243442A (zh) * 2018-09-28 2019-01-18 歌尔科技有限公司 声音监测方法、装置及头戴显示设备
CN109493884A (zh) * 2018-12-06 2019-03-19 江苏满运软件科技有限公司 一种外部声源安全提醒方法、系统、设备以及介质
CN109887271A (zh) * 2019-03-21 2019-06-14 深圳市科迈爱康科技有限公司 行人安全预警方法、装置及系统

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1496701A4 (fr) * 2002-04-12 2009-01-14 Mitsubishi Electric Corp Dispositif d'edition de metadonnees, dispositif de reproduction de metadonnees, dispositif de distribution de metadonnees, dispositif de recherche de metadonnees, dispositif d'etablissement de conditions de reproduction de metadonnees, et procede de distribution de metadonnees
JP2009300915A (ja) * 2008-06-17 2009-12-24 Fujitsu Ltd 音楽再生機能を有する携帯端末
CN106550294A (zh) * 2015-09-18 2017-03-29 丰唐物联技术(深圳)有限公司 基于耳机的监听方法及装置
CN108605073B (zh) * 2016-09-08 2021-01-05 华为技术有限公司 声音信号处理的方法、终端和耳机
CN207531029U (zh) * 2017-11-30 2018-06-22 歌尔科技有限公司 一种线控装置和耳机
CN109120784B (zh) * 2018-08-14 2021-11-16 联想(北京)有限公司 音频播放方法以及电子设备
CN109145847B (zh) * 2018-08-30 2020-09-22 Oppo广东移动通信有限公司 识别方法、装置、穿戴式设备及存储介质
CN109345767B (zh) * 2018-10-19 2020-11-20 广东小天才科技有限公司 穿戴式设备用户的安全提醒方法、装置、设备及存储介质


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220239269A1 (en) * 2021-01-22 2022-07-28 Samsung Electronics Co., Ltd. Electronic device controlled based on sound data and method for controlling electronic device based on sound data
EP4206900A4 (fr) * 2021-01-22 2024-04-10 Samsung Electronics Co Ltd Dispositif électronique commandé sur la base de données sonores et procédé de commande d'un dispositif électronique sur la base de données sonores

Also Published As

Publication number Publication date
CN112150778A (zh) 2020-12-29

Similar Documents

Publication Publication Date Title
CN110138937B (zh) 一种通话方法、设备及系统
WO2021213120A1 (fr) Procédé et appareil de projection d'écran et dispositif électronique
EP3872807A1 (fr) Procédé de commande vocale et dispositif électronique
CN113169760B (zh) 无线短距离音频共享方法及电子设备
WO2020119492A1 (fr) Procédé de traitement de message et appareil associé
CN112399390B (zh) 一种蓝牙回连的方法及相关装置
WO2020062159A1 (fr) Procédé de charge sans fil et dispositif électronique
CN111369988A (zh) 一种语音唤醒方法及电子设备
CN111628916B (zh) 一种智能音箱与电子设备协作的方法及电子设备
WO2021000817A1 (fr) Procédé et dispositif de traitement de son ambiant
CN112119641B (zh) 通过转发模式连接的多tws耳机实现自动翻译的方法及装置
WO2021052204A1 (fr) Procédé de découverte de dispositif basé sur un carnet d'adresses, procédé de communication audio et vidéo, et dispositif électronique
CN110784830A (zh) 数据处理方法、蓝牙模块、电子设备与可读存储介质
WO2021031865A1 (fr) Procédé et appareil d'appel
CN114079893A (zh) 蓝牙通信方法、终端设备及计算机可读存储介质
CN113452945A (zh) 分享应用界面的方法、装置、电子设备及可读存储介质
CN111835907A (zh) 一种跨电子设备转接服务的方法、设备以及系统
CN114115770A (zh) 显示控制的方法及相关装置
CN114185503A (zh) 多屏交互的系统、方法、装置和介质
WO2022135157A1 (fr) Procédé et appareil d'affichage de page, ainsi que dispositif électronique et support de stockage lisible
CN113170279B (zh) 基于低功耗蓝牙的通信方法及相关装置
CN111492678B (zh) 一种文件传输方法及电子设备
US20240114110A1 (en) Video call method and related device
WO2022089563A1 (fr) Procédé d'amélioration de son, procédé et appareil de commande d'écouteur et écouteur
CN115022807A (zh) 快递信息提醒方法和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20834233

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20834233

Country of ref document: EP

Kind code of ref document: A1