WO2024255380A1 - Voice message playing method and electronic device - Google Patents

Voice message playing method and electronic device

Info

Publication number
WO2024255380A1
Authority
WO
WIPO (PCT)
Prior art keywords
state
electronic device
proximity
proximity light
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2024/083255
Other languages
English (en)
French (fr)
Other versions
WO2024255380A9 (zh)
Inventor
肖来成
李辰龙
张文礼
刘铁良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to EP24738264.1A (EP4503584A4)
Publication of WO2024255380A1
Publication of WO2024255380A9
Anticipated expiration
Current legal status: Ceased


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72433User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041Portable telephones adapted for handsfree use
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041Portable telephones adapted for handsfree use
    • H04M1/605Portable telephones adapted for handsfree use involving control of the receiver volume to provide a dual operational mode at close or far distance from the user
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/725Cordless telephones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/12Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • the present application relates to the field of terminal devices, and in particular to a voice message playing method and electronic device.
  • chat applications are used for text communication and voice communication.
  • voice communication includes but is not limited to video calls, voice calls, and voice messages.
  • the chat application obtains a near state detected by the proximity light sensor, starts the anti-false-touch process, automatically switches from speaker playback mode to receiver playback mode, and turns off the screen to prevent accidental touches.
  • the chat application obtains an away state detected by the proximity light sensor, stops the anti-false-touch process, automatically switches from receiver playback mode back to speaker playback mode, and turns the screen on.
  • if the proximity light sensor misreports, the chat application may mistakenly stop the anti-false-touch process while the user is still answering the voice message at the ear.
  • in that case, the chat application has already switched to speaker playback mode and the screen is lit, affecting the user experience.
  • the present application provides a voice message playing method and an electronic device.
  • the electronic device can obtain a proximity light state based on an algorithm, and use it to correct the proximity light state detected by the proximity light sensor, so as to avoid false alarms that would affect the user experience.
  • the present application provides a method for playing a voice message.
  • the method includes: in response to a received click operation on a voice message, the electronic device calls a speaker to play the voice message, and the display screen of the electronic device is in a bright-screen state.
  • the electronic device obtains a first proximity light state based on the detection result of the proximity light sensor, and the electronic device obtains a second proximity light state based on the first state data; wherein the state data includes posture data and motion data, the posture data is used to describe the current posture of the electronic device, and the motion data is used to describe the current motion state of the electronic device.
  • the electronic device detects that both the first proximity light state and the second proximity light state indicate an away state, continues to call the speaker to play the voice message, and the display screen of the electronic device is in a bright-screen state.
  • in the process of the user raising the hand and placing the electronic device next to the ear, a third proximity light state is obtained based on the detection result of the proximity light sensor.
  • the electronic device obtains a fourth proximity light state based on the second state data.
  • the electronic device detects that the third proximity light state indicates an away state and the fourth proximity light state indicates a near state, determines that the third proximity light state is an abnormal state, and, based on the fourth proximity light state, determines that the electronic device is in an ear listening scenario, calls the earpiece to play the voice message, and the display screen of the electronic device is in an off-screen state.
  • the electronic device in the present application can calculate the proximity light state of the electronic device from its posture and motion state, so that when the proximity light sensor raises a false alarm, the detection result of the proximity light sensor can be corrected to obtain the correct proximity light state.
  • the electronic device can control the mode of the electronic device when playing voice messages based on the correct proximity light state.
  • the gesture assistance algorithm continuously obtains state data after the speaker is turned on, so that it can obtain the corresponding proximity light state in real time.
  • the electronic device can also call the proximity light state calculated by the algorithm in real time.
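  • For illustration only, the following minimal Java (Android-style) sketch shows one way such continuous acquisition could look at the application level; the class GestureAssistFeeder and the placeholder method estimateNear are hypothetical names, not taken from the patent.

    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Hypothetical sketch: feed ACC samples to a gesture assistance algorithm in real time.
    public class GestureAssistFeeder implements SensorEventListener {
        private final SensorManager sensorManager;
        private volatile boolean algorithmSaysNear; // latest proximity estimate from the algorithm

        public GestureAssistFeeder(Context context) {
            sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        }

        // Start streaming ACC data once the speaker starts playing the voice message.
        public void start() {
            Sensor acc = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            sensorManager.registerListener(this, acc, SensorManager.SENSOR_DELAY_GAME);
        }

        public void stop() {
            sensorManager.unregisterListener(this);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // Posture (pitch/roll) and motion data would be derived from these samples and
            // combined into a near/away estimate; estimateNear is only a placeholder.
            algorithmSaysNear = estimateNear(event.values[0], event.values[1], event.values[2]);
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }

        private boolean estimateNear(float ax, float ay, float az) {
            return false; // real logic would use the posture and motion data
        }

        // Read the algorithm's latest proximity light state in real time.
        public boolean isNear() {
            return algorithmSaysNear;
        }
    }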
  • the state data is ACC (accelerometer) data.
  • the posture data includes but is not limited to pitch angle and roll angle.
  • the motion data includes but is not limited to handupJudge parameters.
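  • As a hedged illustration of how such posture data might be derived, pitch and roll can be estimated from a single accelerometer sample using the common formulas below; the sign and axis conventions are one typical choice and are not specified by the patent.

    // Minimal sketch: derive pitch and roll (in degrees) from one ACC sample (ax, ay, az in m/s^2).
    public final class PostureMath {
        private PostureMath() { }

        // Rotation about the device's x axis (one common convention).
        public static double pitchDegrees(double ax, double ay, double az) {
            return Math.toDegrees(Math.atan2(-ax, Math.sqrt(ay * ay + az * az)));
        }

        // Rotation about the device's y axis.
        public static double rollDegrees(double ax, double ay, double az) {
            return Math.toDegrees(Math.atan2(ay, az));
        }
    }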
  • the method also includes: while the user keeps the electronic device at the ear to listen to the voice message, the electronic device obtains a fifth proximity light state based on the detection result of the proximity light sensor.
  • the electronic device obtains the sixth proximity light state based on the third state data.
  • the electronic device detects that the fifth proximity light state indicates an away state while the sixth proximity light state indicates a near state, determines that the fifth proximity light state is an abnormal state, and, based on the sixth proximity light state, determines that it is still in the ear listening scenario, continues to call the earpiece to play the voice message, and keeps the display screen of the electronic device in the off-screen state.
  • the electronic device can obtain the accurate motion state of the electronic device based on the state parameters obtained before the earpiece is turned on together with the currently obtained state parameters, so as to obtain an accurate proximity light state based on the algorithm.
  • continuing to call the speaker to play the voice message while the display screen of the electronic device is in a bright-screen state includes: the electronic device detects whether the earpiece is in an on state; the electronic device detects that the earpiece is in an off state, and determines whether the first proximity light state indicates an away state; the electronic device determines that the first proximity light state indicates an away state, and determines whether the second proximity light state indicates a near state; the electronic device determines that the second proximity light state indicates an away state, and determines that the current target proximity light state of the electronic device is the away state.
  • based on the target proximity light state, the electronic device continues to call the speaker to play the voice message, and the display screen of the electronic device is in a bright-screen state. In this way, the electronic device can determine, based on the result of the proximity light sensor, whether it needs to be combined with the proximity light state obtained by the algorithm; if so, the proximity light state calculated by the algorithm is called to further determine whether the result detected by the proximity light sensor is a false alarm.
  • the electronic device detecting that the third proximity light state indicates an away state and the fourth proximity light state indicates a near state, and determining that the third proximity light state is an abnormal state, includes: the electronic device detects whether the earpiece is in an on state; the electronic device detects that the earpiece is in an off state, and determines whether the third proximity light state indicates an away state; the electronic device determines that the third proximity light state indicates an away state, and determines whether the fourth proximity light state indicates a near state; the electronic device determines that the fourth proximity light state indicates a near state, and determines, based on the third proximity light state and the fourth proximity light state, that the third proximity light state is an abnormal state.
  • the electronic device determines that the fourth proximity light state is the target proximity light state. In this way, the electronic device can determine, based on the result of the proximity light sensor, whether the proximity light state obtained by the algorithm needs to be combined; if so, the proximity light state calculated by the algorithm is called to further determine whether the result detected by the proximity light sensor is a false alarm.
  • determining that the electronic device is in an ear listening scenario, calling the earpiece to play the voice message, and the display screen of the electronic device being in an off-screen state includes: the electronic device determines, based on the target proximity light state, that it is in an ear listening scenario, calls the earpiece to play the voice message, and the display screen of the electronic device is in an off-screen state.
  • the electronic device can determine whether it is currently in an ear listening state based on both the algorithm and the result of the proximity light sensor to obtain an accurate result, and control the mode in which the voice message is played based on the finally determined target proximity light state.
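  • A minimal Java sketch of this target-state resolution, assuming a simple two-valued proximity state; the names ProxState, ProximityArbiter, and Playback are illustrative and not taken from the patent.

    // Hypothetical sketch of the target proximity-light-state resolution described above.
    enum ProxState { NEAR, AWAY }

    final class ProximityArbiter {

        // Resolve the target state from the sensor-reported state and the algorithm-derived state:
        // a sensor AWAY report is treated as a possible false alarm whenever the algorithm says NEAR.
        static ProxState resolveTarget(ProxState sensorState, ProxState algoState) {
            if (sensorState == ProxState.AWAY && algoState == ProxState.NEAR) {
                return ProxState.NEAR; // sensor result is abnormal; trust the algorithm
            }
            return sensorState;
        }

        // Apply the target state to the playback path (speaker vs. earpiece, screen on/off).
        static void applyTarget(ProxState target, Playback playback) {
            if (target == ProxState.NEAR) {
                playback.useEarpiece();   // ear listening scenario
                playback.screenOff();     // anti-false-touch
            } else {
                playback.useSpeaker();
                playback.screenOn();
            }
        }

        // Hypothetical playback control interface.
        interface Playback {
            void useSpeaker();
            void useEarpiece();
            void screenOn();
            void screenOff();
        }
    }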
  • detecting that the fifth proximity light state indicates an away state while the sixth proximity light state indicates a near state, and determining that the fifth proximity light state is an abnormal state, includes the following steps.
  • the electronic device detects whether the earpiece is in an on state.
  • the electronic device detects that the earpiece is in an on state, and determines whether the sixth proximity light state indicates a near state.
  • the electronic device determines that the sixth proximity light state indicates a near state, and determines, based on the fifth proximity light state and the sixth proximity light state, that the fifth proximity light state is an abnormal state.
  • the electronic device determines that the sixth proximity light state is the target proximity light state.
  • in this way, the electronic device can obtain the state data before the earpiece is turned on and lock the gesture lock based on that state data, that is, lock the proximity light state. Thus, in the scenario where the user is answering at the ear, even if the proximity light sensor misreports, the proximity light state finally obtained by the electronic device is still the near state because the gesture lock has locked it, and it is not affected by the sensor's false alarm.
  • the electronic device determining, based on the sixth proximity light state, that it is still in the ear listening scenario, continuing to call the earpiece to play the voice message, with the display screen of the electronic device in the off-screen state, includes: based on the target proximity light state, determining that the electronic device is in the ear listening scenario, continuing to call the earpiece to play the voice message, with the display screen of the electronic device in the off-screen state.
  • the electronic device can correct an abnormal report from the proximity light sensor based on the proximity light state obtained by the algorithm, and control the mode in which the voice message is played based on the correct proximity light state.
  • the present application provides an electronic device, comprising: one or more processors, a memory, and one or more computer programs, wherein the one or more computer programs are stored in the memory, and when the computer programs are executed by the one or more processors, the electronic device performs the following steps: in response to a received click operation on a voice message, calling a speaker to play the voice message, with the display screen of the electronic device in a bright-screen state; based on the detection result of a proximity light sensor, obtaining a first proximity light state; based on the first state data, obtaining a second proximity light state, wherein the state data includes posture data and motion data, the posture data is used to describe the current posture of the electronic device, and the motion data is used to describe the current motion state of the electronic device; when it is detected that both the first proximity light state and the second proximity light state indicate an away state, continuing to call the speaker to play the voice message, with the display screen of the electronic device in a bright-screen state; when the user raises his hand and places the electronic device next to the ear, obtaining a third proximity light state based on the detection result of the proximity light sensor and a fourth proximity light state based on the second state data; when it is detected that the third proximity light state indicates an away state and the fourth proximity light state indicates a near state, determining that the third proximity light state is an abnormal state; and, based on the fourth proximity light state, determining that the electronic device is in an ear listening scenario, calling the earpiece to play the voice message, with the display screen of the electronic device in an off-screen state.
  • the electronic device, when the computer program is executed by the one or more processors, further performs the following steps: while the user keeps the electronic device at the ear to listen to the voice message, a fifth proximity light state is obtained based on the detection result of the proximity light sensor; a sixth proximity light state is obtained based on the third state data; when the fifth proximity light state indicates an away state and the sixth proximity light state indicates a near state, the fifth proximity light state is determined to be an abnormal state; and, based on the sixth proximity light state, it is determined that the electronic device is still in the ear listening scenario, the earpiece continues to be called to play the voice message, and the display screen of the electronic device is in the off-screen state.
  • the electronic device, when the computer program is executed by the one or more processors, performs the following steps: detecting whether the earpiece is in an on state; detecting that the earpiece is in an off state, determining whether the first proximity light state indicates an away state; determining that the first proximity light state indicates an away state, determining whether the second proximity light state indicates a near state; determining that the second proximity light state indicates an away state, determining that the current target proximity light state of the electronic device is the away state; and, based on the target proximity light state, continuing to call the speaker to play the voice message, with the display screen of the electronic device in a bright-screen state.
  • the electronic device, when the computer program is executed by the one or more processors, performs the following steps: detecting whether the earpiece is in an on state; detecting that the earpiece is in an off state, determining whether the third proximity light state indicates an away state; determining that the third proximity light state indicates an away state, determining whether the fourth proximity light state indicates a near state; determining that the fourth proximity light state indicates a near state, and, based on the third proximity light state and the fourth proximity light state, determining that the third proximity light state is an abnormal state; and determining that the fourth proximity light state is the target proximity light state.
  • the electronic device, when the computer program is executed by the one or more processors, performs the following steps: based on the target proximity light state, determining that the electronic device is in an ear listening scenario, calling the earpiece to play the voice message, with the display screen of the electronic device in an off-screen state.
  • the electronic device, when the computer program is executed by the one or more processors, performs the following steps: detecting whether the earpiece is in an on state; detecting that the earpiece is in an on state, determining whether the sixth proximity light state indicates a near state; determining that the sixth proximity light state indicates a near state, and, based on the fifth proximity light state and the sixth proximity light state, determining that the fifth proximity light state is an abnormal state; and determining that the sixth proximity light state is the target proximity light state.
  • the electronic device, when the computer program is executed by the one or more processors, performs the following steps: based on the target proximity light state, determining that the electronic device is still in the ear listening scenario, continuing to call the earpiece to play the voice message, with the display screen of the electronic device in the off-screen state.
  • the second aspect and any implementation of the second aspect correspond to the first aspect and any implementation of the first aspect respectively.
  • the technical effects corresponding to the second aspect and any implementation of the second aspect can refer to the technical effects corresponding to the above-mentioned first aspect and any implementation of the first aspect, which will not be repeated here.
  • the present application provides a computer-readable medium for storing a computer program, wherein the computer program includes instructions for executing the method in the first aspect or any possible implementation of the first aspect.
  • the present application provides a computer program, comprising instructions for executing the method in the first aspect or any possible implementation of the first aspect.
  • the present application provides a chip, the chip comprising a processing circuit and a transceiver pin, wherein the transceiver pin and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the method in the first aspect or any possible implementation of the first aspect to control the receiving pin to receive a signal and control the sending pin to send a signal.
  • FIG. 1 is a schematic diagram showing a hardware structure of an electronic device
  • FIG. 2 is a schematic diagram showing an exemplary software structure of an electronic device
  • FIG. 3 is a schematic diagram of an exemplary user interface
  • FIG. 4a is a schematic diagram of an exemplary application scenario
  • FIG. 4b is a schematic diagram of an exemplary application scenario
  • FIG. 4c to FIG. 4d are schematic diagrams showing exemplary module interactions
  • FIG. 5 is a flow chart showing an exemplary method for playing a voice message
  • FIG. 6 is a flow chart showing an exemplary method for playing a voice message
  • FIG. 7 is a schematic diagram of a state value calculation process of a gesture assistance algorithm
  • FIG. 8 is a flow chart showing an exemplary method for playing a voice message
  • FIG. 9a to FIG. 9b are schematic flow charts of an exemplary voice message playing method
  • FIG. 10 is a schematic diagram of an exemplary gesture lock calculation process
  • FIG. 11 is a flow chart showing an exemplary voice playing method
  • FIG. 12 is a schematic diagram of an exemplary voice message playing process
  • FIG. 13 is a schematic diagram showing the structure of an exemplary device.
  • A and/or B in this document merely describes an association relationship between the associated objects, indicating that three relationships may exist.
  • A and/or B can mean: A exists alone, both A and B exist, or B exists alone.
  • first and second in the description and claims of the embodiments of the present application are used to distinguish different objects rather than to describe a specific order of objects.
  • a first target object and a second target object are used to distinguish different target objects rather than to describe a specific order of target objects.
  • words such as “exemplary” or “for example” are used to indicate examples, illustrations or descriptions. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as “exemplary” or “for example” is intended to present related concepts in a specific way.
  • multiple refers to two or more than two.
  • multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
  • FIG1 shows a schematic diagram of the structure of an electronic device 100. It should be understood that the electronic device 100 shown in FIG1 is only an example of an electronic device, and the electronic device 100 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different component configuration.
  • the various components shown in FIG1 may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processor (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • Different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller may generate an operation control signal according to the instruction operation code and the timing signal to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or cyclically used. If the processor 110 needs to use the instruction or data again, it may be directly called from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the charging management module 140 is used to receive charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from a wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. While the charging management module 140 is charging the battery 142, it may also power the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle number, battery health status (leakage, impedance), etc.
  • the power management module 141 can also be set in the processor 110.
  • the power management module 141 and the charging management module 140 can also be set in the same device.
  • the wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve the utilization of antennas.
  • antenna 1 can be reused as a diversity antenna for a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide solutions for wireless communications including 2G/3G/4G/5G, etc., applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, and filter, amplify, and process the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and convert it into electromagnetic waves for radiation through the antenna 1.
  • at least some of the functional modules of the mobile communication module 150 can be set in the processor 110.
  • at least some of the functional modules of the mobile communication module 150 can be set in the same device as at least some of the modules of the processor 110.
  • the modulation and demodulation processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be transmitted into a medium- or high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor can be an independent device.
  • the modem processor can be independent of the processor 110 and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) network), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) and the like applied to the electronic device 100.
  • the wireless communication module 160 can be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the frequency of the electromagnetic wave signal and performs filtering processing, and sends the processed signal to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, modulate the frequency of the signal, amplify the signal, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or a satellite based augmentation system (SBAS).
  • the electronic device 100 implements the display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, which connects the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through ISP, camera 193, video codec, GPU, display screen 194 and application processor.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, and the light is transmitted to the camera photosensitive element through the lens. The light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converts it into an image visible to the naked eye.
  • the ISP can also perform algorithm optimization on the noise and brightness of the image.
  • the ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP can be set in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to be converted into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format.
  • the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals, and can process not only digital image signals but also other digital signals. For example, when the electronic device 100 is selecting a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital videos.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record videos in a variety of coding formats, such as Moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • applications such as intelligent cognition of electronic device 100 can be realized, such as image recognition, face recognition, voice recognition, text understanding, etc.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and videos can be stored in the external memory card.
  • the internal memory 121 can be used to store computer executable program codes, which include instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121.
  • the internal memory 121 may include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
  • the data storage area may store data created during the use of the electronic device 100 (such as audio data, a phone book, etc.), etc.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the electronic device 100 can implement audio functions such as music playing and recording through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone jack 170D, and the application processor.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 can be set in the processor 110, or some functional modules of the audio module 170 can be set in the processor 110.
  • the speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal.
  • the electronic device 100 can listen to music or listen to a hands-free call through the speaker 170A.
  • the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be received by placing the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 can be provided with at least one microphone 170C. In other embodiments, the electronic device 100 can be provided with two microphones 170C, which can not only collect sound signals but also realize noise reduction function. In other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify the sound source, realize directional recording function, etc.
  • the earphone interface 170D is used to connect a wired earphone.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A can be set on the display screen 194.
  • for example, the capacitive pressure sensor may include at least two parallel plates with conductive material.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the touch operation intensity according to the pressure sensor 180A.
  • the electronic device 100 can also calculate the touch position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities can correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100.
  • in some embodiments, the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) can be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for anti-shake shooting. For example, when the shutter is pressed, the gyro sensor 180B detects the angle of the electronic device 100 shaking, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to offset the shaking of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • the electronic device 100 when the electronic device 100 is a flip phone, the electronic device 100 can detect the opening and closing of the flip cover according to the magnetic sensor 180D. Then, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, set Features such as automatic flip unlocking.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in all directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device and is applied to applications such as horizontal and vertical screen switching and pedometers.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light outward through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
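  • As a small illustrative sketch (not part of the patent): on Android, many proximity sensors report a distance in centimeters in SensorEvent.values[0], and a reading below the sensor's maximum range is commonly treated as "near".

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;

    // Sketch: map a raw proximity reading to a near/away decision; the threshold choice is illustrative.
    final class ProximityReading {
        static boolean isNear(SensorEvent event, Sensor proximitySensor) {
            return event.values[0] < proximitySensor.getMaximumRange();
        }
    }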
  • the ambient light sensor 180L is used to sense the ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photography, fingerprint call answering, etc.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 due to low temperature. In other embodiments, when the temperature is lower than another threshold, the electronic device 100 performs a boost on the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
  • the touch sensor 180K is also called a "touch panel”.
  • the touch sensor 180K can be set on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K can also be set on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can obtain a vibration signal. In some embodiments, the bone conduction sensor 180M can obtain a vibration signal of a vibrating bone block of the vocal part of the human body. The bone conduction sensor 180M can also contact the human pulse to receive a blood pressure beat signal. In some embodiments, the bone conduction sensor 180M can also be set in an earphone and combined into a bone conduction earphone.
  • the audio module 170 can parse out a voice signal based on the vibration signal of the vibrating bone block of the vocal part obtained by the bone conduction sensor 180M to realize a voice function.
  • the application processor can parse the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M to realize a heart rate detection function.
  • the key 190 includes a power key, a volume key, etc.
  • the key 190 may be a mechanical key or a touch key.
  • the electronic device 100 may receive key input and generate key signal input related to user settings and function control of the electronic device 100.
  • Motor 191 can generate vibration prompts.
  • Motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • touch operations acting on different areas of the display screen 194 can also correspond to different vibration feedback effects.
  • Different application scenarios for example: time reminders, receiving messages, alarm clocks, games, etc.
  • the touch vibration feedback effect can also support customization.
  • Indicator 192 may be an indicator light, which may be used to indicate charging status, power changes, messages, missed calls, notifications, etc.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present application takes the Android system of the layered architecture as an example to exemplify the software structure of the electronic device 100.
  • FIG. 2 is a software structure block diagram of the electronic device 100 according to an embodiment of the present application.
  • the layered architecture of the electronic device 100 divides the software into several layers, each with a clear role and division of labor.
  • the layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom, namely, the application layer, the application framework layer, the hardware abstraction layer (HAL), and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, chat, etc.
  • the chat application can be an application pre-installed on the electronic device, or it can be a third-party application, which is not limited in this application.
  • the chat application is used as an example for explanation.
  • the method in the embodiments of this application can be applied to any application that can send voice messages, which is not limited in this application.
  • the application framework layer provides application programming interface (API) and programming framework for the applications in the application layer.
  • API application programming interface
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, a Sensorservice (sensor service), an Audioservice (audio service), and the like.
  • the window manager is used to manage window programs.
  • the window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the resource manager provides various resources for applications, such as localized strings, icons, images, layout files, video files, and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification type messages. Notifications can be displayed for a short time and then disappear automatically without user interaction. For example, the notification manager is used to notify downloads are complete, message reminders, etc.
  • the notification manager can also be a notification that appears in the system top status bar in the form of a chart or scroll bar text, such as notifications from applications running in the background, or a notification that appears on the screen in the form of a dialog window. For example, a text message is displayed in the status bar, a reminder sound is emitted, an electronic device vibrates, an indicator light flashes, etc.
  • Sensorservice is used to provide sensor-related services and provide service interfaces for upper-layer applications.
  • Audioservice is used to provide audio related services and provide service interfaces for upper-layer applications.
  • the HAL layer is an interface layer between the operating system kernel and the hardware circuit.
  • the HAL layer includes but is not limited to: SensorHal (sensor hardware abstraction layer), AudioHal (audio hardware abstraction layer), etc.
  • Audio HAL is used to process the audio stream, for example, to perform noise reduction, directional enhancement and other processing on the audio stream.
  • SensorHal is used to process events reported by sensor drivers.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, Sensorevent module, audio driver, Bluetooth driver, Wi-Fi driver, etc.
  • the display driver is used to control the display of the display.
  • Sensorevent is used to record the status of the receiver or speaker.
  • the audio driver is used to control audio devices, such as but not limited to: receivers, speakers, etc.
  • the SensorHub (sensor component) layer includes but is not limited to: gesture auxiliary module and sensor driver.
  • the gesture auxiliary module is used to control the gesture lock.
  • the gesture lock can also be understood as a proximity light state lock, which is used to lock the proximity light state. If the gesture lock is in the locked state, that is, the proximity light state is locked, the proximity light driver reports the near state. If the gesture lock is not locked, that is, the proximity light state is not locked, the proximity light driver reports the actually detected proximity light state (which can be the away state or the near state).
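  • The following minimal Java sketch illustrates the report path just described, with the gesture lock applied before the proximity state leaves the driver; the class and method names are hypothetical, and the real mechanism in the patent resides in the SensorHub layer rather than in application code.

    // Hypothetical sketch of the proximity report path with the gesture lock applied.
    final class ProximityLightDriver {
        private volatile boolean gestureLockHeld;  // set and cleared by the gesture auxiliary module

        void setGestureLock(boolean held) {
            gestureLockHeld = held;
        }

        // Report a proximity light state upward (e.g. toward SensorHal).
        void report(boolean sensorDetectsNear) {
            // While the lock is held the reported state stays "near", so a false "away"
            // reading from the proximity light sensor does not reach the upper layers.
            boolean reportedNear = gestureLockHeld || sensorDetectsNear;
            sendEvent(reportedNear);
        }

        private void sendEvent(boolean near) {
            // Placeholder for the actual event delivery mechanism.
        }
    }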
  • the sensor driver is used to control the state of the sensor (e.g., open or close) and receive the sensor parameters reported by the sensor.
  • the sensor driver may further include sensor drivers corresponding to different sensors.
  • the sensor driver may include a proximity light driver to control the proximity light sensor and receive the sensor parameters uploaded by the proximity light sensor.
  • the sensor driver may also include an acceleration sensor driver, etc., which is not limited in the present application.
  • the layers in the software structure shown in FIG2 and the components included in each layer do not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer layers than shown in the figure, and each layer may include more or fewer components, which is not limited in the present application.
  • FIG3 is a schematic diagram of an exemplary user interface.
  • the display interface 300 includes a chat application interface 310.
  • the chat application interface includes but is not limited to a chat box and an input box.
  • the chat box is used to display received and sent chat messages.
  • the chat messages include but are not limited to: voice chat messages 311 (hereinafter referred to as voice messages) and text chat messages (hereinafter referred to as text messages).
  • the user can click on the voice message 311 to instruct the mobile phone to play the voice message.
  • the electronic device is a mobile phone as an example for description.
  • the electronic device can also be a wearable device (such as a smart watch or bracelet), a tablet, etc., and the present application does not limit it.
  • When the user clicks on the voice message 311, the mobile phone is usually in an away state.
  • the away state can be understood as a non-ear answering state.
  • the posture is usually to hold the mobile phone with one hand (it can be the left hand or the right hand), and the user can observe the display interface of the mobile phone to click on the voice message 311.
  • the mobile phone (specifically a chat application) responds to the received user operation and plays the voice message 311 through the speaker. It can also be understood that in the away state, the mobile phone plays the audio content in the voice message 311 through the speaker, and the display interface 300 is in a bright screen state, that is, the interface shown in FIG3 is displayed.
  • the state corresponding to the away state is the approach state, which can also be understood as the state of answering the call at the ear.
  • Figure 4b is a schematic diagram of the scene of answering at the ear.
  • the user can move the handheld mobile phone from the away state shown in Figure 4a to the user's ear, so that the receiver of the mobile phone is close to the user's ear; this is the approach state.
  • the mobile phone detects that it is currently in the approach state, and the anti-mistouch process can be started, that is, the voice message is played through the receiver, and the screen is turned off (that is, the display interface 300 turns black).
  • the touch event is locked, that is, the touch sensor no longer detects the touch event to prevent accidental touch.
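  • A minimal sketch of this anti-mistouch switch, assuming hypothetical names rather than the actual chat-application code, is as follows:

```cpp
// Sketch (not the actual chat-application code) of the anti-mistouch switch
// triggered by a proximity light event, as described above.
enum class ProxEvent { Near, Far };

struct PlaybackState {
    bool screenOn    = true;   // display lit
    bool useEarpiece = false;  // false: loudspeaker, true: receiver/earpiece
    bool touchLocked = false;  // true: touch events are ignored
};

void onProximityEvent(PlaybackState& s, ProxEvent e) {
    const bool atEar = (e == ProxEvent::Near);
    s.useEarpiece = atEar;    // route the voice message to the receiver
    s.screenOn    = !atEar;   // turn the display off while at the ear
    s.touchLocked = atEar;    // lock touch to prevent accidental touches
}

int main() {
    PlaybackState s;
    onProximityEvent(s, ProxEvent::Near);  // phone raised to the ear
    onProximityEvent(s, ProxEvent::Far);   // phone taken away again
    return s.screenOn ? 0 : 1;
}
```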
  • Figures 4c to 4d are schematic diagrams of module interactions shown as examples. Please refer to Figure 4c.
  • When answering a voice message, the chat application registers to monitor proximity light events. Specifically, the chat application can send a registration monitoring message to Sensorservice. Sensorservice receives the registration monitoring message and sends an activate (trigger) message to SensorHal. In response to the received activate message, SensorHal instructs the sensor driver (specifically, it can be a proximity light driver) to trigger (i.e., start) the proximity light sensor. The proximity light sensor then detects proximity light, as pictured in the sketch below.
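  • The registration chain can be pictured with the following illustrative sketch; the stub types stand in for the chat application, Sensorservice, SensorHal and the proximity light driver and are not the real framework interfaces:

```cpp
#include <functional>
#include <iostream>
#include <utility>

// Sketch of the registration chain described above.
struct ProximityDriver {
    void activateSensor() { std::cout << "proximity light sensor started\n"; }
};

struct SensorHalStub {
    ProximityDriver driver;
    void activate() { driver.activateSensor(); }  // trigger the sensor
};

struct SensorServiceStub {
    SensorHalStub hal;
    std::function<void(bool /*near*/)> callback;
    void registerListener(std::function<void(bool)> cb) {
        callback = std::move(cb);  // remember whom to notify
        hal.activate();            // forward the activate request downwards
    }
};

int main() {
    SensorServiceStub service;
    // The chat application registers to be notified of proximity light events.
    service.registerListener([](bool near) {
        std::cout << (near ? "proximity event\n" : "away event\n");
    });
    return 0;
}
```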
  • the specific detection method can refer to the existing technical embodiments, and this application is not limited.
  • the proximity light sensor may continuously report the detected sensor parameters to the proximity light driver, and the proximity light driver may determine the corresponding proximity light event based on the received sensor parameters, which is not limited in this application.
  • the proximity light event includes but is not limited to: proximity events and distance events.
  • the proximity light driver may report the relevant event when it determines that a proximity light event switch has occurred (for example, switching from a proximity event to an away event, or switching from an away event to a proximity event, which is not limited in this application).
  • the proximity light sensor may also report a proximity light event to the sensor driver based on the detected parameters (which may be referred to as sensor parameters or proximity light parameters).
  • the proximity light event includes but is not limited to: proximity events and distance events.
  • the proximity light sensor may report the relevant event to the proximity light driver only when a switching event occurs.
  • the proximity light event is reported to SensorHal.
  • SensorHal sends the proximity light event to Sensorservice.
  • Sensorservice sends the proximity light event to the chat application.
  • the chat application receives a proximity event reported by the proximity light driver through other modules, which can also be understood as the proximity light driver reporting the proximity status.
  • the chat application starts the anti-mistouch process in response to the reported proximity status (i.e., proximity event).
  • the chat application sends an indication message to the PowerManager (power management) module to indicate the start of the screen-off process.
  • the PowerManager module sends a screen-off instruction to the hardware composer (Hardware Composer, HWC).
  • the HWC sends the screen-off instruction to the display driver.
  • the display driver controls the display screen to turn off the screen in response to the received screen-off instruction.
  • the chat application simultaneously sends a handset play instruction to Audioservice, which is used to instruct the handset to be called to play the current voice message.
  • Audioservice sends a handset play instruction to AudioHal
  • AudioHal sends a handset play instruction to the audio driver.
  • the audio driver calls the handset (i.e., the receiver shown in FIG. 1 ) to play the current audio data, i.e., the voice content in the voice message.
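  • The play-instruction chain can be pictured with the following illustrative sketch, assuming hypothetical names rather than the real Audioservice/AudioHal interfaces:

```cpp
#include <iostream>

// Sketch of the "handset play instruction" travelling down to the audio
// driver, which switches the active output device.
enum class OutputDevice { Speaker, Earpiece };

struct AudioDriverStub {
    OutputDevice active = OutputDevice::Speaker;
    void play(OutputDevice dev) { active = dev; }  // render on this device
};

struct AudioHalStub {
    AudioDriverStub driver;
    void playOnEarpiece() { driver.play(OutputDevice::Earpiece); }
};

int main() {
    AudioHalStub hal;
    hal.playOnEarpiece();  // forwarded from the application via the audio service
    std::cout << (hal.driver.active == OutputDevice::Earpiece) << '\n';
    return 0;
}
```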
  • the proximity light sensor detects an error, for example, the proximity state is detected as a distant state.
  • For example, the user's hair may block or absorb the detection signal of the proximity light sensor, so that the proximity light sensor fails to receive the reflected detection signal.
  • the detection result of the proximity light sensor will indicate that the current proximity light state is a distant state.
  • the detection result may be determined by the proximity light sensor, or it may be determined by the proximity light driver, and this application is not limited.
  • the chat application will turn off the anti-false touch process.
  • the chat application responds to the received away state and turns off the anti-false touch process.
  • the chat application sends an indication message to the PowerManager module to indicate the closure of the screen-off process.
  • the PowerManager module sends a screen-on instruction to the HWC.
  • the HWC sends the screen-on instruction to the display driver.
  • the display driver controls the display of the screen to light up, for example, displaying the display interface 300 in FIG. 3 .
  • the chat application simultaneously sends a speaker play instruction to Audioservice, which is used to instruct the speaker to play the current voice message.
  • Audioservice sends a speaker play instruction to AudioHal
  • AudioHal sends a speaker play instruction to the audio driver.
  • the audio driver calls the speaker to play the current audio data, that is, the voice content in the voice message.
  • In this case, the chat application shuts down the anti-false touch process due to the false report of the sensor driver, causing the phone to play audio through the speaker while still at the user's ear, affecting the user experience.
  • In addition, since the screen is already on while the phone is next to the ear, the user may accidentally touch it.
  • the embodiment of the present application provides a voice message playback method, which can lock the proximity light state through a gesture lock, so that when the algorithm detects that the mobile phone is still in the proximity state, the false report of the away state by the proximity light sensor can be corrected, thereby avoiding false reports and improving the accuracy of proximity light reporting.
  • FIG5 is a flowchart of an exemplary method for playing a voice message. Please refer to FIG5 , which specifically includes but is not limited to:
  • the chat application receives a voice message and displays the voice message 311 in the chat application interface.
  • the user can click on the voice message 311 to trigger the chat application to play the voice message 311 through the speaker.
  • the default speaker plays the voice message as an example for explanation.
  • the chat application plays the voice message 311 in response to the received user operation.
  • the proximity light driver determines the proximity light state based on the detection result of the proximity light sensor.
  • the proximity light state obtained by the proximity light driver from the proximity light sensor is referred to as the physical proximity light state.
  • the proximity light state includes an away state or a proximity state. If the proximity light sensor is blocked, this corresponds to the proximity state. If the proximity light sensor is not blocked, this corresponds to the away state.
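  • Purely as an assumption for illustration (the embodiment does not specify how the raw reading is thresholded), the mapping from a raw reflection reading to the physical proximity light state might look like this:

```cpp
// Sketch only: mapping a raw reading to the physical proximity light state.
// The threshold and the "reflected counts" representation are assumptions.
enum class ProxState { Near, Far };

constexpr int kReflectionThreshold = 100;  // assumed raw-counts threshold

ProxState physicalState(int reflectedCounts) {
    // A strong reflection means the sensor is covered (ear or face close by).
    return reflectedCounts >= kReflectionThreshold ? ProxState::Near
                                                   : ProxState::Far;
}

int main() {
    return physicalState(150) == ProxState::Near ? 0 : 1;
}
```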
  • the proximity light driver acquires the earpiece state to detect whether the earpiece is currently in an open state.
  • If the earpiece is in the open state, S508 is executed.
  • If the earpiece is in the closed state, S504 is executed.
  • the proximity light driver determines whether the acquired physical proximity light state is the away state.
  • If the judgment is yes, S505 is executed.
  • If the judgment is no, S506 is executed.
  • the gesture assistance algorithm obtains the ACC (accelerometer) data of the mobile phone in real time after the handset or speaker is turned on, and calculates the state value based on the ACC data.
  • the state value is used to indicate whether it is a proximity state.
  • the proximity light state obtained by the algorithm is called the algorithm proximity light state.
  • In one example, if the state value is true, it means that the algorithm proximity light state obtained by the gesture assistance algorithm based on the ACC data is the proximity state. In another example, if the state value is false, it means that the algorithm proximity light state obtained by the gesture assistance algorithm based on the ACC data is the away state.
  • the proximity light driver reads the state value of the output of the gesture assistance algorithm. In one example, if the state value is true, S506 is executed. In another example, if the state value is false, S507 is executed.
  • the proximity light driver can correct the physical proximity light state based on the algorithmic proximity light state to report the correct proximity light state.
  • the proximity light driver reports the proximity light status to the chat application according to the confirmed proximity light status.
  • the gesture assistance algorithm determines the gesture lock state based on the ACC data.
  • the gesture lock state includes a locked state and an unlocked state.
  • the locked state is used to lock the proximity state, that is, when the gesture lock is locked, the proximity light state reported by the proximity light driver is the proximity state.
  • In the unlocked state, the proximity light state reported by the proximity light driver to the chat application is the actually detected proximity light state, that is, the physical proximity light state, which can include a proximity state or an away state.
  • the proximity light driver reads the gesture lock state output by the gesture assistance algorithm.
  • S509 Determine whether the gesture lock is in a locked state.
  • If the gesture lock is not in the locked state, the proximity light state reported by the proximity light driver to the chat application is the actually detected proximity light state, that is, the physical proximity light state, which may include a proximity state or an away state. The overall decision is sketched below.
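  • A minimal sketch of the overall reporting decision of FIG. 5, under the assumption of hypothetical names and with the real step details simplified, is as follows:

```cpp
// Sketch of the reporting decision described for FIG. 5; the names are
// illustrative and the step numbers are only referenced in the comments.
enum class ProxState { Near, Far };

struct AlgorithmView {
    bool earpieceOn;     // earpiece state recorded by the gesture assistance algorithm
    bool stateValue;     // true: algorithm proximity light state is "near"
    bool gestureLocked;  // true: proximity light state is locked to "near"
};

ProxState decideReport(ProxState physical, const AlgorithmView& algo) {
    if (algo.earpieceOn) {
        // Earpiece already playing (S508/S509): follow the gesture lock.
        return algo.gestureLocked ? ProxState::Near : physical;
    }
    if (physical == ProxState::Far) {
        // Possible false "far" (S504/S505): trust the algorithm if it says "near".
        return algo.stateValue ? ProxState::Near : ProxState::Far;
    }
    // Physical state already "near" (S506): report it directly.
    return ProxState::Near;
}

int main() {
    // Sensor falsely says "far" while the algorithm still says "near".
    AlgorithmView algo{false /*earpieceOn*/, true /*stateValue*/, false /*gestureLocked*/};
    return decideReport(ProxState::Far, algo) == ProxState::Near ? 0 : 1;
}
```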
  • FIG6 is a flowchart of an exemplary method for playing a voice message, and please refer to FIG6 , which specifically includes but is not limited to the following steps:
  • the application plays a voice message.
  • the chat application receives a voice message and displays the voice message 311 in the chat application interface.
  • the user can click on the voice message 311 to trigger the chat application to play the voice message 311 through the speaker.
  • the default speaker plays the voice message as an example for explanation.
  • the chat application plays the voice message.
  • the application sends a speaker-opening instruction to AudioHal.
  • the chat application determines that a voice message needs to be played. As shown in FIG4c , the chat application sends a speaker play instruction to Audioservice to instruct the speaker to be called to play the current voice message. Audioservice sends a speaker play instruction to AudioHal, and AudioHal sends a speaker play instruction to the audio driver. In response to the received instruction, the audio driver calls the speaker to play the current audio data, that is, the voice content in the voice message. Among them, the display screen is in a bright screen state.
  • AudioHal sends a command to write the speaker opening status to Sensorevent.
  • After AudioHal has instructed the audio driver to turn on the speaker, it can be confirmed that the speaker is currently turned on. AudioHal sends a write-speaker-on-state instruction to Sensorevent to instruct Sensorevent to record that the speaker is currently turned on.
  • In response to the received instruction to write the speaker-on state, Sensorevent records that the speaker is currently on. For example, Sensorevent may be provided with a flag bit corresponding to the speaker state, where "1" indicates on and "0" indicates off. Before Sensorevent receives the speaker-on state instruction, the flag bit of the speaker state is "0", that is, the speaker is off. In response to the received instruction, Sensorevent switches the flag bit to "1" to indicate that the speaker is on.
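  • As an illustrative sketch of this flag-bit record (the field names are assumptions), this can be expressed as:

```cpp
// Sketch of the on/off flag bits that Sensorevent keeps for the audio devices
// ("1" = on, "0" = off), as described above; the field names are assumptions.
struct SensorEventRecord {
    unsigned speakerOn  : 1;  // written when AudioHal reports the speaker state
    unsigned earpieceOn : 1;  // written when AudioHal reports the earpiece state
};

void writeSpeakerOn(SensorEventRecord& rec)  { rec.speakerOn = 1; }
void writeEarpieceOn(SensorEventRecord& rec) { rec.earpieceOn = 1; }

int main() {
    SensorEventRecord rec{};  // both flags start at "0" (off)
    writeSpeakerOn(rec);      // the speaker-on state is written
    return rec.speakerOn == 1 ? 0 : 1;
}
```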
  • SensorHal reads the speaker-on status.
  • SensorHal reads the data in Sensorevent in real time to obtain the status of the earpiece and the speaker (including open or closed).
  • SensorHal can detect the speaker state switch recorded in Sensorevent, that is, switching from the closed state to the open state.
  • When SensorHal detects that the speaker state has switched from off to on, it sends indication information to the gesture assistance algorithm, where the indication information includes speaker-on state information for indicating that the speaker is on.
  • SensorHal sends the indication information only when it detects a state switch of the speaker and/or the receiver. That is to say, after SensorHal has sent the indication information, if the states of the speaker and the receiver do not change, for example, the receiver remains in the off state and the speaker remains in the on state, SensorHal will not send the indication information to the gesture assistance algorithm again. A sketch of this edge-triggered behaviour follows.
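  • A minimal sketch of this edge-triggered notification, assuming hypothetical names, is as follows:

```cpp
#include <functional>

// Sketch of the "notify only on a state switch" behaviour: an indication is
// sent when the value read from Sensorevent differs from the value read last
// time. The names are assumptions for illustration.
struct SpeakerWatcher {
    bool lastSpeakerOn = false;
    std::function<void(bool)> notifyAlgorithm;  // e.g. the gesture assistance algorithm

    void poll(bool speakerOnNow) {
        if (speakerOnNow != lastSpeakerOn) {  // state switch detected
            notifyAlgorithm(speakerOnNow);    // send the indication information
            lastSpeakerOn = speakerOnNow;
        }
        // No change: no indication is sent.
    }
};

int main() {
    int notifications = 0;
    SpeakerWatcher watcher;
    watcher.notifyAlgorithm = [&](bool) { ++notifications; };
    watcher.poll(true);  // off -> on: one indication
    watcher.poll(true);  // unchanged: nothing sent
    return notifications == 1 ? 0 : 1;
}
```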
  • the gesture assistance algorithm records the speaker opening state.
  • the gesture assistance algorithm determines that the speaker is currently turned on in response to the received indication information.
  • the gesture assistance algorithm can record the speaker state as turned on.
  • the gesture assistance algorithm can also set a flag to record the speaker state and the earpiece state.
  • the recording method can refer to Sensorevent, which will not be repeated here.
  • the gesture assistance algorithm outputs a state value based on the ACC data.
  • FIG7 is an exemplary state value calculation process of the gesture assistance algorithm. Please refer to FIG7 , which specifically includes but is not limited to:
  • the gesture assistance algorithm may send a request message to the sensor driver to request ACC data.
  • the sensor driver responds to the request of the gesture assistance algorithm and obtains detection data of the acceleration sensor and the gyroscope sensor.
  • the acceleration sensor and the gyroscope sensor may be always turned on, or may be restarted in response to the call of the sensor driver, which is not limited in this application.
  • pitch and roll can be used to describe the posture of the phone.
  • the handupJudge value is used to describe the process variable of the phone from dynamic to static or static to dynamic.
  • the specific calculation method can refer to the existing technology and will not be repeated in this application.
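  • Purely as an assumption for illustration (the embodiment refers to existing technology and does not specify the exact formulas), pitch and roll are commonly derived from the accelerometer components ax, ay and az as:

```latex
% Common posture formulas, assumed here for illustration only.
\mathrm{pitch} = \operatorname{atan2}\!\left(-a_x,\ \sqrt{a_y^{2}+a_z^{2}}\right), \qquad
\mathrm{roll}  = \operatorname{atan2}\!\left(a_y,\ a_z\right)
```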
  • S703 Determine whether pitch, roll, and handupJudge meet a threshold, and the receiver or speaker is turned on.
  • the gesture assistance algorithm compares the calculated pitch, roll and handupJudge with their respective corresponding threshold ranges. If pitch, roll and handupJudge all satisfy the threshold range, for example, pitch and roll satisfy the threshold range corresponding to the left hand holding the phone or the threshold range corresponding to the right hand holding the phone, and handupJudge is greater than 0, then further determine whether the receiver or speaker is in the on state. As described above, the gesture assistance algorithm may record the status of the speaker and/or receiver, and the gesture assistance algorithm may determine whether the receiver or speaker is turned on based on the recorded status. If all the above conditions are met, execute S704. If any condition is not met, the status value output is false.
  • In S704, the gesture assistance algorithm may read the proximity light state last reported by the proximity light driver, which refers to the proximity light state actually reported by the proximity light driver to the chat application.
  • If the last reported proximity light state is the proximity state, the state value output is true.
  • If the last reported proximity light state is the away state, the state value output is false.
  • If the pitch, roll and handupJudge obtained by the gesture assistance algorithm based on the ACC data do not meet the thresholds, the state value output is false, indicating that the algorithm proximity light state is the away state.
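  • A minimal sketch of this state-value decision, with the threshold ranges left as placeholders because they are not given here, is as follows:

```cpp
// Sketch of the state-value decision of FIG. 7. The posture-range check is a
// placeholder: the actual threshold ranges are not given in the text.
struct Posture {
    double pitch;
    double roll;
    double handupJudge;
};

bool withinHoldRange(const Posture& p) {
    // Placeholder for "pitch and roll satisfy the left-/right-hand holding
    // range and handupJudge is greater than 0" (S703).
    return p.handupJudge > 0.0;  // pitch/roll range checks omitted here
}

// Returns the algorithm proximity light state: true = "near", false = "far".
bool computeStateValue(const Posture& p, bool earpieceOrSpeakerOn,
                       bool lastReportWasNear) {
    if (!withinHoldRange(p) || !earpieceOrSpeakerOn) {
        return false;          // S703 judged "no": output false
    }
    return lastReportWasNear;  // S704: follow the last reported state
}

int main() {
    Posture p{-70.0, 5.0, 1.0};
    return computeStateValue(p, true, true) ? 0 : 1;
}
```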
  • the chat application sends a request to register to monitor proximity light to the Sensorservice.
  • the Sensorservice sends an activate message to the SensorHal.
  • In response to the received activate message, SensorHal sends a trigger-proximity-light-sensor instruction to the proximity light driver.
  • The instruction is used to instruct the sensor driver (specifically, it may be the proximity light driver) to trigger (i.e., start) the proximity light sensor.
  • the proximity light driver obtains the proximity light state.
  • the proximity light driver determines that the mobile phone is in a distant state based on the parameters detected by the proximity light sensor.
  • the proximity light driver reads the receiver status.
  • The gesture assistance algorithm records the status of the receiver and the speaker.
  • the current status of the receiver recorded by the gesture assistance is off. That is, the gesture assistance algorithm records the status of the speaker and the receiver as off by default. For example, in S604, the gesture assistance algorithm records the speaker as on, while the receiver is still off.
  • the proximity light driver can read the status of the earpiece and speaker recorded by the gesture assistance algorithm and obtain the current earpiece status.
  • the proximity light driver determines whether the earpiece is turned on.
  • the proximity light driver determines whether the earpiece is in an open state based on the earpiece state read from the gesture assistance algorithm.
  • If the handset is turned on, S508 is executed, and if the handset is turned off, S504 is executed.
  • the proximity light driver detects that the earpiece is in the off state, and S609 is executed.
  • the proximity light driver determines whether it is in the distance state.
  • The proximity light driver makes the judgment based on the physical proximity light state acquired in S606c. As shown in S504 in FIG5 , if the judgment is yes, S505 is executed, and if the judgment is no, S506 is executed.
  • the proximity light driver determines that the current state is the away state (ie, determines as yes), and executes S610 .
  • the proximity light driver reads a state value from the gesture assistance algorithm.
  • the process in Figure 5 goes to S505, corresponding to Figure 6, the proximity light driver reads the state value from the gesture assistance algorithm. It should be noted that the gesture assistance algorithm continues to calculate the output state value after S603 until the speaker and the receiver are turned off.
  • the output state value read by the proximity light driver is the latest calculated output state value of the gesture assistance algorithm.
  • the gesture assistance algorithm can also set a calculation cycle, and its calculation cycle can be the same as the reading cycle of the proximity light driver, which is not limited in this application.
  • the proximity light driver determines whether the state value is true.
  • If the judgment is yes, S506 is executed, and if the judgment is no, S510 is executed. As shown in FIG6 , the proximity light driver reads a state value of false. Accordingly, the process in FIG5 goes to S510.
  • the proximity light driver determines and reports the distance state.
  • The proximity light driver will report the current physical proximity light state. That is, both the algorithm proximity light state and the physical proximity light state are the away state.
  • the proximity light driver reports the distance state to SensorHal.
  • the proximity light driver determines to report the away state to SensorHal.
  • Sensorservice reports the away status.
  • Sensorservice sends the away status to the chat application.
  • the chat application determines that the user is currently away, the screen continues to be lit and the voice message is played through the speaker, that is, the current voice message playback mode is maintained.
  • Figure 8 is a flowchart of an exemplary voice message playing method. Please refer to Figure 8, which specifically includes but is not limited to the following steps:
  • the gesture assistance algorithm outputs a state value based on the ACC data.
  • the gesture assistance algorithm continuously obtains ACC data and calculates the corresponding state value.
  • the calculation result in FIG. 7 is still false.
  • the proximity light driver obtains a proximity light state.
  • the proximity light driver determines that the mobile phone is in a distant state based on the parameters detected by the proximity light sensor.
  • The gesture assistance algorithm records the status of the receiver and the speaker.
  • The current status of the receiver recorded by the gesture assistance algorithm is the off status; that is, the gesture assistance algorithm records the status of the speaker and the receiver as off by default.
  • The gesture assistance algorithm has recorded the speaker as on, while the receiver status is still off.
  • the proximity light driver can read the status of the earpiece and speaker recorded by the gesture assistance algorithm and obtain the current earpiece status.
  • the proximity light driver determines whether the handset is turned on.
  • the proximity light driver determines whether the earpiece is in an open state based on the earpiece state read from the gesture assistance algorithm.
  • If the handset is turned on, S508 is executed, and if the handset is turned off, S504 is executed.
  • the proximity light driver detects that the earpiece is in the off state, and S805 is executed.
  • the proximity light driver determines whether it is in the distance state.
  • The proximity light driver makes the judgment based on the physical proximity light state acquired in S802. As shown in S504 in FIG5, if the judgment is yes, S505 is executed, and if the judgment is no, S506 is executed.
  • the physical proximity light state acquired by the proximity light driver is the away state, and the proximity light driver determines that the current state is the away state, and executes S806 .
  • the proximity light driver reads a state value from the gesture assistance algorithm.
  • the process in Figure 5 goes to S505, corresponding to Figure 6, the proximity light driver reads the state value from the gesture assistance algorithm. It should be noted that the gesture assistance algorithm continues to calculate the output state value after the speaker is turned on until the speaker and the earpiece are turned off. The output state value read by the proximity light driver is the latest calculated output state value of the gesture assistance algorithm.
  • the gesture assistance algorithm can also set a calculation cycle, and its calculation cycle can be the same as the reading cycle of the proximity light driver, which is not limited in this application.
  • the proximity light driver determines whether the state value is true.
  • In S505 of FIG. 5 , if the judgment is yes, then S506 is executed, and if the judgment is no, then S510 is executed. As shown in FIG. 8 , the proximity light driver reads the state value as false. Accordingly, the process in FIG. 5 will go to S510.
  • the proximity light driver determines and reports the away state.
  • The proximity light driver will report the current physical proximity light state. That is, both the algorithm proximity light state and the physical proximity light state are the away state.
  • the proximity light driver reports the distance state to SensorHal.
  • the proximity light driver determines to report the away state to SensorHal.
  • SensorHal reports the away state to Sensorservice.
  • Sensorservice sends the away state to the chat application.
  • the chat application determines that the user is currently away, the screen continues to be lit and the voice message is played through the speaker, that is, the current voice message playback mode is maintained.
  • FIG. 9a and FIG. 9b are flowcharts of the voice message playing method shown as an example, please refer to FIG. 9a and FIG. 9b, specifically but not limited to:
  • the gesture assistance algorithm outputs a state value based on the ACC data.
  • the gesture assistance algorithm continuously obtains ACC data and calculates the corresponding state value.
  • the gesture assistance algorithm determines yes when executing S703, that is, each parameter meets the threshold and the speaker is turned on.
  • Since the proximity light driver has not yet reported the proximity state, when S704 is executed, it is determined that the last report of the proximity light driver was the away state, and the output state value is still false.
  • the proximity light driver obtains a proximity light state.
  • the proximity light driver determines that the current proximity light state is the proximity state based on the detection result of the proximity light sensor.
  • The gesture assistance algorithm records the status of the receiver and the speaker.
  • The current status of the receiver recorded by the gesture assistance algorithm is the off status; that is, the gesture assistance algorithm records the status of the speaker and the receiver as off by default.
  • The gesture assistance algorithm has recorded the speaker as on, while the receiver status is still off.
  • the proximity light driver can read the status of the earpiece and speaker recorded by the gesture assistance algorithm and obtain the current earpiece status.
  • the proximity light driver determines whether the earpiece is turned on.
  • the proximity light driver determines whether the earpiece is in an open state based on the earpiece state read from the gesture assistance algorithm.
  • If the handset is turned on, S508 is executed, and if the handset is turned off, S504 is executed.
  • the proximity light driver detects that the earpiece is in the off state, and S905 is executed.
  • The proximity light driver makes the judgment based on the physical proximity light state acquired in S902. As shown in S504 in FIG5 , if the judgment is yes, S505 is executed, and if the judgment is no, S506 is executed.
  • the physical proximity light state acquired by the proximity light driver in S902 is the proximity state.
  • Since the physical proximity light state is the proximity state, the proximity light driver determines that the current state is not the away state (i.e., the judgment is no), and executes S906a. That is, corresponding to FIG. 5, the process will go to S506.
  • the proximity light driver determines and reports the proximity state.
  • Since the proximity light driver determines that the current state is not the away state, there is no need to read the algorithm proximity light state, and the proximity state can be reported directly.
  • the proximity light driver reports the proximity status to SensorHal.
  • the proximity light driver determines to report the proximity state to SensorHal.
  • SensorHal reports the proximity state to Sensorservice.
  • Sensorservice sends the proximity state to the chat application.
  • the application sends a receiver opening instruction to AudioHal.
  • The chat application sends a handset play instruction to Audioservice to instruct the handset to be called to play the current voice message.
  • Audioservice sends the handset play instruction to AudioHal
  • AudioHal sends the handset play instruction to the audio driver.
  • the audio driver calls the handset (i.e., the receiver shown in FIG1) to play the current audio data, i.e., the voice content in the voice message.
  • the chat application sends an instruction message to the PowerManager (power management) module to instruct the start of the screen-off process.
  • the PowerManager module sends a screen-off instruction to the hardware composer (HWC).
  • the HWC sends the screen off instruction to the display driver.
  • the display driver responds to the received screen off instruction and controls the display screen to turn off.
  • AudioHal sends a command to write the earpiece opening status to Sensorevent.
  • After AudioHal has instructed the audio driver to turn on the handset, it can be confirmed that the handset is currently turned on. AudioHal sends a write-handset-on-state instruction to Sensorevent to instruct Sensorevent to record that the handset is currently turned on. It should be noted that before the speaker turn-off instruction is received, the speaker state recorded by Sensorevent and the gesture assistance algorithm is still on.
  • In response to the received instruction to write the earpiece-on state, Sensorevent records that the earpiece is currently on. For example, Sensorevent may be provided with a flag bit corresponding to the earpiece state, where "1" indicates on and "0" indicates off. Before Sensorevent receives the earpiece-on state instruction, the flag bit of the earpiece state is "0", that is, the earpiece is off. In response to the received instruction, Sensorevent switches the flag bit to "1" to indicate that the earpiece is on.
  • SensorHal reads the data in Sensorevent in real time to obtain the status of the earpiece and the speaker (including open or closed).
  • SensorHal can detect the earpiece state switch recorded in Sensorevent, that is, switching from the closed state to the open state.
  • When SensorHal detects that the state of the earpiece has switched from off to on, it sends indication information to the gesture assistance algorithm, where the indication information includes earpiece-on state information for indicating that the earpiece is in an on state.
  • SensorHal sends the indication information only when it detects a state switch of the speaker and/or the receiver. That is, after SensorHal has sent the indication information, if the states of the speaker and the earpiece do not change, SensorHal will not send indication information to the gesture assistance algorithm again.
  • the gesture assistance algorithm records the handset opening state.
  • the gesture-assisted algorithm determines that the earpiece is currently in an open state in response to the received indication information.
  • the gesture-assisted algorithm may record that the earpiece state is in an open state.
  • the gesture-assisted algorithm may also set a flag to record the speaker state and the earpiece state.
  • the recording method may refer to Sensorevent, which will not be described here.
  • the gesture assistance algorithm outputs a state value based on the ACC data.
  • the gesture assistance algorithm continuously obtains ACC data and calculates the corresponding state value.
  • the gesture assistance algorithm executes S703 , it is judged as yes, that is, each parameter meets the threshold value and the speaker is turned on.
  • Since the proximity light driver has reported the proximity state (i.e., S906a to S906c), when S704 is executed, it is determined that the last report of the proximity light driver was the proximity state, and the output state value is true; that is, the algorithm proximity light state calculated by the gesture assistance algorithm is the proximity state.
  • the gesture assistance algorithm outputs the gesture lock state based on the ACC data.
  • the gesture assistance algorithm continuously acquires ACC data after the speaker is turned on.
  • the gesture assistance algorithm may only calculate the state value.
  • the gesture assistance algorithm may also calculate the state value and gesture lock at the same time, which is not limited in this application.
  • the gesture assistance algorithm can start calculating the gesture lock state value after detecting that the handset is turned on.
  • the gesture assistance algorithm can also calculate the gesture lock state value when the proximity light driver needs to read the gesture lock state value (i.e., when executing S915), which is not limited in this application.
  • FIG10 is a schematic diagram of an exemplary gesture lock calculation process. Please refer to FIG10 , which specifically includes but is not limited to the following steps:
  • the gesture assistance algorithm acquires ACC data in real time and calculates corresponding pitch, roll and handupJudge values.
  • the gesture assistance algorithm may output a state value and a gesture lock state value based on pitch, roll, and handupJudge.
  • Alternatively, when called by the proximity light driver, the gesture assistance algorithm can execute the corresponding judgment process based on the calculated pitch, roll and handupJudge values to output a state value and/or a gesture lock state value.
  • the gesture assistance algorithm obtains whether the physical proximity light state obtained by the proximity light driver is the proximity state.
  • If the physical proximity light state is the proximity state, S1004 is executed; otherwise, S1005 is executed.
  • If the thresholds are met, the gesture lock is locked, that is, the output gesture lock state is the locked state.
  • If the thresholds are not met, the gesture lock is not locked, that is, the output gesture lock state is the unlocked state.
  • the gesture assistance algorithm may execute the process in FIG. 7 to obtain the state value.
  • the gesture lock is locked, that is, the output gesture lock state is locked.
  • the gesture assisting algorithm detects whether the gesture lock is currently in a locked state (ie, the output result of the last gesture lock determination process).
  • the gesture lock is locked, that is, the output gesture lock state is locked.
  • the gesture lock is not locked, that is, the output gesture lock state is unlocked.
  • the gesture assistance algorithm executes the process shown in Figure 10. Specifically, the gesture assistance algorithm executes S1003, determines that the current proximity light state is the proximity state, and then executes S1004. As described above, the gesture assistance algorithm obtains and calculates the pitch, roll and handupJudge values in real time. Accordingly, after the user raises his hand, the gesture assistance algorithm can determine that the conditions for answering the call by the ear are met based on the pitch, roll and handupJudge that have been obtained (that is, obtained before the handset is turned on), that is, S1004 judges that it is yes, and the gesture lock is locked, that is, the proximity light state is locked as the proximity state.
  • the pitch and roll will change compared with the previous values, meeting the right hand answering threshold range or the left hand answering threshold range.
  • the phone moves from the far away position to the ear, and the handupJudge value is greater than 0, that is, the preset threshold is met.
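  • A minimal sketch of the gesture-lock update, as inferred from the walkthroughs of FIG. 10 (the branch ordering is an interpretation, not the literal flowchart), is as follows:

```cpp
// Sketch of the gesture-lock update inferred from the FIG. 10 walkthroughs in
// the text; the branch ordering is an interpretation, not the literal flowchart.
struct LockInput {
    bool physicalNear;   // physical proximity light state read by the driver
    bool stateValue;     // algorithm proximity light state (FIG. 7 output)
    bool thresholdsMet;  // pitch/roll (and handupJudge) match the at-ear ranges
};

bool updateGestureLock(bool currentlyLocked, const LockInput& in) {
    if (in.physicalNear || in.stateValue) {
        // The device appears to be at the ear: lock only if the posture agrees.
        return in.thresholdsMet;
    }
    if (currentlyLocked) {
        // Re-check the posture before keeping the lock; release it otherwise.
        return in.thresholdsMet;
    }
    return false;  // otherwise the lock stays released
}

int main() {
    // Phone at the ear, sensor falsely reports "far", posture still matches:
    // the lock is kept, so the reported state stays "near".
    LockInput in{false, true, true};
    return updateGestureLock(true, in) ? 0 : 1;
}
```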
  • the proximity light driver obtains a proximity light state.
  • the proximity light driver determines that the mobile phone is in a proximity state based on the parameters detected by the proximity light sensor.
  • the proximity light driver reads the receiver status from the gesture assistance algorithm.
  • The gesture assistance algorithm records the status of the receiver and the speaker.
  • The current status of the receiver recorded by the gesture assistance algorithm is the off status; that is, the gesture assistance algorithm records the status of the speaker and the receiver as off by default.
  • The gesture assistance algorithm has recorded the speaker as on, while the receiver status is still off.
  • the proximity light driver can read the status of the earpiece and speaker recorded by the gesture assistance algorithm and obtain the current earpiece status.
  • the proximity light driver determines whether the earpiece is turned on.
  • the proximity light driver determines whether the earpiece is in an open state based on the earpiece state read from the gesture assistance algorithm.
  • If the handset is turned on, S508 is executed, and if the handset is turned off, S504 is executed.
  • the proximity light driver detects that the earpiece is in the on state, and S914 is executed. Correspondingly, the process in FIG5 goes to S508.
  • the proximity light driver reads the gesture lock state value.
  • the gesture assistance algorithm goes through the process shown in FIG. 10 , and outputs the gesture lock state as the locked state, that is, the proximity light state is locked as the proximity state.
  • the proximity light driver determines and reports the proximity state.
  • the proximity light state is locked as the proximity state. That is, no matter whether the physical proximity light state acquired by the proximity light driver is the proximity state or the distance state, the proximity light state reported by the proximity light driver to the chat application is the proximity state.
  • the proximity light driver reports the proximity status to SensorHal.
  • the proximity light driver determines to report the proximity state to SensorHal.
  • SensorHal reports the proximity state to Sensorservice.
  • Sensorservice sends the proximity state to the chat application.
  • When the user listens to the voice message, each module repeatedly executes S911a to S916c, so that the mobile phone is continuously in the screen-off state and the voice message is played using the receiver.
  • FIG. 11 is a flowchart of an exemplary voice playback method, please refer to Figure 11, specifically including but not limited to:
  • the gesture assistance algorithm outputs a state value based on the ACC data.
  • the gesture assistance algorithm outputs false.
  • the gesture assistance algorithm outputs the gesture lock state based on the ACC data.
  • the gesture assistance algorithm executes S1003. Due to the influence of device performance or the reporting cycle of the proximity light driver, the proximity light driver may not have obtained the away state after the user takes the mobile phone away from the ear. Accordingly, the gesture assistance algorithm determines that the current proximity light state is the away state and executes S1005. Exemplarily, the gesture assistance algorithm obtains the state value, and as described above, the state value is judged to be false. The gesture assistance algorithm continues to execute S1006. The gesture assistance algorithm is currently in a locked state, then executes S1007. Accordingly, the gesture assistance algorithm judges based on pitch, roll and handupJudge that the threshold is not met, then releases the locked state, that is, outputs the gesture lock state as an unlocked state.
  • the proximity light driver obtains a proximity light state.
  • the proximity light driver obtains that the physical proximity light state is a distance state.
  • the proximity light driver reads the receiver status from the gesture assistance algorithm.
  • the current earpiece state recorded by the gesture assistance is an open state.
  • the proximity light driver reads that the earpiece is in an open state.
  • the proximity light driver determines whether the earpiece is turned on.
  • the proximity light driver reads the gesture lock state from the gesture assistance algorithm.
  • the proximity light driver determines and reports the distance state.
  • The proximity light driver determines that the handset is turned on and that the gesture lock state read from the gesture assistance algorithm is the unlocked state. The process goes to S510, and the proximity light driver can determine to report the currently obtained physical proximity light state, that is, the away state.
  • the proximity light driver reports the distance state to SensorHal.
  • the chat application may start executing from S601a of FIG. 6 , that is, turn on the speaker to continue playing the voice message using the speaker and light up the screen.
  • the proximity light driver may report an abnormality, that is, when the user listens to a voice message at the ear, the proximity light sensor mistakenly detects the distance state.
  • the proximity light driver can correct the physical proximity light detection result of the proximity light driver based on the gesture lock state of the gesture assistance algorithm to report the correct proximity state to the chat application to avoid false alarms.
  • Figure 12 is an exemplary schematic diagram of the voice message playback process, please refer to Figure 12, specifically including but not limited to the following steps:
  • the gesture assistance algorithm outputs a state value based on the ACC data.
  • the gesture assistance algorithm executes the process in FIG. 7.
  • the gesture assistance algorithm determines that the threshold is met and the earpiece and speaker are turned on based on the calculated pitch, roll and handupJudge values.
  • the gesture assistance algorithm determines that it is yes and executes S704.
  • the gesture assistance algorithm determines that the last reported proximity light state is the proximity state, and the judgment is yes, and the output state value is true.
  • the gesture assistance algorithm outputs the gesture lock state based on the ACC data.
  • the gesture assistance algorithm executes the process in Figure 10 based on the ACC data.
  • the gesture assistance algorithm executes S1003.
  • S1202 may be executed before S1201b is executed, that is, the proximity light driver has already obtained that the current physical proximity light state is the away state (that is, a falsely detected state).
  • the gesture assistance algorithm executes S1003, judges as no, and continues to execute S1005.
  • the gesture assistance algorithm can obtain the output state value (that is, the value calculated in S1201a) as true.
  • the gesture assistance algorithm determines that the gesture lock is locked.
  • the gesture assistance algorithm executes S1004, and based on pitch and roll, determines that the threshold is met, and further determines that the gesture lock is locked.
  • handupJudge may always be 0.
  • the gesture assistance algorithm may not consider handupJudge when executing the state value judgment process and the gesture lock state judgment process. For example, when executing S1004, only pitch and roll may be judged without judging handupJudge.
  • the proximity light driver obtains a proximity light state.
  • The case where the proximity light driver obtains an abnormal state, that is, the actual state is the proximity state while the proximity light driver obtains the away state, is taken as an example for explanation.
  • the proximity light driver reads the receiver status from the gesture assistance algorithm.
  • the current earpiece state recorded by the gesture assistance is an open state.
  • the proximity light driver reads that the earpiece is in an open state.
  • the proximity light driver determines whether the earpiece is turned on.
  • the proximity light driver reads the gesture lock state from the gesture assistance algorithm.
  • the proximity light driver determines and reports the proximity state.
  • The proximity light driver determines that the handset is turned on and that the gesture lock state read from the gesture assistance algorithm is the locked state.
  • the process goes to S506, and the proximity light driver can determine to report the proximity state.
  • Although the physical proximity light state obtained by the proximity light driver is the away state, because the gesture lock locks the proximity state, the away state that would originally be reported is updated to the proximity state, thereby achieving timely correction of the false detection of the proximity light state and avoiding the problem that a false report causes the phone to play through the speaker and the screen to be accidentally touched while the phone is next to the user's ear.
  • the proximity light driver reports the proximity status to SensorHal.
  • the chat application detects that the voice message is played, and the chat application may stop calling the speaker or the earpiece, that is, the chat application may send an instruction to AudioHal to turn off the speaker and/or the earpiece.
  • AudioHal instructs the audio driver to turn off the earpiece or the speaker.
  • AudioHal sends an indication to Sensorevent that the speaker and the earpiece are written to be turned off.
  • Sensorevent records that the status of the earpiece and the speaker are both in the off state.
  • SensorHal reads from Sensorevent that the status of the speaker and/or the earpiece is switched from the on state to the off state, and SensorHal sends a speaker off indication and/or an earpiece off indication to the gesture assistance algorithm.
  • the gesture assistance algorithm may record that the status of the earpiece and/or the speaker is in the off state.
  • the gesture assistance algorithm stops acquiring ACC data to reduce system power consumption.
  • the gesture assistance algorithm starts to obtain ACC data and calculates the posture data (such as roll and pitch) and motion data (such as handup judge) of the electronic device to prepare in advance the data required for the gesture lock state calculation.
  • the proximity light driver detects the proximity state and reports it to the chat application.
  • the chat application can start the anti-mistouch process based on the proximity state, that is, instruct the mobile phone to turn off the screen and switch to the earpiece to play the voice message.
  • The chat application calls the earpiece, which requires interaction between the modules. Due to this performance impact, the proximity light driver may not yet have obtained the earpiece-on state. In other words, in the process shown in Figure 5, the process continues to S504, while at this time the earpiece is already in the on state, that is, the earpiece plays the voice message and the screen is in the off state.
  • Before the proximity light driver receives the handset-on message, it can continue to obtain the state value output by the gesture assistance algorithm.
  • the gesture assistance algorithm has determined that the mobile phone is in a proximity state based on the posture data (i.e., the posture of the mobile phone) and the motion data (i.e., the dynamic process of answering the phone at the ear).
  • the gesture assistance algorithm executes S703 and determines that it is yes, and the process goes to S704.
  • Since the proximity light driver has reported the proximity state, S704 is also judged as yes.
  • the output state value is true, indicating that the result obtained by the algorithm proximity light state is a proximity state.
  • the process will go from S504 to S505.
  • the proximity light driver can determine that the algorithm proximity light state is a proximity state based on the output state value of the gesture assistance algorithm.
  • the proximity light driver will report the proximity state, thereby achieving correction of the abnormal detection result of the physical proximity light state.
  • the electronic device includes hardware and/or software modules corresponding to the execution of each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed in the form of hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions for each specific application in combination with the embodiments, but such implementation should not be considered to be beyond the scope of the present application.
  • FIG13 shows a schematic block diagram of a device 1300 according to an embodiment of the present application.
  • the device 1300 may include: a processor 1301 and a transceiver/transceiver pin 1302 , and optionally, a memory 1303 .
  • bus 1304 includes a power bus, a control bus, and a status signal bus in addition to a data bus.
  • For clarity, the various buses are referred to as bus 1304 in the figure.
  • the memory 1303 may be used to store the instructions in the aforementioned method embodiments.
  • the processor 1301 may be used to execute the instructions in the memory 1303, and control the receiving pin to receive a signal, and control the sending pin to send a signal.
  • the apparatus 1300 may be the electronic device or a chip of the electronic device in the above method embodiment.
  • This embodiment further provides a computer storage medium, in which computer instructions are stored.
  • When the computer instructions are executed on an electronic device, the electronic device executes the above-mentioned related method steps to implement the method in the above-mentioned embodiments.
  • This embodiment also provides a computer program product.
  • When the computer program product is run on a computer, the computer is caused to execute the above-mentioned related steps to implement the method in the above-mentioned embodiments.
  • an embodiment of the present application also provides a device, which can specifically be a chip, component or module, and the device may include a connected processor and memory; wherein the memory is used to store computer-executable instructions, and when the device is running, the processor can execute the computer-executable instructions stored in the memory so that the chip executes the methods in the above-mentioned method embodiments.
  • the electronic device, computer storage medium, computer program product or chip provided in this embodiment are all used to execute the corresponding method provided above. Therefore, for the beneficial effects that can be achieved, reference can be made to the beneficial effects of the corresponding method provided above, which will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a voice message playing method and an electronic device. The method includes: when the electronic device plays a voice message, controlling the mode in which the electronic device plays the voice message on the basis of the proximity light state determined by a proximity light sensor and the proximity light state determined from state data. Thus, when the detection of the proximity light sensor is abnormal, the abnormal proximity light state can be corrected on the basis of the proximity light state calculated from the state data, so that the electronic device can control the mode in which it plays the voice message on the basis of the correct proximity light state.

Description

Voice message playing method and electronic device
This application claims priority to Chinese patent application No. 202310697883.5, entitled "Voice message playing method and electronic device", filed with the China National Intellectual Property Administration on June 13, 2023, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the field of terminal devices, and in particular to a voice message playing method and an electronic device.
BACKGROUND
At present, with the development of terminal technology, the functions of terminal applications are becoming more and more powerful. For example, a user can use a chat application for text communication and voice communication, where voice communication includes but is not limited to video calls, voice calls, voice messages, and the like. When the user picks up the mobile phone and listens to a voice message at the ear, the chat application obtains the proximity state detected by the proximity light sensor, starts the anti-mistouch process, automatically switches from speaker playback to earpiece playback, and turns off the screen to prevent accidental touches. Correspondingly, when the user puts the mobile phone down, the chat application obtains the away state detected by the proximity light sensor, closes the anti-mistouch process, automatically switches from earpiece playback to speaker playback, and lights up the screen.
However, the proximity light sensor may produce a false report, causing the chat application to mistakenly close the anti-mistouch process while the user is listening to a voice message at the ear. As a result, while the mobile phone is still at the user's ear, the chat application has already switched to speaker playback mode and lit up the screen, which affects the user experience.
发明内容
本申请提供一种语音消息播放方法及电子设备。在该方法中,电子设备可基于算法,获取接近光状态,并对接近光传感器检测到的接近光状态进行修正,以避免误报,影响用户使用体验。
第一方面,本申请提供一种语音消息播放方法。该方法包括:电子设备响应于接收到的对语音消息的点击操作,调用扬声器播放语音消息,并且,电子设备的显示屏处于亮屏状态。在电子设备播放语音消息的过程中,电子设备基于接近光传感器的检测结果,获取第一接近光状态,并且,电子设备基于第一状态数据,获取第二接近光状态;其中,状态数据包括姿态数据和运动数据,姿态数据用于描述电子设备当前的姿态,运动数据用于描述电子设备当前的运动状态。电子设备检测到第一接近光状态与第二接近光状态均指示为远离状态,继续调用扬声器播放语音消息,并且,电子设备显示屏处于亮屏状态。在用户抬手将电子设备置于耳边接听语音消息的情况下,基于接近光传感器的检测结果,获取第三接近光状态。电子设备基于第二状态数据,获取第四接近光状态。在电子设备检测到第三接近光状态指示为远离状态,且第四接近光状态指示为接近状态的情 况下,电子设备确定第三接近光状态为异常状态,以及,电子设备基于第四接近光状态,确定电子设备处于耳边接听场景,调用听筒播放语音消息,并且,电子设备的显示屏处于熄屏状态。这样,本申请中的电子设备可通过电子设备姿态以及运动状态,计算出电子设备的接近光状态,以在接近光传感器误报的情况下,对接近光传感器的检测结果进行修正,以得到正确的接近光状态。从而使得电子设备能够基于正确的接近光状态,控制电子设备在播放语音消息时的模式。并且,手势辅助算法在扬声器打开之后,持续获取状态数据,可使得手势辅助算法能够实时获取到对应的接近光状态。相应的,电子设备也可以实时调用算法计算出的接近光状态。
示例性的,状态数据为ACC数据。姿态数据包括但不限于俯仰角和翻滚角。运动数据包括但不限于handupJudge参数。
在一种可能的实现方式中,方法还包括:在用户将电子设备持续置于耳边接听语音消息的过程中,电子设备基于接近光传感器的检测结果,获取第五接近光状态。电子设备基于第三状态数据,获取第六接近光状态。电子设备检测到第五接近光状态指示为远离状态,且第六接近光状态指示为接近状态,确定第五接近光状态为异常状态,以及,基于第六接近光状态,确定电子设备仍处于耳边接听场景,继续调用听筒播放语音消息,并且,电子设备的显示屏处于熄屏状态。这样,电子设备在听筒打开之后,即可基于在听筒打开之前获取到的状态参数和当前获取到的状态参数,从而得到电子设备的准确的运动状态,以基于算法获取到准确的接近光状态。
在一种可能的实现方式中,检测到第一接近光状态与第二接近光状态均指示为远离状态,继续调用扬声器播放语音消息,并且,电子设备显示屏处于亮屏状态,包括:电子设备检测听筒是否为打开状态。电子设备检测到听筒为关闭状态,判断第一接近光状态是否指示为远离状态。电子设备判定第一接近光状态指示为远离状态,判断第二接近光状态是否指示为接近状态。电子设备判定第二接近光状态指示为远离状态,确定电子设备当前的目标接近光状态为远离状态。电子设备基于目标接近光状态,继续调用扬声器播放语音消息,并且,电子设备显示屏处于亮屏状态。这样,电子设备可基于接近光传感器的结果,确定是否需要结合算法得到的接近光状态进行判定。在需要结合算法的情况下,再调用算法算得的接近光状态,以进一步确定接近光传感器所检测到的结果是否存在误报。
在一种可能的实现方式中,电子设备检测到第三接近光状态指示为远离状态,且第四接近光状态指示为接近状态,确定第三接近光状态为异常状态,包括:电子设备检测听筒是否为打开状态。电子设备检测到听筒为关闭状态,判断第三接近光状态是否指示为远离状态。电子设备判定第三接近光状态指示为远离状态,判断第四接近光状态是否指示为接近状态。电子设备判定第四接近光状态指示为接近状态,基于第三接近光状态与第四接近光状态,确定第三接近光状态为异常状态。电子设备确定第四接近光状态为 目标接近光状态。这样,电子设备可基于接近光传感器的结果,确定是否需要结合算法得到的接近光状态进行判定。在需要结合算法的情况下,再调用算法算得的接近光状态,以进一步确定接近光传感器所检测到的结果是否存在误报。
在一种可能的实现方式中,基于第四接近光状态,确定电子设备处于耳边接听场景,调用听筒播放语音消息,并且,电子设备的显示屏处于熄屏状态,包括:电子设备基于目标接近光状态,确定电子设备处于耳边接听场景,调用听筒播放语音消息,并且,电子设备的显示屏处于熄屏状态。这样,电子设备可基于算法与接近光传感器的结果,确定电子设备当前是否处于耳边接听状态,以获取到准确的结果,并基于最终确定的目标接近光状态,控制电子设备播放语音消息的模式。
在一种可能的实现方式中,检测到第五接近光状态指示为远离状态,且第六接近光状态指示为接近状态,确定第五接近光状态为异常状态,包括:电子设备检测听筒是否为打开状态。电子设备检测到听筒为打开状态,判断第六接近光状态是否指示为接近状态。电子设备判定第六接近光指示为接近状态,基于第五接近光状态与第六接近光状态,确定第五接近光状态为异常状态。电子设备确定第六接近光状态为目标接近光状态。这样,电子设备在听筒打开之前,即可获取到状态数据,并基于状态数据,将手势锁锁定,即锁定接近光状态,使得电子设备在耳边接听的场景中,即使接近光传感器误报,而基于手势锁已经锁定接近状态,则电子设备最终获取到的接近光状态仍然是接近状态,而不会收到传感器误报的影响。
在一种可能的实现方式中,电子设备基于第六接近光状态,确定电子设备仍处于耳边接听场景,继续调用听筒播放语音消息,并且,电子设备的显示屏处于熄屏状态,包括:基于目标接近光状态,确定电子设备处于耳边接听场景,继续调用听筒播放语音消息,并且,电子设备的显示屏处于熄屏状态。这样,电子设备可基于算法得到的接近光状态,对接近光传感器的异常上报进行校正,并基于正确的接近光状态,控制电子设备播放语音消息的模式。
第二方面,本申请提供一种电子设备,包括:一个或多个处理器、存储器;以及一个或多个计算机程序,其中一个或多个计算机程序存储在存储器上,当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:响应于接收到的对语音消息的点击操作,调用扬声器播放语音消息,并且,电子设备的显示屏处于亮屏状态;基于接近光传感器的检测结果,获取第一接近光状态;基于第一状态数据,获取第二接近光状态;其中,状态数据包括姿态数据和运动数据,姿态数据用于描述电子设备当前的姿态,运动数据用于描述电子设备当前的运动状态;检测到第一接近光状态与第二接近光状态均指示为远离状态,继续调用扬声器播放语音消息,并且,电子设备显示屏处于亮屏状态;在用户抬手将电子设备置于耳边接听语音消息的情况下,基于接近光传感器的检测结果, 获取第三接近光状态;基于第二状态数据,获取第四接近光状态;检测到第三接近光状态指示为远离状态,且第四接近光状态指示为接近状态,确定第三接近光状态为异常状态,以及,基于第四接近光状态,确定电子设备处于耳边接听场景,调用听筒播放语音消息,并且,电子设备的显示屏处于熄屏状态。
在一种可能的实现方式中,当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:在用户将电子设备持续置于耳边接听语音消息的过程中,基于接近光传感器的检测结果,获取第五接近光状态;基于第三状态数据,获取第六接近光状态;检测到第五接近光状态指示为远离状态,且第六接近光状态指示为接近状态,确定第五接近光状态为异常状态,以及,基于第六接近光状态,确定电子设备仍处于耳边接听场景,继续调用听筒播放语音消息,并且,电子设备的显示屏处于熄屏状态。
在一种可能的实现方式中,当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:检测听筒是否为打开状态;检测到听筒为关闭状态,判断第一接近光状态是否指示为远离状态;判定第一接近光状态指示为远离状态,判断第二接近光状态是否指示为接近状态;判定第二接近光状态指示为远离状态,确定电子设备当前的目标接近光状态为远离状态;基于目标接近光状态,继续调用扬声器播放语音消息,并且,电子设备显示屏处于亮屏状态。
在一种可能的实现方式中,当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:检测听筒是否为打开状态;检测到听筒为关闭状态,判断第三接近光状态是否指示为远离状态;判定第三接近光状态指示为远离状态,判断第四接近光状态是否指示为接近状态;判定第四接近光状态指示为接近状态,基于第三接近光状态与第四接近光状态,确定第三接近光状态为异常状态;确定第四接近光状态为目标接近光状态。
在一种可能的实现方式中,当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:基于目标接近光状态,确定电子设备处于耳边接听场景,调用听筒播放语音消息,并且,电子设备的显示屏处于熄屏状态。
在一种可能的实现方式中,当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:检测听筒是否为打开状态;检测到听筒为打开状态,判断第六接近光状态是否指示为接近状态;判定第六接近光指示为接近状态,基于第五接近光状态与第六接近光状态,确定第五接近光状态为异常状态;确定第六接近光状态为目标接近光状态。
在一种可能的实现方式中,当计算机程序被一个或多个处理器执行时,使得电子设 备执行以下步骤:基于目标接近光状态,确定电子设备处于耳边接听场景,继续调用听筒播放语音消息,并且,电子设备的显示屏处于熄屏状态。
第二方面以及第二方面的任意一种实现方式分别与第一方面以及第一方面的任意一种实现方式相对应。第二方面以及第二方面的任意一种实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,此处不再赘述。
第三方面,本申请提供了一种计算机可读介质,用于存储计算机程序,该计算机程序包括用于执行第一方面或第一方面的任意可能的实现方式中的方法的指令。
第四方面,本申请提供了一种计算机程序,该计算机程序包括用于执行第一方面或第一方面的任意可能的实现方式中的方法的指令。
第五方面,本申请提供了一种芯片,该芯片包括处理电路、收发管脚。其中,该收发管脚、和该处理电路通过内部连接通路互相通信,该处理电路执行第一方面或第一方面的任一种可能的实现方式中的方法,以控制接收管脚接收信号,以控制发送管脚发送信号。
附图说明
图1为示例性示出的电子设备的硬件结构示意图;
图2为示例性示出的电子设备的软件结构示意图;
图3为示例性示出的用户界面示意图;
图4a为示例性示出的应用场景示意图;
图4b为示例性示出的应用场景示意图;
图4c~图4d为示例性示出的模块交互示意图;
图5为示例性示出的语音消息播放方法的流程示意图;
图6为示例性示出的语音消息播放方法的流程示意图;
图7为示例性示出的手势辅助算法的状态值计算流程示意图;
图8为示例性示出的语音消息播放方法的流程示意图;
图9a~图9b为示例性示出的语音消息播放方法的流程示意图;
图10为示例性示出的手势锁计算流程示意图;
图11为示例性示出的语音播放方法的流程示意图;
图12为示例性示出的语音消息播放流程示意图;
图13为示例性示出的装置的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本 申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
本申请实施例的说明书和权利要求书中的术语“第一”和“第二”等是用于区别不同的对象,而不是用于描述对象的特定顺序。例如,第一目标对象和第二目标对象等是用于区别不同的目标对象,而不是用于描述目标对象的特定顺序。
在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
在本申请实施例的描述中,除非另有说明,“多个”的含义是指两个或两个以上。例如,多个处理单元是指两个或两个以上的处理单元;多个系统是指两个或两个以上的系统。
图1示出了电子设备100的结构示意图。应该理解的是,图1所示电子设备100仅是电子设备的一个范例,并且电子设备100可以具有比图中所示的更多的或者更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。图1中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
电子设备100可以包括:处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带 信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置 于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置 翻盖自动解锁等特性。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图2是本申请实施例的电子设备100的软件结构框图。
电子设备100的分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,硬件抽象层(hardware abstraction layer,HAL),以及内核层。
应用程序层可以包括一系列应用程序包。
如图2所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息,聊天等应用程序。其中,聊天应用可以为电子设备预安装的应用,也可以是第三方应用,本申请不做限定。本申请实施例中仅以聊天应用为例进行说明,在其他实施例中,本申请实施例中的方法可以应用于任意可以发送语音消息的应用,本申请不做限定。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图2所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器,Sensorservice(传感器服务),Audioservice(音频服务)等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消 息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Sensorservice用于提供传感器的相关服务,并为上层应用提供服务接口。
Audioservice用于提供音频的相关服务,并为上层应用提供服务接口。
HAL层为位于操作系统内核与硬件电路之间的接口层。HAL层包括但不限于:SensorHal(传感器硬件抽象层)、AudioHal(音频硬件抽象层)等。
其中，AudioHal用于对音频流进行处理，例如，对音频流进行降噪、定向增强等处理。
SensorHal用于对传感器驱动上报的事件进行处理。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,Sensorevent模块,音频驱动,蓝牙驱动,Wi-Fi驱动等。
其中,显示驱动用于控制显示器的显示。
Sensorevent用于记录听筒或扬声器的状态。
音频驱动用于控制音频器件,例如包括但不限于:听筒、扬声器等。
SensorHub(传感器组件)层包括但不限于:手势辅助模块和传感器驱动。
手势辅助模块用于控制手势锁。其中,手势锁也可以理解为是接近光状态锁,用于锁定接近光状态。若手势锁为锁定状态,即为接近光状态锁定,则接近光驱动上报的接近光状态为接近状态。若手势锁未锁定,即为接近光状态未锁定,则接近光驱动依照实际检测到的接近光状态(可以为远离状态,也可以为接近状态)进行上报。
传感器驱动用于控制传感器的状态(例如打开或关闭)以及接收传感器上报的传感参数。可选地,传感器驱动中可以进一步包括对应于不同传感器的传感器驱动,例如,在本申请实施例中,传感器驱动可以包括接近光驱动,以控制接近光传感器,并接收接近光传感器上传的传感参数。可选地,传感器驱动还可以包括加速度传感器等,本申请不做限定。
可以理解的是,图2示出的软件结构中的层以及各层中包含的部件,并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的层,以及每个层中可以包括更多或更少的部件,本申请不做限定。
图3为示例性示出的用户界面示意图,请参照图3,显示界面300中包括聊天应用界面310。其中聊天应用界面中包括但不限于聊天框和输入框等。其中,聊天框用于显示接收和发送的聊天消息。在本申请实施例中,聊天消息包括但不限于:语音聊天消息311(以下简称语音消息)和文字聊天消息(简称文字消息)等。
如图3所示,用户可点击语音消息311,以指示手机播放语音消息。需要说明的是,本申请实施例中仅以电子设备为手机为例进行说明,在其他实施例中,电子设备还可以是可穿戴设备(例如智能手表或手环)、平板等,本申请不做限定。
示例性的,如图4a所示的场景示意图,在本申请实施例中,用户在点击语音消息311时,手机通常是处于远离状态。在本申请实施例中,远离状态可以理解为是非耳边接听状态。在图4a所示的场景中,用户若需要点击语音消息311,其姿势通常是单手(可以是左手或右手)持手机,且用户可观测到手机的显示界面,以点击语音消息311。在本申请实施例中,用户点击语音消息311之后,手机(具体为聊天应用)响应于接收到的用户操作,通过扬声器播放语音消息311。也可以理解为,在远离状态下,手机通过扬声器播放语音消息311中的音频内容,并且,显示界面300处于亮屏状态,即显示如图3所示的界面。
在本申请实施例中,与远离状态对应的为接近状态,也可以理解为是耳边接听状态。如图4b所示为耳边接听的场景示意图,在图4b中,用户可从图4a所示的手持手机的远离状态,向用户的耳侧移动,使得手机的听筒贴近用户耳侧,即为接近状态。在该场景中,手机检测到当前处于接近状态,则可启动防误触流程,即通过听筒播放语音消息,并熄屏(即显示界面300变黑)。在本申请实施例中,手机熄屏后,触摸事件被锁定,即触摸传感器不再检测触摸事件,以防止误触。
结合图4a和图4b所示的场景，图4c~图4d为示例性示出的模块交互示意图。请参照图4c，在接听语音消息时，聊天应用会注册监听接近光事件。具体的，聊天应用可向Sensorservice发送注册监听消息。Sensorservice接收到注册监听消息，向SensorHal发送activate（触发）信息。SensorHal响应于接收到的activate消息，指示传感器驱动（具体可以是接近光驱动）触发（即启动）接近光传感器。接近光传感器检测接近光，具体检测方式可参照已有技术实施例，本申请不做限定。
在本申请实施例中,接近光传感器可以持续向接近光驱动上报检测到的传感器参数,接近光驱动可基于接收到的传感器参数确定对应的接近光事件,本申请不做限定。其中,接近光事件包括但不限于:接近事件和远离事件。可选地,传感器驱动可以在确定发生接近光事件切换(例如从接近事件切换到远离事件,或远离事件切换到接近事件,本申请不做限定)的时候,向接近光驱动上报相关事件。
可选地,接近光传感器也可以基于检测到的参数(可以称为传感器参数或接近光参数),向传感器驱动上报接近光事件。其中,接近光事件包括但不限于:接近事件和远离事件。可选地,接近光传感器可以仅在发生切换事件的时候,向接近光驱动上报相关事件。
示例性的，传感器驱动确定发生接近光事件（包括远离事件或接近事件）后，向SensorHal上报接近光事件。SensorHal响应于接收到的接近光事件，向Sensorservice发送接近光事件。Sensorservice将接近光事件发送给聊天应用。
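为便于理解上述注册监听与事件上报的流程，下面给出一个基于Android公开的SensorManager接口的示意性代码片段。该片段仅为便于理解的假设性示例，并非本申请中聊天应用、Sensorservice、SensorHal等模块的实际实现，其中类名ProximityWatcher以及对接近/远离的判定方式均为示例性假设：

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// 示意性示例：应用注册监听接近光事件（假设性代码，仅用于说明注册与上报的大致流程）
public class ProximityWatcher implements SensorEventListener {
    private final SensorManager sensorManager;
    private final Sensor proximity;

    public ProximityWatcher(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
    }

    // 对应“注册监听接近光事件”：注册后，框架层会触发（activate）接近光传感器
    public void start() {
        sensorManager.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
    }

    // 播放结束后注销监听
    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // 接近光传感器被遮挡时上报的距离值较小，此处以小于最大量程近似判定为接近状态
        boolean near = event.values[0] < proximity.getMaximumRange();
        // 接近/远离状态的后续处理（切换听筒或扬声器、亮灭屏）由应用的防误触流程完成，此处从略
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
    }
}

示例性的，应用可在开始播放语音消息时调用start()注册监听，在语音消息播放结束后调用stop()注销监听。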
在本申请实施例中，若用户拿起手机接听，则聊天应用接收到接近光驱动通过其它模块上报的接近事件，也可以理解为是接近光驱动上报接近状态。聊天应用响应于上报的接近状态（即接近事件）启动防误触流程。如图4d所示，示例性的，聊天应用向PowerManager（电源管理）模块发送指示消息，用于指示启动灭屏流程。PowerManager模块向硬件合成器（Hardware Composer，HWC）发送灭屏指令。HWC将灭屏指令发送给显示驱动。显示驱动响应于接收到的灭屏指令，控制显示屏熄屏。
示例性的,聊天应用同时向Audioservice发送听筒播放指令,用于指示调用听筒播放当前语音消息。Audioservice向AudioHal发送听筒播放指令,AudioHal向音频驱动发送听筒播放指令。音频驱动响应于接收到的指令,调用听筒(即图1中所示的受话器)播放当前音频数据,即语音消息中的语音内容。
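下面给出一个使用Android公开接口近似实现上述防误触流程效果的示意性片段。该片段为假设性示例，并非图4c、图4d中PowerManager、HWC、显示驱动以及Audioservice、AudioHal、音频驱动之间的实际交互实现；其中类名PlaybackRouteController为假设，PROXIMITY_SCREEN_OFF_WAKE_LOCK仅用于类比说明接近时熄屏、远离时亮屏的效果：

import android.content.Context;
import android.media.AudioManager;
import android.os.PowerManager;

// 示意性示例：应用基于上报的接近光状态切换播放方式并控制亮灭屏（假设性代码）
public class PlaybackRouteController {
    private final AudioManager audioManager;
    private final PowerManager.WakeLock proximityLock;

    public PlaybackRouteController(Context context) {
        audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        // 持有该锁时，接近光为接近状态则熄屏，释放或远离后亮屏
        proximityLock = pm.newWakeLock(
                PowerManager.PROXIMITY_SCREEN_OFF_WAKE_LOCK, "demo:voiceMessage");
    }

    // 上报接近状态：启动防误触流程，切换到听筒播放并熄屏
    public void onNear() {
        audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);
        audioManager.setSpeakerphoneOn(false); // 路由到听筒
        if (!proximityLock.isHeld()) {
            proximityLock.acquire();
        }
    }

    // 上报远离状态：关闭防误触流程，切换回扬声器播放并亮屏
    public void onFar() {
        audioManager.setSpeakerphoneOn(true); // 路由到扬声器
        audioManager.setMode(AudioManager.MODE_NORMAL);
        if (proximityLock.isHeld()) {
            proximityLock.release();
        }
    }
}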
在图4b和图4c所示的流程中，聊天应用是否启动防误触流程均是依赖于接近光传感器的检测结果。在一些实施例中，接近光传感器可能检测错误，例如将接近状态误检测为远离状态。例如，用户的头发可能遮挡或吸收接近光传感器的检测信号，使得接近光传感器未能接收到反射回的检测信号。相应的，接近光传感器的检测结果将指示当前接近光状态为远离状态。其中，如上文所述，检测结果可能是接近光传感器确定的，也可能是接近光驱动确定的，本申请不做限定。在该异常场景中，由于接近光传感器误报，聊天应用将关闭防误触流程。具体的，仍参照图4c，聊天应用响应于接收到的远离状态，关闭防误触流程。聊天应用向PowerManager模块发送指示消息，用于指示关闭灭屏流程。PowerManager模块向HWC发送亮屏指令。HWC将亮屏指令发送给显示驱动。显示驱动响应于接收到的亮屏指令，控制显示屏亮屏，例如显示图3中的显示界面300。
示例性的,聊天应用同时向Audioservice发送扬声器播放指令,用于指示调用扬声器播放当前语音消息。Audioservice向AudioHal发送扬声器播放指令,AudioHal向音频驱动发送扬声器播放指令。音频驱动响应于接收到的指令,调用扬声器播放当前音频数据,即语音消息中的语音内容。
也就是说,当用户拿起手机,在耳边接听语音消息时,由于传感器驱动误报,聊天应用关闭防误触流程,使得手机通过扬声器播放音频,影响用户体验,并且,由于在耳边时已经亮屏,则用户可能会误触。
本申请实施例提供一种语音消息播放方法,可通过手势锁锁定接近光状态,以在算法检测到手机仍处于接近状态的情况下,校正接近光传感器误报的远离状态,以避免误报,提高接近光上报的准确性。
图5为示例性示出的语音消息播放方法的流程示意图,请参照图5,具体包括但不限于:
S501,播放语音消息。
示例性的,如图3所示,聊天应用接收到语音消息,并在聊天应用界面中显示语音消息311。用户可点击语音消息311,以触发聊天应用通过扬声器播放语音消息311。在本申请实施例中,均以默认扬声器播放语音消息为例进行说明。
聊天应用响应于接收到的用户操作,播放语音消息311。
S502,获取接近光状态。
示例性的,接近光驱动基于接近光传感器的检测结果,确定接近光状态,为区分于算法得到的接近光状态,本申请实施例中将接近光驱动从接近光传感器获取到的接近光状态称为实体接近光状态。
其中,接近光状态包括远离状态或接近状态。其中,接近光传感器被遮挡,则对应为接近状态。接近光传感器未被遮挡,则对应为远离状态。
S503,检测听筒是否打开。
示例性的,接近光驱动获取听筒状态,以检测听筒当前是否为打开状态。
一个示例中,如果听筒为打开状态,则执行S508。
另一个示例中,如果听筒为关闭状态,则执行S504。
S504,判断接近光状态是否为远离状态。
示例性的,接近光驱动判断获取到的实体接近光状态是否为远离状态。
一个示例中,如果为远离状态,则执行S505。
另一个示例中,如果为接近状态,则执行S506。
S505,判断状态值是否为true。
在本申请实施例中，手势辅助算法在听筒或者是扬声器打开之后，实时获取手机的ACC（加速度，acceleration）数据，并基于ACC数据计算状态值。其中，状态值用于指示是否为接近状态。在本申请实施例中，为区分于上文所述的实体接近光状态，算法所得的接近光状态称为算法接近光状态。
一个示例中,如果状态值为true,则表示手势辅助算法基于ACC数据得到算法接近光状态为接近状态。另一个示例中,如果状态值为false,则表示手势辅助算法基于ACC数据得到的算法接近光状态为远离状态。
在本申请实施例中,接近光驱动读取手势辅助算法的输出的状态值。一个示例中,如果状态值为true,则执行S506。另一个示例中,如果状态值为false,则执行S507。
S506,确定上报接近状态。
在本申请实施例中，实体接近光状态为远离状态，而算法接近光状态为接近状态，则说明实体接近光状态存在误报，例如上文所述的接近光传感器将接近状态误检测为远离状态。接近光驱动可基于算法接近光状态，对实体接近光状态进行校正，以上报正确的接近光状态。
S507,接近光状态上报。
示例性的,接近光驱动按照已确认的接近光状态,向聊天应用上报接近光状态。
S508,获取手势锁状态。
示例性的,手势辅助算法基于ACC数据,确定手势锁状态。在本申请实施例中,手势锁状态包括锁定状态和未锁定状态。其中,锁定状态用于锁定接近状态,也就是说,手势锁锁定的情况下,接近光驱动上报的接近光状态均为接近状态。未锁定状态下,接近光驱动向聊天应用上报的接近光状态即为实际检测到的接近光状态,即实体接近光状态,可以包括接近状态或远离状态。
示例性的,接近光驱动读取手势辅助算法输出的手势锁状态。
S509,判断手势锁是否为锁定状态。
一个示例中,如果手势锁状态为锁定状态,则执行S506。
另一个示例中,如果手势锁状态为非锁定状态,则执行S510。
S510,确定上报当前接近光状态。
示例性的,如上文所述,未锁定状态下,接近光驱动向聊天应用上报的接近光状态即为实际检测到的接近光状态,即实体接近光状态,可以包括接近状态或远离状态。
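结合图5所示的判定流程，下面给出一个简化的示意性实现，用于说明接近光驱动如何综合实体接近光状态、算法状态值、听筒状态与手势锁状态确定最终上报的接近光状态。该代码为假设性示例，并非实际驱动实现，枚举与方法命名均为便于说明而设：

// 示意性实现：图5所示接近光上报判定流程的简化版本（假设性代码）
public final class ProximityReportPolicy {

    public enum ProxState { NEAR, FAR }

    // sensorState：实体接近光状态（接近光传感器检测结果）
    // algoNear：手势辅助算法输出的状态值，true表示算法接近光状态为接近状态
    // receiverOpen：听筒是否为打开状态
    // gestureLocked：手势锁是否为锁定状态（锁定即锁定为接近状态）
    public static ProxState decide(ProxState sensorState, boolean algoNear,
                                   boolean receiverOpen, boolean gestureLocked) {
        if (receiverOpen) {
            // 对应S508/S509：听筒打开时依据手势锁判定
            return gestureLocked ? ProxState.NEAR : sensorState;
        }
        if (sensorState == ProxState.NEAR) {
            // 对应S504判断为否：实体状态为接近时直接上报接近
            return ProxState.NEAR;
        }
        // 对应S505/S506：实体状态为远离但算法状态值为true，视为误报并校正为接近
        return algoNear ? ProxState.NEAR : ProxState.FAR;
    }
}

例如，当听筒关闭、实体接近光状态为远离而算法状态值为true时，decide(...)返回NEAR，即对传感器误报进行校正后上报接近状态。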
下面以具体实施例对图5所示的流程进行详细说明。图6为示例性示出的语音消息播放方法的流程示意图,请参照图6,具体包括但不限于如下步骤:
S601,应用播放语音消息。
示例性的,如图3所示,聊天应用接收到语音消息,并在聊天应用界面中显示语音消息311。用户可点击语音消息311,以触发聊天应用通过扬声器播放语音消息311。在本申请实施例中,均以默认扬声器播放语音消息为例进行说明。
示例性的,聊天应用响应于接收到的用户点击语音消息311的操作,播放语音消息。
S602a,应用向AudioHal发送打开扬声器指令。
示例性的,聊天应用确定需要播放语音消息,如图4c所示,聊天应用向Audioservice发送扬声器播放指令,用于指示调用扬声器播放当前语音消息。Audioservice向AudioHal发送扬声器播放指令,AudioHal向音频驱动发送扬声器播放指令。音频驱动响应于接收到的指令,调用扬声器播放当前音频数据,即语音消息中的语音内容。其中,显示屏处于亮屏状态。
S602b,AudioHal向Sensorevent发送写入扬声器打开状态指令。
示例性的,AudioHal指示扬声器打开之后,可确认扬声器当前为打开状态。AudioHal向Sensorevent发送写入扬声器打开状态指令,以指示Sensorevent记录扬声器当前为打开状态。
S602c，Sensorevent记录扬声器打开状态。
示例性的,Sensorevent响应于接收到的写入扬声器打开状态指令,记录扬声器当前为打开状态。例如,Sensorevent可设置有对应于扬声器状态的标记位,其中“1”表示打开,“0”表示关闭。Sensorevent接收到扬声器打开指令之前,扬声器状态的标记位为“0”,即为关闭状态。Sensorevent响应于接收到的写入扬声器打开状态指令,将标记位切换为“1”,以指示扬声器打开。
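与上述标记位的记录方式对应，下面给出一个示意性示例（假设性代码，仅说明以“1”表示打开、“0”表示关闭的记录思路，类名与方法名均为假设）：

// 示意性示例：以标记位记录扬声器与听筒的打开/关闭状态（假设性代码）
public class RouteStateFlags {
    private int speakerFlag = 0;  // 0表示关闭，1表示打开
    private int receiverFlag = 0; // 0表示关闭，1表示打开

    public void writeSpeakerState(boolean open)  { speakerFlag = open ? 1 : 0; }
    public void writeReceiverState(boolean open) { receiverFlag = open ? 1 : 0; }

    public boolean isSpeakerOpen()  { return speakerFlag == 1; }
    public boolean isReceiverOpen() { return receiverFlag == 1; }
}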
S602d，SensorHal读取扬声器打开状态。
示例性的，SensorHal实时读取Sensorevent中的数据，以获取听筒与扬声器的状态（包括打开或关闭）。在本实例中，Sensorevent记录扬声器打开状态之后，SensorHal可监测到Sensorevent中记录的扬声器状态切换，即从关闭状态切换为打开状态。
S603,SensorHal向手势辅助算法发送指示信息。
示例性的，SensorHal检测到扬声器状态从关闭切换到打开之后，向手势辅助算法发送指示信息，指示信息中包括扬声器打开状态信息，用于指示扬声器打开。
需要说明的是,在本申请实施例中,SensorHal在监测到扬声器和/或听筒的状态切换的情况下,触发指示信息。也就是说,SensorHal当前发送指示信息之后,扬声器与听筒的状态无变化的情况下,例如听筒当前一直处于关闭状态,扬声器一直处于打开状态,SensorHal不会向手势辅助算法发送指示信息。
S604,手势辅助算法记录扬声器打开状态。
示例性的,手势辅助算法响应于接收到的指示信息,确定扬声器当前为打开状态。手势辅助算法可记录扬声器状态为打开状态。可选地,手势辅助算法也可以设置标志位,用于记录扬声器状态以及听筒状态。记录方式可以参照Sensorevent,此处不再赘述。
S605,手势辅助算法基于ACC数据,输出状态值。
示例性的,图7为示例性示出的手势辅助算法的状态值计算流程,请参照图7,具体包括但不限于:
S701，获取ACC数据。
示例性的,手势辅助算法可向传感器驱动发送请求信息,以请求ACC数据。传感器驱动响应于手势辅助算法的请求,获取加速度传感器以及陀螺仪传感器等的检测数据。可选地,加速度传感器与陀螺仪传感器可以是一直开启的,也可以是响应于传感器驱动的调用再启动的,本申请不做限定。
S702,计算pitch(俯仰角)、roll(翻滚角)和handupJudge(抬手判断)数值。
示例性的，pitch（俯仰角）、roll（翻滚角）可用于描述手机的姿态。handupJudge数值是用于描述手机从动态到静态或从静态到动态变化过程的变量。具体计算方式可参照已有技术，本申请不再赘述。
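由于pitch、roll与handupJudge的具体计算方式可参照已有技术，下面仅给出一种常见的基于加速度数据的计算方式作为示意性示例。该实现为假设性示例，其中的公式以及handupJudge的近似方式均非本申请限定的计算方式：

// 示意性示例：基于加速度数据计算pitch、roll与handupJudge的一种常见方式（假设性实现）
public final class AttitudeEstimator {

    // 俯仰角（弧度），ax、ay、az为三轴加速度
    public static double pitch(double ax, double ay, double az) {
        return Math.atan2(-ax, Math.sqrt(ay * ay + az * az));
    }

    // 翻滚角（弧度）
    public static double roll(double ay, double az) {
        return Math.atan2(ay, az);
    }

    // handupJudge的一种简化近似：累计一段时间内加速度模长的波动，
    // 波动大于0可近似表示手机经历了动态到静态（或静态到动态）的变化过程
    public static double handupJudge(double[] accNorms) {
        double judge = 0;
        for (int i = 1; i < accNorms.length; i++) {
            judge += Math.abs(accNorms[i] - accNorms[i - 1]);
        }
        return judge;
    }
}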
S703,判断pitch、roll和handupJudge是否满足阈值,且听筒或扬声器为打开状态。
示例性的,手势辅助算法基于计算出的pitch、roll和handupJudge,与各自对应的阈值范围进行比较。如果pitch、roll和handupJudge均满足阈值范围,例如,pitch和roll满足左手持手机对应的阈值范围或者是右手持手机对应的阈值范围,且,handupJudge大于0,则进一步判断听筒或扬声器是否为打开状态。如上文所述,手势辅助算法可记录有扬声器和/或听筒的状态,手势辅助算法可基于记录的状态,确定听筒或扬声器是否打开。如果上述条件均满足,则执行S704。如果任意条件不满足,则状态值输出false。
S704,判断上一次接近光上报状态是否为接近状态。
示例性的，手势辅助算法可读取接近光驱动上一次上报的接近光状态，即接近光驱动实际向聊天应用上报的接近光状态。
一个示例中,如果接近光上报状态为接近状态,状态值输出为true。
另一个示例中,如果接近光上报状态为远离状态,状态值输出为false。
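结合图7所示的流程，下面给出状态值计算的一个简化示意性实现。该代码为假设性示例，其中耳边接听对应的阈值范围仅为举例，实际取值需按器件与持机姿态标定确定：

// 示意性实现：图7所示状态值计算流程的简化版本（假设性代码，阈值仅为示例）
public final class GestureStateValue {

    public static boolean compute(double pitchDeg, double rollDeg, double handupJudge,
                                  boolean receiverOrSpeakerOpen, boolean lastReportedNear) {
        // 对应S703：姿态与运动参数需落入耳边接听对应的阈值范围，且听筒或扬声器为打开状态
        boolean postureMatched = inEarPostureRange(pitchDeg, rollDeg) && handupJudge > 0;
        if (!postureMatched || !receiverOrSpeakerOpen) {
            return false;
        }
        // 对应S704：上一次实际上报为接近状态时，状态值输出true
        return lastReportedNear;
    }

    // 假设性的阈值范围，用于近似左手或右手持机在耳边接听时的姿态
    private static boolean inEarPostureRange(double pitchDeg, double rollDeg) {
        double absPitch = Math.abs(pitchDeg);
        double absRoll = Math.abs(rollDeg);
        return absPitch > 20 && absPitch < 90 && absRoll > 20 && absRoll < 160;
    }
}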
仍参照图6,在该示例中,用户当前手持手机,即手机处于远离状态。相应的,手势辅助算法基于ACC数据,获取到的pitch、roll和handupJudge均不满足阈值,则状态值输出为false,用于指示算法接近光状态为远离状态。
S606a,应用向SensorHal注册监听接近光事件。
示例性的，应用执行S602a的同时（或者是不分先后），聊天应用向Sensorservice发送注册监听接近光请求。Sensorservice响应于接收到的注册监听接近光请求，向SensorHal发送activate消息。
S606b,SensorHal向接近光驱动发送触发接近光传感器指令。
示例性的，SensorHal响应于接收到的activate消息，向接近光驱动发送触发接近光传感器指令，用于指示传感器驱动（具体可以是接近光驱动）触发（即启动）接近光传感器。
S606c,接近光驱动获取接近光状态。
示例性的,如图4a所示,用户当前手持手机,即接近光驱动基于接近光传感器检测到的参数,确定手机处于远离状态。
S607,接近光驱动读取听筒状态。
示例性的,如上文所述,手势辅助记录有听筒与扬声器的状态。手势辅助记录的听筒当前的状态为关闭状态。也就是说,手势辅助算法默认记录扬声器与听筒的状态均是关闭的。如S604,手势辅助算法记录扬声器为打开状态,其中,听筒状态仍然为关闭。
接近光驱动可读取手势辅助算法记录的听筒与扬声器的状态,并获取到当前听筒状态。
S608,接近光驱动判断听筒是否打开。
示例性的,接近光驱动基于从手势辅助算法读取到的听筒状态,判断听筒是否为打开状态。
如图5的S503,如果听筒打开,则执行S508,如果听筒关闭,则执行S504。
在本实例中,接近光驱动检测到听筒为关闭状态,执行S609。
S609,接近光驱动判断是否为远离状态。
示例性的,接近光驱动基于S606c获取到的实体接近光状态进行判断。如图5中的S504所示,如果判断为是,则执行S505,如果判断为否,则执行S506。
如图6所示,在该示例中,在S606c中接近光驱动获取到的实体接近光状态为远离状态,接近光驱动判断当前状态为远离状态(即判断为是),执行S610。
S610,接近光驱动从手势辅助算法读取状态值。
示例性的,图5中的流程走到S505,对应于图6中,接近光驱动从手势辅助算法读取状态值。需要说明的是,手势辅助算法在S603之后直至扬声器与听筒关闭之前,持续计算输出状态值。接近光驱动读取的即为手势辅助算法最新计算得到的输出状态值。可选地,手势辅助算法也可以设置计算周期,其计算周期可以与接近光驱动的读取周期相同,本申请不做限定。
S611,接近光驱动判断状态值是否为true。
示例性的,在图5的S505中,如果判断为是,则执行S506,如果判断为否,则执行S510。如图6所示,接近光驱动读取到状态值为false。相应的,图5中的流程将走到S510。
S612a,接近光驱动确定上报远离状态。
示例性的，如图5中的S510，接近光驱动将以当前的实体接近光状态进行上报。也就是说，算法接近光状态以及实体接近光状态均是远离状态。
S612b,接近光驱动向SensorHal上报远离状态。
S612c,SensorHal向应用上报远离状态。
示例性的，请参照图4c，接近光驱动确定向SensorHal上报远离状态。SensorHal向Sensorservice上报远离状态。Sensorservice将远离状态发送给聊天应用。
聊天应用确定当前为远离状态,则继续亮屏,并通过扬声器播放语音消息,即保持当前的语音消息播放方式。
示例性的,用户在播放语音过程中,持续手持手机处于远离状态,在该过程中,各模块循环执行图5中的流程。图8为示例性示出的语音消息播放方法的流程示意图,请参照图8,具体包括但不限于如下步骤:
S801,手势辅助算法基于ACC数据,输出状态值。
示例性的,如上文所述,手势辅助算法持续获取ACC数据,并计算对应的状态值。在该示例中,由于手机处于远离状态,则图7中的计算结果仍然为false。
S802,接近光驱动获取接近光状态。
示例性的,如图4a所示,用户当前手持手机,即接近光驱动基于接近光传感器检测到的参数,确定手机处于远离状态。
S803,接近光驱动读取听筒状态。
示例性的,如上文所述,手势辅助记录有听筒与扬声器的状态。手势辅助记录的听筒当前的状态为关闭状态。也就是说,手势辅助算法默认记录扬声器与听筒的状态均是关闭的。手势辅助算法记录扬声器为打开状态,其中,听筒状态仍然为关闭。
接近光驱动可读取手势辅助算法记录的听筒与扬声器的状态,并获取到当前听筒状态。
S804,接近光驱动判断听筒是否打开。
示例性的,接近光驱动基于从手势辅助算法读取到的听筒状态,判断听筒是否为打开状态。
如图5的S503,如果听筒打开,则执行S508,如果听筒关闭,则执行S504。
在本实例中,接近光驱动检测到听筒为关闭状态,执行S805。
S805,接近光驱动判断是否为远离状态。
示例性的,接近光驱动基于S802获取到的实体接近光状态进行判断。如图5中的S504所示,如果判断为是,则执行S505,如果判断为否,则执行S506。
如图8所示,在该示例中,在S802中接近光驱动获取到的实体接近光状态为远离状态,接近光驱动判断当前状态为远离状态,执行S806。
S806,接近光驱动从手势辅助算法读取状态值。
示例性的,图5中的流程走到S505,对应于图6中,接近光驱动从手势辅助算法读取状态值。需要说明的是,手势辅助算法在扬声器打开之后直至扬声器与听筒关闭之前,持续计算输出状态值。接近光驱动读取的即为手势辅助算法最新计算得到的输出状态值。可选地,手势辅助算法也可以设置计算周期,其计算周期可以与接近光驱动的读取周期相同,本申请不做限定。
S807,接近光驱动判断状态值是否为true。
示例性的,在图5的S505中,如果判断为是,则执行S506,如果判断为否,则执行S510。如图8所示,接近光驱动读取到状态值为false。相应的,图5中的流程将走到 S510。
S808a,接近光驱动确定上报远离状态。
示例性的，如图5中的S510，接近光驱动将以当前的实体接近光状态进行上报。也就是说，算法接近光状态以及实体接近光状态均是远离状态。
S808b,接近光驱动向SensorHal上报远离状态。
S808c,SensorHal向应用上报远离状态。
示例性的，请参照图4c，接近光驱动确定向SensorHal上报远离状态。SensorHal向Sensorservice上报远离状态。Sensorservice将远离状态发送给聊天应用。
聊天应用确定当前为远离状态,则继续亮屏,并通过扬声器播放语音消息,即保持当前的语音消息播放方式。
示例性的，以用户从图4a所示的接听方式切换为图4b所示的接听方式为例，即，用户将手机拿到耳边接听语音消息。图9a~图9b为示例性示出的语音消息播放方法的流程示意图，请参照图9a~图9b，具体包括但不限于：
S901,手势辅助算法基于ACC数据,输出状态值。
示例性的,如上文所述,手势辅助算法持续获取ACC数据,并计算对应的状态值。在该示例中,由于用户拿起手机至耳边接听,手势辅助算法执行S703时,判断为是,即各参数满足阈值,且扬声器为打开状态。
在该示例中,由于接近光驱动还未上报过接近状态,则S704执行时,接近光驱动判断上一次接近光驱动上报为远离状态,输出状态值仍然为false。
S902,接近光驱动获取接近光状态。
示例性的,用户将手机拿起到耳边,相应的,接近光驱动基于接近光传感器的检测结果,确定当前接近光状态为接近状态。
S903,接近光驱动读取听筒状态。
示例性的,如上文所述,手势辅助记录有听筒与扬声器的状态。手势辅助记录的听筒当前的状态为关闭状态。也就是说,手势辅助算法默认记录扬声器与听筒的状态均是关闭的。手势辅助算法记录扬声器为打开状态,其中,听筒状态仍然为关闭。
接近光驱动可读取手势辅助算法记录的听筒与扬声器的状态,并获取到当前听筒状态。
S904,接近光驱动判断听筒是否打开。
示例性的,接近光驱动基于从手势辅助算法读取到的听筒状态,判断听筒是否为打开状态。
如图5的S503,如果听筒打开,则执行S508,如果听筒关闭,则执行S504。
在本实例中,接近光驱动检测到听筒为关闭状态,执行S905。
S905，接近光驱动判断是否为远离状态。
示例性的,接近光驱动基于S902获取到的实体接近光状态进行判断。如图5中的S504所示,如果判断为是,则执行S505,如果判断为否,则执行S506。
如图9a所示，在该示例中，在S902中接近光驱动获取到的实体接近光状态为接近状态，接近光驱动判断当前状态不为远离状态（即判断为否），执行S906a。即，对应于图5，流程将走到S506。
S906a,接近光驱动确定上报接近状态。
示例性的，接近光驱动在确定当前为接近状态的情况下，无需再读取算法接近光状态，可直接上报接近状态。
S906b,接近光驱动向SensorHal上报接近状态。
S906c,SensorHal向应用上报接近状态。
示例性的，请参照图4c，接近光驱动确定向SensorHal上报接近状态。SensorHal向Sensorservice上报接近状态。Sensorservice将接近状态发送给聊天应用。
S907a,应用向AudioHal发送打开听筒指令。
示例性的，如图4d所示，聊天应用向Audioservice发送听筒播放指令，用于指示调用听筒播放当前语音消息。Audioservice向AudioHal发送听筒播放指令，AudioHal向音频驱动发送听筒播放指令。音频驱动响应于接收到的指令，调用听筒（即图1中所示的受话器）播放当前音频数据，即语音消息中的语音内容。
同时，聊天应用向PowerManager（电源管理）模块发送指示消息，用于指示启动灭屏流程。PowerManager模块向硬件合成器（Hardware Composer，HWC）发送灭屏指令。HWC将灭屏指令发送给显示驱动。显示驱动响应于接收到的灭屏指令，控制显示屏熄屏。
S907b,AudioHal向Sensorevent发送写入听筒打开状态指令。
示例性的,AudioHal指示听筒打开之后,可确认听筒当前为打开状态。AudioHal向Sensorevent发送写入听筒打开状态指令,以指示Sensorevent记录听筒当前为打开状态。需要说明的是,在未接收到扬声器关闭指令之前,Sensorevent以及手势辅助算法记录的扬声器状态仍然为打开状态。
S907c，Sensorevent记录听筒打开状态。
示例性的，Sensorevent响应于接收到的写入听筒打开状态指令，记录听筒当前为打开状态。例如，Sensorevent可设置有对应于听筒状态的标记位，其中“1”表示打开，“0”表示关闭。Sensorevent接收到听筒打开指令之前，听筒状态的标记位为“0”，即为关闭状态。Sensorevent响应于接收到的写入听筒打开状态指令，将标记位切换为“1”，以指示听筒打开。
S908,SensorHal读取听筒打开状态。
示例性的，SensorHal实时读取Sensorevent中的数据，以获取听筒与扬声器的状态（包括打开或关闭）。在本实例中，Sensorevent记录听筒打开状态之后，SensorHal可监测到Sensorevent中记录的听筒状态切换，即从关闭状态切换为打开状态。
S909,SensorHal向手势辅助算法发送指示信息。
示例性的，SensorHal检测到听筒状态从关闭切换到打开之后，向手势辅助算法发送指示信息，指示信息中包括听筒打开状态信息，用于指示听筒为打开状态。
需要说明的是,在本申请实施例中,SensorHal在监测到扬声器和/或听筒的状态切换的情况下,触发指示信息。也就是说,SensorHal当前发送指示信息之后,扬声器与听筒 的状态无变化的情况下,例如听筒当前一直处于关闭状态,扬声器一直处于打开状态,SensorHal不会向手势辅助算法发送指示信息。
S910,手势辅助算法记录听筒打开状态。
示例性的,手势辅助算法响应于接收到的指示信息,确定听筒当前为打开状态。手势辅助算法可记录听筒状态为打开状态。可选地,手势辅助算法也可以设置标志位,用于记录扬声器状态以及听筒状态。记录方式可以参照Sensorevent,此处不再赘述。
S911a,手势辅助算法基于ACC数据,输出状态值。
仍参照图7,示例性的,如上文所述,手势辅助算法持续获取ACC数据,并计算对应的状态值。在该示例中,由于用户拿起手机至耳边接听,手势辅助算法执行S703时,判断为是,即各参数满足阈值,且扬声器为打开状态。
在该示例中,由于接近光驱动已上报过接近状态(即S906a~S906c),则S704执行时,接近光驱动判断上一次接近光驱动上报为接近状态,输出状态值为true,即,手势辅助算法计算出的算法接近光状态为接近状态。
S911b,手势辅助算法基于ACC数据,输出手势锁状态。
示例性的,手势辅助算法在扬声器打开之后,持续获取ACC数据,在上文所述的流程中,手势辅助算法可以仅计算状态值。可选地,手势辅助算法也可以同时计算状态值和手势锁,本申请不做限定。
在本实例中,手势辅助算法在检测到听筒打开之后,可以开始计算手势锁状态值。可选地,手势辅助算法也可以在接近光驱动需要读取手势锁状态值(即执行S915时)再计算手势锁状态值,本申请不做限定。
图10为示例性示出的手势锁计算流程示意图,请参照图10,具体包括但不限于如下步骤:
S1001,获取ACC数据。
S1002,计算pitch、roll和handupJudge数值。
示例性的,如上文所述,手势辅助算法实时获取ACC数据,并计算对应的pitch、roll和handupJudge数值。
一种可能的实现方式中，手势辅助算法可基于pitch、roll和handupJudge，输出状态值以及手势锁状态值。
另一种可能的实现方式中，手势辅助算法可以在接近光驱动调用时，再基于pitch、roll和handupJudge数值的计算结果，执行相应的判断流程，以输出状态值和/或手势锁状态值。
在本申请实施例中,以手势辅助算法在检测到听筒打开之后,持续获取手势锁状态为例进行说明。
S1003,判断当前接近光是否为接近状态。
示例性的，手势辅助算法获取接近光驱动获取到的实体接近光状态，并判断其是否为接近状态。
一个示例中,如果为接近状态,则执行S1004。
另一个示例中,如果为远离状态,则执行S1005。
S1004,判断pitch、roll和handupJudge是否满足阈值。
具体流程可参照S703,此处不再赘述。
一个示例中,如果pitch、roll和handupJudge满足阈值,则手势锁锁定,即输出的手势锁状态为锁定。
另一个示例中,如果pitch、roll和handupJudge不满足阈值,则手势锁未锁定,即输出的手势锁状态为未锁定。
S1005,判断状态值是否为true。
示例性的,手势辅助算法可执行图7中的流程,以获取状态值。
一个示例中,如果状态值为true,则手势锁锁定,即输出的手势锁状态为锁定。
另一个示例中,如果状态值为false,则执行S1006。
S1006,判断当前是否为锁定状态。
示例性的,手势辅助算法检测手势锁当前(即上一次手势锁判断流程的输出结果)是否为锁定状态。
一个示例中，如果判断为是，则执行S1007。
另一个示例中,如果判断为否,即手势锁未锁定,则继续保持未锁定状态。
S1007,判断pitch和roll是否满足阈值。
一个示例中,如果pitch、roll满足阈值,则手势锁锁定,即输出的手势锁状态为锁定。
另一个示例中,如果pitch、roll不满足阈值,则手势锁未锁定,即输出的手势锁状态为未锁定。
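结合图10所示的流程，下面给出手势锁判定的一个简化示意性实现。该代码为假设性示例，将S1004与S1007中的姿态阈值判断合并为同一个参数postureMatched（其中S1007实际可仅判断pitch与roll），仅用于说明锁定与解除锁定的条件：

// 示意性实现：图10所示手势锁判定流程的简化版本（假设性代码）
public final class GestureLock {
    private boolean locked = false;

    // sensorNear：接近光驱动当前获取到的实体接近光状态是否为接近状态（对应S1003）
    // postureMatched：pitch、roll（以及handupJudge）是否满足耳边接听阈值
    // stateValue：图7流程输出的状态值
    public boolean update(boolean sensorNear, boolean postureMatched, boolean stateValue) {
        if (sensorNear) {
            // 对应S1004：当前为接近状态时，姿态满足阈值则锁定，否则未锁定
            locked = postureMatched;
        } else if (stateValue) {
            // 对应S1005：状态值为true则锁定
            locked = true;
        } else if (locked) {
            // 对应S1006/S1007：已锁定时，仅当姿态不再满足阈值才解除锁定
            locked = postureMatched;
        }
        // 当前未锁定且以上条件均不满足时，保持未锁定状态
        return locked;
    }

    public boolean isLocked() {
        return locked;
    }
}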
请继续参照图9a,在该示例中,用户拿起手机在耳边接听,手势辅助算法执行图10所示的流程,具体的,手势辅助算法执行S1003,确定当前接近光状态为接近状态,则执行S1004。如上文所述,手势辅助算法是实时获取并计算pitch、roll和handupJudge数值的,相应的,用户抬手之后,手势辅助算法可基于已获取(即听筒打开之前即获取到的)的pitch、roll和handupJudge,确定满足耳边接听的条件,即S1004判断为是,手势锁锁定,即将接近光状态锁定为接近状态。其中,用户拿起手机在耳边接听的情况下,pitch、roll会较之之前的数值发生变化,满足右手接听阈值范围或左手接听阈值范围。并且,由于用户将手机从远离状态拿到耳边,手机从远离位置移动到耳边,handupJudge数值大于0,即满足预设阈值。
S912,接近光驱动获取接近光状态。
示例性的，如图4b所示，用户当前手持手机在耳边接听，即接近光驱动基于接近光传感器检测到的参数，确定手机处于接近状态。
S913,接近光驱动从手势辅助算法读取听筒状态。
示例性的，如上文所述，手势辅助算法记录有听筒与扬声器的状态。在S910中，手势辅助算法已记录听筒为打开状态，并且，扬声器状态仍为打开状态。
接近光驱动可读取手势辅助算法记录的听筒与扬声器的状态，并获取到当前听筒状态。
S914,接近光驱动判断听筒是否打开。
示例性的,接近光驱动基于从手势辅助算法读取到的听筒状态,判断听筒是否为打开状态。
如图5的S503,如果听筒打开,则执行S508,如果听筒关闭,则执行S504。
在本实例中，接近光驱动检测到听筒为打开状态，执行S915。对应图5中的流程走到S508。
S915,接近光驱动读取手势锁状态值。
示例性的,对应图5中的S508,如果判断手势锁状态为锁定状态,则执行S506,如果手势锁状态为未锁定状态,则执行S510。
请继续参照图9b,示例性的,如上文所述,手势辅助算法经过图10所示的流程,输出手势锁状态为锁定状态,即接近光锁定为接近状态。
S916a,接近光驱动确定上报接近状态。
示例性的,如上文所述,手势锁锁定状态下,接近光状态锁定为接近状态。也就是说,无论接近光驱动获取到的实体接近光状态为接近状态或者是远离状态,接近光驱动向聊天应用上报的接近光状态均为接近状态。
S916b,接近光驱动向SensorHal上报接近状态。
S916c,SensorHal向应用上报接近状态。
示例性的，请参照图4c，接近光驱动确定向SensorHal上报接近状态。SensorHal向Sensorservice上报接近状态。Sensorservice将接近状态发送给聊天应用。
在用户耳边接听语音消息的过程中,各模块重复执行S911a~S916c,使得手机持续处于灭屏状态,且使用听筒播放语音消息。
在本申请实施例中,用户将手机拿到耳边接听之后,手机进入防误触模式,即灭屏,并使用听筒播放语音消息。在一些场景中,如果用户从耳边放下手机,则手机将关闭防误触模式,即亮屏,并使用扬声器继续播放语音消息。图11为示例性示出的语音播放方法的流程示意图,请参照图11,具体包括但不限于:
S1101a,手势辅助算法基于ACC数据,输出状态值。
示例性的,用户将手机拿开耳边之后,手势辅助算法计算出的pitch、roll和handupJudge将不满足阈值,相应的,在图7所示的流程中,手势辅助算法输出false。
S1101b，手势辅助算法基于ACC数据，输出手势锁状态。
示例性的，在图10所示的流程中，手势辅助算法执行S1003。受器件性能或者接近光驱动上报周期的影响，用户从耳边拿开手机后，接近光驱动可能已经获取到远离状态。相应的，手势辅助算法判断当前接近光状态为远离状态，执行S1005。示例性的，手势辅助算法获取状态值，如上文所述，状态值判断为false。手势辅助算法继续执行S1006。手势辅助算法当前为锁定状态，则执行S1007。相应的，手势辅助算法基于pitch、roll和handupJudge，判断不满足阈值，则解除锁定状态，即输出手势锁状态为未锁定状态。
S1102，接近光驱动获取接近光状态。
示例性的,接近光驱动获取到实体接近光状态为远离状态。
S1103，接近光驱动从手势辅助算法读取听筒状态。
示例性的,手势辅助记录的当前听筒状态为打开状态。接近光驱动读取到听筒为打开状态。
S1104,接近光驱动判断听筒是否打开。
S1105,接近光驱动从手势辅助算法读取手势锁状态。
S1106a,接近光驱动确定上报远离状态。
示例性的，在图5所示的流程中，接近光驱动确定听筒打开，且获取到手势锁的状态为未锁定。流程走到S510，接近光驱动可确定上报当前获取到的实体接近光状态，即为远离状态。
S1106b,接近光驱动向SensorHal上报远离状态。
S1106c,SensorHal向应用上报远离状态。
相应的，聊天应用响应于接收到的远离状态，可从图6的S602a开始执行，即打开扬声器，以使用扬声器继续播放语音消息，且亮屏。
示例性的,如上文所述,接近光驱动可能发生异常上报,即,用户在耳边接听语音消息时,接近光传感器误检测到远离状态。在该场景中,接近光驱动可基于手势辅助算法的手势锁状态,以校正接近光驱动的实体接近光检测结果,以向聊天应用上报正确的接近状态,避免误报。图12为示例性示出的语音消息播放流程示意图,请参照图12,具体包括但不限于如下步骤:
S1201a,手势辅助算法基于ACC数据,输出状态值。
示例性的,手势辅助算法执行图7中的流程,在该示例中,手势辅助算法基于计算出的pitch、roll和handupJudge数值,判断满足阈值,且听筒和扬声器为打开状态。手势辅助算法判断为是,执行S704。在该示例中,由于接近光驱动本次是误报,而上一次上报的仍然为接近状态,相应的,手势辅助算法判断上一次上报的接近光状态为接近状态,判断为是,输出状态值为true。
S1201b,手势辅助算法基于ACC数据,输出手势锁状态。
示例性的,手势辅助算法基于ACC数据,执行图10中的流程。请参照图10,手势辅助算法执行S1003。一个示例中,S1202可能是在S1201b执行之前执行的,即,接近光驱动已经获取到当前实体接近状态为远离状态(即为误检测的状态),相应的,手势辅助算法执行S1003,判断为否,继续执行S1005。手势辅助算法可获取到输出状态值(即S1201a中计算的数值)为true。手势辅助算法确定手势锁锁定。
另一个示例中,由于器件性能或者是接近光驱动上报周期的影响,接近光驱动可能还未获取到实体接近光状态,则当前的接近光状态仍然是接近状态。相应的,手势辅助算法执行S1004,并基于pitch、roll,判断满足阈值,进一步确定手势锁锁定。
需要说明的是,在本申请实施例中,在手势锁锁定之后,也就是说,用户在耳边接听的过程中,由于手机的加速度不会发生变化,则handupJudge可能始终为0。相应的, 手势辅助算法在该场景中,在执行状态值的判断流程和手势锁状态的判断流程时,可以不考虑handupJudge。例如在执行S1004时,可以仅对pitch、roll进行判断,而无需再判断handupJudge。
S1202,接近光驱动获取接近光状态。
示例性的,在本实例中,以接近光驱动获取到异常状态,即实际上是接近状态,而接近光驱动获取到远离状态为例进行说明。
S1203,接近光驱动从手势辅助算法读取听筒状态。
示例性的,手势辅助记录的当前听筒状态为打开状态。接近光驱动读取到听筒为打开状态。
S1204,接近光驱动判断听筒是否打开。
S1205,接近光驱动从手势辅助算法读取手势锁状态。
S1206a,接近光驱动确定上报接近状态。
示例性的，在图5所示的流程中，接近光驱动确定听筒打开，且获取到手势锁的状态为锁定。流程走到S506，接近光驱动可确定上报接近状态。也就是说，接近光驱动虽然获取到的实体接近光状态为远离状态，但是由于手势锁将接近状态锁定，相应的，接近光驱动在检测到手势锁锁定的情况下，将原本要上报的远离状态更新为接近状态，从而实现对接近光状态误检测的及时修正，避免因误报导致手机仍在用户耳边时通过扬声器播放、且亮屏后可能发生误触的问题。
S1206b,接近光驱动向SensorHal上报接近状态。
S1206c,SensorHal向应用上报接近状态。
具体描述可参照上文中的相关内容,此处不再赘述。
在本申请实施例中,语音消息播放结束之后,聊天应用检测到语音消息播放结束,聊天应用可停止对扬声器或听筒的调用,即聊天应用可向AudioHal发送关闭扬声器和/或听筒指令。AudioHal响应于接收到的指令,指示音频驱动关闭听筒或扬声器。并且,AudioHal向Sensorevent发送写入扬声器和听筒关闭状态指示。Sensorevent响应于接收到的指示,记录听筒与扬声器的状态均为关闭状态。SensorHal从Sensorevent读取到扬声器和/或听筒的状态从打开状态切换为关闭状态,SensorHal向手势辅助算法发送扬声器关闭指示和/或听筒关闭指示。手势辅助算法可记录听筒和/或扬声器的状态为关闭状态。并且,手势辅助算法停止获取ACC数据,以降低系统功耗。
在本申请实施例中,扬声器打开之后,手势辅助算法即开始获取ACC数据,并计算电子设备的姿态数据(例如roll和pitch)与运动数据(例如handup judge),以提前准备手势锁状态计算时所需的数据。当用户将手机放到耳边接听时,接近光驱动检测到接近状态后上报聊天应用,聊天应用即可基于接近状态,启动防误触流程,即指示手机灭屏并切换到听筒播放语音消息。如上文所述,聊天应用调用听筒需要各模块之间进行交互,基于性能影响,接近光驱动可能还未获取到听筒打开状态。也就是说,在图5所示的流程中,流程继续走至S504,而此时听筒已经处于打开状态,即听筒播放语音消息,且屏幕处于灭屏状态。
在接近光驱动在接收到听筒打开消息之前,接近光驱动可继续获取手势辅助算法输出的状态值,此时,手势辅助算法基于姿态数据(即手机的姿态)和运动数据(即耳边接听的动态过程),已经判断出手机处于接近状态。即如图7所示,手势辅助算法执行S703判断为是,流程走到S704,其中,由于接近光驱动已经上报过接近状态,则该流程同样判断为是,相应的,输出的状态值为true,即指示算法接近光状态得到的结果为接近状态。在该示例中,如果实体接近光异常上报远离状态,即在图5中,流程将从S504走到S505。在执行S505时,接近光驱动基于手势辅助算法的输出状态值,可确定算法接近光状态为接近状态。相应的,接近光驱动将会上报接近状态,以实现对实体接近光状态的异常检测结果进行校正。
可以理解的是,电子设备为了实现上述功能,其包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
一个示例中，图13示出了本申请实施例的一种装置1300的示意性框图。装置1300可包括：处理器1301和收发器/收发管脚1302，可选地，还包括存储器1303。
装置1300的各个组件通过总线1304耦合在一起,其中总线1304除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图中将各种总线都称为总线1304。
可选地,存储器1303可以用于前述方法实施例中的指令。该处理器1301可用于执行存储器1303中的指令,并控制接收管脚接收信号,以及控制发送管脚发送信号。
装置1300可以是上述方法实施例中的电子设备或电子设备的芯片。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本实施例还提供一种计算机存储介质,该计算机存储介质中存储有计算机指令,当该计算机指令在电子设备上运行时,使得电子设备执行上述相关方法步骤实现上述实施例中的方法。
本实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的方法。
另外,本申请的实施例还提供一种装置,这个装置具体可以是芯片,组件或模块,该装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使芯片执行上述各方法实施例中的方法。
其中,本实施例提供的电子设备、计算机存储介质、计算机程序产品或芯片均用于 执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (24)

  1. 一种语音消息播放方法,其特征在于,应用于电子设备,所述方法包括:
    响应于接收到的对语音消息的点击操作,调用扬声器播放所述语音消息,并且,所述电子设备的显示屏处于亮屏状态;
    基于接近光传感器的检测结果,获取第一接近光状态;
    基于第一状态数据,获取第二接近光状态;其中,状态数据包括姿态数据和运动数据,所述姿态数据用于描述所述电子设备当前的姿态,所述运动数据用于描述所述电子设备当前的运动状态;
    检测到所述第一接近光状态与所述第二接近光状态均指示为远离状态,继续调用所述扬声器播放所述语音消息,并且,所述电子设备显示屏处于亮屏状态;
    在用户抬手将所述电子设备置于耳边接听所述语音消息的情况下,基于所述接近光传感器的检测结果,获取第三接近光状态;
    基于第二状态数据,获取第四接近光状态;
    检测到所述第三接近光状态指示为远离状态,且所述第四接近光状态指示为接近状态,确定所述第三接近光状态为异常状态,以及,基于所述第四接近光状态,确定所述电子设备处于耳边接听场景,调用听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在用户将所述电子设备持续置于耳边接听所述语音消息的过程中,基于所述接近光传感器的检测结果,获取第五接近光状态;
    基于第三状态数据,获取第六接近光状态;
    检测到所述第五接近光状态指示为远离状态,且所述第六接近光状态指示为接近状态,确定所述第五接近光状态为异常状态,以及,基于所述第六接近光状态,确定所述电子设备仍处于耳边接听场景,继续调用所述听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态。
  3. 根据权利要求1所述的方法,其特征在于,所述检测到所述第一接近光状态与所述第二接近光状态均指示为远离状态,继续调用所述扬声器播放所述语音消息,并且,所述电子设备显示屏处于亮屏状态,包括:
    检测所述听筒是否为打开状态;
    检测到所述听筒为关闭状态,判断所述第一接近光状态是否指示为远离状态;
    判定所述第一接近光状态指示为远离状态,判断所述第二接近光状态是否指示为接近状态;
    判定所述第二接近光状态指示为远离状态,确定所述电子设备当前的目标接近光状态为远离状态;
    基于所述目标接近光状态,继续调用所述扬声器播放所述语音消息,并且,所述电 子设备显示屏处于亮屏状态。
  4. 根据权利要求1所述的方法,其特征在于,所述检测到所述第三接近光状态指示为远离状态,且所述第四接近光状态指示为接近状态,确定所述第三接近光状态为异常状态,包括:
    检测所述听筒是否为打开状态;
    检测到所述听筒为关闭状态,判断所述第三接近光状态是否指示为远离状态;
    判定所述第三接近光状态指示为远离状态,判断所述第四接近光状态是否指示为接近状态;
    判定所述第四接近光状态指示为接近状态,基于所述第三接近光状态与所述第四接近光状态,确定所述第三接近光状态为异常状态;
    确定所述第四接近光状态为目标接近光状态。
  5. 根据权利要求4所述的方法,其特征在于,所述基于所述第四接近光状态,确定所述电子设备处于耳边接听场景,调用听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态,包括:
    基于所述目标接近光状态,确定所述电子设备处于耳边接听场景,调用听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态。
  6. 根据权利要求2所述的方法,其特征在于,所述检测到所述第五接近光状态指示为远离状态,且所述第六接近光状态指示为接近状态,确定所述第五接近光状态为异常状态,包括:
    检测所述听筒是否为打开状态;
    检测到所述听筒为打开状态,判断所述第六接近光状态是否指示为接近状态;
    判定所述第六接近光指示为接近状态,基于所述第五接近光状态与所述第六接近光状态,确定所述第五接近光状态为异常状态;
    确定所述第六接近光状态为目标接近光状态。
  7. 根据权利要求6所述的方法,其特征在于,所述基于所述第六接近光状态,确定所述电子设备仍处于耳边接听场景,继续调用所述听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态,包括:
    基于所述目标接近光状态,确定所述电子设备处于耳边接听场景,继续调用所述听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态。
  8. 一种电子设备,其特征在于,包括:
    一个或多个处理器、存储器;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序存储在所述存储器上,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    响应于接收到的对语音消息的点击操作,调用扬声器播放所述语音消息,并且,所述电子设备的显示屏处于亮屏状态;
    基于接近光传感器的检测结果,获取第一接近光状态;
    基于第一状态数据,获取第二接近光状态;其中,状态数据包括姿态数据和运动数据,所述姿态数据用于描述所述电子设备当前的姿态,所述运动数据用于描述所述电子设备当前的运动状态;
    检测到所述第一接近光状态与所述第二接近光状态均指示为远离状态,继续调用所述扬声器播放所述语音消息,并且,所述电子设备显示屏处于亮屏状态;
    在用户抬手将所述电子设备置于耳边接听所述语音消息的情况下,基于所述接近光传感器的检测结果,获取第三接近光状态;
    基于第二状态数据,获取第四接近光状态;
    检测到所述第三接近光状态指示为远离状态,且所述第四接近光状态指示为接近状态,确定所述第三接近光状态为异常状态,以及,基于所述第四接近光状态,确定所述电子设备处于耳边接听场景,调用听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态。
  9. 根据权利要求8所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    在用户将所述电子设备持续置于耳边接听所述语音消息的过程中,基于所述接近光传感器的检测结果,获取第五接近光状态;
    基于第三状态数据,获取第六接近光状态;
    检测到所述第五接近光状态指示为远离状态,且所述第六接近光状态指示为接近状态,确定所述第五接近光状态为异常状态,以及,基于所述第六接近光状态,确定所述电子设备仍处于耳边接听场景,继续调用所述听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态。
  10. 根据权利要求8所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    检测所述听筒是否为打开状态;
    检测到所述听筒为关闭状态,判断所述第一接近光状态是否指示为远离状态;
    判定所述第一接近光状态指示为远离状态,判断所述第二接近光状态是否指示为接近状态;
    判定所述第二接近光状态指示为远离状态,确定所述电子设备当前的目标接近光状态为远离状态;
    基于所述目标接近光状态,继续调用所述扬声器播放所述语音消息,并且,所述电子设备显示屏处于亮屏状态。
  11. 根据权利要求8所述的电子设备,其特征在于,当所述计算机程序被所述一个或 多个处理器执行时,使得所述电子设备执行以下步骤:
    检测所述听筒是否为打开状态;
    检测到所述听筒为关闭状态,判断所述第三接近光状态是否指示为远离状态;
    判定所述第三接近光状态指示为远离状态,判断所述第四接近光状态是否指示为接近状态;
    判定所述第四接近光状态指示为接近状态,基于所述第三接近光状态与所述第四接近光状态,确定所述第三接近光状态为异常状态;
    确定所述第四接近光状态为目标接近光状态。
  12. 根据权利要求11所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    基于所述目标接近光状态,确定所述电子设备处于耳边接听场景,调用听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态。
  13. 根据权利要求9所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    检测所述听筒是否为打开状态;
    检测到所述听筒为打开状态,判断所述第六接近光状态是否指示为接近状态;
    判定所述第六接近光指示为接近状态,基于所述第五接近光状态与所述第六接近光状态,确定所述第五接近光状态为异常状态;
    确定所述第六接近光状态为目标接近光状态。
  14. 根据权利要求13所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    基于所述目标接近光状态,确定所述电子设备处于耳边接听场景,继续调用所述听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态。
  15. 一种计算机存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-7任一项所述的方法。
  16. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-7任一项所述的方法。
  17. 一种芯片,其特征在于,包括一个或多个接口电路和一个或多个处理器;所述接口电路用于从电子设备的存储器接收信号,并向所述处理器发送所述信号,所述信号包括存储器中存储的计算机指令;当所述处理器执行所述计算机指令时,使得所述电子设备执行权利要求1-7任一项所述的方法。
  18. 一种语音消息播放方法,其特征在于,应用于电子设备,所述方法包括:
    响应于接收到的对语音消息的点击操作,调用扬声器播放所述语音消息,并且,所述电子设备的显示屏处于亮屏状态;
    基于接近光传感器的检测结果,获取第一接近光状态;
    基于第一状态数据,获取第二接近光状态;其中,状态数据包括姿态数据和运动数据,所述姿态数据用于描述所述电子设备当前的姿态,所述运动数据用于描述所述电子设备当前的运动状态;
    检测到所述第一接近光状态与所述第二接近光状态均指示为远离状态,继续调用所述扬声器播放所述语音消息,并且,所述电子设备显示屏处于亮屏状态;
    在用户抬手将所述电子设备置于耳边接听所述语音消息的情况下,基于所述接近光传感器的检测结果,获取第三接近光状态;
    基于第二状态数据,将手势锁锁定为接近状态;
    检测到所述第三接近光状态指示为远离状态,且所述手势锁锁定为接近状态,确定所述第三接近光状态为异常状态,并确定所述电子设备处于耳边接听场景,调用听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态;
    在用户将所述电子设备持续置于耳边接听所述语音消息的过程中,基于所述接近光传感器的检测结果,获取第七接近光状态;
    基于第四状态数据,将手势锁锁定为接近状态;
    基于所述第七接近光状态和所述手势锁的锁定状态,确定第一目标接近光状态为接近状态;
    基于所述第一目标接近光状态,继续调用听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态;
    在所述用户将所述电子设备远离耳边的情况下,基于所述接近光传感器的检测结果,获取第八接近光状态,所述第八接近光状态指示为远离状态;
    基于第五状态数据,将所述手势锁更新为未锁定状态,其中,所述未锁定状态用于指示所述电子设备的目标接近光状态与所述接近光传感器的检测结果保持一致;
    基于所述手势锁的未锁定状态,确定所述第八接近光状态为第二目标接近光状态;
    基于所述第二目标接近光状态,确定所述电子设备处于非耳边接听场景,调用扬声器播放所述语音消息,并且,所述电子设备的显示屏处于亮屏状态。
  19. 根据权利要求18所述的方法,其特征在于,所述基于所述第一目标接近光状态,继续调用听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态之后,所述方法还包括:
    在用户将所述电子设备持续置于耳边接听所述语音消息的过程中,基于所述接近光传感器的检测结果,获取第五接近光状态;
    基于第三状态数据,确定所述手势锁保持锁定为接近状态;
    检测到所述第五接近光状态指示为远离状态,且所述手势锁锁定为接近状态,确定所述第五接近光状态为异常状态,并确定所述电子设备仍处于耳边接听场景,继续调用 所述听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态。
  20. 根据权利要求18所述的方法,其特征在于,所述检测到所述第一接近光状态与所述第二接近光状态均指示为远离状态,继续调用所述扬声器播放所述语音消息,并且,所述电子设备显示屏处于亮屏状态,包括:
    检测所述听筒是否为打开状态;
    检测到所述听筒为关闭状态,判断所述第一接近光状态是否指示为远离状态;
    判定所述第一接近光状态指示为远离状态,判断所述第二接近光状态是否指示为接近状态;
    判定所述第二接近光状态指示为远离状态,确定所述电子设备当前的目标接近光状态为远离状态;
    基于所述目标接近光状态,继续调用所述扬声器播放所述语音消息,并且,所述电子设备显示屏处于亮屏状态。
  21. 根据权利要求18所述的方法,其特征在于,所述检测到所述第三接近光状态指示为远离状态,且所述手势锁锁定为接近状态,确定所述第三接近光状态为异常状态,包括:
    检测所述听筒是否为打开状态;
    检测到所述听筒为关闭状态,判断所述第三接近光状态是否指示为远离状态;
    判定所述第三接近光状态指示为远离状态,基于所述第二状态数据,确定所述电子设备的状态值为第一状态值;
    基于所述第一状态值,确定所述手势锁锁定为接近状态;
    基于所述手势锁的锁定状态,确定所述电子设备的目标接近光状态为接近状态。
  22. 根据权利要求21所述的方法,其特征在于,所述确定所述电子设备处于耳边接听场景,调用听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态,包括:
    基于所述目标接近光状态,确定所述电子设备处于耳边接听场景,调用听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态。
  23. 根据权利要求19所述的方法,其特征在于,所述检测到所述第五接近光状态指示为远离状态,且所述手势锁锁定为接近状态,确定所述第五接近光状态为异常状态,包括:
    检测所述听筒是否为打开状态;
    检测到所述听筒为打开状态,判断所述手势锁是否为锁定状态;
    判定所述手势锁为锁定状态,确定所述电子设备的目标接近光状态为接近状态。
  24. 根据权利要求23所述的方法,其特征在于,所述确定所述电子设备仍处于耳边 接听场景,继续调用所述听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态,包括:
    基于所述目标接近光状态,确定所述电子设备处于耳边接听场景,继续调用所述听筒播放所述语音消息,并且,所述电子设备的显示屏处于熄屏状态。
PCT/CN2024/083255 2023-06-13 2024-03-22 语音消息播放方法及电子设备 Ceased WO2024255380A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP24738264.1A EP4503584A4 (en) 2023-06-13 2024-03-22 METHOD FOR PLAYING VOICE MESSAGES AND ELECTRONIC DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310697883.5A CN116489272B (zh) 2023-06-13 2023-06-13 语音消息播放方法及电子设备
CN202310697883.5 2023-06-13

Publications (2)

Publication Number Publication Date
WO2024255380A1 true WO2024255380A1 (zh) 2024-12-19
WO2024255380A9 WO2024255380A9 (zh) 2025-02-13

Family

ID=87215932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/083255 Ceased WO2024255380A1 (zh) 2023-06-13 2024-03-22 语音消息播放方法及电子设备

Country Status (3)

Country Link
EP (1) EP4503584A4 (zh)
CN (2) CN116489272B (zh)
WO (1) WO2024255380A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116489272B (zh) * 2023-06-13 2024-03-15 荣耀终端有限公司 语音消息播放方法及电子设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090186654A1 (en) * 2008-01-21 2009-07-23 Inventec Appliances Corp. Method of automatically playing text information in voice by an electronic device under strong light
CN103811033A (zh) * 2012-11-14 2014-05-21 北京新媒传信科技有限公司 一种控制语音播放模式的方法和装置
CN106941564A (zh) * 2017-03-16 2017-07-11 广东欧珀移动通信有限公司 语音消息的播放方法及移动终端
CN109547629A (zh) * 2018-11-07 2019-03-29 华为技术有限公司 一种接近光传感器的控制方法及电子设备
CN109995913A (zh) * 2019-01-30 2019-07-09 莱思特科技股份有限公司 自动播放语音消息的方法、智能型手机及计算机程序产品
CN111405110A (zh) * 2020-03-11 2020-07-10 Tcl移动通信科技(宁波)有限公司 屏幕控制方法、装置、存储介质及移动终端
CN116489272A (zh) * 2023-06-13 2023-07-25 荣耀终端有限公司 语音消息播放方法及电子设备

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050120390A (ko) * 2004-06-18 2005-12-22 엘지전자 주식회사 이동 통신 단말기의 거리 센서를 이용한 스피커폰 제어장치와 방법
JP2008187221A (ja) * 2007-01-26 2008-08-14 Fujitsu Ltd 携帯端末装置、その通話切替方法、その通話切替プログラム、及び通話切替プログラムを格納した記録媒体
JP5369937B2 (ja) * 2009-06-30 2013-12-18 富士通株式会社 電話機、通話制御方法及び通話制御プログラム
CN101964844A (zh) * 2010-09-26 2011-02-02 中兴通讯股份有限公司 一种手持通话设备中自动调节放音的方法和装置
CN103888596A (zh) * 2012-12-21 2014-06-25 腾讯科技(深圳)有限公司 一种终端中语音模式的切换方法、装置和终端
CN103369144B (zh) * 2013-07-11 2015-05-27 广东欧珀移动通信有限公司 结合加速度传感器来处理手机通话过程中误操作的方法
US20160044394A1 (en) * 2014-08-07 2016-02-11 Nxp B.V. Low-power environment monitoring and activation triggering for mobile devices through ultrasound echo analysis
CN106131312B (zh) * 2016-06-21 2020-01-10 Oppo广东移动通信有限公司 语音消息的播放方法、装置及移动终端
CN107454265B (zh) * 2017-08-11 2020-06-26 北京安云世纪科技有限公司 基于通话模式变化记录通话信息的方法及装置
CN108174038A (zh) * 2018-01-03 2018-06-15 上海传英信息技术有限公司 一种移动终端及控制其接听模式的方法
CN111345014B (zh) * 2018-07-17 2022-01-14 荣耀终端有限公司 一种终端
WO2022052065A1 (zh) * 2020-09-11 2022-03-17 深圳市汇顶科技股份有限公司 活体接近检测装置、电子设备及其方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090186654A1 (en) * 2008-01-21 2009-07-23 Inventec Appliances Corp. Method of automatically playing text information in voice by an electronic device under strong light
CN103811033A (zh) * 2012-11-14 2014-05-21 北京新媒传信科技有限公司 一种控制语音播放模式的方法和装置
CN106941564A (zh) * 2017-03-16 2017-07-11 广东欧珀移动通信有限公司 语音消息的播放方法及移动终端
CN109547629A (zh) * 2018-11-07 2019-03-29 华为技术有限公司 一种接近光传感器的控制方法及电子设备
CN109995913A (zh) * 2019-01-30 2019-07-09 莱思特科技股份有限公司 自动播放语音消息的方法、智能型手机及计算机程序产品
CN111405110A (zh) * 2020-03-11 2020-07-10 Tcl移动通信科技(宁波)有限公司 屏幕控制方法、装置、存储介质及移动终端
CN116489272A (zh) * 2023-06-13 2023-07-25 荣耀终端有限公司 语音消息播放方法及电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4503584A4

Also Published As

Publication number Publication date
CN116489272B (zh) 2024-03-15
WO2024255380A9 (zh) 2025-02-13
CN116489272A (zh) 2023-07-25
EP4503584A4 (en) 2025-06-04
EP4503584A1 (en) 2025-02-05
CN119135796A (zh) 2024-12-13

Similar Documents

Publication Publication Date Title
US12425509B2 (en) Full-screen display method for mobile terminal and device
US12095941B2 (en) Message notification method and electronic device
EP3800876B1 (en) Method for terminal to switch cameras, and terminal
CN113596242B (zh) 传感器调整方法、装置、电子设备和存储介质
WO2021017889A1 (zh) 一种应用于电子设备的视频通话的显示方法及相关装置
CN113573390B (zh) 天线功率调节方法、终端设备及存储介质
US12323546B2 (en) Communication service status control method, terminal device, and readable storage medium
WO2020000448A1 (zh) 一种柔性屏幕的显示方法及终端
US11889386B2 (en) Device searching method and electronic device
US12316952B2 (en) Photo preview method, electronic device, and storage medium
CN114115770A (zh) 显示控制的方法及相关装置
CN115967851A (zh) 快速拍照方法、电子设备及计算机可读存储介质
CN113641271A (zh) 应用窗口的管理方法、终端设备及计算机可读存储介质
CN114089932A (zh) 多屏显示方法、装置、终端设备及存储介质
US20250042253A1 (en) Display method, vehicle, and electronic device
WO2024066933A1 (zh) 扬声器控制方法及设备
WO2024255380A1 (zh) 语音消息播放方法及电子设备
CN113407300B (zh) 应用误杀评估方法及相关设备
CN117692693B (zh) 多屏显示方法、设备、程序产品以及存储介质
CN117707659B (zh) 息屏显示方法和终端设备
CN114173381B (zh) 数据传输方法和电子设备
CN114006976B (zh) 一种界面显示方法及终端设备
RU2782255C1 (ru) Способ управления частотой кадров записи и соответствующее устройство
WO2024255316A1 (zh) 卫星通话方法和终端设备
WO2024234782A1 (zh) 一种信息交互方法和头显设备

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2024738264

Country of ref document: EP

Effective date: 20240712

WWE Wipo information: entry into national phase

Ref document number: 18836159

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE