WO2024067645A1 - Processing method and related apparatus - Google Patents

Processing method and related apparatus

Info

Publication number
WO2024067645A1
WO2024067645A1 · PCT/CN2023/121776
Authority
WO
WIPO (PCT)
Prior art keywords
audio
control unit
micro control
application processor
electronic device
Prior art date
Application number
PCT/CN2023/121776
Other languages
English (en)
French (fr)
Inventor
谭飏
张博
房帅磊
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2024067645A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • G06F1/3287Power saving characterised by the action undertaken by switching off individual functional units in the computer system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output

Definitions

  • the present application relates to the field of terminal technology, and in particular to a processing method and related devices.
  • smart wearable devices are becoming more and more popular.
  • when smart wearable devices play audio, the application processor is kept awake for a long time, and its running power consumption is high. Therefore, how to reduce the power consumption of smart wearable devices and extend their battery life has become an urgent problem to be solved.
  • the present application provides a processing method and related devices, which realize audio playback through a microcontroller unit and save power consumption of electronic equipment.
  • the present application provides a processing method, which is applied to an electronic device, wherein the electronic device includes an application processor, a microcontroller unit, and a speaker; the method includes:
  • the application processor receives a first input, where the first input is used to trigger the electronic device to play a first audio using a first application;
  • the application processor sends a first message to the micro control unit in response to the first input;
  • if the micro control unit determines, based on the first message, that it supports playing the first audio, the micro control unit sends a second message to the application processor;
  • after the micro control unit sends the second message to the application processor, the micro control unit controls the speaker to play the first audio, and the application processor switches to a sleep state.
  • the micro control unit plays the first audio, saving power consumption of the electronic device.
  • the application processor switches to a sleep state, which can further save power consumption of the device.
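The handshake described above (first input → first message → second or third message → MCU playback while the AP sleeps) can be sketched as a small state model. This is a hypothetical Python sketch: the message names mirror the claims, but the classes, fields, and the supported-format set are assumptions, not the patent's actual interfaces.

```python
# Hypothetical model of the AP<->MCU handshake; message names follow the
# claims, but classes, fields, and the format set are assumptions.
from dataclasses import dataclass

@dataclass
class FirstMessage:
    audio_id: str
    audio_format: str            # e.g. "mp3" or "pcm"

class MicroControlUnit:
    SUPPORTED_FORMATS = {"pcm", "mp3"}   # assumed MCU decoder capabilities

    def handle_first_message(self, msg: FirstMessage) -> str:
        # Second message: the MCU supports playing the first audio.
        # Third message: it does not, so the AP must decode it.
        if msg.audio_format in self.SUPPORTED_FORMATS:
            return "second_message"
        return "third_message"

class ApplicationProcessor:
    def __init__(self, mcu: MicroControlUnit):
        self.mcu = mcu
        self.state = "awake"

    def on_first_input(self, audio_id: str, audio_format: str) -> str:
        # First input -> first message; the reply decides whether the AP sleeps.
        reply = self.mcu.handle_first_message(FirstMessage(audio_id, audio_format))
        if reply == "second_message":
            self.state = "sleep"   # MCU plays on its own; AP switches to sleep
        return reply
```

In this sketch the AP sleeps only on the supported path; the unsupported path keeps it awake for the fallback decode described next.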
  • the method further includes:
  • if the micro control unit determines, based on the first message, that it does not support playing the first audio, the micro control unit sends a third message to the application processor;
  • the application processor decodes the first audio in response to the third message to obtain the first data
  • the application processor sends the first data to the micro control unit
  • the micro control unit controls the speaker to play the first data.
  • when the micro control unit does not support playing the first audio, it can obtain the decoded first data from the application processor and play it.
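The fallback path can be sketched as follows: after the third message, the AP decodes the first audio into first data (raw samples) and streams it to the MCU, which only drives the speaker. The decoder here is a stand-in (a byte transform), not a real codec, and all names are invented for illustration.

```python
# Hypothetical fallback path: the MCU cannot decode the first audio, so the
# AP decodes and sends the resulting first data to the MCU for playback.
def ap_decode(first_audio: bytes) -> bytes:
    # Stand-in decoder; a real AP would run an MP3/AAC codec here.
    return bytes(b ^ 0xFF for b in first_audio)

class McuPlayer:
    def __init__(self):
        self.played = []

    def play_first_data(self, first_data: bytes):
        # The MCU drives the speaker directly with the decoded samples.
        self.played.append(first_data)

def fallback_play(first_audio: bytes, mcu: McuPlayer) -> bytes:
    first_data = ap_decode(first_audio)   # third message -> AP decodes
    mcu.play_first_data(first_data)       # AP sends first data to the MCU
    return first_data
```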
  • the first message includes a first audio format of the first audio; and the micro control unit determines that the micro control unit supports playing the first audio based on the first message, specifically including:
  • if the micro control unit determines, based on the first audio format, that it supports decoding the first audio in the first audio format, it is determined that the micro control unit supports playing the first audio.
  • the micro control unit can determine whether to support playing the audio based on the audio format of the first audio.
  • the micro control unit controls the speaker to play the first audio, and the application processor switches to a sleep state, specifically including:
  • the application processor sends the first audio to the micro control unit in response to the second message, wherein the second message indicates that the micro control unit supports decoding of the first audio;
  • after receiving the first audio, the micro control unit decodes the first audio to obtain the first data;
  • the micro control unit controls the speaker to play the first data
  • after sending the first audio, the application processor switches to a sleep state.
  • when it supports decoding the first audio, the micro control unit can obtain the first audio from the application processor and play it.
  • the application processor can enter a sleep state after sending the first audio to the micro control unit to save power consumption.
  • the first message includes an identifier of the first audio; and the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio, specifically including:
  • if the micro control unit determines, based on the identifier of the first audio, that the second audio corresponding to the identifier is stored, it is determined that the micro control unit supports playing the first audio.
  • since the microcontroller stores the audio decoded by the application processor, when the microcontroller plays the audio again it can directly play the stored audio data without the application processor decoding it again, saving the computing resources of the application processor and saving device power consumption.
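The identifier-based support check above amounts to a cache lookup on the MCU: stored decoded audio (the "second audio") is replayed without waking the AP. A minimal sketch; class and method names are invented for illustration:

```python
# Hypothetical MCU-side cache keyed by audio identifier.
class McuAudioCache:
    def __init__(self):
        self._store = {}   # audio identifier -> decoded first data

    def save(self, audio_id: str, first_data: bytes):
        # Stash audio the AP decoded earlier as the "second audio".
        self._store[audio_id] = first_data

    def supports(self, audio_id: str) -> bool:
        # The second audio corresponding to the identifier is stored.
        return audio_id in self._store

    def play(self, audio_id: str) -> bytes:
        # Replay the stored second audio; no AP decode is needed.
        return self._store[audio_id]
```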
  • the second audio includes the decoded first data; and the micro control unit controls the speaker to play the first audio, specifically including:
  • the micro control unit controls the speaker to play the second audio indicated by the identifier of the first audio.
  • the second audio includes first data encoded in a second audio format
  • the micro control unit supports decoding the second audio in the second audio format
  • the micro control unit controls the speaker to play the first audio, specifically including:
  • the micro control unit decodes the second audio indicated by the identifier of the first audio to obtain the first data
  • the micro control unit controls the speaker to play the first data.
  • the microcontroller encodes the audio data decoded by the application processor in an audio format supported by the microcontroller, which can save storage space of the microcontroller.
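The storage-saving variant above re-encodes the decoded first data into a second audio format the MCU itself can decode, so the cached copy is smaller than raw samples. The sketch below uses zlib purely as a stand-in for that second format; the patent does not name a codec.

```python
# Hypothetical storage-saving cache: zlib stands in for the MCU-decodable
# "second audio format"; the real format is unspecified in the source.
import zlib

def mcu_store(first_data: bytes) -> bytes:
    # Re-encode decoded audio before caching to save MCU storage space.
    return zlib.compress(first_data)

def mcu_replay(second_audio: bytes) -> bytes:
    # The MCU decodes the cached second audio back into playable first data.
    return zlib.decompress(second_audio)
```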
  • the method further includes:
  • after receiving the second message, the application processor sends a fourth message to the micro control unit, instructing the micro control unit to play the first audio;
  • the micro control unit controls the speaker to play the first audio.
  • the application processor can notify the micro control unit to play the first audio based on the micro control unit's feedback that it supports playing the first audio.
  • the method further includes:
  • the micro control unit sends a fifth message to the application processor, where the fifth message is used to instruct the application processor to switch to a sleep state;
  • the application processor switching to the sleep state specifically includes:
  • in response to the fifth message, the application processor switches to a sleep state.
  • the application processor can switch to the sleep state only after it determines that the micro control unit is playing the first audio.
  • the method further includes:
  • the application processor detects a first request from a second application, the first request being for requesting to use a speaker;
  • the application processor sends a sixth message to the micro control unit, where the sixth message is used to instruct the micro control unit to stop playing audio.
  • the priority of the second application is higher than the priority of the first application.
  • the application processor can ensure that high-priority applications use the speaker first.
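The speaker arbitration above (a higher-priority second application preempts the first via a sixth message) can be sketched as a small arbiter. Class and message names are assumptions made for illustration:

```python
# Hypothetical speaker arbiter: a higher-priority app preempts the current
# one, and the AP issues a "sixth message" telling the MCU to stop playback.
class SpeakerArbiter:
    def __init__(self):
        self.current_app = None
        self.current_priority = -1
        self.messages = []           # messages the AP sends to the MCU

    def request_speaker(self, app: str, priority: int) -> bool:
        if priority > self.current_priority:
            if self.current_app is not None:
                # Sixth message: instruct the MCU to stop playing audio.
                self.messages.append(("sixth_message", self.current_app))
            self.current_app, self.current_priority = app, priority
            return True
        return False                 # lower priority: request denied
```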
  • the method further includes:
  • the application processor receives an input for adjusting the volume, and sets the volume value of the application processor to the adjusted volume value;
  • the application processor sends the adjusted volume value to the micro control unit
  • the micro control unit sets the volume value of the micro control unit to the adjusted volume value.
  • the application processor can synchronize the volume of the application processor and the micro control unit after receiving the input to adjust the volume.
  • when the electronic device is in a state of continuously playing audio and the micro control unit determines, based on the first message, that it supports playing the first audio, the method further includes:
  • after the microcontroller finishes playing the first audio, it notifies the application processor to switch to a non-sleep state;
  • the application processor sends a seventh message to the micro control unit, where the first audio and the third audio belong to the same play list;
  • the micro control unit determines whether it supports playing the third audio based on the seventh message.
  • the application processor can query the microcontroller unit whether the microcontroller unit supports playing the third audio after the microcontroller unit finishes playing the first audio.
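The playlist continuation step can be sketched as a per-track check: when a track finishes, the MCU wakes the AP, the AP's seventh message names the next (third) audio, and the MCU answers whether it can play it alone. A hypothetical sketch; the function and the format set are invented:

```python
# Hypothetical seventh-message check at a track boundary.
def on_track_finished(third_audio_format: str, mcu_supported: set):
    # The MCU notifies the AP to switch to a non-sleep state, then the AP
    # queries (seventh message) whether the MCU supports the third audio.
    ap_state = "non-sleep"                       # AP woken by the MCU
    mcu_plays = third_audio_format in mcu_supported
    return ap_state, mcu_plays
```

If `mcu_plays` is true the AP can go back to sleep for the next track; otherwise it stays awake to decode, matching the fallback path above.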
  • the present application provides another processing method, which is applied to an electronic device, wherein the electronic device includes an application processor, a micro control unit, and a speaker; the method includes:
  • the micro control unit receives a first input, where the first input is used to trigger the electronic device to play a first audio using a first application;
  • the micro control unit determines, in response to the first input and based on the first information of the first audio, whether the micro control unit supports playing the first audio;
  • if the microcontroller unit determines, based on the first information, that it supports playing the first audio, the microcontroller unit controls the speaker to play the first audio.
  • the micro control unit can run the first application instead of the application processor, saving power consumption of the device.
  • the micro control unit can also play the first audio, further saving power consumption of the device.
  • before the micro control unit receives the first input, the method further includes:
  • the application processor controls the display screen to display a first interface, where the first interface includes an icon of a first application;
  • the application processor receives a second input for the icon of the first application
  • the application processor sends a first instruction to the micro control unit in response to the second input, where the first instruction is used to instruct the micro control unit to display an interface of the first application;
  • after sending the first instruction, the application processor switches to a sleep state;
  • the micro control unit controls the display screen to display the interface of the first application in response to the first instruction.
  • the interface of the first application includes a first control.
  • the first control is used to trigger the electronic device to play the first audio.
  • the first input is an input to the first control.
  • when the application processor determines that the first application is run by the micro control unit, it can notify the micro control unit to display the interface of the first application, thereby saving power consumption of the device.
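The display hand-off in this second aspect (second input on the app icon → first instruction → AP sleeps → MCU renders the interface and handles the first input on the first control) can be sketched as follows. All names are hypothetical; this only models the division of labour the claims describe:

```python
# Hypothetical display hand-off: the AP delegates the first application's
# interface to the MCU and sleeps; the MCU then handles the play control.
class Device:
    def __init__(self):
        self.ap_state = "awake"
        self.mcu_screen = None

    def second_input_on_icon(self, app: str):
        # First instruction: the MCU should display the app's interface.
        self.mcu_screen = f"{app}:interface"
        self.ap_state = "sleep"      # AP sleeps after sending the instruction

    def first_input_on_control(self) -> str:
        # With the AP asleep, the MCU (not the AP) reacts to the first input.
        return "mcu_handles_play" if self.ap_state == "sleep" else "ap_handles"
```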
  • the method further includes:
  • if the micro control unit determines, based on the first information, that it does not support playing the first audio, the micro control unit sends a first message to the application processor;
  • the application processor decodes the first audio in response to the first message to obtain the first data
  • the application processor sends the first data to the micro control unit
  • the micro control unit controls the speaker to play the first data.
  • when the micro control unit does not support playing the first audio, it can obtain the decoded first data from the application processor and play it.
  • the first message includes the first audio.
  • the first message is used to trigger the application processor to switch to a non-sleep state.
  • the application processor can switch to a non-sleep state and decode the first audio.
  • the first information includes a first audio format of the first audio; and the micro control unit determines that the micro control unit supports playing the first audio based on the first information, specifically including:
  • if the micro control unit determines, based on the first audio format, that it supports decoding the first audio in the first audio format, it is determined that the micro control unit supports playing the first audio.
  • the micro control unit can determine whether to support playing the audio based on the audio format of the first audio.
  • the micro control unit controls the speaker to play the first audio, specifically including:
  • the micro control unit decodes the first audio to obtain first data
  • the micro control unit controls the speaker to play the first data.
  • the first information includes an identifier of the first audio; and the micro control unit determines that the micro control unit supports playing the first audio based on the first information, specifically including:
  • if the micro control unit determines, based on the identifier of the first audio, that the second audio corresponding to the identifier is stored, it is determined that the micro control unit supports playing the first audio.
  • since the microcontroller stores the audio decoded by the application processor, when the microcontroller plays the audio again it can directly play the stored audio data without the application processor decoding it again, saving the computing resources of the application processor and saving device power consumption.
  • the second audio includes the decoded first data; and the micro control unit controls the speaker to play the first audio, specifically including:
  • the micro control unit controls the speaker to play the second audio indicated by the identifier of the first audio.
  • the second audio includes first data encoded in a second audio format
  • the micro control unit supports decoding the second audio in the second audio format
  • the micro control unit controls the speaker to play the first audio, specifically including:
  • the micro control unit decodes the second audio indicated by the identifier of the first audio to obtain the first data
  • the micro control unit controls the speaker to play the first data.
  • the microcontroller encodes the audio data decoded by the application processor in an audio format supported by the microcontroller, which can save the storage space of the micro control unit.
  • the method further includes:
  • the application processor detects a first request from a second application, the first request being for requesting to use a speaker;
  • the application processor sends a sixth message to the micro control unit, where the sixth message is used to instruct the micro control unit to stop playing audio.
  • the priority of the second application is higher than the priority of the first application.
  • the application processor can ensure that high-priority applications use the speaker first.
  • the method further includes:
  • the application processor receives an input for adjusting the volume, and sets the volume value of the application processor to the adjusted volume value;
  • the application processor sends the adjusted volume value to the micro control unit
  • the micro control unit sets the volume value of the micro control unit to the adjusted volume value.
  • the application processor can synchronize the volume of the application processor and the micro control unit after receiving the input to adjust the volume.
  • when the electronic device is in a state of continuously playing audio and the microcontroller unit determines, based on the first information, that it supports playing the first audio, the method further includes:
  • after the microcontroller finishes playing the first audio, if it determines, based on the second information of the third audio, that playing the third audio is not supported, the microcontroller notifies the application processor to switch to a non-sleep state and sends the third audio to the application processor;
  • the application processor decodes the third audio.
  • the micro control unit can wake up and notify the application processor to decode the audio when playing an audio that is not supported.
  • the present application provides another processing method, comprising:
  • the first electronic device receives a first input, where the first input is used to trigger the electronic device to play a first audio using a first application;
  • the first electronic device sends a first message to the second electronic device in response to the first input, where the first message is used to instruct the second electronic device to determine whether the second electronic device supports playing the first audio;
  • after receiving the second message sent by the second electronic device, the first electronic device switches to a sleep state.
  • the method further includes:
  • the first electronic device receives a third message sent by the second electronic device, where the third message is used to instruct the first electronic device to decode the first audio;
  • the first electronic device decodes the first audio in response to the third message to obtain the first data;
  • the first electronic device sends the first data to the second electronic device.
  • the first electronic device switching to the sleep state specifically includes:
  • in response to the second message, the first electronic device sends the first audio to the second electronic device, where the second message indicates that the second electronic device supports decoding the first audio;
  • after sending the first audio, the first electronic device switches to a sleep state.
  • the method further includes:
  • the application processor sends a fourth message to the micro control unit, which is used to instruct the micro control unit to play the first audio.
  • the application processor switches to a sleep state, specifically including:
  • the application processor receives the fifth message sent by the micro control unit and switches to the sleep state.
  • the fifth message is used to instruct the application processor to switch to the sleep state.
  • the method further includes:
  • the application processor detects a first request of a second application when the microcontroller unit controls the speaker to play audio, the first request being used to request to use the speaker;
  • the application processor sends a sixth message to the micro control unit, where the sixth message is used to instruct the micro control unit to stop playing audio.
  • the priority of the second application is higher than the priority of the first application.
  • the method further includes:
  • the application processor receives an input for adjusting the volume during the process of the microcontroller unit playing the audio, and sets the volume value of the application processor to the adjusted volume value;
  • the application processor sends the adjusted volume value to the micro control unit.
  • the method further includes:
  • after receiving, from the micro control unit, the notification of switching to the non-sleep state, the application processor switches to the non-sleep state;
  • the application processor sends a seventh message to the micro control unit, where the first audio and the third audio belong to the same play list.
  • the present application provides another processing method, comprising:
  • after receiving the first message sent by the first electronic device, the second electronic device determines whether it supports playing the first audio;
  • if the second electronic device determines, based on the first message, that it supports playing the first audio, the second electronic device sends a second message to the first electronic device;
  • after the second electronic device sends the second message to the first electronic device, the second electronic device controls the speaker to play the first audio.
  • the method further includes:
  • if the micro control unit determines, based on the first message, that it does not support playing the first audio, the micro control unit sends a third message to the application processor;
  • the micro control unit controls the speaker to play the first data.
  • the first message includes a first audio format of the first audio; and the micro control unit determines that the micro control unit supports playing the first audio based on the first message, specifically including:
  • if the micro control unit determines, based on the first audio format, that it supports decoding the first audio in the first audio format, it is determined that the micro control unit supports playing the first audio.
  • the micro control unit controls the speaker to play the first audio, specifically including:
  • after receiving the first audio sent by the application processor, the micro control unit decodes the first audio to obtain the first data;
  • the micro control unit controls the speaker to play the first data.
  • the first message includes an identifier of the first audio; and the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio, specifically including:
  • if the micro control unit determines, based on the identifier of the first audio, that the second audio corresponding to the identifier is stored, it is determined that the micro control unit supports playing the first audio.
  • the second audio includes the decoded first data; and the micro control unit controls the speaker to play the first audio, specifically including:
  • the micro control unit controls the speaker to play the second audio indicated by the identifier of the first audio.
  • the second audio includes first data encoded in a second audio format
  • the micro control unit supports decoding the second audio in the second audio format
  • the micro control unit controls the speaker to play the first audio, specifically including:
  • the micro control unit decodes the second audio indicated by the identifier of the first audio to obtain the first data
  • the micro control unit controls the speaker to play the first data.
  • the method further includes:
  • after receiving the fourth message sent by the application processor, the micro control unit controls the speaker to play the first audio.
  • the fourth message is used to instruct the micro control unit to play the first audio.
  • the method further includes:
  • the micro control unit sends a fifth message to the application processor, where the fifth message is used to instruct the application processor to switch to a sleep state.
  • the method further includes:
  • the micro control unit receives the adjusted volume value sent by the application processor, and sets the volume value of the micro control unit to the adjusted volume value.
  • when the electronic device is in a state of continuously playing audio and the micro control unit determines, based on the first message, that it supports playing the first audio, the method further includes:
  • after the microcontroller finishes playing the first audio, it notifies the application processor to switch to a non-sleep state;
  • after receiving the seventh message sent by the application processor, the micro control unit determines whether it supports playing the third audio based on the seventh message.
  • the present application provides another processing method, comprising:
  • the second electronic device receives a first input, where the first input is used to trigger the electronic device to play a first audio using a first application;
  • the second electronic device determines, in response to the first input, whether the second electronic device supports playing the first audio based on the first information of the first audio;
  • the second electronic device determines, based on the first information, that the second electronic device supports playing the first audio
  • the second electronic device controls the speaker to play the first audio.
  • before the micro control unit receives the first input, the method further includes:
  • the micro control unit receives a first instruction sent by the application processor, where the first instruction is used to instruct the micro control unit to display an interface of a first application;
  • the micro control unit controls the display screen to display the interface of the first application in response to the first instruction.
  • the interface of the first application includes a first control.
  • the first control is used to trigger the electronic device to play the first audio.
  • the first input is an input to the first control.
  • the method further includes:
  • if the micro control unit determines, based on the first information, that it does not support playing the first audio, the micro control unit sends a first message to the application processor;
  • after receiving the first data sent by the application processor, the micro control unit controls the speaker to play the first data.
  • the first message is used to trigger the application processor to switch to a non-sleep state.
  • the first information includes a first audio format of the first audio; and the micro control unit determines that the micro control unit supports playing the first audio based on the first information, specifically including:
  • if the micro control unit determines, based on the first audio format, that it supports decoding the first audio in the first audio format, it is determined that the micro control unit supports playing the first audio.
  • the micro control unit controls the speaker to play the first audio, specifically including:
  • the micro control unit decodes the first audio to obtain first data
  • the micro control unit controls the speaker to play the first data.
  • the first information includes an identifier of the first audio; and the micro control unit determines that the micro control unit supports playing the first audio based on the first information, specifically including:
  • if the micro control unit determines, based on the identifier of the first audio, that the second audio corresponding to the identifier is stored, it is determined that the micro control unit supports playing the first audio.
  • the second audio includes the decoded first data; and the micro control unit controls the speaker to play the first audio, specifically including:
  • the micro control unit controls the speaker to play the second audio indicated by the identifier of the first audio.
  • the second audio includes first data encoded in a second audio format
  • the micro control unit supports decoding the second audio in the second audio format
  • the micro control unit controls the speaker to play the first audio, specifically including:
  • the micro control unit decodes the second audio indicated by the identifier of the first audio to obtain the first data
  • the micro control unit controls the speaker to play the first data.
  • the method further includes:
  • the application processor detects a first request from a second application, the first request being for requesting to use a speaker;
  • the application processor sends a sixth message to the micro control unit, where the sixth message is used to instruct the micro control unit to stop playing audio.
  • the priority of the second application is higher than the priority of the first application.
  • the micro control unit receives the adjusted volume value sent by the application processor, and sets the volume value of the micro control unit to the adjusted volume value.
  • the electronic device is in a state of continuously playing audio and the microcontroller unit determines based on the first information that the microcontroller unit supports playing the first audio; the method further includes:
  • after the microcontroller unit finishes playing the first audio, if the microcontroller unit determines based on second information of a third audio that playing the third audio is not supported, the microcontroller unit notifies the application processor to switch to a non-sleep state and sends the third audio to the application processor.
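The continuous-playback handoff described in the bullets above can be sketched as a small decision routine. This is a minimal illustration rather than the claimed implementation; the names `MCU_SUPPORTED_FORMATS` and `next_track_action`, and the use of a dict for a track's "second information", are assumptions:

```python
# Sketch of the playlist-continuation logic: after the micro control unit
# finishes a track, it inspects the next track's information and either
# keeps playing locally or wakes the application processor to decode.
# All names here are illustrative, not taken from the claims.

MCU_SUPPORTED_FORMATS = {"pcm", "wav", "amr", "mp3", "aac", "slk"}

def next_track_action(track_info: dict) -> str:
    """Decide who handles the next track while audio plays continuously."""
    if track_info.get("restricted"):            # permission-restricted audio
        return "wake_ap_and_forward"            # only the AP can decode it
    if track_info["format"].lower() in MCU_SUPPORTED_FORMATS:
        return "mcu_play"                       # MCU decodes and plays alone
    return "wake_ap_and_forward"                # unsupported format -> AP
```

In the supported case the application processor can remain asleep between tracks; in the other two cases it is woken and the third audio is forwarded to it.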
  • the present application provides another processing method, comprising:
  • the first electronic device receives a first message sent by the second electronic device; the first message includes a first audio;
  • the first electronic device decodes the first audio to obtain first data
  • the first electronic device sends the first data to the second electronic device, and the first data is used for the second electronic device to play through a speaker.
  • the method further includes:
  • the application processor controls the display screen to display a first interface, where the first interface includes an icon of a first application;
  • the application processor receives a second input for the icon of the first application
  • the application processor sends a first instruction to the micro control unit in response to the second input, where the first instruction is used to instruct the micro control unit to display an interface of the first application;
  • after sending the first instruction, the application processor switches to a sleep state.
  • the first message is used to trigger the application processor to switch to a non-sleep state.
  • the method further includes:
  • the application processor receives an input for adjusting the volume during the process of the microcontroller unit playing the audio, and sets the volume value of the application processor to the adjusted volume value;
  • the application processor sends the adjusted volume value to the micro control unit.
  • the method further includes: after receiving the third audio sent by the micro control unit, the application processor switches to a non-sleep state and decodes the third audio.
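The companion-device flow above (the first electronic device decoding audio on behalf of the second and returning raw data for playback) can be sketched as follows. The function names, the dict message format, and the identity `decode` are placeholders, not the codec or message layout the application assumes:

```python
# Sketch of the two-device decode flow: the second (e.g. wearable) device
# forwards audio it cannot decode; the first (e.g. phone-like) device
# decodes it and returns raw data for the second device's speaker.

def decode(encoded: bytes) -> bytes:
    # Stand-in for a real MP3/AAC/etc. decoder: identity transform here.
    return bytes(encoded)

def first_device_handle(message: dict) -> dict:
    """First device: receive the first message, decode, reply with first data."""
    first_data = decode(message["first_audio"])
    return {"first_data": first_data}          # sent back to the second device

def second_device_play(reply: dict) -> bytes:
    """Second device: route the returned first data to its speaker."""
    return reply["first_data"]
```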
  • the present application provides a processing device, including an application processor and a microcontroller unit; wherein:
  • the application processor is configured to receive a first input, where the first input is configured to trigger the electronic device to play a first audio using a first application;
  • the application processor is further configured to send a first message to the micro control unit in response to the first input;
  • when the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio, the micro control unit sends a second message to the application processor;
  • after the micro control unit sends the second message to the application processor, the micro control unit is used to control the speaker to play the first audio, and the application processor switches to a sleep state.
  • the micro control unit is configured to send a third message to the application processor when the micro control unit determines that the micro control unit does not support playing the first audio based on the first message;
  • the application processor is configured to decode the first audio to obtain the first data in response to the third message
  • the application processor is further used to send the first data to the micro control unit;
  • the micro control unit is also used to control the speaker to play the first data.
  • the first message includes a first audio format of the first audio
  • the micro control unit is specifically used to determine that the micro control unit supports playing the first audio when the micro control unit determines that the micro control unit supports decoding the first audio in the first audio format based on the first audio format.
  • the application processor is specifically configured to send the first audio to the micro control unit in response to the second message, where the second message indicates that the micro control unit supports decoding of the first audio;
  • the micro control unit is further configured to decode the first audio to obtain the first data after receiving the first audio;
  • the micro control unit is also used to control the speaker to play the first data
  • the application processor is further configured to switch to a sleep state after sending the first audio.
  • the first message includes an identifier of the first audio
  • the micro control unit is used to determine that the micro control unit supports playing the first audio when the micro control unit determines that the second audio corresponding to the identifier of the first audio is stored based on the identifier of the first audio.
  • the second audio includes decoded first data; and the micro control unit is specifically configured to control a speaker to play the second audio indicated by an identifier of the first audio.
  • the second audio includes the first data encoded in the second audio format
  • the micro control unit supports decoding the second audio in the second audio format
  • the micro control unit is specifically configured to decode the second audio indicated by the identifier of the first audio to obtain the first data
  • the micro control unit is also used to control the speaker to play the first data.
  • the application processor is further configured to send, after receiving the second message, to the micro control unit a fourth message for instructing the micro control unit to play the first audio;
  • the micro control unit is further used to control the loudspeaker to play the first audio after receiving the fourth message.
  • the micro control unit is further configured to send a fifth message to the application processor after the micro control unit receives the fourth message, where the fifth message is used to instruct the application processor to switch to a sleep state;
  • the application processor is specifically configured to switch to a sleep state in response to the fifth message.
  • the application processor is used to detect a first request of the second application when the micro control unit controls the speaker to play audio, where the first request is used to request to use the speaker;
  • the application processor is used to send a sixth message to the micro control unit after detecting the first request, where the sixth message is used to instruct the micro control unit to stop playing audio.
  • the priority of the second application is higher than the priority of the first application.
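The speaker-arbitration rule above (a higher-priority application's request for the speaker interrupts the current playback via the sixth message) can be sketched as a one-line predicate; the function name and integer priorities are assumptions for illustration:

```python
# Sketch of speaker preemption: the application processor compares the
# priority of the application currently playing with that of the requester
# and decides whether to send the "stop playing" (sixth) message.

def should_preempt(current_app_priority: int, requesting_app_priority: int) -> bool:
    """Return True if the AP should instruct the MCU to stop playing."""
    return requesting_app_priority > current_app_priority
```

Equal-priority requests do not preempt in this sketch; the claims only require preemption when the second application's priority is strictly higher.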
  • the application processor is further configured to receive an input for adjusting the volume during the process of the microcontroller unit playing the audio, and set the volume value of the application processor to the adjusted volume value;
  • the application processor is further used to send the adjusted volume value to the micro control unit;
  • the micro control unit is also used to set the volume value of the micro control unit to the adjusted volume value.
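The volume-synchronization steps above (the application processor receives the adjustment input, updates its own volume value, then forwards the adjusted value to the micro control unit, which mirrors it) could look like this sketch; the class and method names are illustrative:

```python
# Sketch of keeping the AP-side and MCU-side volume values in sync.

class MicroControlUnit:
    def __init__(self):
        self.volume = 50

    def set_volume(self, value: int):
        self.volume = value                    # MCU mirrors the AP's value

class ApplicationProcessor:
    def __init__(self, mcu: MicroControlUnit):
        self.volume = 50
        self.mcu = mcu

    def on_volume_input(self, adjusted: int):
        self.volume = adjusted                 # update the AP-side value first
        self.mcu.set_volume(adjusted)          # then forward it to the MCU
```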
  • the micro control unit is further configured to, when the electronic device is in a state of continuously playing audio and the micro control unit determines based on the first message that the micro control unit supports playing the first audio, notify the application processor to switch to a non-sleep state after playing the first audio;
  • the application processor is further configured to send a seventh message to the micro control unit after switching to the non-sleep state, wherein the first audio and a third audio belong to the same play list;
  • the micro control unit is further used to determine whether the micro control unit supports playing the third audio based on the seventh message.
  • the present application provides another processing device, including an application processor and a microcontroller unit; wherein:
  • a micro control unit configured to receive a first input, where the first input is used to trigger the electronic device to play a first audio using a first application
  • the microcontroller unit is configured to determine, in response to the first input and based on first information of the first audio, whether the microcontroller unit supports playing the first audio;
  • the micro control unit is further used to control the speaker to play the first audio when the micro control unit determines that the micro control unit supports playing the first audio based on the first information.
  • before the micro control unit receives the first input:
  • the application processor is further used to control the display screen to display a first interface, where the first interface includes an icon of a first application;
  • the application processor is further configured to receive a second input for the icon of the first application
  • the application processor is further used to send a first instruction to the micro control unit in response to the second input, where the first instruction is used to instruct the micro control unit to display an interface of the first application;
  • the application processor is further configured to switch to a sleep state after sending the first instruction
  • the microcontrol unit is also used to control the display screen to display the interface of the first application in response to the first instruction.
  • the interface of the first application includes a first control.
  • the first control is used to trigger the electronic device to play the first audio.
  • the first input is an input to the first control.
  • the micro control unit is configured to send a first message to the application processor when the micro control unit determines that the micro control unit does not support playing the first audio based on the first information;
  • the application processor is further configured to decode the first audio in response to the first message to obtain the first data
  • the application processor is further used to send the first data to the micro control unit;
  • the micro control unit is also used to control the speaker to play the first data.
  • the first message is used to trigger the application processor to switch to a non-sleep state.
  • the first information includes a first audio format of the first audio
  • the micro control unit is specifically used to determine that the micro control unit supports playing the first audio when the micro control unit determines that the micro control unit supports decoding the first audio in the first audio format based on the first audio format.
  • the micro control unit is specifically configured to decode the first audio to obtain the first data
  • the micro control unit is also used to control the speaker to play the first data.
  • the first information includes an identifier of the first audio; and the micro control unit is specifically configured to determine that the micro control unit supports playing the first audio when the micro control unit determines, based on the identifier of the first audio, that a second audio corresponding to the identifier of the first audio is stored.
  • the second audio includes decoded first data; and the micro control unit is specifically configured to control a speaker to play the second audio indicated by an identifier of the first audio.
  • the second audio includes the first data encoded in a second audio format
  • the micro control unit supports decoding the second audio in the second audio format
  • the micro control unit is specifically used to decode the second audio indicated by the identifier of the first audio to obtain the first data
  • the micro control unit is also used to control the speaker to play the first data.
  • the application processor is used to detect a first request of the second application when the micro control unit controls the speaker to play audio, where the first request is used to request to use the speaker;
  • the application processor is further used to send a sixth message to the micro control unit when detecting the first request of the second application, where the sixth message is used to instruct the micro control unit to stop playing audio.
  • the priority of the second application is higher than the priority of the first application.
  • the application processor is further configured to receive an input for adjusting the volume during the process of the microcontroller unit playing the audio, and set the volume value of the application processor to the adjusted volume value;
  • the application processor is further used to send the adjusted volume value to the micro control unit;
  • the micro control unit is also used to set the volume value of the micro control unit to the adjusted volume value.
  • the microcontroller unit is further configured to determine whether to support playing of a third audio based on the second information of the third audio after the electronic device is in a state of continuously playing audio and the microcontroller unit determines based on the first information that the microcontroller unit supports playing of the first audio.
  • the micro control unit is further used to notify the application processor to switch to a non-sleep state and send the third audio to the application processor when it is determined based on the second information of the third audio that the third audio is not supported to be played;
  • the application processor is further configured to decode the third audio after switching to a non-sleep state.
  • the present application provides an electronic device, comprising one or more processors and one or more memories.
  • the one or more memories are coupled to the one or more processors, and the one or more memories are used to store computer program codes, and the computer program codes include computer instructions, and when the one or more processors execute the computer instructions, the electronic device executes the processing method in any possible implementation of any of the above aspects.
  • an embodiment of the present application provides a computer storage medium, including computer instructions.
  • when the computer instructions are executed on an electronic device, the electronic device executes a processing method in any possible implementation of any of the above aspects.
  • an embodiment of the present application provides a chip system, which is applied to an electronic device.
  • the chip system includes one or more processors, and the processor is used to call computer instructions so that the electronic device executes a processing method in any possible implementation of any of the above aspects.
  • FIG. 1 is a schematic diagram of the hardware structure of an electronic device 100 provided in an embodiment of the present application.
  • FIG. 2 is a schematic flow chart of a processing method provided in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a software architecture provided in an embodiment of the present application.
  • FIG. 4 is a schematic flow chart of a processing method provided in an embodiment of the present application.
  • FIG. 5 is a flow chart of a speaker scheduling method provided in an embodiment of the present application.
  • FIG. 6 is a flow chart of a volume adjustment method provided in an embodiment of the present application.
  • FIG. 7 is a schematic flow chart of another processing method provided in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another software architecture provided in an embodiment of the present application.
  • FIG. 9 is a flow chart of another processing method provided in an embodiment of the present application.
  • FIG. 10 is a flow chart of another speaker scheduling method provided in an embodiment of the present application.
  • FIG. 11 is a flow chart of another volume adjustment method provided in an embodiment of the present application.
  • first and second are used for descriptive purposes only and are not to be understood as suggesting or implying relative importance or implicitly indicating the number of the indicated technical features.
  • a feature defined as "first" or "second" may explicitly or implicitly include one or more of the features, and in the description of the embodiments of the present application, unless otherwise specified, "plurality" means two or more.
  • the electronic device 100 includes an application processor and a micro control unit.
  • the runtime power consumption of the micro control unit is lower than the runtime power consumption of the application processor.
  • the electronic device 100 can determine whether the micro control unit supports playing the first audio, and when it is determined that the micro control unit supports playing the first audio, play the data of the first audio through the micro control unit.
  • the electronic device 100 can decode the first audio through the application processor to obtain the first data.
  • the application processor can send the first data to the micro control unit.
  • the micro control unit controls the speaker to play the first data. In this way, the electronic device 100 uses the micro control unit to play audio.
  • the power consumption of the micro control unit is lower than the power consumption of the application processor, the power consumption of the electronic device 100 playing audio can be saved.
  • the application processor can switch to a sleep state when the micro control unit controls the speaker to play sound, thereby further saving power consumption.
  • when the application processor is in a dormant state (also known as a standby state or a low power consumption state), the current input to the application processor is small and the power consumption of the application processor is low; when the application processor is in a non-dormant state (also known as a high power consumption state), the current input to the application processor is large and the power consumption of the application processor is high. It can be understood that the power consumption of the application processor in the non-dormant state is higher than that in the dormant state, and the input current in the non-dormant state is greater than that in the dormant state.
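Using the representative currents quoted later in this description (roughly 100 mA running and 4 mA standby for the application processor, and about 2 mA running for the micro control unit), the benefit of letting the micro control unit play audio while the application processor sleeps can be estimated with simple arithmetic:

```python
# Back-of-the-envelope current comparison for the two playback paths,
# using the representative figures from this description.

AP_RUN_MA, AP_SLEEP_MA = 100.0, 4.0   # application processor currents
MCU_RUN_MA = 2.0                       # micro control unit running current

# Path 1: the application processor decodes and plays the audio itself.
ap_plays = AP_RUN_MA                   # 100 mA

# Path 2: the micro control unit plays while the application processor sleeps.
mcu_plays = AP_SLEEP_MA + MCU_RUN_MA   # 6 mA

saving_ratio = ap_plays / mcu_plays    # roughly 16.7x less current draw
```

These figures are order-of-magnitude illustrations only; actual currents depend on the specific chips, clock frequencies, and workloads.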
  • the electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device and/or a smart city device.
  • FIG. 1 exemplarily shows a hardware structure diagram of an electronic device 100 provided in an embodiment of the present application.
  • the electronic device 100 may include an application processor (AP) 101, a microcontroller unit (MCU) 102, a power switch 103, a memory 104, an audio module 105, a speaker 105A, etc.
  • the above modules may be connected via a bus or other means, and the embodiment of the present application takes the bus connection as an example.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine some components, or split some components, or arrange the components differently.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • the application processor 101 and the micro control unit 102 can both be used to read and execute computer-readable instructions, and can generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution.
  • the application processor 101 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the application processor 101 may include multiple groups of I2C buses.
  • the application processor 101 can be coupled to different devices through different I2C bus interfaces, such as a touch sensor, a charger, a flash, a camera, etc.
  • the application processor 101 can be coupled to a touch sensor through the I2C interface, so that the application processor 101 communicates with the touch sensor through the I2C bus interface to realize the touch function of the electronic device 100.
  • the I2S interface can also be used for audio communication.
  • the application processor 101 can be coupled to the audio module 105 through the I2S bus to realize communication between the application processor 101 and the audio module 105.
  • the micro control unit 102 may also include one or more of the above interfaces. It is understandable that the interface connection relationships between the modules illustrated in the embodiments of the present application are only schematic illustrations and do not constitute a structural limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt interface connection methods different from those in the above embodiments, or a combination of multiple interface connection methods.
  • the application processor 101 may integrate multiple modules such as a central processing unit (CPU), a graphics processing unit (GPU), a video codec, and a memory subsystem.
  • the application processor 101 may be an Arm Cortex-A core processor whose clock frequency (also known as the operating frequency) exceeds 1 GHz and which typically includes 4 or more processing cores.
  • the memory capacity of the random access memory of the application processor 101 is approximately 2GB or more.
  • the application processor 101 can be used to run operating systems such as Linux, Android, Windows, and Hongmeng.
  • the runtime current of the application processor 101 is typically 100 mA, and the standby current is typically 4 mA. In the embodiments of the present application, the application processor 101 can be used to decode audio to obtain audio data to be played.
  • the application processor 101 supports decoding audio in multiple audio formats, such as moving picture experts group audio layer III (MP3), advanced audio coding (AAC), adaptive multi-rate (AMR), pulse code modulation (PCM), Ogg (Ogg Vorbis), 3rd generation partnership project (3GP), advanced streaming format (ASF), AV1, transport stream (TS), multimedia container file format (MKV), MP4 (MPEG-4 part 14), Windows Media Audio (WMA), waveform audio file format (WAV), M4A, audio data transport stream (ADTS), SLK, etc.
  • the microcontroller unit 102 may include a central processing unit, a memory, a counter (Timer), and one or more interfaces.
  • the microcontroller unit 102 may be an Arm Cortex-M core processor, whose clock frequency (also known as the operating frequency) is about 192 MHz, and whose processor core is usually a single core.
  • the memory capacity of the random access memory of the microcontroller unit 102 is about 2MB.
  • the microcontroller unit 102 can support the operation of lightweight Internet of Things operating systems, such as LiteOS, Hongmeng and other operating systems.
  • the operating current of the microcontroller unit 102 is usually 2 mA, and the standby current is usually 0.1 mA.
  • the microcontroller unit 102 may be an STM32L4R9 chip, a Dialog microcontroller, etc.
  • the microcontroller unit 102 may also be used to decode audio to obtain audio data.
  • the microcontroller unit 102 supports decoding audio in only some audio formats.
  • for example, the audio formats that the microcontroller unit 102 supports decoding are PCM, WAV, AMR, MP3, AAC, and SLK.
  • the micro control unit does not support decoding of audio in audio formats such as Ogg, 3GP, ASF, TS, MKV, MP4, WMA, M4A, and ADTS.
  • the music application provides audio with restricted permissions.
  • the application processor 101 can use the parameters provided by the music application to decode the audio with restricted permissions to obtain audio data.
  • the micro control unit 102 does not support decoding audio with restricted permissions; therefore, if the permissions of an audio item are restricted, the micro control unit 102 does not support decoding that audio regardless of its audio format.
  • the audio formats supported by the application processor 101 and the micro control unit 102 described above are only examples, and the embodiments of the present application are not limited thereto.
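The decode-capability check the micro control unit applies (a format whitelist plus a permission gate) can be sketched as a simple predicate. The set contents come from the examples above; the function name and the boolean `restricted` flag are assumptions:

```python
# Sketch of the MCU-side decode-capability check: a format whitelist,
# short-circuited by the permission restriction described above.

MCU_FORMATS = {"pcm", "wav", "amr", "mp3", "aac", "slk"}

def mcu_can_decode(audio_format: str, restricted: bool = False) -> bool:
    """The MCU never decodes permission-restricted audio, whatever its format."""
    if restricted:
        return False
    return audio_format.lower() in MCU_FORMATS
```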
  • the power switch 103 may be used to control the power supply to the electronic device 100 .
  • the memory 104 is used to store various software programs and/or multiple groups of instructions.
  • the memory 104 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory in the application processor 101 and/or the micro control unit 102 is a cache memory.
  • the memory can save instructions or data that have just been used or are used cyclically. If the instructions or data need to be used again, they can be called directly from the memory, which avoids repeated access and reduces the waiting time of the application processor 101 and/or the micro control unit 102, thereby improving the efficiency of the system.
  • the memory 104 is coupled to the application processor 101 and the micro control unit 102.
  • the electronic device 100 can implement audio functions, such as music playback, through the audio module 105, the speaker 105A, and the application processor 101 (or the micro control unit 102).
  • the audio module 105 is used to convert digital audio information into analog audio signal output, and also to convert analog audio input into digital audio signal.
  • the speaker 105A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 may further include a receiver, a microphone, etc.
  • the receiver, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the microphone, also called a "mic", is used to convert sound signals into electrical signals.
  • the electronic device 100 may further include a display screen (not shown in FIG. 1 ), which may be used to display images, videos, controls, text information, etc.
  • the display screen may include a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include 1 or N display screens, where N is a positive integer greater than 1.
  • the electronic device 100 may further include a communication module (not shown in FIG. 1 ), which may include a Bluetooth module, a WLAN module, etc.
  • the electronic device 100 may receive or transmit wireless signals through the communication module.
  • the electronic device 100 may establish a communication connection with other electronic devices through the communication module, and exchange data with the other electronic devices based on the communication connection.
  • the electronic device 100 may include one or more sensors.
  • the electronic device 100 may include a touch sensor, which may also be referred to as a "touch control device”.
  • the touch sensor may be disposed on a display screen, and a touch screen composed of the touch sensor and the display screen may also be referred to as a "touch control screen”.
  • the touch sensor may be used to detect a touch operation applied thereto or thereabout.
  • when the electronic device 100 is a smart watch, the electronic device 100 may further include a watch strap and a watch dial.
  • the watch dial may include the display screen mentioned above for displaying images, videos, controls, text information, etc.
  • the strap may be used to fix the electronic device 100 to the limbs of the human body for easy wearing.
  • the electronic device 100 includes an application processor and a microcontroller unit. After receiving an input for playing a first audio, the application processor of the electronic device 100 in a non-sleep state may send a query request to the microcontroller unit to inquire whether the microcontroller unit supports playing the first audio. If the application processor receives a response from the microcontroller unit indicating that playing the first audio is supported, the application processor may notify the microcontroller unit to play the first audio and switch itself to a sleep state while the microcontroller unit plays the first audio. If the application processor receives a response from the microcontroller unit indicating that playing the first audio is not supported, the application processor may perform a decoding operation on the first audio to obtain first data.
  • the application processor may send the first data to the microcontroller unit, and the microcontroller unit may control the speaker to play the first data.
  • in this way, the lower-power microcontroller unit of the electronic device 100 controls the speaker to play audio, so that the higher-power application processor does not have to perform the audio playback operation, thereby saving power consumption of the device.
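The query/response flow just described can be simulated end to end in a few lines. All class, method, and attribute names here are illustrative, and the decode step is a placeholder rather than a real codec:

```python
# Sketch of the AP <-> MCU playback protocol: the AP queries the MCU's
# capability, then either hands off playback and sleeps, or decodes the
# audio itself and sends raw data to the MCU.

class MCU:
    SUPPORTED = {"pcm", "wav", "amr", "mp3", "aac", "slk"}

    def query(self, audio_format: str) -> bool:
        # Corresponds to replying with the "supported"/"not supported" message.
        return audio_format in self.SUPPORTED

    def play(self, data: bytes):
        self.playing = data                     # stand-in for speaker output

class AP:
    def __init__(self, mcu: MCU):
        self.mcu = mcu
        self.sleeping = False

    def on_play_input(self, audio_format: str, audio_bytes: bytes):
        if self.mcu.query(audio_format):        # MCU supports the audio
            self.mcu.play(audio_bytes)          # MCU decodes and plays it
            self.sleeping = True                # AP switches to a sleep state
        else:                                   # fall back: AP decodes
            self.mcu.play(self.decode(audio_bytes))

    def decode(self, audio_bytes: bytes) -> bytes:
        return audio_bytes                      # placeholder for a real codec
```

In the unsupported branch the application processor stays awake to decode, which matches the fallback path in the method above.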
  • the processing method includes:
  • the application processor controls the display screen to display a desktop including a music application icon.
  • the application processor is in a non-sleep state, and the application processor can control the display screen to display a desktop including a music application icon.
  • the application processor receives input for a music application icon.
  • the application processor may obtain input for the music application icon.
  • the input for the music application icon may be a touch input, such as a single click, a long press, etc., or the input may be a voice command input, a floating gesture input (for example, waving a hand above the camera of the electronic device 100, etc.), a body input (for example, shaking the electronic device 100, etc.), etc., which is not limited in the embodiments of the present application.
  • the music application can be any application for playing audio, and the application can be installed in the electronic device 100 at the factory, or the application can be downloaded by the user through the network.
  • the music application can be the music application 301 shown in FIG.
  • the application processor displays an interface of the music application, where the interface is used to play audio and includes a play control.
  • the application processor may control the display screen to display the interface of the music application in response to the input for the music application icon.
  • the interface of the music application may include a play control, and the play control may be used to trigger the electronic device 100 to play the first audio.
  • the interface of the music application may also include song information of the first audio, and the song information may include but is not limited to the song title, singer name, etc.
  • S204 The application processor receives input for the playback control.
  • the play control may be used to trigger the electronic device 100 to play the first audio.
  • the application processor may receive an input for the play control, and execute a subsequent step of playing the first audio in response to the input.
  • in some embodiments, the first audio is audio stored in the application processor.
  • in other embodiments, the first audio is audio stored in a memory of the electronic device 100 that belongs to neither the application processor nor the micro control unit.
  • the application processor may obtain screen contact point position information of the user's touch through a sensor driver module (eg, a touch sensor module), and the application processor may determine that the input is an input for a playback control based on the screen contact point position information.
  • the application processor in a non-sleep state may respond to the user's voice instruction for playing the first audio and execute step S205.
  • the interface displayed on the display screen may not be the interface of the music application. In other words, when the application processor controls the display screen to display any interface, it may respond to the user's voice instruction for playing the first audio and execute step S205.
  • the electronic device 100 displays the interface of the file management application, the interface includes audio options corresponding to one or more stored audios, the one or more audios include the first audio, and the one or more audio options include the first audio option.
  • after receiving an input for the first audio option, the electronic device 100 (specifically, the application processor) may execute step S205.
  • the application processor may determine whether the first audio has restricted permissions after receiving input for the playback control.
  • the application processor may execute step S208 when it is determined that the first audio has restricted permissions.
  • the application processor may execute step S205 when it is determined that the first audio has unrestricted permissions.
  • the file attributes of the first audio include a permission flag, and the application processor may determine whether the first audio has restricted permissions based on the permission flag of the first audio.
  • when the permission flag is a first value, the application processor determines that the first audio has restricted permissions.
  • when the permission flag is a second value, the application processor determines that the first audio has unrestricted permissions, where the first value is different from the second value.
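The permission check can be sketched as below. The concrete flag values are assumptions (the application does not specify them); only the branch structure follows the description: restricted audio is handled by the application processor itself, unrestricted audio proceeds to the query step S205.

```python
RESTRICTED = 1    # hypothetical "first value"
UNRESTRICTED = 0  # hypothetical "second value"

def has_restricted_permissions(file_attributes):
    # The permission flag is part of the first audio's file attributes.
    return file_attributes.get("permission_flag") == RESTRICTED

def next_step(file_attributes):
    # Per the described flow: step S208 when restricted, step S205 when unrestricted.
    return "S208" if has_restricted_permissions(file_attributes) else "S205"
```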
  • the application processor sends a query request 21 to the micro control unit.
  • the query request 21 includes the identifier and audio format of the first audio.
  • the query request 21 may be used to query whether the microcontroller unit supports playing the first audio.
  • the identifier of the first audio may be the name of the first audio.
  • the audio format is also called the audio type, which may be understood as the audio file format of the audio.
  • the application processor may obtain the suffix of the audio file to determine the audio format of the audio.
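Deriving the audio format from the file suffix, as just described, is a one-line operation; the helper name below is illustrative.

```python
import os

def audio_format_from_path(path):
    """Derive the audio format (type) from the file suffix, e.g. 'song.mp3' -> 'mp3'."""
    suffix = os.path.splitext(path)[1]      # includes the leading dot, or '' if none
    return suffix.lstrip(".").lower() or None
```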
  • the micro control unit determines whether it stores a second audio corresponding to the identifier of the first audio.
  • the microcontroller unit can store audio data obtained by decoding the audio by the application processor, and the audio data obtained by decoding the audio by the application processor stored in the microcontroller unit can be called cached audio. That is, if the application processor decoded the first audio when the electronic device 100 played the audio last time, the application processor can send the first data obtained by decoding the first audio to the microcontroller unit.
  • the microcontroller unit can store the first data.
  • the micro control unit may directly store the first data to obtain cached audio, and the cached audio including the first data may be referred to as second audio.
  • in order to reduce the storage space occupied by the second audio so that the electronic device 100 can store more second audio, the micro control unit can encode the first data using an audio format supported by the micro control unit to obtain the second audio.
  • the second audio may include the first data encoded using the audio format supported by the micro control unit. In this way, by encoding and compressing the audio data, the storage space of the electronic device 100 can be saved, so that the electronic device 100 can store more second audio.
  • the micro control unit can determine whether the second audio corresponding to the identifier of the first audio to be played is stored based on the identifier of the first audio in the query request 21 and the correspondence between the identifier of the audio stored in the micro control unit and the stored audio data.
  • when the micro control unit determines that the second audio corresponding to the identifier of the first audio is stored, step S207 may be executed.
  • when the micro control unit determines that the second audio is not stored, step S210 may be executed.
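Step S206 amounts to a table lookup. A minimal sketch, assuming the micro control unit keeps a simple mapping from audio identifiers to cached (second) audio data:

```python
def step_after_lookup(cache, identifier):
    """Return the next step of the flow and the cached data, if any.

    S207: second audio is stored (send query result 22).
    S210: not stored, fall through to the format-support check.
    """
    if identifier in cache:
        return "S207", cache[identifier]
    return "S210", None
```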
  • the micro control unit sends a query result 22 to the application processor, where the query result 22 indicates that the micro control unit stores the second audio corresponding to the identifier of the first audio.
  • the micro control unit may send a query result 22 to the application processor.
  • the query result 22 may indicate that the second audio corresponding to the identifier of the first audio is stored, that is, the micro control unit supports playing the first audio.
  • the micro control unit controls the speaker to play the second audio.
  • when the micro control unit stores the second audio corresponding to the first audio, if the second audio includes the first data obtained by the application processor decoding the first audio, the micro control unit can directly read the first data and control the speaker to play the first data. If the second audio includes the encoded first data, the micro control unit first decodes the second audio to obtain the first data, and then controls the speaker to play the first data.
  • the execution order of step S207 and step S208 is not limited to executing step S207 first and then executing step S208.
  • the microcontroller unit can also execute step S208 first and then execute step S207.
  • the two steps can be executed simultaneously, which is not limited in the present embodiment.
  • the application processor receives the query result 22 and switches to a sleep state.
  • the application processor can switch from the non-sleep state to the sleep state when receiving the query result 22.
  • in this way, the micro control unit controls the speaker to play the first audio while the application processor switches to the sleep state, which greatly reduces the power consumption of the electronic device 100 when playing audio.
  • the microcontroller unit may send the query result 22 to the application processor after determining that the second audio is stored.
  • the application processor may send a play instruction to the microcontroller unit upon receiving the query result 22, and the play instruction is used to instruct the microcontroller unit to play the audio.
  • the microcontroller unit may control the speaker to play the second audio after receiving the play instruction.
  • the application processor may also switch from a non-sleep state to a sleep state after sending the play instruction to the microcontroller unit.
  • the microcontroller can control the speaker to play the second audio, and at the same time send the play status information indicating that the microcontroller is playing the audio to the application processor.
  • the application processor can switch from the non-sleep state to the sleep state.
  • the electronic device 100 when the electronic device 100 is playing the first audio, if the electronic device 100 is in a state of continuous audio playback (for example, a sequential playback state or a random playback state), the electronic device 100 can continue to play other audio in the playlist where the first audio is located after playing the first audio. For example, after controlling the speaker to play the first data, the electronic device 100 can continue to play the third audio, which is different from the first audio and belongs to the same playlist as the first audio.
  • the microcontroller unit may notify the application processor to switch from the sleep state to the non-sleep state after playing the second audio.
  • the application processor may send a query request to the microcontroller unit again, the query request including the identifier and audio format of the third audio.
  • the micro control unit determines whether the audio format of the first audio is supported for decoding.
  • the micro control unit may determine whether the micro control unit supports decoding the audio format of the first audio based on the audio format of the first audio in the query request 21 .
  • the microcontroller unit of the electronic device 100 stores audio formats that can be decoded by the microcontroller unit.
  • the electronic device 100 can search whether the audio format of the first audio to be played exists in the stored audio formats.
  • the micro control unit may execute step S216 when it is determined that the audio format of the first audio to be played exists in the stored audio formats, that is, when it is determined that the micro control unit supports decoding of the first audio to be played.
  • the micro control unit may execute step S211 when it is determined that the audio format of the first audio to be played does not exist in the stored audio formats, that is, when it is determined that the micro control unit does not support decoding of the first audio to be played.
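The branch in steps S210/S211/S216 is a set-membership check against the audio formats the micro control unit stores as decodable. The concrete format set below is an illustrative assumption:

```python
MCU_DECODABLE_FORMATS = {"mp3", "wav", "aac"}   # illustrative, not from the application

def step_after_format_check(audio_format, decodable=MCU_DECODABLE_FORMATS):
    # S216: the MCU supports decoding and plays the first audio itself.
    # S211: unsupported, so query result 23 is sent to the application processor.
    return "S216" if audio_format in decodable else "S211"
```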
  • the microcontroller unit may first determine whether the microcontroller unit supports decoding the audio format of the first audio, and then determine whether the microcontroller unit stores the second audio corresponding to the identifier of the first audio when the microcontroller unit does not support decoding the audio format of the first audio.
  • the microcontroller unit may simultaneously determine whether the microcontroller unit supports decoding the audio format of the first audio and whether the microcontroller unit stores the second audio corresponding to the identifier of the first audio, and the embodiments of the present application are not limited to this.
  • the query request 21 may only include the identifier of the first audio, and accordingly, the micro control unit may only determine whether the second audio is stored.
  • the query request 21 may only include the audio format of the first audio, and accordingly, the micro control unit may only determine whether decoding of the first audio is supported.
  • the micro control unit sends a query result 23 to the application processor, where the query result 23 indicates that the micro control unit does not support playing the first audio.
  • when the micro control unit determines that the second audio corresponding to the identifier of the first audio is not stored and that decoding of the audio format of the first audio is not supported, it can send a query result 23 to the application processor.
  • the query result 23 can indicate that the microcontroller does not support playing the first audio.
  • the application processor decodes the first audio to obtain first data.
  • the application processor can decode the first audio through a decoding algorithm to obtain the first data.
  • the parameters of the decoding algorithm can be specified parameters, or parameters provided by the music application. In this way, when the electronic device 100 decodes the audio provided by the third-party music application, it can use the parameters provided by the third-party music application to implement the decoding operation of the first audio.
  • the application processor sends the first data to the micro control unit.
  • the application processor may send the decoded first data to the micro control unit.
  • the micro control unit controls the speaker to play the first data.
  • the micro control unit may transmit the first data to the audio module 105 shown in FIG. 1 , and together with the audio module 105 and the speaker 105A implement the playing operation of the first audio.
  • the micro control unit stores the first data.
  • the micro control unit can store the first data and obtain the second audio.
  • the microcontroller unit may use a supported encoding method to encode the first data to obtain a second audio in a specified audio format.
  • the specified audio format is an audio format supported by the microcontroller unit.
  • for details, please refer to the embodiment shown in FIG. 1 , which will not be described again here. In this way, by encoding and compressing the audio data, the storage space of the electronic device 100 can be saved, so that the electronic device 100 can store more second audio.
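The space saving from re-encoding the first data before caching can be illustrated with a stand-in codec. Here `zlib` merely substitutes for an MCU-supported audio encoder (the application does not name one); the point is only that the cached second audio is smaller than the raw first data and can be recovered for playback.

```python
import zlib

def cache_second_audio(cache, identifier, first_data):
    """Encode (here: compress) the decoded first data and store it as second audio."""
    encoded = zlib.compress(first_data)
    cache[identifier] = encoded
    return encoded

def read_second_audio(cache, identifier):
    """Decode the cached second audio back into playable first data."""
    return zlib.decompress(cache[identifier])
```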
  • the electronic device 100 when the electronic device 100 is playing the first audio, if the electronic device 100 is in a state of continuous audio playback (for example, a sequential playback state or a random playback state), the electronic device 100 can continue to play other audio in the playlist where the first audio is located after playing the first audio. For example, after controlling the speaker to play the first data, the electronic device 100 can continue to play the third audio, which is different from the first audio and belongs to the same playlist as the first audio.
  • the application processor may send a query request to the micro control unit again.
  • the query request includes the identifier and audio format of the third audio.
  • the micro control unit sends a query result 24 to the application processor, where the query result 24 indicates that the micro control unit supports decoding of the first audio.
  • the microcontroller unit may send a query result 24 to the application processor when determining that the audio format supports decoding of the first audio.
  • the query result 24 may indicate that the microcontroller unit supports decoding of the first audio.
  • the query result 24 may also be used to notify the application processor to send the first audio to the microcontroller unit.
  • the application processor sends the first audio to the micro control unit.
  • the application processor may send the first audio to the micro control unit.
  • the application processor may send the storage path of the first audio to the micro control unit.
  • the micro control unit may read the first audio based on the storage path.
  • the micro control unit controls the speaker to play the first audio.
  • the micro control unit decodes the first audio to obtain first data, and controls the speaker to play the first data.
  • the application processor switches to a sleep state.
  • the application processor may switch from a non-sleep state to a sleep state after sending the first audio to the micro control unit.
  • the micro control unit may send play status information to the application processor indicating that the micro control unit is playing the first audio.
  • the application processor may switch from a non-sleep state to a sleep state after receiving the play state information.
  • the application processor may send a play instruction to the micro control unit when sending the first audio.
  • the micro control unit may decode and play the first audio after receiving the play instruction and the first audio.
  • the micro control unit may also send play status information indicating that the micro control unit is playing the first audio to the application processor after receiving the first audio and the play instruction.
  • the application processor may switch from a non-sleep state to a sleep state after receiving the play status information.
  • the electronic device 100 when the electronic device 100 is playing the first audio, if the electronic device 100 is in a state of continuous audio playback (for example, a sequential playback state or a random playback state), the electronic device 100 can continue to play other audio in the playlist where the first audio is located after playing the first audio. For example, after controlling the speaker to play the first data, the electronic device 100 can continue to play the third audio, which is different from the first audio and belongs to the same playlist as the first audio.
  • the microcontroller unit may notify the application processor to switch from the sleep state to the non-sleep state after playing the first audio.
  • the application processor may send a query request to the microcontroller unit again, and the query request includes the identifier and audio format of the third audio.
  • the application processor may directly control the speaker to play the first data, and send the first data to the micro control unit to instruct the micro control unit to store the first data.
  • the electronic device 100 may use an application processor to determine whether the microcontroller unit supports playing the first audio, wherein the application processor may obtain from the microcontroller unit an identifier of the audio corresponding to the audio data stored in the microcontroller unit and an audio format supported for decoding by the microcontroller unit to determine whether the microcontroller unit supports playing the first audio.
  • the description of the application processor determining whether the microcontroller unit supports playing the first audio can be found in steps S206 to S210, which will not be repeated here.
  • the electronic device 100 can continue to play other audio in the playlist where the first audio is located after playing the first audio. For example, after controlling the speaker to play the first data, the electronic device 100 can continue to play the third audio, which is different from the first audio and belongs to the same playlist as the first audio. Since the application processor 101 of the electronic device 100 executes the above step S205 each time a different audio is played, querying whether the micro control unit supports playing that audio, power consumption is increased.
  • the application processor may send a query request including the identification and audio format of all audios in the playlist to the microcontroller unit when receiving an input to play the first audio.
  • the microcontroller unit may directly send the query result to the application processor without the application processor sending a query request.
  • the microcontroller unit may send a corresponding query result to the application processor when playing audio that the microcontroller unit does not support, or playing audio that the microcontroller unit supports decoding.
  • the description of the microcontroller unit determining whether to support the playback of a certain audio and the microcontroller unit sending the query result to the application processor can be referred to the embodiment shown in FIG2, which will not be repeated here.
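The batched variant described above can be sketched as a single query carrying every identifier and format in the playlist, so the application processor need not wake up to issue a query per track. The function name and return shape are assumptions:

```python
def batch_query(mcu_cache, mcu_formats, playlist):
    """playlist: list of (identifier, audio_format) tuples.

    Returns one support flag per track, computed in a single round trip:
    a track is supported if the MCU caches it or can decode its format.
    """
    return {
        identifier: (identifier in mcu_cache or audio_format in mcu_formats)
        for identifier, audio_format in playlist
    }
```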
  • the layered architecture divides the software into several layers, each layer having a clear role and division of labor.
  • the layers communicate with each other through software interfaces.
  • the operating system includes but is not limited to an application layer (also known as an application program layer), an application framework layer, a system library, and a kernel layer.
  • the application layer may include a series of application packages.
  • the application layer of the application processor 101 may include but is not limited to music application 301, call application 302, desktop application 303 and other applications.
  • the music application 301 may be used to play the first audio.
  • the call application 302 may be used to answer a call.
  • the desktop application 303 may be used to display a desktop including icons of the application.
  • the application framework layer can provide application programming interface (API) and programming framework for the applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer of the application processor 101 may include but is not limited to a collaborative playback module 311, a decoding service module 312, a focus management module 313, a focus agent module 314, a volume synchronization module 315, a volume adjustment service module 316, a dual-core communication module 317, a power management module 318, and the like.
  • the collaborative playback module 311 can be used to send the audio of the music application 301 to the decoding service module 312.
  • the decoding service module 312 may be used to decode audio to obtain audio data.
  • the focus management module 313 can be used to allocate and manage the speaker.
  • the application can obtain the focus of the speaker from the focus management module 313. After obtaining the focus of the speaker, the application can use the speaker to play audio.
  • when the focus management module 313 receives a request from another application to occupy the speaker, it notifies the current application that speaker focus has been lost and assigns speaker focus to the other application that made the request. In this way, the current application can pause audio playback after being notified of losing speaker focus.
  • the focus proxy module 314 can be used to represent the micro control unit 102. When the micro control unit 102 occupies the speaker, the focus proxy module 314 can be notified. The focus proxy module 314 can send a request to occupy the speaker to the focus management module 313, so that the focus management module 313 records that the focus proxy module 314 is using the speaker. The focus management module 313 can notify the focus proxy module 314 when receiving a request from other applications (e.g., a phone application) to occupy the speaker. The focus proxy module 314 can notify the micro control unit 102, and the micro control unit 102 can perform an operation of pausing the audio playback after receiving the notification from the focus proxy module 314.
  • the focus management module 313 of the application processor 101 is only responsible for the focus management of the application processor.
  • the focus management module 313 of the application processor 101 cannot notify the micro control unit 102. Therefore, the focus proxy module 314 represents the micro control unit, and when an application of the application processor 101 occupies the focus, the micro control unit 102 can be notified through the focus proxy module 314, so that the micro control unit 102 can perform an operation of pausing the audio playback.
  • when the micro control unit 102 occupies the speaker, that is, when the focus management module 313 records that the focus proxy module 314 occupies the speaker, if the focus management module 313 receives a request from the call application to occupy the speaker, the focus management module 313 can notify the focus proxy module 314 to stop occupying the speaker and notify the call application to use the speaker.
  • the call application can play audio (e.g., incoming call ringtone) through the speaker.
  • the focus proxy module 314 can notify the microcontroller unit 102 to stop using the speaker.
  • the microcontroller unit 102 can record the playback progress of the played audio.
  • the electronic device 100 can continue to play the audio from the recorded playback progress.
  • the microcontroller unit 102 can save the progress of the audio playback when the speaker is occupied by other applications or modules, so as to continue to play the audio at the stored playback progress when the audio is played next time.
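The focus handover and progress saving described above can be modeled as follows. The class names are illustrative; the proxy stands in for the micro control unit inside the application processor's focus manager, and the micro control unit records the playback progress when it is told to give up the speaker.

```python
class FocusManager:
    """Models the focus management module 313: one holder owns the speaker."""
    def __init__(self):
        self.holder = None

    def request_focus(self, requester):
        if self.holder is not None and self.holder is not requester:
            self.holder.on_focus_lost()        # notify the current holder first
        self.holder = requester

class FocusProxy:
    """Models the focus proxy module 314: represents the MCU to the focus manager."""
    def __init__(self, mcu):
        self.mcu = mcu

    def on_focus_lost(self):
        self.mcu.pause_and_save_progress()

class Mcu:
    def __init__(self):
        self.position = 0
        self.paused = False
        self.saved_progress = None

    def tick(self, seconds):
        self.position += seconds               # simulated playback progress

    def pause_and_save_progress(self):
        self.paused = True
        self.saved_progress = self.position    # resume point for the next playback

class CallApp:
    def on_focus_lost(self):
        pass
```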
  • the volume synchronization module 315 can be used to synchronize the volume value to the micro control unit 102 after receiving the volume value sent by the volume adjustment service module 316. Specifically, the volume synchronization module 315 can send the volume value to the volume adjustment service module 324 of the micro control unit 102 for adjusting the volume of the micro control unit 102.
  • the volume adjustment service module 316 may be used to change the volume of the application processor 101 of the electronic device 100 based on the volume adjustment operation of the user.
  • the volume adjustment service module 316 may send the changed volume to the volume synchronization module 315.
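The two-way volume synchronization between the volume adjustment service modules (316 on the application processor, 324 on the micro control unit) reduces to pushing each local change to the peer. A minimal sketch with assumed names:

```python
class VolumeService:
    """One side's volume adjustment service with its synchronization hook."""
    def __init__(self, name):
        self.name = name
        self.volume = 50          # arbitrary initial volume
        self.peer = None          # the other side's VolumeService

    def adjust(self, volume):
        self.volume = volume      # local change from the user's adjustment
        if self.peer is not None:
            self.peer.volume = volume   # synchronized directly; no echo back
```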
  • the dual-core communication module 317 can be used for data transmission between the application processor 101 and the micro control unit 102. For example, the first audio data provided by the application processor 101 is sent to the micro control unit 102.
  • the power management module 318 may be used to control the application processor 101 to switch from a sleep state to a non-sleep state, or to control the application processor 101 to switch from a non-sleep state to a sleep state. In some examples, the power management module 318 may control the magnitude of the input current provided to the application processor 101 and adjust the state of the application processor 101. For example, when the application processor 101 is in a sleep state, the input current input to the application processor 101 may be increased to switch the application processor 101 from the sleep state to the non-sleep state.
  • the application framework layer of the micro control unit 102 may include but is not limited to a collaborative playback module 321 , a decoding service module 322 , a volume synchronization module 323 , a volume adjustment service module 324 , a dual-core communication module 325 , and the like.
  • the collaborative playback module 321 can be used to determine whether the micro-control unit 102 stores the second audio corresponding to the identifier of the first audio, and can also be used to determine whether the micro-control unit 102 supports decoding of the first audio.
  • the collaborative playback module 321 can also be used to notify the decoding service module 312 of the application processor 101 to decode the first audio when it is determined that the micro-control unit 102 does not store the second audio and does not support decoding of the first audio.
  • the collaborative playback module 321 can also receive the first data obtained by decoding the first audio sent by the application processor 101, and send the first data to the audio driver module 327.
  • the collaborative playback module 321 can also be used to obtain cached audio based on the audio data obtained by decoding the audio by the application processor, for example, to obtain the second audio based on the first data.
  • the cooperative playback module 321 may be configured to notify the decoding service module 322 of the micro control unit 102 to decode the first audio when it is determined that the micro control unit 102 supports decoding of the first audio.
  • the decoding service module 322 can be used to decode the first audio to obtain the first data when the micro control unit 102 supports decoding the first audio.
  • the decoding service module 322 can send the first data to the audio driver module 327, and the audio driver module 327 controls the speaker to play the first data.
  • the decoding service module 322 can be used to decode the second audio to obtain the first data. It should be noted that the application processor 101 and the micro control unit 102 differ in operating frequency and memory capacity (for details of the difference, see the embodiment shown in FIG. 1 , which will not be repeated here).
  • the number of audio formats supported for decoding by the decoding service module 322 is less than the number of audio formats supported for decoding by the decoding service module 312.
  • the audio formats that the decoding service module 322 does not support decoding may include but are not limited to Ogg, 3GP, ASF, TS, MKV, MP4, WMA, M4A, and ADTS.
  • the volume synchronization module 323 can be used to synchronize the volume value to the application processor 101 after receiving the volume value sent by the volume adjustment service module 324. Specifically, the volume synchronization module 323 can send the volume value to the volume adjustment service module 316 of the application processor 101 for adjusting the volume of the application processor 101.
  • the volume adjustment service module 324 can be used to change the volume used when the micro control unit 102 plays audio based on the user's volume adjustment operation.
  • the volume adjustment service module 324 can send the changed volume to the volume synchronization module 323.
  • the dual-core communication module 325 can be used for data transmission between the micro control unit 102 and the application processor 101.
  • the decoding service module 312 of the application processor 101 can send the first data obtained by decoding the first audio to the dual-core communication module 317, the dual-core communication module 317 can send the first data to the dual-core communication module 325, and the dual-core communication module 325 then sends the first data to the audio driver module 327 through the cooperative playback module 321 and the audio channel module 326, so that the micro control unit 102 can control the speaker to play the first data decoded by the application processor 101.
  • a system library can include multiple functional modules.
  • the system library of the application processor 101 may include but is not limited to an audio channel module 319.
  • the audio channel module 319 may be used to transmit audio data between the application framework layer and the kernel layer.
  • the audio channel module 319 may send audio data of the decoding service module 312 to the audio driver module 320 of the kernel layer.
  • the system library of the microcontroller unit 102 may include but is not limited to an audio channel module 326.
  • the audio channel module 326 may be used to transmit audio data between the application framework layer and the kernel layer.
  • the audio channel module 326 may send audio data of the decoding service module 322 or the cooperative playback module 321 to the audio driver module 327 of the kernel layer.
  • the system library of the application processor 101 and/or the microcontroller unit 102 may also include a surface manager, a media library, etc.
  • the surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of multiple commonly used audio and video formats, as well as static image files, etc.
  • the kernel layer is a layer between hardware and software.
  • the kernel layer of the application processor 101 may include but is not limited to an audio driver module 320, a sensor driver module, and a display driver module.
  • the audio driver module 320 may be used to call related hardware (the audio module 105 and the speaker 105A shown in FIG. 1 ) to implement an audio playback function.
  • the kernel layer of the microcontroller unit 102 may include but is not limited to an audio driver module 327, a sensor driver module, and a display driver module, etc.
  • the audio driver module 327 may be used to call related hardware (the audio module 105 and the speaker 105A shown in FIG. 1) to implement the audio playback function.
  • the application processor 101 includes a music application 301 , a cooperative playback module 311 , and a decoding service module 312 .
  • the micro control unit 102 includes a cooperative playback module 321 and an audio driver module 327 .
  • the application processor 101 is in a non-sleep state.
  • the interface includes a play control, and the play control can be used to trigger the electronic device 100 to play the first audio.
  • the application processor 101 and the micro control unit 102 of the electronic device 100 can play the first audio through the processing method provided in the embodiment of the present application.
  • the processing method includes:
  • the music application 301 sends a query request 41 to the collaborative playback module 321.
  • the query request 41 includes an identifier of the first audio, an audio format, etc.
  • after receiving the input for the play control, the application processor 101 notifies the music application 301 to play the first audio.
  • the music application 301 may send a query request 41 to the collaborative play module 321 .
  • the collaborative playback module 321 determines whether the micro control unit 102 stores the second audio corresponding to the identifier of the first audio.
  • the cooperative playback module 321 may determine whether the micro control unit 102 stores the second audio based on the identifier of the first audio.
  • if the micro control unit 102 does not store the second audio, step S403 is executed.
  • if the micro control unit 102 stores the second audio, step S412 is executed.
  • the collaborative playback module 321 determines whether the micro control unit 102 supports decoding the audio format of the first audio.
  • the collaborative playback module 321 may obtain the audio format supported for decoding by the decoding service module 322 , and determine whether the micro control unit 102 supports decoding of the first audio to be played.
  • the collaborative playback module 321 sends the query result 42 to the music application 301.
  • the query result 42 indicates that the micro control unit 102 does not support the playback of the first audio.
  • the music application 301 sends the first audio to the collaborative playback module 311 .
  • the collaborative playback module 311 sends the first audio to the decoding service module 312 .
  • the decoding service module 312 decodes the first audio to obtain first data.
  • the decoding service module 312 sends the first data to the collaborative playback module 321 .
  • the collaborative playback module 321 sends the first data to the audio driving module 327.
  • the collaborative playback module 321 stores the first data.
  • the audio driver module 327 controls the speaker to play the first data.
  • the collaborative playback module 321 sends a query result 43 to the music application 301.
  • the query result 43 indicates that the micro control unit 102 stores the second audio.
  • the music application 301 notifies the application processor 101 to switch to the sleep state.
  • the music application 301 may send a sleep request message to the power management module 318 after receiving the query result 43 , and the power management module 318 may control the application processor 101 to switch to sleep mode after receiving the sleep request message.
  • the music application 301 may send a play instruction 44 to the collaborative play module 321, where the play instruction 44 is used to instruct the micro control unit 102 to play the second audio.
  • the music application 301 may notify the application processor 101 to switch to the sleep state after sending the play instruction 44.
  • the collaborative play module 321 may send the play status information to the music application 301 after receiving the play instruction 44, and the music application 301 may notify the application processor 101 to switch to the sleep state only after receiving the play status information.
  • the collaborative playback module 321 sends a playback message 45 to the audio driver module 327 to instruct the audio driver module 327 to play the second audio.
  • if the second audio includes the unencoded first data, the audio driver module 327 of the micro control unit 102 can directly read the first data in the second audio and control the speaker to play the first data. If the second audio includes the encoded first data, the cooperative playback module 321 of the micro control unit 102 first notifies the decoding service module 322 to decode the second audio, obtains the first data, and then notifies the audio driver module 327 to control the speaker to play the first data.
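Exemplarily, the branch described above may be sketched as follows. The function signature and the dict representation of the second audio are assumptions for illustration.

```python
def play_second_audio(second_audio, decode, drive_speaker):
    """second_audio: assumed dict with an 'encoded' flag and a 'data' payload."""
    if second_audio["encoded"]:
        # encoded first data: the decoding service module decodes it first
        first_data = decode(second_audio["data"])
    else:
        # unencoded first data: the audio driver reads it directly
        first_data = second_audio["data"]
    drive_speaker(first_data)
    return first_data

played = []
out = play_second_audio(
    {"encoded": True, "data": "enc"},
    decode=lambda d: "raw",           # stand-in for the decoding service
    drive_speaker=played.append,      # stand-in for the audio driver
)
```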
  • the audio driver module 327 controls the speaker to play the second audio.
  • the collaborative playback module 321 sends a query result 46 to the music application 301, and the query result 46 indicates that the micro control unit 102 supports decoding of the first audio.
  • the music application 301 sends the first audio to the collaborative playback module 321.
  • after receiving the query result 46, the music application 301 sends the first audio to the collaborative playback module 321.
  • the collaborative playback module 321 sends the first audio to the audio driving module 327.
  • the audio driver module 327 plays the first audio.
  • when the cooperative playback module 321 receives the first audio, it first notifies the decoding service module 322 to decode the first audio to obtain the first data, and then notifies the audio driving module 327 to control the speaker to play the first data.
  • the music application 301 notifies the application processor 101 to switch to the sleep state.
  • the music application 301 can send a sleep request message to the power management module 318 after sending the first audio to the collaborative playback module 321, and the power management module 318 can control the application processor 101 to switch to sleep mode after receiving the sleep request message.
  • when the microcontroller unit uses the speaker, if the application processor receives a request from a designated application to use the speaker, the microcontroller unit may be notified to stop using the speaker and the speaker may be allocated to the designated application.
  • the designated application may include but is not limited to call applications, such as call application 302. It should be noted that in the embodiment of the present application, the designated application may refer to an application that does not play audio through the microcontroller unit.
  • the speaker scheduling method includes:
  • the collaborative playback module 311 sends a playback message 51 to the music application 301, indicating that the micro control unit 102 is playing audio.
  • the collaborative playback module 311 can send a playback message 51 to the music application 301 when the audio driver module 327 controls the speaker to play audio.
  • the audio driver module 327 may notify the collaborative playback module 311 that the audio is being played after executing step S412 shown in FIG4 .
  • the collaborative playback module 321 may send a play message 51 to the music application 301, indicating that the micro control unit 102 is playing the audio.
  • the play message 51 may be the query result 46 shown in FIG. 4.
  • the music application 301 sends an occupation message 52 to the focus proxy module 314 , indicating that the speaker is occupied by the micro control unit 102 .
  • the music application 301 may send an occupation message 52 to the focus proxy module 314.
  • the occupation message 52 may indicate that the speaker is occupied by the micro control unit 102.
  • the focus proxy module 314 sends an occupation message 53 to the focus management module 313, indicating that the focus proxy module 314 occupies the speaker.
  • the focus proxy module 314 may send an occupation message 53 to the focus management module 313 to occupy the speaker focus.
  • the focus management module 313 sets the speaker to be occupied by the focus proxy module 314 .
  • the call application 302 sends an occupation request 54 to the focus management module 313, indicating that the call application 302 occupies the speaker.
  • the call application 302 may obtain the focus of the speaker from the focus management module 313 .
  • the focus management module 313 sets the speaker to be occupied by the call application 302 .
  • the focus management module 313 allocates the speaker focus to the call application 302, and sets the speaker focus to be occupied by the call application 302. After acquiring the speaker focus, the call application 302 can use the speaker to play the incoming call prompt tone.
  • the focus management module 313 can determine whether to allocate a speaker to the application based on the priority of the application. Exemplarily, when the focus management module 313 sets the speaker to be occupied by application A, and receives an occupation request from application B, the focus management module 313 can determine whether the priority of application B is higher than the priority of application A. When the focus management module 313 determines that the priority of application B is higher than the priority of application A, it notifies application A to stop occupying the speaker and allocates the speaker to application B. When the focus management module 313 determines that the priority of application B is lower than the priority of application A, it does not change the application occupying the speaker. At this time, application B cannot obtain the speaker.
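Exemplarily, the priority rule described above may be sketched as follows. The priority values and names are assumptions, the handling of equal priorities (no preemption) is an assumption, and a real focus manager would carry more state.

```python
# Sketch: grant the speaker focus to a requester only when its priority is
# strictly higher than the current holder's; otherwise reject the request.

class FocusManager:
    def __init__(self):
        self.holder = None            # (app_name, priority) of current holder
        self.lost = []                # apps notified that they lost the focus

    def request(self, app, priority):
        if self.holder is None or priority > self.holder[1]:
            if self.holder is not None:
                self.lost.append(self.holder[0])   # notify the previous holder
            self.holder = (app, priority)
            return True
        return False                  # lower or equal priority: not preempted

fm = FocusManager()
fm.request("focus_proxy(music)", priority=1)
granted = fm.request("call", priority=2)      # a call outranks music playback
rejected = fm.request("game", priority=1)     # lower priority cannot preempt
```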
  • the focus management module 313 sends a stop occupation message 55 to the focus proxy module 314, indicating that the speaker is occupied by other applications.
  • the focus management module 313 may send a stop occupation message 55 to the focus proxy module 314 , indicating that the focus proxy module 314 has lost the speaker focus and the speaker focus is occupied by other applications.
  • the focus proxy module 314 sends a focus loss message 56 to the music application 301, indicating that the speaker is occupied by other applications.
  • the focus proxy module 314 may send a focus loss message 56 to the music application 301 .
  • the music application 301 stops playing the first audio.
  • the music application 301 may execute an operation of stopping playing the first audio and stop using the speaker.
  • the music application 301 can save the playback progress of the first audio, and when the call application 302 finishes using the speaker and returns the speaker focus, it obtains the speaker focus and notifies the micro control unit to continue playing the first audio.
  • the music application 301 can continue playing the first audio from the progress when the speaker focus is lost based on the saved playback progress of the first audio.
  • the music application 301 sends a stop playing message 57 to the collaborative playing module 321, instructing the micro control unit 102 to stop playing the first audio.
  • the collaborative playback module 321 sends a stop playback message 58 to the audio driver module 327, instructing the audio driver module 327 to stop playing the audio.
  • call application 302 is only an example of an application and is not limited to the call application.
  • Other applications can also call the speaker, for example, an alarm clock application, etc., and can also implement the speaker scheduling method shown in Figure 5.
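Exemplarily, the save-and-resume behaviour described in the steps above may be sketched as follows. The class and callback names are assumptions; only the logic (save progress on focus loss, resume from the saved progress on focus regain) follows the text.

```python
class MusicApp:
    def __init__(self):
        self.position = 0             # playback progress in seconds
        self.saved = None
        self.playing = False

    def tick(self, seconds):
        if self.playing:
            self.position += seconds

    def on_focus_lost(self):
        self.saved = self.position    # remember where playback stopped
        self.playing = False          # stop playing the first audio

    def on_focus_regained(self):
        self.position = self.saved    # continue from the saved progress
        self.playing = True

app = MusicApp()
app.playing = True
app.tick(30)
app.on_focus_lost()                   # e.g. an incoming call takes the speaker
app.tick(10)                          # time passes while the call holds the speaker
app.on_focus_regained()
```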
  • the application processor of the electronic device 100 can adjust the volume value of the application processor.
  • the application processor of the electronic device 100 can notify the microcontroller unit of the adjusted volume value, and the microcontroller unit can adjust the volume value of the microcontroller unit to be the same as the volume value of the application processor after receiving the notification of adjusting the volume.
  • the application processor detects the volume adjustment input, and can notify the microcontroller unit of the adjusted volume, and the microcontroller unit can continue to play the audio using the adjusted volume.
  • the volume adjustment method provided in the embodiment of the present application is exemplarily introduced.
  • the application processor 101 can synchronize the volume to the micro control unit 102 through the volume synchronization module 315, wherein the volume adjustment method includes the following steps:
  • the volume adjustment service module 316 receives input for adjusting the volume.
  • the volume adjustment service module 316 sets the volume to the adjusted volume value.
  • the application processor 101 of the electronic device 100 controls the display screen to display the interface of the music application 301.
  • the interface of the music application 301 also includes a volume setting icon.
  • the application processor 101 of the electronic device 100 can control the display screen to display a volume bar after receiving an input (e.g., a single click) for the volume setting icon.
  • the input for adjusting the volume can be a sliding input for the volume bar.
  • the volume adjustment service module 316 may set the volume of the application processor 101 to an adjusted volume value based on the input for adjusting the volume.
  • the application processor 101 of the electronic device 100 may send the input to the volume adjustment service module 316, and the volume adjustment service module 316 may reduce the volume of the application processor 101 according to the sliding distance of the sliding input.
  • the application processor 101 of the electronic device 100 can detect an input of sliding the volume bar to the right or upward and send the input to the volume adjustment service module 316.
  • the volume adjustment service module 316 can increase the volume of the application processor 101 according to the sliding distance of the sliding input.
  • the volume adjustment service module 316 sends a volume adjustment message 61 to the volume synchronization module 315.
  • the volume adjustment message 61 carries the adjusted volume value.
  • the volume adjustment service module 316 may send a volume adjustment message 61 carrying the adjusted volume value to the volume synchronization module 315.
  • the volume synchronization module 315 sends a volume adjustment message 62 to the volume adjustment service module 324.
  • the volume adjustment message 62 carries the adjusted volume value.
  • the volume synchronization module 315 may send a volume adjustment message 62 to the volume adjustment service module 324 of the micro control unit 102, instructing the volume adjustment service module 324 to adjust the volume of the micro control unit 102 so that the volume value of the micro control unit 102 and the volume value of the application processor 101 remain the same.
  • the volume adjustment service module 324 sets the volume to the adjusted volume value.
  • the volume adjustment service module 324 of the micro control unit 102 may set the volume value carried in the volume adjustment message 62 as the volume value of the micro control unit 102 .
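Exemplarily, the volume synchronization flow above may be sketched as follows. The class names are assumptions; the message numbering in the comments follows the text.

```python
class McuVolumeService:
    def __init__(self):
        self.volume = 50              # assumed initial volume value

    def on_volume_adjust_message(self, value):
        self.volume = value           # message 62: adopt the AP's volume value

class ApVolumeService:
    def __init__(self, mcu_service):
        self.volume = 50
        self.mcu_service = mcu_service

    def set_volume(self, value):
        self.volume = value           # step 1: adjust the AP's own volume
        # volume synchronization module forwards the value (messages 61/62)
        self.mcu_service.on_volume_adjust_message(value)

mcu_vol = McuVolumeService()
ap_vol = ApVolumeService(mcu_vol)
ap_vol.set_volume(72)                 # both sides now hold the same volume
```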
  • an application processor e.g., the application processor 101 shown in FIG. 1
  • the program code of the application can be retrieved from the memory of the application processor.
  • the programming language of the program code is the programming language indicated by the application processor (e.g., Java/C++).
  • the application processor can compile the program code into computer instructions through a compiler of the application processor and execute these computer instructions. Each of these computer instructions belongs to the instruction set of the application processor.
  • the program code of the application program can be taken out from the memory of the microcontroller unit.
  • the programming language of the program code is the programming language (e.g., C/C++) indicated by the microcontroller unit.
  • the microcontroller unit can compile the program code into computer instructions through a compiler of the microcontroller unit, and execute these computer instructions. Each of these computer instructions belongs to the instruction set of the microcontroller unit.
  • since the instruction sets of the application processor and the microcontroller unit are different, the programming languages are also different. If the application processor and the microcontroller unit are installed with the same application, the code of the application stored in the application processor is different from the code of the application stored in the microcontroller unit.
  • the application program run by the application processor may be referred to as an application program deployed on the application processor, or may also be referred to as an application program of the application processor.
  • the application program run by the microcontroller unit may be referred to as an application program deployed on the microcontroller unit, or may also be referred to as an application program of the microcontroller unit.
  • the application processor can be used to run highly complex computing services and network services, etc.
  • the application processor can run applications such as desktop, calls, voice chat, local screening for premature atrial fibrillation, and maps.
  • the microcontroller unit 102 can be used to run simple computing services.
  • the microcontroller unit 102 can be used to run applications such as heart rate detection, stress detection, daily activities, and music.
  • the programming language of the code of the application run by the application processor is the programming language indicated by the application processor.
  • the programming language of the code of the application run by the microcontroller unit is the programming language indicated by the microcontroller unit.
  • the electronic device 100 stores the processor identification of the application, and the processor identification can be used to indicate whether the electronic device 100 runs the application through the application processor or the micro control unit. That is to say, when the electronic device 100 receives the input from the user to open a certain application, it can determine whether the application is deployed on the micro control unit or on the application processor based on the processor identification of the application, and then run the application through the processor indicated by the processor identification. In this way, the electronic device 100 can deploy applications in the micro control unit and set corresponding processor identifications for these applications. When the user opens a certain application, the electronic device 100 can quickly determine whether to use the micro control unit to run the application, thereby saving device power consumption. Exemplarily, as shown in Table 1, Table 1 shows examples of applications and corresponding processor identifications.
  • the table shows some applications and their corresponding processor identifiers.
  • the processor identifiers of the exercise application, the information application, and the music application indicate the micro control unit.
  • the processor identifiers of the desktop application and the call application indicate the application processor. That is to say, when the electronic device 100 displays the desktop, the application processor runs the desktop application and controls the display screen to display the desktop.
  • when the application processor of the electronic device 100 receives an input to open an application, it can determine the processor identifier of the application and decide whether to notify the micro-control unit to run the opened application.
  • when the application processor of the electronic device 100 receives an input to open a music application while controlling the display screen to display the desktop, the application processor determines that the music application is run by the micro-control unit based on the processor identifier of the music application.
  • the application processor can send a notification message to the micro-control unit, and the notification message can carry the identifier of the music application (for example, the application name).
  • the micro-control unit can run the music application, for example, control the display screen to display the interface of the music application.
  • when the application processor of the electronic device 100 receives an input to open a call application while controlling the display screen to display the desktop, the application processor determines, based on the processor identifier of the call application, that the call application is to be run by the application processor.
  • the application processor runs the call application, for example, controlling the display screen to display the interface of the call application.
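Exemplarily, the dispatch based on the processor identifier may be sketched as follows. The table contents mirror Table 1; the function shape and the default-to-AP behaviour for unlisted applications are assumptions.

```python
# Processor identifiers as in Table 1.
PROCESSOR_ID = {
    "exercise": "mcu",
    "information": "mcu",
    "music": "mcu",
    "desktop": "ap",
    "call": "ap",
}

def open_application(app_name, notify_mcu, run_on_ap):
    # route the launch by the stored processor identifier
    if PROCESSOR_ID.get(app_name) == "mcu":
        notify_mcu(app_name)          # notification carries the app identifier
        return "mcu"
    run_on_ap(app_name)               # unlisted apps assumed to run on the AP
    return "ap"

launched = []
target = open_application("music", notify_mcu=launched.append,
                          run_on_ap=launched.append)
```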
  • the electronic device 100 includes an application processor and a microcontroller unit.
  • the microcontroller unit of the electronic device 100 is deployed with a music application.
  • when the microcontroller unit displays the interface of the music application, it can receive an input for playing a first audio of the music application.
  • the microcontroller unit can determine whether the microcontroller unit supports playing the first audio in response to the input.
  • the microcontroller unit can notify the application processor to process the first audio and obtain the first data when it is determined that the microcontroller unit does not support playing the audio.
  • the microcontroller unit can obtain the first data of the application processor and control the speaker to play the first data.
  • the microcontroller unit can play the first audio through the microcontroller unit when it is determined that the microcontroller unit supports playing the first audio.
  • the electronic device 100 uses the microcontroller unit to run the music application and play the first audio. Since the power consumption of the microcontroller unit is lower than that of the application processor, the power consumption of the electronic device 100 for playing the audio can be saved. In addition, the application processor can switch to a dormant state when the microcontroller unit controls the speaker to play the sound, further saving power consumption.
  • the processing method includes:
  • the application processor displays a desktop including a music application icon.
  • the application processor is in a non-sleep state, and the application processor can control the display screen to display a desktop including a music application icon.
  • the application processor receives input for a music application icon.
  • the application processor may obtain input for the music application icon, wherein the specific description of the input may refer to the embodiment shown in FIG. 2 , which will not be repeated here.
  • the music application may be any application for playing audio, and the application may be installed in the electronic device 100 at the factory, or the application may be downloaded by the user through the network.
  • the music application may be the music application 801 shown in FIG. 8 .
  • based on the processor identifier of the music application, the application processor sends an instruction message 71 to the micro control unit for instructing the micro control unit to run the music application.
  • the application processor can determine that the music application is deployed in the micro control unit based on the identifier of the music application.
  • the application processor can send an indication message 71 to the micro control unit.
  • the indication message 71 can carry the identifier of the music application to instruct the micro control unit to run the music application.
  • the application processor may also switch from the non-sleeping state to the sleeping state after sending the indication message 71 to the micro control unit.
  • the micro control unit receives the indication message 71 and controls the display screen to display an interface of a music application, where the interface is used to play the first audio and includes a play control.
  • the micro control unit can control the display screen to display the interface of the music application.
  • the interface of the music application can include a play control, and the play control can be used to trigger the electronic device 100 to play the first audio.
  • the interface of the music application may also include song information of the first audio, and the song information may include but is not limited to the song title, singer name, etc.
  • the micro control unit receives input for the playback control.
  • the play control may be used to trigger the electronic device 100 to play the first audio.
  • the micro control unit may receive an input for the play control, and execute step S706 in response to the input.
  • the microcontroller unit may obtain the user's screen touch signal through a sensor driver module (such as a touch sensor module).
  • the micro control unit can determine that the input is an input for the playback control based on the screen contact point position information.
  • the application processor in a non-sleep state can send a play request to the micro-control unit in response to the user's voice instruction for playing the first audio, and the play request is used to instruct the micro-control unit to play the first audio.
  • the micro-control unit can execute step S706.
  • when the application processor receives the user's voice instruction, the interface displayed on the display screen may not be the interface of the music application.
  • when the application processor controls the display screen to display any interface, it can send a play request to the micro-control unit in response to the user's voice instruction for playing the first audio.
  • the application processor can switch from a non-sleep state to a sleep state.
  • the electronic device 100 displays the interface of the file management application, the interface includes audio options corresponding to one or more stored audios, the one or more audios include the first audio, and the one or more audio options include the first audio option.
  • after receiving an input for the first audio option, the electronic device 100 may execute step S706.
  • the micro control unit can execute step S706.
  • the micro control unit determines whether to store the second audio corresponding to the identifier based on the identifier of the first audio.
  • if the micro control unit does not store the second audio, step S707 is executed.
  • if the micro control unit stores the second audio, step S713 is executed.
  • the micro control unit determines whether it supports decoding of the first audio based on the audio format of the first audio.
  • if the micro control unit does not support decoding the first audio, step S708 is executed.
  • if the micro control unit supports decoding the first audio, step S714 is executed.
  • the microcontroller unit may first determine whether the microcontroller unit supports decoding the audio format of the first audio, and then determine whether the microcontroller unit stores the second audio corresponding to the identifier of the first audio when the microcontroller unit does not support decoding the audio format of the first audio.
  • the microcontroller unit may simultaneously determine whether the microcontroller unit supports decoding the audio format of the first audio and whether the microcontroller unit stores the second audio corresponding to the identifier of the first audio, and the embodiments of the present application are not limited to this.
  • the micro control unit may only determine whether the second audio is stored. Alternatively, the micro control unit may only determine whether decoding of the first audio is supported.
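Exemplarily, the decision logic of steps S706 and S707 may be sketched as follows. The set of formats the micro control unit can decode is an assumption for illustration; the ordering follows the storage-check-first variant described above.

```python
# Assumed set of audio formats the micro control unit can decode itself.
MCU_DECODABLE = {"mp3", "wav"}

def choose_playback_path(audio_id, audio_format, stored_second_audio):
    # S706: check whether a second audio matching the identifier is stored.
    if audio_id in stored_second_audio:
        return "play_second_audio"    # S713: play the stored second audio
    # S707: check whether the MCU supports decoding the first audio's format.
    if audio_format in MCU_DECODABLE:
        return "mcu_decodes"          # S714: the micro control unit plays it
    return "ap_decodes"               # S708: ask the application processor

store = {"song-1"}                    # identifiers with a cached second audio
path = choose_playback_path("song-2", "ogg", store)
```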
  • the micro control unit sends a decoding message 72 to the application processor, where the decoding message 72 instructs the application processor to decode the first audio.
  • the microcontroller unit may send a decoding message 72 to the application processor when determining that the first audio is not supported for playback, that is, the second audio is not stored and decoding of the first audio is not supported.
  • the decoding message 72 may instruct the application processor to decode the first audio. It is understandable that the decoding message 72 may also be used to wake up the application processor so that the application processor switches from a sleep state to a non-sleep state.
  • the application processor decodes the first audio to obtain first data.
  • the application processor can switch from the sleep state to the non-sleep state, and then decode the first audio through the decoding algorithm to obtain the first data.
  • the parameters of the decoding algorithm can be specified parameters, or parameters provided by the music application. In this way, the electronic device 100 can use the parameters provided by the third-party music application to implement the decoding operation of the first audio.
  • the application processor sends first data to the micro control unit.
  • the application processor may transmit the first data to the micro control unit.
  • the application processor may switch from a non-sleeping state to a sleeping state after transmitting the first data to the micro control unit.
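Exemplarily, the application processor's wake-decode-sleep cycle described above may be sketched as follows. The state machine is an illustration under assumed names, not the embodiment's implementation.

```python
class DecodeOnDemandAp:
    def __init__(self):
        self.state = "sleep"
        self.sent_to_mcu = []

    def on_decoding_message(self, audio):
        self.state = "non-sleep"      # decoding message 72 wakes the AP
        first_data = f"pcm({audio})"  # stand-in for the real decoding algorithm
        self.sent_to_mcu.append(first_data)  # transmit the first data to the MCU
        self.state = "sleep"          # switch back to sleep after sending

ap = DecodeOnDemandAp()
ap.on_decoding_message("first_audio")
```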
  • the micro control unit controls the speaker to play the first data.
  • the micro control unit may transmit the first data to the audio module 105 shown in FIG. 1 , and together with the audio module 105 and the speaker 105A, play the first data.
  • the micro control unit stores the first data.
  • the microcontroller unit can store the first data to obtain the second audio.
  • the microcontroller unit can use a supported encoding method to encode the first data to obtain the second audio of a specified audio format.
  • the specified audio format is the first audio format supported by the microcontroller unit. Specifically, please refer to the embodiment shown in Figure 1, which will not be repeated here. In this way, by encoding and compressing the first data, the storage space of the electronic device 100 can be saved, so that the electronic device 100 can store more second audio.
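Exemplarily, the storage step above may be sketched as follows. The toy encoder stands in for a real encoding method supported by the micro control unit, and the cache layout is an assumption.

```python
def store_second_audio(cache, audio_id, first_data, encode=None):
    # Optionally encode the first data into a supported format to save space.
    if encode is not None:
        payload = {"encoded": True, "data": encode(first_data)}
    else:
        payload = {"encoded": False, "data": first_data}
    cache[audio_id] = payload         # later lookups for this identifier hit the cache

cache = {}
store_second_audio(cache, "song-1", "pcm-bytes",
                   encode=lambda d: d[:3])   # toy stand-in for compression
```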
  • the micro control unit controls the speaker to play the second audio.
  • the micro control unit may control the speaker to play the second audio after determining that the second audio corresponding to the identifier of the first audio is stored.
  • the micro control unit controls the speaker to play the first audio.
  • the micro control unit may decode the first audio to obtain the first data, and control the speaker to play the first data.
  • step S706, step S707, and step S711 to step S714 can refer to the embodiment shown in FIG. 2, and will not be repeated here.
  • the application processor may directly control the speaker to play the first data, and send the first data to the micro control unit to instruct the micro control unit to store the first data.
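The playback decision described in the steps above — play a stored second audio if one exists, decode the first audio locally when the micro control unit supports its format, and otherwise have the application processor decode it and return the first data — can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the class name, method name, and the format set are assumptions.

```python
# Illustrative sketch (not from the patent): how the micro control unit (MCU)
# might decide whether to play the first audio itself or hand it to the
# application processor (AP) for decoding.

MCU_SUPPORTED_FORMATS = {"PCM", "WAV", "AMR", "MP3", "AAC", "SLK"}  # example formats only

class McuPlayer:
    def __init__(self):
        self.cache = {}  # audio identifier -> stored second audio (decoded/re-encoded data)

    def dispatch(self, audio_id: str, audio_format: str):
        """Return which component handles playback and why."""
        if audio_id in self.cache:
            # A second audio is already stored: play it directly, no decoding needed.
            return ("mcu", "play cached second audio")
        if audio_format in MCU_SUPPORTED_FORMATS:
            # The MCU can decode this format itself.
            return ("mcu", "decode and play first audio")
        # Otherwise wake the AP, let it decode, and receive the first data (PCM).
        return ("ap", "ap decodes, mcu plays first data")

player = McuPlayer()
print(player.dispatch("song-1", "MP3"))   # MCU decodes itself
print(player.dispatch("song-2", "OGG"))   # falls back to the AP
player.cache["song-2"] = b"...pcm..."     # first data stored as second audio
print(player.dispatch("song-2", "OGG"))   # second playback: cached second audio
```

On a repeated playback the cached branch wins, which is exactly why storing the second audio saves the application processor a second wake-up and decode.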
  • the application layer of the microcontroller unit 102 may include, but is not limited to, applications such as a music application 801.
  • the application layer of the application processor 101 may include, but is not limited to, applications such as a call application 802 and a desktop application 803.
  • the music application 801 may be used to play the first audio.
  • the call application 802 may be used to answer a call.
  • the desktop application 803 may be used to display an application icon.
  • the application framework layer of the application processor 101 may include but is not limited to a collaborative playback module 811, a decoding service module 812, a focus management module 813, a focus proxy module 814, a volume synchronization module 815, a volume adjustment service module 816, a dual-core communication module 817, and a power management module 818.
  • the application framework layer of the microcontroller unit 102 may include but is not limited to a collaborative playback module 821, a decoding service module 822, a volume synchronization module 823, a volume adjustment service module 824, and a dual-core communication module 825.
  • the system library of the application processor 101 may include an audio channel module 819 .
  • the system library of the micro control unit 102 may include an audio channel module 826 .
  • the kernel layer of the application processor 101 may include but is not limited to an audio driver module 820, a sensor driver module and a display driver module.
  • the kernel layer of the microcontroller unit 102 may include but is not limited to an audio driver module 827, a sensor driver module and a display driver module.
  • the description of the application framework layer, system library and kernel layer of the application processor 101 and the micro control unit 102 can refer to the embodiment shown in FIG. 3 , which will not be repeated here.
  • the application processor 101 includes a cooperative playback module 811 and a decoding service module 812 .
  • the micro control unit 102 includes a music application 801 , a cooperative playback module 821 and an audio driver module 827 .
  • the application processor 101 is in a non-sleep state, and the application processor 101 can control the display screen to display a desktop including an icon of the music application 801.
  • the application processor 101 can receive an input for the icon of the music application 801, and in response to the input, the application processor notifies the micro control unit 102 to run the music application 801 based on the processor identifier of the music application 801.
  • the application processor can also switch from the non-sleep state to the sleep state after notifying the micro control unit 102 to run the music application 801.
  • the microcontroller unit 102 can control the display screen to display the interface of the music application 801, which is used to play the first audio, and the interface includes a playback control.
  • the microcontroller unit 102 can receive input for the playback control and play the first audio through the processing method provided in the embodiment of the present application.
  • the application processor 101 in a non-sleep state can respond to the user's voice instruction for playing the first audio, and instruct the microcontroller unit 102 to play the first audio through the processing method provided in the embodiment of the present application.
  • the processing method includes:
  • the music application 801 determines whether to store the second audio based on the identifier of the first audio.
  • when the music application 801 determines that the second audio corresponding to the identifier of the first audio is stored, step S914 is executed; when the music application 801 determines that the second audio corresponding to the identifier of the first audio is not stored, step S902 is executed.
  • the music application 801 determines whether the micro control unit supports decoding of the first audio based on the audio format of the first audio.
  • when the music application 801 determines that the micro control unit does not support decoding the first audio, step S903 is executed.
  • when the music application 801 determines that the micro control unit supports decoding the first audio, step S914 is executed.
  • the music application 801 sends the first audio to the collaborative playback module 821.
  • the music application 801 sends the first audio to the collaborative playback module 821 when it determines that the second audio corresponding to the identifier of the first audio to be played is not stored and that decoding of the first audio to be played is not supported.
  • the collaborative playback module 821 sends the first audio to the collaborative playback module 811.
  • the cooperative playback module 821 of the micro control unit 102 may send the first audio to the cooperative playback module 811 of the application processor 101.
  • the application processor 101 switches from the sleep state to the non-sleep state.
  • the collaborative playback module 811 sends the first audio to the decoding service module 812.
  • the decoding service module 812 decodes the first audio to obtain first data.
  • the decoding service module 812 sends the first data to the collaborative playback module 821 .
  • the collaborative playback module 821 sends the first data to the audio driving module 827.
  • the audio driver module 827 controls the speaker to play the first data.
  • the collaborative playback module 821 sends first data to the music application 801.
  • the music application 801 stores the first data.
  • the music application 801 sends a play message 71 to the audio driver module 827.
  • the play message 71 is used to instruct the audio driver module 827 to play the second audio.
  • when the music application 801 determines that the second audio corresponding to the identifier of the first audio is stored, a play message 71 is sent to the audio driver module 827.
  • the audio driver module 827 controls the speaker to play the second audio.
  • the music application 801 sends a play message 72 to the audio driver module 827.
  • the play message 72 is used to instruct the audio driver module 827 to play the first audio.
  • when the music application 801 determines that the micro control unit supports decoding the first audio, a play message 72 is sent to the audio driver module 827.
  • the audio driver module 827 controls the speaker to play the first audio.
  • the audio driver module 827 plays the first audio.
  • when the microcontroller unit uses the speaker, if the application processor receives a request from a designated application to use the speaker, the application processor may notify the microcontroller unit to stop using the speaker and allocate the speaker to the designated application.
  • the designated application may include but is not limited to call applications, such as call application 302. It should be noted that in the embodiment of the present application, the designated application may refer to an application that does not play audio through the microcontroller unit.
  • the application processor includes a focus proxy module, and when the microcontroller unit of the electronic device 100 uses a speaker, the focus proxy module of the application processor can obtain the focus of the speaker from the focus management module.
  • the focus management module can notify the focus proxy module when the focus of the speaker is occupied by other applications.
  • the focus proxy module can notify the microcontroller unit, and the microcontroller unit can perform an operation of pausing the audio playback after receiving the notification from the focus proxy module.
  • in other words, the focus proxy module acts on behalf of the microcontroller unit: when an application of the application processor occupies the focus, the microcontroller unit can be notified through the focus proxy module, so that the microcontroller unit can perform an operation of pausing the audio playback.
  • the following exemplarily introduces the speaker scheduling method provided by the embodiment of the present application. As shown in FIG. 10, the method includes:
  • the music application 801 sends an occupation message 81 to the focus proxy module 814, indicating that the micro control unit 102 is using the speaker.
  • the music application 801 may send an occupation message 81 to the focus proxy module 814 when the micro control unit 102 plays the first audio.
  • the focus proxy module 814 sends an occupation message 82 to the focus management module 813, indicating that the focus proxy module 814 occupies the speaker.
  • the focus management module 813 sets the speaker to be occupied by the focus proxy module 814 .
  • the call application 802 sends an occupation request 83 to the focus management module 813, indicating that the call application 802 occupies the speaker.
  • the call application 802 can use the speaker to play the incoming call prompt tone after obtaining the speaker focus.
  • the focus management module 813 sets the speaker to be occupied by the call application 802 .
  • the focus management module 813 can determine whether to allocate a speaker to the application based on the priority of the application. Exemplarily, when the focus management module 813 sets the speaker to be occupied by application A, and receives an occupation request from application B, the focus management module 813 can determine whether the priority of application B is higher than the priority of application A. When the focus management module 813 determines that the priority of application B is higher than the priority of application A, it notifies application A to stop occupying the speaker and allocates the speaker to application B. When the focus management module 813 determines that the priority of application B is lower than the priority of application A, it does not change the application occupying the speaker. At this time, application B cannot obtain the speaker.
  • the focus management module 813 sends a stop occupation message 84 to the focus proxy module 814, indicating that the speaker is occupied by other applications.
  • the focus proxy module 814 sends a focus loss message 85 to the music application 801, indicating that the speaker is occupied by other applications.
  • the music application 801 sends a stop playback message 86 to the audio driver module 827, instructing the audio driver module 827 to stop playing the audio.
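The exchange in the steps above amounts to a small priority-based arbiter for the speaker. The sketch below is illustrative only (the class, parameter, and message names are assumptions, not the patent's modules); it shows the focus proxy holding speaker focus on behalf of the micro control unit and losing it to a higher-priority call application.

```python
# Illustrative sketch (names are assumptions): a priority-based speaker-focus
# manager in the spirit of steps S1001-S1007. The focus proxy holds focus on
# behalf of the MCU; a higher-priority app (e.g. a call app) can preempt it.

class FocusManager:
    def __init__(self):
        self.holder = None  # (name, priority, on_loss callback)

    def request(self, name, priority, on_loss=None):
        """Grant the speaker if it is free or the requester outranks the holder."""
        if self.holder is not None:
            held_name, held_prio, held_cb = self.holder
            if priority <= held_prio:
                return False  # lower/equal priority: the holder keeps the speaker
            if held_cb:
                held_cb()  # notify the previous holder it lost focus (messages 84/85)
        self.holder = (name, priority, on_loss)
        return True

events = []
fm = FocusManager()
# The focus proxy occupies the speaker while the MCU plays music (messages 81/82).
fm.request("focus_proxy(mcu)", priority=1,
           on_loss=lambda: events.append("mcu: stop playback"))  # leads to message 86
# An incoming call preempts the music (message 83).
granted = fm.request("call_app", priority=10)
print(granted, events)
```

The key design point mirrored here is that the micro control unit never talks to the focus management module directly; the proxy's `on_loss` callback is the stand-in for the focus-loss notification relayed across the two cores.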
  • when the microcontroller unit determines that playing the first audio is not supported, it can notify the application processor to decode and play the first audio.
  • the microcontroller unit 102 can also occupy the speaker focus provided by the focus management module through the focus proxy module. In this way, since the music application is deployed in the microcontroller unit, when the microcontroller unit decodes and plays the first audio through the application processor, it can use the focus proxy module to occupy the speaker focus of the focus management module of the application processor. When the speaker focus is occupied by another application of the application processor, this makes it convenient for the microcontroller unit to notify the application processor to pause playing the first audio.
  • when the music application 801 receives the user's input to play the first audio, it can notify the focus proxy module 814.
  • the focus proxy module 814 can obtain the focus of the focus management module 813.
  • the focus proxy module 814 can notify the music application 801. Specifically, please refer to the above steps S1002 to S1007, which will not be repeated here.
  • the decoding service module 812 of the application processor 101 can receive the first audio provided by the music application 801, and decode the first audio to obtain the first data.
  • the decoding service module 812 can send the first data to the audio driver module 820, and the audio driver module 820 can control the speaker to play the first data.
  • the music application 801 can notify the application processor 101 to stop playing the first audio. Specifically, the music application 801 can notify the decoding service module 812 to stop decoding, and notify the audio driver module 820 to stop playing the audio.
  • the microcontroller unit of the electronic device 100 can adjust the volume value of the microcontroller unit.
  • the microcontroller unit of the electronic device 100 can notify the application processor of the adjusted volume value, and the application processor can adjust the volume value of the application processor to be the same as the volume value of the microcontroller unit after receiving the notification for adjusting the volume.
  • the microcontroller unit detects the volume adjustment input, and can notify the application processor of the adjusted volume, and the application processor can use the adjusted volume to play audio, for example, a prompt tone.
  • the volume adjustment method provided in the embodiment of the present application is exemplarily introduced.
  • the micro control unit 102 can synchronize the volume to the application processor 101 through the volume synchronization module 823 , wherein the volume adjustment method includes the following steps:
  • the volume adjustment service module 824 receives input for adjusting the volume.
  • the volume adjustment service module 824 sets the volume to the adjusted volume value.
  • the microcontroller unit 102 of the electronic device 100 controls the display screen to display the interface of the music application 801.
  • the interface of the music application 801 also includes a volume setting icon.
  • the microcontroller unit 102 of the electronic device 100 can control the display screen to display a volume bar after receiving an input (e.g., a single click) for the volume setting icon.
  • the input for adjusting the volume can be a sliding input for the volume bar.
  • the volume adjustment service module 824 may set the volume of the micro control unit 102 to an adjusted volume value based on the input for adjusting the volume.
  • the microcontroller unit 102 of the electronic device 100 can detect an input of sliding the volume bar to the left or downward and send the input to the volume adjustment service module 824, and the volume adjustment service module 824 may reduce the volume of the microcontroller unit 102 according to the sliding distance of the sliding input.
  • the microcontroller unit 102 of the electronic device 100 can detect an input of sliding the volume bar to the right or upward and send the input to the volume adjustment service module 824.
  • the volume adjustment service module 824 can increase the volume of the microcontroller unit 102 according to the sliding distance of the sliding input.
  • the volume adjustment service module 824 sends a volume adjustment message 91 to the volume synchronization module 823.
  • the volume adjustment message 91 carries the adjusted volume value.
  • the volume adjustment service module 824 may send a volume adjustment message 91 carrying the adjusted volume value to the volume synchronization module 823 .
  • the volume synchronization module 823 sends a volume adjustment message 92 to the volume adjustment service module 816.
  • the volume adjustment message 92 carries the adjusted volume value.
  • the volume synchronization module 823 may send a volume adjustment message 92 to the volume adjustment service module 816 of the application processor 101, instructing the volume adjustment service module 816 to adjust the volume of the application processor 101 so that the volume value of the application processor 101 and the volume value of the micro control unit 102 remain the same.
  • the volume adjustment service module 816 sets the volume to the adjusted volume value.
  • the volume adjustment service module 816 of the application processor 101 may set the volume value carried in the volume adjustment message 92 as the volume value of the application processor 101 .
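The volume path above — adjust on the micro control unit, then mirror the value on the application processor via messages 91 and 92 — reduces to a small synchronization sketch. The class and method names below are illustrative assumptions, not the patent's modules.

```python
# Illustrative sketch (not the patent's code): keeping the AP volume in sync
# with the MCU volume, in the spirit of the volume adjustment steps above.

class VolumeService:
    def __init__(self, name):
        self.name = name
        self.volume = 50  # arbitrary initial volume value

class VolumeSync:
    """Stands in for the volume synchronization modules on the two cores."""
    def __init__(self, local, peer):
        self.local, self.peer = local, peer

    def adjust(self, value):
        self.local.volume = value   # local core sets the adjusted volume value
        self.peer.volume = value    # the adjustment message carries it to the peer core

mcu = VolumeService("mcu")
ap = VolumeService("ap")
VolumeSync(mcu, ap).adjust(80)
print(mcu.volume == ap.volume == 80)  # True
```

Synchronizing the value rather than a delta means the two cores cannot drift even if a message is processed late, which matches the design of carrying the adjusted volume value itself in the message.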


Abstract

This application discloses a processing method and a related apparatus. An electronic device includes an application processor and a micro control unit, where the runtime power consumption of the micro control unit is lower than that of the application processor. When playing a first audio, the electronic device can determine whether the micro control unit supports playing the first audio, and when determining that the micro control unit supports playing the first audio, play the data of the first audio through the micro control unit. When the electronic device determines that the micro control unit does not support playing the first audio, the application processor decodes the first audio to obtain first data. The application processor can send the first data to the micro control unit, and the micro control unit controls a speaker to play the first data. In this way, the electronic device uses the micro control unit to play audio, saving the power the electronic device consumes on audio playback. Moreover, the application processor can switch to a sleep state while the micro control unit controls the speaker to play sound, further saving power.

Description

Processing method and related apparatus
This application claims priority to Chinese patent application No. 202211214665.3, filed with the China National Intellectual Property Administration on September 30, 2022 and entitled "Processing method and related apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of terminal technologies, and in particular, to a processing method and a related apparatus.
Background
With the development of technology, smart wearable devices are becoming increasingly popular. When a smart wearable device plays audio, it needs to process the audio data through an application processor, which keeps the application processor awake for a long time and results in high runtime power consumption. Therefore, how to reduce the power consumption of smart wearable devices and extend their battery life has become an urgent problem to be solved.
Summary
This application provides a processing method and a related apparatus, which play audio through a micro control unit, saving the power consumption of an electronic device.
According to a first aspect, this application provides a processing method applied to an electronic device, where the electronic device includes an application processor, a micro control unit, and a speaker. The method includes:
the application processor receives a first input, where the first input is used to trigger the electronic device to play a first audio using a first application;
in response to the first input, the application processor sends a first message to the micro control unit;
when the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio, the micro control unit sends a second message to the application processor;
after the micro control unit sends the second message to the application processor, the micro control unit controls the speaker to play the first audio, and the application processor switches to a sleep state.
In this way, the micro control unit plays the first audio, saving the power consumption of the electronic device. Moreover, the application processor switches to the sleep state, which can further save device power.
In a possible implementation, the method further includes:
when the micro control unit determines, based on the first message, that the micro control unit does not support playing the first audio, the micro control unit sends a third message to the application processor;
in response to the third message, the application processor decodes the first audio to obtain first data;
the application processor sends the first data to the micro control unit;
the micro control unit controls the speaker to play the first data.
In this way, when the micro control unit does not support playing the first audio, it can obtain the decoded first data from the application processor and play it.
In a possible implementation, the first message includes a first audio format of the first audio; that the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio specifically includes:
when the micro control unit determines, based on the first audio format, that the micro control unit supports decoding the first audio in the first audio format, determining that the micro control unit supports playing the first audio.
In this way, the micro control unit can determine, based on the audio format of the first audio, whether it supports playing the audio.
In a possible implementation, that after the micro control unit sends the second message to the application processor, the micro control unit controls the speaker to play the first audio and the application processor switches to the sleep state specifically includes:
in response to the second message, the application processor sends the first audio to the micro control unit, where the second message indicates that the micro control unit supports decoding the first audio;
after receiving the first audio, the micro control unit decodes the first audio to obtain first data;
the micro control unit controls the speaker to play the first data;
after finishing sending the first audio, the application processor switches to the sleep state.
In this way, when the micro control unit supports decoding the first audio, it can obtain the first audio from the application processor and play it, and the application processor can enter the sleep state after sending the first audio to the micro control unit, saving power.
In a possible implementation, the first message includes an identifier of the first audio; that the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio specifically includes:
when the micro control unit determines, based on the identifier of the first audio, that a second audio corresponding to the identifier of the first audio is stored, determining that the micro control unit supports playing the first audio.
In this way, since the micro control unit stores the audio decoded by the application processor, when the micro control unit plays that audio again, it can directly play the stored audio data without the application processor decoding it again, saving the computing resources of the application processor as well as device power.
In a possible implementation, the second audio includes the decoded first data; that the micro control unit controls the speaker to play the first audio specifically includes:
the micro control unit controls the speaker to play the second audio indicated by the identifier of the first audio.
In a possible implementation, the second audio includes the first data encoded in a second audio format, and the micro control unit supports decoding the second audio in the second audio format; that the micro control unit controls the speaker to play the first audio specifically includes:
the micro control unit decodes the second audio indicated by the identifier of the first audio to obtain the first data;
the micro control unit controls the speaker to play the first data.
In this way, the micro control unit encodes the audio data decoded by the application processor in an audio format supported by the micro control unit, which can save the storage space of the micro control unit.
In a possible implementation, after the micro control unit sends the second message to the application processor, the method further includes:
after receiving the second message, the application processor sends, to the micro control unit, a fourth message used to instruct the micro control unit to play the first audio;
after receiving the fourth message, the micro control unit controls the speaker to play the first audio.
In this way, after receiving the micro control unit's feedback indicating that it supports playing the first audio, the application processor can notify the micro control unit to play the first audio.
In a possible implementation, after the micro control unit receives the fourth message, the method further includes:
the micro control unit sends a fifth message to the application processor, where the fifth message is used to instruct the application processor to switch to the sleep state;
that the application processor switches to the sleep state specifically includes:
in response to the fifth message, the application processor switches to the sleep state.
In this way, the application processor can switch to the sleep state only after determining that the micro control unit is playing the first audio.
In a possible implementation, when the micro control unit controls the speaker to play audio, the method further includes:
the application processor detects a first request of a second application, where the first request is used to request use of the speaker;
the application processor sends a sixth message to the micro control unit, where the sixth message is used to instruct the micro control unit to stop playing audio.
In a possible implementation, the priority of the second application is higher than the priority of the first application.
In this way, the application processor can ensure that an application with a higher priority uses the speaker first.
In a possible implementation, while the micro control unit is playing audio, the method further includes:
the application processor receives an input for adjusting the volume, and sets the volume value of the application processor to the adjusted volume value;
the application processor sends the adjusted volume value to the micro control unit;
the micro control unit sets the volume value of the micro control unit to the adjusted volume value.
In this way, after receiving the input for adjusting the volume, the application processor can synchronize the volumes of the application processor and the micro control unit.
In a possible implementation, when the electronic device is in a state of continuously playing audio and the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio, the method further includes:
after finishing playing the first audio, the micro control unit notifies the application processor to switch to a non-sleep state;
after switching to the non-sleep state, the application processor sends a seventh message to the micro control unit, where the first audio and a third audio belong to the same playlist;
the micro control unit determines, based on the seventh message, whether the micro control unit supports playing the third audio.
In this way, when the electronic device continuously plays multiple audios, the application processor can, after the micro control unit finishes playing the first audio, query the micro control unit as to whether it supports playing the third audio.
According to a second aspect, this application provides another processing method applied to an electronic device, where the electronic device includes an application processor, a micro control unit, and a speaker. The method includes:
the micro control unit receives a first input, where the first input is used to trigger the electronic device to play a first audio using a first application;
in response to the first input, the micro control unit determines, based on first information of the first audio, whether the micro control unit supports playing the first audio;
when the micro control unit determines, based on the first information, that the micro control unit supports playing the first audio, the micro control unit controls the speaker to play the first audio.
In this way, the micro control unit can run the first application in place of the application processor, saving device power. The micro control unit can also play the first audio, further saving device power.
In a possible implementation, before the micro control unit receives the first input, the method further includes:
the application processor controls the display screen to display a first interface, where the first interface includes an icon of the first application;
the application processor receives a second input for the icon of the first application;
in response to the second input, the application processor sends a first instruction to the micro control unit, where the first instruction is used to instruct the micro control unit to display the interface of the first application;
after sending the first instruction, the application processor switches to a sleep state;
in response to the first instruction, the micro control unit controls the display screen to display the interface of the first application, where the interface of the first application includes a first control, the first control is used to trigger the electronic device to play the first audio, and the first input is an input for the first control.
In this way, when the application processor determines that the first application is run by the micro control unit, it can notify the micro control unit to display the interface of the first application, saving device power.
In a possible implementation, the method further includes:
when the micro control unit determines, based on the first information, that the micro control unit does not support playing the first audio, the micro control unit sends a first message to the application processor;
in response to the first message, the application processor decodes the first audio to obtain first data;
the application processor sends the first data to the micro control unit;
the micro control unit controls the speaker to play the first data.
In this way, when the micro control unit does not support playing the first audio, it can obtain the decoded first data from the application processor and play it.
In a possible implementation, the third message includes the first audio.
In a possible implementation, the first message is used to trigger the application processor to switch to a non-sleep state.
In this way, after receiving the third message for decoding the first audio, the application processor can switch to the non-sleep state and decode the first audio.
In a possible implementation, the first information includes a first audio format of the first audio; that the micro control unit determines, based on the first information, that the micro control unit supports playing the first audio specifically includes:
when the micro control unit determines, based on the first audio format, that the micro control unit supports decoding the first audio in the first audio format, determining that the micro control unit supports playing the first audio.
In this way, the micro control unit can determine, based on the audio format of the first audio, whether it supports playing the audio.
In a possible implementation, that the micro control unit controls the speaker to play the first audio specifically includes:
the micro control unit decodes the first audio to obtain first data;
the micro control unit controls the speaker to play the first data.
In a possible implementation, the first information includes an identifier of the first audio; that the micro control unit determines, based on the first information, that the micro control unit supports playing the first audio specifically includes:
when the micro control unit determines, based on the identifier of the first audio, that a second audio corresponding to the identifier of the first audio is stored, determining that the micro control unit supports playing the first audio.
In this way, since the micro control unit stores the audio decoded by the application processor, when the micro control unit plays that audio again, it can directly play the stored audio data without the application processor decoding it again, saving the computing resources of the application processor as well as device power.
In a possible implementation, the second audio includes the decoded first data; that the micro control unit controls the speaker to play the first audio specifically includes:
the micro control unit controls the speaker to play the second audio indicated by the identifier of the first audio.
In a possible implementation, the second audio includes the first data encoded in a second audio format, and the micro control unit supports decoding the second audio in the second audio format; that the micro control unit controls the speaker to play the first audio specifically includes:
the micro control unit decodes the second audio indicated by the identifier of the first audio to obtain the first data;
the micro control unit controls the speaker to play the first data.
In this way, the micro control unit encodes the audio data decoded by the application processor in an audio format supported by the micro control unit, which can save the storage space of the micro control unit.
In a possible implementation, when the micro control unit controls the speaker to play audio, the method further includes:
the application processor detects a first request of a second application, where the first request is used to request use of the speaker;
the application processor sends a sixth message to the micro control unit, where the sixth message is used to instruct the micro control unit to stop playing audio.
In a possible implementation, the priority of the second application is higher than the priority of the first application.
In this way, the application processor can ensure that an application with a higher priority uses the speaker first.
In a possible implementation, while the micro control unit is playing audio, the method further includes:
the application processor receives an input for adjusting the volume, and sets the volume value of the application processor to the adjusted volume value;
the application processor sends the adjusted volume value to the micro control unit;
the micro control unit sets the volume value of the micro control unit to the adjusted volume value.
In this way, after receiving the input for adjusting the volume, the application processor can synchronize the volumes of the application processor and the micro control unit.
In a possible implementation, the electronic device is in a state of continuously playing audio and the micro control unit determines, based on the first information, that the micro control unit supports playing the first audio; the method further includes:
after finishing playing the first audio, if the micro control unit determines, based on second information of a third audio, that playing the third audio is not supported, the micro control unit notifies the application processor to switch to a non-sleep state and sends the third audio to the application processor;
after switching to the non-sleep state, the application processor decodes the third audio.
In this way, when the electronic device continuously plays multiple audios, the micro control unit can, upon reaching an audio it does not support playing, wake up and notify the application processor to decode that audio.
According to a third aspect, this application provides another processing method, including:
a first electronic device receives a first input, where the first input is used to trigger the electronic device to play a first audio using a first application;
in response to the first input, the first electronic device sends a first message to a second electronic device, where the first message is used to instruct the second electronic device to determine whether the second electronic device supports playing the first audio;
after receiving a second message sent by the second electronic device, the first electronic device switches to a sleep state.
In a possible implementation, the method further includes:
the application processor receives a third message sent by the micro control unit, where the third message is used to instruct the second electronic device to decode the first audio;
in response to the third message, the first electronic device decodes the first audio to obtain first data;
the application processor sends the first data to the micro control unit.
In a possible implementation, that the first electronic device switches to the sleep state after receiving the second message sent by the second electronic device specifically includes:
in response to the second message, the application processor sends the first audio to the micro control unit, where the second message indicates that the micro control unit supports decoding the first audio;
after finishing sending the first audio, the application processor switches to the sleep state.
In a possible implementation, after the application processor receives the second message, the method further includes:
the application processor sends, to the micro control unit, a fourth message used to instruct the micro control unit to play the first audio.
In a possible implementation, that the application processor switches to the sleep state specifically includes:
the application processor receives a fifth message sent by the micro control unit and switches to the sleep state, where the fifth message is used to instruct the application processor to switch to the sleep state.
In a possible implementation, the method further includes:
while the micro control unit controls the speaker to play audio, the application processor detects a first request of a second application, where the first request is used to request use of the speaker;
the application processor sends a sixth message to the micro control unit, where the sixth message is used to instruct the micro control unit to stop playing audio.
In a possible implementation, the priority of the second application is higher than the priority of the first application.
In a possible implementation, the method further includes:
while the micro control unit is playing audio, the application processor receives an input for adjusting the volume, and sets the volume value of the application processor to the adjusted volume value;
the application processor sends the adjusted volume value to the micro control unit.
In a possible implementation, the method further includes:
after receiving a notification sent by the micro control unit to switch to a non-sleep state, the application processor switches to the non-sleep state;
after switching to the non-sleep state, the application processor sends a seventh message to the micro control unit, where the first audio and a third audio belong to the same playlist.
According to a fourth aspect, this application provides another processing method, including:
after receiving a first message sent by a first electronic device, a second electronic device determines whether the second electronic device supports playing a first audio;
when the second electronic device determines, based on the first message, that the second electronic device supports playing the first audio, the second electronic device sends a second message to the first electronic device;
after the second electronic device sends the second message to the first electronic device, the second electronic device controls a speaker to play the first audio.
In a possible implementation, the method further includes:
when the micro control unit determines, based on the first message, that the micro control unit does not support playing the first audio, the micro control unit sends a third message to the application processor;
the micro control unit controls the speaker to play the first data.
In a possible implementation, the first message includes a first audio format of the first audio; that the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio specifically includes:
when the micro control unit determines, based on the first audio format, that the micro control unit supports decoding the first audio in the first audio format, determining that the micro control unit supports playing the first audio.
In a possible implementation, that after the micro control unit sends the second message to the application processor, the micro control unit controls the speaker to play the first audio specifically includes:
after receiving the first audio sent by the application processor, the micro control unit decodes the first audio to obtain first data;
the micro control unit controls the speaker to play the first data.
In a possible implementation, the first message includes an identifier of the first audio; that the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio specifically includes:
when the micro control unit determines, based on the identifier of the first audio, that a second audio corresponding to the identifier of the first audio is stored, determining that the micro control unit supports playing the first audio.
In a possible implementation, the second audio includes the decoded first data; that the micro control unit controls the speaker to play the first audio specifically includes:
the micro control unit controls the speaker to play the second audio indicated by the identifier of the first audio.
In a possible implementation, the second audio includes the first data encoded in a second audio format, and the micro control unit supports decoding the second audio in the second audio format; that the micro control unit controls the speaker to play the first audio specifically includes:
the micro control unit decodes the second audio indicated by the identifier of the first audio to obtain the first data;
the micro control unit controls the speaker to play the first data.
In a possible implementation, after the micro control unit sends the second message to the application processor, the method further includes:
after receiving a fourth message sent by the application processor, the micro control unit controls the speaker to play the first audio, where the fourth message is used to instruct the micro control unit to play the first audio.
In a possible implementation, after the micro control unit receives the fourth message, the method further includes:
the micro control unit sends a fifth message to the application processor, where the fifth message is used to instruct the application processor to switch to the sleep state.
In a possible implementation, the micro control unit receives the adjusted volume value sent by the application processor, and sets the volume value of the micro control unit to the adjusted volume value.
In a possible implementation, when the electronic device is in a state of continuously playing audio and the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio, the method further includes:
after finishing playing the first audio, the micro control unit notifies the application processor to switch to a non-sleep state;
after receiving a seventh message sent by the application processor, the micro control unit determines, based on the seventh message, whether the micro control unit supports playing a third audio.
According to a fifth aspect, this application provides another processing method, including:
a second electronic device receives a first input, where the first input is used to trigger the electronic device to play a first audio using a first application;
in response to the first input, the second electronic device determines, based on first information of the first audio, whether the second electronic device supports playing the first audio;
when the second electronic device determines, based on the first information, that the second electronic device supports playing the first audio, the second electronic device controls a speaker to play the first audio.
In a possible implementation, before the micro control unit receives the first input, the method further includes:
the micro control unit receives a first instruction sent by the application processor, where the first instruction is used to instruct the micro control unit to display the interface of the first application;
in response to the first instruction, the micro control unit controls the display screen to display the interface of the first application, where the interface of the first application includes a first control, the first control is used to trigger the electronic device to play the first audio, and the first input is an input for the first control.
In a possible implementation, the method further includes:
when the micro control unit determines, based on the first information, that the micro control unit does not support playing the first audio, the micro control unit sends a first message to the application processor;
after receiving first data sent by the application processor, the micro control unit controls the speaker to play the first data.
In a possible implementation, the first message is used to trigger the application processor to switch to a non-sleep state.
In a possible implementation, the first information includes a first audio format of the first audio; that the micro control unit determines, based on the first information, that the micro control unit supports playing the first audio specifically includes:
when the micro control unit determines, based on the first audio format, that the micro control unit supports decoding the first audio in the first audio format, determining that the micro control unit supports playing the first audio.
In a possible implementation, that the micro control unit controls the speaker to play the first audio specifically includes:
the micro control unit decodes the first audio to obtain first data;
the micro control unit controls the speaker to play the first data.
In a possible implementation, the first information includes an identifier of the first audio; that the micro control unit determines, based on the first information, that the micro control unit supports playing the first audio specifically includes:
when the micro control unit determines, based on the identifier of the first audio, that a second audio corresponding to the identifier of the first audio is stored, determining that the micro control unit supports playing the first audio.
In a possible implementation, the second audio includes the decoded first data; that the micro control unit controls the speaker to play the first audio specifically includes:
the micro control unit controls the speaker to play the second audio indicated by the identifier of the first audio.
In a possible implementation, the second audio includes the first data encoded in a second audio format, and the micro control unit supports decoding the second audio in the second audio format; that the micro control unit controls the speaker to play the first audio specifically includes:
the micro control unit decodes the second audio indicated by the identifier of the first audio to obtain the first data;
the micro control unit controls the speaker to play the first data.
In a possible implementation, when the micro control unit controls the speaker to play audio, the method further includes:
the application processor detects a first request of a second application, where the first request is used to request use of the speaker;
the application processor sends a sixth message to the micro control unit, where the sixth message is used to instruct the micro control unit to stop playing audio.
In a possible implementation, the priority of the second application is higher than the priority of the first application.
In a possible implementation, the micro control unit receives the adjusted volume value sent by the application processor, and sets the volume value of the micro control unit to the adjusted volume value.
In a possible implementation, the electronic device is in a state of continuously playing audio and the micro control unit determines, based on the first information, that the micro control unit supports playing the first audio; the method further includes:
after finishing playing the first audio, if the micro control unit determines, based on second information of a third audio, that playing the third audio is not supported, the micro control unit notifies the application processor to switch to a non-sleep state and sends the third audio to the application processor.
According to a sixth aspect, this application provides another processing method, including:
a first electronic device receives a first message sent by a second electronic device, where the first message includes a first audio;
the first electronic device decodes the first audio to obtain first data;
the first electronic device sends the first data to the second electronic device, where the first data is used by the second electronic device for playback through a speaker.
In a possible implementation, the method further includes:
the application processor controls the display screen to display a first interface, where the first interface includes an icon of the first application;
the application processor receives a second input for the icon of the first application;
in response to the second input, the application processor sends a first instruction to the micro control unit, where the first instruction is used to instruct the micro control unit to display the interface of the first application;
after sending the first instruction, the application processor switches to a sleep state.
In a possible implementation, the first message is used to trigger the application processor to switch to a non-sleep state.
In a possible implementation, the method further includes:
while the micro control unit is playing audio, the application processor receives an input for adjusting the volume, and sets the volume value of the application processor to the adjusted volume value;
the application processor sends the adjusted volume value to the micro control unit.
In a possible implementation, the method further includes: after receiving a third audio sent by the micro control unit, the application processor switches to a non-sleep state and decodes the third audio.
According to a seventh aspect, this application provides a processing apparatus, including an application processor and a micro control unit, where:
the application processor is configured to receive a first input, where the first input is used to trigger the electronic device to play a first audio using a first application;
the application processor is further configured to send a first message to the micro control unit in response to the first input;
the micro control unit is configured to send a second message to the application processor when the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio;
the micro control unit is configured to control the speaker to play the first audio after the micro control unit sends the second message to the application processor, with the application processor switching to a sleep state.
In a possible implementation, the micro control unit is configured to send a third message to the application processor when the micro control unit determines, based on the first message, that the micro control unit does not support playing the first audio;
the application processor is configured to decode the first audio in response to the third message to obtain first data;
the application processor is further configured to send the first data to the micro control unit;
the micro control unit is further configured to control the speaker to play the first data.
In a possible implementation, the first message includes a first audio format of the first audio;
the micro control unit is specifically configured to determine that the micro control unit supports playing the first audio when the micro control unit determines, based on the first audio format, that the micro control unit supports decoding the first audio in the first audio format.
In a possible implementation, the application processor is specifically configured to send the first audio to the micro control unit in response to the second message, where the second message indicates that the micro control unit supports decoding the first audio;
the micro control unit is further configured to decode the first audio after receiving it to obtain first data;
the micro control unit is further configured to control the speaker to play the first data;
the application processor is further configured to switch to the sleep state after finishing sending the first audio.
In a possible implementation, the first message includes an identifier of the first audio;
the micro control unit is configured to determine that the micro control unit supports playing the first audio when the micro control unit determines, based on the identifier of the first audio, that a second audio corresponding to the identifier of the first audio is stored.
In a possible implementation, the second audio includes the decoded first data; the micro control unit is specifically configured to control the speaker to play the second audio indicated by the identifier of the first audio.
In a possible implementation, the second audio includes the first data encoded in a second audio format, and the micro control unit supports decoding the second audio in the second audio format; the micro control unit is specifically configured to decode the second audio indicated by the identifier of the first audio to obtain the first data;
the micro control unit is further configured to control the speaker to play the first data.
In a possible implementation, the application processor is further configured to send, to the micro control unit after receiving the second message, a fourth message used to instruct the micro control unit to play the first audio;
the micro control unit is further configured to control the speaker to play the first audio after receiving the fourth message.
In a possible implementation, the micro control unit is further configured to send a fifth message to the application processor after the micro control unit receives the fourth message, where the fifth message is used to instruct the application processor to switch to the sleep state;
the application processor is specifically configured to switch to the sleep state in response to the fifth message.
In a possible implementation, the application processor is configured to detect a first request of a second application while the micro control unit controls the speaker to play audio, where the first request is used to request use of the speaker;
the application processor is configured to send a sixth message to the micro control unit after detecting the first request, where the sixth message is used to instruct the micro control unit to stop playing audio.
In a possible implementation, the priority of the second application is higher than the priority of the first application.
In a possible implementation, the application processor is further configured to receive an input for adjusting the volume while the micro control unit is playing audio, and set the volume value of the application processor to the adjusted volume value;
the application processor is further configured to send the adjusted volume value to the micro control unit;
the micro control unit is further configured to set the volume value of the micro control unit to the adjusted volume value.
In a possible implementation,
the micro control unit is further configured to notify the application processor to switch to a non-sleep state after finishing playing the first audio, when the electronic device is in a state of continuously playing audio and the micro control unit determines, based on the first message, that the micro control unit supports playing the first audio;
the application processor is further configured to send a seventh message to the micro control unit after switching to the non-sleep state, where the first audio and a third audio belong to the same playlist;
the micro control unit is further configured to determine, based on the seventh message, whether the micro control unit supports playing the third audio.
According to an eighth aspect, this application provides another processing apparatus, including an application processor and a micro control unit, where:
the micro control unit is configured to receive a first input, where the first input is used to trigger the electronic device to play a first audio using a first application;
the micro control unit is configured to determine, in response to the first input and based on first information of the first audio, whether the micro control unit supports playing the first audio;
the micro control unit is further configured to control the speaker to play the first audio when the micro control unit determines, based on the first information, that the micro control unit supports playing the first audio.
In a possible implementation, before the micro control unit receives the first input:
the application processor is further configured to control the display screen to display a first interface, where the first interface includes an icon of the first application;
the application processor is further configured to receive a second input for the icon of the first application;
the application processor is further configured to send a first instruction to the micro control unit in response to the second input, where the first instruction is used to instruct the micro control unit to display the interface of the first application;
the application processor is further configured to switch to a sleep state after sending the first instruction;
the micro control unit is further configured to control the display screen to display the interface of the first application in response to the first instruction, where the interface of the first application includes a first control, the first control is used to trigger the electronic device to play the first audio, and the first input is an input for the first control.
In a possible implementation, the micro control unit is configured to send a first message to the application processor when the micro control unit determines, based on the first information, that the micro control unit does not support playing the first audio;
the application processor is further configured to decode the first audio in response to the first message to obtain first data;
the application processor is further configured to send the first data to the micro control unit;
the micro control unit is further configured to control the speaker to play the first data.
In a possible implementation, the first message is used to trigger the application processor to switch to a non-sleep state.
In a possible implementation, the first information includes a first audio format of the first audio;
the micro control unit is specifically configured to determine that the micro control unit supports playing the first audio when the micro control unit determines, based on the first audio format, that the micro control unit supports decoding the first audio in the first audio format.
In a possible implementation, the micro control unit is specifically configured to decode the first audio to obtain first data;
the micro control unit is further configured to control the speaker to play the first data.
In a possible implementation, the first information includes an identifier of the first audio; the micro control unit is specifically configured to determine that the micro control unit supports playing the first audio when the micro control unit determines, based on the identifier of the first audio, that a second audio corresponding to the identifier of the first audio is stored.
In a possible implementation, the second audio includes the decoded first data; the micro control unit is specifically configured to control the speaker to play the second audio indicated by the identifier of the first audio.
In a possible implementation, the second audio includes the first data encoded in a second audio format, and the micro control unit supports decoding the second audio in the second audio format;
the micro control unit is specifically configured to decode the second audio indicated by the identifier of the first audio to obtain the first data;
the micro control unit is further configured to control the speaker to play the first data.
In a possible implementation, the application processor is configured to detect a first request of a second application while the micro control unit controls the speaker to play audio, where the first request is used to request use of the speaker;
the application processor is further configured to send a sixth message to the micro control unit upon detecting the first request of the second application, where the sixth message is used to instruct the micro control unit to stop playing audio.
In a possible implementation, the priority of the second application is higher than the priority of the first application.
In a possible implementation, the application processor is further configured to receive an input for adjusting the volume while the micro control unit is playing audio, and set the volume value of the application processor to the adjusted volume value;
the application processor is further configured to send the adjusted volume value to the micro control unit;
the micro control unit is further configured to set the volume value of the micro control unit to the adjusted volume value.
In a possible implementation, the micro control unit is further configured to: when the electronic device is in a state of continuously playing audio and the micro control unit determines, based on the first information, that the micro control unit supports playing the first audio, determine, after finishing playing the first audio and based on second information of a third audio, whether playing the third audio is supported;
the micro control unit is further configured to notify the application processor to switch to a non-sleep state and send the third audio to the application processor when determining, based on the second information of the third audio, that playing the third audio is not supported;
the application processor is further configured to decode the third audio after switching to the non-sleep state.
According to a ninth aspect, this application provides an electronic device, including one or more processors and one or more memories. The one or more memories are coupled to the one or more processors and are configured to store computer program code, where the computer program code includes computer instructions. When the one or more processors execute the computer instructions, the electronic device is caused to perform the processing method in any possible implementation of any one of the foregoing aspects.
According to a tenth aspect, an embodiment of this application provides a computer storage medium, including computer instructions. When the computer instructions run on an electronic device, the electronic device is caused to perform the processing method in any possible implementation of any one of the foregoing aspects.
According to an eleventh aspect, an embodiment of this application provides a chip system. The chip system is applied to an electronic device and includes one or more processors, and the processors are configured to invoke computer instructions to cause the electronic device to perform the processing method in any possible implementation of any one of the foregoing aspects.
Brief Description of Drawings
FIG. 1 is a schematic diagram of a hardware structure of an electronic device 100 according to an embodiment of this application;
FIG. 2 is a schematic flowchart of a processing method according to an embodiment of this application;
FIG. 3 is a schematic diagram of a software architecture according to an embodiment of this application;
FIG. 4 is a schematic flowchart of a processing method according to an embodiment of this application;
FIG. 5 is a schematic flowchart of a speaker scheduling method according to an embodiment of this application;
FIG. 6 is a schematic flowchart of a volume adjustment method according to an embodiment of this application;
FIG. 7 is a schematic flowchart of another processing method according to an embodiment of this application;
FIG. 8 is a schematic diagram of another software architecture according to an embodiment of this application;
FIG. 9 is a schematic flowchart of another processing method according to an embodiment of this application;
FIG. 10 is a schematic flowchart of another speaker scheduling method according to an embodiment of this application;
FIG. 11 is a schematic flowchart of another volume adjustment method according to an embodiment of this application.
Detailed Description
The following clearly and thoroughly describes the technical solutions in the embodiments of this application with reference to the accompanying drawings. In the descriptions of the embodiments of this application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the following three cases: only A exists, both A and B exist, and only B exists.
The terms "first" and "second" below are used merely for the purpose of description and shall not be understood as implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, a feature limited by "first" or "second" may explicitly or implicitly include one or more of the features. In the descriptions of the embodiments of this application, unless otherwise specified, "multiple" means two or more.
The term "user interface (UI)" in the following embodiments of this application is a medium interface for interaction and information exchange between an application or the operating system and a user; it implements conversion between the internal form of information and a form acceptable to the user. A user interface is source code written in a specific computer language such as Java or the extensible markup language (XML); the interface source code is parsed and rendered on the electronic device and finally presented as content that the user can recognize. A common presentation form of a user interface is the graphical user interface (GUI), which refers to a graphically displayed user interface related to computer operations. It may be a visual interface element such as text, an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, or a widget displayed on the display screen of the electronic device.
An embodiment of this application provides a processing method. The electronic device 100 includes an application processor and a micro control unit, where the runtime power consumption of the micro control unit is lower than that of the application processor. When playing a first audio, the electronic device 100 can determine whether the micro control unit supports playing the first audio, and when determining that the micro control unit supports playing the first audio, play the data of the first audio through the micro control unit. When the electronic device 100 determines that the micro control unit does not support playing the first audio, the application processor decodes the first audio to obtain first data. The application processor can send the first data to the micro control unit, and the micro control unit controls the speaker to play the first data. In this way, the electronic device 100 uses the micro control unit to play audio; since the power consumption of the micro control unit is lower than that of the application processor, this saves the power the electronic device 100 consumes on audio playback. Moreover, the application processor can switch to a sleep state while the micro control unit controls the speaker to play sound, further saving power.
When the application processor is in the sleep state (also called a standby state or low-power state), the current input to the application processor is small and its power consumption is low; when the application processor is in a non-sleep state (also called a high-power state), the current input to the application processor is large and its power consumption is high. It can be understood that an application processor in the non-sleep state consumes more power than an application processor in the sleep state, and the input current of an application processor in the non-sleep state is larger than that of an application processor in the sleep state.
接下来介绍本申请实施例提供的一种电子设备100。
电子设备100可以是手机、平板电脑、桌面型计算机、膝上型计算机、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、人工智能(artificial intelligence,AI)设备、可穿戴式设备、车载设备、智能家居设备和/或智慧城市设备,本申请实施例对该电子设备的具体类型不作特殊限制。
请参考图1,图1示例性示出了本申请实施例提供的一种电子设备100的硬件结构示意图。
如图1所示,电子设备100可以包括有应用处理器(application processor,AP)101、微控制单元(microcontroller unit,MCU)102、电源开关103、存储器104、音频模块105和扬声器105A等等。上述各个模块可以通过总线或者其它方式连接,本申请实施例以通过总线连接为例。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
其中,应用处理器101和微控制单元102都可以用于读取并执行计算机可读指令。并且可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
在一些实施例中,应用处理器101可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
其中,I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,应用处理器101可以包含多组I2C总线。应用处理器101可以通过不同的I2C总线接口分别耦合不同的器件,例如,触摸传感器,充电器,闪光灯,摄像头等。例如:应用处理器101可以通过I2C接口耦合触摸传感器,使应用处理器101与触摸传感器通过I2C总线接口通信,实现电子设备100的触摸功能。I2S接口还可以用于音频通信。在一些实施例中,应用处理器101可以通过I2S总线与音频模块105耦合,实现应用处理器101与音频模块105之间的通信。
需要说明的是,微控制单元102也可以包括上述一个或多个接口。可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
具体实现中,应用处理器101可以集成有中央处理器(central process unit,CPU)、图形处理器(graphics processing unit,GPU)、视频编解码器、内存子系统等多个模块。在一些示例中,应用处理器101可以为Arm-coretex-A核处理器,其时钟频率(又称为工作频率)超过1GHz,通常包括4个及以上的处理核心。应用处理器101的随机访问存储器的内存容量约2GB及以上。应用处理器101可以用于运行Linux、Android、Windows、鸿蒙等操作系统。应用处理器101的运行时电流通常在100mAh,待机电流通常在4mAh。在本 申请实施例中,应用处理器101可以用于解码音频,得到待播放的音频的数据。其中,应用处理器101支持的解码多种音频格式的音频,例如,动态影像专家压缩标准音频层面3(moving picture experts group audio layer III,MP3)、高级音频编码(advanced audio coding,ACC)、自适应多速率(adaptive multi-rate,AMR)脉冲编码调制(pulse code modulation,PCM)、Ogg(oggvobis)、第三代合作伙伴计划(the 3rd generation partner project,3GP)、高级串流格式(advanced streaming format,ASF)、AV1、传输流(transport stream,TS)、多媒体容器文件格式(mkv file format,MKV)、MP4(mpeg-4part 14)、微软媒体音频(windows media audio,WMA)、波形声音文件(WAV)、M4A、音频数据传输流(audio data transport stream,ADTS)、SLK等。需要说明的是,应用处理器101不能通过系统提供的服务解码SLK音频格式的音频,应用处理器101可以基于应用提供的组件解码SLK音频格式的音频。
微控制单元102可以包括中央处理器、内存、计数器(Timer)以及一个或多个接口。在一些示例中,微控制单元102可以为Arm Cortex-M核处理器,其时钟频率(又称为工作频率)约192MHz,处理器核心通常为单核。微控制单元102的随机访问存储器的内存容量约2MB。微控制单元102可以支持轻量级物联网操作系统的运行,例如,LiteOS、鸿蒙等操作系统。微控制单元102的运行时电流通常在2mA,待机电流通常在0.1mA。例如,微控制单元102可以为STM32L4R9芯片、Dialog单片机等。在本申请实施例中,微控制单元102也可以用于解码音频,得到音频数据。需要说明的是,由于微控制单元102的时钟频率较低,运行内存较小,微控制单元102仅能支持解码部分音频格式的音频。例如,微控制单元102支持解码的音频格式有PCM、WAV、AMR、MP3、AAC、SLK。相对于应用处理器101,微控制单元102不支持解码Ogg、3GP、ASF、TS、MKV、MP4、WMA、M4A、ADTS等音频格式的音频。
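示例性的,上述两级处理器的解码能力差异可以用如下示意性的Python代码刻画(格式集合取自上文示例,函数名与数据结构均为便于说明的假设,并非对本申请的限定):

```python
# 示意性代码:按处理器类型判断其是否支持解码某音频格式(格式集合取自上文示例,非穷举)
AP_FORMATS = {"MP3", "AAC", "AMR", "PCM", "OGG", "3GP", "ASF", "AV1",
              "TS", "MKV", "MP4", "WMA", "WAV", "M4A", "ADTS", "SLK"}
MCU_FORMATS = {"PCM", "WAV", "AMR", "MP3", "AAC", "SLK"}

def supports_decode(processor: str, audio_format: str) -> bool:
    """processor 取 'AP' 或 'MCU',返回该处理器能否解码给定格式(大小写不敏感)。"""
    formats = AP_FORMATS if processor == "AP" else MCU_FORMATS
    return audio_format.upper() in formats
```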
在一些示例中,音乐应用提供权限受限的音频。应用处理器101可以使用音乐应用提供的参数,解码该权限受限的音频,得到音频数据。微控制单元102不支持解码权限受限的音频,因此,若某个音频权限受限,无论该音频为何种音频格式,微控制单元102都不支持解码该音频。
需要说明的是,应用处理器101和微控制单元102支持的音频格式仅为示例,本申请实施例对此不作限定。
电源开关103可以用于控制电源向电子设备100供电。
存储器104用于存储各种软件程序和/或多组指令。具体实现中,存储器104可以包括高速随机存取的存储器,并且也可以包括非易失性存储器,例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。在一些实施例中,应用处理器101和/或微控制单元102中的存储器为高速缓冲存储器。该存储器可以保存刚用过或循环使用的指令或数据。如果需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了应用处理器101和/或微控制单元102的等待时间,因而提高了系统的效率。在一些示例中,存储器104与应用处理器101、微控制单元102耦合。
电子设备100可以通过音频模块105,扬声器105A,以及应用处理器101(或微控制单元102)等实现音频功能。例如音乐播放等。
其中,音频模块105用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。扬声器105A,也称“喇叭”,用于将音频电信号转换为声音信号。
可选的,电子设备100还可以包括受话器,麦克风等。其中,受话器,也称“听筒”,用于将音频电信号转换成声音信号。麦克风,也称“话筒”,“传声器”,用于将声音信号转换为电信号。
可选的,电子设备100还可以包括显示屏(图1中未示出),显示屏可以用于显示图像、视频、控件、文字信息等。显示屏可以包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极管或主动矩阵有机发光二极管(active-matrix organic light-emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Mini-LED,Micro-LED,Micro-OLED,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏,N为大于1的正整数。
可选的,电子设备100还可以包括通信模块(图1中未示出),通信模块可以包括蓝牙模块、WLAN模块等。电子设备100可以通过通信模块接收或发射无线信号。电子设备100可以通过通信模块与其他电子设备建立通信连接,并基于该通信连接与其他电子设备进行数据交互。
可选的,电子设备100可以包括有一个或多个传感器。例如,电子设备100可以包括触摸传感器,该触摸传感器也可以称为“触控器件”。触摸传感器可以设置于显示屏,由触摸传感器与显示屏组成触摸屏,也可以称为“触控屏”。触摸传感器可以用于检测作用于其上或附近的触摸操作。
在本申请一些实施例中,电子设备100为智能手表,该电子设备100还可以包括有表带和表盘。表盘可以包括有上述显示屏,以用于显示图像、视频、控件、文字信息等等。表带可以用于将电子设备100固定在人体四肢部位以便于穿戴。
在一种可能的实现方式中,电子设备100包括应用处理器和微控制单元。电子设备100的处于非休眠状态的应用处理器可以在接收到播放第一音频的输入后,向微控制单元发送用于查询微控制单元是否支持播放第一音频的查询请求。若应用处理器收到微控制单元回复的结果为支持播放第一音频,可以通知微控制单元播放第一音频,并且在微控制单元播放第一音频时,切换为休眠状态。若应用处理器收到微控制单元回复的结果为不支持播放第一音频,可以针对第一音频执行解码操作,得到第一数据。应用处理器可以将第一数据发送至微控制单元,微控制单元可以控制扬声器播放该第一数据。这样,通过功耗较低的电子设备100的微控制单元控制扬声器播放音频,使得功耗较高的应用处理器不必执行音频播放操作,节约设备功耗。
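示例性的,上述“先查询、再分流”的处理流程可以用如下示意性的Python代码概括(其中的类与函数均为便于理解的假设桩实现,并非本申请的限定):

```python
# 示意性代码:应用处理器收到播放输入后的分流逻辑(对象与接口名均为假设)
class StubMcu:
    def __init__(self, supported):
        self.supported = set(supported)   # 微控制单元支持解码的音频格式
        self.played = None                # 记录最终播放的内容,便于观察流程走向

    def query_supports(self, audio):      # 对应"查询请求/查询结果"的交互
        return audio["format"] in self.supported

    def play(self, audio):                # 微控制单元自行解码并控制扬声器播放
        self.played = ("decoded_by_mcu", audio["id"])

    def play_pcm(self, data):             # 播放应用处理器解码得到的第一数据
        self.played = ("pcm", data)

def ap_handle_play(mcu, audio):
    if mcu.query_supports(audio):          # 查询微控制单元是否支持播放
        mcu.play(audio)                    # 支持:由微控制单元播放
        return "sleep"                     # 应用处理器切换为休眠状态
    data = f"pcm:{audio['id']}"            # 不支持:应用处理器解码,得到第一数据
    mcu.play_pcm(data)                     # 将第一数据发送至微控制单元播放
    return "sleep_after_decode"
```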
示例性的,如图2所示,该处理方法包括:
S201.应用处理器控制显示屏显示包括音乐应用图标的桌面。
此时,应用处理器处于非休眠状态,应用处理器可以控制显示屏显示包括音乐应用图标的桌面。
S202.应用处理器接收到针对音乐应用图标的输入。
应用处理器可以获取针对音乐应用图标的输入。其中,针对音乐应用图标的输入可以为触摸输入,例如单击、长按等等,或者,该输入也可以为语音指令输入、浮空手势输入(例如,在电子设备100的摄像头上方挥手等)、肢体输入(例如,晃动电子设备100等)等等,本申请实施例对此不作限定。
需要说明的是,音乐应用可以为任何用于播放音频的应用,该应用可以为出厂时安装在电子设备100中的,或者,该应用也可以为用户通过网络下载的。例如,音乐应用可以为图3所示的音乐应用301。
S203.应用处理器显示音乐应用的界面,该界面用于播放音频,该界面包括播放控件。
应用处理器可以响应于针对音乐应用图标的输入,控制显示屏显示音乐应用的界面。该音乐应用的界面可以包括播放控件,播放控件可以用于触发电子设备100播放第一音频。
可选的,该音乐应用的界面还可以包括第一音频的歌曲信息,该歌曲信息可以为包括但不限于歌曲名称、歌手名称等等。
S204.应用处理器接收到针对播放控件的输入。
其中,播放控件可以用于触发电子设备100播放第一音频。应用处理器可以接收到针对播放控件的输入,响应于该输入执行后续播放第一音频的步骤。
其中,第一音频为应用处理器存储的音频。或者,第一音频为存储在电子设备100的存储器的音频,该存储器不属于应用处理器,也不属于微控制单元。
在一些示例中,应用处理器可以通过传感器驱动模块(例如触摸传感器模块)获取用户触摸的屏幕触点位置信息,应用处理器可以基于屏幕触点位置信息确定该输入为针对播放控件的输入。
在另一些示例中,不限于上述步骤S201-步骤S204,处于未休眠状态的应用处理器可以响应于用户用于播放第一音频的语音指令,执行步骤S205。应用处理器在接收到用户的语音指令之前,控制显示屏显示的界面可以不为音乐应用的界面。也就是说,应用处理器在控制显示屏显示任意界面时,都可以响应于用户用于播放第一音频的语音指令,执行步骤S205。
或者,电子设备100显示文件管理应用的界面时,该界面包括存储的一个或多个音频对应的音频选项,该一个或多个音频包括第一音频,该一个或多个音频选项包括第一音频选项。电子设备100可以接收用户针对第一音频选项的输入后,执行步骤S205。
综上所述,应用处理器在接收到使用音乐应用播放第一音频的输入后,都可以执行步骤S205。
在一些实施例中,应用处理器可以在接收到针对播放控件的输入后,判断第一音频是否权限受限。应用处理器可以在判定出第一音频权限受限时,执行步骤S208。应用处理器可以在判定出第一音频权限不受限时,执行步骤S205。其中,第一音频的文件属性包括权限标志位,应用处理器可以基于第一音频的权限标志位判断第一音频是否权限受限。例如,当应用处理器获取到的第一音频的权限标志位的值为第一值时,应用处理器判定出第一音频权限受限,当应用处理器获取到的第一音频的权限标志位的值为第二值时,应用处理器判定出第一音频权限不受限,第一值与第二值不同。
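示例性的,基于权限标志位判断音频是否权限受限的逻辑可以示意如下(其中第一值、第二值分别假设取1和0,仅为便于说明的示意):

```python
# 示意性代码:基于文件属性中的权限标志位判断音频是否权限受限(1/0的取值为假设)
RESTRICTED_FLAG = 1    # 第一值:权限受限
UNRESTRICTED_FLAG = 0  # 第二值:权限不受限

def is_restricted(file_attrs: dict) -> bool:
    """file_attrs 为音频的文件属性,perm_flag 为假设的权限标志位字段名。"""
    return file_attrs.get("perm_flag") == RESTRICTED_FLAG
```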
S205.应用处理器向微控制单元发送查询请求21,该查询请求21包括第一音频的标识、音频格式。
查询请求21可以用于查询微控制单元是否支持播放第一音频。其中,第一音频的标识可以为第一音频的名称。音频格式又称为音频类型,可以理解为音频的音频文件格式。在一些示例中,应用处理器可以获取音频的音频文件的后缀名,得到该音频的音频格式。
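示例性的,通过音频文件的后缀名得到音频格式的做法可以示意如下(仅为一种可能的实现示意):

```python
import os

# 示意性代码:从音频文件名的后缀得到音频格式(对应上文"获取后缀名"的做法)
def audio_format_from_name(filename: str) -> str:
    ext = os.path.splitext(filename)[1]   # 含点的后缀,如 ".mp3"
    return ext.lstrip(".").upper()        # 规范化为大写的格式名
```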
S206.微控制单元判断是否存储第一音频的标识对应的第二音频。
需要说明的是,微控制单元可以存储应用处理器解码音频得到的音频数据,微控制单元存储的应用处理器解码得到的音频数据可以称为缓存音频。也就是说,若电子设备100在上一次播放音频时,应用处理器解码过该第一音频,应用处理器可以将解码第一音频得到的第一数据发送给微控制单元。微控制单元可以存储该第一数据。
在一些示例中,微控制单元可以直接存储第一数据,得到缓存音频,该包括第一数据的缓存音频可以称为第二音频。
在另一些示例中,为了减小第二音频的存储空间,使得电子设备100可以存储更多的第二音频。微控制单元可以使用微控制单元支持的音频格式编码该第一数据,得到第二音频。也就是说,第二音频可以包括使用微控制单元支持的音频格式编码后的第一数据。这样,通过编码压缩音频数据,可以节约电子设备100的存储空间,使得电子设备100可以存储更多的第二音频。
微控制单元可以在收到查询请求21后,基于查询请求21的第一音频的标识,以及微控制单元存储的音频的标识与存储的音频数据对应关系,判断是否存储有待播放的第一音频的标识对应的第二音频。
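示例性的,微控制单元维护“音频标识与缓存音频数据的对应关系”并据此查找第二音频的过程可以示意如下(数据结构为便于说明的假设):

```python
# 示意性代码:按第一音频的标识查找缓存的第二音频(对应步骤S206与步骤S215)
class AudioCache:
    def __init__(self):
        self._cache = {}                 # 音频标识 -> 缓存的音频数据

    def store(self, audio_id: str, data: bytes):
        self._cache[audio_id] = data     # 存储应用处理器解码得到的第一数据

    def lookup(self, audio_id: str):
        return self._cache.get(audio_id) # 未命中时返回 None
```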
当微控制单元判定出存储有该第二音频时,可以执行步骤S207。当微控制单元判定出未存储第二音频时,可以执行步骤S210。
S207.微控制单元向应用处理器发送查询结果22,查询结果22表示微控制单元存储有第一音频的标识对应的第二音频。
微控制单元可以在判定出存储有第一音频的标识对应的第二音频时,向应用处理器发送查询结果22。查询结果22可以表示存储有第一音频的标识对应的第二音频,即,微控制单元支持播放该第一音频。
S208.微控制单元控制扬声器播放第二音频。
需要说明的是,当微控制单元存储有第一音频对应的第二音频时,若第二音频包括应用处理器解码第一音频得到的第一数据,微控制单元可以直接读取第一数据,并控制扬声器播放该第一数据。若第二音频包括编码后的第一数据,微控制单元首先解码第二音频,得到第一数据,再控制扬声器播放该第一数据。
可以理解的是,微控制单元执行步骤S207和步骤S208的顺序不限于先执行步骤S207,再执行步骤S208。微控制单元也可以先执行步骤S208,再执行步骤S207。或者,同时执行该两个步骤,本申请实施例对此不作限定。
S209.应用处理器接收到查询结果22,切换至休眠状态。
应用处理器可以在接收到查询结果22时,从非休眠状态切换至休眠状态。这样,通过微控制单元控制扬声器播放第一音频,并将应用处理器切换为休眠状态,极大降低了电子设备100播放音频的功耗。
可选的,微控制单元可以在判定出存储有第二音频后,向应用处理器发送该查询结果22。应用处理器可以在接收到查询结果22时,向微控制单元发送播放指示,播放指示用于指示微控制单元播放音频。微控制单元可以在收到播放指示后,控制扬声器播放第二音频。应用处理器还可以在向微控制单元发送播放指示后,从非休眠状态切换至休眠状态。
或者,微控制单元可以在接收到播放指示后,控制扬声器播放第二音频,同时向应用处理器发送用于指示微控制单元正在播放音频的播放状态信息。应用处理器可以在收到播放状态信息后,从非休眠状态切换至休眠状态。
需要说明的是,电子设备100在播放第一音频时,若电子设备100处于连续播放音频的状态(例如,顺序播放状态或随机播放状态)时,电子设备100可以在播放完第一音频后,继续播放第一音频所在的播放列表中的其他音频。例如,电子设备100可以在控制扬声器播放该第一数据后,继续播放第三音频,第三音频与第一音频不同,第三音频与第一音频属于同一个播放列表。
在此,微控制单元可以在播放完第二音频后,通知应用处理器从休眠状态切换为非休眠状态。应用处理器可以在切换为非休眠状态后,再次向微控制单元发送查询请求,该查询请求包括第三音频的标识、音频格式,具体描述可以参见上述步骤S205,在此不再赘述。
S210.微控制单元判断是否支持解码第一音频的音频格式。
微控制单元可以在判定出未存储第一音频的标识对应的第二音频后,基于查询请求21的第一音频的音频格式,判断微控制单元是否支持解码第一音频的音频格式。
其中,电子设备100的微控制单元存储有微控制单元可以解码的音频格式。电子设备100可以查找存储的音频格式中是否存在待播放的第一音频的音频格式。
微控制单元可以在判定出存储的音频格式中存在待播放的第一音频的音频格式时,即,判定出微控制单元支持解码待播放的第一音频,执行步骤S216。
微控制单元可以在判定出存储的音频格式中不存在待播放的第一音频的音频格式时,即,判定出微控制单元不支持解码待播放的第一音频,执行步骤S211。
需要说明的是,不限于上述先判断微控制单元是否存储有第一音频的标识对应的第二音频,再在微控制单元未存储第二音频时判断微控制单元不支持解码第一音频的音频格式的顺序。微控制单元可以先判断微控制单元是否支持解码第一音频的音频格式,再在微控制单元不支持解码第一音频的音频格式时,判断微控制单元是否存储有第一音频的标识对应的第二音频。或者,微控制单元可以同时判断微控制单元是否支持解码第一音频的音频格式以及微控制单元是否存储有第一音频的标识对应的第二音频,本申请实施例对此不作限定。
在一些实施例中,查询请求21可以只包括第一音频的标识,相应的,微控制单元可以只判断是否存储有第二音频。或者,查询请求21可以只包括第一音频的音频格式,相应的,微控制单元可以只判断是否支持解码第一音频。
S211.微控制单元向应用处理器发送查询结果23,查询结果23表示微控制单元不支持播放第一音频。
微控制单元可以在判定出未存储有第一音频的标识对应的第二音频,且不支持解码第一音频的音频格式时,向应用处理器发送查询结果23。查询结果23可以表示微控制单元不支持播放该第一音频。
S212.应用处理器解码第一音频,得到第一数据。
应用处理器可以在接收到查询结果23后,通过解码算法,解码第一音频,得到第一数据。其中,解码算法的参数可以为指定参数,或者,音乐应用提供的参数。这样,电子设备100在解码第三方音乐应用提供的音频时,可以使用第三方音乐应用提供的参数,实现第一音频的解码操作。
S213.应用处理器将第一数据发送给微控制单元。
应用处理器可以将解码得到的第一数据发送给微控制单元。
S214.微控制单元控制扬声器播放第一数据。
在一些示例中,微控制单元可以将第一数据传输给图1所示的音频模块105,和音频模块105、扬声器105A共同实现第一音频的播放操作。
S215.微控制单元存储第一数据。
微控制单元可以存储第一数据,得到第二音频。
可选的,微控制单元可以使用支持的编码方式,将第一数据编码得到指定音频格式的第二音频。其中,指定音频格式为微控制单元支持的音频格式,具体的,请参见图1所示实施例,在此不再赘述。这样,通过编码压缩音频数据,可以节约电子设备100的存储空间,使得电子设备100可以存储更多的第二音频。
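示例性的,“先编码压缩再存储,播放前再解码”的思路可以用标准库压缩算法示意如下(此处以zlib作占位,实际应为微控制单元支持的音频编码格式,并非本申请的限定):

```python
import zlib

# 示意性代码:存储前压缩第一数据,播放前解压(zlib仅为占位,示意"压缩以节约存储空间")
def encode_for_cache(pcm: bytes) -> bytes:
    return zlib.compress(pcm)

def decode_from_cache(blob: bytes) -> bytes:
    return zlib.decompress(blob)
```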
需要说明的是,电子设备100在播放第一音频时,若电子设备100处于连续播放音频的状态(例如,顺序播放状态或随机播放状态)时,电子设备100可以在播放完第一音频后,继续播放第一音频所在的播放列表中的其他音频。例如,电子设备100可以在控制扬声器播放该第一数据后,继续播放第三音频,第三音频与第一音频不同,第三音频与第一音频属于同一个播放列表。
在此,应用处理器可以在解码第一音频得到第一数据后,再次向微控制单元发送查询请求,该查询请求包括第三音频的标识、音频格式,具体描述可以参见上述步骤S205,在此不再赘述。
S216.微控制单元向应用处理器发送查询结果24,查询结果24表示微控制单元支持解码第一音频。
微控制单元可以在判定出支持解码第一音频的音频格式时,向应用处理器发送查询结果24。查询结果24可以表示微控制单元支持解码该第一音频。查询结果24还可以用于通知应用处理器向微控制单元发送第一音频。
S217.应用处理器向微控制单元发送第一音频。
应用处理器可以在接收到查询结果24后,向微控制单元发送第一音频。
可选的,若第一音频存储在电子设备100的不属于应用处理器且不属于微控制单元的存储器中时,应用处理器可以将第一音频的存储路径发送给微控制单元。微控制单元可以基于存储路径,读取第一音频。
S218.微控制单元控制扬声器播放第一音频。
微控制单元解码第一音频得到第一数据,并且控制扬声器播放第一数据。
S219.应用处理器切换为休眠状态。
应用处理器可以在向微控制单元发送第一音频后,从非休眠状态切换为休眠状态。
在一些实施例中,微控制单元可以在收到第一音频后,向应用处理器发送用于指示微控制单元正在播放第一音频的播放状态信息。应用处理器可以在收到该播放状态信息后,从非休眠状态切换至休眠状态。
在另一些实施例中,应用处理器可以在发送第一音频时,向微控制单元发送播放指示。微控制单元可以在收到播放指示和第一音频后,解码并播放第一音频。微控制单元还可以在接收到第一音频和播放指示后,向应用处理器发送用于指示微控制单元正在播放第一音频的播放状态信息。应用处理器可以在收到该播放状态信息后,从非休眠状态切换至休眠状态。
需要说明的是,电子设备100在播放第一音频时,若电子设备100处于连续播放音频的状态(例如,顺序播放状态或随机播放状态)时,电子设备100可以在播放完第一音频后,继续播放第一音频所在的播放列表中的其他音频。例如,电子设备100可以在控制扬声器播放该第一数据后,继续播放第三音频,第三音频与第一音频不同,第三音频与第一音频属于同一个播放列表。
在此,微控制单元可以在播放完第一音频后,通知应用处理器从休眠状态切换为非休眠状态。应用处理器可以在切换为非休眠状态后,再次向微控制单元发送查询请求,该查询请求包括第三音频的标识、音频格式,具体描述可以参见上述步骤S205,在此不再赘述。
在一些实施例中,应用处理器在解码第一音频得到第一数据后,可以直接控制扬声器播放第一数据。并且将第一数据发送至微控制单元,指示微控制单元存储该第一数据。
在另一些示例中,电子设备100可以使用应用处理器判断微控制单元是否支持播放第一音频,其中,应用处理器可以从微控制单元获取微控制单元存储的音频数据对应的音频的标识,以及微控制单元支持解码的音频格式,判断微控制单元是否支持播放第一音频。其中,应用处理器判断微控制单元是否支持播放第一音频的描述可以参见步骤S206至步骤S210,在此不再赘述。
在一些实施例中,若电子设备100处于连续播放音频的状态(例如,顺序播放状态或随机播放状态)时,电子设备100可以在播放完第一音频后,继续播放第一音频所在的播放列表中的其他音频。例如,电子设备100可以在控制扬声器播放该第一数据后,继续播放第三音频,第三音频与第一音频不同,第三音频与第一音频属于同一个播放列表。由于电子设备100的应用处理器101在播放不同的音频时,都会执行上述步骤S205,查询微控制单元是否支持播放该音频,增加了功耗。
为了节约电子设备100的功耗,应用处理器可以在接收到播放第一音频的输入时,向微控制单元发送包括播放列表的所有音频的标识与音频格式的查询请求。微控制单元可以在播放下一个音频时,不需要应用处理器发送查询请求,直接将查询结果发送至应用处理器。
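示例性的,一次性下发整个播放列表的标识与音频格式进行批量查询的优化可以示意如下(接口与数据结构均为便于说明的假设):

```python
# 示意性代码:一次查询整个播放列表,避免逐曲唤醒应用处理器发送查询请求
def batch_query(mcu_supported_formats, cached_ids, playlist):
    """返回 {音频标识: 微控制单元是否可独立播放} 的查询结果。

    可独立播放的条件:已缓存对应的第二音频,或微控制单元支持解码该格式。
    """
    results = {}
    for track in playlist:
        results[track["id"]] = (track["id"] in cached_ids or
                                track["format"] in mcu_supported_formats)
    return results
```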
在另一些实施例中,微控制单元可以在播放至微控制单元不支持播放的音频,或者播放至微控制单元支持解码的音频时,向应用处理器发送相应的查询结果。具体的,关于微控制单元判断是否支持播放某个音频,以及微控制单元向应用处理器发送查询结果的描述可以参见图2所示实施例,在此不再赘述。
接下来结合图1所示的电子设备100的硬件结构,示例性的介绍一种应用处理器和微控制单元的软件架构。
示例性的,如图3所示,分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,操作系统包括但不限于应用层(又称为应用程序层),应用程序框架层,系统库以及内核层。
其中,应用层可以包括一系列应用程序包。其中,应用处理器101的应用层可以包括但不限于音乐应用301,通话应用302,桌面应用303等应用程序。其中,音乐应用301可以用于播放第一音频。通话应用302可以用于接听通话。桌面应用303可以用于显示包括应用程序的图标的桌面。
应用程序框架层可以为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架,应用程序框架层包括一些预先定义的函数。
其中,应用处理器101的应用程序框架层可以包括但不限于协同播放模块311、解码服务模块312、焦点管理模块313、焦点代理模块314、音量同步模块315、音量调节服务模块316、双核通信模块317、电源管理模块318等等。
其中,协同播放模块311可以用于将音乐应用301的音频发送至解码服务模块312。
解码服务模块312可以用于解码音频,得到音频的数据。
焦点管理模块313可以用于分配、管理扬声器。应用程序可以从焦点管理模块313获取扬声器的焦点,应用程序可以在获取到扬声器的焦点后,使用扬声器播放音频。焦点管理模块313在某个应用程序占用扬声器焦点时,若焦点管理模块313收到另一个应用程序占用扬声器的请求,通知该应用程序失去扬声器焦 点,并且将扬声器焦点分配给发送请求的另一个应用程序。这样,应用程序可以在收到丢失扬声器焦点的通知后,暂停播放音频。
焦点代理模块314可以用于代表微控制单元102,当微控制单元102占用扬声器时,可以通知焦点代理模块314。焦点代理模块314可以向焦点管理模块313发送占用扬声器的请求,让焦点管理模块313记录焦点代理模块314正在使用扬声器。焦点管理模块313可以在收到其他应用程序(例如,通话应用)占用扬声器的请求时,通知焦点代理模块314。焦点代理模块314可以通知微控制单元102,微控制单元102可以在接收到焦点代理模块314的通知后,执行暂停播放音频的操作。这样,由于应用处理器101的焦点管理模块313只负责应用处理器101的焦点管理,微控制单元102占用扬声器时,应用处理器101的焦点管理模块313无法直接通知微控制单元102。因此,以焦点代理模块314代表微控制单元102,可以在应用处理器101的应用程序占用焦点时,通过焦点代理模块314通知微控制单元102,便于微控制单元102执行暂停播放音频的操作。
例如,当微控制单元102占用扬声器时,即,焦点管理模块313记录焦点代理模块314占用扬声器时,若焦点管理模块313收到通话应用的占用扬声器的请求,焦点管理模块313可以通知焦点代理模块314停止占用扬声器,并且通知通话应用使用扬声器。通话应用可以通过扬声器播放音频(例如,来电铃声)。焦点代理模块314可以通知微控制单元102停止使用扬声器。微控制单元102收到焦点代理模块314的通知后,可以记录播放音频的播放进度。当应用处理器101再次收到用户播放音频的输入后,电子设备100可以从记录的播放进度处继续播放该音频。这样,微控制单元102可以在扬声器被其他应用或模块占用时,保存播放音频的进度,便于下次播放该音频时从存储的播放进度处继续播放该音频。
音量同步模块315可以用于在收到音量调节服务模块316发送的音量值后,将该音量值同步到微控制单元102。具体的,音量同步模块315可以将音量值发送给微控制单元102的音量调节服务模块324,用于调整微控制单元102的音量。
音量调节服务模块316可以用于基于用户的音量调节操作,改变电子设备100的应用处理器101的音量。音量调节服务模块316可以将改变后的音量发送给音量同步模块315。
双核通信模块317可以用于应用处理器101和微控制单元102进行数据的传输。例如,将应用处理器101提供的第一音频的第一数据发送给微控制单元102。
电源管理模块318可以用于控制应用处理器101从休眠状态切换为非休眠状态,或者,控制应用处理器101从非休眠状态切换至休眠状态。在一些示例中,电源管理模块318可以控制给应用处理器101提供的输入电流的大小,调整应用处理器101所处的状态,例如,当应用处理器101处于休眠状态时,可以加大输入到应用处理器101的输入电流,让应用处理器101从休眠状态切换为非休眠状态。
微控制单元102的应用程序框架层可以包括但不限于协同播放模块321、解码服务模块322、音量同步模块323、音量调节服务模块324、双核通信模块325等等。
其中,协同播放模块321可以用于判断微控制单元102是否存储有第一音频的标识对应的第二音频,还可以用于判断微控制单元102是否支持解码第一音频。协同播放模块321还可以用于在判定出微控制单元102未存储第二音频,并且不支持解码第一音频时,通知应用处理器101的解码服务模块312解码第一音频。协同播放模块321可以用于接收应用处理器101发送的解码第一音频得到的第一数据,并且将第一数据发送至音频驱动模块327。协同播放模块321还可以用于基于应用处理器解码音频得到的音频数据,得到缓存音频,例如,基于第一数据得到第二音频。
协同播放模块321可以用于在判定出微控制单元102支持解码第一音频时,通知微控制单元102的解码服务模块322解码第一音频。
解码服务模块322可以用于在微控制单元102支持解码第一音频时,解码第一音频,得到第一数据。解码服务模块322可以将第一数据发送至音频驱动模块327,由音频驱动模块327控制扬声器播放该第一数据。或者,当第二音频包括微控制单元102编码后的第一数据时,解码服务模块322可以用于解码第二音频,得到第一数据。需要说明的是,由于应用处理器101和微控制单元102的运行主频、内存容量不同(该差异可以参见图1所示实施例,在此不再赘述),解码服务模块322支持解码的音频格式的数量少于解码服务模块312支持解码的音频格式的数量。解码服务模块322不支持解码的音频格式可以包括但不限于Ogg、3GP、ASF、TS、MKV、MP4、WMA、M4A、ADTS。
音量同步模块323可以用于在收到音量调节服务模块324发送的音量值后,将该音量值同步到应用处理器101。具体的,音量同步模块323可以将音量值发送给应用处理器101的音量调节服务模块316,用于调整应用处理器101的音量。
音量调节服务模块324可以用于基于用户的音量调节操作,改变微控制单元102播放音频时使用的音量。音量调节服务模块324可以将改变后的音量发送给音量同步模块323。
双核通信模块325可以用于微控制单元102和应用处理器101进行数据的传输。例如,应用处理器101的解码服务模块312可以将基于第一音频解码得到的第一数据发送给双核通信模块317,双核通信模块317可以将第一数据发送给双核通信模块325,双核通信模块325再通过协同播放模块321、音频通道模块326将第一数据发给音频驱动模块327,使得微控制单元102可以实现控制扬声器播放应用处理器101解码得到的第一数据的操作。
系统库可以包括多个功能模块。
其中,应用处理器101的系统库可以包括但不限于音频通道模块319。音频通道模块319可以用于应用框架层和内核层之间相互传输音频数据。例如,音频通道模块319可以将解码服务模块312的音频数据发送给内核层的音频驱动模块320。
微控制单元102的系统库可以包括但不限于音频通道模块326。音频通道模块326可以用于应用框架层和内核层之间相互传输音频数据。例如,音频通道模块326可以将解码服务模块322或协同播放模块321的音频数据发送给内核层的音频驱动模块327。
可选的,应用处理器101和/或微控制单元102的系统库还可以包括表面管理器(surface manager),媒体库(Media Libraries)等。表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。
内核层是硬件和软件之间的层。其中,应用处理器101的内核层可以包括但不限于音频驱动模块320、传感器驱动模块和显示驱动模块等。其中,音频驱动模块320可以用于调用相关硬件(如图1所示的音频模块105和扬声器105A)实现音频播放功能。
微控制单元102的内核层可以包括但不限于音频驱动模块327、传感器驱动模块和显示驱动模块等。其中,音频驱动模块327可以用于调用相关硬件(如图1所示的音频模块105和扬声器105A)实现音频播放功能。
结合图3所示的软件架构图,示例性的介绍本申请实施例提供的处理方法。
如图4所示,应用处理器101包括音乐应用301、协同播放模块311和解码服务模块312。微控制单元102包括协同播放模块321和音频驱动模块327。
应用处理器101处于非休眠状态。当应用处理器101控制显示音乐应用301的界面时,该界面包括播放控件,播放控件可以用于触发电子设备100播放第一音频。应用处理器101接收到针对播放控件的输入后,电子设备100的应用处理器101和微控制单元102可以通过本申请实施例提供的处理方法,播放第一音频。其中,该处理方法包括:
S401.音乐应用301向协同播放模块321发送查询请求41,该查询请求41包括第一音频的标识、音频格式等。
应用处理器101接收到该针对播放控件的输入后,通知音乐应用301播放第一音频。音乐应用301可以向协同播放模块321发送查询请求41。
S402.协同播放模块321判断微控制单元102是否存储第一音频的标识对应的第二音频。
协同播放模块321可以基于第一音频的标识,判断微控制单元102是否存储第二音频。
当协同播放模块321判定出未存储待播放的第一音频的标识对应的第二音频时,执行步骤S403。当协同播放模块321判定出存储有待播放的第一音频的标识对应的第二音频时,执行步骤S412。
S403.协同播放模块321判断微控制单元102是否支持解码第一音频的音频格式。
其中,协同播放模块321可以获取解码服务模块322支持解码的音频格式,判断微控制单元102是否支持解码待播放的第一音频。
S404.协同播放模块321向音乐应用301发送查询结果42,查询结果42表示微控制单元102不支持播放第一音频。
S405.音乐应用301将第一音频发送给协同播放模块311。
S406.协同播放模块311将第一音频发送给解码服务模块312。
S407.解码服务模块312解码第一音频,得到第一数据。
S408.解码服务模块312将第一数据发送至协同播放模块321。
S409.协同播放模块321将第一数据发送给音频驱动模块327。
S410.协同播放模块321存储该第一数据。
S411.音频驱动模块327控制扬声器播放该第一数据。
S412.协同播放模块321向音乐应用301发送查询结果43,查询结果43表示微控制单元102存储有第二音频。
S413.音乐应用301通知应用处理器101切换为休眠状态。
其中,音乐应用301可以在接收到查询结果43后,向电源管理模块318发送休眠请求消息,电源管理模块318可以在接收到休眠请求消息后,控制应用处理器101切换为休眠模式。
可选的,音乐应用301可以在收到查询结果43后,向协同播放模块321发送播放指示44,播放指示44用于指示微控制单元102播放第二音频。
可选的,音乐应用301可以在发送播放指示44后,通知应用处理器101切换为休眠状态。或者,协同播放模块321可以在收到播放指示44后,向音乐应用301发送播放状态信息,音乐应用301可以在收到播放状态信息后,才通知应用处理器101切换为休眠状态。
S414.协同播放模块321向音频驱动模块327发送播放消息45,用于指示音频驱动模块327播放第二音频。
若第二音频包括第一数据,微控制单元102的音频驱动模块327可以直接读取第二音频中的第一数据,并控制扬声器播放该第一数据。若第二音频包括编码后的第一数据,微控制单元102的协同播放模块321首先通知解码服务模块322解码第二音频,得到第一数据,再通知音频驱动模块327控制扬声器播放该第一数据。
S415.音频驱动模块327控制扬声器播放第二音频。
S416.协同播放模块321向音乐应用301发送查询结果46,查询结果46表示微控制单元102支持解码第一音频。
S417.音乐应用301向协同播放模块321发送第一音频。
音乐应用301收到查询结果46后,向协同播放模块321发送第一音频。
S418.协同播放模块321向音频驱动模块327发送第一音频。
S419.音频驱动模块327播放第一音频。
当协同播放模块321收到第一音频后,协同播放模块321首先通知解码服务模块322解码第一音频,得到第一数据,再通知音频驱动模块327控制扬声器播放该第一数据。
S420.音乐应用301通知应用处理器101切换为休眠状态。
其中,音乐应用301可以在向协同播放模块321发送第一音频后,向电源管理模块318发送休眠请求消息,电源管理模块318可以在接收到休眠请求消息后,控制应用处理器101切换为休眠模式。
具体的,上述部分步骤的具体描述可以参见图2所示实施例,在此不再赘述。
在一种可能的实现方式中,在微控制单元使用扬声器的过程中,若应用处理器接收到指定应用使用扬声器的请求时,可以通知微控制单元停止使用扬声器,并将扬声器分配给指定应用。其中,指定应用可以包括但不限于通话类应用,例如通话应用302。需要说明的是,在本申请实施例中,指定应用可以代指没有通过微控制单元进行音频播放的应用程序。
接下来结合图3所示的软件架构图,示例性的介绍本申请实施例提供的扬声器调度方法。
如图5所示,该扬声器调度方法包括:
S501.协同播放模块311向音乐应用301发送播放消息51,表示微控制单元102正在播放音频。
其中,协同播放模块311可以在音频驱动模块327控制扬声器播放音频时,向音乐应用301发送播放消息51。
例如,音频驱动模块327可以在执行图4所示的步骤S412之后,通知协同播放模块311正在播放音频。协同播放模块311可以在确定音频驱动模块327正在播放音频后,给音乐应用301发送播放消息51,表示微控制单元102正在播放音频。再例如,该播放消息51可以为图4所示的查询结果46。
S502.音乐应用301向焦点代理模块314发送占用消息52,表示扬声器被微控制单元102占用。
音乐应用301收到播放消息51后,可以给焦点代理模块314发送占用消息52。占用消息52可以表示扬声器被微控制单元102占用。
S503.焦点代理模块314向焦点管理模块313发送占用消息53,表示焦点代理模块314占用扬声器。
焦点代理模块314可以在收到占用消息52后,给焦点管理模块313发送占用消息53,占用扬声器焦点。
S504.焦点管理模块313设置扬声器被焦点代理模块314占用。
S505.通话应用302向焦点管理模块313发送占用请求54,表示通话应用302占用扬声器。
通话应用302可以在接收用户来电时,从焦点管理模块313获取扬声器的焦点。
S506.焦点管理模块313设置扬声器被通话应用302占用。
焦点管理模块313收到占用请求54后,将扬声器焦点分配给通话应用302,并设置扬声器焦点被通话应用302占用。通话应用302可以在获取到扬声器焦点后,使用扬声器播放来电提示音。
可以理解的是,由于扬声器焦点被通话应用302占用,音乐应用301无法使用扬声器播放音频。
在一些实施例中,焦点管理模块313可以基于应用的优先权,判断是否给该应用分配扬声器。示例性的,当焦点管理模块313设置扬声器被应用A占用时,收到来自应用B的占用请求,焦点管理模块313可以判断应用B的优先权是否高于应用A的优先权。当焦点管理模块313判定出应用B的优先权高于应用A的优先权时,通知应用A停止占用扬声器,并将扬声器分配给应用B。当焦点管理模块313判定出应用B的优先权低于应用A的优先权时,不改变占用扬声器的应用。此时,应用B无法获取到扬声器。
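示例性的,焦点管理模块基于优先权分配扬声器焦点的判断逻辑可以示意如下(优先权以数值表示、数值越大优先权越高为假设):

```python
# 示意性代码:按优先权分配扬声器焦点(对应焦点管理模块313的判断逻辑)
class FocusManager:
    def __init__(self):
        self.owner = None   # 当前占用扬声器焦点的(应用, 优先权)
        self.lost = []      # 被通知"停止占用扬声器"的应用,模拟焦点丢失通知

    def request(self, app: str, priority: int) -> bool:
        if self.owner is None:
            self.owner = (app, priority)
            return True
        cur_app, cur_prio = self.owner
        if priority > cur_prio:        # 请求方优先权更高:通知原应用丢失焦点
            self.lost.append(cur_app)
            self.owner = (app, priority)
            return True
        return False                   # 优先权不高于当前占用者:不改变占用
```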
S507.焦点管理模块313向焦点代理模块314发送停止占用消息55,表示扬声器被其他应用占用。
焦点管理模块313可以在扬声器焦点被通话应用302占用后,给焦点代理模块314发送停止占用消息55,表示焦点代理模块314丢失扬声器焦点,扬声器焦点被其他应用占用。
S508.焦点代理模块314向音乐应用301发送焦点丢失消息56,表示扬声器被其他应用占用。
焦点代理模块314可以在接收到停止占用消息55后,给音乐应用301发送焦点丢失消息56。
S509.音乐应用301停止播放第一音频。
音乐应用301可以执行停止播放第一音频的操作,停止使用扬声器。
可选的,音乐应用301可以保存第一音频的播放进度,当通话应用302使用完扬声器,归还扬声器焦点后,获取扬声器焦点,通知微控制单元继续播放第一音频。或者,音乐应用301可以在接收到播放第一音频的输入(例如,图4所示的针对播放控件的输入)后,基于保存的第一音频的播放进度,从丢失扬声器焦点时播放的进度处继续播放该第一音频。
S510.音乐应用301给协同播放模块321发送停止播放消息57,指示微控制单元102停止播放第一音频。
S511.协同播放模块321给音频驱动模块327发送停止播放消息58,指示音频驱动模块327停止播放音频。
需要说明的是,上述通话应用302仅为应用程序的示例,不限于通话应用,其他应用程序也可以调用扬声器时,例如,闹钟应用等,也可以实施图5所示的扬声器调度方法。
在一种可能的实现方式中,电子设备100的应用处理器接收到调整音量的输入后,可以调整应用处理器的音量值。电子设备100的应用处理器可以通知微控制单元调整后的音量值,微控制单元可以在接收调整音量的通知后,将微控制单元的音量值调整至与应用处理器的音量值相同。这样,微控制单元在播放音频的过程中,应用处理器检测到音量调节输入,可以通知微控制单元调整后的音量,微控制单元可以使用调整后的音量继续播放音频。
结合图3所示的软件架构图,示例性的介绍本申请实施例提供的音量调节方法。
如图6所示,应用处理器101可以通过音量同步模块315,将音量同步到微控制单元102,其中,音量调节方法包括以下步骤:
S601.音量调节服务模块316接收到调节音量的输入。
S602.音量调节服务模块316将音量设置为调节后的音量值。
电子设备100的应用处理器101控制显示屏显示音乐应用301的界面。该音乐应用301的界面还包括音量设置图标。电子设备100的应用处理器101可以在接收到针对音量设置图标的输入(例如单击)后,控制显示屏显示音量条。该调节音量的输入可以为针对音量条的滑动输入。
音量调节服务模块316可以基于调节音量的输入,将应用处理器101的音量设置为调节后的音量值。
例如,电子设备100的应用处理器101可以在检测到向左或向下滑动音量条的输入时,将该输入发送给音量调节服务模块316,音量调节服务模块316可以根据滑动输入的滑动距离,减小应用处理器101的音量。
电子设备100的应用处理器101可以在检测到向右或向上滑动音量条的输入时,将该输入发送给音量调节服务模块316,音量调节服务模块316可以根据滑动输入的滑动距离,增大应用处理器101的音量。
S603.音量调节服务模块316向音量同步模块315发送音量调节消息61,音量调节消息61携带有调节后的音量值。
音量调节服务模块316可以在调整应用处理器101的音量值后,将携带有调整后的音量值的音量调节消息61发送给音量同步模块315。
S604.音量同步模块315向音量调节服务模块324发送音量调节消息62,音量调节消息62携带有调节后的音量值。
音量同步模块315收到音量调节消息61后,可以向微控制单元102的音量调节服务模块324发送音量调节消息62,通知音量调节服务模块324调整微控制单元102的音量,使得微控制单元102的音量值和应用处理器101的音量值保持相同。
S605.音量调节服务模块324将音量设置为调节后的音量值。
微控制单元102的音量调节服务模块324收到音量调节消息62后,可以将音量调节消息62中携带的音量值设置为微控制单元102的音量值。
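示例性的,上述步骤S601至步骤S605的音量同步流程可以示意如下(音量取0至100的整数为假设,越界时钳制到合法范围):

```python
# 示意性代码:应用处理器侧调节音量后同步到微控制单元(对应步骤S601-S605)
class VolumeService:
    def __init__(self):
        self.volume = 50    # 假设音量取0-100的整数,初值仅为示意

    def set_volume(self, value: int):
        self.volume = max(0, min(100, value))   # 越界时钳制到合法范围

def sync_volume(ap: VolumeService, mcu: VolumeService, new_value: int):
    ap.set_volume(new_value)     # S602:应用处理器将音量设置为调节后的音量值
    mcu.set_volume(ap.volume)    # S604/S605:同步到微控制单元,保持两者相同
```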
接下来,为了便于后续实施例的描述,首先介绍应用处理器与微控制单元运行应用程序的流程。
具体的,应用处理器(例如,图1所示的应用处理器101)运行应用程序时,可以从应用处理器的存储器中取出应用程序的程序代码。其中,程序代码的编程语言为应用处理器指示的编程语言(例如,Java/C++)。应用处理器可以通过应用处理器的编译器将程序代码编译为计算机指令,并执行这些计算机指令。这些计算机指令中的每一条指令都属于应用处理器的指令集。
同理,微控制单元(例如,图1所示的微控制单元102)运行应用程序时,可以从微控制单元的存储器中取出应用程序的程序代码。其中,程序代码的编程语言为微控制单元指示的编程语言(例如,C/C++)。微控制单元可以通过微控制单元的编译器将程序代码编译为计算机指令,并执行这些计算机指令。这些计算机指令中的每一条指令都属于微控制单元的指令集。
需要说明的是,由于微控制单元和应用处理器的指令集不同,在一些情况下,编程语言也不同。若应用处理器和微控制单元安装有同一个应用程序,应用处理器中存储的应用程序的代码和微控制单元中存储的该应用程序的代码不同。
还需要说明的是,由于应用处理器的指令集与微控制单元的指令集不同,应用程序的代码也不同。应用处理器不能运行微控制单元的应用程序,微控制单元也不能运行应用处理器的应用程序。
其中,由应用处理器运行的应用程序可以称为部署在应用处理器上的应用程序,或者,也可以称为应用处理器的应用程序。由微控制单元运行的应用程序可以称为部署在微控制单元上的应用程序,或者,也可以称为微控制单元的应用程序。
需要说明的是,由于应用处理器和微控制单元的运行主频,内存容量不同,例如,该运行主频,内存容量的差异可以参见图1所示实施例,在此不再赘述。应用处理器可以用于运行复杂度高的计算型业务和网络业务等。例如,应用处理器可以运行桌面、通话、畅连语音、房颤早搏本地筛查、地图等应用程序。微控制单元102可以用于运行简单计算型业务。例如,微控制单元102可以用于运行例如心率检测、压力检测、日常活动、音乐等应用程序。其中,应用处理器运行的应用的代码的编程语言为应用处理器指示的编程语言。微控制单元运行的应用的代码的编程语言为微控制单元指示的编程语言。
其中,电子设备100存储有应用程序的处理器标识,该处理器标识可以用于指示电子设备100通过应用处理器还是微控制单元运行该应用程序。也就是说,电子设备100可以在接收到用户打开某个应用程序的输入时,基于该应用程序的处理器标识确定出该应用程序部署在微控制单元还是部署在应用处理器,再通过处理器标识指示的处理器运行该应用程序。这样,电子设备100可以在微控制单元部署应用程序,并给这些应用程序设置对应的处理器标识,电子设备100可以在用户打开某个应用程序时,快速确定出是否使用微控制单元运行该应用程序,节约设备功耗。示例性的,如表1所示,表1示出了应用程序和对应的处理器标识示例。
表1
应用程序        处理器标识
锻炼应用        微控制单元
信息应用        微控制单元
音乐应用        微控制单元
桌面应用        应用处理器
通话应用        应用处理器
如上表1所示,该表中示出了部分应用程序和其对应的处理器标识。其中,锻炼应用、信息应用和音乐应用的处理器标识为微控制单元。桌面应用、通话应用的处理器标识为应用处理器。也就是说,电子设备100显示桌面时,由应用处理器运行该桌面应用,控制显示屏显示桌面。电子设备100的应用处理器可以在接收到打开某个应用的输入时,判断该应用的处理器标识,决定是否通知微控制单元运行被打开的应用。例如,当电子设备100的应用处理器在控制显示屏显示桌面时,接收到打开音乐应用的输入时,应用处理器基于音乐应用的处理器标识,确定出由微控制单元运行音乐应用,应用处理器可以给微控制单元发送通知消息,该通知消息可以携带音乐应用的标识(例如,应用名称),微控制单元可以在接收到应用处理器的通知消息后,运行该音乐应用,例如,控制显示屏显示音乐应用的界面。再例如,当电子设备100的应用处理器在控制显示屏显示桌面时,接收到打开通话应用的输入时,应用处理器基于通话应用的处理器标识,确定出由应用处理器运行通话应用,应用处理器运行通话应用,例如,控制显示屏显示通话应用的界面。
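示例性的,基于处理器标识将应用分派到应用处理器或微控制单元的查表逻辑可以示意如下(表中条目取自表1的示例;未登记的应用默认由应用处理器运行,该默认值为假设):

```python
# 示意性代码:按处理器标识将被打开的应用分派到AP或MCU(条目取自表1)
PROCESSOR_TABLE = {
    "锻炼应用": "MCU",
    "信息应用": "MCU",
    "音乐应用": "MCU",
    "桌面应用": "AP",
    "通话应用": "AP",
}

def dispatch(app_name: str) -> str:
    # 未登记的应用默认由应用处理器运行(该默认策略为假设)
    return PROCESSOR_TABLE.get(app_name, "AP")
```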
在另一种可能的实现方式中,电子设备100包括应用处理器和微控制单元。电子设备100的微控制单元部署有音乐应用。微控制单元显示音乐应用的界面时,接收到播放音乐应用的第一音频的输入。微控制单元可以响应于该输入,判断微控制单元是否支持播放该第一音频。微控制单元可以在判定出微控制单元不支持播放该音频时,通知应用处理器处理该第一音频,得到第一数据。微控制单元可以获取应用处理器的第一数据,并控制扬声器播放该第一数据。微控制单元可以在判定出微控制单元支持播放该第一音频时,通过微控制单元播放该第一音频。
这样,电子设备100使用微控制单元运行音乐应用、播放第一音频,由于微控制单元的功耗低于应用处理器的功耗,可以节约电子设备100播放音频的功耗。并且,应用处理器可以在微控制单元控制扬声器播放声音时,切换为休眠状态,进一步节约功耗。
示例性的,如图7所示,该处理方法包括:
S701.应用处理器显示包括音乐应用图标的桌面。
此时,应用处理器处于非休眠状态,应用处理器可以控制显示屏显示包括音乐应用图标的桌面。
S702.应用处理器接收针对音乐应用图标的输入。
应用处理器可以获取针对音乐应用图标的输入,其中,该输入的具体描述可以参见图2所示实施例,在此不再赘述。需要说明的是,音乐应用可以为任何用于播放音频的应用,该应用可以为出厂时安装在电子设备100中的,或者,该应用也可以为用户通过网络下载的。例如,音乐应用可以为图8所示的音乐应用801。
S703.应用处理器基于音乐应用的处理器标识,向微控制单元发送用于指示微控制单元运行音乐应用的指示消息71。
应用处理器可以基于音乐应用的标识,确定出音乐应用部署在微控制单元,应用处理器可以给微控制单元发送指示消息71,指示消息71可以携带音乐应用的标识,用于指示微控制单元运行音乐应用。
应用处理器还可以在将指示消息71发送给微控制单元后,从非休眠状态切换为休眠状态。
S704.微控制单元接收到指示消息71,控制显示屏显示音乐应用的界面,该界面用于播放第一音频,该界面包括播放控件。
微控制单元接收指示消息71后,可以控制显示屏显示音乐应用的界面。该音乐应用的界面可以包括播放控件,播放控件可以用于触发电子设备100播放第一音频。
可选的,该音乐应用的界面还可以包括第一音频的歌曲信息,该歌曲信息可以为包括但不限于歌曲名称、歌手名称等等。
S705.微控制单元接收针对播放控件的输入。
其中,播放控件可以用于触发电子设备100播放第一音频。微控制单元可以接收到针对播放控件的输入,响应于该输入,执行步骤S706。
在一些示例中,微控制单元可以通过传感器驱动模块(例如触摸传感器模块)获取用户触摸的屏幕触点位置信息,微控制单元可以基于屏幕触点位置信息确定该输入为针对播放控件的输入。
在另一些示例中,不限于上述步骤S701-步骤S705,处于未休眠状态的应用处理器可以响应于用户用于播放第一音频的语音指令,向微控制单元发送播放请求,播放请求用于指示微控制单元播放第一音频,微控制单元在接收到播放请求后,可以执行步骤S706。应用处理器在接收到用户的语音指令之前,控制显示屏显示的界面可以不为音乐应用的界面。也就是说,应用处理器在控制显示屏显示任意界面时,都可以响应于用户用于播放第一音频的语音指令,向微控制单元发送播放请求。可选的,应用处理器在向微控制单元发送播放请求后,可以从未休眠状态切换为休眠状态。
或者,电子设备100显示文件管理应用的界面时,该界面包括存储的一个或多个音频对应的音频选项,该一个或多个音频包括第一音频,该一个或多个音频选项包括第一音频选项。电子设备100可以接收用户针对第一音频选项的输入后,执行步骤S706。
综上所述,微控制单元在接收到使用音乐应用播放第一音频的输入后,都可以执行步骤S706。
S706.微控制单元基于第一音频的标识,判断是否存储该标识对应的第二音频。
当微控制单元判定出未存储待播放的第一音频的标识对应的第二音频时,执行步骤S707。当微控制单元判定出存储有待播放的第一音频的标识对应的第二音频时,执行步骤S713。
S707.微控制单元基于第一音频的音频格式,判断是否支持解码第一音频。
当微控制单元判定出微控制单元不支持解码待播放的第一音频时,执行步骤S708。当微控制单元判定出微控制单元支持解码待播放的第一音频时,执行步骤S714。
需要说明的是,不限于上述先判断微控制单元是否存储有第一音频的标识对应的第二音频,再在微控制单元未存储第二音频时判断微控制单元不支持解码第一音频的音频格式的顺序。微控制单元可以先判断微控制单元是否支持解码第一音频的音频格式,再在微控制单元不支持解码第一音频的音频格式时,判断微控制单元是否存储有第一音频的标识对应的第二音频。或者,微控制单元可以同时判断微控制单元是否支持解码第一音频的音频格式以及微控制单元是否存储有第一音频的标识对应的第二音频,本申请实施例对此不作限定。
在一些实施例中,微控制单元可以只判断是否存储有第二音频。或者,微控制单元可以只判断是否支持解码第一音频。
S708.微控制单元向应用处理器发送解码消息72,解码消息72指示应用处理器解码第一音频。
微控制单元可以在判定出不支持播放第一音频,即,没有存储第二音频且不支持解码第一音频时,向应用处理器发送解码消息72。解码消息72可以指示应用处理器解码该第一音频。可以理解的是,解码消息72还可以用于唤醒应用处理器,使得应用处理器从休眠状态切换为非休眠状态。
S709.应用处理器解码第一音频,得到第一数据。
应用处理器可以在接收到解码消息72后,从休眠状态切换为非休眠状态,再通过解码算法,解码第一音频,得到第一数据。其中,解码算法的参数可以为指定参数,或者,音乐应用提供的参数。这样,电子设备100可以使用第三方音乐应用提供的参数,实现第一音频的解码操作。
S710.应用处理器向微控制单元发送第一数据。
应用处理器可以在解码得到第一数据后,将第一数据传输给微控制单元。
可选的,应用处理器可以在将第一数据传输给微控制单元后,从非休眠状态切换为休眠状态。
S711.微控制单元控制扬声器播放第一数据。
在一些示例中,微控制单元可以将第一数据传输给图1所示的音频模块105,和音频模块105、扬声器105A共同实现第一数据的播放。
S712.微控制单元存储第一数据。
微控制单元可以存储第一数据,得到第二音频。可选的,微控制单元可以使用支持的编码方式,将第一数据编码得到指定音频格式的第二音频。其中,指定音频格式为微控制单元支持的第一音频格式,具体的,请参见图1所示实施例,在此不再赘述。这样,通过编码压缩第一数据,可以节约电子设备100的存储空间,使得电子设备100可以存储更多的第二音频。
S713.微控制单元控制扬声器播放第二音频。
微控制单元可以在判定出存储有第一音频的标识对应的第二音频后,控制扬声器播放第二音频。
S714.微控制单元控制扬声器播放第一音频。
微控制单元可以在判定出支持解码第一音频时,解码第一音频得到第一数据,并控制扬声器播放该第一数据。
其中,步骤S706、步骤S707、步骤S711-步骤S714的详细描述都可以参见图2所示实施例,在此不再赘述。
在一些实施例中,应用处理器在解码第一音频得到第一数据后,可以直接控制扬声器播放第一数据。并且将第一数据发送至微控制单元,指示微控制单元存储该第一数据。
接下来结合图1所示的电子设备100的硬件结构,示例性的介绍另一种应用处理器和微控制单元的软件架构。
示例性的,如图8所示,微控制单元102的应用层可以包括但不限于音乐应用801等应用程序。应用处理器101的应用层可以包括但不限于通话应用802,桌面应用803等应用程序。其中,音乐应用801可以用于播放第一音频。通话应用802可以用于接听通话。桌面应用803可以用于显示应用程序的图标。
应用处理器101的应用程序框架层可以包括但不限于协同播放模块811、解码服务模块812、焦点管理模块813、焦点代理模块814、音量同步模块815、音量调节服务模块816、双核通信模块817和电源管理模块818等。微控制单元102的应用程序框架层可以包括但不限于协同播放模块821、解码服务模块822、音量同步模块823、音量调节服务模块824和双核通信模块825等。
其中,应用处理器101的系统库可以包括音频通道模块819。微控制单元102的系统库可以包括音频通道模块826。
应用处理器101的内核层可以包括但不限于音频驱动模块820、传感器驱动模块和显示驱动模块。微控制单元102的内核层可以包括但不限于音频驱动模块827、传感器驱动模块和显示驱动模块。
具体的,该应用处理器101和微控制单元102的应用程序框架层、系统库和内核层的描述可以参见图3所示实施例,在此不再赘述。
接下来结合图8所示的软件架构图,示例性的介绍本申请实施例提供的处理方法。
如图9所示,应用处理器101包括协同播放模块811和解码服务模块812。微控制单元102包括音乐应用801、协同播放模块821和音频驱动模块827。
应用处理器101处于非休眠状态,应用处理器101可以控制显示屏显示包括音乐应用801的图标的桌面。应用处理器101可以接收针对音乐应用801的图标的输入,响应于该输入,应用处理器基于音乐应用801的处理器标识,通知微控制单元102运行音乐应用801。应用处理器还可以在通知微控制单元102运行音乐应用801后,从非休眠状态切换为休眠状态。
微控制单元102可以在收到应用处理器101的通知后,控制显示屏显示音乐应用801的界面,该界面用于播放第一音频,该界面包括播放控件。微控制单元102可以接收针对播放控件的输入,通过本申请实施例提供的处理方法,播放第一音频。或者,处于未休眠状态的应用处理器101可以响应于用户用于播放第一音频的语音指令,指示微控制单元102通过本申请实施例提供的处理方法,播放第一音频。其中,该处理方法包括:
S901.音乐应用801基于第一音频的标识,判断是否存储第二音频。
当音乐应用801判定出存储有第一音频的标识对应的第二音频时,执行步骤S914;当音乐应用801判定出未存储第一音频的标识对应的第二音频时,执行步骤S902。
S902.音乐应用801基于第一音频的音频格式,判断微控制单元是否支持解码第一音频。
当音乐应用801判定出微控制单元102不支持解码待播放的第一音频时,执行步骤S903。当音乐应用801判定出支持解码待播放的第一音频时,执行步骤S914。
S903.音乐应用801向协同播放模块821发送第一音频。
音乐应用801判定出未存储有待播放的第一音频的标识对应的第二音频,且不支持解码待播放的第一音频,向协同播放模块821发送第一音频。
S904.协同播放模块821向协同播放模块811发送第一音频。
微控制单元102的协同播放模块821可以在收到音乐应用801提供的第一音频后,可以将第一音频发送给应用处理器101的协同播放模块811。应用处理器101在接收到第一音频时,从休眠状态切换为非休眠状态。
S905.协同播放模块811向解码服务模块812发送第一音频。
S906.解码服务模块812解码第一音频,得到第一数据。
S907.解码服务模块812向协同播放模块821发送第一数据。
S908.协同播放模块821向音频驱动模块827发送第一数据。
S909.音频驱动模块827控制扬声器播放第一数据。
S910.协同播放模块821向音乐应用801发送第一数据。
S911.音乐应用801存储第一数据。
S912.音乐应用801向音频驱动模块827发送播放消息71,播放消息71用于指示音频驱动模块827播放第二音频。
当音乐应用801判定出存储有待播放的第一音频的标识对应的第二音频时,向音频驱动模块827发送播放消息71。
S913.音频驱动模块827控制扬声器播放第二音频。
S914.音乐应用801向音频驱动模块827发送播放消息72,播放消息72用于指示音频驱动模块827播放第一音频。
当音乐应用801判定出支持解码第一音频时,向音频驱动模块827发送播放消息72。
S915.音频驱动模块827控制扬声器播放第一音频。
音频驱动模块827收到播放消息72后,播放第一音频。
具体的,上述各个步骤的具体描述可以参见图7所示实施例,在此不再赘述。
在一种可能的实现方式中,在微控制单元使用扬声器的过程中,若应用处理器接收到指定应用使用扬声器的请求时,可以通知微控制单元停止使用扬声器,并将扬声器分配给指定应用。其中,指定应用可以包括但不限于通话类应用,例如通话应用302。需要说明的是,在本申请实施例中,指定应用可以代指没有通过微控制单元进行音频播放的应用程序。
在一种可能的实现方式中,应用处理器包括焦点代理模块,当电子设备100的微控制单元使用扬声器时,应用处理器的焦点代理模块可以从焦点管理模块获取扬声器的焦点。焦点管理模块可以在扬声器焦点被其他应用程序占用时,通知焦点代理模块。焦点代理模块可以通知微控制单元,微控制单元可以在接收到焦点代理模块的通知后,执行暂停播放音频的操作。这样,由于应用处理器的焦点管理模块只负责应用处理器的焦点管理。微控制单元占用扬声器时,应用处理器的焦点管理模块无法通知微控制单元。因此,以焦点代理模块代表微控制单元,可以在应用处理器的应用程序占用焦点时,通过焦点代理模块通知微控制单元,便于微控制单元执行暂停播放音频的操作。
结合图8所示的软件架构图,示例性的介绍本申请实施例提供的扬声器调度方法。如图10所示,该方法包括:
S1001.音乐应用801向焦点代理模块814发送占用消息81,表示微控制单元102正在使用扬声器。
音乐应用801可以在微控制单元102播放第一音频时,向焦点代理模块814发送占用消息81。
S1002.焦点代理模块814向焦点管理模块813发送占用消息82,表示焦点代理模块814占用扬声器。
S1003.焦点管理模块813设置扬声器被焦点代理模块814占用。
S1004.通话应用802向焦点管理模块813发送占用请求83,表示通话应用802占用扬声器。
可以理解的是,通话应用802可以在获取到扬声器焦点后,使用扬声器播放来电提示音。
S1005.焦点管理模块813设置扬声器被通话应用802占用。
在一些实施例中,焦点管理模块813可以基于应用的优先权,判断是否给该应用分配扬声器。示例性的,当焦点管理模块813设置扬声器被应用A占用时,收到来自应用B的占用请求,焦点管理模块813可以判断应用B的优先权是否高于应用A的优先权。当焦点管理模块813判定出应用B的优先权高于应用A的优先权时,通知应用A停止占用扬声器,并将扬声器分配给应用B。当焦点管理模块813判定出应用B的优先权低于应用A的优先权时,不改变占用扬声器的应用。此时,应用B无法获取到扬声器。
S1006.焦点管理模块813向焦点代理模块814发送停止占用消息84,表示扬声器被其他应用占用。
S1007.焦点代理模块814向音乐应用801发送焦点丢失消息85,表示扬声器被其他应用占用。
S1008.音乐应用801向音频驱动模块827发送停止播放消息86,指示音频驱动模块827停止播放音频。
具体的,上述步骤S1002至步骤S1008的详细描述可以参见图5所示实施例,在此不再赘述。
在一种可能的实现方式中,微控制单元在判定出不支持播放第一音频时,可以通知应用处理器解码并播放第一音频。微控制单元102也可以通过焦点代理模块占用焦点管理模块提供的扬声器焦点。这样,由于音乐应用部署在微控制单元,微控制单元通过应用处理器解码、播放第一音频时,可以使用焦点代理模块占用应用处理器的焦点管理模块的扬声器焦点。当扬声器焦点被应用处理器的其他应用占用后,便于微控制单元通知应用处理器暂停播放第一音频。
具体的,结合图8所示的软件架构,当音乐应用801接收到用户播放第一音频的输入后,可以通知焦点代理模块814。焦点代理模块814可以获取焦点管理模块813的焦点。当焦点管理模块813的扬声器焦点被其他应用占用后,焦点代理模块814可以通知音乐应用801。具体的,可以参见上述步骤S1002至步骤S1007,在此不再赘述。
当应用处理器101解码并播放第一音频时,应用处理器101的解码服务模块812可以接收音乐应用801提供的第一音频,并解码该第一音频,得到第一数据。解码服务模块812可以将第一数据发送给音频驱动模块820,音频驱动模块820可以控制扬声器播放该第一数据。
音乐应用801可以在收到焦点丢失的通知消息后,通知应用处理器101停止播放第一音频。具体的,音乐应用801可以通知解码服务模块812停止解码,并通知音频驱动模块820停止播放音频。
在一种可能的实现方式中,电子设备100的微控制单元接收到调整音量的输入后,可以调整微控制单元的音量值。电子设备100的微控制单元可以通知应用处理器调整后的音量值,应用处理器可以在接收调整音量的通知后,将应用处理器的音量值调整至与微控制单元的音量值相同。这样,微控制单元在播放音频的过程中,微控制单元检测到音量调节输入,可以通知应用处理器调整后的音量,应用处理器可以使用调整后的音量播放音频,例如,提示音。
结合图8所示的软件架构图,示例性的介绍本申请实施例提供的音量调节方法。
如图11所示,微控制单元102可以通过音量同步模块823,将音量同步到应用处理器101,其中,音量调节方法包括以下步骤:
S1101.音量调节服务模块824接收到调节音量的输入。
S1102.音量调节服务模块824将音量设置为调节后的音量值。
电子设备100的微控制单元102控制显示屏显示音乐应用801的界面。该音乐应用801的界面还包括音量设置图标。电子设备100的微控制单元102可以在接收到针对音量设置图标的输入(例如单击)后,控制显示屏显示音量条。该调节音量的输入可以为针对音量条的滑动输入。
音量调节服务模块824可以基于调节音量的输入,将微控制单元102的音量设置为调节后的音量值。
例如,电子设备100的微控制单元102可以在检测到向左或向下滑动音量条的输入,将该输入发送给音量调节服务模块824,音量调节服务模块824可以根据滑动输入的滑动距离,减小微控制单元102的音量。
电子设备100的微控制单元102可以在检测到向右或向上滑动音量条的输入,将该输入发送给音量调节服务模块824,音量调节服务模块824可以根据滑动输入的滑动距离,增大微控制单元102的音量。
S1103.音量调节服务模块824向音量同步模块823发送音量调节消息91,音量调节消息91携带有调节后的音量值。
音量调节服务模块824可以在调整微控制单元102的音量值后,将携带有调整后的音量值的音量调节消息91发送给音量同步模块823。
S1104.音量同步模块823向音量调节服务模块816发送音量调节消息92,音量调节消息92携带有调节后的音量值。
音量同步模块823收到音量调节消息91后,可以向应用处理器101的音量调节服务模块816发送音量调节消息92,通知音量调节服务模块816调整应用处理器101的音量,使得应用处理器101的音量值和微控制单元102的音量值保持相同。
S1105.音量调节服务模块816将音量设置为调节后的音量值。
应用处理器101的音量调节服务模块816收到音量调节消息92后,可以将音量调节消息92中携带的音量值设置为应用处理器101的音量值。
以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (35)

  1. 一种处理方法,应用于电子设备,其特征在于,所述电子设备包括应用处理器、微控制单元和扬声器;所述方法包括:
    所述应用处理器接收到第一输入,所述第一输入用于触发所述电子设备使用第一应用播放第一音频;
    所述应用处理器响应于所述第一输入,将第一消息发送至所述微控制单元;
    当所述微控制单元基于所述第一消息判定出所述微控制单元支持播放所述第一音频时,所述微控制单元向所述应用处理器发送第二消息;
    在所述微控制单元向所述应用处理器发送所述第二消息后,所述微控制单元控制所述扬声器播放所述第一音频,所述应用处理器切换为休眠状态。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    当所述微控制单元基于所述第一消息判定出所述微控制单元不支持播放所述第一音频时,所述微控制单元向所述应用处理器发送第三消息;
    所述应用处理器响应于所述第三消息,解码所述第一音频,得到所述第一数据;
    所述应用处理器将所述第一数据发送至所述微控制单元;
    所述微控制单元控制所述扬声器播放所述第一数据。
  3. 根据权利要求1或2所述的方法,其特征在于,所述第一消息包括所述第一音频的第一音频格式;所述微控制单元基于所述第一消息判定出所述微控制单元支持播放所述第一音频,具体包括:
    当所述微控制单元基于所述第一音频格式,判定出所述微控制单元支持解码所述第一音频格式的所述第一音频时,确定出所述微控制单元支持播放所述第一音频。
  4. 根据权利要求3所述的方法,其特征在于,在所述微控制单元向所述应用处理器发送所述第二消息后,所述微控制单元控制所述扬声器播放所述第一音频,所述应用处理器切换为休眠状态,具体包括:
    所述应用处理器响应于所述第二消息,向所述微控制单元发送所述第一音频,所述第二消息表示所述微控制单元支持解码所述第一音频;
    所述微控制单元接收到所述第一音频后,解码所述第一音频得到第一数据;
    所述微控制单元控制所述扬声器播放所述第一数据;
    所述应用处理器在发送完所述第一音频后,切换为所述休眠状态。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述第一消息包括所述第一音频的标识;所述微控制单元基于所述第一消息判定出所述微控制单元支持播放所述第一音频,具体包括:
    当所述微控制单元基于所述第一音频的标识,判定出存储有所述第一音频的标识对应的第二音频时,确定出所述微控制单元支持播放所述第一音频。
  6. 根据权利要求5所述的方法,其特征在于,所述第二音频包括解码后的第一数据;所述微控制单元控制所述扬声器播放所述第一音频,具体包括:
    所述微控制单元控制所述扬声器播放所述第一音频的标识指示的所述第二音频。
  7. 根据权利要求5所述的方法,其特征在于,所述第二音频包括以第二音频格式编码后的所述第一数据,所述微控制单元支持解码所述第二音频格式的所述第二音频;所述微控制单元控制所述扬声器播放所述第一音频,具体包括:
    所述微控制单元解码所述第一音频的标识指示的所述第二音频,得到所述第一数据;
    所述微控制单元控制所述扬声器播放所述第一数据。
  8. 根据权利要求1-7中任一项所述的方法,其特征在于,在所述微控制单元向所述应用处理器发送所述第二消息后,所述方法还包括:
    所述应用处理器收到所述第二消息后,向所述微控制单元发送用于指示所述微控制单元播放所述第一音频的第四消息;
    所述微控制单元接收到所述第四消息后,控制所述扬声器播放所述第一音频。
  9. 根据权利要求8所述的方法,其特征在于,在所述微控制单元收到所述第四消息后,所述方法还包括:
    所述微控制单元向所述应用处理器发送第五消息,所述第五消息用于指示所述应用处理器切换为所述休眠状态;
    所述应用处理器切换为所述休眠状态,具体包括:
    所述应用处理器响应于所述第五消息,切换为所述休眠状态。
  10. 根据权利要求1-9中任一项所述的方法,其特征在于,当所述微控制单元控制所述扬声器播放音频时,所述方法还包括:
    所述应用处理器检测到第二应用的第一请求,所述第一请求用于请求使用所述扬声器;
    所述应用处理器向所述微控制单元发送第六消息,所述第六消息用于指示所述微控制单元停止播放音频。
  11. 根据权利要求10所述的方法,其特征在于,所述第二应用的优先级高于所述第一应用的优先级。
  12. 根据权利要求1-11中任一项所述的方法,其特征在于,在所述微控制单元播放音频的过程中,所述方法还包括:
    所述应用处理器接收到调节音量的输入,将所述应用处理器的音量值设置为调节后的音量值;
    所述应用处理器将所述调节后的音量值发送给所述微控制单元;
    所述微控制单元将所述微控制单元的音量值设置为所述调节后的音量值。
  13. 根据权利要求1-12中任一项所述的方法,其特征在于,当所述电子设备处于连续播放音频的状态且所述微控制单元基于所述第一消息判定出所述微控制单元支持播放所述第一音频时;所述方法还包括:
    所述微控制单元播放完所述第一音频后,通知所述应用处理器切换为非休眠状态;
    所述应用处理器切换为所述非休眠状态后,将第七消息发送至所述微控制单元,所述第一音频与所述第三音频属于同一个播放列表;
    所述微控制单元基于所述第七消息,判断所述微控制单元是否支持播放所述第三音频。
  14. 一种处理方法,应用于电子设备,其特征在于,所述电子设备包括应用处理器、微控制单元和扬声器;所述方法包括:
    所述微控制单元接收到第一输入,所述第一输入用于触发所述电子设备使用第一应用播放第一音频;
    所述微控制单元响应于所述第一输入,基于所述第一音频的第一信息,判断所述微控制单元是否支持播放所述第一音频;
    当所述微控制单元在基于所述第一信息,判定出所述微控制单元支持播放所述第一音频时,所述微控制单元控制所述扬声器播放所述第一音频。
  15. 根据权利要求14所述的方法,其特征在于,在所述微控制单元接收到第一输入之前,所述方法还包括:
    所述应用处理器控制显示屏显示第一界面,所述第一界面包括所述第一应用的图标;
    所述应用处理器接收到针对所述第一应用的图标的第二输入;
    所述应用处理器响应于所述第二输入,向所述微控制单元发送第一指令,所述第一指令用于指示所述微控制单元显示所述第一应用的界面;
    所述应用处理器在发送所述第一指令后,切换为休眠状态;
    所述微控制单元响应于所述第一指令,控制显示屏显示所述第一应用的界面,所述第一应用的界面包括第一控件,所述第一控件用于触发所述电子设备播放第一音频,所述第一输入为针对所述第一控件的输入。
  16. 根据权利要求14或15所述的方法,其特征在于,所述方法还包括:
    当所述微控制单元基于所述第一信息判定出所述微控制单元不支持播放所述第一音频时,所述微控制单元向所述应用处理器发送第一消息;
    所述应用处理器响应于所述第一消息,解码所述第一音频,得到所述第一数据;
    所述应用处理器将所述第一数据发送至所述微控制单元;
    所述微控制单元控制所述扬声器播放所述第一数据。
  17. 根据权利要求16所述的方法,其特征在于,所述第一消息用于触发所述应用处理器切换为非休眠状态。
  18. 根据权利要求14-17中任一项所述的方法,其特征在于,所述第一信息包括所述第一音频的第一音频格式;所述微控制单元基于所述第一信息判定出所述微控制单元支持播放所述第一音频,具体包括:
    当所述微控制单元基于所述第一音频格式,判定出所述微控制单元支持解码所述第一音频格式的所述第一音频时,确定出所述微控制单元支持播放所述第一音频。
  19. 根据权利要求18所述的方法,其特征在于,所述微控制单元控制所述扬声器播放所述第一音频,具体包括:
    所述微控制单元解码所述第一音频,得到第一数据;
    所述微控制单元控制所述扬声器播放所述第一数据。
  20. 根据权利要求14-19中任一项所述的方法,其特征在于,所述第一信息包括所述第一音频的标识;所述微控制单元基于所述第一信息判定出所述微控制单元支持播放所述第一音频,具体包括:
    当所述微控制单元基于所述第一音频的标识,判定出存储有所述第一音频的标识对应的第二音频时,确定出所述微控制单元支持播放所述第一音频。
  21. 根据权利要求20所述的方法,其特征在于,所述第二音频包括解码后的第一数据;所述微控制单元控制所述扬声器播放所述第一音频,具体包括:
    所述微控制单元控制所述扬声器播放所述第一音频的标识指示的所述第二音频。
  22. 根据权利要求20所述的方法,其特征在于,所述第二音频包括以第二音频格式编码后的所述第一数据,所述微控制单元支持解码所述第二音频格式的所述第二音频;所述微控制单元控制所述扬声器播放所述第一音频,具体包括:
    所述微控制单元解码所述第一音频的标识指示的所述第二音频,得到所述第一数据;
    所述微控制单元控制所述扬声器播放所述第一数据。
  23. 根据权利要求17-22中任一项所述的方法,其特征在于,当所述微控制单元控制所述扬声器播放音频时,所述方法还包括:
    所述应用处理器检测到第二应用的第一请求,所述第一请求用于请求使用所述扬声器;
    所述应用处理器向所述微控制单元发送第六消息,所述第六消息用于指示所述微控制单元停止播放音频。
  24. 根据权利要求23所述的方法,其特征在于,所述第二应用的优先级高于所述第一应用的优先级。
  25. 根据权利要求17-24中任一项所述的方法,其特征在于,在所述微控制单元播放音频的过程中,所述方法还包括:
    所述应用处理器接收到调节音量的输入,将所述应用处理器的音量值设置为调节后的音量值;
    所述应用处理器将所述调节后的音量值发送给所述微控制单元;
    所述微控制单元将所述微控制单元的音量值设置为所述调节后的音量值。
  26. 根据权利要求17-25中任一项所述的方法,其特征在于,所述电子设备处于连续播放音频的状态且所述微控制单元基于所述第一信息判定出所述微控制单元支持播放所述第一音频;所述方法还包括:
    所述微控制单元播放完所述第一音频后,若所述微控制单元基于第三音频的第二信息,确定出不支持播放第三音频,所述微控制单元通知所述应用处理器切换为非休眠状态,并且将所述第三音频发送至所述应用处理器;
    所述应用处理器切换为所述非休眠状态后,解码所述第三音频。
  27. A processing method, comprising:
    a first electronic device receives a first input, wherein the first input is used to trigger the first electronic device to play first audio using a first application;
    the first electronic device, in response to the first input, sends a first message to a second electronic device, wherein the first message is used to instruct the second electronic device to determine whether the second electronic device supports playing the first audio; and
    after receiving a second message sent by the second electronic device, the first electronic device switches to a sleep state.
  28. A processing method, comprising:
    after receiving a first message sent by a first electronic device, a second electronic device determines whether the second electronic device supports playing the first audio;
    when the second electronic device determines, based on the first message, that the second electronic device supports playing the first audio, the second electronic device sends a second message to the first electronic device; and
    after the second electronic device sends the second message to the first electronic device, the second electronic device controls the speaker to play the first audio.
  29. A processing method, comprising:
    a second electronic device receives a first input, wherein the first input is used to trigger the second electronic device to play first audio using a first application;
    the second electronic device, in response to the first input, determines, based on first information of the first audio, whether the second electronic device supports playing the first audio; and
    when the second electronic device determines, based on the first information, that the second electronic device supports playing the first audio, the second electronic device controls the speaker to play the first audio.
  30. A processing method, comprising:
    a first electronic device receives a first message sent by a second electronic device, wherein the first message comprises first audio;
    the first electronic device decodes the first audio to obtain first data; and
    the first electronic device sends the first data to the second electronic device, wherein the first data is used by the second electronic device for playback through the speaker.
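Claim 30 describes a decode-proxy exchange between two devices: the second device forwards audio it cannot handle, and the first device decodes it and returns raw data for the second device's speaker. A minimal hypothetical sketch follows; the function names and the dummy XOR "decoder" are illustrative assumptions, not anything specified by the patent.

```python
def decode(encoded: bytes) -> bytes:
    """Dummy decoder standing in for a real audio codec (illustration only)."""
    return bytes(b ^ 0xFF for b in encoded)


def first_device_handle(message: dict) -> bytes:
    """First device: receive the first message, decode, return the first data."""
    encoded = message["audio"]
    return decode(encoded)            # claim 30: decode the first audio


def second_device_play(encoded: bytes, speaker: list) -> None:
    """Second device: send the audio across, then play the data that comes back."""
    data = first_device_handle({"audio": encoded})  # send first message, get first data
    speaker.append(data)              # playback through the speaker (modelled as a list)
```

In a real system the two calls would cross a wireless link rather than a function boundary, but the division of labour is the same: decoding stays on the more capable first device, playback on the second.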
  31. An electronic device, comprising one or more processors and one or more memories, wherein the one or more processors comprise an application processor and a micro control unit; the one or more memories are coupled to the one or more processors and are configured to store computer program code, the computer program code comprising computer instructions; and when the one or more processors execute the computer instructions, the electronic device is caused to perform the method according to any one of claims 1-13 or 14-26.
  32. A computer-readable storage medium comprising instructions, wherein when the instructions are run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1-13, 14-26, 27, 28, 29 or 30.
  33. A chip system, applied to an electronic device, the chip system comprising one or more processors, the one or more processors comprising an application processor and a micro control unit, wherein the processors are configured to invoke computer instructions to cause the electronic device to perform the method according to any one of claims 1-13 or 14-26.
  34. An electronic device, comprising one or more processors and one or more memories, wherein the one or more memories are coupled to the one or more processors and are configured to store computer program code, the computer program code comprising computer instructions; and when the one or more processors execute the computer instructions, the electronic device is caused to perform the method according to any one of claims 27, 28, 29 or 30.
  35. A chip system, applied to an electronic device, the chip system comprising one or more processors, wherein the processors are configured to invoke computer instructions to cause the electronic device to perform the method according to any one of claims 27, 28, 29 or 30.
PCT/CN2023/121776 2022-09-30 2023-09-26 Processing method and related apparatus WO2024067645A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211214665.3 2022-09-30
CN202211214665.3A CN117850569A (zh) 2022-09-30 2022-09-30 Processing method and related apparatus

Publications (1)

Publication Number Publication Date
WO2024067645A1 true WO2024067645A1 (zh) 2024-04-04

Family

ID=90476324

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/121776 WO2024067645A1 (zh) 2022-09-30 2023-09-26 一种处理方法及相关装置

Country Status (2)

Country Link
CN (1) CN117850569A (zh)
WO (1) WO2024067645A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101854705A * 2009-03-30 2010-10-06 联想(北京)有限公司 Audio switching method, audio switching device, audio codec and terminal device
CN105743549A * 2014-12-10 2016-07-06 展讯通信(上海)有限公司 User terminal, audio Bluetooth playback method thereof, and digital signal processor
WO2019192030A1 * 2018-04-04 2019-10-10 华为技术有限公司 Bluetooth playback method and electronic device


Also Published As

Publication number Publication date
CN117850569A (zh) 2024-04-09

Similar Documents

Publication Publication Date Title
US11915696B2 (en) Digital assistant voice input integration
RU2766255C1 (ru) Способ голосового управления и электронное устройство
CN107005612B (zh) 数字助理警报系统
JP2022549157A Data transmission method and related apparatus
US20150148093A1 (en) Battery pack with supplemental memory
JP6284931B2 Multiple video playback method and apparatus
CN111263002B Display method and electronic device
CN114020197B Cross-application message processing method, electronic device and readable storage medium
CN112767231A Layer composition method and device
WO2022143258A1 Voice interaction processing method and related apparatus
WO2022222713A1 Codec negotiation and switching method
CN111857531A Mobile terminal and file display method thereof
WO2021190225A1 Voice interaction method and electronic device
WO2021213451A1 Trajectory playback method and related apparatus
WO2024067645A1 Processing method and related apparatus
WO2023045597A1 Cross-device flow control method and apparatus for large-screen services
WO2022262366A1 Cross-device dialogue service continuation method, system, electronic device and storage medium
JP2005038198A Information processing apparatus
CN114281440B Method for displaying a user interface in a dual system, and electronic device
CN111182361B Communication terminal and video preview method
CN109348353B Service processing method and apparatus for a smart speaker, and smart speaker
WO2023160208A1 Notification method and device for image deletion operations, and storage medium
WO2023065832A1 Video production method and electronic device
WO2023174322A1 Layer processing method and electronic device
CN115033200A Smart glasses and control method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23870872

Country of ref document: EP

Kind code of ref document: A1