CN117689776B - Audio playing method, electronic equipment and storage medium - Google Patents
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/205—3D [Three Dimensional] animation driven by audio data
Abstract
The application provides an audio playing method, an electronic device, and a storage medium. When a preset condition is met, the electronic device acquires a first feature file corresponding to a first audio file, where the first feature file includes the audio intensities of the first audio file at different moments. When the audio intensity at a first moment is greater than a first preset audio intensity, the electronic device determines a first dynamic effect parameter based on the audio intensity at the first moment. The electronic device plays a first image frame in a first animation based on the first dynamic effect parameter, where the first dynamic effect parameter is used to control the switching speed when a second image frame is switched to the first image frame, and the second image frame is an image frame preceding the first image frame; the first dynamic effect parameter is greater than the preset dynamic effect parameter at the first moment. In this way, audio characteristics can be associated with audio dynamic effects: the greater the audio intensity, the faster the animation plays, achieving sound-picture coordination.
Description
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an audio playing method, an electronic device, and a storage medium.
Background
With the development of Internet technology, people's lifestyles have changed greatly. For example, the way people listen to songs in daily life has changed: songs can now be listened to through an application with an audio playing function on a smart device (such as a smartphone, a tablet computer, or a smart television).
However, the current audio playback mode of electronic devices is monotonous, so how to make audio playback on electronic devices more engaging requires further study.
Disclosure of Invention
The application provides an audio playing method, an electronic device, and a storage medium, which associate audio characteristics with audio dynamic effects and make audio playback more engaging.
In a first aspect, the present application provides an audio playing method, including: when a preset condition is met, the electronic device acquires a first audio file and a first animation, where the first animation includes a plurality of image frames; the electronic device plays the first audio file and plays the first animation. The electronic device playing the first animation includes: the electronic device acquires a first feature file corresponding to the first audio file, where the first feature file includes the audio intensities of the first audio file at different moments; when the audio intensity at a first moment is greater than a first preset audio intensity, the electronic device determines a first dynamic effect parameter based on the audio intensity at the first moment; the electronic device plays a first image frame in the first animation based on the first dynamic effect parameter, where the first dynamic effect parameter is used to control the switching speed when a second image frame is switched to the first image frame, and the second image frame is an image frame preceding the first image frame; the first dynamic effect parameter is greater than the preset dynamic effect parameter at the first moment.
Optionally, the first audio file and the first feature file have the same duration. The electronic device may play the first audio file and the first animation simultaneously. When the first audio file starts to play, a window of fixed duration may be used to traverse the first feature file from front to back in chronological order, so as to determine a first moment at which the audio intensity is greater than the first preset audio intensity and to control the playing speed of the animation, thereby achieving sound-picture coordination. Synchronization of the audio and the animation is achieved through the time attribute; the shorter the fixed duration of the window, the stronger the synchronization between the first audio file and the first animation, and the smaller the error.
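For illustration only, the following sketch shows one way such a fixed-duration window traversal could look, assuming the first feature file is represented as (time, intensity) pairs and using an illustrative 70 dB threshold and 5 s window; it is a sketch of the idea, not the patent's actual implementation.

```python
# Sketch (illustrative assumptions): traverse a feature file, represented here as
# (time_s, intensity_db) pairs, with a fixed-duration window and collect the moments
# whose average audio intensity exceeds the first preset audio intensity.
def moments_above_threshold(feature_file, window_s=5.0, first_preset_db=70.0):
    flagged = []
    if not feature_file:
        return flagged
    end_time = feature_file[-1][0]
    t = 0.0
    while t <= end_time:
        window = [db for ts, db in feature_file if t <= ts < t + window_s]
        if window and sum(window) / len(window) > first_preset_db:
            flagged.append(t)  # candidate "first moment" driving faster frame switching
        t += window_s
    return flagged

# Example using the sample intensities from FIG. 4B
profile = [(0, 40), (5, 42), (10, 75), (15, 80), (20, 38),
           (25, 78), (30, 80), (35, 43), (60, 36)]
print(moments_above_threshold(profile))  # -> [10.0, 15.0, 25.0, 30.0]
```

A shorter window gives finer-grained synchronization at the cost of more lookups, matching the trade-off described above.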
Alternatively, the first time may be a time point or a period of time. The audio intensity at the first time may be the audio intensity at a certain time point or may be the average audio intensity over a period of time.
In some embodiments, the animations associated with different audio may be the same or may be different.
Alternatively, the electronic device may match different animations based on the audio characteristics of the audio, or the animation associated with the audio may be preset.
The switching speed between two adjacent image frames in the first animation is preset. That is, the preset dynamic effect parameter of each image frame in the first animation is preset, and the preset dynamic effect parameter of each image frame is used by the electronic device to control the switching speed between that image frame and the previous image frame.
By the above method, audio characteristics can be associated with audio dynamic effects: the greater the audio intensity, the faster the animation plays. This achieves sound-picture coordination and makes audio playback more engaging.
With reference to the first aspect, in one possible implementation manner, the method further includes: when the audio intensity at a second moment is less than the first preset audio intensity, the electronic device acquires the preset dynamic effect parameter at the second moment; the electronic device plays a second image frame based on the preset dynamic effect parameter at the second moment, where the preset dynamic effect parameter at the second moment is used to control the switching speed when a third image frame is switched to the second image frame, and the third image frame is an image frame preceding the second image frame.
In this way, when the audio intensity of the audio is less than the first preset audio intensity, the electronic device does not update the dynamic effect parameter at that moment and instead plays the image frame at that moment based on the preset dynamic effect parameter at that moment.
With reference to the first aspect, in one possible implementation manner, the electronic device acquiring the preset dynamic effect parameter at the second moment specifically includes: when the audio intensity at the second moment is greater than a second preset audio intensity and less than the first preset audio intensity, the electronic device acquires the preset dynamic effect parameter at the second moment.
With reference to the first aspect, in one possible implementation manner, the method further includes: when the audio intensity at a third moment is less than the second preset audio intensity, the electronic device determines a second dynamic effect parameter based on the audio intensity at the third moment; the electronic device plays a fourth image frame based on the second dynamic effect parameter, where the second dynamic effect parameter is used to control the switching speed when a fifth image frame is switched to the fourth image frame, and the fifth image frame is an image frame preceding the fourth image frame; the second dynamic effect parameter is less than the preset dynamic effect parameter at the third moment.
In this way, when the audio intensity of the audio is less than the second preset audio intensity, the electronic device also updates the dynamic effect parameter at that moment to obtain the second dynamic effect parameter and plays the image frame at that moment based on the second dynamic effect parameter. The second dynamic effect parameter differs from, and is less than, the preset dynamic effect parameter at the third moment.
The greater the audio intensity of the audio, the faster the animation plays; the smaller the audio intensity, the slower the animation plays. This achieves sound-picture coordination and makes audio playback more engaging.
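Taken together, the three cases above amount to a simple per-moment decision. The following sketch assumes illustrative threshold values and takes the intensity-to-parameter mapping as an injected function (described after the formula below); it is not the patent's implementation.

```python
# Sketch (illustrative assumptions): choose the effective dynamic effect parameter
# for a moment based on the two preset audio intensity thresholds.
def effective_parameter(intensity_db, preset_param, map_intensity,
                        first_preset_db=70.0, second_preset_db=40.0):
    if intensity_db > first_preset_db:
        return map_intensity(intensity_db)   # first dynamic effect parameter (> preset)
    if intensity_db < second_preset_db:
        return map_intensity(intensity_db)   # second dynamic effect parameter (< preset)
    return preset_param                      # keep the preset dynamic effect parameter
```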
With reference to the first aspect, in one possible implementation manner, when the preset dynamic effect parameter at the first moment and the preset dynamic effect parameter at the second moment are the same, the switching speed when the second image frame is switched to the first image frame is greater than the switching speed when the third image frame is switched to the second image frame.
That is, the first dynamic effect parameter is greater than the second dynamic effect parameter.
Optionally, the preset dynamic effect parameters at different moments in the first animation may be the same or different.
With reference to the first aspect, in one possible implementation manner, when the preset dynamic effect parameter at the second moment and the preset dynamic effect parameter at the third moment are the same, the switching speed when the third image frame is switched to the second image frame is greater than the switching speed when the fifth image frame is switched to the fourth image frame. That is, the preset dynamic effect parameter at the second moment is greater than the second dynamic effect parameter.
With reference to the first aspect, in one possible implementation manner, the determining, by the electronic device, the first dynamic effect parameter based on the audio intensity at the first moment specifically includes: determining the first dynamic effect parameter according to the formula Y = Y_min + (X − X_min)/(X_max − X_min) × (Y_max − Y_min); where Y represents the first dynamic effect parameter, Y_max represents a maximum dynamic effect parameter value, Y_min represents a minimum dynamic effect parameter value, and Y_max and Y_min are preset; X represents the audio intensity at the first moment, X_max represents a maximum audio intensity value, X_min represents a minimum audio intensity value, and X_max and X_min are preset.
In this way, when the audio intensity is greater than the first preset audio intensity or less than the second preset audio intensity, the electronic device can update the dynamic effect parameter based on the audio intensity in the above manner, thereby achieving sound-picture coordination.
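A sketch of this mapping follows, assuming the clamped min-max linear relation stated by the formula above; the bound values used here are illustrative, not values from the patent.

```python
# Sketch (illustrative bounds): map audio intensity X to a dynamic effect parameter Y
# with the clamped relation Y = Y_min + (X - X_min) / (X_max - X_min) * (Y_max - Y_min).
def dynamic_effect_parameter(x, x_min=30.0, x_max=90.0, y_min=0.5, y_max=2.0):
    x = max(x_min, min(x, x_max))  # clamp to the preset intensity range
    return y_min + (x - x_min) / (x_max - x_min) * (y_max - y_min)

print(dynamic_effect_parameter(80.0))  # loud moment -> larger parameter (faster switching)
print(dynamic_effect_parameter(38.0))  # quiet moment -> smaller parameter (slower switching)
```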
With reference to the first aspect, in one possible implementation manner, before the electronic device plays the first audio file and plays the first animation, the method further includes: the electronic equipment decodes the first audio file to obtain a first PCM file; and the electronic equipment performs feature extraction on the first PCM file to obtain a first feature file corresponding to the first audio file.
In this way, the electronic device may preprocess and save the first feature file. When playing the first animation, the electronic device no longer needs to compute the first feature file in real time, which improves the synchronization between the first audio file and the first animation.
With reference to the first aspect, in one possible implementation manner, before the electronic device obtains a first feature file corresponding to the first audio file, the method further includes: under the condition that the electronic equipment determines that the first characteristic file corresponding to the first audio file is not stored locally, the electronic equipment decodes the first audio file to obtain a first PCM file, and performs characteristic extraction on the first PCM file to obtain the first characteristic file corresponding to the first audio file.
In some embodiments, the electronic device may update the stored feature files corresponding to audio, for example by deleting feature files that have not been used for a period of time. The electronic device may therefore determine, before playing the first animation, whether the first feature file corresponding to the first audio file is stored locally. This avoids situations in which sound-picture coordination cannot be achieved.
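The following sketch illustrates the reuse-or-recompute behavior described above; the file layout and the decode_to_pcm / extract_features helpers are hypothetical placeholders, not APIs from the patent.

```python
# Sketch (hypothetical helpers and layout): reuse a locally stored feature file if it
# exists, otherwise decode the audio to PCM, extract features, and save them.
import json
import os

def get_feature_file(audio_path, cache_dir, decode_to_pcm, extract_features):
    feature_path = os.path.join(cache_dir, os.path.basename(audio_path) + ".feat.json")
    if os.path.exists(feature_path):
        return feature_path              # first feature file already stored locally
    pcm = decode_to_pcm(audio_path)      # first PCM file
    features = extract_features(pcm)     # e.g. [[time_s, intensity_db], ...]
    with open(feature_path, "w") as f:
        json.dump(features, f)
    return feature_path
```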
With reference to the first aspect, in a possible implementation manner, the electronic device includes a motor, and the method further includes: under the condition that the audio intensity at the first moment is larger than a first preset audio intensity, the electronic equipment determines a first vibration frequency of the motor based on the audio intensity at the first moment; the electronic equipment plays a first image frame in a first animation based on a first dynamic parameter and vibrates based on a first vibration frequency through a motor; the first vibration frequency is larger than the preset vibration frequency at the first moment.
The vibration frequency of each image frame in the first animation is preset. The preset vibration frequency of each image frame is used by the electronic device to drive the motor to vibrate when that image frame is displayed.
By the above method, audio characteristics, audio dynamic effects, and vibration effects can be associated: the greater the audio intensity of the audio, the faster the animation plays and the faster the motor vibrates. This achieves coordination of sound, picture, and vibration, and makes audio playback more engaging.
With reference to the first aspect, in one possible implementation manner, the method further includes: under the condition that the audio intensity at the second moment is smaller than the first preset audio intensity, the electronic equipment acquires the preset vibration frequency of the motor at the second moment; and the electronic equipment vibrates based on the preset vibration frequency at the second moment through the motor while playing the second image frame based on the preset dynamic parameter at the second moment. Alternatively, the preset vibration frequency may be 0, that is, there is no vibration effect.
With reference to the first aspect, in one possible implementation manner, in a case where the audio intensity at the second moment is greater than the second preset audio intensity and is less than the first preset audio intensity, the electronic device obtains the preset vibration frequency at the second moment.
With reference to the first aspect, in one possible implementation manner, the method further includes: under the condition that the audio intensity at the third moment is smaller than the second preset audio intensity, the electronic equipment determines a second vibration frequency based on the audio intensity at the third moment; and the electronic equipment vibrates based on the second vibration frequency through the motor while playing the fourth image frame based on the second dynamic parameter at the third moment.
Therefore, when the audio intensity of the audio is smaller than the second preset audio intensity, the electronic equipment also needs to update the vibration frequency at the moment to obtain the second vibration frequency, and the electronic equipment vibrates based on the second vibration frequency through the motor while playing the fourth image frame based on the second dynamic parameter at the third moment. The second vibration frequency is smaller than the preset vibration frequency at the third moment.
The greater the audio intensity of the audio, the faster the animation plays and the faster the motor vibrates; the smaller the audio intensity, the slower the animation plays and the slower the motor vibrates. This achieves coordination of sound, picture, and vibration, and makes audio playback more engaging.
With reference to the first aspect, in one possible implementation manner, the determining, by the electronic device, the first vibration frequency based on the audio intensity at the first moment specifically includes: determining the first vibration frequency according to the formula Z = Z_min + (X − X_min)/(X_max − X_min) × (Z_max − Z_min); where Z represents the first vibration frequency, Z_max represents a maximum vibration frequency, Z_min represents a minimum vibration frequency, and Z_max and Z_min are preset; X represents the audio intensity at the first moment, X_max represents a maximum audio intensity value, X_min represents a minimum audio intensity value, and X_max and X_min are preset.
In this way, when the audio intensity is greater than the first preset audio intensity or less than the second preset audio intensity, the electronic device can obtain an updated vibration frequency based on the audio intensity in the above manner, thereby achieving coordination of sound, picture, and vibration.
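To show how the same intensity can drive both the animation and the motor, the sketch below applies the clamped linear mapping to both the dynamic effect parameter and the vibration frequency; all bound values are illustrative assumptions, not values from the patent.

```python
# Sketch (illustrative bounds): drive animation speed and motor vibration frequency
# from the same audio intensity so sound, picture, and vibration stay coordinated.
def linear_map(x, x_min, x_max, out_min, out_max):
    x = max(x_min, min(x, x_max))
    return out_min + (x - x_min) / (x_max - x_min) * (out_max - out_min)

def effects_for_intensity(intensity_db):
    y = linear_map(intensity_db, 30.0, 90.0, 0.5, 2.0)   # dynamic effect parameter
    z = linear_map(intensity_db, 30.0, 90.0, 5.0, 50.0)  # motor vibration frequency (Hz)
    return y, z

print(effects_for_intensity(80.0))  # loud moment: faster animation, faster vibration
```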
With reference to the first aspect, in one possible implementation manner, the first audio file is alarm clock audio, and the preset condition includes the time reaching a preset start time of the alarm clock.
With reference to the first aspect, in one possible implementation manner, the first audio file is incoming call audio, and the preset condition includes the electronic device receiving an incoming call request sent by another electronic device. The incoming call request here may be a carrier (cellular) call request.
In other embodiments, the incoming call request here may also be an Internet call request.
With reference to the first aspect, in one possible implementation manner, the first audio file is first music downloaded in real time in the music application, and the preset condition includes that the electronic device receives a user operation to play the first music.
In a second aspect, the present application provides an electronic device comprising one or more processors, one or more memories; wherein the one or more memories are coupled to the one or more processors, the one or more memories being operable to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform an audio playback method provided in any one of the possible implementations of the above.
In a third aspect, the present application provides a chip system comprising one or more processors, wherein the processors are configured to invoke computer instructions to cause an electronic device to perform an audio playing method provided in any of the possible implementations of the above aspect.
In a fourth aspect, the application provides a computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform an audio playing method as provided in any one of the possible implementations of the above.
For the advantageous effects of the second to fourth aspects, reference may be made to the description of the advantages of the first aspect; details are not repeated here.
Drawings
FIGS. 1A-1J show schematic views of a scenario in which the electronic device 100 plays an incoming call ringtone;
FIGS. 2A-2I show schematic views of a scenario in which the electronic device 100 plays an alarm ringtone;
FIGS. 3A-3E show schematic views of a scenario in which the electronic device 100 plays music in a music application;
FIGS. 4A and 4B show schematic diagrams of the electronic device 100 extracting and saving audio features of an audio file;
FIG. 5 shows a schematic diagram of another way in which the electronic device 100 extracts and saves audio features of an audio file;
FIG. 6 shows a schematic diagram of the electronic device 100 playing audio and presenting dynamic effects;
FIGS. 7A-7B are a set of audio intensity diagrams provided by the present application;
FIG. 8 shows a schematic diagram of a method in which the audio feature extraction engine acquires a first feature file;
FIG. 9 is a schematic flowchart of an audio playing method provided by the present application;
FIG. 10 shows a schematic structural diagram of an electronic device;
FIG. 11 is a software structure block diagram of the electronic device 100 according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the three cases where only A exists, both A and B exist, and only B exists. In addition, in the description of the embodiments of the present application, "a plurality of" means two or more.
The terms "first," "second," and the like, are used below for descriptive purposes only and are not to be construed as implying or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature, and in the description of embodiments of the application, unless otherwise indicated, the meaning of "a plurality" is two or more.
The term "User Interface (UI)" in the following embodiments of the present application is a media interface for interaction and information exchange between an application program or an operating system and a user, which enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of a user interface is a graphical user interface (graphic user interface, GUI), which refers to a graphically displayed user interface that is related to computer operations. It may be a visual interface element of text, icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, widgets, etc., displayed in a display of the electronic device.
First, an audio playing scene provided by the present application is described.
Fig. 1A-1J show schematic views of a scenario in which an electronic device 100 plays an incoming ring tone.
Setting the incoming call ringtone (FIGS. 1A-1D)
Fig. 1A shows a main interface of the electronic device 100. The main interface includes icons of a plurality of applications, such as an icon of a weather application, an icon of a Hua shop application, an icon of a smart home application, an icon of a sports health application, an icon of a memo application, an icon of an alarm application, an icon of a music application, an icon of a setup application, an icon of a camera application, an icon of an address book application, an icon of a telephone application, and an icon of an information application. The main interface 210 also shows a power indicator, weather indicator, date indicator, network signal indicator, page indicator, etc.
As shown in fig. 1A, the electronic device 100 may receive an input operation (e.g., a single click) by a user for setting an icon of an application, and in response to the input operation by the user, the electronic device 100 may display the user interface 1200 shown in fig. 1B. The user interface 1200 is a main interface for setting an application.
As shown in FIG. 1B, the user interface 1200 shows a plurality of setting options, such as a flight mode setting option (the flight mode of the electronic device 100 is off), a Wi-Fi setting option (the Wi-Fi function of the electronic device 100 is on), a Bluetooth setting option (the Bluetooth function of the electronic device 100 is on), a personal hotspot setting option, a mobile network setting option, a do-not-disturb mode setting option, a display and brightness setting option, a Hua Cheng account option, a sound setting option, and the like.
As shown in FIG. 1B, the electronic device 100 may receive an input operation (e.g., a click) by the user for the sound setting option in the user interface 1200, and in response to the input operation, the electronic device 100 may display the user interface 1300 shown in FIG. 1C, in which the user may set the incoming call ringtone. The user interface 1300 may include a plurality of setting items, for example, a ringing-with-vibration setting item (ringing with vibration is off), a silent-mode vibration setting item (silent-mode vibration is on), and a setting item for adjusting the incoming call ringtone volume with the volume buttons (this function is off). The user interface 1300 also shows a plurality of ringtones, such as ringtone 1, ringtone 2, and ringtone 3, which are different from one another. The current incoming call ringtone of the electronic device 100 is ringtone 1.
The electronic device 100 may also receive a user operation to change the incoming call ringtone of the electronic device 100. For example, as shown in FIG. 1C, the electronic device 100 may receive an input operation (e.g., a click) by the user for the ringtone 2 option, and in response, the electronic device 100 may set ringtone 2 as the incoming call ringtone of the electronic device 100. After ringtone 2 is set as the incoming call ringtone, the electronic device 100 may display the user interface shown in FIG. 1D to prompt the user that the incoming call ringtone of the electronic device 100 is ringtone 2.
Then, after the electronic device 100 receives an incoming call request from another device, the electronic device 100 may play ringtone 2 to prompt the user that there is currently an incoming call to answer.
In some implementations, to make playback of the incoming call ringtone more engaging, the electronic device 100 may play a dynamic effect on the incoming call interface while playing ringtone 2. In this way, the user is prompted both visually and audibly, improving the user experience.
Playing the incoming call ringtone and dynamic effect (FIGS. 1E-1G)
When the electronic device 100 receives a call answering request sent by another device, the electronic device 100 may display the incoming call interface shown in FIG. 1E. The incoming call interface shows incoming call information such as the incoming number "123456789", a reject control, and an answer control. The user may click the reject control to decline establishing a call connection with the other device, or click the answer control to establish a call connection with the other device.
Before the user clicks the answer control or the reject control, the electronic device 100 may continue playing ringtone 2 to remind the user that a new phone call currently needs to be answered.
While the electronic device 100 plays ringtone 2, the electronic device 100 may display an incoming call dynamic effect on the incoming call interface. For example, the electronic device 100 may display the incoming call dynamic effects shown in FIG. 1E to FIG. 1G in sequence.
The incoming call dynamic effects shown in FIG. 1E to FIG. 1G may form a complete set of incoming call dynamic effects, which may be a water ripple effect. At different moments when the electronic device 100 plays ringtone 2, the incoming call dynamic effect displayed by the electronic device 100 is different.
After the electronic device 100 displays the incoming call dynamic effect shown in FIG. 1G, the electronic device 100 may further display the incoming call dynamic effects shown in FIG. 1E, FIG. 1F, and FIG. 1G again.
It should be noted that, between fig. 1E and fig. 1F, between fig. 1F and fig. 1G, 0 or more dynamic effect diagrams may be further included, which is not limited in the present application.
Playing the incoming call ringtone and dynamic effect (FIGS. 1H-1J)
In other embodiments, while the electronic device 100 plays ringtone 2, the electronic device 100 may display an incoming call dynamic effect on the incoming call interface. For example, the electronic device 100 may display the incoming call dynamic effects shown in FIG. 1H to FIG. 1J in sequence.
At different moments when the electronic device 100 plays ringtone 2, the incoming call dynamic effect displayed by the electronic device 100 is different. After the electronic device 100 displays the incoming call dynamic effect shown in FIG. 1J, the electronic device 100 may further display the incoming call dynamic effects shown in FIG. 1H, FIG. 1I, and FIG. 1J again.
It should be noted that, between fig. 1H and fig. 1I, between fig. 1I and fig. 1J, 0 or more dynamic effect diagrams may be further included, which is not limited in the present application.
Optionally, the incoming call dynamic effect is not limited to the water ripple effect shown in FIGS. 1E-1G and FIGS. 1H-1J; the electronic device 100 may display other incoming call dynamic effects. For example, the user may select one incoming call dynamic effect from a plurality of incoming call dynamic effects, so that the electronic device 100 plays the dynamic effect selected by the user when displaying the incoming call interface.
For another example, the incoming call dynamic effects preset for different incoming call ringtones may be different, or they may be the same.
Fig. 2A-2I show schematic views of a scenario in which the electronic device 100 plays an alarm bell.
Alarm clock ringtone (FIGS. 2A-2C)
As shown in fig. 2A, the electronic device 100 may receive an input operation (e.g., a single click) by a user for an icon of a clock application in a main interface, and in response to the input operation by the user, the electronic device 100 may display the user interface 220 shown in fig. 2B. The user interface 220 is the main interface of the clock application. The user may set an alarm clock in the user interface 220.
As shown in FIG. 2B, a plurality of alarm clock setting items are shown in the user interface 220. In one alarm clock setting item, the start time of the alarm clock is 6:30 am every day, and this alarm clock is off. In the other alarm clock setting item, the start time of the alarm clock is 9:58 am on Monday, Tuesday, and Friday, and this alarm clock is also off.
As shown in fig. 2B, the electronic device 100 may receive an input operation (e.g., a click) by a user for a first alarm clock setting item in the user interface 220, and in response to the input operation by the user, the electronic device 100 may display the user interface shown in fig. 2C.
As shown in fig. 2C, the electronic device 100 activates the first alarm clock and the alarm bell will be played at 6:30 am each day.
Optionally, the electronic device 100 may also receive an operation of the user to modify the alarm clock ringtone.
Optionally, the electronic device 100 may also receive an input operation of the user for editing options in the user interface 220, and reset other alarm clock information.
Playing the alarm clock ringtone and dynamic effect (FIGS. 2D-2F)
After the user sets an alarm clock for 6:30 am every day, the electronic device 100 starts the alarm clock and plays the alarm ringtone when the time reaches 6:30 am each day.
When the electronic device 100 plays an alarm clock ring, the electronic device 100 may display the alarm clock interface shown in fig. 2D. The alarm clock interface may include a turn-off alarm clock option and a later reminder option. The user can press the turn-off alarm clock option to slide left or right, turning off the alarm clock. The user may also click on a later alert option, causing the electronic device 100 to play the alarm ringtone again after 10 minutes.
The electronic device 100 may continuously play the alarm ringtone before the user clicks the turn-off alarm option or the later reminder option, to remind the user that there is currently an alarm reminder.
While the electronic device 100 plays the alarm ringtone, the electronic device 100 may display an alarm clock dynamic effect on the alarm clock interface. For example, the electronic device 100 may display the alarm clock dynamic effects shown in FIG. 2D to FIG. 2F in sequence.
The alarm clock dynamic effects shown in FIG. 2D to FIG. 2F may form a complete set of alarm clock dynamic effects, which may be a water ripple effect. At different moments when the electronic device 100 plays the alarm ringtone, the alarm clock dynamic effect displayed by the electronic device 100 is different.
After the electronic device 100 displays the alarm clock dynamic effect shown in FIG. 2F, the electronic device 100 may further display the alarm clock dynamic effects shown in FIG. 2D, FIG. 2E, and FIG. 2F again.
It should be noted that, between fig. 2D and fig. 2E, between fig. 2E and fig. 2F, 0 or more dynamic effect diagrams may be further included, which is not limited in the present application.
Optionally, the alarm clock dynamic effect is not limited to the water ripple effect shown in FIGS. 2D-2F; the electronic device 100 may display other alarm clock dynamic effects. For example, the user may select one alarm clock dynamic effect from a plurality of alarm clock dynamic effects, so that the electronic device 100 plays the dynamic effect selected by the user when displaying the alarm clock interface.
For another example, the alarm clock dynamic effects preset for different alarm clock ringtones may be different, or they may be the same.
Playing the alarm clock ringtone and dynamic effect (FIGS. 2G-2I)
In other embodiments, while the electronic device 100 plays the alarm ringtone, the electronic device 100 may display an alarm clock dynamic effect on the alarm clock interface. For example, the electronic device 100 may display the alarm clock dynamic effects shown in FIG. 2G to FIG. 2I in sequence.
At different moments when the electronic device 100 plays the alarm ringtone, the alarm clock dynamic effect displayed by the electronic device 100 is different. After the electronic device 100 displays the alarm clock dynamic effect shown in FIG. 2I, the electronic device 100 may further display the alarm clock dynamic effects shown in FIG. 2G, FIG. 2H, and FIG. 2I again.
It should be noted that, between fig. 2G and fig. 2H, between fig. 2H and fig. 2I, 0 or more dynamic effect diagrams may be further included, which is not limited in the present application.
Optionally, the alarm clock dynamic effect is not limited to the water ripple effect shown in FIGS. 2D-2F and FIGS. 2G-2I; the electronic device 100 may display other alarm clock dynamic effects. For example, the user may select one alarm clock dynamic effect from a plurality of alarm clock dynamic effects, so that the electronic device 100 plays the dynamic effect selected by the user when displaying the alarm clock interface.
For another example, the alarm clock dynamic effects preset for different alarm clock ringtones may be different, or they may be the same.
FIGS. 3A-3E show schematic views of a scenario in which the electronic device 100 plays music in a music application.
Playing music (fig. 3A-3B)
As shown in fig. 3A, the electronic device 100 may receive an input operation (e.g., a click) of an icon of a music application in the main interface by a user, and in response to the input operation by the user, the electronic device 100 may display the user interface 320 shown in fig. 3B. The user interface 320 is the main interface of the music application. The user may select to play music in the user interface 320.
The user interface 320 shown in FIG. 3B is the main interface of the music application. The user interface 320 includes a search icon 1001, a my control 1002, a music control 1003, a discovery control 1004, a hot station control 1005, and a collection control 1007. Below the hot station control 1005, a list 1006 of the most popular songs in the music application is presented. As shown in FIG. 3B, the first song in the list is named AAA, the second song is named BBB, the third song is named CCC, and the fourth song is named DDD.
The collection control 1007 may, in response to the user's click, add the song to the user's collection.
As shown in fig. 3B, the electronic device 100 may receive an input operation (e.g., a single click) by a user for an icon of a song title "DDD" in the user interface 320, and in response to the input operation by the user, the electronic device 100 may display the audio playback interface shown in fig. 3C.
As shown in fig. 3C, the audio playing interface may include controls such as a music play/pause button, a switch next button, a switch previous button, a music play progress bar, a music download button, a music share button, and the like.
Playing music dynamic effect (FIG. 3C-FIG. 3E)
While the electronic device 100 plays music, the electronic device 100 may display a music dynamic effect on the audio playing interface. For example, the electronic device 100 may display the music dynamic effects shown in FIG. 3C to FIG. 3E in sequence.
At different moments when the electronic device 100 plays music, the music dynamic effect displayed by the electronic device 100 is different. After the electronic device 100 displays the music dynamic effect shown in FIG. 3E, the electronic device 100 may further display the music dynamic effects shown in FIG. 3C, FIG. 3D, and FIG. 3E again.
It should be noted that, between fig. 3C and fig. 3D, between fig. 3D and fig. 3E, 0 or more dynamic effect diagrams may be further included, which is not limited in the present application.
Optionally, the music dynamic effect is not limited to the petal dynamic effect shown in FIGS. 3C-3E; the electronic device 100 may display other music dynamic effects. For example, the user may select one music dynamic effect from a plurality of music dynamic effects, so that the electronic device 100 plays the dynamic effect selected by the user when displaying the music playing interface.
For another example, the music dynamic effects preset for different music may be different, or they may be the same.
Next, a specific implementation of how the electronic device 100 displays the dynamic effects will be described.
Fig. 4A shows a schematic diagram of an electronic device 100 extracting and saving audio features of an audio file.
As shown in FIG. 4A, the electronic device 100 includes a first application and an audio feature extraction engine. In some embodiments, the audio feature extraction engine may also be referred to as a sound visualization engine.
S401, the first application receives a user operation and obtains a first audio file to be played.
The first audio file may be audio stored in the electronic device 100.
For example, the first application may be a setting application, and the first audio file may be the user-set ringtone 2 shown in FIGS. 1A-1D. After the electronic device 100 receives an incoming call request sent by another device, the electronic device 100 may play ringtone 2.
For example, the first application may be a clock application, and the first audio file may be the user-set alarm ringtone shown in FIGS. 2A-2C. After the time reaches the start time of the alarm clock, for example, when the time reaches 6:30 am, the electronic device 100 may play the alarm ringtone.
The first application may be a music application and the first audio file may be music 1 downloaded by the user as shown in fig. 3A-3C, for example. When the first application receives a user operation to play music 1 in the music application, the first application may play music 1. Music 1 may be cached music.
S402, the first application obtains a first storage path of the first audio file.
S403, the first application sends the first storage path of the first audio file to the audio feature extraction engine.
After the first application stores the first audio file, the first application may obtain a first storage path of the first audio file and send the first storage path of the first audio file to the audio feature extraction engine.
S404, the audio feature extraction engine acquires the first audio file based on the first storage path of the first audio file.
After receiving the first storage path of the first audio file sent by the first application, the audio feature extraction engine may obtain the first audio file based on the first storage path of the first audio file.
Alternatively, S403 and S404 may not be executed, and after the first application obtains the first storage path of the first audio file, the first application may obtain the first audio file based on the first storage path of the first audio file, and send the first audio file to the audio feature extraction engine. The audio feature extraction engine may directly obtain the first audio file.
S405, the audio feature extraction engine performs feature extraction on the first audio file to obtain a first feature file corresponding to the first audio file.
The first audio file may be a formatted audio file, such as a WAV format, an MP3 format, an MP4 format, a 3GP format, and the like.
After obtaining the first audio file, the audio feature extraction engine may decode the first audio file, which is in a preset format, to obtain pulse code modulation (PCM) data, for example, first PCM data. The first PCM data is discrete data, and the audio feature extraction engine may perform feature extraction on this discrete data.
The audio feature extraction engine may perform feature extraction on the first PCM data based on a preset algorithm to obtain the first feature file. The preset algorithm may be designed based on the perceptual characteristics of the human ear to sound.
Audio features include, but are not limited to, loudness (also called sound intensity), the audio moments corresponding to the loudness values, and the like.
For example, if the first audio file is a file of a first duration, the first PCM data is also of the first duration, and the first feature file obtained after the audio feature extraction engine performs feature extraction on the first PCM data is also of the first duration.
The first profile may represent audio characteristics of the audio at different times in the first audio file, such as sound intensity characteristics of the audio at different times.
Fig. 4B shows a schematic view of a first profile.
As shown in fig. 4B, the duration of the first profile is a first duration, for example, the first duration may be 60s, and the first profile further includes corresponding sound intensities at different times.
The sound intensities corresponding to the audio at some moments are shown in FIG. 4B.
For example, the sound intensity of the audio at 0s is 40 db, the sound intensity of the audio at 5s is 42 db, the sound intensity of the audio at 10s is 75 db, the sound intensity of the audio at 15s is 80 db, the sound intensity of the audio at 20s is 38 db, the sound intensity of the audio at 25s is 78 db, the sound intensity of the audio at 30s is 80 db, the sound intensity of the audio at 35s is 43 db, and the sound intensity of the audio at 60s is 36 db. Fig. 4B shows only the sound intensities corresponding to part of the time audio, and the first profile may also include the sound intensities corresponding to other more time audio.
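As an illustration of how per-moment sound intensities such as these could be derived, the sketch below computes a rough RMS-based decibel level per window of PCM samples; the window length, 16-bit sample range, and dB scaling are assumptions, since the preset algorithm is not specified in this passage.

```python
# Sketch (illustrative assumptions): derive a per-moment loudness profile from decoded
# PCM samples using a simple RMS-to-decibel estimate.
import math

def loudness_profile(pcm_samples, sample_rate=44100, window_s=1.0):
    """pcm_samples: signed 16-bit integers; returns [(time_s, approx_db), ...]."""
    window = int(sample_rate * window_s)
    profile = []
    for start in range(0, len(pcm_samples), window):
        chunk = pcm_samples[start:start + window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        level = 20 * math.log10(max(rms, 1.0) / 32768.0) + 96.0  # rough dB scale
        profile.append((start / sample_rate, round(level, 1)))
    return profile
```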
S406, the audio feature extraction engine stores the first feature file.
The audio feature extraction engine may save the first feature file after deriving the first feature file based on the first audio file. After the audio feature extraction engine saves the first feature file, the audio feature extraction engine may obtain a storage path for the first feature file.
S407, the audio feature extraction engine sends a confirmation message to the first application.
After the audio feature extraction engine saves the first feature file or obtains the first feature file, the audio feature extraction engine may send a confirmation message to the first application, where the confirmation message is used to inform the first application that the audio feature extraction engine has obtained the first feature file corresponding to the first audio file.
Optionally, the audio feature extraction engine further sends the storage path of the first feature file to the first application. In one possible implementation, the storage path of the first profile may be sent to the first application carried in an acknowledgement message. In other possible implementations, the storage path of the first profile may also be sent to the first application independently of the acknowledgement message.
Optionally, the storage path of the first feature file corresponding to the first audio file may also be sent by the first application to the audio feature extraction engine. Such that the audio feature extraction engine may store the first feature file based on a storage path of the first feature file sent by the first application.
Fig. 5 shows a schematic diagram of another electronic device 100 extracting and saving audio features of an audio file.
S501, a first application receives user operation and obtains a first audio file to be played.
S502, the first application obtains a first storage path of the first audio file.
For descriptions of S501 and S502, reference may be made to descriptions of S401 and S402, and the description of the present application is not repeated here.
S503, the first application applies for a second storage path of the feature file corresponding to the first audio file.
The first application applies for a second storage path of the feature file corresponding to the first audio file, and the audio feature extraction engine may store the feature file corresponding to the first audio file in a storage area corresponding to the second storage path.
S504, the first application sends a first storage path of the first audio file and a second storage path of the feature file corresponding to the first audio file to the audio feature extraction engine.
S505, the audio feature extraction engine acquires the first audio file based on the storage path of the first audio file.
S506, the audio feature extraction engine performs feature extraction on the first audio file to obtain a first feature file corresponding to the first audio file.
For descriptions of S505 and S506, reference may be made to descriptions of S404 and S405, and the present application is not repeated here.
S507, the audio feature extraction engine stores the first feature file in a storage area corresponding to the second storage path.
After the audio feature extraction engine obtains the first feature file based on the first audio file, the audio feature extraction engine may store the first feature file in the storage area corresponding to the second storage path. The second storage path is sent by the first application to the audio feature extraction engine.
S508, the audio feature extraction engine sends a confirmation message to the first application.
After the audio feature extraction engine saves the first feature file or obtains the first feature file, the audio feature extraction engine may send a confirmation message to the first application, where the confirmation message is used to inform the first application that the audio feature extraction engine has obtained the first feature file corresponding to the first audio file.
In this way, before the electronic device 100 plays the first audio file, the electronic device 100 may compute in advance the first feature file corresponding to the first audio file. When the electronic device 100 plays the first audio file, it can determine whether to modify the dynamic effect parameter directly based on the first feature file, without computing the first feature file corresponding to the first audio file in real time, so that the dynamic effect can correspond to the audio.
Fig. 6 shows a schematic diagram of an electronic device 100 playing audio and exhibiting dynamic effects.
As shown in fig. 6, the electronic device 100 includes a first application, an audio player, an audio feature extraction engine, and a dynamic effect engine. In some embodiments, the audio feature extraction engine may also be referred to as a sound visualization engine.
S601, a first application starts to play a first audio file when a preset condition is met.
In some embodiments, the first application may be a clock application, the first audio file may be an alarm bell, the preset condition may be that the time reaches a start time of the alarm clock, and the clock application may start playing the first audio file.
In other embodiments, the first application may be a setting application, the first audio file may be an incoming call ringtone (e.g., ringtone 2), the preset condition may be that the electronic device 100 receives an incoming call request sent by another device, and the setting application may start playing the first audio file.
In other embodiments, the first application may also be a music application and the first audio file may also be music 1 downloaded by the user. The preset condition may be that the first application receives a user operation to play music 1 in the music application, and the first application may play music 1. Music 1 may be music downloaded in real time or may be music downloaded and cached in advance.
S602, the first application sends a first storage path of the first audio file to the audio player.
S603, the audio player acquires the first audio file based on the first storage path of the first audio file.
S604, the audio player plays the first audio file.
When the first application determines that the preset condition is met and the first audio file needs to be played, the first application may obtain the first storage path of the first audio file and send it to the audio player. After receiving the first storage path of the first audio file sent by the first application, the audio player may acquire the first audio file based on the first storage path and start playing the first audio file. Illustratively, the duration of the first audio file may be a first duration (e.g., 60 s), and the audio player may play the first audio file repeatedly until a user operation to stop playing the first audio file is received. Alternatively, if no user operation to stop playback is received, the audio player may stop playing the first audio file after the playback duration reaches a maximum duration.
Alternatively, instead of S602 and S603, the first application may acquire the first audio file based on the first storage path of the first audio file and then send the first audio file to the audio player, so that the audio player directly obtains the first audio file and begins playing it.
S605, the first application sends a play dynamic effect request to the dynamic effect engine.
When the first application determines that the preset condition is met and the first audio file needs to be played, in order to make playback of the first audio file on the electronic device 100 more engaging, the first application may, while the audio player plays the first audio file, play the dynamic effect corresponding to the first audio file through the dynamic effect engine, thereby improving both the auditory and the visual experience of the user.
S606, the first application sends a second storage path of the first feature file to the audio feature extraction engine.
S607, the audio feature extraction engine acquires the first feature file based on the second storage path.
When the first application determines that the preset condition is met and the first audio file needs to be played, the first application may also send the second storage path of the first feature file to the audio feature extraction engine. After receiving the second storage path sent by the first application, the audio feature extraction engine may obtain the first feature file based on it. The audio feature extraction engine may then determine whether to modify the dynamic effect parameters based on the first feature file corresponding to the first audio file.
Optionally, in S606 and S607, the first application may also acquire the first feature file based on the second storage path, and then send the first feature file to the audio feature extraction engine, where the audio feature extraction engine may directly acquire the first feature file.
S605 and S606 may be executed simultaneously with S602, or may be executed before S602; this is not limited in the present application.
S608, the audio feature extraction engine acquires preset dynamic parameters of the first animation associated with the first audio file.
During the process of playing the first audio file, the audio feature extraction engine may acquire the first animation corresponding to the first audio file and obtain the preset dynamic effect parameters of the first animation. The preset motion effect parameters of the first animation may include the switching speed between two adjacent image frames in the animation, and the like. The faster the switching speed between two adjacent image frames, the shorter the switching time between them and the faster the animation is played; the slower the switching speed between two adjacent image frames, the longer the switching time between them and the slower the animation is played.
The preset motion effect parameters of the first animation may further include the transparency of the image frames, the image frame size, the position of the center point of the image frame, and the like. The following embodiments of the present application are described by taking the switching speed between two adjacent image frames in the animation as an example.
The preset dynamic parameters of the first animation may be preset, independent of the audio file. When the electronic device 100 plays the first animation based on the preset dynamic parameters, the dynamic parameters of each frame of image frame in the first animation are preset, and the electronic device 100 only needs to switch the image frames in the first animation based on the dynamic parameters of each frame of image frame.
For example, as shown in fig. 7A, the first profile corresponding to the first audio file may be a file of a first time length (e.g., 60 s). The first profile may include the sound intensities corresponding to the audio at different times. The electronic device 100 may play the first animation while playing the first audio file. The first animation may include a plurality of image frames, each of which has a preset motion effect parameter. As shown in fig. 7A, the audio at each moment corresponds to a preset dynamic effect parameter.
The dynamic parameters of the image frames corresponding to the audios at different moments may be all the same, may be all different, or may be partially the same, which is not limited in the present application.
When the electronic device 100 plays the first animation, for example, before playing the image frame at the time 1s, the electronic device 100 may acquire the preset motion effect parameter at the time 1s, and switch to display the image frame at the time 1s based on the preset motion effect parameter at the time 1 s. Before the electronic device 100 plays the image frame at the time of 10s, the electronic device 100 may acquire the preset motion parameter at the time of 10s, and switch and display the image frame at the time of 10s based on the preset motion parameter at the time of 10 s.
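As a concrete illustration of the relationship shown in fig. 7A, the following sketch pairs each audio moment with its sound intensity and its preset dynamic effect parameter and plays the animation purely from the preset parameters. It is only an illustration: the FrameEntry structure, the field names and the example values are assumptions, not the actual feature-file format used by the electronic device 100.

from dataclasses import dataclass
from typing import List

@dataclass
class FrameEntry:
    time_ms: int            # audio moment the image frame corresponds to
    intensity_db: float     # sound intensity recorded in the first feature file
    preset_param_ms: float  # preset dynamic effect parameter (here a switching time, in ms)

def play_with_preset_params(entries: List[FrameEntry]) -> None:
    """Switch image frames using only the preset parameters, independent of the audio."""
    for e in entries:
        print(f"t={e.time_ms}ms: switch to this image frame over {e.preset_param_ms}ms")

# A few sampled moments of a 60 s file, loosely following the fig. 7A example.
profile = [FrameEntry(0, 40.0, 20.0), FrameEntry(1000, 55.0, 20.0), FrameEntry(10000, 62.0, 20.0)]
play_with_preset_params(profile)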
Alternatively, the different audio files may correspond to the same animation, e.g., both are first animations.
Alternatively, different audio files may correspond to different animations, e.g., a first audio file may correspond to a first animation and a second audio file may correspond to a second animation. The electronic device 100 may store a plurality of different animations, and the user may select one animation (e.g., the first animation) from the plurality of different animations when setting the first audio file, so that the electronic device 100 may play the first animation simultaneously when playing the first audio file, and the animation includes a plurality of frames.
Wherein, the animation is associated with preset dynamic parameters.
In one possible implementation, the preset motion effect parameters of different image frames in the first animation are the same: the first animation includes multiple image frames, and the switching time between any two adjacent image frames in the first animation is equal.
For example, the switching time between the two frames of fig. 1E and 1F is equal to the switching time between the two frames of fig. 1F and 1G. For another example, the switching time between the two frames of fig. 1H and 1I is equal to the switching time between the two frames of fig. 1I and 1J.
For example, the switching time between the two frames of fig. 2D and 2E is equal to the switching time between the two frames of fig. 2E and 2F. For another example, the switching time between the two frames of fig. 2G and 2H is equal to the switching time between the two frames of fig. 2H and 2I.
In other possible implementations, the preset motion effect parameters of different image frames in the first animation are different: the first animation includes multiple image frames, and the switching time between adjacent image frames decreases from front to back in the first animation.
For example, the switching time between the two frames of fig. 1E and 1F is longer than the switching time between the two frames of fig. 1F and 1G. For another example, the switching time between the two frames of fig. 1H and 1I is longer than the switching time between the two frames of fig. 1I and 1J.
For example, the switching time between the two frames of fig. 2D and 2E is longer than the switching time between the two frames of fig. 2E and 2F. For another example, the switching time between the two frames of fig. 2G and 2H is longer than the switching time between the two frames of fig. 2H and 2I.
In other possible implementations, the preset motion effect parameters of different image frames in the first animation are different: the first animation includes multiple image frames, and the switching time between adjacent image frames increases from front to back in the first animation.
For example, the switching time between the two frames of fig. 1E and 1F is smaller than the switching time between the two frames of fig. 1F and 1G. For another example, the switching time between the two frames of fig. 1H and 1I is smaller than the switching time between the two frames of fig. 1I and 1J.
For example, the switching time between the two frames of fig. 2D and 2E is smaller than the switching time between the two frames of fig. 2E and 2F. For another example, the switching time between two frames of image frames of fig. 2G and 2H is smaller than the switching time between two frames of image frames of fig. 2H and 2I.
It should be noted that the animation effects of different animations are different, and the preset animation parameters of different animations may be the same or different.
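The three switching-time behaviours described above (equal, decreasing and increasing) can be sketched as simple preset-parameter generators; the function name, the base value and the step are placeholders chosen only for illustration.

from typing import List

def preset_switch_times(num_frames: int, policy: str,
                        base_ms: float = 20.0, step_ms: float = 2.0) -> List[float]:
    """Generate switching times between adjacent image frames under one of the three policies."""
    n = num_frames - 1
    if policy == "equal":        # equal switching time between any two adjacent image frames
        return [base_ms] * n
    if policy == "decreasing":   # the animation plays faster and faster
        return [base_ms - i * step_ms for i in range(n)]
    if policy == "increasing":   # the animation plays slower and slower
        return [base_ms + i * step_ms for i in range(n)]
    raise ValueError(f"unknown policy: {policy}")

print(preset_switch_times(5, "decreasing"))  # [20.0, 18.0, 16.0, 14.0]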
S609, the audio feature extraction engine sequentially traverses the first feature file to determine whether the audio intensity is greater than a preset audio intensity.
As can be seen from the embodiment of fig. 4B, the audio feature extraction module obtains the first feature file corresponding to the first audio file. The first profile includes the sound intensities corresponding to the audio at different times. The audio feature extraction engine may traverse the first feature file sequentially to determine the relationship between the audio intensity at each moment in the first audio file and the preset audio intensity.
When the audio intensity is greater than the preset audio intensity, the audio feature extraction engine may update the dynamic effect parameter at each audio moment whose audio intensity is greater than the preset audio intensity to obtain an updated dynamic effect parameter, and the image frames are then switched and displayed based on the updated dynamic effect parameters. In this way, the playing effect of the animation is associated with the characteristics of the audio file, strengthening the association between the animation and the audio file.
When the audio intensity is smaller than the preset audio intensity, the audio feature extraction engine may switch and display the image frames based on the preset dynamic effect parameters at that audio moment.
FIG. 7B illustrates a schematic diagram of an audio feature extraction engine obtaining updated dynamic parameters.
As shown in fig. 7B, the audio feature extraction engine may use a window (also called a timer) of a fixed duration (e.g., 20 ms) to traverse the first feature file from front to back in time order and determine the audio moments at which the audio intensity is greater than the preset audio intensity. Specifically, the audio feature extraction engine first traverses the 1 ms to 20 ms portion of the first feature file and determines whether it contains an audio moment whose audio intensity is greater than the preset audio intensity; if not, it traverses the 21 ms to 40 ms portion and makes the same determination; if that portion does not contain such a moment either, it traverses the 41 ms to 60 ms portion of the first feature file, and so on. When the traversal reaches the 601 ms to 620 ms portion of the first feature file, the engine determines whether this portion contains an audio moment whose audio intensity is greater than the preset audio intensity; if it does, the audio feature extraction engine can identify that moment, for example the 601 ms audio moment. The audio feature extraction engine may obtain the audio intensity at the 601 ms audio moment (e.g., 75 db) and then obtain updated dynamic effect parameter one based on that intensity. When the image frame corresponding to the 601 ms audio moment is to be displayed, the dynamic effect engine may switch to and display that image frame based on updated dynamic effect parameter one. The audio feature extraction engine then traverses the feature file after 620 ms in turn and determines whether to modify the dynamic effect parameters.
As shown in fig. 7B, in the above traversal manner, the audio feature extraction engine may obtain updated dynamic parameters two based on the audio intensity at the 901ms audio time. When displaying the image frame corresponding to the 901ms audio time, the dynamic effect engine can switch and display the image frame corresponding to the 901ms audio time based on the updated dynamic effect parameter two.
The audio feature extraction engine may derive updated dynamic parameters three based on the audio strength at 1501ms audio time. When displaying an image frame corresponding to 1501ms of audio time, the motion effect engine may switch to display an image frame corresponding to 1501ms of audio time based on the updated motion effect parameter three.
The audio feature extraction engine may derive updated dynamic effect parameter four based on the audio intensity at the 1801 ms audio time. When displaying the image frame corresponding to the 1801 ms audio time, the motion effect engine may switch to display that image frame based on updated dynamic effect parameter four.
When the audio intensity is greater than the preset audio intensity, S610 to S612 may be performed.
When the audio intensity is less than the preset audio intensity, S613-S614 may be performed.
The shorter the window or timer is, the more accurately the playing progress of the first audio file is matched with the playing progress of the first animation, and the smaller the error is.
Alternatively, the window or timer time may be adjusted, and the application is illustrated with 20ms only.
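The fixed-window traversal of fig. 7B could look roughly like the following sketch, which scans the first feature file in 20 ms windows and reports each audio moment whose intensity exceeds the preset audio intensity. The dictionary representation of the feature file and the 70 db threshold are assumptions made only for illustration.

from typing import Dict, Iterator, Tuple

def traverse_feature_file(profile: Dict[int, float], window_ms: int = 20,
                          preset_intensity_db: float = 70.0) -> Iterator[Tuple[int, float]]:
    """Scan the feature file window by window and yield moments louder than the preset intensity."""
    if not profile:
        return
    duration_ms = max(profile)
    for start in range(1, duration_ms + 1, window_ms):
        window = [t for t in sorted(profile) if start <= t < start + window_ms]
        for t in window:
            if profile[t] > preset_intensity_db:
                yield t, profile[t]   # this moment needs an updated dynamic effect parameter
                break                 # at most one update per window in this sketch

# Sparse example loosely following fig. 7B.
example = {601: 75.0, 901: 80.0, 1501: 78.0, 1801: 80.0}
for t, db in traverse_feature_file(example):
    print(f"update the dynamic effect parameter at {t} ms (intensity {db} db)")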
S610, the audio feature extraction engine determines updated dynamic parameters based on the audio intensity.
When the audio intensity is greater than the preset audio intensity, the audio feature extraction engine may determine the updated dynamic effect parameter based on the audio intensity.
Alternatively, the audio feature extraction engine may convert the audio intensity into the updated dynamic effect parameter through equation (1): Y = Ymin + (X - Xmin) × (Ymax - Ymin) / (Xmax - Xmin) (1), where Y represents the updated dynamic effect parameter, X represents the audio intensity, Ymax represents the maximum dynamic effect parameter, Ymin represents the minimum dynamic effect parameter, Ymax and Ymin are preset, Xmax represents the maximum audio intensity value, Xmin represents the minimum audio intensity value, and Xmax and Xmin are preset.
For example, in the case where the motion effect parameter is the switching time between any two adjacent image frames in the first animation, Ymax may be 30, Ymin may be 5, Xmax may be 100, and Xmin may be 0.
According to equation (1), when the audio intensity is greater than the preset audio intensity, the audio feature extraction engine can map the audio intensity to a dynamic effect parameter, obtain the updated dynamic effect parameter based on the audio intensity, and then switch to and display the image frame corresponding to that audio intensity based on the updated dynamic effect parameter.
For example, the audio feature extraction engine may obtain updated dynamic effect parameter one based on an audio intensity of 75 db through equation (1), updated dynamic effect parameter two based on an audio intensity of 80 db, updated dynamic effect parameter three based on an audio intensity of 78 db, and updated dynamic effect parameter four based on an audio intensity of 80 db. Updated dynamic effect parameters one, two and three are different from one another: updated dynamic effect parameter one is smaller than updated dynamic effect parameter three, which in turn is smaller than updated dynamic effect parameter two.
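Assuming equation (1) is the linear mapping reconstructed above, the following sketch shows how the example intensities of 75 db, 78 db and 80 db yield updated dynamic effect parameters in the order described; the clamping to [Xmin, Xmax] is an added assumption rather than a stated part of the method.

def intensity_to_param(x_db: float, y_min: float = 5.0, y_max: float = 30.0,
                       x_min: float = 0.0, x_max: float = 100.0) -> float:
    """Map an audio intensity to an updated dynamic effect parameter (assumed linear form of equation (1))."""
    x = min(max(x_db, x_min), x_max)  # clamp to the preset intensity range (added assumption)
    return y_min + (x - x_min) * (y_max - y_min) / (x_max - x_min)

# 75 db < 78 db < 80 db gives parameter one < parameter three < parameter two, as in the text.
print(intensity_to_param(75.0), intensity_to_param(78.0), intensity_to_param(80.0))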
S611, the audio feature extraction engine sends the updated dynamic effect parameters to the dynamic effect engine.
S612, the dynamic effect engine switches and displays the first dynamic effect image frame based on the updated dynamic effect parameters.
After the audio feature extraction engine obtains updated dynamic parameters based on the audio strength, the audio feature extraction engine may send the updated dynamic parameters to the dynamic engine. The motion effect engine may switch display of the first motion effect image frame based on the updated motion effect parameters.
For example, when the motion effect parameters of different image frames in the first animation are the same, the first animation includes multiple image frames and the switching time between any two adjacent image frames is equal. If the dynamic effect engine modifies the dynamic effect parameter corresponding to the image frame of fig. 1G to updated dynamic effect parameter one, then the switching time between the two image frames of fig. 1F and fig. 1G after the modification is longer than the switching time between the two image frames of fig. 1E and fig. 1F before the modification.
For another example, when the motion effect parameters of different image frames in the first animation are different, the first animation includes multiple image frames and the switching time between adjacent image frames decreases from front to back. If the dynamic effect engine modifies the dynamic effect parameter corresponding to the image frame of fig. 1G to updated dynamic effect parameter one, the switching time between the two image frames of fig. 1F and fig. 1G after the modification is longer than the switching time between those same two image frames before the modification.
For another example, when the motion effect parameters of different image frames in the first animation are different, the first animation includes multiple image frames and the switching time between adjacent image frames increases from front to back. If the dynamic effect engine modifies the dynamic effect parameter corresponding to the image frame of fig. 1G to updated dynamic effect parameter one, the switching time between the two image frames of fig. 1F and fig. 1G after the modification is shorter than the switching time between those same two image frames before the modification.
S613, the audio feature extraction engine sends the preset dynamic effect parameters to the dynamic effect engine.
S614, the dynamic effect engine switches and displays the second dynamic effect image frame based on the preset dynamic effect parameters.
When the audio intensity is smaller than the preset audio intensity, the audio feature extraction engine may leave the dynamic effect parameters unmodified, and the second dynamic effect image frame is switched and displayed based on the preset dynamic effect parameters.
Optionally, in other embodiments, when the audio intensity is smaller than the preset audio intensity, the audio feature extraction engine further determines whether the audio intensity is smaller than a minimum audio intensity. If the audio intensity is smaller than the minimum audio intensity, the audio feature extraction engine may also obtain an updated dynamic effect parameter based on the audio intensity and send it to the dynamic effect engine, and the dynamic effect engine switches and displays the fourth dynamic effect image frame based on the updated dynamic effect parameter.
S609 to S614 are performed periodically until the audio player stops playing the first audio file.
Optionally, before the audio feature extraction engine performs S609, it needs to determine whether the first feature file corresponding to the first audio file has been acquired. In some embodiments, the electronic device 100 periodically clears its memory and deletes first feature files that have not been used for a long time. In other embodiments, if the electronic device 100 did not previously store the first audio file and the first audio file is downloaded in real time (for example, real-time downloaded music), the electronic device 100 has not previously calculated a first feature file for it, so the first feature file corresponding to the first audio file cannot be obtained. Therefore, before executing S609, the audio feature extraction engine may first determine whether the first feature file corresponding to the first audio file has been acquired, so as to avoid an execution failure.
Fig. 8 shows a schematic diagram of a method for the audio feature extraction engine to obtain the first feature file.
S801, the audio feature extraction engine reads a first audio file.
S802, the audio feature extraction engine needs to determine whether a first feature file is acquired.
This is because, in some embodiments, the electronic device 100 periodically clears its memory and deletes first feature files that have not been used for a long time. In other embodiments, if the electronic device 100 did not previously store the first audio file and the first audio file is downloaded in real time (for example, real-time downloaded music), the electronic device 100 has not previously calculated a first feature file for it, so the first feature file corresponding to the first audio file cannot be obtained. Therefore, before executing S609, the audio feature extraction engine needs to determine whether the first feature file has been acquired.
In the case where the audio feature extraction engine acquires the first feature file, S806 is performed.
In the case where the audio feature extraction engine does not acquire the first feature file, S803 is performed.
S803, the audio feature extraction engine acquires the first audio file.
S804, the audio feature extraction engine decodes the first audio file to obtain first PCM data.
S805, the audio feature extraction engine performs feature extraction on the first PCM data to obtain a first feature file corresponding to the first audio file.
The first audio file may be a formatted audio file, such as a WAV format, an MP3 format, an MP4 format, a 3GP format, and the like.
After the first audio file is obtained, the audio feature extraction module may decode the first audio file in a preset format to obtain pulse code modulation (pulse code modulation, PCM) data, such as first PCM data. The first PCM data is discrete data. The audio feature extraction module may perform feature extraction on the discrete data.
The audio feature extraction module may perform feature extraction on the first PCM data based on a preset algorithm, to obtain a first feature file. The preset algorithm may be designed based on the perceptual characteristics of the human ear to sound.
Audio features include, but are not limited to, loudness (or called sound intensity), time instants to which loudness corresponds, and the like.
For example, the first audio file is a file of a first duration, the first PCM data is also a file of the first duration, and the first feature file obtained after the audio feature extraction module performs feature extraction on the first PCM data is also a file of the first duration.
The first profile may represent audio characteristics of the audio at different times in the first audio file, such as loudness characteristics of the audio at different times.
For example, the first profile may be the profile shown in FIG. 4B. After the audio feature extraction engine obtains the first feature file corresponding to the first audio file, the audio feature extraction engine may save the first feature file and obtain a storage path of the first feature file. The audio feature extraction engine also needs to send the storage path of the first feature file to the first application so as to inform the first application of the storage position of the first feature file corresponding to the first audio file.
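As a rough illustration of S804 and S805, the following sketch computes a loudness value for each fixed window of decoded PCM samples. The RMS-to-dBFS calculation is only a stand-in for the preset, ear-perception-based algorithm, so its values differ in scale from the intensities quoted elsewhere; 16-bit PCM samples are assumed.

import math
from typing import List, Tuple

def extract_loudness_profile(pcm: List[int], sample_rate: int,
                             window_ms: int = 20) -> List[Tuple[int, float]]:
    """Return (time_ms, loudness_db) pairs, one per fixed window of 16-bit PCM samples."""
    samples_per_window = max(1, sample_rate * window_ms // 1000)
    profile = []
    for start in range(0, len(pcm), samples_per_window):
        chunk = pcm[start:start + samples_per_window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        loudness_db = 20 * math.log10(rms / 32768.0) if rms > 0 else float("-inf")
        profile.append((start * 1000 // sample_rate, loudness_db))
    return profile

# Tiny synthetic example: one second of silence followed by a constant burst.
pcm = [0] * 44100 + [20000] * 44100
print(extract_loudness_profile(pcm, 44100)[0], extract_loudness_profile(pcm, 44100)[50])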
S806, the audio feature extraction engine executes S609-S614 in the embodiment of FIG. 6.
After the audio feature extraction engine obtains the first feature file corresponding to the first audio file, the audio feature extraction engine may execute S609-S614 in the embodiment of fig. 6, traverse the first feature file in sequence, and determine a relationship between the audio intensity at each moment in the first audio file and the preset audio intensity.
When the audio intensity is greater than the preset audio intensity, the audio feature extraction engine may update the dynamic effect parameter at each audio moment whose audio intensity is greater than the preset audio intensity to obtain an updated dynamic effect parameter, and the image frames are then switched and displayed based on the updated dynamic effect parameters. In this way, the playing effect of the animation is associated with the characteristics of the audio file: the higher the audio intensity in the first audio file, the faster the animation is played, and the lower the audio intensity, the slower the animation is played, which strengthens the association between the animation and the audio file.
When the audio intensity is smaller than the preset audio intensity, the audio feature extraction engine may switch and display the image frames based on the preset dynamic effect parameters at that audio moment.
Specifically, reference may be made to the descriptions in the embodiments of S609 to S614, and the description of the present application is omitted here.
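The overall flow of fig. 8 (reuse a stored first feature file, otherwise compute and save one) can be sketched as follows, reusing the extract_loudness_profile sketch above. The JSON storage format, and the assumption that decoding to PCM (S803-S804) has already been done by a platform decoder, are illustrative choices rather than the actual implementation.

import json
import os
from typing import List, Tuple

def get_or_build_profile(feature_path: str, pcm: List[int],
                         sample_rate: int = 44100) -> List[Tuple[int, float]]:
    """Sketch of the fig. 8 flow: reuse a saved first feature file, otherwise extract and save one."""
    if os.path.exists(feature_path):                       # S802: the first feature file was found
        with open(feature_path) as f:
            return [tuple(item) for item in json.load(f)]  # S806: continue with S609-S614
    profile = extract_loudness_profile(pcm, sample_rate)   # S805: feature extraction on the PCM data
    with open(feature_path, "w") as f:
        json.dump(profile, f)                              # save, then report the path back to the first application
    return profile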
Fig. 9 is a flow chart of an audio playing method provided by the application.
S901, under the condition that a preset condition is met, the electronic equipment acquires a first audio file and a first animation, wherein the first animation comprises a plurality of frame image frames; the electronic device plays the first audio file and plays the first animation.
In some embodiments, different audio files may correspond to the same animation, or may correspond to different animations.
Alternatively, the electronic device may also match different animations based on the audio characteristics of the audio. Or the audio-associated animation may be preset.
S902, the electronic device acquires a first characteristic file corresponding to the first audio file, wherein the first characteristic file comprises audio intensities of the first audio file at different moments.
S903, under the condition that the audio intensity at the first moment is greater than the first preset audio intensity, the electronic device determines a first dynamic parameter based on the audio intensity at the first moment.
Optionally, the first audio file and the first feature file have the same duration. The electronic device may play the first audio file and the first animation simultaneously. When the first audio file starts to be played, a window of fixed duration may be used to traverse the first feature file from front to back in time order, so as to determine the first moment at which the audio intensity is greater than the first preset audio intensity, control the playing speed of the animation accordingly, and achieve sound-picture coordination. The shorter the fixed duration of the window, the stronger the synchronization between the first audio file and the first animation, and the smaller the error.
Alternatively, the first time may be a time point or a period of time. The audio intensity at the first time may be the audio intensity at a certain time point or may be the average audio intensity over a period of time.
By way of example, the first time may be 10ms, 15ms, 25ms, 30ms, etc. as shown in FIG. 7A.
The first dynamic parameter may be an updated dynamic parameter one determined based on 75 db at 10ms as shown in fig. 7B.
The first dynamic parameter may also be an updated dynamic parameter two determined based on 80 db at 15ms shown in fig. 7B.
The first dynamic parameter may also be an updated dynamic parameter three, which is determined based on 78 db at 25ms as shown in fig. 7B.
The first dynamic parameter may also be an updated dynamic parameter four determined based on 80 db at 30ms shown in fig. 7B.
The switching speed between two adjacent frames of image frames in the first animation is preset. That is, the preset motion parameters of each image frame in the first animation are preset, and the preset motion parameters of each image frame are used for controlling the switching speed between the image frame and the previous image frame by the electronic device.
S904, the electronic equipment plays a first image frame in a first animation based on a first dynamic parameter, wherein the first dynamic parameter is used for controlling the switching speed when a second image frame is switched to the first image frame, and the second image frame is one frame of image frame before the first image frame; the first dynamic effect parameter is larger than the preset dynamic effect parameter at the first moment.
In one possible implementation, the method further includes: under the condition that the audio intensity at the second moment is smaller than the first preset audio intensity, the electronic equipment acquires preset dynamic effect parameters at the second moment; the electronic equipment plays a second image frame based on a preset dynamic parameter at a second moment, wherein the preset dynamic parameter at the second moment is used for controlling the switching speed when a third image frame is switched to the second image frame, and the third image frame is one frame of image frame before the second image frame.
In this way, when the audio intensity of the audio is smaller than the first preset audio intensity, the electronic device may leave the dynamic effect parameter at that moment unmodified and play the image frame at that moment based on the preset dynamic effect parameter.
By way of example, the second time may be 0ms time, 5ms time, 20ms time, 35ms time, 60ms time, etc., as shown in FIG. 7A.
In a possible implementation manner, the electronic device obtains a preset dynamic effect parameter at the second moment, which specifically includes: under the condition that the audio intensity at the second moment is larger than the second preset audio intensity and smaller than the first preset audio intensity, the electronic equipment acquires the preset dynamic effect parameters at the second moment.
In one possible implementation, the method further includes: under the condition that the audio intensity at the third moment is smaller than the second preset audio intensity, the electronic device determines a second dynamic effect parameter based on the audio intensity at the third moment; the electronic device plays a fourth image frame based on the second dynamic effect parameter, wherein the second dynamic effect parameter is used for controlling the switching speed when switching from a fifth image frame to the fourth image frame, and the fifth image frame is one frame of image frame before the fourth image frame; the second dynamic effect parameter is smaller than the preset dynamic effect parameter at the third moment.
In this way, when the audio intensity of the audio is smaller than the second preset audio intensity, the electronic device also needs to update the dynamic effect parameter at that moment to obtain the second dynamic effect parameter, and play the image frame at that moment based on the second dynamic effect parameter. The second dynamic effect parameter is different from, and smaller than, the preset dynamic effect parameter at the third moment.
The larger the audio intensity of the audio is, the faster the playing speed of the animation is, the smaller the audio intensity of the audio is, the slower the playing speed of the animation is, the sound and picture coordination is realized, and the interestingness of the audio playing is improved.
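Putting the three cases together, a minimal sketch of the per-moment parameter selection might look like the following; it reuses the intensity_to_param sketch above and treats the two preset audio intensities as plain thresholds, which is an assumption about how the comparison is organized.

def choose_motion_param(intensity_db: float, preset_param: float,
                        first_preset_db: float, second_preset_db: float) -> float:
    """Select the dynamic effect parameter for one audio moment (sketch of S903 and its variants)."""
    if intensity_db > first_preset_db or intensity_db < second_preset_db:
        # Outside the normal band: derive the parameter from the intensity itself.
        return intensity_to_param(intensity_db)
    # Between the second and the first preset audio intensity: keep the preset parameter.
    return preset_param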
In one possible implementation, in the case that the preset motion effect parameter at the first time and the preset motion effect parameter at the second time are the same, the switching speed when the second image frame is switched to the first image frame is greater than the switching speed when the third image frame is switched to the second image frame.
That is, the first dynamic parameter is greater than the second dynamic parameter.
Optionally, preset dynamic parameters at different moments in the first animation may be the same or different.
In one possible implementation, in the case that the preset motion effect parameter at the second time and the preset motion effect parameter at the third time are the same, the switching speed when the third image frame is switched to the second image frame is greater than the switching speed when the fifth image frame is switched to the fourth image frame. That is, the preset dynamic parameter at the second moment is greater than the second dynamic parameter.
In one possible implementation manner, the electronic device determines the first dynamic effect parameter based on the audio intensity at the first moment, which specifically includes: determining the first dynamic effect parameter according to the formula Y = Ymin + (X - Xmin) × (Ymax - Ymin) / (Xmax - Xmin); where Y represents the first dynamic effect parameter, Ymax represents the maximum dynamic effect parameter value, Ymin represents the minimum dynamic effect parameter value, and Ymax and Ymin are preset; X represents the audio intensity at the first moment, Xmax represents the maximum audio intensity value, Xmin represents the minimum audio intensity value, and Xmax and Xmin are preset.
Thus, when the audio intensity is greater than the first preset audio intensity or the audio intensity is less than the second preset audio intensity, the electronic device can update the dynamic efficiency parameter based on the audio intensity in the mode, so that the sound-picture synergistic effect is realized.
In one possible implementation, before the electronic device plays the first audio file and plays the first animation, the method further includes: the electronic equipment decodes the first audio file to obtain a first PCM file; and the electronic equipment performs feature extraction on the first PCM file to obtain a first feature file corresponding to the first audio file.
In this way, the electronic device may preprocess and save the first profile. When the first animation is played, the electronic equipment does not need to determine the first characteristic file in real time any more so as to improve the synchronization effect of the first audio file and the first animation.
In one possible implementation manner, before the electronic device obtains the first feature file corresponding to the first audio file, the method further includes: under the condition that the electronic equipment determines that the first characteristic file corresponding to the first audio file is not stored locally, the electronic equipment decodes the first audio file to obtain a first PCM file, and performs characteristic extraction on the first PCM file to obtain the first characteristic file corresponding to the first audio file.
In some embodiments, the electronic device may update the stored profile corresponding to the audio. Such as deleting unused profiles over a period of time. The electronic device may determine whether a first profile corresponding to the first audio file is stored locally before playing the first animation. The situation that sound-picture coordination cannot be achieved is avoided.
In one possible implementation, the electronic device includes a motor, the method further comprising: under the condition that the audio intensity at the first moment is larger than a first preset audio intensity, the electronic equipment determines a first vibration frequency of the motor based on the audio intensity at the first moment; the electronic equipment plays a first image frame in a first animation based on a first dynamic parameter and vibrates based on a first vibration frequency through a motor; the first vibration frequency is larger than the preset vibration frequency at the first moment.
The vibration frequency of each image frame in the first animation is preset. The preset vibration frequency of each image frame is used for the electronic device to vibrate by the motor when the image frame is displayed.
By the method, the audio characteristics, the audio dynamic effects and the vibration effects can be associated, the larger the audio intensity of the audio is, the faster the playing speed of the animation is, the faster the vibration speed of the motor is, the three-dimensional synergy of sound, picture and vibration is realized, and the interestingness of audio playing is improved.
In one possible implementation, the method further includes: under the condition that the audio intensity at the second moment is smaller than the first preset audio intensity, the electronic equipment acquires the preset vibration frequency of the motor at the second moment; and the electronic equipment vibrates based on the preset vibration frequency at the second moment through the motor while playing the second image frame based on the preset dynamic parameter at the second moment.
In one possible implementation, the electronic device obtains the preset vibration frequency at the second time when the audio intensity at the second time is greater than the second preset audio intensity and less than the first preset audio intensity.
In one possible implementation, the method further includes: under the condition that the audio intensity at the third moment is smaller than the second preset audio intensity, the electronic equipment determines a second vibration frequency based on the audio intensity at the third moment; and the electronic equipment vibrates based on the second vibration frequency through the motor while playing the fourth image frame based on the second dynamic parameter at the third moment.
Therefore, when the audio intensity of the audio is smaller than the second preset audio intensity, the electronic equipment also needs to update the vibration frequency at the moment to obtain the second vibration frequency, and the electronic equipment vibrates based on the second vibration frequency through the motor while playing the fourth image frame based on the second dynamic parameter at the third moment. The second vibration frequency is smaller than the preset vibration frequency at the third moment.
The larger the audio intensity of the audio is, the faster the playing speed of the animation is, the faster the vibration speed of the motor is, the smaller the audio intensity of the audio is, the slower the playing speed of the animation is, the slower the vibration speed of the motor is, the sound, picture and vibration cooperation is realized, and the interestingness of audio playing is improved.
In one possible implementation manner, the electronic device may also determine the first vibration frequency based on the audio intensity at the first moment, which specifically includes: determining the first vibration frequency according to the formula Z = Zmin + (X - Xmin) × (Zmax - Zmin) / (Xmax - Xmin); where Z represents the first vibration frequency, Zmax represents the maximum vibration frequency, Zmin represents the minimum vibration frequency, and Zmax and Zmin are preset; X represents the audio intensity at the first moment, Xmax represents the maximum audio intensity value, Xmin represents the minimum audio intensity value, and Xmax and Xmin are preset.
Thus, when the audio intensity is greater than the first preset audio intensity or the audio intensity is less than the second preset audio intensity, the electronic equipment can obtain updated vibration frequency based on the audio intensity in the mode, so that the effect of sound-picture-vibration cooperation is achieved.
In one possible implementation, the first audio file is an alarm clock audio, and the preset condition includes a preset start time of the time to reach the alarm clock.
In one possible implementation, the first audio file is an incoming call audio, and the preset condition includes the electronic device receiving an incoming call request sent by another electronic device. The incoming call request here may be an operator telephone request.
In other embodiments, the incoming call request herein may also be a web phone request.
In one possible implementation, the first audio file is first music downloaded in real time in the music application, and the preset condition includes the electronic device receiving a user operation to play the first music.
The application provides an electronic device, which comprises one or more processors and one or more memories; wherein the one or more memories are coupled to the one or more processors, the one or more memories for storing computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform an audio playback method as shown in fig. 9.
The present application provides a chip system comprising one or more processors for invoking computer instructions to cause an electronic device to perform an audio playback method as shown in fig. 9.
The present application provides a computer readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform an audio playback method as shown in fig. 9.
Referring to fig. 10, fig. 10 shows a schematic structural diagram of an electronic device.
The electronic device 100 may be a cell phone, tablet computer, desktop computer, laptop computer, handheld computer, notebook computer, ultra-mobile personal computer (UMPC), netbook, cellular telephone, personal digital assistant (PDA), augmented reality (augmented reality, AR) device, virtual reality (virtual reality, VR) device, artificial intelligence (artificial intelligence, AI) device, wearable device, vehicle-mounted device, smart home device, and/or smart city device; the embodiments of the present application do not particularly limit the specific type of the electronic device.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processor (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc., respectively, through different I2C bus interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 communicates with the touch sensor 180K through an I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the photographing functions of the electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (IR), etc., applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a BeiDou navigation satellite system (beidou navigation satellite system, BDS), a quasi-zenith satellite system (quasi-zenith satellite system, QZSS), and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise and brightness of the image. The ISP can also optimize parameters such as the exposure and color temperature of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and the like.
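For illustration only (not part of the original disclosure), the following Python sketch shows the kind of per-bin energy computation that a Fourier transform of an audio frame yields; NumPy, the window choice, and the frame length are assumptions, not the device's actual DSP implementation.

```python
import numpy as np

def bin_energies(frame: np.ndarray) -> np.ndarray:
    """Energy of each frequency bin of one audio frame (windowed FFT).

    Rough illustration of per-bin energy; the device-side DSP
    implementation is not described in this text.
    """
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    return (np.abs(spectrum) ** 2) / len(frame)

# Example: a 30 ms frame of a 440 Hz tone sampled at 48 kHz.
sr = 48_000
t = np.arange(int(0.03 * sr)) / sr
frame = np.sin(2 * np.pi * 440 * t)
energies = bin_energies(frame)
peak_bin = int(energies.argmax())
print(peak_bin, peak_bin * sr / len(frame))  # strongest bin and its frequency (close to 440 Hz)
```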
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (NVM).
The random access memory may include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM; for example, fifth-generation DDR SDRAM is commonly referred to as DDR5 SDRAM), and the like; the nonvolatile memory may include a disk storage device and flash memory.
The flash memory may include NOR flash, NAND flash, 3D NAND flash, and the like, divided according to operating principle; may include single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), and the like, divided according to memory cell level; and may include universal flash storage (UFS), embedded multimedia card (eMMC), and the like, divided according to storage specification.
The random access memory may be read directly from and written to by the processor 110, may be used to store executable programs (e.g., machine instructions) for an operating system or other on-the-fly programs, may also be used to store data for users and applications, and the like.
The nonvolatile memory may store executable programs, store data of users and applications, and the like, and may be loaded into the random access memory in advance for the processor 110 to directly read and write.
The external memory interface 120 may be used to connect external non-volatile memory to enable expansion of the memory capabilities of the electronic device 100. The external nonvolatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and video are stored in an external nonvolatile memory.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When electronic device 100 is answering a telephone call or voice message, voice may be received by placing receiver 170B in close proximity to the human ear.
The microphone 170C, also referred to as a "mike" or "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user may speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which, in addition to collecting sound signals, may implement a noise reduction function. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording functions, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
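A minimal sketch (not the actual system implementation) of the pressure-threshold dispatch described above; the threshold value and instruction names are illustrative assumptions.

```python
FIRST_PRESSURE_THRESHOLD = 0.6  # illustrative value; the patent does not disclose one

def dispatch_touch_on_sms_icon(pressure: float) -> str:
    """Select an operation instruction from the touch operation intensity."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"        # light press: view the short message
    return "create_new_short_message"      # firm press: create a new short message

print(dispatch_touch_on_sms_icon(0.3))  # view_short_message
print(dispatch_touch_on_sms_icon(0.8))  # create_new_short_message
```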
The gyro sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion, thereby achieving anti-shake. The gyro sensor 180B may also be used for navigation and somatosensory gaming scenarios.
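As a rough illustration of the anti-shake compensation described above, the sketch below estimates the image-plane shift that the lens module must counteract for a given shake angle; the simple f·tan(θ) geometry and the numeric values are assumptions, not the patent's algorithm.

```python
import math

def compensation_shift_mm(shake_angle_deg: float, focal_length_mm: float) -> float:
    """Approximate image-plane displacement caused by a rotational shake.

    Small-angle geometric estimate only; the actual anti-shake algorithm
    used by the electronic device is not disclosed here.
    """
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))

# A 0.5 degree shake with a 26 mm-equivalent lens shifts the image by roughly 0.23 mm.
print(round(compensation_shift_mm(0.5, 26.0), 4))
```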
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon opening the flip cover can then be set according to the detected open or closed state of the protective case or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It may also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it may be determined that there is an object in the vicinity of the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object in the vicinity of the electronic device 100. The electronic device 100 can detect that the user holds the electronic device 100 close to the ear by using the proximity light sensor 180G, so as to automatically extinguish the screen for the purpose of saving power. The proximity light sensor 180G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 may use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint-based call answering, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to prevent a low temperature from causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
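The three-branch thermal strategy above can be summarized with the following sketch; the thresholds and action names are illustrative assumptions, not values disclosed in the patent.

```python
# Illustrative thresholds in degrees Celsius; the patent does not disclose concrete values.
HIGH_TEMP = 45.0
LOW_TEMP = 0.0
VERY_LOW_TEMP = -10.0

def thermal_policy(temp_c: float) -> list[str]:
    """Return the actions the device would take for a reported temperature."""
    actions = []
    if temp_c > HIGH_TEMP:
        actions.append("throttle_nearby_processor")   # reduce power for thermal protection
    if temp_c < LOW_TEMP:
        actions.append("heat_battery")                 # avoid cold-induced abnormal shutdown
    if temp_c < VERY_LOW_TEMP:
        actions.append("boost_battery_output_voltage")
    return actions

print(thermal_policy(50.0))   # ['throttle_nearby_processor']
print(thermal_policy(-15.0))  # ['heat_battery', 'boost_battery_output_voltage']
```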
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 together form a touchscreen, also referred to as a "touch-controlled screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display screen 194.
The bone conduction sensor 180M may acquire vibration signals. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M may also be in contact with the human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be provided in a headset, combined into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part vibrating bone obtained by the bone conduction sensor 180M, thereby implementing a voice function. The application processor may parse heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, thereby implementing a heart rate detection function.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light and may be used to indicate charging status and battery level changes, as well as messages, missed calls, notifications, and the like.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 195 or removed from the SIM card interface 195 to achieve contact with or separation from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the invention, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 11 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 11, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for the application of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 11, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages, which disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify that a download is complete, to provide message alerts, and so on. The notification manager may also present notifications in the form of a graph or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or an indicator light blinks.
The Android runtime includes a core library and virtual machines. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part consists of functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The workflow of the electronic device 100 software and hardware is illustrated below in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and the time stamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking a touch tap operation whose corresponding control is the camera application icon as an example: the camera application calls an interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer, and a still image or video is captured through the camera 193.
The embodiments of the present application may be arbitrarily combined to achieve different technical effects.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state drive (SSD)), among others.
Those of ordinary skill in the art will appreciate that implementing all or part of the above-described method embodiments may be accomplished by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above-described method embodiments. The aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disc, or the like.
In summary, the foregoing description is only exemplary embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made according to the disclosure of the present invention should be included in the protection scope of the present invention.
Claims (17)
1. An audio playing method, characterized in that the method comprises:
Under the condition that a preset condition is met, the electronic equipment acquires a first audio file and a first animation, wherein the first animation comprises a plurality of image frames;
The electronic equipment plays the first audio file and plays the first animation;
The electronic device plays the first animation, including:
the electronic equipment acquires a first characteristic file corresponding to a first audio file, wherein the first characteristic file comprises audio intensities of the first audio file at different moments;
under the condition that the audio intensity at the first moment is larger than a first preset audio intensity, the electronic equipment determines a first dynamic parameter based on the audio intensity at the first moment;
The electronic equipment plays a first image frame in the first animation based on the first dynamic parameter, wherein the first dynamic parameter is used for controlling the switching speed when a second image frame is switched to the first image frame, and the second image frame is one frame of image frame before the first image frame;
The first dynamic effect parameter is larger than the preset dynamic effect parameter at the first moment.
2. The method according to claim 1, wherein the method further comprises:
Under the condition that the audio intensity at the second moment is smaller than the first preset audio intensity, the electronic equipment acquires preset dynamic effect parameters at the second moment;
the electronic device plays a second image frame based on the preset dynamic effect parameter at the second moment, wherein the preset dynamic effect parameter at the second moment is used for controlling the switching speed when a third image frame is switched to the second image frame, and the third image frame is one frame of image frame before the second image frame.
3. The method of claim 2, wherein the electronic device obtains the preset dynamic parameters at the second moment, specifically including:
And under the condition that the audio intensity at the second moment is larger than the second preset audio intensity and smaller than the first preset audio intensity, the electronic equipment acquires the preset dynamic effect parameters at the second moment.
4. A method according to claim 3, characterized in that the method further comprises:
Under the condition that the audio intensity at the third moment is smaller than the second preset audio intensity, the electronic equipment determines a second dynamic effect parameter based on the audio intensity at the third moment;
The electronic equipment plays a fourth image frame based on the second dynamic effect parameter, wherein the second dynamic effect parameter is used for controlling the switching speed when a fifth image frame is switched to the fourth image frame, and the fifth image frame is one frame of image frame before the fourth image frame;
the second dynamic effect parameter is smaller than the preset dynamic effect parameter at the third moment.
5. The method according to any one of claims 2-4, wherein, in a case where the preset dynamic effect parameter at the first moment and the preset dynamic effect parameter at the second moment are the same, a switching speed when switching from the second image frame to the first image frame is greater than a switching speed when switching from the third image frame to the second image frame.
6. The method according to claim 4, wherein, in a case where the preset dynamic effect parameter at the second moment and the preset dynamic effect parameter at the third moment are the same, a switching speed when switching from the third image frame to the second image frame is greater than a switching speed when switching from the fifth image frame to the fourth image frame.
7. The method according to any one of claims 1-4 or 6, wherein the electronic device determines a first dynamic effect parameter based on the audio intensity at the first moment, specifically comprising:
Determining the first dynamic effect parameter according to the formula [the formula is not reproduced in this text; an illustrative sketch follows the claims];
wherein Y represents the first dynamic effect parameter, Ymax represents a maximum dynamic effect parameter value, Ymin represents a minimum dynamic effect parameter value, and Ymax and Ymin are preset;
X represents the audio intensity at the first moment, Xmax represents a maximum audio intensity value, Xmin represents a minimum audio intensity value, and Xmax and Xmin are preset.
8. The method of any of claims 1-4 or 6, wherein prior to the electronic device playing the first audio file and playing the first animation, the method further comprises:
the electronic equipment decodes the first audio file to obtain a first PCM file;
And the electronic equipment performs feature extraction on the first PCM file to obtain the first feature file corresponding to the first audio file.
9. The method of any one of claims 1-4 or claim 6, wherein before the electronic device obtains the first profile corresponding to the first audio file, the method further comprises:
And under the condition that the electronic equipment determines that the first characteristic file corresponding to the first audio file is not stored locally, the electronic equipment decodes the first audio file to obtain a first PCM file, and performs characteristic extraction on the first PCM file to obtain the first characteristic file corresponding to the first audio file.
10. The method of claim 2, wherein the electronic device comprises a motor, the method further comprising:
Under the condition that the audio intensity at the first moment is larger than the first preset audio intensity, the electronic equipment determines a first vibration frequency of the motor based on the audio intensity at the first moment;
The electronic equipment vibrates based on the first vibration frequency through the motor while playing a first image frame in the first animation based on the first dynamic parameter;
wherein, the first vibration frequency is greater than the preset vibration frequency at the first moment.
11. The method according to claim 10, wherein the method further comprises:
Under the condition that the audio intensity at the second moment is smaller than the first preset audio intensity, the electronic equipment acquires the preset vibration frequency of the motor at the second moment;
and the electronic equipment vibrates based on the preset vibration frequency at the second moment through the motor while playing the second image frame based on the preset dynamic effect parameter at the second moment.
12. The method according to any one of claims 1-4, 6, or 10-11, wherein the first audio file is alarm clock audio, and the preset condition comprises that the time reaches a preset activation time of the alarm clock.
13. The method according to any one of claims 1-4 or claims 6 or 10-11, wherein the first audio file is incoming call audio, and the preset condition includes the electronic device receiving an incoming call request sent by another electronic device.
14. The method of claim 9, wherein the first audio file is first music downloaded in real-time in a music application, and wherein the preset condition comprises the electronic device receiving a user operation to play the first music.
15. An electronic device, characterized in that, the electronic device includes one or more processors, one or more memories; wherein the one or more memories are coupled to the one or more processors, the one or more memories for storing computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-14.
16. A chip system for application to an electronic device, the chip system comprising one or more processors configured to invoke computer instructions to cause the electronic device to perform the method of any of claims 1-14.
17. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-14.
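For illustration only, and not as part of the claimed method: the formula referenced in claim 7 is not reproduced in this text. Given the variables the claim defines (Y, Ymax, Ymin, X, Xmax, Xmin), one plausible reading is a linear mapping of the audio intensity onto the allowed dynamic-effect range. The Python sketch below assumes that linear form, and it uses per-frame RMS as a stand-in for the feature extraction of claims 8-9; the function names, thresholds, and numeric values are all assumptions.

```python
import numpy as np

def frame_intensities(pcm: np.ndarray, sample_rate: int, frame_ms: int = 20) -> np.ndarray:
    """Per-frame RMS intensity of a mono PCM signal.

    ASSUMPTION: a stand-in for the "first feature file" of claims 8-9; the
    actual feature-extraction algorithm is not specified in the claims.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(pcm) // frame_len
    frames = pcm[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1))

def dynamic_parameter(x: float, x_min: float, x_max: float,
                      y_min: float, y_max: float) -> float:
    """Map an audio intensity X onto a dynamic-effect parameter Y.

    ASSUMPTION: claim 7's formula is not reproduced; this uses a plausible
    linear mapping consistent with the variables the claim defines.
    """
    x = min(max(x, x_min), x_max)
    return y_min + (x - x_min) * (y_max - y_min) / (x_max - x_min)

def switching_parameter(x: float, first_threshold: float, second_threshold: float,
                        preset: float, x_min: float, x_max: float,
                        y_min: float, y_max: float) -> float:
    """Claims 1-4 logic: loud moments switch image frames faster than the preset,
    very quiet moments slower, and moments in between keep the preset parameter.
    Consistency with the claims requires the preset to lie between the mapped
    values of the two thresholds."""
    if x > first_threshold or x < second_threshold:
        return dynamic_parameter(x, x_min, x_max, y_min, y_max)
    return preset

# Usage with synthetic PCM; thresholds and ranges are illustrative only.
pcm = (0.1 * np.random.randn(48_000)).astype(np.float32)   # 1 s of noise at 48 kHz
intensity = float(frame_intensities(pcm, 48_000)[0])        # roughly 0.1
print(switching_parameter(intensity, 0.15, 0.05, 0.5, 0.0, 0.3, 0.2, 1.0))  # 0.5 (preset)
```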
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310699757.3A CN117689776B (en) | 2023-06-13 | 2023-06-13 | Audio playing method, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310699757.3A CN117689776B (en) | 2023-06-13 | 2023-06-13 | Audio playing method, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117689776A CN117689776A (en) | 2024-03-12 |
CN117689776B true CN117689776B (en) | 2024-09-13 |
Family
ID=90135917
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310699757.3A Active CN117689776B (en) | 2023-06-13 | 2023-06-13 | Audio playing method, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117689776B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011150713A1 (en) * | 2010-06-02 | 2011-12-08 | 腾讯科技(深圳)有限公司 | Method, device for playing animation and method and system for displaying animation background |
CN115997189A (en) * | 2022-10-11 | 2023-04-21 | 广州酷狗计算机科技有限公司 | Display method, device and equipment of playing interface and readable storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005260391A (en) * | 2004-03-10 | 2005-09-22 | Nippon Telegr & Teleph Corp <Ntt> | Moving image display apparatus, moving image display method, moving image display program, and computer-readable recording medium for recording this program |
KR101473249B1 (en) * | 2012-10-30 | 2014-12-17 | 주식회사 케이티 | Server, device and method for creating refresh rate of contents |
CN107645630B (en) * | 2016-07-20 | 2021-02-23 | 中兴通讯股份有限公司 | Image pickup processing method and device |
CN111190514A (en) * | 2016-09-13 | 2020-05-22 | 华为机器有限公司 | Information display method and terminal |
- 2023-06-13: CN application CN202310699757.3A, granted as patent CN117689776B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011150713A1 (en) * | 2010-06-02 | 2011-12-08 | 腾讯科技(深圳)有限公司 | Method, device for playing animation and method and system for displaying animation background |
CN115997189A (en) * | 2022-10-11 | 2023-04-21 | 广州酷狗计算机科技有限公司 | Display method, device and equipment of playing interface and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN117689776A (en) | 2024-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11450322B2 (en) | Speech control method and electronic device | |
CN112231025B (en) | UI component display method and electronic equipment | |
CN115473957B (en) | Image processing method and electronic equipment | |
CN109559270B (en) | Image processing method and electronic equipment | |
CN114397981A (en) | Application display method and electronic equipment | |
CN113067940B (en) | Method for presenting video when electronic equipment is in call and electronic equipment | |
US11972106B2 (en) | Split screen method and apparatus, and electronic device | |
CN110633043A (en) | Split screen processing method and terminal equipment | |
CN110989961A (en) | Sound processing method and device | |
CN113641271A (en) | Application window management method, terminal device and computer readable storage medium | |
CN113448658A (en) | Screen capture processing method, graphical user interface and terminal | |
CN113141483B (en) | Screen sharing method based on video call and mobile device | |
CN115359156B (en) | Audio playing method, device, equipment and storage medium | |
CN114740986B (en) | Handwriting input display method and related equipment | |
CN117689776B (en) | Audio playing method, electronic equipment and storage medium | |
CN113645595B (en) | Equipment interaction method and device | |
CN116301483A (en) | Application card management method, electronic device and storage medium | |
CN115291779A (en) | Window control method and device | |
CN113495733A (en) | Theme pack installation method and device, electronic equipment and computer readable storage medium | |
CN117133311B (en) | Audio scene recognition method and electronic equipment | |
CN117221713B (en) | Parameter loading method and electronic equipment | |
CN116795476B (en) | Wallpaper deleting method and electronic equipment | |
CN118555327A (en) | Audio processing method and electronic equipment | |
CN117785312A (en) | Audio playing method, electronic device and storage medium | |
CN118646821A (en) | Card display method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||