CN115834759A - Audio playing method and related device - Google Patents

Audio playing method and related device

Info

Publication number
CN115834759A
Authority
CN
China
Prior art keywords
electronic device
audio
user
audio file
alarm clock
Prior art date
Legal status
Pending
Application number
CN202111094119.6A
Other languages
Chinese (zh)
Inventor
张超
丁宁
张茹
张乐韶
徐波
金伟
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202111094119.6A
Publication of CN115834759A

Abstract

The application discloses an audio playing method and a related device. As the user's arousal level changes, the electronic device may repeatedly perform the following steps: based on the user's arousal level, the electronic device determines the corresponding music type from the correspondence between arousal-level intervals and music types, and plays an audio file of that type. Alternatively, as time elapses after the alarm clock starts, the electronic device may repeatedly perform the following steps: based on the elapsed time after the alarm clock starts, the electronic device determines the corresponding music type from the correspondence between elapsed-time intervals and music types, and plays an audio file of that type. With the method provided by the application, the electronic device can play different types of audio files as the user wakes from sound sleep, guiding the user's emotional change during waking so that the user develops positive emotions in the process.

Description

Audio playing method and related device
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an audio playing method and a related apparatus.
Background
With the continuous development of electronic devices and the increasingly diverse demands of users, electronic devices in daily life offer more and more functions. For example, many electronic devices provide an alarm clock application that wakes the user.
Currently, most alarm clock applications in electronic devices wake the user with sound: when the alarm time set by the user is reached, the electronic device plays an audio file preset by the user or the system as the alarm. However, the played audio file has a single, fixed content, which easily induces negative emotions during the user's waking process.
Disclosure of Invention
The application provides an audio playing method and a related device, which play different types of audio files as the user wakes from sleep and guide the user's emotional change during waking, so that the user develops positive emotions in the waking process.
In a first aspect, the present application provides an audio playing method, including:
the first electronic device detects that the alarm clock has started. In a first time period after the alarm clock starts, the first electronic device determines a first audio file and plays it. In a second time period after the alarm clock starts, the first electronic device determines a second audio file, where the audio type of the first audio file differs from the audio type of the second audio file. In the second time period, the first electronic device plays the first audio file and the second audio file.
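As a rough illustration (not the patent's own code), the overall first-aspect flow could be sketched as follows; the helper names pick_audio and play, and the period length, are assumptions:

```python
import time

# A minimal sketch of the first-aspect flow, assuming hypothetical helpers
# pick_audio() and play(); the period length and helper names are illustrative.
def on_alarm_started(pick_audio, play, period_s: float = 60.0) -> None:
    file_a = pick_audio(period=1)   # first audio file, first audio type
    play([file_a])                  # first time period: play file A alone
    time.sleep(period_s)            # wait until the second time period begins
    file_b = pick_audio(period=2)   # second audio file, a different audio type
    play([file_a, file_b])          # second time period: play A and B together
```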
The application provides an audio playing method. The first electronic device may play different types of audio files throughout the user's wake-up process. By playing audio files of different audio types, the first electronic device can guide the user's emotional change so that the user develops positive emotions in the waking process.
With reference to the first aspect, in a possible implementation manner, the playing, by the first electronic device, of the first audio file and the second audio file in the second time period specifically includes: the first electronic device mixes the first audio file and the second audio file into a third audio file, and plays the third audio file in the second time period. In this way, the audio switching during playback is smoother and more natural.
With reference to the first aspect, in a possible implementation manner, the mixing, by the first electronic device, of the first audio file and the second audio file into a third audio file specifically includes: the first electronic device superposes the audio track of the first audio file and the audio track of the second audio file to obtain the third audio file.
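The track superposition can be pictured as sample-wise addition. A minimal sketch, assuming both tracks are float PCM arrays at the same sample rate (the normalization to [-1, 1] is an assumption, not from the patent):

```python
import numpy as np

def mix_tracks(track_a: np.ndarray, track_b: np.ndarray) -> np.ndarray:
    """Superpose two float PCM tracks (same sample rate) into a third track."""
    n = max(len(track_a), len(track_b))
    mixed = np.zeros(n, dtype=np.float32)
    mixed[:len(track_a)] += track_a
    mixed[:len(track_b)] += track_b
    # Clip so the superposed samples stay within the valid [-1, 1] range.
    return np.clip(mixed, -1.0, 1.0)
```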
With reference to the first aspect, in a possible implementation manner, the playing, by the first electronic device, of the first audio file and the second audio file in the second time period specifically includes: in the second time period, the first electronic device plays the first audio file at gradually decreasing volume and plays the second audio file at gradually increasing volume. In this way, the audio switching during playback is smoother and more natural.
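This volume ramp is essentially a cross-fade. A minimal sketch under the same float-PCM assumption as above, using linear fades (the fade shape is an assumption):

```python
import numpy as np

def crossfade(track_a: np.ndarray, track_b: np.ndarray) -> np.ndarray:
    """Fade the first track out while fading the second track in."""
    n = min(len(track_a), len(track_b))
    fade_out = np.linspace(1.0, 0.0, n)  # first file: volume ramps down
    fade_in = np.linspace(0.0, 1.0, n)   # second file: volume ramps up
    return (track_a[:n] * fade_out + track_b[:n] * fade_in).astype(np.float32)
```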
With reference to the first aspect, in a possible implementation manner, the determining, by the first electronic device, of the first audio file in a first time period after the alarm clock is started specifically includes: based on the first time period after the alarm clock is started, the first electronic device determines a first target type from the mapping relationship between alarm-clock elapsed-time intervals and audio types, where different elapsed-time intervals correspond to different audio types. The first electronic device determines the first audio file corresponding to the first target type based on the first target type.
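Such a mapping can be held as a small lookup table. A hedged sketch; the interval bounds and type names below are assumptions for illustration:

```python
# Illustrative mapping from elapsed time after the alarm starts (seconds)
# to a target audio type; the interval bounds and type names are assumptions.
ELAPSED_TO_TYPE = [
    ((0, 60), "seventh audio type"),    # calm / relaxed
    ((60, 120), "eighth audio type"),   # content / serene
    ((120, 180), "first audio type"),   # happy / delighted
    ((180, 240), "second audio type"),  # excited
]

def target_type_for(elapsed_s: float) -> str:
    for (lo, hi), audio_type in ELAPSED_TO_TYPE:
        if lo <= elapsed_s < hi:
            return audio_type
    return ELAPSED_TO_TYPE[-1][1]  # keep the last type once the table ends
```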
With reference to the first aspect, in a possible implementation manner, the determining, by the first electronic device, of the first audio file in a first time period after the alarm clock is started specifically includes: the first electronic device obtains a first arousal level of the user in the first time period after the alarm clock is started, and determines the first audio file corresponding to the first arousal level based on that arousal level.
With reference to the first aspect, in a possible implementation manner, the determining, by the first electronic device, of the first audio file corresponding to a first arousal level of the user based on that arousal level includes: based on the first arousal level of the user, the first electronic device determines that the first arousal level falls in a first emotion quadrant of the valence-arousal (VA) model. Based on the first emotion quadrant, the first electronic device determines a first target type from the mapping relationship between emotion quadrants and audio types, where different emotion quadrants correspond to different audio types. The first electronic device determines the first audio file corresponding to the first target type based on the first target type.
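One way to picture the quadrant lookup along a waking path: bucket the measured arousal level into a quadrant, then map the quadrant to an audio type. The thresholds and the quadrant-to-type table below are assumptions, not values from the patent:

```python
# Sketch of the VA-model lookup along the waking path; thresholds and the
# quadrant->type table are illustrative assumptions.
QUADRANT_TO_TYPE = {
    "seventh": "seventh audio type",
    "eighth": "eighth audio type",
    "first": "first audio type",
    "second": "second audio type",
}

def quadrant_for(arousal: float) -> str:
    """Map a normalized arousal level in [0, 1] to a quadrant on the path."""
    if arousal < 0.25:
        return "seventh"   # calm / relaxed
    if arousal < 0.5:
        return "eighth"    # content / serene
    if arousal < 0.75:
        return "first"     # happy / delighted
    return "second"        # excited

def audio_type_for_arousal(arousal: float) -> str:
    return QUADRANT_TO_TYPE[quadrant_for(arousal)]
```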
With reference to the first aspect, in a possible implementation manner, the acquiring, by the first electronic device, of the first arousal level of the user in a first time period after the alarm clock is started specifically includes: the first electronic device collects first physiological data of the user in the first time period after the alarm clock is started, and determines the first arousal level based on the first physiological data. In this way, when the first electronic device works independently, it can obtain the user's real-time arousal level by collecting the user's physiological data, ensuring that the audio type of the played audio file best matches the user's arousal level and better guiding the emotional change during the user's waking process.
With reference to the first aspect, in a possible implementation manner, the acquiring, by the first electronic device, of the first arousal level of the user in a first time period after the alarm clock is started specifically includes: the first electronic device receives first physiological data sent by the second electronic device in the first time period after the alarm clock is started, and determines the first arousal level based on the first physiological data. In this way, when the first electronic device cannot collect the user's physiological data itself, a second electronic device with a physiological data collection function can assist the first electronic device in implementing the audio playing method provided by the application, ensuring that the audio type of the played audio file matches the user's arousal level and better guiding the emotional change during the user's waking process.
With reference to the first aspect, in a possible implementation manner, before the first electronic device receives the first physiological data sent by the second electronic device, the method further includes: the first electronic device sends a first request to the second electronic device, where the first request is used to request the second electronic device to collect the first physiological data of the user. In this way, the second electronic device starts collecting the user's physiological data only after receiving the first request, which saves energy on the second electronic device.
With reference to the first aspect, in a possible implementation manner, before the first electronic device determines the first target type from the mapping relationship between alarm-clock elapsed-time intervals and audio types based on the first time period, the method further includes: the first electronic device sends a first request to a second electronic device, where the first request is used to request the second electronic device to collect first physiological data of the user. The first electronic device receives a first response sent by the second electronic device, where the first response indicates that the second electronic device refuses to collect the first physiological data. In this way, when the second electronic device cannot normally collect the user's physiological data, the first electronic device can fall back to the mapping between alarm-clock elapsed-time intervals and audio types to determine the audio file to play.
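A hedged sketch of this fallback, reusing target_type_for and audio_type_for_arousal from the sketches above; the send_request callable and the message fields are assumptions:

```python
# Illustrative fallback: ask the second device for physiological data; if it
# refuses, fall back to the elapsed-time mapping. Message fields are assumed.
def choose_target_type(send_request, elapsed_s: float) -> str:
    response = send_request({"type": "collect_physiological_data"})
    if response.get("status") == "refused":
        # Second device cannot collect data: use the time-based mapping.
        return target_type_for(elapsed_s)
    # Otherwise use the arousal level derived from the returned data.
    return audio_type_for_arousal(response["arousal"])
```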
With reference to the first aspect, in a possible implementation manner, the determining, by the first electronic device, of a second audio file in a second time period after the alarm clock is started specifically includes: based on the second time period after the alarm clock is started, the first electronic device determines a second target type from the mapping relationship between alarm-clock elapsed-time intervals and audio types, where different elapsed-time intervals correspond to different audio types. The first electronic device determines the second audio file corresponding to the second target type based on the second target type.
With reference to the first aspect, in a possible implementation manner, the determining, by the first electronic device, of a second audio file in a second time period after the alarm clock is started specifically includes: the first electronic device obtains a second arousal level of the user in the second time period after the alarm clock is started, and determines the second audio file corresponding to the second arousal level based on that arousal level.
With reference to the first aspect, in a possible implementation manner, the determining, by the first electronic device, of the second audio file corresponding to a second arousal level of the user based on that arousal level includes: based on the second arousal level of the user, the first electronic device determines that the second arousal level falls in a second emotion quadrant of the valence-arousal (VA) model. Based on the second emotion quadrant, the first electronic device determines a second target type from the mapping relationship between emotion quadrants and audio types, where different emotion quadrants correspond to different audio types. The first electronic device determines the second audio file corresponding to the second target type based on the second target type.
With reference to the first aspect, in a possible implementation manner, the acquiring, by the first electronic device, of the second arousal level of the user in a second time period after the alarm clock is started specifically includes: the first electronic device collects second physiological data of the user in the second time period after the alarm clock is started, and determines the second arousal level based on the second physiological data.
With reference to the first aspect, in a possible implementation manner, the acquiring, by the first electronic device, of the second arousal level of the user in a second time period after the alarm clock is started specifically includes: the first electronic device receives second physiological data sent by the second electronic device in the second time period after the alarm clock is started, and determines the second arousal level based on the second physiological data.
With reference to the first aspect, in one possible implementation manner, the first physiological data or the second physiological data includes one or more of the following: skin conductance data, electrocardiogram data, blood pressure data, and blood glucose data.
With reference to the first aspect, in a possible implementation manner, the first electronic device may determine an audio playing policy based on the elapsed time after the alarm clock is started or on the user's arousal level, and play the audio file specified by the audio playing policy in the time period corresponding to the elapsed-time interval or the arousal-level interval.
With reference to the first aspect, in a possible implementation manner, a user may select one of one or more audio files corresponding to a first target type provided by a first electronic device as the first audio file corresponding to the first target type. The user may also select one of the one or more audio files corresponding to the second target type provided by the first electronic device as the second audio file corresponding to the second target type.
With reference to the first aspect, in a possible implementation manner, before the first electronic device determines the first target type from a mapping relationship between an alarm clock starting duration interval and an audio type based on the first time period, the method further includes: the first electronic device sends a first request to a second electronic device, wherein the first request is used for requesting the second electronic device to send first physiological data of a user to the first electronic device. In this way, the second electronic device may directly send the collected first physiological data of the user to the first electronic device after receiving the first request of the first electronic device, and the first electronic device does not need to wait for the second electronic device to collect the first physiological data of the user.
With reference to the first aspect, in one possible implementation manner, the first electronic device may receive an input operation of the user for the first electronic device and, in response, stop playing the audio file. The input operation may include any one or more of: voice input, fingerprint image input, facial image input, gesture input, slide operation, click operation, and the like.
With reference to the first aspect, in a possible implementation manner, the first electronic device may also stop playing the audio file based on time.
In a second aspect, the present application provides an audio playback device comprising one or more processors and one or more memories. The one or more memories are coupled to the one or more processors, and the one or more memories are configured to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the audio playback apparatus to perform the audio playback method of any possible implementation of the above aspects.
In a third aspect, an embodiment of the present application provides a computer storage medium including computer instructions that, when run on an electronic device, cause the electronic device to execute the audio playing method in any possible implementation manner of any one of the foregoing aspects.
In a fourth aspect, the present application provides a computer program product, which when run on a computer, causes the computer to execute the audio playing method in any one of the possible implementation manners of the foregoing aspect.
In a fifth aspect, an electronic device according to an embodiment of the present application is a first electronic device, where the first electronic device includes a module/unit that performs the method according to the first aspect or any one of the possible designs of the first aspect; these modules/units may be implemented by hardware or by hardware executing corresponding software.
For the beneficial effects of the second aspect to the fifth aspect, please refer to the beneficial effects of the first aspect, and the description is not repeated.
Drawings
Fig. 1A is a schematic structural diagram of an electronic device 100 provided in an embodiment of the present application;
fig. 1B is a block diagram of a software structure of the electronic device 100 provided in the embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device 200 provided in an embodiment of the present application;
fig. 3A is a schematic diagram illustrating a relationship between a user's arousal level and a user's emotion type provided in an embodiment of the present application;
fig. 3B is a schematic diagram illustrating a relationship between a user's wakefulness and an audio type according to an embodiment of the present application;
fig. 3C is a schematic diagram of determining an audio playing strategy based on a user's wakefulness in the embodiment of the present application;
fig. 3D is a schematic diagram illustrating determining an audio playing strategy based on a duration of time after an alarm clock is started according to the embodiment of the application;
fig. 4 is a schematic diagram of an audio playing system 10 provided in an embodiment of the present application;
fig. 5A is a functional block diagram of the audio playback system 10 provided in the embodiment of the present application;
fig. 5B is a schematic workflow diagram of the data monitoring module 121 and the data analysis module 112 provided in this embodiment of the application;
fig. 5C is a schematic diagram of a plurality of skin conductance data collected by a galvanic skin sensor within a sampling period provided in an embodiment of the present application;
fig. 6 is a schematic flowchart of an audio playing method provided in an embodiment of the present application;
figs. 7A-7B are a set of user interface diagrams of an alarm setting operation implemented on the electronic device 100 provided in embodiments of the present application;
figs. 8A-8E are a set of user interface diagrams of the electronic device 100 stopping playing an audio file in embodiments of the present application;
fig. 9A is a functional block diagram of an audio playback system 20 provided in an embodiment of the present application;
fig. 9B is a schematic flowchart of an audio playing method provided in an embodiment of the present application;
fig. 10A is a functional block diagram of an audio playback system 30 provided in an embodiment of the present application;
fig. 10B is a schematic flowchart of an audio playing method provided in this embodiment of the present application;
fig. 11 is a flowchart illustrating an audio playing method provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and in detail with reference to the accompanying drawings. In the description of the embodiments herein, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" herein only describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, and B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of embodiments of the application, unless stated otherwise, "plurality" means two or more.
The term "user interface (UI)" in the following embodiments of the present application is a media interface for interaction and information exchange between an application program or an operating system and a user; it implements conversion between an internal form of information and a form acceptable to the user. A user interface is written in a specific computer language such as Java or extensible markup language (XML); the interface source code is parsed and rendered on the electronic device and is finally presented as content that the user can recognize. A commonly used presentation form of the user interface is a graphical user interface (GUI), which refers to a user interface related to computer operations that is displayed graphically. It may include visual interface elements such as text, icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets displayed on the display of the electronic device.
The electronic device may be a portable terminal device, such as a mobile phone, a tablet computer, a wearable device, or the like, which carries an iOS, android, microsoft, or other operating system, and may also be a non-portable terminal device such as a Laptop computer (Laptop) with a touch-sensitive surface or touch panel, a desktop computer with a touch-sensitive surface or touch panel, or the like.
The embodiments of the application provide an audio playing method and a related device. As the user's arousal level changes, the electronic device may repeatedly perform the following steps: based on the user's arousal level, the electronic device determines the corresponding music type from the correspondence between arousal-level intervals and music types, and plays an audio file of that type.
Alternatively, as time elapses after the alarm clock starts, the electronic device may repeatedly perform the following steps: based on the elapsed time after the alarm clock starts, the electronic device determines the corresponding music type from the correspondence between elapsed-time intervals and music types, and plays an audio file of that type.
With the method provided by the embodiments of the application, the electronic device can play different types of audio files as the user wakes from sleep, guiding the user's emotional change during waking so that the user develops positive emotions in the process.
Fig. 1A shows a schematic structural diagram of an electronic device 100.
The electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a Personal Digital Assistant (PDA), an Augmented Reality (AR) device, a Virtual Reality (VR) device, an Artificial Intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device, and the specific type of the electronic device is not particularly limited by the embodiments of the present application.
As shown in fig. 1A, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to finish the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including wireless local area networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, demodulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves radiated through the antenna 2.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals.
Video codecs are used to compress or decompress digital video.
The NPU is a neural-network (NN) computing processor that processes input information rapidly by drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, and can also learn continuously by itself. Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The internal memory 121 may include one or more Random Access Memories (RAMs) and one or more non-volatile memories (NVMs).
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The electronic device 100 can listen to music or to a hands-free call through the speaker 170A.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into a sound signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking near the microphone 170C through the mouth. The electronic device 100 may be provided with at least one microphone 170C.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint characteristics to implement fingerprint unlocking and the like.
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194. The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration prompts as well as for touch vibration feedback.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 1B is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 1B, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
In this embodiment, the application layer further includes an alarm clock starting module, a data analysis module, and a sound processing module; in some embodiments, the application layer further includes a data monitoring module. The specific functions of these modules are described in the functional-module part of the system and are not repeated here.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 1B, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and answered, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar; it can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, such as notifying download completion or message alerts. The notification manager may also present notifications as a chart or scroll-bar text in the top status bar of the system, such as notifications of background-running applications, or as a dialog window on the screen. For example, it may prompt text information in the status bar, sound a prompt tone, vibrate the electronic device, or flash an indicator light.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the function libraries that the Java language needs to call, and the other part is the Android core libraries.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing the functions of object life cycle management, stack management, thread management, safety and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
Fig. 2 schematically shows a structural diagram of an electronic device 200 provided in an embodiment of the present application.
It should be understood that the electronic device 200 shown in fig. 2 is merely an example, and that the electronic device 200 may have more or fewer components than shown in fig. 2, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
As shown in fig. 2, the electronic device 200 may include: the system comprises a processor 201, a memory 202, a wireless communication processing module 203, a power switch 205 and a sensor module 207. Wherein:
the processor 201 is operable to read and execute computer readable instructions. In a particular implementation, the processor 201 may mainly include a controller, an arithmetic unit, and registers. The controller is mainly responsible for decoding instructions and sending out control signals for the operations corresponding to the instructions. The registers are mainly responsible for temporarily storing register operands, intermediate operation results, and the like generated during instruction execution. In a specific implementation, the hardware architecture of the processor 201 may be an application-specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
In some embodiments, the processor 201 may be configured to parse a signal received by the bluetooth communication processing module 203A, such as a pairing mode modification request sent by the electronic device 100, and so on. The processor 201 may be configured to perform corresponding processing operations according to the parsing result, such as generating a pairing mode modification response, and the like.
A memory 202 is coupled to the processor 201 for storing various software programs and/or sets of instructions. In particular implementations, memory 202 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 202 may store an operating system, such as an embedded operating system like uCOS, vxWorks, RTLinux, etc. Memory 202 may also store communication programs that may be used to communicate with electronic device 100, one or more servers, or other devices.
The wireless communication processing module 203 may include one or more of a Bluetooth (BT) communication processing module 203A, WLAN communication processing module 203B.
In some embodiments, one or more of the Bluetooth (BT) communication processing module and the WLAN communication processing module may listen to signals, such as probe requests, scan signals, etc., transmitted by other devices (such as the electronic device 100) and may transmit response signals, such as probe responses, scan responses, etc., so that the other devices (such as the electronic device 100) may discover the electronic device 200 and establish wireless communication connections with the other devices (such as the electronic device 100) to communicate with the other devices (such as the electronic device 100) via one or more wireless communication technologies in bluetooth or WLAN.
In other embodiments, one or more of the Bluetooth (BT) communication processing module and the WLAN communication processing module may also transmit signals, such as broadcast bluetooth signals and beacon signals, so that other devices (e.g., the electronic device 100) may discover the electronic device 200 and establish wireless communication connections with other devices (e.g., the electronic device 100) to communicate with other devices (e.g., the electronic device 100) via one or more wireless communication technologies of bluetooth or WLAN.
The wireless communication processing module 203 may also include a cellular mobile communication processing module (not shown). The cellular mobile communication processing module may communicate with other devices, such as servers, via cellular mobile communication technology.
In some embodiments, the Bluetooth communication processing module may have one or more antennas. Antennas are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 200 may be used to cover a single or multiple communication bands.
The power switch 205 can be used to control the power source to supply power to the electronic device 200.
In some embodiments, the electronic device 200 may further include a USB communication processing module 206, and the USB communication processing module 206 may be used to communicate with other devices through a USB interface (not shown).
The sensor module 207 may include a galvanic skin (skin conductance) sensor or an electrocardiogram sensor. The sensor module 207 is used to collect physiological data of the user (e.g., skin conductance, electrocardiogram, blood pressure, blood glucose, etc.), which is used to calculate the user's arousal level.
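As a hedged illustration of how such data might feed the arousal calculation, the sketch below averages the skin conductance samples from one sampling period and normalizes them; the baseline and scale constants are assumptions, not values from the patent:

```python
# Illustrative arousal estimate from skin conductance samples collected in
# one sampling period; baseline_us and scale_us are assumed constants.
def arousal_from_skin_conductance(samples_us: list[float],
                                  baseline_us: float = 2.0,
                                  scale_us: float = 10.0) -> float:
    mean = sum(samples_us) / len(samples_us)   # mean conductance, microsiemens
    arousal = (mean - baseline_us) / scale_us  # normalize around the baseline
    return max(0.0, min(1.0, arousal))         # clamp to the [0, 1] range
```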
In some embodiments, the electronic device 200 may further include an audio module 204, and the audio module 204 may be configured to output an audio signal through the audio output interface, such that the electronic device 200 supports audio playback. The audio module may also be configured to receive audio data through the audio input interface. The electronic device 200 may be a watch, a bracelet, a smart bed, or the like.
In some embodiments, the electronic device 200 may also include a display screen (not shown), wherein the display screen may be used to display images, prompts, and the like. The display screen may be a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED) display screen, an active-matrix organic light-emitting diode (AMOLED) display screen, a flexible light-emitting diode (FLED) display screen, a quantum dot light-emitting diode (QLED) display screen, or the like.
In the embodiments of the application, the degree to which the user is awake is represented by the user's arousal level, and the pleasantness of the user's mood is represented by the user's valence (pleasure) level.
The relationship between the user's arousal level and the user's emotional type is described in detail below.
Fig. 3A shows a relationship diagram between the arousal level of the user and the emotion type of the user provided in the embodiment of the present application.
As shown in fig. 3A, in the graph of the relationship between the user's arousal level and the user's emotion type, the horizontal axis may represent the arousal level, the vertical axis may represent the valence level, and the origin of coordinates may represent neutral valence and a medium arousal level. The graph may be divided into eight quadrants in a two-dimensional space: the first quadrant, the second quadrant, the third quadrant, the fourth quadrant, the fifth quadrant, the sixth quadrant, the seventh quadrant, and the eighth quadrant. As the arousal level increases, the user becomes more awake; as the valence level increases, the user's mood becomes more pleasant.
In the first quadrant, the user emotion may include one or more of happy (happy), delighted (delighted), and the like; for example, in the first quadrant, the arousal level corresponding to "happy" is smaller than that corresponding to "delighted".
In the second quadrant, the user emotion may include one or more of excited (excited), astonished (astonished), and the like; for example, in the second quadrant, the arousal level corresponding to "excited" is smaller than that corresponding to "astonished".
In the third quadrant, the user emotion may include one or more of alarmed (alarmed), tense (tense), afraid (afraid), angry (angry), and the like; for example, in the third quadrant, the arousal levels corresponding to the user emotions are, in order from small to large: "angry", "afraid", "tense", "alarmed".
In the fourth quadrant, the user emotion may include one or more of distressed (distressed), frustrated (frustrated), and the like; for example, in the fourth quadrant, the arousal level corresponding to "frustrated" is smaller than that corresponding to "distressed".
In the fifth quadrant, the user emotion may include one or more of miserable (miserable), sad (sad), depressed (depressed), gloomy (gloomy), and the like; for example, in the fifth quadrant, the arousal levels corresponding to the user emotions are, in order from small to large: "gloomy", "depressed", "sad", "miserable".
In the sixth quadrant, the user emotion may include one or more of bored (bored), droopy (droopy), tired (tired), and the like; for example, in the sixth quadrant, the arousal levels corresponding to the user emotions are, in order from small to large: "tired", "droopy", "bored".
In the seventh quadrant, the user emotion may include one or more of calm (calm), relaxed (relaxed), satisfied (satisfied), at ease (at ease), and the like; for example, in the seventh quadrant, the arousal levels corresponding to the user emotions are, in order from small to large: "calm", "relaxed", "satisfied", "at ease".
In the eighth quadrant, the user emotion may include one or more of content (content), serene (serene), glad (glad), pleased (pleased), and the like; for example, in the eighth quadrant, the arousal levels corresponding to the user emotions are, in order from small to large: "content", "serene", "glad", "pleased".
In some embodiments, the first quadrant and the second quadrant may be collectively referred to as a first quadrant, the third quadrant and the fourth quadrant may be collectively referred to as a second quadrant, the fifth quadrant and the sixth quadrant may be collectively referred to as a third quadrant, and the seventh quadrant and the eighth quadrant may be collectively referred to as a fourth quadrant.
In this embodiment of the application, the relationship diagram between the user's arousal level and the user's emotion type shown in fig. 3A is only a schematic diagram; the relationship may be obtained based on the valence-arousal (VA) model or other models, which is not limited in this embodiment of the application.
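As a rough geometric illustration of such an eight-quadrant division, the sketch below splits the plane of fig. 3A (arousal on the horizontal axis, valence on the vertical axis) into 45-degree sectors numbered to match the quadrant order above; this numbering convention is an assumption:

```python
import math

def va_octant(arousal: float, valence: float) -> int:
    """Return the quadrant number 1..8 for a point in the arousal-valence plane."""
    angle = math.degrees(math.atan2(valence, arousal)) % 360.0
    sector = int(angle // 45.0)  # 0..7 counter-clockwise from the +arousal axis
    # Quadrants 1..8 run clockwise starting from the 45-90 degree sector,
    # matching the order: happy/delighted, excited, ..., content/serene.
    return ((1 - sector) % 8) + 1

# Example: va_octant(0.8, 0.3) == 2 (high arousal, mildly positive valence:
# the excited/astonished quadrant).
```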
Different users may go through different emotion change processes during the transition from sleep to wakefulness. For example, as shown in fig. 3A, the emotion change process from asleep to awake may follow the process shown in path one.
In path one shown in fig. 3A, the user emotion passes through the seventh quadrant, the eighth quadrant, the first quadrant, and the second quadrant in turn. In the seventh and eighth quadrants, as the user's arousal level increases, the user's valence gradually increases until the arousal level reaches a certain value. In the first and second quadrants, as the arousal level increases further beyond that value, the user's valence gradually decreases. When waking up along the emotion change process shown in path one, the user's valence first rises gradually and then falls during the transition from sleep to wakefulness, so the user can develop positive emotions in the waking process.
The following describes a process for determining an audio type based on a user's arousal level, which is provided in an embodiment of the present application.
Fig. 3B is a schematic diagram illustrating a relationship between a user's wakefulness and an audio type provided in an embodiment of the present application.
As shown in fig. 3B, in the graph of the relationship between the arousal level and the audio type, the horizontal axis may represent the arousal level, the vertical axis may represent the valence level, and the origin of coordinates may represent neutral valence and a medium arousal level. The graph may be divided into eight emotion quadrants in a two-dimensional space, where each emotion quadrant corresponds to one audio type, namely: the first quadrant corresponds to a first audio type, the second quadrant corresponds to a second audio type, the third quadrant corresponds to a third audio type, the fourth quadrant corresponds to a fourth audio type, the fifth quadrant corresponds to a fifth audio type, the sixth quadrant corresponds to a sixth audio type, the seventh quadrant corresponds to a seventh audio type, and the eighth quadrant corresponds to an eighth audio type. In each quadrant, for the relationship between the user's arousal level and the user's emotion type, refer to the related contents described for fig. 3A, which are not detailed here again.
It should be noted that in the embodiments provided in the present application, the audio type may also be referred to as a target type.
In each quadrant, the corresponding audio type may guide the user to produce one or more emotions in that quadrant.
For example, in the first quadrant, the first audio type may guide the user to produce emotions such as happiness or pleasure.
In the second quadrant, the second audio type may guide the user to produce emotions such as excitement or astonishment.
In the third quadrant, the third audio type may guide the user to produce emotions such as alarm, tension, fear, or anger.
In the fourth quadrant, the fourth audio type may guide the user to produce emotions such as sadness or disappointment.
In the fifth quadrant, the fifth audio type may guide the user to produce emotions such as misery, sorrow, gloom, or depression.
In the sixth quadrant, the sixth audio type may guide the user to produce emotions such as boredom, listlessness, or fatigue.
In the seventh quadrant, the seventh audio type may guide the user to produce emotions such as calm, relaxation, satisfaction, or comfort.
In the eighth quadrant, the eighth audio type may guide the user to produce emotions such as serenity, contentment, gladness, or pleasure.
In this embodiment of the application, the relationship diagram of the user's wakefulness and the audio type shown in fig. 3B is only one schematic diagram; the relationship diagram may be obtained based on a valence-arousal (VA) model of musical emotion, or based on other models, and is not limited in this embodiment of the application.
During the period from sleep to awake, the electronic device 100 may guide the user's emotional change process by playing audio files of different audio types.
Illustratively, in path one shown in fig. 3B, the audio types played by the electronic device 100 are, in order, the seventh audio type, the eighth audio type, the first audio type, and the second audio type. Correspondingly, the user's emotion passes sequentially through the seventh quadrant, the eighth quadrant, the first quadrant, and the second quadrant. By playing the corresponding audio files in the manner shown by path one, the electronic device can wake the user up with positive emotions.
The following describes the manner in which the audio types of the audio files in the embodiment of the present application are divided.
The audio type of an audio file may be determined by its audio features. The audio features may include, but are not limited to, one or more of the following musical elements: melody, rhythm and beat, mode, tempo, musical form, texture, pitch, dynamics, chord, volume, timbre, and articulation (playing technique).
1. Melody, also called tune, is formed by organizing musical tones of rising and falling pitch horizontally, in order, according to a certain rhythm.
2. Rhythm is the pattern of durations and stresses of the notes or syllables in music. The beat is the unit for measuring rhythm and refers to the organizational form that expresses fixed units of duration and stress in music.
3. Mode is a system in which the tones used in music are connected with one another around a central tone (the tonic), for example the major mode, the minor mode, or the Chinese pentatonic modes. Arranging the tones of a mode from the tonic upward, from low to high, forms the scale.
4. Tempo is the speed at which music progresses. The tempo of music is typically expressed in beats per minute (bpm); for example, 132 bpm means 132 beats in a minute.
5. Musical form is the structural organization of a musical piece. As a tune develops, various sections are formed, and formats with common features can be identified from the regularities of those sections. Musical form is a concept of "structure" and refers to the structure of music in time, whereas texture refers to the structure of music in space.
6. Pitch is the perceived level of a sound's frequency, measured in mel.
7. Dynamics is the degree of loudness or softness of the sound in music.
8. A chord is a group of tones in certain interval relationships: three or more tones stacked vertically in thirds, or in non-third relationships, form a chord. Chords can be divided into simple chords and complex chords according to the number of chord tones and their combination relationships.
9. Volume, also called loudness or sound intensity, refers to the human ear's subjective perception of how loud a heard sound is; its objective measure is the amplitude of the sound, and the unit of volume is the decibel (dB).
10. Timbre, also called tone quality, means that different sounds always exhibit distinctive characteristics in their waveforms.
11. Articulations (playing techniques) include staccatissimo, staccato, non-legato, portato (mezzo-staccato), and the like. Audio files using different articulations differ in the duration of the gaps between adjacent notes.
The audio types may be divided into various types, for example, as shown in fig. 3B, the audio types may include: a first audio type, a second audio type, a third audio type, a fourth audio type, a fifth audio type, a sixth audio type, a seventh audio type, and an eighth audio type.
The correspondence between the audio type and the mood quadrant can be shown in the following table 1:
TABLE 1
Mood quadrant | Corresponding audio type | Main audio features
First quadrant | First audio type | major mode; fast tempo; simple chords; high volume; staccato; high pitch; flowing rhythm
Second quadrant | Second audio type | fast tempo; high volume; major mode; high pitch; large pitch variation; simple chords; fast note onset; high pitch range; rising pitch contour; staccato
Third quadrant | Third audio type | minor mode; high volume; fast tempo; complex chords; fast note onset; flat pitch contour; high pitch range; large pitch variation
Fourth quadrant | Fourth audio type | minor mode; complex chords; legato; small pitch variation; fast tempo
Fifth quadrant | Fifth audio type | slow tempo; legato; minor mode; complex or simple chords; soft volume; low pitch; slow note onset
Sixth quadrant | Sixth audio type | soft volume; slow tempo; small pitch variation; legato; slow note onset; low pitch
Seventh quadrant | Seventh audio type | slow tempo; soft volume; legato; slow note onset
Eighth quadrant | Eighth audio type | major mode; simple chords; slow tempo; soft volume; staccato
As shown in table 1:
1. When the mood quadrant is the first quadrant, corresponding to the first audio type, the audio features of the first audio type may include: major mode, fast tempo, simple chords, high volume, staccato articulation, high pitch, and a flowing rhythm.
Here, a fast tempo refers to a musical speed greater than or equal to a first speed and less than or equal to a second speed; illustratively, the first speed may be 120 bpm and the second speed may be 168 bpm. A high volume refers to a volume greater than or equal to a first volume and less than or equal to a second volume; illustratively, the first volume may be 40 dB and the second volume may be 60 dB. A high pitch means that the pitch is greater than a first pitch; illustratively, the first pitch may be 1300 mel.
2. When the mood quadrant is the second quadrant, corresponding to the second audio type, the audio features of the second audio type may include: fast tempo, high volume, major mode, high pitch, large pitch variation, simple chords, fast note onset, high pitch range, rising pitch contour (pitch contour up), and staccato articulation.
Here, a fast tempo refers to a musical speed greater than or equal to the first speed and less than or equal to the second speed; illustratively, the first speed may be 120 bpm and the second speed may be 168 bpm. A high volume here refers to a relative volume greater than or equal to a third volume and less than or equal to a fourth volume; illustratively, the third volume may be 60 dB and the fourth volume may be 70 dB. A high pitch means that the pitch is greater than the first pitch; illustratively, the first pitch may be 1300 mel. A large pitch variation and a high pitch range mean that the pitch ranges from the first pitch up to a second pitch; illustratively, the first pitch may be 1300 mel and the second pitch may be 2500 mel.
3. When the mood quadrant is the third quadrant, corresponding to the third audio type, the audio features of the third audio type may include: minor mode, high volume, fast tempo, complex chords, fast note onset, flat pitch contour, high pitch range, and large pitch variation.
4. When the mood quadrant is the fourth quadrant, corresponding to the fourth audio type, the audio features of the fourth audio type may include: minor mode, complex chords, legato articulation, small pitch variation, and fast tempo.
5. When the mood quadrant is the fifth quadrant, corresponding to the fifth audio type, the audio features of the fifth audio type may include: slow tempo, legato articulation, minor mode, complex or simple chords, soft volume, low pitch, and slow note onset.
Here, a slow tempo refers to a musical speed greater than or equal to a third speed and less than or equal to a fourth speed; illustratively, the third speed may be 76 bpm and the fourth speed may be 108 bpm. A soft volume means that the relative volume is greater than or equal to a fifth volume and less than or equal to a sixth volume; illustratively, the fifth volume may be 20 dB and the sixth volume may be 40 dB. A low pitch means that the pitch is less than a third pitch; illustratively, the third pitch may be 1300 mel.
6. When the mood quadrant is the sixth quadrant, corresponding to the sixth audio type, the audio features of the sixth audio type may include: soft volume, slow tempo, small pitch variation, legato articulation, slow note onset, and low pitch.
7. When the mood quadrant is the seventh quadrant, corresponding to the seventh audio type, the audio features of the seventh audio type may include: slow tempo, soft volume, legato articulation, and slow note onset.
Here, a slow tempo refers to a musical speed greater than or equal to a fifth speed and less than or equal to a sixth speed; illustratively, the fifth speed may be 40 bpm and the sixth speed may be 76 bpm. A soft volume means that the relative volume is greater than or equal to a seventh volume and less than or equal to an eighth volume; illustratively, the seventh volume may be 0 dB and the eighth volume may be 20 dB.
8. When the mood quadrant is the eighth quadrant, corresponding to the eighth audio type, the audio features of the eighth audio type may include: major mode, simple chords, slow tempo, soft volume, and staccato articulation.
Here, a slow tempo refers to a musical speed greater than or equal to the third speed and less than or equal to the fourth speed; illustratively, the third speed may be 76 bpm and the fourth speed may be 108 bpm. A soft volume means that the relative volume is greater than or equal to the fifth volume and less than or equal to the sixth volume; illustratively, the fifth volume may be 20 dB and the sixth volume may be 40 dB.
In the above description, the musical speed values from slowest to fastest are: the fifth speed, the sixth speed, the third speed, the fourth speed, the first speed, and the second speed.
The volume values from lowest to highest are: the seventh volume, the eighth volume, the fifth volume, the sixth volume, the first volume, the second volume, the third volume, and the fourth volume.
The pitch values from lowest to highest are: the third pitch, the first pitch, and the second pitch.
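For illustration only, the threshold scheme above can be sketched as a simple classifier. The following is a minimal Python sketch covering only the four audio types used in path one; the class, function, and field names are hypothetical, and only the numeric thresholds (the speeds, volumes, and the 1300 mel pitch) come from the ranges described above.

```python
from dataclasses import dataclass

@dataclass
class AudioFeatures:
    mode: str          # "major" or "minor"
    tempo_bpm: float   # musical speed
    volume_db: float   # (relative) volume
    pitch_mel: float   # pitch

# Example thresholds taken from the ranges above.
FIRST_SPEED, SECOND_SPEED = 120, 168   # bpm (fast tempo)
THIRD_SPEED, FOURTH_SPEED = 76, 108    # bpm (slow tempo, types 5/6/8)
FIFTH_SPEED, SIXTH_SPEED = 40, 76      # bpm (slow tempo, type 7)
FIRST_PITCH = 1300                     # mel (high-pitch threshold)

def classify(f: AudioFeatures) -> str:
    """Map extracted features to one of the four types used in path one."""
    if FIFTH_SPEED <= f.tempo_bpm <= SIXTH_SPEED and 0 <= f.volume_db <= 20:
        return "seventh audio type"    # slow tempo, very soft volume
    if (f.mode == "major" and THIRD_SPEED <= f.tempo_bpm <= FOURTH_SPEED
            and 20 <= f.volume_db <= 40):
        return "eighth audio type"     # major mode, slow tempo, soft volume
    if (f.mode == "major" and FIRST_SPEED <= f.tempo_bpm <= SECOND_SPEED
            and 40 <= f.volume_db <= 60 and f.pitch_mel > FIRST_PITCH):
        return "first audio type"      # major mode, fast tempo, high pitch
    if (FIRST_SPEED <= f.tempo_bpm <= SECOND_SPEED
            and 60 <= f.volume_db <= 70 and f.pitch_mel > FIRST_PITCH):
        return "second audio type"     # fast tempo, loud, high pitch
    return "unclassified"              # other quadrants / outside the ranges
```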
The following describes an audio playing strategy after the alarm clock is started, which is provided in the embodiment of the present application.
Fig. 3C is a schematic diagram, provided in an embodiment of the present application, of determining an audio playing policy based on the user's wakefulness.
As shown in fig. 3C, in the embodiment of the present application, the waking process of the user may be divided into a plurality of stages, for example, five stages. Wherein the five phases include a first awake phase, a second awake phase, a third awake phase, a fourth awake phase, and a fifth awake phase. The awakening degree of the user corresponding to each awakening stage is different.
When the wakefulness is in a first wakefulness interval (e.g., wakefulness is 0 to 30%), the user is in a first wake-up phase, and in the first wake-up phase, the electronic device 100 may play an audio file of the seventh audio type. When the wakefulness is in a second wakefulness interval (e.g., wakefulness of 30% to 50%), the user is in a second wake-up phase in which the electronic device 100 may play an audio file of an eighth audio type. When the wakefulness is in a third wakefulness interval (e.g., wakefulness is 50% to 80%), the user is in a third wake-up phase in which the electronic device 100 may play the audio file of the first audio type. When the wakefulness is in a fourth wakefulness interval (e.g., wakefulness of 80% to 100%), the user is in a fourth wake-up phase in which the electronic device 100 may play an audio file of the second audio type. When the wakefulness is in a fifth wakefulness interval (e.g., the wakefulness is greater than or equal to 100%), the user is in a fifth wake-up phase, and in the fifth wake-up phase, the electronic device 100 may play the audio file of the first audio type.
It is understood that the embodiments of the present application are merely illustrative of the different ranges of wakefulness for different wake-up phases. In other embodiments, dividing the wake-up process of the user into different wake-up phases based on the wake-up degree of the user may include more or less wake-up phases than those shown in the drawings, and the embodiments of the present application are not limited herein.
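For illustration, the stage lookup of fig. 3C can be sketched as a table of intervals. This is a minimal sketch assuming the example boundaries above; the names and data layout are hypothetical.

```python
# Example wakefulness intervals from fig. 3C; all boundaries are illustrative.
WAKE_STAGES = [
    (0.00, 0.30, "seventh audio type"),   # first wake-up stage
    (0.30, 0.50, "eighth audio type"),    # second wake-up stage
    (0.50, 0.80, "first audio type"),     # third wake-up stage
    (0.80, 1.00, "second audio type"),    # fourth wake-up stage
]

def audio_type_for_wakefulness(wakefulness: float) -> str:
    """Return the audio type to play for a wakefulness in [0, 1]."""
    if wakefulness >= 1.0:
        return "first audio type"         # fifth wake-up stage (soothing)
    for low, high, audio_type in WAKE_STAGES:
        if low <= wakefulness < high:
            return audio_type
    raise ValueError("wakefulness must be between 0 and 1")
```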
The wake-up process of the user can further be divided into a pre-wake-up stage, a fast wake-up stage, and a soothing stage according to the wake-up stages: the first wake-up stage and the second wake-up stage form the pre-wake-up stage, the third wake-up stage and the fourth wake-up stage form the fast wake-up stage, and the fifth wake-up stage is the soothing stage.
During the waking process of the user, the volume at which the audio file is played may optionally be adjusted. For example, as shown in fig. 3C, in the pre-wake-up stage, the playback volume increases slowly; in the fast wake-up stage, the playback volume increases markedly; and in the soothing stage, the playback volume slowly falls back.
It is understood that the embodiments of the present application are only exemplary to illustrate that the volume of the audio file playing can be adjusted during the user wake-up phase. In other embodiments, different volume change curves may also be available, and the embodiments of the present application are not limited herein.
The wake-up process of the user may also be divided into a light sleep stage and an awake stage according to the sleep state. The light sleep stage includes the pre-wake-up stage and most of the fast wake-up stage; the awake stage includes the soothing stage and a small part of the fast wake-up stage.
Fig. 3D is a schematic diagram, provided in an embodiment of the present application, of determining an audio playing policy based on the time length after the alarm clock is started.
As shown in fig. 3D, in the embodiment of the present application, the waking process of the user may be divided into a plurality of stages, for example, five stages, where the five stages include a first wake-up stage, a second wake-up stage, a third wake-up stage, a fourth wake-up stage, and a fifth wake-up stage. The time length after the alarm clock is started that corresponds to each wake-up stage is different.
When the time length after the alarm clock is started is within a first time length interval (for example, the time length is 0 to 30 seconds), the user is in a first wake-up stage, and in the first wake-up stage, the electronic device 100 may play an audio file of a seventh audio type. When the time length after the alarm clock is started is within a second time length interval (for example, the time length is 30 to 60 seconds), the user is in a second wake-up stage, and in the second wake-up stage, the electronic device 100 may play an audio file of an eighth audio type. When the time length after the alarm clock is started is within a third time length interval (for example, the time length is 60 to 100 seconds), the user is in a third wake-up stage, and in the third wake-up stage, the electronic device 100 may play the audio file of the first audio type. When the time length after the alarm clock is started is within a fourth time length interval (for example, the time length is 100 to 140 seconds), the user is in a fourth wake-up stage, and in the fourth wake-up stage, the electronic device 100 may play the audio file of the second audio type. When the time length after the alarm clock is started is within a fifth time length interval (for example, the time length is 140 to 180 seconds), the user is in a fifth wake-up stage, and in the fifth wake-up stage, the electronic device 100 may play the audio file of the first audio type.
It is understood that the embodiments of the present application are merely illustrative of the differences in the ranges of the durations after the alarm clock starts corresponding to different wake-up phases. In other embodiments, dividing the wake-up process of the user into different wake-up stages based on the time length after the alarm clock is started may include more or less wake-up stages than those shown in the drawings, and the embodiments of the present application are not limited herein.
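Analogously, the duration-based lookup of fig. 3D can be sketched as follows, again assuming the example interval boundaries above; the behavior after 180 seconds is an assumption.

```python
# Example duration intervals (seconds) from fig. 3D; boundaries are illustrative.
TIME_STAGES = [
    (0, 30, "seventh audio type"),     # first wake-up stage
    (30, 60, "eighth audio type"),     # second wake-up stage
    (60, 100, "first audio type"),     # third wake-up stage
    (100, 140, "second audio type"),   # fourth wake-up stage
    (140, 180, "first audio type"),    # fifth wake-up stage
]

def audio_type_for_elapsed(seconds_since_alarm: float) -> str:
    for start, end, audio_type in TIME_STAGES:
        if start <= seconds_since_alarm < end:
            return audio_type
    return "first audio type"   # assumption: keep the final stage's type after 180 s
```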
In some embodiments, before the electronic device 100 determines the audio playing strategy based on the time length after the alarm clock is started, the electronic device 100 may obtain a wake-up degree data set of the user and divide the time length corresponding to each wake-up stage based on that data set. After the electronic device 100 has divided the duration of each wake-up stage, the electronic device 100 may determine the wake-up stage the current user is in from the correspondence between wake-up stages and durations, based on the time elapsed since the current alarm clock started.
The embodiment of the application provides a method for determining the corresponding duration of each awakening stage based on an awakening degree data set.
For example, the duration may be segmented with 5 seconds as the unit duration: count the probabilities that the length of a given wake-up stage fell within 0-5 seconds, 6-10 seconds, 11-15 seconds, and so on, across the user's past wake-up processes, and take the duration range with the highest probability as that stage's duration. Taking the first wake-up stage as an example, when the probability that its length is 35-40 seconds is the largest, the duration of the first wake-up stage is set to 40 seconds. The durations corresponding to the remaining wake-up stages may be determined in the same manner.
It can be understood that the above is one method provided in this embodiment for determining the duration corresponding to each wake-up stage based on the wake-up degree data set. In some embodiments, other methods may also be used to determine the duration corresponding to each wake-up stage, for example, taking the mean of a stage's durations across past wake-up processes as that stage's duration, which is not limited in this embodiment of the application.
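The bin-counting method just described can be sketched as follows; the 5-second bin width comes from the example above, while the function name and data shape are assumptions.

```python
from collections import Counter

def stage_duration(past_durations: list[float], bin_width: float = 5.0) -> float:
    """Estimate one wake-up stage's duration from its lengths in past wake-ups.

    The most frequent bin_width-second bin wins, and the bin's upper edge is
    taken as the stage duration (e.g., the 35-40 s bin yields 40 s).
    """
    if not past_durations:
        raise ValueError("need at least one historical duration")
    bins = Counter(int(d // bin_width) for d in past_durations)
    best_bin, _ = bins.most_common(1)[0]
    return (best_bin + 1) * bin_width
```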
The following describes an architecture of an audio playing system provided in an embodiment of the present application.
Fig. 4 shows a schematic diagram of an audio playing system 10 provided in the embodiment of the present application. As shown in fig. 4, the audio playing system 10 may include: electronic device 100 and electronic device 200.
The electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer, a netbook, a personal digital assistant (PDA), an AR device, a VR device, an AI device, a wearable device (e.g., a watch or a bracelet), an in-vehicle device, a smart home device, and/or a smart city device.
The electronic device 200 may be a watch, a bracelet, a smart bed, or other electronic device capable of collecting physiological data of a user. The physiological data of the user may include, but is not limited to: skin conductance data, electrocardiogram data, blood pressure data and blood sugar data.
Illustratively, the electronic device 100 shown in fig. 4 is a cellular phone and the electronic device 200 is a watch.
In the embodiment of the present application, a communication connection is established between the electronic device 100 and the electronic device 200. The communication connection between the electronic device 100 and the electronic device 200 may be a wired connection or a wireless connection. The wireless connection may be a wireless local area network (WLAN) connection, a wireless fidelity (Wi-Fi) connection, a Bluetooth connection, an infrared connection, a near field communication (NFC) connection, ZigBee, another wireless communication technology that emerges in future development, or another similar short-range connection. The electronic device 100 may also establish a long-range connection with the electronic device 200, including but not limited to a long-range connection over a mobile network based on 2G, 3G, 4G, 5G, and subsequent standard protocols. The electronic device 100 and the electronic device 200 may also log in to the same user account (for example, a Huawei account) and then connect remotely through a server.
At time one, the electronic device 100 acquires the physiological data of the user transmitted by the electronic device 200 in real time through the communication connection.
Optionally, at time one, the electronic device 100 sends a request one to the electronic device 200. After receiving the request, the electronic device 200 transmits the physiological data of the user to the electronic device 100 in real time through the communication connection. The electronic device 100 receives the physiological data of the user transmitted by the electronic device 200 in real time.
After the electronic device 100 receives the physiological data of the user sent by the electronic device 200 in real time, the electronic device 100 may determine the arousal level of the user according to the physiological data of the user. After determining the audio playing strategy based on the awakening degree of the user, the electronic device 100 plays the audio file based on the audio playing strategy.
The functional modules of the audio playback system 10 provided in the embodiment of the present application are described below.
Fig. 5A shows a functional block diagram of the audio playback system 10 provided in the embodiment of the present application. As shown in fig. 5A, system 10 may include electronic device 100 and electronic device 200. The electronic device 100 may include an alarm clock starting module 111, a data parsing module 112, and a sound processing module 113. The electronic device 200 may include a data monitoring module 121.
The alarm clock starting module 111 can be used to send instruction one to the data monitoring module 121 in the electronic device 200 at time one. It should be noted that, in the embodiment of the present application, instruction one may also be referred to as request one.
The data monitoring module 121 may be configured to receive and respond to the instruction one, and send the physiological data of the user collected by the data monitoring module 121 to the data analysis module 112. In some embodiments, the data monitoring module 121 may also receive and respond to the instruction one to start continuously collecting the physiological data of the user and send the collected physiological data of the user to the data parsing module 112.
The data analysis module 112 may be configured to receive the physiological data of the user, determine a wakefulness of the user based on the physiological data of the user, and send the wakefulness of the user to the sound processing module 113.
The sound processing module 113 may be configured to receive the user's arousal level, determine an audio playing policy based on the user's arousal level, and play an audio file based on the audio playing policy.
It should be noted that the data monitoring module 121 may periodically collect the physiological data of the user and periodically send the collected physiological data to the data analysis module 112. The data analysis module 112 periodically receives the physiological data of the user, periodically determines the arousal level of the user, and periodically sends the arousal level to the sound processing module 113, which accordingly updates the audio playing policy and plays the audio file based on the updated policy.
Optionally, the alarm clock starting module 111 may also be disposed in the electronic device 200; that is, the alarm clock starting module 111 in the electronic device 200 sends instruction one to the data monitoring module 121 at time one, where instruction one is used to instruct the data monitoring module 121 to send the acquired physiological data of the user to the data analysis module 112.
In some embodiments, after determining the user's arousal level, the data parsing module 112 may also determine an audio playing policy based on the user's arousal level and send the audio playing policy to the sound processing module 113. After receiving the audio playing strategy, the sound processing module 113 plays the audio file in the audio playing strategy based on the audio playing strategy.
In other embodiments, the data parsing module 112 may also be disposed in the electronic device 200, that is, after the data parsing module 112 in the electronic device 200 receives the physiological data of the user sent by the data monitoring module 121 and determines the arousal level of the user based on the physiological data of the user, the data parsing module 112 in the electronic device 200 sends the arousal level of the user to the sound processing module 113 in the electronic device 100.
Fig. 5B is a schematic diagram illustrating the work flow of the data monitoring module 121 and the data analysis module 112.
The data monitoring module 121 is configured to continuously collect physiological data of the user through a sensor and send the data to the data analysis module 112. For example, as shown in fig. 5B, the data monitoring module 121 may include a skin conductance (galvanic skin response) sensor for collecting skin conductance data of the user.
The data parsing module 112 is configured to receive the skin conductance data of the user sent by the data monitoring module 121, and determine the arousal level of the user based on the skin conductance data of the user.
The physiological data of the user may include, but is not limited to, skin conductance data, electrocardiogram data, blood pressure data, blood glucose data. The following embodiments of the present application are described in terms of determining the arousal level of a user from skin conductance data. In other embodiments, the electronic device 100 may also determine the arousal level of the user based on other physiological data, which is not limited herein.
Electrodermal activity (EDA) refers to the change in skin conductivity under autonomic nervous regulation. During sleep, EDA exhibits different characteristics in different sleep stages, and the user's skin conductance value trends upward as the user gradually moves from the asleep state into the awake state. For example, the skin conductance value of the user in the awake state is higher than the skin conductance value of the user in the asleep state.
Referring to fig. 5C, fig. 5C is a schematic diagram illustrating a plurality of skin conductance data points collected by a skin conductance sensor within a sampling period according to an embodiment of the present application.
As shown in fig. 5C, the horizontal axis represents time and the vertical axis represents the skin conductance value. The skin conductance data is collected by the skin conductance sensor at a fixed sampling rate; the dots in fig. 5C represent the collected skin conductance data points, and t1 to t5 may represent one sampling period. Skin conductance value one and skin conductance value two respectively represent the skin conductance values corresponding to the first peak and the second peak of the skin conductance waveform within the sampling period: the skin conductance value corresponding to time t2 is skin conductance value one, and the skin conductance value corresponding to time t4 is skin conductance value two. A reference response value is the skin conductance value at the moment the skin conductance begins to rise: the collected values rise gradually from time t1 to time t2, so the value corresponding to time t1 is reference response value one; they rise again from time t3 to time t4, so the value corresponding to time t3 is reference response value two. Amplitude one represents the difference between the maximum skin conductance value collected between time t1 and time t2 (i.e., skin conductance value one) and reference response value one; amplitude two represents the difference between the maximum skin conductance value collected between time t3 and time t4 (i.e., skin conductance value two) and reference response value two.
According to the skin conductance algorithm, within a sampling period, when the skin conductance value at a given moment is greater than that at the previous moment, the electronic device calculates the increment of the skin conductance value relative to the previous moment. All skin conductance increments within the sampling period are summed and divided by the length of the sampling period to obtain the user's wake-up value for that period. For example, as shown in the figure, the wake-up value between time t1 and time t2 may be determined based on formula (1).
B = A1/(t2 - t1)    formula (1)
As shown in formula (1), B represents the wake-up value between time t1 and time t2, A1 represents amplitude one, and (t2 - t1) represents the time interval between the two moments.
The wake-up value between time t1 and time t3 may be determined based on formula (2).
B = A1/(t3 - t1)    formula (2)
As shown in formula (2), B represents the wake-up value between time t1 and time t3, A1 represents amplitude one, and (t3 - t1) represents the time interval between the two moments.
The wake-up value between time t1 and time t4 may be determined based on formula (3).
B = (A1 + A2)/(t4 - t1)    formula (3)
As shown in formula (3), B represents the wake-up value between time t1 and time t4, A1 represents amplitude one, A2 represents amplitude two, and (t4 - t1) represents the time interval between the two moments.
It can be understood that the skin conductance algorithm above is one method for determining the user's wake-up value provided in this embodiment; in some embodiments, other wake-up value algorithms may also be used to determine the user's wake-up value, which is not limited herein.
After the electronic device 100 calculates the wake-up value in a sampling period, the electronic device 100 may determine the wake-up degree of the user in the sampling period based on formula (4) according to the wake-up value, the preset maximum wake-up value, and the preset minimum wake-up value.
D = (B - P)/(Q - P) × 100%    formula (4)
As shown in formula (4), D represents the user's wake-up degree in a sampling period, B represents the user's wake-up value monitored by the electronic device 100 in that sampling period, and P and Q are preset values, where P represents the preset minimum wake-up value and Q represents the preset maximum wake-up value, so that D increases with B. For example, in fig. 5C, if time t1 to time t5 is one sampling period, the user's wake-up value B is, following formula (3), the sum of amplitude one and amplitude two divided by the interval (t5 - t1).
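For illustration, formulas (1) through (4) can be combined into a short routine. This is a minimal sketch assuming time-stamped conductance samples in ascending order; summing only the rising increments reproduces the amplitude sums (A1, A1 + A2) used in the formulas above.

```python
def wake_value(samples: list[tuple[float, float]]) -> float:
    """Formulas (1)-(3): sum of skin conductance rises over the period length.

    samples: (time, conductance) pairs in ascending time order.
    """
    if len(samples) < 2:
        return 0.0
    total_rise = 0.0
    for (_, prev_val), (_, cur_val) in zip(samples, samples[1:]):
        if cur_val > prev_val:                 # only count rising segments
            total_rise += cur_val - prev_val   # increment vs. previous sample
    return total_rise / (samples[-1][0] - samples[0][0])

def wakefulness(b: float, p_min: float, q_max: float) -> float:
    """Formula (4): D = (B - P)/(Q - P) x 100%, with P and Q the preset
    minimum and maximum wake-up values."""
    return (b - p_min) / (q_max - p_min) * 100.0
```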
Fig. 6 is a flowchart illustrating an audio playing method according to an embodiment of the present application. Fig. 6 schematically shows an interaction flow between the electronic devices 100 and 200.
As shown in fig. 6, the method may include the steps of:
S601, in the preparation phase, the electronic device 100 and the electronic device 200 establish a communication connection.
In the embodiment of the present application, the communication connection between the electronic device 100 and the electronic device 200 may be a wired connection or a wireless connection. The wireless connection may be a WLAN connection, a Wi-Fi connection, a Bluetooth connection, an infrared connection, an NFC connection, ZigBee, or another wireless communication technology that emerges in future development. The electronic device 100 may also establish a long-range connection with the electronic device 200, including but not limited to a long-range connection over a mobile network based on 2G, 3G, 4G, 5G, and subsequent standard protocols. The electronic device 100 and the electronic device 200 may also log in to the same user account (for example, a Huawei account) and then connect remotely through a server.
In a specific example, the electronic device 100 may search for nearby electronic devices using a wireless communication technology (e.g., Bluetooth) and, after finding one (e.g., the electronic device 200), send a connection request to the electronic device 200 in response to a received user operation; alternatively, the electronic device 100 may automatically send a connection request to the electronic device 200 after finding the nearby, previously connected electronic device 200. After the electronic device 200 accepts the request, the electronic device 100 may establish a wireless connection with the electronic device 200.
Fig. 7A-7B illustrate a set of user interface diagrams for implementing an alarm setting operation on the electronic device 100 according to an embodiment of the present application. Fig. 7A shows a user interface 71 displayed on the electronic device 100, the user interface 71 may be provided by an alarm clock application installed on the electronic device 100. As shown in fig. 7A, the user interface 71 includes an edit completion control 701, a cancel edit control 702, an alarm time setting area 703, an alarm frequency setting area 704, an alarm ringtone setting area 705, and an alarm delete control 706.
The edit completion control 701 can be used to save the alarm settings on the user interface 71. The cancel editing control 702 can be used to cancel the current alarm-setting changes. The alarm time setting area 703 is used to set the start time of the alarm. The alarm frequency setting area 704 may be used to set the activation dates of the alarm, for example, to set the alarm to be activated on Monday through Friday. The alarm delete control 706 can be used to delete the current alarm. The alarm ringtone setting area 705 is used to select, for each audio type, one of the one or more audio files of that type as the audio file to be played.
The alarm ringtone setting area 705 may include a plurality of setting controls, such as a first audio type setting control 705a, a second audio type setting control 705b, a seventh audio type setting control 705c, an eighth audio type setting control 705d, and the like, each used to select one of the one or more audio files of the corresponding audio type as that type's audio file to be played.
For example, when setting the audio file of the first audio type, clicking the first audio type setting control 705a may trigger the electronic device 100 to display the user interface 72, as shown in fig. 7B. The user interface 72 may include a recommendation area 707 and a discretionary area 708. One or more audio files of the first audio type that the electronic device 100 recommends to the user are displayed in the recommendation area 707; one or more audio files of the first audio type available for the user's own choice are displayed in the discretionary area 708, and the audio files displayed in the discretionary area 708 may include those displayed in the recommendation area 707. When the user selects any audio file displayed in the recommendation area 707 or the discretionary area 708, the electronic device 100 may play the corresponding audio file for the user to audition. For example, as shown in fig. 7B, if the user selects the audio file corresponding to music 3, the electronic device 100 may play that audio file for audition. In some embodiments, the electronic device 100 may also play real-time mixed audio for the user to audition; for example, when the user sets the audio file of the second audio type and selects one of the one or more audio files of that type, the electronic device 100 may mix it in real time with the audio file set for the first audio type and play the result.
It should be noted that, when setting an audio file in the second audio type, the audio files displayed in the recommended area and the optional area are both of the second audio type. When the audio files in the seventh audio type are set, the audio files displayed in the recommended area and the selected area are the seventh audio type. When the audio files in the eighth audio type are set, the audio files displayed in the recommended area and the optional area are all of the eighth audio type.
S602, the electronic device 200 continuously collects physiological data of the user.
Before the electronic device 200 transmits the physiological data of the user to the electronic device 100, the electronic device 200 starts continuously collecting the physiological data of the user, such as skin conductance data, electrocardiogram data, blood pressure data, or blood glucose data. In some embodiments, the electronic device 200 acquires the physiological data periodically, with a period of 5 seconds and a sampling window of 0.5 seconds; that is, it performs an acquisition every 5 seconds, and each acquisition samples at a fixed sampling rate for 0.5 seconds.
In some embodiments, step S602 may also be that the electronic device 200 receives and responds to the start request sent by the electronic device 100 to start continuously acquiring the physiological data of the user.
It is understood that step S602 is only required to be executed before the electronic device 200 transmits the physiological data of the user to the electronic device 100, and the execution sequence of step S602 is not limited in this embodiment.
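For illustration, the 5-second period and 0.5-second sampling window can be sketched as a simple acquisition loop; read_sensor and send_to_phone are hypothetical callbacks, and the fixed sample rate is an assumed value.

```python
import time

SAMPLING_PERIOD = 5.0    # seconds between acquisitions (from the text above)
SAMPLING_WINDOW = 0.5    # seconds of data captured each time (from the text above)
SAMPLE_RATE = 100        # Hz; assumed sensor rate, not specified in the text

def collection_loop(read_sensor, send_to_phone):
    """Every SAMPLING_PERIOD seconds, capture SAMPLING_WINDOW seconds of
    sensor readings at SAMPLE_RATE and forward them; runs until stopped."""
    interval = 1.0 / SAMPLE_RATE
    while True:
        burst = []
        for _ in range(int(SAMPLING_WINDOW * SAMPLE_RATE)):
            burst.append(read_sensor())   # one physiological reading
            time.sleep(interval)          # pace reads to the sample rate
        send_to_phone(burst)              # hand the burst to the data
                                          # parsing module on device 100
        time.sleep(SAMPLING_PERIOD - SAMPLING_WINDOW)
```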
S603, at time one, the electronic device 100 sends a request one to the electronic device 200.
In this embodiment of the application, time one may be the alarm clock starting time of the electronic device 100 (for example, time two), or time one may be earlier than time two.
When time one is equal to time two, the electronic device 100 determines that the current time is time one and plays the audio file specified by audio playing policy one. Meanwhile, the electronic device 100 sends request one to the electronic device 200. Request one is used to request the electronic device 200 to determine whether the electronic device 200 can collect the physiological data of the user.
When time one is earlier than time two, the electronic device 100 sends request one to the electronic device 200 before the electronic device 100 starts playing the audio file according to the audio playing policy. Request one is likewise used to request the electronic device 200 to determine whether the electronic device 200 can collect the physiological data of the user.
In some embodiments, the electronic device 100 may not perform step S603. When time one is equal to time two, the electronic device 100 plays the audio file specified by audio playing policy one, and meanwhile the electronic device 200 executes step S604.
Alternatively, when time one is earlier than time two, the electronic device 200 directly executes step S604. In this way, the electronic device 200 determines whether it can collect the physiological data of the user before the alarm clock starting time arrives, which avoids the situation in which executing step S604 only after time two is reached would make the time at which the electronic device 100 starts playing the audio file later than the alarm clock starting time (time two).
In other embodiments, steps S604, S605, and S606 may not be performed. In that case, the electronic device 200 receives and responds to request one sent by the electronic device 100 and directly performs step S607, that is, sends the physiological data of the user to the electronic device 100. Here, request one sent by the electronic device 100 is used to request the electronic device 200 to acquire the physiological data of the user.
S604, the electronic device 200 receives and responds to request one, and determines whether the electronic device 200 can transmit the physiological data of the user to the electronic device 100.
S604, S605, and S606 are optional steps, applied to scenarios in which the user forgets to wear the electronic device 200 or the electronic device 200 cannot work normally, so as to ensure that, under those circumstances, the electronic device 100 can still work normally using the method provided in the present application and wake the user pleasantly.
When the electronic device 200 can transmit the physiological data of the user to the electronic device 100, for example, when the electronic device 200 can collect the physiological data of the user and transmit it to the electronic device 100, the electronic device 200 performs S607.
When the electronic device 200 cannot transmit the physiological data of the user to the electronic device 100, for example, when the electronic device 200 cannot acquire the physiological data of the user, the electronic device 200 performs S605.
For example, when the electronic device 200 is a wearable device (e.g., a smart bracelet or a smart watch), if the user is not wearing the electronic device 200 or its battery is low, the electronic device 200 cannot collect the physiological data of the user and thus cannot transmit the data to the electronic device 100.
When the electronic device 200 is a smart bed, if the user is not using the electronic device 200 or its battery is low, the electronic device 200 likewise cannot collect the physiological data of the user and thus cannot transmit the data to the electronic device 100.
S605, when the electronic device 200 cannot transmit the physiological data of the user to the electronic device 100, the electronic device 200 sends response one to the electronic device 100.
S606, the electronic device 100 receives and responds to response one, determines an audio playing strategy based on the time length after the alarm clock is started, and plays the audio file specified by that audio playing strategy.
Having received response one sent by the electronic device 200, the electronic device 100 determines that the electronic device 200 cannot collect the physiological data of the user, so the electronic device 100 determines the audio playing strategy based on the duration after the alarm clock is started.
For example, when the electronic device 100 detects that the duration after the alarm clock is started is in the first duration interval, the electronic device 100 may play the audio file specified by audio playing policy one. When the electronic device 100 detects that the duration after the alarm clock is started is in the second duration interval, the electronic device 100 may play the audio file specified by audio playing policy two. The first duration interval and the second duration interval are different, and the audio file specified by audio playing policy one is different from the audio file specified by audio playing policy two.
The duration interval may be a duration interval set before the electronic device 100 leaves the factory, or may be determined by the electronic device 100 based on the wake-up degree data set of the user. In an embodiment of the present application, the wake-up level data set of the user may be a data set stored by the electronic device 100 and related to the previous wake-up process of the user. The method for determining each duration interval based on the wake-up degree data set by the electronic device 100 may refer to the related description in fig. 3D, and is not described herein again.
In the embodiment of the present application, the audio file in an audio playing strategy may be a single-track audio file. Taking the wake-up process of the user divided into five wake-up stages as an example: the first audio type may include one or more audio files, and when the electronic device 100 plays the audio file specified by audio playing policy one, the electronic device 100 may play one of the one or more audio files of the first audio type. The second audio type may include one or more audio files, and when the electronic device 100 plays the audio file specified by audio playing policy two, the electronic device 100 may play one of the one or more audio files of the second audio type. The third audio type may include one or more audio files, and when the electronic device 100 plays the audio file specified by audio playing policy three, the electronic device 100 may play one of the one or more audio files of the third audio type. The fourth audio type may include one or more audio files, and when the electronic device 100 plays the audio file specified by audio playing policy four, the electronic device 100 may play one of the one or more audio files of the fourth audio type. When the electronic device 100 plays the audio file specified by audio playing policy five, the electronic device 100 may play one of the one or more audio files of the third audio type.
In some embodiments, when the electronic device 100 switches the audio playing policy, it may gradually decrease the volume of the audio file specified by the previous audio playing policy while gradually increasing the volume of the audio file specified by the next audio playing policy. This avoids an unnatural transition when the electronic device 100 switches between playing different types of audio files.
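One plausible realization of this fade-out/fade-in is a linear crossfade. The sketch below assumes both tracks are decoded to NumPy arrays at the same sample rate; it is an illustration, not the implementation used by the electronic device 100.

```python
import numpy as np

def crossfade(prev_tail: np.ndarray, next_head: np.ndarray) -> np.ndarray:
    """Ramp the previous track's volume down while ramping the next one up."""
    n = min(len(prev_tail), len(next_head))
    fade_out = np.linspace(1.0, 0.0, n)   # previous policy's file fades out
    fade_in = np.linspace(0.0, 1.0, n)    # next policy's file fades in
    return prev_tail[:n] * fade_out + next_head[:n] * fade_in
```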
In some embodiments, the audio file specified by an audio playing policy may also be a plurality of audio files, that is, a multi-track audio file obtained by mixing several different audio files. When the electronic device 100 switches the audio file being played, it may play the audio file specified by the previous audio playing policy mixed with the audio file specified by the current audio playing policy. It should be noted that the mixing may be done in real time at the moment of switching, or the mixed combinations may be prepared and stored locally before the electronic device 100 leaves the factory.
For example, the first audio type may include one or more audio files, and when the electronic device 100 plays the audio file specified by audio playing policy one, it may play one of the one or more audio files of the first audio type. The second audio type may include one or more audio files, and when the electronic device 100 plays the audio file specified by audio playing policy two, it may mix the already-playing audio file of the first audio type with an audio file to be played of the second audio type and play the result. The third audio type may include one or more audio files, and when the electronic device 100 plays the audio file specified by audio playing policy three, it may mix the already-played audio files of the first and second audio types with an audio file to be played of the third audio type and play the result. The fourth audio type may include one or more audio files, and when the electronic device 100 plays the audio file specified by audio playing policy four, it may mix the already-played audio files of the first, second, and third audio types with an audio file to be played of the fourth audio type and play the result. When the electronic device 100 plays the audio file specified by audio playing policy five, it may drop the fourth-audio-type file played in the previous stage and play the mix of the audio files of the first, second, and third audio types.
In this way, mixing the audio file specified by the previous audio playing policy with the audio file specified by the current audio playing policy avoids an unnatural transition when the electronic device 100 switches between playing different types of audio files.
Next, a specific implementation of the electronic device 100 mixing and playing the audio file specified by the audio playing policy will be described.
After the audio file specified by the previous-stage audio playing policy has been playing for some time, the electronic device 100 switches to the audio file specified by the next-stage audio playing policy. The electronic device 100 may mix the portion of the previous file from its current playback position onward with the next file from its starting time, and play the result.
For example, the previous-stage policy may be audio playing policy one and the next-stage policy may be audio playing policy two. Suppose the total length of the audio file specified by audio playing policy one is 30 seconds and, at the moment the electronic device 100 switches policies, that file has been playing for 15 seconds. When mixing the audio files specified by audio playing policy one and audio playing policy two, the electronic device 100 mixes the part of policy one's file from the 15-second mark onward with policy two's file from time zero.
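The 15-second example amounts to mixing the tail of the old file with the new file from time zero. A minimal sketch, again assuming NumPy arrays at a common sample rate; the 0.5 gain for headroom is an assumption.

```python
import numpy as np

def mix_from_offset(prev_track: np.ndarray, next_track: np.ndarray,
                    position: int) -> np.ndarray:
    """Mix prev_track from sample index `position` onward (its current
    playback position, e.g. 15 s * sample_rate) with next_track from 0."""
    tail = prev_track[position:]
    n = max(len(tail), len(next_track))
    mixed = np.zeros(n, dtype=np.float64)
    mixed[:len(tail)] += tail
    mixed[:len(next_track)] += next_track
    return 0.5 * mixed   # simple headroom scaling to avoid clipping
```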
S607, in case that the electronic apparatus 200 can transmit the physiological data of the user to the electronic apparatus 100, the electronic apparatus 200 continuously transmits the physiological data of the user to the electronic apparatus 100.
S608, the electronic device 100 receives the physiological data of the user sent by the electronic device 200, and determines the arousal level of the user based on the physiological data of the user.
For determining the user's arousal level based on the physiological data of the user, the electronic device 100 may refer to the skin conductance algorithm described in fig. 5C, which is not repeated here in this embodiment of the present application.
S609, the electronic device 100 determines an audio playing strategy according to the awakening degree of the user, executes the audio playing strategy, and plays the audio file specified by the audio strategy.
How the electronic device 100 determines the audio playing policy according to the user's wakefulness, executes it, and plays the specified audio file may refer to the method of determining the audio playing policy based on the user's wakefulness described in the embodiment of fig. 3C. For example, when the electronic device 100 detects that the user's wakefulness is in the first wakefulness interval, the electronic device 100 may play the audio file specified by audio playing policy one. When the electronic device 100 detects that the user's wakefulness is in the second wakefulness interval, the electronic device 100 may play the audio file specified by audio playing policy two. The first wakefulness interval and the second wakefulness interval are different, and the audio file specified by audio playing policy one is different from the audio file specified by audio playing policy two.
In S609, when playing the audio file specified by each audio playing policy, the electronic device 100 may play a single track audio file, or play a multi-track audio file in a mixed manner, which may specifically refer to the relevant description in S606, and this embodiment of the present application is not described herein again.
S610, the electronic device 100 stops playing the audio file.
The electronic device 100 may stop playing the audio file automatically, or may receive an input operation from the user to stop playing the audio file.
Specifically, in the case that the electronic device 100 stops playing the audio file automatically, the electronic device determines that the current time is time three and stops playing the audio file. Time three may be the time at which the user's wakefulness detected by the electronic device 100 becomes greater than or equal to a certain wakefulness threshold (e.g., 99%). Optionally, time three may be the time at which a preset time period has elapsed after the user's wakefulness reaches that threshold (e.g., 99%). The preset time period may be a default waiting time of the electronic device 100 or a duration input by the user in advance.
When the electronic apparatus 100 receives an input operation of the user to stop playing the audio file, the input of the user may be any one of voice, a fingerprint image, a face image, a gesture, a click operation, and a slide operation.
Illustratively, as shown in fig. 8A, in response to a sliding operation (e.g., a leftward or rightward slide) or a clicking operation by the user on the screen of the electronic device 100, the electronic device 100 stops playing the audio file. In the embodiment of the present application, the sliding direction is not limited to leftward or rightward; in a possible implementation, the sliding operation may also be downward, upward, or the like.
It should be noted that the starting position of the sliding operation is on the icon or within a preset distance of it, and the sliding track of the sliding operation may be a straight line or a curve. The icon here is the icon for stopping the playing of the audio file that is displayed on the screen of the electronic device 100 during audio playback.
Illustratively, as shown in fig. 8B, the electronic device 100 recognizes the user's voice command "turn off the alarm clock", and the electronic device 100 stops playing the audio file.
Illustratively, as shown in fig. 8C, the electronic device 100 recognizes that the gesture image of the user is a gesture for turning off the alarm clock, and the electronic device 100 stops playing the audio file.
As shown in fig. 8D and 8E, in the case that the electronic device 200 is a smart bed, S610 may also be that the electronic device 100 receives and responds to instruction three sent by the electronic device 200 and stops playing the audio file. That is, after the electronic device 200 detects that the user has gotten out of bed, the electronic device 200 sends instruction three to the electronic device 100; the electronic device 100 receives and responds to instruction three and stops playing the audio file.
This embodiment of the present application provides a method for playing audio files based on the time elapsed after an alarm clock is started. By controlling the playing durations of different types of audio files, the electronic device 100 can play at least two different types of audio files during the user's waking process, where the at least two types belong to at least two of the audio types corresponding to the seventh, eighth, first, and second quadrants in the schematic diagram of the relationship between wakefulness and music type shown in fig. 3B. That is, by playing different types of audio files, the electronic device 100 enables the user to wake up pleasantly and calmly.
The functional modules of the audio playback system 20 provided in the embodiment of the present application are described below.
Fig. 9A shows a functional block diagram of the audio playback system 20 provided in the embodiment of the present application. As shown in fig. 9A, the audio playback system 20 may include an electronic device 100. The electronic device 100 may include an alarm clock initiation module 211, a data parsing module 212, and a sound processing module 213.
The alarm clock initiation module 211 may be configured to send instruction one to the data parsing module 212 at the alarm clock start time.
The data parsing module 212 may be configured to receive instruction one and, in response, determine the audio playing policy based on the time and send that policy to the sound processing module 213.
For how the data parsing module 212 determines the audio playing policy based on the time, refer to the related description in the embodiment of fig. 9B.
The sound processing module 213 is configured to receive the audio playing policy sent by the data parsing module 212, and play an audio file specified by the audio playing policy.
Fig. 9B illustrates a flowchart of an audio playing method provided in an embodiment of the present application.
In this embodiment of the present application, for the alarm clock setting method, refer to the related description in S601 of fig. 6, which is not repeated here.
For example, the following embodiment divides the user's waking process into five stages, each with a different audio playing policy. In other embodiments, the waking process may be divided into more or fewer stages; this is not limited here.
As shown in fig. 9B, the method may include the steps of:
S901, when the electronic device 100 detects that the duration after the alarm clock is started is in the first duration interval, the electronic device 100 plays the audio file specified by audio playing policy one.
When the electronic device 100 detects that the duration after the alarm clock is started is in the first duration interval, the electronic device 100 determines that the current audio playing policy is audio playing policy one and plays the audio file that policy specifies (a sketch of this staged dispatch appears after S905 below).
In this embodiment of the application, in the first time period after the alarm clock start time, the electronic device 100 plays two different types of audio files, so that the user's pleasure gradually increases. The first time period is the sum of the durations of the first duration interval and the second duration interval.
When playing the audio file specified by audio playing policy one or by any of the following audio playing policies, the electronic device 100 may play a single-track audio file or mix multiple audio tracks; for details, refer to S605 of fig. 6, which is not repeated here.
S902, when the electronic device 100 detects that the duration after the alarm clock is started is in the second duration interval, the electronic device 100 plays the audio file specified by audio playing policy two.
In this case, the electronic device 100 determines that the current audio playing policy is audio playing policy two and plays the audio file that policy specifies.
S903, when the electronic device 100 detects that the duration after the alarm clock is started is in the third duration interval, the electronic device 100 plays the audio file specified by audio playing policy three.
In this case, the electronic device 100 determines that the current audio playing policy is audio playing policy three and plays the audio file that policy specifies.
S904, when the electronic device 100 detects that the duration after the alarm clock is started is in the fourth duration interval, the electronic device 100 plays the audio file specified by audio playing policy four.
In this case, the electronic device 100 determines that the current audio playing policy is audio playing policy four and plays the audio file that policy specifies.
S905, when the electronic device 100 detects that the duration after the alarm clock is started is in the fifth duration interval, the electronic device 100 plays the audio file specified by audio playing policy five.
In this case, the electronic device 100 determines that the current audio playing policy is audio playing policy five and plays the audio file that policy specifies.
Step S905 is optional; in some embodiments, the electronic device 100 may skip it.
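The five-stage dispatch of S901-S905 can be sketched as below. The interval boundaries (in minutes after alarm start) are assumptions for illustration; the embodiment does not specify them.

```python
from typing import Optional

# Upper bound of each duration interval (minutes after alarm start) and the
# policy it selects. The boundaries are illustrative assumptions.
STAGES = [
    (2.0,  "audio playing policy one"),
    (4.0,  "audio playing policy two"),
    (6.0,  "audio playing policy three"),
    (8.0,  "audio playing policy four"),
    (10.0, "audio playing policy five"),  # optional stage, see S905
]

def policy_for_elapsed(elapsed_min: float) -> Optional[str]:
    """Return the policy whose duration interval contains the elapsed time,
    or None once all intervals have passed (playback then stops, S906)."""
    for upper_bound, policy in STAGES:
        if elapsed_min < upper_bound:
            return policy
    return None
```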
S906, the electronic device 100 stops playing the audio file.
The electronic device 100 may stop playing the audio file automatically, or in response to a user input; for details, refer to the related description in S610 of fig. 6, which is not repeated here.
This embodiment of the present application provides a method for playing audio files based on the user's wakefulness. The electronic device 100 determines the user's wakefulness by collecting the user's physiological data and, based on that wakefulness, can play at least two different types of audio files during the user's waking process, where the at least two types belong to at least two of the audio types corresponding to the seventh, eighth, first, and second quadrants in the schematic diagram of the relationship between wakefulness and music type shown in fig. 3B. That is, by playing different types of audio files, the electronic device 100 enables the user to wake up pleasantly and calmly.
Fig. 10A is a functional block diagram of an audio playing system 30 provided in the embodiment of the present application.
The audio playback system 30 may include an electronic device 100. The electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device (e.g., a watch or band), a smart bed, a vehicle-mounted device, a smart home device, and/or a smart city device.
As shown in fig. 10A, the electronic device 100 may include an alarm clock starting module 311, a data monitoring module 312, a data parsing module 313, and a sound processing module 314.
The alarm clock starting module 311 may be configured to send instruction one to the data monitoring module 312 at time one.
The data monitoring module 312 may be configured to continuously collect the user's physiological data. Upon receiving instruction one, the data monitoring module 312 responds by sending the user's physiological data to the data parsing module 313. In some embodiments, the data monitoring module 312 may instead begin continuously collecting the user's physiological data upon receiving instruction one.
The data parsing module 313 may be configured to receive the user's physiological data, determine the user's wakefulness based on that data, and send the wakefulness to the sound processing module 314.
The sound processing module 314 may be configured to receive the user's wakefulness, determine an audio playing policy based on it, and play an audio file according to that policy.
It should be noted that the data monitoring module 312 may periodically collect the user's physiological data and periodically send it to the data parsing module 313. The data parsing module 313 periodically receives the physiological data, periodically determines the user's wakefulness, and periodically sends the wakefulness to the sound processing module 314.
In other embodiments, after determining the user's wakefulness, the data parsing module 313 may itself determine the audio playing policy based on the wakefulness and send the policy to the sound processing module 314, which then plays the audio file specified by that policy.
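A compact sketch of this module pipeline (fig. 10A), with the data parsing module itself choosing the policy as in this variant. The wakefulness estimate and the two-interval policy split are placeholders, not the algorithms of the embodiment.

```python
from typing import List

class SoundProcessingModule:
    def play(self, policy: str) -> None:
        print(f"playing audio file specified by {policy}")  # placeholder output

class DataParsingModule:
    def __init__(self, sound: SoundProcessingModule):
        self.sound = sound

    def on_physiological_window(self, samples: List[float]) -> None:
        # Placeholder wakefulness estimate; fig. 5C describes the real method.
        wakefulness = min(1.0, sum(samples) / (len(samples) or 1))
        policy = ("audio playing policy one" if wakefulness < 0.5
                  else "audio playing policy two")  # assumed two-interval split
        self.sound.play(policy)
```

A data monitoring module would then call on_physiological_window with each collected window, as in the periodic collection sketch after S1001 below.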
Fig. 10B is a flowchart illustrating an audio playing method provided in this embodiment of the present application.
As shown in fig. 10B, the method may include the steps of:
S1001, at time one, the electronic device 100 starts to periodically collect physiological data of the user.
In this embodiment of the present application, time one may be the alarm clock start time of the electronic device 100 (i.e., time two), or a time earlier than time two.
When time one is equal to time two, the electronic device 100 determines that the current time is time one, plays the audio file specified by audio playing policy one, and at the same time starts to periodically collect the user's physiological data.
When time one is earlier than time two, the electronic device 100 starts to periodically collect the user's physiological data before it begins playing audio files according to an audio playing policy.
In some embodiments, the electronic device 100 collects the user's physiological data with a collection period of 5 seconds and a sampling window of 0.5 seconds; that is, every 5 seconds it samples the user's physiological data at a fixed sampling rate for 0.5 seconds. A minimal sketch of this schedule is shown below.
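In this sketch, only the 5-second period and 0.5-second window come from the embodiment; the sampling rate and the sensor read function are assumptions.

```python
import threading
import time

COLLECTION_PERIOD_S = 5.0   # one window every 5 seconds (from the embodiment)
SAMPLE_WINDOW_S = 0.5       # each window lasts 0.5 seconds (from the embodiment)
SAMPLE_RATE_HZ = 100        # assumed fixed sampling rate; not specified

def start_periodic_collection(read_sensor, on_window) -> None:
    """Sample read_sensor() at SAMPLE_RATE_HZ for SAMPLE_WINDOW_S, hand the
    window to on_window, and reschedule every COLLECTION_PERIOD_S."""
    def collect_window():
        samples = []
        for _ in range(int(SAMPLE_WINDOW_S * SAMPLE_RATE_HZ)):
            samples.append(read_sensor())
            time.sleep(1.0 / SAMPLE_RATE_HZ)
        on_window(samples)  # e.g., forward to the wakefulness estimation stage
        threading.Timer(COLLECTION_PERIOD_S, collect_window).start()
    collect_window()
```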
S1002, the electronic device 100 determines the user's wakefulness based on the user's physiological data.
For the method by which the electronic device 100 determines the user's wakefulness from the physiological data, refer to the skin conductance sensor algorithm described in the embodiment shown in fig. 5C, which is not repeated here.
S1003, the electronic device 100 determines an audio playing policy based on the user's wakefulness.
For how the electronic device 100 determines the audio playing policy according to the user's wakefulness, refer to the method of determining an audio playing policy based on the user's wakefulness described in the embodiment of fig. 3C. For example, when the electronic device 100 detects that the user's wakefulness is in the first wakefulness interval, it may determine that the audio playing policy is audio playing policy one; when it detects that the user's wakefulness is in the second wakefulness interval, it may determine that the audio playing policy is audio playing policy two. The first wakefulness interval is different from the second wakefulness interval, and audio playing policy one is different from audio playing policy two.
For the audio files specified by each audio playing policy determined by the electronic device 100 in S1003, refer to the related description in S606, which is not repeated here.
S1004, the electronic device 100 plays the audio file specified by the audio playing policy.
When playing the audio file specified by each audio playing policy, the electronic device 100 may play a single-track audio file or mix multiple audio tracks; for details, refer to the related description in S606 of fig. 6, which is not repeated here.
S1005, the electronic device 100 stops playing the audio file.
The electronic device 100 may stop playing the audio file automatically, or in response to a user input; for details, refer to the related description in S610 of fig. 6, which is not repeated here.
Fig. 11 is a flowchart illustrating an audio playing method provided in an embodiment of the present application.
As shown in fig. 11, the method may include the steps of:
S1101, the electronic device 100 detects the start of an alarm clock.
S1102, the electronic device 100 determines a first audio file in a first time period after the alarm clock is started, and plays the first audio file.
In the first time period after the alarm clock is started, the electronic device 100 may determine a first target type from the mapping relationship between alarm clock start duration intervals and audio types, based on the first time period. The first target type is one audio type, and different alarm clock start duration intervals correspond to different audio types. For the specific method by which the electronic device 100 determines the first target type based on the alarm clock start duration interval, refer to the related description in fig. 3D, which is not repeated here. After determining the first target type, the electronic device 100 may determine the first audio file corresponding to the first target type and play it.
In a possible implementation, the electronic device 100 may instead obtain the user's first wakefulness in the first time period after the alarm clock is started and determine the first target type from the relationship between wakefulness and audio type based on that first wakefulness, where the first target type is one audio type and different wakefulness intervals correspond to different audio types. After determining the first target type, the electronic device 100 may determine the first audio file corresponding to the first target type and play it.
Specifically, the electronic device 100 may determine, based on the user's first wakefulness, the first emotion quadrant in which the first wakefulness falls in the VA (valence-arousal) model, and then determine the first target type from the mapping relationship between emotion quadrants and audio types based on that first emotion quadrant. For details, refer to the related descriptions in fig. 3A and fig. 3B, which are not repeated here. After determining the first target type, the electronic device 100 may determine the first audio file corresponding to the first target type and play it. A hedged sketch of this quadrant lookup is given below.
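Fig. 3B's diagram apparently uses eight sectors; the numbering convention, the 45-degree sector width, and the sector-to-type table in this sketch are all assumptions for illustration, not values from the source.

```python
import math

# Assumed mapping from sector number to audio type; the real mapping is the
# one shown in fig. 3B.
SECTOR_TO_AUDIO_TYPE = {
    7: "calm",
    8: "soothing",
    1: "pleasant",
    2: "energetic",
}

def sector_of(valence: float, arousal: float) -> int:
    """Number the eight 45-degree sectors 1..8 counterclockwise from the
    positive valence axis (an assumed convention)."""
    angle = math.atan2(arousal, valence) % (2 * math.pi)
    return int(angle // (math.pi / 4)) + 1

def audio_type_for(valence: float, arousal: float) -> str:
    return SECTOR_TO_AUDIO_TYPE.get(sector_of(valence, arousal), "default")
```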
In a possible implementation, the electronic device 100 may instead determine an audio playing policy from the mapping relationship between wakefulness intervals and audio playing policies based on the user's first wakefulness, and play the audio file specified by that policy. For details, refer to the related description in fig. 3C, which is not repeated here.
In the foregoing possible implementations, the electronic device 100 may obtain the user's first wakefulness by collecting the user's first physiological data in the first time period after the alarm clock is started and determining the first wakefulness based on that data.
Alternatively, the electronic device 100 may receive the user's first physiological data sent by the electronic device 200 in the first time period after the alarm clock is started, and determine the first wakefulness based on that data.
In the implementation in which the electronic device 100 receives the first physiological data from the electronic device 200, the electronic device 100 may optionally send a first request to the electronic device 200 before receiving the data, where the first request is used to request the electronic device 200 to collect the user's first physiological data. In some embodiments, the first request may also be used to request the electronic device 200 to send the user's first physiological data to the electronic device 100.
In other embodiments, when the electronic device 200 cannot collect the user's physiological data, it receives the first request sent by the electronic device 100 and, in response, sends a first response to the electronic device 100, where the first response indicates that the electronic device 200 declines to collect the first physiological data. The electronic device 100 receives the first response and, in response, determines the first target type from the mapping relationship between alarm clock start duration intervals and audio types based on the first time period. After determining the first target type, the electronic device 100 may determine the first audio file corresponding to the first target type and play it. In this way, even when the electronic device 200 cannot collect the user's physiological data normally, the electronic device 100 can independently implement the audio playing method provided by this application. A hedged sketch of this fallback follows.
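In this sketch, the Reply type and the helper mappings are hypothetical stand-ins for the mechanisms described above and in figs. 3A-3D.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Reply:
    declined: bool                      # True models the "first response"
    data: Optional[List[float]] = None  # first physiological data, if provided

def type_from_duration(elapsed_min: float) -> str:
    return "soothing" if elapsed_min < 2.0 else "energetic"  # placeholder mapping

def type_from_wakefulness(wakefulness: float) -> str:
    return "soothing" if wakefulness < 0.5 else "energetic"  # placeholder mapping

def estimate_wakefulness(samples: List[float]) -> float:
    return min(1.0, sum(samples) / (len(samples) or 1))      # placeholder estimate

def determine_first_target_type(request_data: Callable[[], Reply],
                                elapsed_min: float) -> str:
    """Send the 'first request' (modeled by request_data); on refusal, fall
    back to the duration-interval mapping."""
    reply = request_data()
    if reply.declined:
        return type_from_duration(elapsed_min)
    return type_from_wakefulness(estimate_wakefulness(reply.data))
```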
In the foregoing embodiments, the user's physiological data includes any one or more of: skin conductance data, electrocardiogram data, blood pressure data, and blood glucose data.
S1103, the electronic device 100 determines a second audio file in a second time period after the alarm clock is started.
The electronic device 100 may determine the second audio file in the second time period after the alarm clock is started, based on the alarm clock start duration or on the user's second wakefulness. For details, refer to the related description in step S1102, which is not repeated here.
S1104, the electronic device 100 plays the first audio file and the second audio file in a second time period after the alarm clock is started.
In a second time period after the alarm clock is started, the electronic device 100 plays the first audio file and the second audio file.
In a possible implementation, the electronic device 100 may mix the first audio file and the second audio file into a third audio file and play the third audio file in the second time period after the alarm clock is started. Specifically, the electronic device 100 may superpose the audio tracks of the first audio file and the second audio file to obtain the third audio file. This makes the audio transition during playback smoother and more natural. A minimal mixing sketch is shown below.
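This sketch assumes both files have already been decoded to float sample arrays at the same sample rate; decoding itself is outside the embodiment's description.

```python
import numpy as np

def superpose_tracks(track_a: np.ndarray, track_b: np.ndarray) -> np.ndarray:
    """Superpose two decoded tracks into the 'third audio file', clipping the
    sum to the valid [-1, 1] sample range."""
    length = max(len(track_a), len(track_b))
    mixed = np.zeros(length, dtype=np.float32)
    mixed[:len(track_a)] += track_a
    mixed[:len(track_b)] += track_b
    return np.clip(mixed, -1.0, 1.0)
```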
In another possible implementation, in the second time period after the alarm clock is started, the electronic device 100 may play the first audio file at a gradually decreasing volume while playing the second audio file at a gradually increasing volume. This also makes the audio transition during playback smoother and more natural; a crossfade sketch follows.
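In this sketch of the volume-ramp variant, the linear ramp is an assumption; the embodiment only says the volumes change gradually.

```python
import numpy as np

def crossfade(track_a: np.ndarray, track_b: np.ndarray) -> np.ndarray:
    """Play track_a at gradually decreasing volume while track_b rises,
    over the overlap of the two tracks."""
    length = min(len(track_a), len(track_b))
    ramp = np.linspace(0.0, 1.0, length, dtype=np.float32)
    return track_a[:length] * (1.0 - ramp) + track_b[:length] * ramp
```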
With the audio playing method provided by this application, different types of audio files can be played during the user's waking process from deep sleep to wakefulness, guiding the user's emotional change so that the user develops positive emotions while waking up.
The embodiments of the present application can be combined arbitrarily to achieve different technical effects.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
In short, the above description is only an example of the technical solution of the present invention, and is not intended to limit the scope of the present invention. Any modifications, equivalents, improvements and the like made in accordance with the disclosure of the present invention are intended to be included within the scope of the present invention.

Claims (20)

1. An audio playing method, comprising:
the first electronic device detects the start of an alarm clock;
the first electronic device determines a first audio file in a first time period after the alarm clock is started, and plays the first audio file;
the first electronic device determines a second audio file in a second time period after the alarm clock is started, wherein the audio type of the first audio file is different from the audio type of the second audio file;
the first electronic device plays the first audio file and the second audio file in the second time period.
2. The method according to claim 1, wherein the playing, by the first electronic device, of the first audio file and the second audio file in the second time period specifically comprises:
the first electronic device mixes the first audio file and the second audio file into a third audio file;
the first electronic device plays the third audio file in the second time period.
3. The method according to claim 2, wherein the mixing, by the first electronic device, of the first audio file and the second audio file into a third audio file specifically comprises:
the first electronic device superposes the audio track of the first audio file and the audio track of the second audio file to obtain the third audio file.
4. The method according to claim 1, wherein the playing, by the first electronic device, of the first audio file and the second audio file in the second time period specifically comprises:
the first electronic device plays the first audio file at a gradually decreasing volume in the second time period while playing the second audio file at a gradually increasing volume.
5. The method according to any one of claims 1 to 4, wherein the determining, by the first electronic device, of the first audio file in a first time period after the alarm clock is started specifically comprises:
the first electronic device determines, in a first time period after the alarm clock is started, a first target type from a mapping relationship between alarm clock start duration intervals and audio types, wherein different alarm clock start duration intervals correspond to different audio types;
the first electronic device determines, based on the first target type, the first audio file corresponding to the first target type.
6. The method according to any one of claims 1 to 4, wherein the determining, by the first electronic device, of the first audio file in a first time period after the alarm clock is started specifically comprises:
the first electronic device obtains a first wakefulness of a user in a first time period after the alarm clock is started;
the first electronic device determines, based on the first wakefulness of the user, a first audio file corresponding to the first wakefulness.
7. The method according to claim 6, wherein the determining, by the first electronic device, of the first audio file corresponding to the first wakefulness based on the first wakefulness of the user specifically comprises:
the first electronic device determines, based on the first wakefulness of the user, that the first wakefulness is in a first emotion quadrant of a pleasure-arousal (VA) model;
the first electronic device determines, based on the first emotion quadrant, a first target type from a mapping relationship between emotion quadrants and audio types, wherein different emotion quadrants correspond to different audio types;
the first electronic device determines, based on the first target type, the first audio file corresponding to the first target type.
8. The method according to claim 6 or 7, wherein the obtaining, by the first electronic device, of the first wakefulness of the user in a first time period after the alarm clock is started specifically comprises:
the first electronic device collects first physiological data of the user in a first time period after the alarm clock is started;
the first electronic device determines the first wakefulness based on the first physiological data.
9. The method according to claim 6 or 7, wherein the obtaining, by the first electronic device, of the first wakefulness of the user in a first time period after the alarm clock is started specifically comprises:
the first electronic device receives, in a first time period after the alarm clock is started, first physiological data sent by a second electronic device;
the first electronic device determines the first wakefulness based on the first physiological data.
10. The method of claim 9, wherein before the first electronic device receives the first physiological data sent by the second electronic device, the method further comprises:
the first electronic device sends a first request to the second electronic device, wherein the first request is used for requesting the second electronic device to collect the first physiological data of the user.
11. The method of claim 5, wherein before the first electronic device determines the first target type from the mapping relationship between alarm clock start duration intervals and audio types based on the first time period, the method further comprises:
the first electronic device sends a first request to a second electronic device, wherein the first request is used for requesting the second electronic device to collect first physiological data of the user;
the first electronic device receives a first response sent by the second electronic device, wherein the first response indicates that the second electronic device declines to collect the first physiological data.
12. The method according to claim 1 or 2, wherein the determining, by the first electronic device, of the second audio file in a second time period after the alarm clock is started specifically comprises:
the first electronic device determines, in a second time period after the alarm clock is started, a second target type from the mapping relationship between alarm clock start duration intervals and audio types, wherein different alarm clock start duration intervals correspond to different audio types;
the first electronic device determines, based on the second target type, the second audio file corresponding to the second target type.
13. The method according to claim 1 or 3, wherein the determining, by the first electronic device, of the second audio file in a second time period after the alarm clock is started specifically comprises:
the first electronic device obtains a second wakefulness of the user in a second time period after the alarm clock is started;
the first electronic device determines, based on the second wakefulness of the user, a second audio file corresponding to the second wakefulness.
14. The method according to claim 13, wherein the determining, by the first electronic device, of the second audio file corresponding to the second wakefulness based on the second wakefulness of the user specifically comprises:
the first electronic device determines, based on the second wakefulness of the user, that the second wakefulness is in a second emotion quadrant of a pleasure-arousal (VA) model;
the first electronic device determines, based on the second emotion quadrant, a second target type from the mapping relationship between emotion quadrants and audio types, wherein different emotion quadrants correspond to different audio types;
the first electronic device determines, based on the second target type, the second audio file corresponding to the second target type.
15. The method according to claim 13 or 14, wherein the obtaining, by the first electronic device, of the second wakefulness of the user in a second time period after the alarm clock is started specifically comprises:
the first electronic device collects second physiological data of the user in a second time period after the alarm clock is started;
the first electronic device determines the second wakefulness based on the second physiological data.
16. The method according to claim 13 or 14, wherein the obtaining, by the first electronic device, of the second wakefulness of the user in a second time period after the alarm clock is started specifically comprises:
the first electronic device receives, in a second time period after the alarm clock is started, second physiological data sent by the second electronic device;
the first electronic device determines the second wakefulness based on the second physiological data.
17. The method of any of claims 8-9, 15-16, wherein the first physiological data or the second physiological data comprises one or more of: skin conductance data, electrocardiogram data, blood pressure data and blood glucose data.
18. An electronic device, being a first electronic device, comprising: one or more processors, one or more memories, and an audio playback module; the one or more memories coupled with the one or more processors for storing computer program code comprising computer instructions that, when executed by the one or more processors, cause the first electronic device to perform the method of any of claims 1-17 above.
19. A computer readable storage medium comprising computer instructions that, when executed on a first electronic device, cause the first electronic device to perform the method of any of claims 1-17.
20. A computer program product, characterized in that it causes a computer to carry out the method of any of the preceding claims 1-17 when said computer program product is run on the computer.
CN202111094119.6A 2021-09-17 2021-09-17 Audio playing method and related device Pending CN115834759A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111094119.6A CN115834759A (en) 2021-09-17 2021-09-17 Audio playing method and related device


Publications (1)

Publication Number Publication Date
CN115834759A true CN115834759A (en) 2023-03-21

Family
ID=85515900
Country Status (1)
CN (1) CN115834759A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4316273A (en) * 1980-03-17 1982-02-16 Jetter Milton W Remote-controlled alarm clock
CN101795323A (en) * 2010-02-02 2010-08-04 青岛海信移动通信技术股份有限公司 Electronic alarm operation method, electronic alarm and mobile communication terminal
CN105391870A (en) * 2015-12-14 2016-03-09 广州酷狗计算机科技有限公司 Timing reminding method and device
CN112995408A (en) * 2021-03-29 2021-06-18 厦门理工学院 Alarm clock music playing method, device and equipment and readable storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination