CN111078181A - Method and system for playing intelligent music, storage medium and terminal equipment - Google Patents

Method and system for playing intelligent music, storage medium and terminal equipment

Info

Publication number
CN111078181A
Authority
CN
China
Prior art keywords
user
terminal
music
playing
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911265212.1A
Other languages
Chinese (zh)
Inventor
龚爱民
韦耀庭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou TCL Mobile Communication Co Ltd
Original Assignee
Huizhou TCL Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou TCL Mobile Communication Co Ltd filed Critical Huizhou TCL Mobile Communication Co Ltd
Priority to CN201911265212.1A priority Critical patent/CN111078181A/en
Publication of CN111078181A publication Critical patent/CN111078181A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Abstract

Embodiments of the present application provide an intelligent music playing method and system, a storage medium, and a terminal device. A first terminal acquires the user data collected by a second terminal, judges the state of the user, generates a corresponding music list in combination with the preference data recorded on the first terminal, and preferentially plays the music in the preference data, so that different music is played according to the different states of the user. Manual operation is thereby reduced, and music playing becomes more intelligent.

Description

Method and system for playing intelligent music, storage medium and terminal equipment
Technical Field
The present application belongs to the field of terminals, and particularly relates to a method and system for intelligent music playing, a storage medium, and a terminal device.
Background
With the rise of the internet and the popularization of mobile multimedia electronic devices, listening to music on electronic devices such as mobile phones has become commonplace. Digital music has developed rapidly in China over the past decade. Statistics show that the Chinese mobile music market reached 3.12 billion yuan in 2013, an increase of 14.6% over 2012, and was predicted to reach 3.52 billion yuan in 2014, a further increase of 13.1%. With such rapid development of digital music, people come into contact with more and more music. Research shows that music creation is, to a great extent, an expression of human emotion, and that appreciating music promotes emotional release. Listening to different songs in different states also improves efficiency: playing songs with a strong sense of rhythm during exercise makes people more energetic; playing light, relaxing music before sleep or during rest helps people relax and enter a deep resting state more easily; and playing gentle music while working keeps the work rhythm steady and avoids the build-up of excessive pressure. In practice, however, existing music playing systems are not intelligent enough: the play list must be added to or modified manually, songs must be controlled manually (pause, play, switch to the next track), and disliked songs in the list must be deleted by hand. These constant, unpredictable manual interventions greatly limit the extent to which listening to songs can improve efficiency.
Therefore, a method and system for intelligent music playing are needed.
Disclosure of Invention
The embodiment of the application provides an intelligent music playing method and system, a storage medium and terminal equipment.
According to a first aspect of the present application, there is provided a method of intelligent music playing, comprising: a first terminal starts an intelligent music function and establishes a connection with a second terminal; the first terminal acquires user data collected by the second terminal; the first terminal judges the state of the user according to the user data; the first terminal generates a corresponding first music list according to the state of the user and first data, where the first data include the number of times the user has searched for and played the corresponding music tracks on the first terminal; and the first terminal plays the first music list in a loop.
Further, the first data include preference data, and the preference data include music tracks whose search count or play count exceeds a first threshold.
Further, the user data includes heartbeat information, motion state, motion rhythm, blood oxygen content and sleep state of the user.
Further, after the step of the first terminal playing the first music list in a loop and preferentially playing the music in the preference data, the method further comprises: generating and playing a second music list when the first terminal detects that the state of the user has changed; and stopping playing the first music list or the second music list when the first terminal detects that the user is in a sleep state.
According to a second aspect of the present application, there is provided a system for intelligent music playing, the system comprising a first terminal and a second terminal. The second terminal includes: an acquisition module for collecting user data; and a sending module, connected to the acquisition module, for sending the user data. The first terminal includes: a starting module for starting the intelligent music function and establishing a connection with the second terminal; an obtaining module, connected to the starting module, for obtaining the user data; a judging module, connected to the obtaining module, for judging the state of the user according to the user data; a generating module, connected to the judging module, for generating a corresponding first music list according to the state of the user and first data, where the first data include the number of times the user has searched for and played the corresponding music tracks on the first terminal; and a playing module, connected to the generating module, for playing the first music list in a loop.
Further, the user data includes heartbeat information, motion state, motion rhythm, blood oxygen content and sleep state of the user.
Further, the first data include preference data, and the preference data include music tracks whose search count or play count exceeds a first threshold.
Further, the playing module is further configured to generate and play a second music list when the first terminal detects that the state of the user has changed, and to stop playing the first music list or the second music list when the first terminal detects that the user is in a sleep state.
According to a third aspect of the present application, there is provided a storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the above-described method of smart music playing.
According to a fourth aspect of the present application, the present application provides a terminal device, comprising a processor and a memory, wherein the processor is electrically connected to the memory, the memory is used for storing instructions and data, and the processor is used for executing the steps in the above-mentioned intelligent music playing method.
According to embodiments of the present application, the first terminal acquires the user data collected by the second terminal, judges the state of the user, generates a corresponding music list in combination with the preference data recorded on the first terminal, and preferentially plays the music in the preference data, so that different music is played according to the different states of the user; manual operation is thereby reduced and music playing becomes more intelligent.
Drawings
The technical solutions and advantages of the present application will become apparent from the following detailed description of specific embodiments of the present application when taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic flowchart illustrating steps of a method for playing smart music according to an embodiment of the present application.
Fig. 2 is a diagram illustrating a correspondence between an emotional state of a user and a heartbeat frequency according to an embodiment of the present application.
Fig. 3 is a schematic flowchart illustrating steps of another method for playing smart music according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a system for intelligent music playing according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of the first terminal according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of the second terminal according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and the like in the description and in the claims of the present application and in the above-described drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the objects so described are interchangeable under appropriate circumstances. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions.
In particular embodiments, the drawings discussed below and the various embodiments used to describe the principles of the present disclosure are by way of illustration only and should not be construed to limit the scope of the present disclosure. Those skilled in the art will understand that the principles of the present application may be implemented in any suitably arranged system. Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Further, a terminal according to an exemplary embodiment will be described in detail with reference to the accompanying drawings. Like reference symbols in the various drawings indicate like elements.
The terminology used in the detailed description is for the purpose of describing particular embodiments only and is not intended to be limiting of the concepts of the present application. Unless the context clearly dictates otherwise, expressions used in the singular form encompass expressions in the plural form. In the present specification, it will be understood that terms such as "including," "having," and "containing" are intended to specify the presence of the features, integers, steps, acts, or combinations thereof disclosed in the specification, and are not intended to preclude the presence or addition of one or more other features, integers, steps, acts, or combinations thereof. Like reference symbols in the various drawings indicate like elements.
As illustrated in fig. 1, the present application provides a method of intelligent music playing, which includes the following steps.
Step S10, the first terminal starts the intelligent music function and establishes a connection with the second terminal.
In the embodiment of the present application, the first terminal may include, but is not limited to, a mobile phone, a tablet, a computer, and the like. The first terminal supports a music playing function, a networking function, a Bluetooth function, an infrared function and the like.
The second terminal may include, but is not limited to, a smart bracelet, a smart watch, and the like. The second terminal supports the Bluetooth function, the function of collecting user state information, and the like.
After the first terminal starts the intelligent music function, the second terminal is turned on and a connection with the second terminal is established. During connection establishment, when the first terminal prompts for confirmation of the relevant function items, all supported function items are accepted. In addition, the connection between the first terminal and the second terminal may be a Bluetooth connection, a wireless connection, or an infrared connection.
Step S20, the first terminal obtains the user data collected by the second terminal.
In the present embodiment, the user data may include, but is not limited to, heartbeat information, motion status, motion rhythm, blood oxygen content, and sleep status of the user. The second terminal comprises an acceleration sensor, a gyroscope, a heart rate sensor, a red light LED, an infrared light LED, a body movement recorder and other sensors, so that data of the heartbeat, the movement state, the movement rhythm, the blood oxygen content, the sleep state and the like of the user can be acquired.
Step S30, the first terminal judges the state of the user according to the user data.
In the embodiment of the application, the first terminal acquires the position information and the use time of the user and combines the user data acquired by the second terminal to judge the state of the user. The user's state includes an emotional state, a sleep state, an exercise state, a work state, a reading state, and the like.
Specifically, the emotional state of the user includes an elevated mood, a low mood, a negative mood, and a normal mood, and is determined from the user's brain waves and heartbeat. The correspondence between the user's brain waves and the user's states is shown in Table 1 below.
TABLE 1 Correspondence between brain waves and user states

Brain wave | Frequency (Hz) | Amplitude (μV) | User status
δ          | 1~3            | 20~200         | Extreme fatigue, lethargy, or anesthesia
θ          | 4~7            | 5~20           | Frustration or depression
α          | 8~13           | 20~100         | Clear and calm
β          | 14~30          | 100~150        | Mental stress, emotional agitation, or excitement
The correspondence between the user's emotional state and heartbeat frequency is shown in Fig. 2. The emotional state is divided into several levels according to the heartbeat frequency; Fig. 2 shows three levels, namely a low and negative mood, a normal mood, and an excited mood, each corresponding to a different range of heartbeat frequencies. When the user's heartbeat frequency is 60~100 beats per minute, the mood is normal; when it exceeds 100 beats per minute, the mood is excited.
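As an illustration of this threshold-based judgment, the following sketch maps a heartbeat frequency to the three mood levels of Fig. 2 and a dominant EEG frequency to the states of Table 1. It is a minimal sketch assuming the ranges given above; the function names, and the treatment of frequencies below 60 beats per minute as a low or negative mood, are assumptions rather than part of the claimed method.

```python
# Illustrative sketch: classify the emotional state from the heartbeat
# frequency of Fig. 2 and the EEG bands of Table 1. Thresholds follow the
# description above; the function names are hypothetical.

def mood_from_heart_rate(bpm: float) -> str:
    """Map heartbeat frequency (beats/minute) to one of the three mood levels."""
    if bpm < 60:
        return "low or negative mood"   # assumption: below the normal range
    if bpm <= 100:
        return "normal mood"            # 60~100 beats/minute
    return "excited mood"               # above 100 beats/minute

def state_from_eeg(frequency_hz: float) -> str:
    """Map a dominant EEG frequency to the user states of Table 1."""
    if 1 <= frequency_hz <= 3:          # delta band
        return "extreme fatigue, lethargy or anesthesia"
    if 4 <= frequency_hz <= 7:          # theta band
        return "frustration or depression"
    if 8 <= frequency_hz <= 13:         # alpha band
        return "clear and calm"
    if 14 <= frequency_hz <= 30:        # beta band
        return "mental stress, emotional agitation or excitement"
    return "unknown"

print(mood_from_heart_rate(72))   # -> normal mood
print(state_from_eeg(10))         # -> clear and calm
```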
The sleep state is divided into three states: deep sleep, light sleep, and awake, where deep sleep and light sleep are both sleeping states. When the body-motion recorder detects that the user's position is not moving, and the electroencephalogram, electrocardiogram, and body-surface-temperature data obtained by the heart rate sensor, the red LED, and the infrared LED further show that the EEG and ECG readings are stable and the body-surface temperature is lower than usual, the combined information indicates that the user is in a deep sleep or light sleep state; otherwise, the user is awake.
The motion state is judged from the step count detected by the acceleration sensor, the change in the user's position obtained by GPS positioning, and the change in heartbeat. If the user's ECG and EEG data are detected to be active, for example a current heart rate of 160~180 beats per minute and an EEG of 14~30 Hz, the combined data indicate that the user is currently in an intense exercise state.
The reading state is determined by detecting that the user's heartbeat is relatively calm (for example, a heartbeat frequency of 60~100 beats per minute), that the position changes little over a period of time, and that the mobile phone apps being operated are related to reading.
The working state is distinguished from the reading state in that the user's heartbeat is somewhat faster (for example, a heartbeat frequency of 80~110 beats per minute), the position moves slightly, and the mobile phone is operated less.
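A minimal sketch of how these activity states might be distinguished from the collected user data is given below. The data fields, the movement thresholds, and the way phone usage is represented are illustrative assumptions; only the heart-rate and EEG ranges follow the description above.

```python
# Illustrative sketch: distinguish the sleep, exercise, reading and working
# states from the collected user data. Field names and movement thresholds
# are hypothetical; the heart-rate and EEG ranges follow the text above.

from dataclasses import dataclass

@dataclass
class UserData:
    heart_rate_bpm: float        # from the heart rate sensor
    eeg_hz: float                # dominant EEG frequency
    position_moved_m: float      # position change over the observation window
    body_temp_below_usual: bool  # body-surface temperature lower than usual
    phone_app: str               # app currently operated, "" if none

def judge_state(d: UserData) -> str:
    # Sleep: no position movement and lower body-surface temperature
    # (the EEG/ECG stability check is omitted here for brevity).
    if d.position_moved_m == 0 and d.body_temp_below_usual:
        return "sleep"
    # Exercise: very active heart and EEG data (e.g. 160~180 bpm, 14~30 Hz).
    if 160 <= d.heart_rate_bpm <= 180 and 14 <= d.eeg_hz <= 30:
        return "exercise"
    # Reading: calm heartbeat (60~100 bpm), little movement, reading-related app.
    if 60 <= d.heart_rate_bpm <= 100 and d.position_moved_m < 5 and d.phone_app == "reader":
        return "reading"
    # Working: slightly faster heartbeat (80~110 bpm), slight movement, little phone use.
    if 80 <= d.heart_rate_bpm <= 110 and d.position_moved_m < 20 and d.phone_app == "":
        return "working"
    return "normal"

print(judge_state(UserData(95, 10, 2, False, "reader")))  # -> reading
```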
Step S40, the first terminal generates a corresponding first music list according to the state of the user and first data, where the first data includes the number of times the user searches for and plays the corresponding music track at the first terminal.
In an embodiment of the present application, the first data include preference data, and the preference data include music tracks whose search count or play count exceeds a first threshold. The first threshold may be 5, for example, in which case music tracks searched for or played more than 5 times are recorded in the preference data. The preference data may include single music tracks that the user has actively played more than the first threshold number of times, songs of a particular style that have been played more than the first threshold number of times, or new songs that have been played more than the first threshold number of times.
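A minimal sketch of how such preference data could be accumulated is shown below, assuming the example threshold of 5; the counter structure and function names are hypothetical and only illustrate the counting rule described above.

```python
# Illustrative sketch: accumulate preference data from search and play counts.
# A track enters the preference data once its search count or play count
# exceeds the first threshold (here the example value of 5 from the text).

from collections import defaultdict

FIRST_THRESHOLD = 5

search_counts = defaultdict(int)
play_counts = defaultdict(int)

def record_search(track: str) -> None:
    search_counts[track] += 1

def record_play(track: str) -> None:
    play_counts[track] += 1

def preference_data() -> set:
    """Tracks whose search count or play count exceeds the first threshold."""
    tracks = set(search_counts) | set(play_counts)
    return {t for t in tracks
            if search_counts[t] > FIRST_THRESHOLD or play_counts[t] > FIRST_THRESHOLD}

for _ in range(6):
    record_play("Track A")
print(preference_data())  # -> {'Track A'}
```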
Specifically, corresponding search keywords are generated according to the user state. For example, when the user state is an exercise state (running and the like), "running", "dynamic" and "music" are used as keywords to search for dynamic, fast-tempo, inspiring songs. When the user has just woken up, "refreshing", "cheerful" and "music" are used as keywords to search for light, cheerful songs. When the user state is a sleep state, keywords such as "hypnotic music" are used to search for hypnotic music. The search is performed in the local music library of the mobile phone and in a networked music library, songs of the corresponding type are found, and the first music list is generated.
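The flow from user state to the first music list can be sketched as follows. The keyword table mirrors the examples above, while the two library-search helpers and all names are placeholders standing in for the local and networked searches, not an actual music-library API.

```python
# Illustrative sketch: turn the user state into search keywords and build the
# first music list from a local and a networked library. The keyword table
# mirrors the examples above; the two search helpers are placeholders.

STATE_KEYWORDS = {
    "exercise":   ["running", "dynamic", "music"],     # fast-tempo, inspiring songs
    "just_woken": ["refreshing", "cheerful", "music"],  # light, cheerful songs
    "sleep":      ["hypnotic music"],                   # soothing, hypnotic music
}

def search_local_library(keywords):
    # Placeholder for a query against the phone's local music library.
    return ["local song for " + " ".join(keywords)]

def search_online_library(keywords):
    # Placeholder for a query against a networked music library.
    return ["online song for " + " ".join(keywords)]

def generate_first_music_list(user_state: str) -> list:
    keywords = STATE_KEYWORDS.get(user_state, ["music"])
    return search_local_library(keywords) + search_online_library(keywords)

print(generate_first_music_list("exercise"))
```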
Step S50, the first terminal plays the first music list in a loop.
In the embodiment of the present application, the music in the preference data is played preferentially within the first music list. For example, if the first music list contains a music track that the user has actively played many times, that track is played first; if it contains songs of a style that the user has actively played many times, tracks of that style are played first; and if it contains new songs and the user has actively played new songs many times, i.e., the user has a strong willingness to try new songs, the new tracks are played first.
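One way to realize "preferentially playing the music in the preference data" is to order the generated list so that preferred tracks come before the others before looping over it, as in the sketch below; the ordering rule and names are illustrative assumptions rather than the claimed implementation.

```python
# Illustrative sketch: loop over the first music list with the tracks in the
# preference data placed first. The ordering rule is an assumed realization
# of "preferentially playing", not the claimed one.

from itertools import cycle, islice

def ordered_for_playback(music_list, preferred_tracks):
    """Preferred tracks first, remaining tracks after, original order kept."""
    preferred = [t for t in music_list if t in preferred_tracks]
    others = [t for t in music_list if t not in preferred_tracks]
    return preferred + others

first_list = ["Song A", "Song B", "Song C", "Song D"]
preference = {"Song C"}

playback_order = ordered_for_playback(first_list, preference)
# Loop playback: show the first 6 plays as an example.
print(list(islice(cycle(playback_order), 6)))
# -> ['Song C', 'Song A', 'Song B', 'Song D', 'Song C', 'Song A']
```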
According to embodiments of the present application, the first terminal acquires the user data collected by the second terminal, judges the state of the user, generates a corresponding music list in combination with the preference data recorded on the first terminal, and preferentially plays the music in the preference data, so that manual operation is reduced and music playing becomes more intelligent.
As illustrated in fig. 3, the present application provides another method of intelligent music playing, which includes the following steps.
Step S10, the first terminal starts the intelligent music function and establishes a connection with the second terminal.
In the embodiment of the present application, the first terminal may include, but is not limited to, a mobile phone, a tablet, a computer, and the like. The first terminal supports a music playing function, a networking function, a Bluetooth function, an infrared function and the like.
The second terminal may include, but is not limited to, a smart bracelet, a smart watch, and the like. The second terminal supports the Bluetooth function, the function of collecting user state information, and the like.
After the first terminal starts the intelligent music function, the second terminal is turned on and a connection with the second terminal is established. During connection establishment, when the first terminal prompts for confirmation of the relevant function items, all supported function items are accepted. In addition, the connection between the first terminal and the second terminal may be a Bluetooth connection, a wireless connection, or an infrared connection.
Step S20, the first terminal obtains the user data collected by the second terminal.
In the present embodiment, the user data may include, but is not limited to, heartbeat information, motion status, motion rhythm, blood oxygen content, and sleep status of the user. The second terminal comprises an acceleration sensor, a gyroscope, a heart rate sensor, a red light LED, an infrared light LED, a body movement recorder and other sensors, so that user data of the heartbeat, the movement state, the movement rhythm, the blood oxygen content, the sleep state and the like of a person can be acquired.
Step S30, the first terminal judges the state of the user according to the user data.
In the embodiment of the present application, the first terminal acquires the position information and usage time of the user and combines them with the user data collected by the second terminal to judge the state of the user. The user's state includes states such as happy, sad, calm, having just woken up, about to fall asleep, exercising, working, and reading.
Step S40, the first terminal generates a corresponding first music list according to the state of the user and first data, where the first data includes the number of times the user searches for and plays the corresponding music track at the first terminal.
In an embodiment of the present application, the first data include preference data, and the preference data include music tracks whose search count or play count exceeds a first threshold. The first threshold may be 5, for example, in which case music tracks searched for or played more than 5 times are recorded in the preference data. The preference data may include single music tracks that the user has actively played more than the first threshold number of times, songs of a particular style that have been played more than the first threshold number of times, or new songs that have been played more than the first threshold number of times.
Specifically, corresponding search keywords are generated according to the user state. For example, when the user state is an exercise state (running and the like), "running", "dynamic" and "music" are used as keywords to search for dynamic, fast-tempo, inspiring songs. When the user has just woken up, "refreshing", "cheerful" and "music" are used as keywords to search for light, cheerful songs. When the user state is a sleep state, keywords such as "hypnotic music" are used to search for hypnotic music. The search is performed in the local music library of the mobile phone and in a networked music library, songs of the corresponding type are found, and the first music list is generated.
Step S50, the first terminal plays the first music list in a loop.
In the embodiment of the present application, the music in the preference data is played preferentially within the first music list. For example, if the first music list contains a music track that the user has actively played many times, that track is played first; if it contains songs of a style that the user has actively played many times, tracks of that style are played first; and if it contains new songs and the user has actively played new songs many times, i.e., the user has a strong willingness to try new songs, the new tracks are played first.
Step S60, when the first terminal detects that the state of the user changes, a second music list is generated and played.
In the embodiment of the present application, the music list changes as the state of the user changes and stays close to the user's needs; through deep learning and self-improvement, it can further bring a better experience to the user.
Step S70, when the first terminal detects that the state of the user is a sleep state, the playing of the first music list or the second music list is stopped.
In the embodiment of the present application, when the user is detected to be in a sleep state, playback is stopped. This provides a comfortable sleep environment, improves the user's sleep quality, and also extends the battery life of the first terminal.
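The behaviour of steps S60 and S70, regenerating the list when the state changes and stopping playback when sleep is detected, can be sketched as a simple monitoring loop. The polling structure, the injected helpers, and the player object below are assumptions made for illustration; the description does not prescribe how the state is monitored.

```python
# Illustrative sketch of steps S60/S70: regenerate the music list when the
# user's state changes and stop playback when a sleep state is detected.
# get_user_state, generate_music_list and the player object are injected
# placeholders; the polling interval is an assumption.

import time

def monitor_and_play(get_user_state, generate_music_list, player, poll_seconds=30):
    current_state = get_user_state()
    player.play(generate_music_list(current_state))        # play the first music list
    while True:
        time.sleep(poll_seconds)
        new_state = get_user_state()
        if new_state == "sleep":
            player.stop()                                   # step S70: stop on sleep
            break
        if new_state != current_state:
            current_state = new_state
            player.play(generate_music_list(new_state))     # step S60: second music list
```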
According to embodiments of the present application, the first terminal acquires the user data collected by the second terminal, judges the state of the user, generates a corresponding music list in combination with the preference data recorded on the first terminal, and preferentially plays the music in the preference data, so that manual operation is reduced and music playing becomes more intelligent.
As shown in fig. 4, an embodiment of the present application provides a system for intelligent music playing, including: a first terminal 1 and a second terminal 2.
As shown in fig. 5, the first terminal provided in the embodiment of the present application includes an opening module 11, an obtaining module 12, a judging module 13, a generating module 14, and a playing module 15.
As shown in fig. 6, the second terminal provided in the embodiment of the present application includes an acquisition module 21 and a sending module 22.
The first terminal may include, but is not limited to, a mobile phone, a tablet, a computer, and the like. The first terminal supports a music playing function, a networking function, a Bluetooth function, an infrared function and the like.
The second terminal may include, but is not limited to, a smart bracelet, a smart watch, and the like. The second terminal supports the Bluetooth function, the function of collecting user state information, and the like.
The connection mode of the first terminal 1 and the second terminal 2 includes: bluetooth connection, wireless connection and infrared connection.
The acquisition module 21 is used for acquiring user data. The acquisition module 21 includes sensors such as an acceleration sensor, a gyroscope, a heart rate sensor, a red light LED, an infrared light LED, and a body motion recorder, so as to acquire data such as heartbeat, a motion state, a motion rhythm, blood oxygen content, and a sleep state of the user.
The sending module 22 is configured to send the user data and is connected to the acquisition module 21.
Specifically, after the first terminal 1 starts the intelligent music function, the second terminal 2 is turned on and a connection with the second terminal 2 is established. During connection establishment, when the first terminal prompts for confirmation of the relevant function items, all supported function items are accepted. After the connection is successful, the sending module 22 sends the collected user data to the obtaining module 12 in the first terminal 1.
The opening module 11 is used for opening the intelligent music function and establishing connection with the second terminal 2.
The obtaining module 12 is connected with the opening module 11 and is used for obtaining the user data.
In this embodiment of the application, after the first terminal 1 and the second terminal 2 are successfully connected, the obtaining module 12 starts to obtain the user data collected by the sending module 22. The user data may include, but is not limited to, heartbeat information, motion state, motion rhythm, blood oxygen content, and sleep state of the user.
The judging module 13 is connected with the obtaining module 12. The judging module 13 is configured to judge the state of the user according to the user data.
In the embodiment of the present application, the first terminal 1 acquires the position information and usage time of the user and combines them with the user data collected by the second terminal 2 to judge the state of the user. The user's state includes states such as happy, sad, calm, having just woken up, about to fall asleep, exercising, working, and reading.
The generating module 14 is connected to the judging module 13. The generating module 14 is configured to generate a corresponding first music list according to the state of the user and first data, where the first data includes the number of times that the user searches for and plays the corresponding music track at the first terminal.
In an embodiment of the present application, the first data include preference data, and the preference data include music tracks whose search count or play count exceeds a first threshold. The first threshold may be 5, for example, in which case music tracks searched for or played more than 5 times are recorded in the preference data. The preference data may include single music tracks that the user has actively played more than the first threshold number of times, songs of a particular style that have been played more than the first threshold number of times, or new songs that have been played more than the first threshold number of times.
Specifically, corresponding search keywords are generated according to the user state. For example, when the user state is an exercise state (running and the like), "running", "dynamic" and "music" are used as keywords to search for dynamic, fast-tempo, inspiring songs. When the user has just woken up, "refreshing", "cheerful" and "music" are used as keywords to search for light, cheerful songs. When the user state is a sleep state, keywords such as "hypnotic music" are used to search for hypnotic music. The search is performed in the local music library of the mobile phone and in a networked music library, songs of the corresponding type are found, and the first music list is generated.
The playing module 15 is connected with the generating module 14. The playing module 15 is configured to play the first music list in a loop.
In the embodiment of the present application, the music in the preference data is played preferentially within the first music list. For example, if the first music list contains a music track that the user has actively played many times, that track is played first; if it contains songs of a style that the user has actively played many times, tracks of that style are played first; and if it contains new songs and the user has actively played new songs many times, i.e., the user has a strong willingness to try new songs, the new tracks are played first.
The playing module 15 is further configured to generate and play a second music list when the first terminal 1 detects that the state of the user has changed, and to stop playing the first music list or the second music list when the first terminal 1 detects that the user is in a sleep state.
In the embodiment of the present application, the music list changes as the state of the user changes and stays close to the user's needs; through deep learning and self-improvement, it can further bring a better experience to the user. When the user is detected to be in a sleep state, playback is stopped, which provides a comfortable sleep environment, improves the user's sleep quality, and also extends the battery life of the first terminal.
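The module structure of Figs. 4-6 can be pictured with the small sketch below; the class and method names are hypothetical and merely mirror the module names used in this description, with the sensor data and decision rules reduced to simple stand-ins.

```python
# Illustrative sketch of the system of Figs. 4-6: the second terminal collects
# and sends user data; the first terminal obtains it, judges the state,
# generates the first music list and plays it in a loop. All names are
# hypothetical and the decision rules are reduced to stand-ins.

class SecondTerminal:
    def collect(self):                            # acquisition module 21
        return {"heart_rate_bpm": 120}

    def send(self, first_terminal):               # sending module 22
        first_terminal.receive(self.collect())

class FirstTerminal:
    def __init__(self):
        self.user_data = None

    def open_smart_music(self, second_terminal):  # opening module 11
        second_terminal.send(self)                # stands in for pairing and data transfer

    def receive(self, data):                      # obtaining module 12
        self.user_data = data

    def judge_state(self):                        # judging module 13
        return "exercise" if self.user_data["heart_rate_bpm"] > 100 else "calm"

    def generate_list(self, state):               # generating module 14
        return ["dynamic track"] if state == "exercise" else ["soft track"]

    def play(self, music_list):                   # playing module 15
        print("playing in a loop:", music_list)

first, second = FirstTerminal(), SecondTerminal()
first.open_smart_music(second)
first.play(first.generate_list(first.judge_state()))
```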
According to embodiments of the present application, the first terminal acquires the user data collected by the second terminal, judges the state of the user, generates a corresponding music list in combination with the preference data recorded on the first terminal, and preferentially plays the music in the preference data, so that different music is played according to the different states of the user; manual operation is thereby reduced and music playing becomes more intelligent.
Referring to fig. 7, an embodiment of the present application further provides a terminal device 200, where the terminal device 200 may be a mobile phone, a tablet, a computer, and other devices. As shown in fig. 7, the terminal device 200 includes a processor 201 and a memory 202. The processor 201 is electrically connected to the memory 202.
The processor 201 is a control center of the terminal device 200, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or loading an application program stored in the memory 202 and calling data stored in the memory 202, thereby performing overall monitoring of the terminal device.
In this embodiment, the terminal device 200 is provided with a plurality of memory partitions, the plurality of memory partitions includes a system partition and a target partition, the processor 201 in the terminal device 200 loads instructions corresponding to processes of one or more application programs into the memory 202 according to the following steps, and the processor 201 runs the application programs stored in the memory 202, so as to implement various functions:
the first terminal starts an intelligent music function and establishes connection with the second terminal;
the first terminal acquires user data acquired by the second terminal;
the first terminal judges the state of the user according to the user data;
the first terminal generates a corresponding first music list according to the state of the user and first data, wherein the first data comprise the searching times and playing times of the user on the corresponding music track of the first terminal; and
and the first terminal plays the first music list in a loop.
Fig. 8 shows a specific block diagram of a terminal device 300 provided in an embodiment of the present application, where the terminal device 300 may be used to implement the method for playing smart music provided in the foregoing embodiment. The terminal device 300 may be a mobile phone or a tablet.
The RF circuit 310 is used for receiving and transmitting electromagnetic waves and performing conversion between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF circuit 310 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuit 310 may communicate with various networks such as the internet, an intranet, or a wireless network, or with other devices over a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), other protocols for e-mail, instant messaging and short messages, as well as any other suitable communication protocol, including protocols not yet developed at the present time.
The memory 320 may be used to store software programs and modules, such as program instructions/modules corresponding to the method for intelligent music playing in the above-mentioned embodiments, and the processor 380 executes various functional applications and data processing by running the software programs and modules stored in the memory 320, so as to implement the functions of intelligent music playing. The memory 320 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 320 may further include memory located remotely from processor 380, which may be connected to terminal device 300 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input unit 330 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 330 may include a touch-sensitive surface 331 as well as other input devices 332. The touch-sensitive surface 331, also referred to as a touch screen or touch pad, may collect touch operations by a user on or near the touch-sensitive surface 331 (e.g., operations by a user on or near the touch-sensitive surface 331 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 331 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 380, and can receive and execute commands sent by the processor 380. In addition, the touch-sensitive surface 331 may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 330 may comprise other input devices 332 in addition to the touch sensitive surface 331. In particular, other input devices 332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 340 may be used to display information input by or provided to the user and various graphic user interfaces of the terminal apparatus 300, which may be configured by graphics, text, icons, video, and any combination thereof. The Display unit 340 may include a Display panel 341, and optionally, the Display panel 341 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, touch-sensitive surface 331 may overlay display panel 341, and when touch-sensitive surface 331 detects a touch operation thereon or thereabout, communicate to processor 380 to determine the type of touch event, and processor 380 then provides a corresponding visual output on display panel 341 in accordance with the type of touch event. Although in FIG. 8, touch-sensitive surface 331 and display panel 341 are implemented as two separate components for input and output functions, in some embodiments, touch-sensitive surface 331 and display panel 341 may be integrated for input and output functions.
The terminal device 300 may also include at least one sensor 350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 341 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 341 and/or the backlight when the terminal device 300 is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal device 300, detailed descriptions thereof are omitted.
The audio circuit 360, the speaker 361, and the microphone 362 may provide an audio interface between the user and the terminal device 300. The audio circuit 360 may transmit the electrical signal converted from the received audio data to the speaker 361, where it is converted into a sound signal and output; on the other hand, the microphone 362 converts a collected sound signal into an electrical signal, which is received by the audio circuit 360 and converted into audio data; the audio data is then output to the processor 380 for processing and transmitted to, for example, another terminal via the RF circuit 310, or output to the memory 320 for further processing. The audio circuit 360 may also include an earphone jack to provide communication between peripheral earphones and the terminal device 300.
The terminal device 300 may assist the user in e-mail, web browsing, streaming media access, etc. through the transmission module 370 (e.g., a Wi-Fi module), which provides the user with wireless broadband internet access. Although fig. 8 shows the transmission module 370, it is understood that it does not belong to the essential constitution of the terminal device 300, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 380 is a control center of the terminal device 300, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the terminal device 300 and processes data by running or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory 320, thereby performing overall monitoring of the mobile phone. Optionally, processor 380 may include one or more processing cores; in some embodiments, processor 380 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 380.
Terminal device 300 also includes a power supply 390 (e.g., a battery) for powering the various components, which may be logically coupled to processor 380 via a power management system in some embodiments to manage charging, discharging, and power consumption management functions via the power management system. The power supply 390 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the terminal device 300 may further include a camera (e.g., a front camera, a rear camera), a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the display unit of the terminal device is a touch screen display, the terminal device further includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for:
the first terminal starts an intelligent music function and establishes connection with the second terminal;
the first terminal acquires user data acquired by the second terminal;
the first terminal judges the state of the user according to the user data;
the first terminal generates a corresponding first music list according to the state of the user and first data, wherein the first data comprise the searching times and playing times of the user on the corresponding music track of the first terminal; and
and the first terminal plays the first music list in a loop.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by instructions controlling associated hardware, and the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor. To this end, the present application provides a storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the methods for intelligent music playing provided in the present application.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any of the methods for playing intelligent music provided in the embodiments of the present application, the beneficial effects that can be achieved by any of the methods for playing intelligent music provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described again here. The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
According to embodiments of the present application, the first terminal acquires the user data collected by the second terminal, judges the state of the user, generates a corresponding music list in combination with the preference data recorded on the first terminal, and preferentially plays the music in the preference data, so that different music is played according to the different states of the user; manual operation is thereby reduced and music playing becomes more intelligent.
The method and apparatus for playing intelligent music, the storage medium, and the terminal device provided in the embodiments of the present application are introduced in detail, and a specific example is applied in the present application to explain the principle and the implementation of the present application, and the description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method for intelligent music playing, comprising:
a first terminal starts an intelligent music function and establishes a connection with a second terminal;
the first terminal acquires user data acquired by the second terminal;
the first terminal judges the state of the user according to the user data;
the first terminal generates a corresponding first music list according to the state of the user and first data, wherein the first data comprise the searching times and playing times of the user on the corresponding music track of the first terminal; and
and the first terminal plays the first music list in a loop.
2. The method of claim 1, wherein the first data comprises preference data, and wherein the preference data comprises music tracks for which the number of searches or plays for the corresponding music track is greater than a first threshold.
3. The method of claim 1, wherein the user data includes heartbeat information, motion status, motion cadence, blood oxygen content, and sleep status of the user.
4. The method according to claim 1, further comprising, after the steps of the first terminal playing the first music list in a loop and preferentially playing the music in the preference data, the steps of:
when the first terminal detects that the state of the user changes, a second music list is generated and played; and
and when the first terminal detects that the state of the user is in a sleep state, stopping playing the first music list or the second music list.
5. A system for intelligent music playing, the system comprising: a first terminal and a second terminal;
wherein the second terminal includes:
the acquisition module is used for acquiring user data; and
the sending module is used for sending the user data and is connected with the acquisition module;
the first terminal includes:
the starting module is used for starting the intelligent music function and establishing connection with the second terminal;
the obtaining module is connected with the starting module and is used for obtaining the user data;
the judging module is connected with the acquiring module and used for judging the state of the user according to the user data;
the generating module is connected with the judging module and used for generating a corresponding first music list according to the state of the user and first data, wherein the first data comprises the searching times and the playing times of the user on the corresponding music track at the first terminal; and
and the playing module is connected with the generating module and is used for playing the first music list in a loop.
6. The system of claim 5, wherein the user data includes heartbeat information, motion status, motion cadence, blood oxygen content, and sleep status of the user.
7. The system of claim 5, wherein the first data comprises preference data, the preference data comprising music tracks for which the number of searches or plays for the corresponding music track is greater than a first threshold.
8. The system according to claim 5, wherein the playing module is further configured to generate and play a second music list when the first terminal detects that the state of the user changes; and when the first terminal detects that the state of the user is in a sleep state, stopping playing the first music list or the second music list.
9. A storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the method of smart music playing of any of claims 1 to 4.
10. A terminal device, comprising a processor and a memory, wherein the processor is electrically connected to the memory, and the memory is used for storing instructions and data, and the processor is used for executing the steps of the method for intelligent music playing according to any one of claims 1 to 4.
CN201911265212.1A 2019-12-11 2019-12-11 Method and system for playing intelligent music, storage medium and terminal equipment Pending CN111078181A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911265212.1A CN111078181A (en) 2019-12-11 2019-12-11 Method and system for playing intelligent music, storage medium and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911265212.1A CN111078181A (en) 2019-12-11 2019-12-11 Method and system for playing intelligent music, storage medium and terminal equipment

Publications (1)

Publication Number Publication Date
CN111078181A true CN111078181A (en) 2020-04-28

Family

ID=70313788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911265212.1A Pending CN111078181A (en) 2019-12-11 2019-12-11 Method and system for playing intelligent music, storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN111078181A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102198301A (en) * 2011-05-20 2011-09-28 哈尔滨工业大学 Music playing system based on body feature monitoring
CN104917902A (en) * 2015-06-30 2015-09-16 苏州寅初信息科技有限公司 Method and system for controlling music playing status of terminal
CN105516468A (en) * 2015-11-27 2016-04-20 上海与德通讯技术有限公司 Mobile terminal and music play control method thereof
CN109167878A (en) * 2018-08-23 2019-01-08 三星电子(中国)研发中心 A kind of driving method, system and the device of avatar model

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112307250A (en) * 2020-11-10 2021-02-02 珠海格力电器股份有限公司 Music recommendation and play control method and equipment
CN112413829A (en) * 2020-11-16 2021-02-26 珠海格力电器股份有限公司 Sleep data processing method, device, equipment and readable medium
CN112413829B (en) * 2020-11-16 2022-03-18 珠海格力电器股份有限公司 Sleep data processing method, device, equipment and readable medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200428