CN112507161A - Music playing method and device - Google Patents


Info

Publication number
CN112507161A
CN112507161A (application CN202011474741.5A)
Authority
CN
China
Prior art keywords
lyrics
music
lyric
user
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011474741.5A
Other languages
Chinese (zh)
Inventor
王国胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011474741.5A priority Critical patent/CN112507161A/en
Publication of CN112507161A publication Critical patent/CN112507161A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/64Browsing; Visualisation therefor

Abstract

The application discloses a music playing method and device. In the method, a lyric display mode corresponding to the body state (posture) of a user is determined; then, in the music playing process, first lyrics are displayed on the music playing interface according to the lyric display mode, where the first lyrics include the lyrics corresponding to the currently playing music time interval. The user's posture can take multiple forms, and correspondingly, the lyric display mode corresponding to the posture can also take multiple forms, so lyrics can be shown in a variety of ways during music playback, making lyric display more diversified. If the user's posture changes, the lyric display mode changes accordingly, realizing flexible variation of the lyric display mode. Moreover, when viewing the lyrics displayed on the playing interface, the user can also experience a sense of interaction with the terminal device playing the music, which increases the user's immersion and improves the user experience when listening to music.

Description

Music playing method and device
Technical Field
The application relates to the technical field of multimedia, in particular to a music playing method and device.
Background
With the increasing demand of people for entertainment and leisure, various terminal devices (such as mobile phones, tablet computers, televisions, etc.) can support installation of various application programs, wherein the application program for playing music can meet the demand of users for listening to music.
In order to meet the user's need to follow the lyrics, the lyrics of the music can be displayed in an interface of the application program while the application program plays the music. It is now common to display all the lyrics of the music in the interface of the application, with the lyrics scrolling with the music tempo and the lyric being played highlighted. The highlighting is typically achieved by enlarging the font or changing the font color. Referring to the exemplary lyric display interface shown in fig. 1, the lyric being played in this example is "i know that it sometimes needs to be noisy", and its font is rendered in bold in the interface to achieve the highlight effect.
However, in the current music playing process, the lyric being played can be highlighted only by enlarging the font or changing the font color, so the lyric display mode is monotonous.
Disclosure of Invention
The embodiment of the application provides a music playing method and device.
In a first aspect, an embodiment of the present application discloses a music playing method, including:
determining a lyric display mode corresponding to the posture of the user;
and in the music playing process, displaying first lyrics on a playing interface of the music according to the lyric displaying mode, wherein the first lyrics comprise lyrics corresponding to the playing music time interval.
The user's posture usually takes one of a plurality of forms, and correspondingly, the lyric display mode corresponding to the posture also has a plurality of forms. Displaying the lyrics through the above steps during music playback therefore makes the lyric display mode more diversified.
In an alternative design, the determining a lyric display mode corresponding to the user's posture includes:
determining the posture of the user according to at least two images that include the user and were captured at different moments;
and determining the lyric display mode corresponding to the body state of the user according to the corresponding relation between the body state of the user and the lyric display mode.
Through the steps, the lyric display mode corresponding to the user's body state can be determined based on at least two images of the user captured at different times and the corresponding relation between the user's body state and the lyric display mode.
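The posture-determination step above can be sketched as comparing body keypoints extracted from two images captured at different moments. The keypoint names, thresholds, and posture labels below are illustrative assumptions, not part of the patent text:

```python
# Hypothetical sketch: classify the user's posture from the displacement
# of body keypoints between two images taken at different moments.
# Each argument maps a keypoint name to an (x, y) pixel position.

def classify_posture(keypoints_t0, keypoints_t1, threshold=20):
    dx_wrist = keypoints_t1["wrist"][0] - keypoints_t0["wrist"][0]
    dy_hip = keypoints_t1["hip"][1] - keypoints_t0["hip"][1]
    if abs(dy_hip) > threshold:
        return "jumping"            # whole-body vertical displacement
    if abs(dx_wrist) > threshold:
        return "one_handed_motion"  # hand moved sideways
    return "static"

p0 = {"wrist": (100, 200), "hip": (150, 300)}
p1 = {"wrist": (160, 200), "hip": (150, 305)}
print(classify_posture(p0, p1))  # one_handed_motion
```

In practice the keypoints would come from a pose-estimation model; the sketch only shows how two non-simultaneous images suffice to distinguish motion from stillness.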
In an alternative design, the determining a lyric display mode corresponding to the user's posture includes:
and determining a lyric display mode corresponding to the body posture of the user according to first information transmitted by the first server, wherein the first information is used for indicating the lyric display mode corresponding to the body posture of the user.
Through the steps, the lyric display mode corresponding to the user's posture can be determined according to the first information transmitted by the first server. Because the first server determines the first information indicating the lyric display mode corresponding to the user's posture, the operations that the music playing device of this application needs to perform are reduced, and the efficiency of determining the lyric display mode is improved.
In an alternative design, the user's posture includes: a local limb motion of the user or a full body motion of the user;
the local limb action of the user comprises: at least one of a one-handed motion, a two-handed motion, a finger motion, and a head motion;
the user's whole body actions include: at least one of jumping, whole body swinging, and walking;
the lyric display mode comprises the following steps: at least one of lyric jumping, lyric wavy swing, and lyric left and right swing.
In an alternative design, the method further comprises:
and when the first lyrics are displayed on the music playing interface, adjusting the variation amplitude of the first lyrics in the displaying process according to the action amplitude of the user.
Through the steps, the change amplitude of the first lyrics in the display process can correspond to the action amplitude of the user, and the display diversity of the lyrics is further improved.
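The amplitude adjustment above can be sketched as a linear mapping from the measured user motion to the lyric animation amplitude, clamped to a displayable range; the scale factor and bounds are illustrative assumptions:

```python
# Hypothetical sketch: map the user's action amplitude (e.g. pixels of
# motion between frames) to the lyric swing amplitude on screen.
def lyric_amplitude(action_amplitude_px, max_px=40):
    # Linear scaling, clamped to [0, max_px] so the lyrics stay readable.
    return min(max(action_amplitude_px * 0.5, 0), max_px)

print(lyric_amplitude(20))   # 10.0 (small gesture -> small swing)
print(lyric_amplitude(200))  # 40 (large gesture, clamped)
```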
In an alternative design, the method further comprises:
determining a picture corresponding to at least one word included in the first lyrics, wherein the picture includes: a dynamic picture and/or a static picture;
and displaying the picture in the process of displaying the first lyrics on the playing interface of the music, wherein the picture is positioned between the background of the playing interface and the display layer of the first lyrics.
Through the steps, the picture corresponding to at least one word included by the first lyrics can be displayed in the displaying process of the first lyrics, and the diversity of a playing interface is further improved.
In an alternative design, the method further comprises:
if the difference between the time for displaying the first lyrics and the time for displaying the second lyrics is smaller than the first time difference, displaying a picture corresponding to at least one word included in the second lyrics while displaying a picture corresponding to at least one word included in the first lyrics, wherein the picture corresponding to at least one word included in the second lyrics is located between the background of the playing interface and the display layer of the first lyrics.
Through the steps, when the music tempo is fast, the picture corresponding to at least one word included in the first lyrics and the picture corresponding to at least one word included in the second lyrics can be displayed simultaneously, so that the pictures on the playing interface do not switch too quickly and cause viewing discomfort for the user.
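The timing rule above can be sketched directly: when the gap between the two lyrics' display times falls below the first time difference, both pictures are shown together. The threshold value is an illustrative assumption:

```python
# Hypothetical sketch: decide which lyric pictures to show at once.
def pictures_to_show(t_first, t_second, pic_first, pic_second, min_gap=2.0):
    """If the second lyric starts less than min_gap seconds after the
    first, show both pictures simultaneously to avoid abrupt switching."""
    if t_second - t_first < min_gap:
        return [pic_first, pic_second]
    return [pic_first]

print(pictures_to_show(10.0, 11.0, "sun.png", "rain.gif"))  # both shown
```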
In an alternative design, the determining of the picture corresponding to at least one word included in the first lyrics includes:
transmitting the first lyrics to a second server so that the second server determines a picture corresponding to at least one word included in the first lyrics according to the first lyrics;
and acquiring a picture corresponding to at least one word included in the first lyrics transmitted by the second server.
In an alternative design, the determining of the picture corresponding to at least one word included in the first lyrics includes:
determining a picture corresponding to at least one word included in the first lyrics by querying a database, wherein the database includes at least one picture and a corresponding relation between the picture and the word.
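The database query above can be sketched as a word-to-picture mapping scanned against the lyric text; the table contents and matching rule (first hit wins) are illustrative assumptions:

```python
# Hypothetical word-to-picture database: at least one picture plus the
# correspondence between pictures and words, as the design describes.
PICTURE_DB = {"rain": "rain.gif", "moon": "moon.png"}

def picture_for_lyric(lyric, db=PICTURE_DB):
    # Return the picture for the first word of the lyric found in the
    # database, or None when no word matches.
    for word in lyric.lower().split():
        if word in db:
            return db[word]
    return None

print(picture_for_lyric("The rain falls on me"))  # rain.gif
```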
In an alternative design, the method further comprises:
determining the font display style of the first lyrics according to the font display elements of the first lyrics, wherein the font display elements comprise: at least one of a style of the music, a font of album art of the music, and a subwoofer of the first lyrics;
and when the first lyrics are displayed on the playing interface of the music, displaying or adjusting the font of the first lyrics on the playing interface according to the font display style of the first lyrics.
Through the steps, the font of the first lyrics displayed on the playing interface can be displayed or adjusted according to the font display style of the first lyrics, further improving the diversity of the first lyrics displayed on the playing interface.
In an alternative design, the method further comprises:
determining an animation effect corresponding to the timbre of the playing music time interval;
and displaying the animation effect while displaying the first lyrics on the playing interface of the music, wherein the animation effect is positioned between the background of the playing interface and the display layer of the first lyrics.
Through the steps, the animation effect corresponding to the timbre of the playing music time interval can be displayed while the first lyrics are displayed, further enriching the display content of the playing interface and realizing diversified display of the playing interface.
In an alternative design, the method further comprises:
and adjusting the evolution effect of the animation effect according to the rhythm speed of the playing music time interval.
Through the steps, the evolution effect of the animation effect displayed on the playing interface corresponds to the rhythm speed of the music time interval, so that better watching experience can be brought to the user, the immersive experience of the user is enhanced, and the diversification of the display style of the playing interface is further improved.
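The tempo-driven adjustment above can be sketched as scaling the animation's evolution rate with the tempo of the current music interval; the reference tempo and linear scaling are illustrative assumptions:

```python
# Hypothetical sketch: tie the animation evolution speed to the tempo
# (rhythm speed) of the playing music time interval.
def animation_speed(bpm, base_speed=1.0, base_bpm=120):
    # Faster tempo -> faster evolution of the background animation.
    return base_speed * bpm / base_bpm

print(animation_speed(60))   # 0.5 (slow ballad, animation evolves slowly)
print(animation_speed(180))  # 1.5 (fast track, animation evolves quickly)
```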
In a second aspect, an embodiment of the present application discloses a music playing device, including:
a processor and a display;
the processor is used for determining a lyric display mode corresponding to the posture of the user;
the display is used for displaying a music playing interface, and in the music playing process, the music playing interface displays first lyrics according to the lyric display mode, wherein the first lyrics comprise lyrics corresponding to the playing music time interval.
In an optional design, the processor is specifically configured to determine a body state of the user according to at least two images that include the user and were captured at different moments, and to determine a lyric display mode corresponding to the body state of the user according to a correspondence between the body state of the user and the lyric display mode.
In an optional design, the processor is specifically configured to determine, according to first information transmitted by the first server, a lyric display manner corresponding to the user's posture, where the first information is used to indicate the lyric display manner corresponding to the user's posture.
In an alternative design, the user's posture includes: a local limb motion of the user or a full body motion of the user;
the local limb action of the user comprises: at least one of a one-handed motion, a two-handed motion, a finger motion, and a head motion;
the user's whole body actions include: at least one of jumping, whole body swinging, and walking;
the lyric display mode comprises the following steps: at least one of lyric jumping, lyric wavy swing, and lyric left and right swing.
In an optional design, the processor is further configured to, when the first lyric is displayed on the playing interface of the music, adjust a variation amplitude of the first lyric during the displaying process according to an action amplitude of the user.
In an alternative design, the processor is further configured to determine a picture corresponding to at least one word included in the first lyric, the picture including: a dynamic picture and/or a static picture;
the display is further configured to display the picture on a playing interface of the music in a process of displaying the first lyrics, where the picture is located between a background of the playing interface and a display layer of the first lyrics.
In an optional design, the display is further configured to, if a difference between a time for displaying the first lyric and a time for displaying the second lyric is smaller than a first time difference, display a picture corresponding to at least one word included in the second lyric on a playing interface of the music while displaying a picture corresponding to at least one word included in the first lyric, where the picture corresponding to at least one word included in the second lyric is located between a background of the playing interface and a display layer of the first lyric.
In an optional design, the processor is specifically configured to transmit the first lyric to a second server, so that the second server determines, according to the first lyric, a picture corresponding to at least one word included in the first lyric, and obtains the picture corresponding to at least one word included in the first lyric transmitted by the second server.
In an optional design, the processor is specifically configured to determine, by querying a database, a picture corresponding to at least one word included in the first lyric, where the database includes at least one picture and a correspondence between pictures and words.
In an alternative design, the processor is further configured to determine a font display style of the first lyric according to a font display element of the first lyric, the font display element including: at least one of a style of the music, a font of album art of the music, and a subwoofer of the first lyrics;
and when the first lyrics are displayed on the playing interface of the music, the processor is further configured to display or adjust the font of the first lyrics on the playing interface according to the font display style of the first lyrics.
In an optional design, the processor is further configured to determine an animation effect corresponding to the timbre of the playing music time interval;
the display is further configured to display the animation effect on a playing interface of the music while displaying the first lyrics, where the animation effect is located between a background of the playing interface and a display layer of the first lyrics.
In an alternative design, the processor is further configured to adjust the evolution effect of the animation effect according to the rhythm speed of the playing music time interval.
In a third aspect, an embodiment of the present application discloses a terminal apparatus, including:
at least one processor and a memory;
the memory to store program instructions;
the processor is configured to call and execute the program instructions stored in the memory, so as to enable the terminal device to execute the music playing method according to the first aspect.
In a fourth aspect, the embodiments of the present application disclose a computer-readable storage medium,
the computer-readable storage medium has stored therein instructions that, when executed on a computer, cause the computer to execute the music playing method of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising instructions that, when run on an electronic device, cause the electronic device to perform the method according to the first aspect.
According to the scheme provided by the embodiment of the application, the first lyrics can be displayed on the music playing interface in the music playing process in a lyric display mode corresponding to the body state of the user. The user's posture includes a plurality of forms, and correspondingly, the lyric display mode corresponding to the user's posture also includes a plurality of forms, so in the embodiment of the application, in the music playing process, the lyrics can be displayed through a plurality of lyric display modes. Compared with the prior art, the lyric display mode provided by the embodiment of the application is more diversified.
Furthermore, in the scheme provided by the embodiment of the application, the lyric display mode corresponds to the body state of the user, and if the body state of the user changes, the lyric display mode also changes correspondingly, so that the lyric display mode can be flexibly changed.
In addition, in the scheme provided by the embodiment of the application, the lyric display mode of the first lyric corresponds to the posture of the user, so that the user can experience the interactive feeling between the user and the terminal equipment playing music when viewing the lyric displayed on the playing interface. By the scheme provided by the embodiment of the application, the immersion experience of the user can be increased, and the user experience in music listening is improved.
Drawings
FIG. 1 is an exemplary diagram of a lyrics presentation interface disclosed in the prior art;
FIG. 2 is a schematic diagram of a mobile phone;
fig. 3 is a schematic view of a work flow of a music playing method disclosed in an embodiment of the present application;
fig. 4(a) is a schematic view of a playing interface of a music playing method disclosed in an embodiment of the present application;
fig. 4(b) is a schematic diagram of another playing interface of a music playing method disclosed in the embodiment of the present application;
fig. 4(c) is a schematic diagram of another playing interface of a music playing method disclosed in the embodiment of the present application;
fig. 5 is a schematic view of a scene in a music playing method disclosed in an embodiment of the present application;
fig. 6 is a schematic workflow diagram of another music playing method disclosed in the embodiment of the present application;
fig. 7 is a schematic view of an interaction scene of a music playing method disclosed in an embodiment of the present application;
fig. 8 is an interaction scene diagram of another music playing method disclosed in the embodiment of the present application;
fig. 9 is an exemplary diagram of a screen corresponding to at least one word included in first lyrics in a music playing method disclosed in an embodiment of the present application;
fig. 10 is an interaction scene diagram of another music playing method disclosed in the embodiment of the present application;
fig. 11 is an interaction scene schematic diagram of another music playing method disclosed in the embodiment of the present application;
fig. 12 is a scene schematic diagram of a music playing method disclosed in an embodiment of the present application;
fig. 13 is a diagram illustrating an example of a playing interface in a music playing method disclosed in an embodiment of the present application;
fig. 14 is a diagram illustrating an example of a playing interface in another music playing method disclosed in the embodiment of the present application;
fig. 15 is a schematic structural diagram of a music playing system according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a terminal device disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
The terminology used in the following examples is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, two, or more. The term "and/or" describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In order to solve the problem of single lyric display mode in the existing music playing technology, the embodiment of the application provides a music playing method. The method is applied to a system capable of playing music. In a possible implementation manner, the system includes a music Application (APP) that can be used to play music, and the music APP can play music through the scheme provided by the embodiment of the present application.
In another feasible implementation manner, the system may include a terminal device, a music APP capable of playing music may be installed in the terminal device, the terminal device may run the music APP to play music, and in the process of playing music, a display screen of the terminal device displays a playing interface of the music.
The terminal device may be a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a Personal Digital Assistant (PDA), a sound box with a screen, and the like, which is not limited in this embodiment.
Or, in another possible implementation manner, the system may include a terminal device, in which a music APP capable of playing music may be installed, and the system may further include a server capable of performing information interaction with the terminal device. The terminal equipment is in the process of playing the music through running the music APP, and the terminal equipment can interact with the server.
Taking a mobile phone as an example of the terminal device, fig. 2 shows a schematic structural diagram of the mobile phone.
The mobile phone may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a radio frequency module 150, a communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a screen 301, and a Subscriber Identification Module (SIM) card interface 195, etc.
It is to be understood that the illustrated structure of the embodiments of the present application does not constitute a specific limitation to the mobile phone. In other embodiments of the present application, the handset may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can be a nerve center and a command center of the mobile phone. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface, thereby implementing the touch function of the mobile phone.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the communication module 160. For example: the processor 110 communicates with a bluetooth module in the communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the communication module 160 through the UART interface, so as to realize the function of playing music through the bluetooth headset.
The MIPI interface may be used to connect the processor 110 with peripheral devices such as the screen 301, the camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the mobile phone. The processor 110 and the screen 301 communicate through the DSI interface to realize the display function of the mobile phone.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the screen 301, the communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the mobile phone, or to transmit data between the mobile phone and a peripheral device. It may also be used to connect an earphone and play audio through the earphone. The interface may also be used to connect other terminal devices, such as AR devices.
It should be understood that the interface connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the mobile phone. In other embodiments of the present application, the mobile phone may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the cell phone. The charging management module 140 may also supply power to the terminal device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the screen 301, the camera 193, the communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the rf module 150, the communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The radio frequency module 150 may provide a solution including wireless communication of 2G/3G/4G/5G and the like applied to the mobile phone. The rf module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The rf module 150 may receive the electromagnetic wave from the antenna 1, and filter, amplify, etc. the received electromagnetic wave, and transmit the filtered electromagnetic wave to the modem processor for demodulation. The rf module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the rf module 150 may be disposed in the processor 110. In some embodiments, at least some functional modules of the rf module 150 may be disposed in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the screen 301. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 110 and may be disposed in the same device as the rf module 150 or other functional modules.
The communication module 160 may provide solutions for wireless communication applied to a mobile phone, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The communication module 160 may be one or more devices integrating at least one communication processing module. The communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The communication module 160 may also receive a signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it into electromagnetic waves via the antenna 2 to radiate it.
In some embodiments, the handset antenna 1 is coupled to the rf module 150 and the handset antenna 2 is coupled to the communication module 160 so that the handset can communicate with networks and other devices via wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), code division multiple access (CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The mobile phone realizes the display function through the GPU, the screen 301, the application processor and the like. The GPU is a microprocessor for image processing, connecting the screen 301 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information. In the embodiment of the present application, the screen 301 may include a display and a touch device therein. The display is used for outputting display contents to a user, and the touch device is used for receiving a touch event input by the user on the screen 301.
In the mobile phone, the sensor module 180 may include one or more of a gyroscope, an acceleration sensor, a pressure sensor, an air pressure sensor, a magnetic sensor (e.g., a hall sensor), a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, a pyroelectric infrared sensor, an ambient light sensor, or a bone conduction sensor, which is not limited in this embodiment.
The mobile phone can realize shooting function through the ISP, the camera 193, the video codec, the GPU, the flexible screen 301, the application processor and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened and light is transmitted through the lens to the camera's photosensitive element, which converts the optical signal into an electrical signal and transmits it to the ISP, which processes it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin color of the image, and can optimize parameters such as the exposure and color temperature of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the handset may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; it can process digital image signals as well as other digital signals. For example, when the mobile phone selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The handset may support one or more video codecs. Thus, the mobile phone can play or record videos in various encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can realize the applications of intelligent cognition and the like of the mobile phone, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the cellular phone and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area can store data (such as audio data, a phone book and the like) created in the use process of the mobile phone. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The mobile phone can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal. The user can listen to music or a hands-free call through the speaker 170A of the mobile phone.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the mobile phone receives a call or voice information, the receiver 170B can be close to the ear to receive voice.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal to the microphone 170C by speaking with the mouth close to it. The mobile phone may be provided with at least one microphone 170C. In other embodiments, the mobile phone may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In still other embodiments, the mobile phone may include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and so on.
The headphone interface 170D is used to connect a wired headphone. The headphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The mobile phone may receive key input and generate key signal input related to user settings and function control of the mobile phone.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations applied to different areas of the flexible screen 301. Different application scenes (such as time reminding, receiving information, alarm clock, game, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be inserted into or pulled out of the SIM card interface 195 to attach it to or detach it from the mobile phone. The mobile phone can support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards, of the same or different types, can be inserted into the same SIM card interface 195 at the same time. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with external memory cards. The mobile phone realizes functions such as calls and data communication through interaction between the SIM card and the network. In some embodiments, the mobile phone employs an eSIM, namely an embedded SIM card, which can be embedded in the mobile phone and cannot be separated from it.
In addition, an operating system runs on the above components, for example, the iOS operating system developed by Apple, the Android open-source operating system developed by Google, or the Windows operating system developed by Microsoft. Applications can be installed and run on the operating system.
In order to clarify the aspects provided by the present application, the following description is made of various embodiments with reference to the accompanying drawings.
The embodiment of the application provides a music playing method. Referring to a work flow diagram shown in fig. 3, the music playing method provided by the embodiment of the present application includes the following steps:
Step S11: determining a lyric display mode corresponding to the posture of the user.
Here, the user generally refers to a listener of the music, and there may be one or more users. In the process of listening to music, users often make corresponding movements in response to the music, for example, waving both hands. In this case, in the embodiment of the present application, the lyric display mode may be adjusted accordingly according to the posture of the user.
In addition, in this embodiment of the present application, the posture of the user may include: a local limb movement of the user or a full body movement of the user.
The local limb action of the user may include: at least one of a one-hand motion, a two-hand motion, a finger motion, and a head motion. For example, the one-hand action may be a wave-like swing of one hand or an up-and-down wave of one hand; the two-hand action may be a wave-like swing of both hands or an up-and-down wave of both hands; the finger action may be a preset gesture made with the fingers, for example, a thumbs-up gesture; the head motion may be a side-to-side or back-and-forth swing of the head.
In one possible implementation, the user's posture may include only one of the local limb movements. Alternatively, in another possible implementation, the user's posture may include a combination of local limb movements, for example, a one-hand up-and-down wave combined with a back-and-forth head swing.
The user's whole-body action is an action of the user's whole-body participation, and in one possible implementation, the user's whole-body action may include: at least one of jumping, whole body swinging, and walking.
Of course, in an actual music playing scene, the posture of the user may also be other forms of local body movements or whole body movements, which is not limited in the present application.
In the embodiment of the present application, the lyric display mode includes: at least one of lyric jumping, lyric wavy swing, and lyric left and right swing.
The lyric jumping means that some or all of the words in the lyrics jump on the playing interface. Illustratively, referring to the exemplary diagram of the playing interface shown in fig. 4(a), in this example, the first lyric is shown in the playing interface of the music by means of lyric jumping, wherein the first lyric is "let us swing up with double paddles", and some of the words (e.g., "we" and "double paddles") in the first lyric jump on the playing interface.
The lyric wavy swing means that some or all of the words in the lyrics swing in a wave pattern on the playing interface. Illustratively, referring to the exemplary diagram of the playing interface shown in fig. 4(b), in this example, the first lyric is shown in the playing interface of the music by means of the lyric wavy swing; the first lyric is "let us swing up with double paddles", and all words in the first lyric swing in a wave pattern on the playing interface.
In addition, the lyric left-right swing means that some or all of the words in the lyrics swing in the left-right direction on the playing interface. Illustratively, referring to the exemplary diagram of the playing interface shown in fig. 4(c), in this example, the first lyric is shown in the playing interface of the music by way of the lyric left-right swing, where the first lyric is "let us swing up with double paddles". At the time corresponding to fig. 4(c), all words in the first lyric swing to the left on the playing interface, and at the next time, all words in the first lyric may swing to the right, thereby presenting the effect of the first lyric swinging left and right.
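As an illustrative sketch only (not part of the claimed method), the three lyric display modes described above can be thought of as per-character screen offsets computed as a function of time; all function names and parameter values here are hypothetical:

```python
import math

def lyric_offset(mode: str, i: int, t: float,
                 amplitude: float = 8.0, speed: float = 2.0) -> tuple:
    """Return a hypothetical (dx, dy) pixel offset for character i of a
    lyric line at time t (seconds), for the three modes in the text."""
    phase = speed * t
    if mode == "jump":        # characters bounce upward, staggered per character
        return (0.0, -abs(math.sin(phase + i * 0.5)) * amplitude)
    if mode == "wave":        # a wave travels across the line of characters
        return (0.0, math.sin(phase + i * 0.6) * amplitude)
    if mode == "side_swing":  # the whole line sways left and right together
        return (math.sin(phase) * amplitude, 0.0)
    return (0.0, 0.0)         # unknown mode: static display
```

A renderer would call this every frame for each character and add the offset to the character's base position; the staggered phase term `i * 0.5` is what makes the jump and wave modes ripple along the line instead of moving in lockstep.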
In addition, in one possible implementation, the entire first lyric may be displayed in a single lyric display manner. Alternatively, in another possible implementation, the first lyric may be divided into different portions, with each portion displayed in one lyric display manner and different portions displayed in different manners. For example, the first lyrics may be divided into two lines, the first line being presented by lyric jumping and the second line by the lyric left-right swing.
Of course, the first lyric may also be displayed in other lyric display manners, which is not limited in the embodiment of the present application.
In the embodiment of the application, the lyric display mode corresponds to the posture of the user and to the direction of the user's movement. In one example, if the user's posture is jumping, the lyric display mode corresponding to the user's posture may be that some or all of the words in the first lyric jump; if the posture of the user is a one-hand or two-hand wavy swing, the lyric display mode corresponding to the posture of the user may be the lyric wavy swing; if the posture of the user is a one-hand or whole-body left-right swing, the lyric display mode corresponding to the posture of the user may be that some or all of the words swing left and right. Further, in this scene, the swing direction of the first lyric may be the same as the swing direction of the user's hand or hands: for example, when the user swings left, some or all of the words in the first lyric swing left, and when the user swings right, they swing right. If the posture of the user is a head swing motion, the lyric display mode corresponding to the posture of the user may likewise be that some or all of the words swing left and right, and the swing direction of the first lyric may be the same as the swing direction of the user's head: for example, when the head of the user swings to the right, the first lyric swings to the right, and when the head of the user swings to the left, the first lyric swings to the left.
Of course, the corresponding relationship between the body state of the user and the lyric display mode may be in other forms, which is not limited in the embodiment of the present application.
In the embodiment of the application, the corresponding relationship between the body state of the user and the lyric display mode can be preset, and in this case, after the body state of the user is determined, the lyric display mode corresponding to the body state of the user can be determined through the corresponding relationship.
One or more users often listen to the music while it is playing. In this embodiment of the application, if there are a plurality of users and their postures differ, the lyric display mode determined in step S11 is the one corresponding to the posture adopted by the largest number of users.
For example, in a scenario of playing music, the body states of three users are obtained, wherein the body states of two users are one-handed wavy swing, and the body state of one user is head swing, then the lyric display mode determined in step S11 is the lyric display mode corresponding to the one-handed wavy swing.
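The majority rule in the example above can be sketched as a simple frequency count; this is an illustrative sketch, with posture labels chosen arbitrarily:

```python
from collections import Counter

def select_display_posture(postures):
    """Among multiple listeners' postures, pick the one adopted by the
    largest number of users (step S11's rule for plural users).
    Ties are broken by first occurrence in the input."""
    return Counter(postures).most_common(1)[0][0]
```

For the three-user scenario in the text, `select_display_posture(["one_hand_wave", "one_hand_wave", "head_swing"])` yields `"one_hand_wave"`, so the lyric wavy swing would be chosen.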
Step S12: in the music playing process, displaying first lyrics on the playing interface of the music according to the lyric display mode, wherein the first lyrics include the lyrics corresponding to the music time period being played.
Music is often composed of a plurality of music time periods. When dividing the music time periods, one music time period may be set to a period of preset duration, or, if the music includes lyrics, one music time period may be set to the period in which one line of lyrics is played. Of course, the music time periods may also be divided in other manners, which is not limited in the embodiment of the present application.
In the embodiment of the application, the first lyrics include the lyrics corresponding to the music time period being played. In this case, a user can obtain the lyrics of the currently playing music time period by observing the playing interface displayed by the terminal device.
Additionally, the first lyrics may further include the lyrics of the previous music time period and/or the next music time period relative to the music time period being played. For example, the rhythm of some music is fast; in this case, the first lyrics may include the lyrics corresponding to the music time period being played together with the lyrics of the next music time period, so that both can be displayed simultaneously on the playing interface of the music according to the lyric display mode corresponding to the posture of the user, allowing the user to read the lyrics of the next music time period in time and prepare to sing along.
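Selecting the first lyrics for a given playback position can be sketched as a lookup over timestamped lyric lines (an LRC-style list of start times). This is a hypothetical sketch; the data format and function name are assumptions, not the patent's:

```python
from bisect import bisect_right

def first_lyrics(timed_lines, t):
    """Given [(start_time_sec, text), ...] sorted by start time, return the
    line for the music time period being played plus, if available, the
    next line, matching the fast-rhythm scheme described above."""
    starts = [start for start, _ in timed_lines]
    idx = bisect_right(starts, t) - 1   # last line that started at or before t
    if idx < 0:
        return []                        # playback is before the first line
    current = timed_lines[idx][1]
    if idx + 1 < len(timed_lines):
        return [current, timed_lines[idx + 1][1]]
    return [current]
```

For example, with lines starting at 0 s, 5 s, and 10 s, a playback position of 6 s returns the second and third lines together.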
The embodiment of the application provides a music playing method, wherein a lyric display mode corresponding to the posture of a user can be determined, and first lyrics are displayed on a music playing interface according to the determined lyric display mode in the music playing process.
According to the scheme provided by the embodiment of the application, the first lyrics can be displayed on the music playing interface in the music playing process in a lyric display mode corresponding to the body state of the user. The user's posture includes a plurality of forms, and correspondingly, the lyric display mode corresponding to the user's posture also includes a plurality of forms, so in the embodiment of the application, in the music playing process, the lyric of the music can be displayed in a plurality of lyric display modes. Compared with the prior art, the lyric display mode provided by the embodiment of the application is more diversified.
Furthermore, in the scheme provided by the embodiment of the application, the lyric display mode corresponds to the body state of the user, and if the body state of the user changes, the lyric display mode also changes correspondingly, so that the lyric display mode can be flexibly changed.
In addition, in the scheme provided by the embodiment of the application, the lyric display mode of the first lyric corresponds to the posture of the user, so that the user can experience the interactive feeling between the user and the terminal equipment playing music when viewing the lyric displayed on the playing interface. Therefore, by the scheme provided by the embodiment of the application, the immersive experience of the user can be increased, and the user experience in music listening is improved.
In order to clarify the solution provided by the embodiment of the present application, the present application also provides an example, in which the posture of the user is a one-handed left-right swing.
In addition, in this example, the lyric display mode corresponding to the user's posture, i.e., the user's one-handed left-right swing, is assumed to be the lyric left-right swing, and in this example, referring to the scene diagram shown in fig. 5, the first lyric is displayed in the left-right swing mode. Further, to enhance the user experience, the direction in which the first lyrics swings may be the same as the direction in which the user's single hand swings. And in this example, the user's single hand is swinging from left to right.
In this case, the first lyric is set to "let us swing up with double paddles". Because the user's single hand swings from left to right, the first lyric may also swing from left to right, see fig. 5.
In addition, in the next music time period of the example shown in fig. 5, the first lyric is "the little boat pushes aside the waves". If the user's posture is a one-hand left-right swing and the swing direction is from right to left, then in this example it is determined, according to the user's posture, that the first lyric is presented in a left-right swing manner, and further, the first lyric can swing from right to left.
According to the corresponding example of fig. 5, according to the scheme provided by the embodiment of the application, the display mode of the first lyrics can be adjusted according to the change of the user posture, so that the playing interface of the music can display the first lyrics in a diversified lyrics display mode. Moreover, the lyric display mode of the first lyrics corresponds to the posture of the user, so that the immersive experience of the user when listening to music can be improved.
In the embodiment of the application, a lyric display mode corresponding to the body posture of the user needs to be determined, so that the first lyric is displayed on a music playing interface according to the lyric display mode. In this case, the lyric display mode corresponding to the user's posture may be determined in various ways according to the embodiment of the present application.
In one possible implementation, referring to the workflow diagram shown in fig. 6, the determining the lyric presentation mode corresponding to the user's posture includes the following steps:
Step S111: determining the posture of the user according to at least two images of the user captured at different times.
In the scheme provided by the embodiment of the application, the posture of the user is determined from at least two images of the user captured at different times. The at least two images may be captured by an imaging device, such as a camera or a video recorder.
The system for executing the music playing method provided by the embodiment of the application may include a terminal device on which a music APP is installed, and music playing can be realized by running the music APP. In this case, if the imaging device is built into the terminal device, the user may be continuously photographed by the built-in imaging device, so that the terminal device acquires at least two images of the user captured at different times.
For example, if the terminal device is a mobile phone, it can shoot through its camera to obtain at least two images of the user captured at different times; or, if the terminal device is a television, at least two such images can be acquired through a camera installed on the television.
Alternatively, the system executing the music playing method provided by the embodiment of the present application may be connected to an imaging device and acquire the images captured by it. For example, an imaging device may be installed in a singing room; when a user is within its shooting range, the imaging device may capture images including the user and transmit them to the system, and the system determines the posture of the user according to the received images.
Of course, at least two images including the user that are not at the same time may also be obtained in other manners, which is not limited in this application.
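The patent does not specify how posture is extracted from the images; as a purely hypothetical sketch, comparing the tracked position of the user's hand across two frames gives a crude motion direction (a real system would use pose estimation over more frames):

```python
def classify_hand_motion(p1, p2, threshold=10.0):
    """Infer a swing direction from the (x, y) pixel position of the user's
    hand in two images taken at different times. Coordinates assume x
    increases rightward and y increases downward, as in image space.
    All names and the threshold value are illustrative assumptions."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    if abs(dx) < threshold and abs(dy) < threshold:
        return "still"                     # movement too small to classify
    if abs(dx) >= abs(dy):                 # dominantly horizontal motion
        return "swing_right" if dx > 0 else "swing_left"
    return "swing_down" if dy > 0 else "swing_up"
```

A sequence of such per-frame-pair directions (e.g. alternating `swing_left` / `swing_right`) could then be mapped to the postures named in the text, such as a one-hand left-right swing.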
Step S112: determining the lyric display mode corresponding to the posture of the user according to the correspondence between the user's posture and the lyric display modes.
In the embodiment of the application, a corresponding relationship between the body state of the user and the lyric display mode can be set, and in this case, after the body state of the user is determined, the lyric display mode corresponding to the body state of the user can be determined through the corresponding relationship.
The correspondence may take a plurality of forms. For example, in the correspondence, the lyric display mode corresponding to a jumping posture may be set to a partial or full lyric jump, and the lyric display mode corresponding to a one-handed left-right swing may be set to a partial or full left-right swing of the words in the lyrics. Of course, the correspondence may also take other forms, which is not limited in the embodiments of the present application.
In addition, after the correspondence is set, it can be adjusted according to received user operations, and the lyric display mode corresponding to the user's posture is then determined according to the adjusted correspondence, so that the lyric display mode matches the user's preference.
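The correspondence and its user-driven adjustment described above can be sketched as a simple lookup. The posture names, mode identifiers, and default fallback below are illustrative assumptions, not values from this application:

```python
# Sketch of the posture-to-display-mode correspondence described above.
# The posture names and mode identifiers are illustrative placeholders.
POSTURE_TO_MODE = {
    "beat": "partial_lyric_beat",               # words in the lyric bounce in time
    "one_hand_swing": "word_left_right_swing",  # words swing left and right
}

def lookup_display_mode(posture, overrides=None):
    """Return the lyric display mode for a posture.

    `overrides` models the user-adjusted correspondence: adjustments made
    through user operations take priority over the preset defaults.
    """
    if overrides and posture in overrides:
        return overrides[posture]
    return POSTURE_TO_MODE.get(posture, "default_scroll")
```

A user operation that changes a preference would simply write into the `overrides` mapping, leaving the preset correspondence intact.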
Through the scheme provided by the steps S111 to S112, the posture of the user can be determined through at least two images including the user, which are not at the same time, and then the lyric display mode corresponding to the posture of the user is determined according to the corresponding relationship between the posture of the user and the lyric display mode, so that the first lyric is displayed according to the lyric display mode corresponding to the posture of the user.
Further, in the solution provided in the embodiment of the present application, before performing step S111, the following steps may also be included:
and determining that the playing interface of the music is visible on a screen of the terminal equipment playing the music.
The music playing interface is visible on a screen of the terminal device, which means that a user can watch the music playing interface when watching the screen of the terminal device.
When playing music, according to the setting of a user, the music APP can run in the foreground or the background of the terminal equipment. If the music APP runs in the background of the terminal device, the playing interface of the music APP usually cannot be displayed on the display interface of the terminal device, that is, the playing interface of the music is invisible on the screen of the terminal device playing the music.
In addition, in some cases, in order to reduce power consumption, the terminal device may also turn off its screen during music playing. Once the terminal device is in the screen-off state, the playing interface of the music is no longer visible on the screen of the terminal device.
In this case, in the scheme provided in this embodiment of the present application, it may be determined whether a playing interface of the music is visible on the screen of the terminal device by determining whether the music APP is running on the foreground and whether the terminal device turns off the screen. If the music APP runs in the foreground of the terminal device and the terminal device does not turn off the screen, it can be determined that the playing interface of the music is visible on the screen of the terminal device.
In this scheme, the posture of the user is determined only when the playing interface of the music is determined to be visible on the screen of the terminal device playing the music. If the playing interface of the music is not visible on the screen of the terminal device, determination of the user's posture is paused until the playing interface is determined to be visible on the screen again.
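The visibility check and the gating of posture detection can be summarized as two small predicates. This is a minimal sketch, assuming visibility reduces to exactly the two conditions named above (music APP in the foreground, screen not off):

```python
def play_interface_visible(app_in_foreground: bool, screen_on: bool) -> bool:
    """The playing interface is visible only if the music APP runs in the
    foreground AND the terminal device's screen has not been turned off."""
    return app_in_foreground and screen_on

def should_detect_posture(app_in_foreground: bool, screen_on: bool) -> bool:
    # Posture determination (and hence the display-mode lookup) is paused
    # whenever the playing interface is not visible, reducing power use.
    return play_interface_visible(app_in_foreground, screen_on)
```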
Through the above steps, the first lyrics are displayed on the playing interface of the music, in the lyric display mode corresponding to the user's posture, only after it is determined that the playing interface is visible on the screen of the terminal device. The user can accordingly view the first lyrics by watching the playing interface, and because the first lyrics are displayed in a mode corresponding to the user's own posture, the user's sense of immersion while listening to the music is improved.
In addition, if the playing interface of the music is not visible on the screen of the terminal device, the user cannot watch the playing interface, so even if the playing interface displays the first lyrics, the user cannot see them. Therefore, when the interface of the terminal device is invisible, the playing interface may temporarily refrain from displaying the first lyrics in the lyric display mode corresponding to the user's posture. Accordingly, in this case, both the determination of the user's posture and the determination of the corresponding lyric display mode may be suspended, which reduces the operations the system needs to perform and thereby reduces the system's power consumption.
Or, in another possible implementation manner, the determining a lyric display manner corresponding to the posture of the user includes the following steps:
and determining a lyric display mode corresponding to the body posture of the user according to first information transmitted by the first server, wherein the first information is used for indicating the lyric display mode corresponding to the body posture of the user.
In this implementation, the imaging device may be built into the first server, or the first server may be connected to the imaging device. In the music playing process, the first server can acquire at least two images of the user, captured by the imaging device at different times, and determine the user's posture by processing the images. The first server then determines the lyric display mode corresponding to the user's posture according to the correspondence between the user's posture and the lyric display mode, and transmits first information indicating that lyric display mode, so that a system executing the scheme provided by the embodiment of the present application can determine the lyric display mode corresponding to the user's posture according to the first information transmitted by the first server.
Of course, in the embodiment of the present application, the lyric display mode corresponding to the posture of the user may also be determined in other ways, which is not limited in the embodiment of the present application.
Further, in the embodiment of the present application, the following operations may also be included:
and when the first lyrics are displayed on the music playing interface, adjusting the variation amplitude of the first lyrics in the displaying process according to the action amplitude of the user.
In general, the larger the motion amplitude of the user, the larger the amplitude of the lyric change. For example, if the user's posture is beating, the corresponding lyric display mode may make some of the words in the lyrics beat, and the larger the amplitude of the user's beat, the higher those words bounce.
Through the scheme, the variation range of the first lyrics in the display process can be changed along with the variation of the action range of the user, so that the diversity of lyric display modes is further increased, and the immersive experience of the user in music listening is improved.
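One way to make the lyric change amplitude track the user's action amplitude is a clamped linear mapping. The normalized input range and pixel bounds below are illustrative assumptions, not values from this application:

```python
def lyric_change_amplitude(motion_amplitude: float,
                           min_px: float = 4.0,
                           max_px: float = 40.0) -> float:
    """Map a normalized motion amplitude (0.0-1.0) to a bounce height in
    pixels. Monotonic: a bigger user motion yields a bigger lyric movement.
    The pixel range is an illustrative choice."""
    clamped = max(0.0, min(1.0, motion_amplitude))
    return min_px + clamped * (max_px - min_px)
```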
In the above embodiment, a scheme for determining a lyric display mode corresponding to the user's posture and displaying the first lyric on the music playing interface according to the lyric display mode corresponding to the user's posture in the music playing process is provided. The scheme can be applied to a system for executing the music playing method provided by the embodiment of the application, and the system can comprise various forms. In a possible form, a system for executing the music playing method provided by the embodiment of the present application includes a music APP for playing music.
Alternatively, in another possible form, the system includes a terminal device in which a music APP capable of playing music is installed. Under the condition, the terminal equipment can realize the playing of the music by running the music APP, and in the process of playing the music, the playing interface of the music is displayed through the display screen of the terminal equipment.
Or, in another possible implementation manner, the system may include not only the terminal device but also a server capable of information interaction with the terminal device. While the terminal device plays music by running the music APP, it can interact with the server.
Of course, the system can also be in other forms, and the embodiment of the application is not limited to this. For different forms of the system, the present application provides the following music playback scenarios.
In a scenario one, a system for executing the music playing method provided by the embodiment of the present application includes a music APP. In this scenario, referring to the interaction schematic diagram shown in fig. 7, the scheme provided in the embodiment of the present application includes the following steps:
step S21, the music APP determines that a playing interface of the music is visible on a screen of the terminal equipment playing the music.
In general, when the music APP determines that the music APP is running in the foreground of the terminal device and the terminal device is not turning off the screen, it may be determined that the playing interface of the music is visible on the screen of the terminal device playing the music.
In addition, if the music APP determines that the playing interface of the music is not visible on the screen of the terminal device, the user will not see the lyrics even if the playing interface displays them; in this case, it is unnecessary to determine the user's posture or the lyric display mode corresponding to it.
Step S22, the imaging device captures at least two images of the user at different times and transmits the images to the music APP.
The music APP is usually installed in a terminal device, in which case the imaging device may be one built into the terminal device where the music APP is installed. Alternatively, the imaging device can be connected to the terminal device in which the music APP is installed.
In the solution provided by the embodiment of the present application, the imaging device may periodically perform shooting, and after each shooting, transmit the shot image to the music APP. Or, the music APP may trigger the imaging device after determining that music needs to be played and determining that a playing interface of the music is visible on a screen of a terminal device playing the music. And the imaging device starts shooting after receiving the trigger of the music APP.
Step S23, the music APP determines the lyric display mode corresponding to the user's posture according to the posture indicated by at least two images of the user captured at different times.
In this step, the music APP may analyze the image, determine the posture of the user according to a result of the analysis, and determine a lyric display mode corresponding to the posture of the user according to a correspondence between the posture of the user and the lyric display mode.
Or after the music APP obtains at least two images including the user, which are not at the same time, the images are analyzed and processed by a processor of terminal equipment provided with the music APP, and after the body state of the user is determined through analysis and processing, the processor transmits the determined body state of the user to the music APP, so that the music APP determines a corresponding lyric display mode according to the body state of the user.
Or, the music APP may transmit the received at least two images of the user captured at different times to a remote server, and the remote server may determine the user's posture by analyzing and processing the images. The remote server can then transmit the user's posture to the music APP so that the music APP can determine the lyric display mode corresponding to it.
In addition, the music APP may store a correspondence between the user's posture and the lyric display manner, or the music APP may access a memory in which a correspondence between the user's posture and the lyric display manner is stored, and acquire the correspondence between the user's posture and the lyric display manner by accessing the memory. In this case, the music APP may determine a lyric display mode corresponding to the user's posture according to the correspondence.
Or, the music APP may transmit the received at least two images including the user at different times to a remote server, the remote server determines the posture of the user, then the remote server queries a memory storing a correspondence between the posture of the user and a lyric display mode, and by querying the memory, the remote server may determine the lyric display mode corresponding to the posture of the user. Then, the remote server may transmit the lyric display mode to the music APP. In this case, the music APP may determine a lyric display mode corresponding to the user's posture according to transmission from the remote server.
And step S24, in the music playing process of the music APP, displaying first lyrics on the playing interface of the music according to a lyric display mode corresponding to the user posture.
The playing interface of the music can be a display interface of the music APP.
Through the above steps, the music APP can interact with the imaging device, the music APP can determine the lyric display mode corresponding to the body state of the user, and the first lyrics are displayed on the playing interface of the music through the lyric display mode so as to display the lyrics through the diversified lyric display mode.
Further, in this scheme, after the music APP receives at least two images including the user at different times, the motion amplitude of the user may be determined according to the processing of the images, and the variation amplitude of the first lyric in the display process is adjusted accordingly.
Or, the music APP transmits the at least two images including the user at different times to a remote server, and the remote server, when determining the posture of the user according to the images, may also determine the action amplitude of the user according to the images, and transmit the relevant information of the action amplitude of the user to the music APP, so that the music APP determines the action amplitude of the user, thereby being capable of adjusting the change amplitude of the first lyrics in the display process according to the action amplitude.
Alternatively, in this scenario, the music APP may be connected to a first server. And the first server can be connected with an imaging device so as to acquire at least two images which are shot by the imaging device and comprise the user and are not at the same moment. The first server determines the body state of the user through at least two images of the user, wherein the images are not at the same time, then determines a lyric display mode corresponding to the body state of the user, and transmits first information indicating the lyric display mode to the music APP. The music APP displays the first lyrics on a playing interface of the music through the lyric display mode indicated by the first information.
Further, in the first information transmitted by the first server, the action amplitude of the user may also be indicated, in this case, after the music APP acquires the first information, the change amplitude of the first lyric in the display process may also be adjusted according to the action amplitude of the user indicated by the first information.
Through this scenario, the music APP can provide diversified lyric display modes for the user. In addition, in this scenario the music APP determines the lyric display mode corresponding to the user's posture and displays the first lyrics accordingly; after the user changes posture, the music APP can display the lyrics in the display mode corresponding to the adjusted posture, realizing flexible change of the lyric display mode.
In a second scenario, a system for executing the music playing method provided by the embodiment of the present application includes a terminal device, where a music APP is installed in the terminal device, and the music APP can be operated to play music. In this scenario, referring to the interaction diagram shown in fig. 8, in the solution provided in the embodiment of the present application, the method includes the following steps:
step S31, the terminal device determines that the music playing interface is visible on its own screen.
In this step, the terminal device determines that the playing interface of the music is visible on its own screen when it determines that it has not turned off its own screen and the music APP installed in it is running in the foreground.
In addition, if the terminal device determines that the playing interface of the music is not visible on its screen, determination of the user's posture, and of the lyric display mode corresponding to it, can be paused.
Step S32, the terminal device determines at least two images of the user captured at different times.
The terminal device may capture the image through an imaging device (e.g., a camera) installed on the terminal device, or the terminal device may be connected to the imaging device and obtain the image transmitted by the imaging device.
In this case, the imaging device may periodically perform photographing. Or, the terminal device may trigger the imaging device after determining that music needs to be played and a playing interface of the music is visible on a screen of the terminal device, and the imaging device starts shooting after receiving the trigger of the terminal device.
Step S33, the terminal device determines the lyric display mode corresponding to the user's posture according to the at least two images of the user captured at different times, and transmits the lyric display mode to the music APP.
The terminal equipment can determine the posture of the user through processing the image. In addition, the terminal device may store a corresponding relationship between the body state of the user and the lyric display mode, and in this case, by querying the corresponding relationship, the terminal device may determine the lyric display mode corresponding to the body state of the user.
In addition, the correspondence between the posture of the user and the lyric display mode may be stored in a remote server, and in this case, after determining the posture of the user, the terminal device may determine the lyric display mode corresponding to the posture of the user by accessing the remote server.
Or, the terminal device may further transmit the at least two images including the user at different times to a remote server, and the remote server determines the body state of the user through image processing, and further determines a lyric display mode corresponding to the body state of the user. Then, the remote server transmits the lyric display mode to the terminal equipment, so that the terminal equipment determines the lyric display mode corresponding to the user posture.
And step S34, in the music playing process of the music APP, displaying first lyrics on the playing interface of the music according to a lyric display mode corresponding to the user posture.
A protocol can be pre-established between the terminal equipment and the music APP, and the protocol comprises instructions corresponding to different lyric display modes. In this case, after determining the lyric display mode corresponding to the user's posture, the terminal device may determine, according to the protocol, a first instruction indicating the lyric display mode corresponding to the user's posture, and transmit the first instruction to the music APP. Correspondingly, after the music APP obtains the first instruction, the lyric display mode can be determined according to the protocol, and then the first lyric is displayed on the playing interface of the music through the lyric display mode.
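The pre-established protocol between the terminal device and the music APP can be sketched as a shared table of instruction codes. The codes and mode names below are hypothetical, not defined by this application:

```python
# Hypothetical protocol shared by the terminal device and the music APP:
# each lyric display mode is assigned an instruction code both sides agree on.
PROTOCOL = {
    0x01: "partial_lyric_beat",
    0x02: "full_lyric_beat",
    0x03: "word_left_right_swing",
}
CODE_FOR_MODE = {mode: code for code, mode in PROTOCOL.items()}

def encode_first_instruction(mode: str) -> int:
    """Terminal-device side: turn the determined display mode into the
    first instruction transmitted to the music APP."""
    return CODE_FOR_MODE[mode]

def decode_first_instruction(code: int) -> str:
    """Music-APP side: recover the display mode from the received instruction."""
    return PROTOCOL[code]
```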
Through the steps, after the terminal device determines that the playing interface of the music is visible on the screen of the terminal device, the terminal device can determine the lyric display mode corresponding to the body state of the user through at least two images containing the user and not being at the same moment, and then transmits an indication to the music APP to indicate the lyric display mode, so that the music APP can display the first lyric on the playing interface of the music according to the lyric display mode corresponding to the body state of the user.
Further, the terminal device may determine the action amplitude of the user according to at least two images including the user, which are not at the same time. In this case, the terminal device may also transmit information indicating the magnitude of the user's action to the music APP. After the music APP obtains the information, the variation amplitude of the first lyrics in the display process is adjusted according to the action amplitude of the user.
Alternatively, in this scenario, the terminal device may be connected to the first server. The first server determines the user's posture through at least two images of the user captured at different times, then determines the lyric display mode corresponding to the posture, and transmits first information indicating that display mode to the terminal device. After the terminal device acquires the first information, it determines the lyric display mode indicated by the information and transmits the mode to the music APP, so that the music APP can display the first lyrics on the playing interface of the music according to the lyric display mode corresponding to the user's posture.
Further, in this case, the first information transmitted by the first server may also be used to indicate the action magnitude of the user. After the terminal device acquires the first information, the action amplitude of the user is determined according to the first information, and the terminal device can also transmit corresponding information indicating the action amplitude of the user to the music APP. Correspondingly, after the music APP receives the transmission of the terminal device, the variation amplitude of the first lyric in the display process can be adjusted according to the action amplitude of the user.
In a music playing method provided in another embodiment of the present application, the method further includes the steps of:
firstly, determining a picture corresponding to at least one word included in the first lyrics, wherein the picture comprises: a dynamic picture and/or a static picture.
The dynamic picture may be of various types, for example a short video or a Graphics Interchange Format (GIF) animation, which is not limited in this embodiment of the present application.
In addition, if the picture corresponding to at least one word included in the first lyric includes multiple pictures, the multiple pictures may be all dynamic pictures or all static pictures, or a part of the multiple pictures is a dynamic picture and another part of the multiple pictures is a static picture, which is not limited in the embodiment of the present application.
And then, in the process of displaying the first lyrics on the playing interface of the music, displaying the picture, wherein the picture is positioned between the background of the playing interface and the display layer of the first lyrics.
That is to say, in the solution provided in the embodiment of the present application, the picture corresponding to at least one word included in the first lyrics is located on the layer above the background of the playing interface. In this case, the picture may block the background of the playing interface, so that while the playing interface displays the first lyrics, the picture effectively serves as the background of the displayed first lyrics.
And because the picture is located on the layer below the display layer of the first lyrics, the first lyrics are not blocked by the picture, which makes it convenient for the user to view the first lyrics while listening to the music.
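The layering described above (interface background at the bottom, the word picture above it, the first lyrics on top) amounts to a fixed z-order; a minimal sketch:

```python
# Render order: layers drawn bottom-up, so the lyric picture sits above
# the interface background but below the first lyrics themselves.
LAYERS = ["interface_background", "lyric_picture", "first_lyrics"]

def z_index(layer: str) -> int:
    """Lower index = drawn earlier = further back."""
    return LAYERS.index(layer)
```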
Through the scheme provided by the steps, in the process of displaying the first lyrics, a picture corresponding to at least one word included in the first lyrics can be displayed on the playing interface at the same time, and the picture is used as the display background of the first lyrics. In this case, the user can also view the picture while viewing the first lyric displayed on the playing interface.
In the prior art, during playing music, a solid background is usually set for the lyrics, for example, in the corresponding example of fig. 1, the background of the lyrics is a gray background. Alternatively, the prior art may also set the cover of the album on which the music is located or the portrait of the music author as the background for the lyric presentation. Therefore, the prior art has a single way of displaying the lyric background.
According to the scheme provided by the embodiment of the application, while the first lyrics are displayed, the picture corresponding to at least one word included in the first lyrics is displayed on the playing interface of the music and serves as the background of the first lyrics.
In addition, in the scheme provided by the embodiment of the application, as the music is played, the first lyrics displayed on the playing interface may change, and correspondingly, the picture corresponding to at least one word included in the first lyrics may also change. That is, in the process of playing the same piece of music, the picture displayed on the playing interface will also change with the change of the first lyrics.
Therefore, by the scheme provided by the embodiment of the application, the music playing interface can display diversified background pictures, and the problem of single lyric background display mode in the prior art is solved. In addition, the background pictures displayed on the playing interface are various, so that the experience of the user in listening to the music can be improved.
In addition, the scheme provided by the embodiment of the application includes a step of obtaining the picture corresponding to at least one word included in the first lyrics and a step of determining the lyric display mode corresponding to the user's posture. In actual music playing there is no strict order between these two steps: the lyric display mode corresponding to the user's posture may be determined first and the picture obtained afterwards, the picture may be obtained first and the display mode determined afterwards, or the two may be obtained at the same time.
Furthermore, in order to ensure the fluency of the pictures displayed on the playing interface, the pictures corresponding to at least one word included in each sentence of lyrics of the music can be determined in advance. In this case, when the first lyrics are displayed on the playing interface of the music, a picture corresponding to at least one word included in the first lyrics is extracted from each picture determined in advance and displayed. In the scheme, because each picture is obtained in advance, the picture corresponding to at least one word included in the first lyrics does not need to be obtained in the process of displaying the first lyrics, and therefore the fluency of the picture displayed on the playing interface is improved.
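Determining in advance the pictures for every lyric line amounts to building a cache before playback starts. A sketch, where `segment` and `picture_for_word` stand in for an assumed word segmenter and word-to-picture correspondence:

```python
def precompute_pictures(lyric_lines, picture_for_word, segment):
    """Determine, before playback, the pictures for at least one word of
    every lyric line, so that display-time lookups are just cache reads.

    `lyric_lines`      -- the lyrics of the music, one line per entry
    `picture_for_word` -- assumed word -> picture mapping
    `segment`          -- assumed word-segmentation helper
    """
    cache = {}
    for line in lyric_lines:
        cache[line] = [picture_for_word[w] for w in segment(line)
                       if w in picture_for_word]
    return cache
```

When the first lyrics are displayed, the corresponding pictures are simply read out of the cache rather than fetched on the fly, which helps keep the displayed pictures fluent.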
Further, in the solution provided in the embodiment of the present application, the following operations are further included:
if the difference between the time for displaying the first lyrics and the time for displaying the second lyrics is smaller than the first time difference, displaying a picture corresponding to at least one word included in the second lyrics while displaying a picture corresponding to at least one word included in the first lyrics.
Wherein a picture corresponding to at least one word included in the second lyrics is positioned between a background of the playing interface and a display layer of the first lyrics.
That is to say, the picture corresponding to at least one word included in the second lyrics is located on the upper layer of the background of the playing interface and located on the lower layer of the display layer of the first lyrics.
In the embodiment of the present application, the difference between the time for displaying the first lyrics and the time for displaying the second lyrics being smaller than the first time difference means that the display time of the first lyrics is close to that of the second lyrics; in general, the second lyrics may be the lyrics corresponding to a music time interval adjacent to that of the first lyrics. For example, the music time interval corresponding to the second lyrics may be the one immediately after that of the first lyrics, or the one immediately before it; or the second lyrics may include both the lyrics of the previous music time interval and the lyrics of the next one.
In addition, the duration of the first time difference may be preset and may also be adjusted according to a user operation, and for example, the first time difference may be 1 second.
If the difference between the time of displaying the first lyrics and the time of displaying the second lyrics is smaller than the first time difference, the rhythm of the music between the first and second lyrics is fast, and the transition from playing the first lyrics to playing the second lyrics takes little time.
In this case, to avoid the discomfort caused to the user by pictures on the playing interface switching too fast, the pictures corresponding to at least one word included in the first lyrics and the pictures corresponding to at least one word included in the second lyrics can both be displayed on the playing interface, so that the pictures corresponding to lyrics of different music time intervals are combined into the pictures displayed on the playing interface.
Illustratively, suppose the lyrics of a piece of music in successive music time intervals include … lyric 1, lyric 2, lyric 3, lyric 4 …. When lyric 2 is displayed on the playing interface in the lyric display mode corresponding to the user's posture, a picture combined from the pictures corresponding to lyric 1, lyric 2 and lyric 3 can also be displayed on the playing interface. Similarly, when lyric 3 is played, a picture combined from the pictures corresponding to lyric 2, lyric 3 and lyric 4 can be displayed on the playing interface.
In this case, of two pictures displayed consecutively by the playing interface, part of the content is the same and part is different, so the pictures displayed on the playing interface have a certain continuity, and the discomfort caused to the user by pictures switching too fast can be avoided.
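The first-time-difference rule above can be sketched as follows: when an adjacent lyric's display time falls within the first time difference, its pictures are merged into the currently displayed set. The data layout is an illustrative assumption; the one-second default follows the example given in the text, which also notes it may be adjusted by user operation:

```python
FIRST_TIME_DIFF = 1.0  # seconds; preset, adjustable per user operation

def pictures_to_display(times, pictures, index, first_time_diff=FIRST_TIME_DIFF):
    """Pictures shown while lyric `index` is displayed: its own pictures,
    plus those of adjacent lyrics whose display time is within the first
    time difference, so consecutive frames share part of their content.

    `times`    -- display time (seconds) of each lyric line
    `pictures` -- list of pictures for each lyric line
    """
    shown = list(pictures[index])
    for j in (index - 1, index + 1):  # previous and next music time intervals
        if 0 <= j < len(times) and abs(times[index] - times[j]) < first_time_diff:
            shown += pictures[j]
    return shown
```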
In the above solution, an operation of determining a frame corresponding to at least one word included in the first lyric is provided, and the operation may be implemented in various forms.
In one possible implementation manner, the determining a frame corresponding to at least one word included in the first lyric may include the following steps:
determining a picture corresponding to at least one word included in the first lyrics by querying a database, wherein the database includes at least one picture and a corresponding relation between the picture and the word.
In this implementation manner, word segmentation processing is first performed on the first lyrics to obtain at least one word included in the first lyrics; then, according to the correspondence between words and pictures, the picture corresponding to the at least one word included in the first lyrics is determined.
Usually, each lyric is composed of at least one word, so in this step, at least one word included in the first lyric can be obtained by performing word segmentation on the first lyric. Illustratively, if the first lyric is "I wait for you", word segmentation can yield several words, such as "I wait for you", "I", "wait for you", "wait for" and "you".
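The segmentation-plus-lookup flow can be sketched as below; the database contents are hypothetical, and the substring-based segmentation is only a stand-in for a real word-segmentation library:

```python
# Hypothetical word→picture database (the "correspondence between the
# picture and the word" described above).
PICTURE_DB = {
    "rain": "rain_dynamic.gif",
    "I am waiting for you": "silhouette_static.png",
}

def segment(lyric: str) -> list:
    """Stand-in for real word segmentation: keep the database entries
    that occur in the lyric (a real system would use a segmentation
    library such as jieba, so results follow database order here)."""
    return [word for word in PICTURE_DB if word in lyric]

def pictures_for_lyric(lyric: str) -> list:
    """Map each segmented word to its picture via the database."""
    return [PICTURE_DB[word] for word in segment(lyric)]

lyric = "The sky blue waits for the misty rain, and I am waiting for you"
```

Calling `pictures_for_lyric(lyric)` yields one dynamic picture for "rain" and one static picture for "I am waiting for you", matching the example given later in the text.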
The operation of performing word segmentation on the first lyric to obtain at least one word included in the first lyric may be performed by the system executing the music playing method provided in the embodiment of the present application, or may be performed by a third server accessible to the system. In the latter case, the system may transmit the first lyrics to the third server, so that the third server determines at least one word included in the first lyrics through word segmentation processing of the first lyrics. The third server then transmits the at least one word included in the first lyrics to the system.
In addition, in the embodiment of the present application, a database may be provided, where at least one picture is in the database, and the database may further store a correspondence between the picture and the word. The pictures stored in the database often include dynamic pictures and/or static pictures.
The database may be provided in a system for executing the music playing method provided by the embodiment of the present application. Or, the database may be disposed in an independent server, which may be referred to as a fourth server, and the system for executing the music playing method provided in the embodiment of the present application may access the fourth server through a network to perform a query on the fourth server, so as to determine, according to a query result, a picture corresponding to at least one term included in the first lyrics.
Illustratively, the first lyrics are "The sky blue waits for the misty rain, and I am waiting for you", and the words in the first lyrics include "rain" and "I am waiting for you". The picture corresponding to the word "rain" may be a dynamic picture of falling rain, and the picture corresponding to "I am waiting for you" may be a static picture of a person's silhouette seen from behind. In this case, the pictures corresponding to at least one word included in the first lyrics include the dynamic picture of rain and the static picture of the silhouette.
Or, in another example, the first lyric is "Snowflakes drifting, the north wind whistling", and the words in the first lyric include "snowflakes drifting" and "the north wind whistling". The picture corresponding to "snowflakes drifting" may be a dynamic picture of drifting snowflakes, and the picture corresponding to "the north wind whistling" may be a dynamic picture of a howling gale. In this case, referring to the example diagram shown in fig. 9, the pictures corresponding to at least one word included in the first lyric include the dynamic picture of drifting snowflakes and the dynamic picture of a howling gale.
Further, in the embodiment of the present application, one or more databases may be provided. If a plurality of databases are provided, the style of the pictures stored in each database may be different, and the user can designate one of the databases according to his or her own preference. In this case, the database designated by the user may be queried according to the received user designation operation, so that the determined picture corresponding to at least one word included in the first lyric meets the user's preference.
For example, in the embodiment of the present application, three databases may be provided, where the first database includes comic-style pictures, the second database includes ink-wash-style pictures, and the third database includes oil-painting-style pictures. If the users include children, they often want the background of the playing interface to present a comic-style picture. In this case, a designation operation for the first database may be received, and the first database is queried to determine the picture corresponding to at least one word included in the first lyrics, so that the determined picture is a comic-style picture and meets the users' requirements.
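The user-designated choice among style-specific databases might look like this sketch; the database names and contents are hypothetical:

```python
# Hypothetical style-specific picture databases.
DATABASES = {
    "comic":        {"snowflake": "snow_comic.gif"},
    "ink-wash":     {"snowflake": "snow_ink.gif"},
    "oil-painting": {"snowflake": "snow_oil.gif"},
}

def lookup_picture(word: str, user_style: str = "comic"):
    """Query the database the user designated; fall back to the comic
    style when the designation is missing or unknown."""
    db = DATABASES.get(user_style, DATABASES["comic"])
    return db.get(word)
```

With this layout, a child's designation of the comic database and an adult's designation of the ink-wash database return different pictures for the same word.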
In another possible implementation manner, the determining a screen corresponding to at least one word included in the first lyric may include the following steps:
firstly, transmitting the first lyrics to a second server so that the second server determines a picture corresponding to at least one word included in the first lyrics according to the first lyrics;
then, a picture corresponding to at least one word included in the first lyrics transmitted by the second server is obtained.
The second server stores the pictures corresponding to the words and the correspondence between the words and the pictures. In this case, the system executing the music playing method provided by the embodiment of the present application may transmit the first lyrics to the second server.
After the first lyrics are obtained, the second server may perform word segmentation processing on the first lyrics, obtain at least one word included in the first lyrics, determine a picture corresponding to the at least one word included in the first lyrics according to a correspondence between the word and the picture, and then transmit the picture corresponding to the at least one word to a system that executes the music playing method provided by the embodiment of the present application.
In this embodiment, the system for executing the music playing method provided by the embodiment of the present application may sequentially transmit each lyric in the music to the second server during the music playing process.
In addition, in order to ensure the fluency of the frames displayed on the playing interface, in this implementation manner, when the system executing the music playing method provided by the embodiment of the present application transmits the first lyrics to the second server, other lyrics of the music may also be transmitted to the second server at the same time, for example, when it is determined that the music needs to be played, the system may transmit all the lyrics of the music to the second server. After receiving the first lyrics and the other lyrics, the second server determines a picture corresponding to at least one word included in the first lyrics and the other lyrics, and transmits the picture to a system executing the music playing method provided by the embodiment of the application.
In this case, the system for executing the music playing method provided by the embodiment of the present application may obtain, through interaction with the second server in advance, pictures respectively corresponding to at least one word included in each lyric in the music, and extract and display a corresponding picture from the obtained pictures when the pictures corresponding to the at least one word included in the first lyric need to be displayed.
In this implementation manner, since the system obtains the pictures corresponding to at least one word included in each lyric in the music in advance, the picture to be displayed does not need to be determined by the first lyric when the first lyric is displayed, and therefore, the fluency of the display picture of the playing interface can be guaranteed.
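The prefetching idea above can be sketched as a one-shot cache fill, with the server round trip represented by a callable; all names are illustrative:

```python
def prefetch_pictures(lyrics, fetch):
    """Send every lyric to the server once playback is requested and
    cache the returned pictures, so that displaying any single lyric
    never has to wait on the network."""
    return {lyric: fetch(lyric) for lyric in lyrics}

# `fetch` stands in for the round trip to the (hypothetical) second server.
cache = prefetch_pictures(["lyric 1", "lyric 2"],
                          lambda text: [text + ".png"])

# At display time, the pictures for the first lyric are a cache hit.
pictures_to_show = cache["lyric 1"]
```

Because the cache is filled before playback reaches the first lyric, the display path is a dictionary lookup rather than a network request, which is the fluency argument made in the text.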
Further, in the implementation scheme, the second server processes the first lyric to obtain a picture corresponding to at least one word included in the first lyric, so that operations to be executed by a system executing the music playing method provided by the embodiment of the application are reduced, and therefore, the fluency of the picture displayed on the playing interface can be improved.
The system for executing the music playing method provided by the embodiment of the application can comprise various forms. In order to clarify the scheme provided by the embodiment of the present application for determining the picture corresponding to at least one word included in the first lyrics, the present application provides an example of a music playing scene for different forms of the system.
In a third scenario, a system for executing the music playing method provided by the embodiment of the present application includes a music APP. In this scenario, referring to the interaction schematic diagram shown in fig. 10, in the solution provided in the embodiment of the present application, the method may include the following steps:
step S25, the music APP transmits the first lyrics to the second server.
The music APP may sequentially transmit each sentence of lyrics in the music to the second server in the process of playing the music, or the music APP may also simultaneously transmit other lyrics of the music to the second server when transmitting the first lyrics, which is not limited in the embodiment of the present application.
For example, the music APP may transmit all lyrics of a certain music to the second server when receiving an operation of a user instructing to play the music and determining that the music needs to be played.
Step S26, after the second server obtains the first lyric, determining a picture corresponding to at least one word included in the first lyric.
The database of the second server can store at least one picture and the corresponding relation between the picture and the words. After receiving the first lyrics transmitted by the music APP, the second server may perform word segmentation on the first lyrics, obtain at least one word included in the first lyrics, and determine a picture corresponding to the at least one word according to a correspondence between pictures and words.
In addition, if the second server also receives other lyrics, in this step, the second server may further obtain a picture corresponding to the at least one word included in the other lyrics according to the above step.
Step S27, the second server transmits a picture corresponding to at least one word included in the first lyrics to the music APP.
In addition, in this step, if the second server has acquired the picture corresponding to at least one word included in the other lyrics, the second server may also transmit the picture corresponding to at least one word included in the other lyrics to the music APP.
Step S28, in the process of displaying the first lyrics on the playing interface of the music, the music APP displays a picture corresponding to at least one word included in the first lyrics on the playing interface of the music.
Through the operations in steps S25 to S28, the music APP may obtain and display a picture corresponding to at least one word included in the first lyric through interaction with the second server.
In addition, in the schematic diagram shown in fig. 10, the operation of step S25 is performed after the lyric display mode corresponding to the user's posture is determined. In an actual music playing scene, the lyric display mode corresponding to the user's posture may instead be determined after the music APP determines, through interaction with the second server, the picture corresponding to at least one word included in the first lyrics, or the lyric display mode and the picture may be determined at the same time; this is not limited in the embodiment of the present application.
In the scheme described in step S25 to step S28, the music APP obtains a picture corresponding to at least one word included in the first lyric through transmission of the second server.
In addition, the music APP can also query a database, and a picture corresponding to at least one term included in the first lyrics is determined by querying the database. The database comprises at least one picture and a corresponding relation between the picture and the words.
In this case, the music APP may perform word segmentation on the first lyric to obtain at least one word included in the first lyric, and then query the database according to the correspondence between pictures and words, thereby determining the picture corresponding to the at least one word included in the first lyric.
In a fourth scenario, the system for executing the music playing method provided by the embodiment of the present application includes a terminal device, where a music APP is installed in the terminal device, and the music APP can be operated to play music. In this scenario, referring to the interaction schematic diagram shown in fig. 11, the scheme provided in the embodiment of the present application includes the following steps:
and step S35, the music APP transmits first lyrics to the terminal equipment.
The music APP can sequentially transmit each sentence of lyrics in the music to the terminal device in the music playing process, or the music APP can simultaneously transmit other lyrics of the music to the terminal device when transmitting the first lyric, which is not limited in the embodiment of the application.
And step S36, the terminal equipment transmits the first lyrics to a second server.
Referring to the scene diagram shown in fig. 12, in this example, the terminal device 10 plays music through a music APP installed by itself for one or more users to listen to. And, the terminal device 10 is connected to a second server 20 through a network, and after acquiring the first lyrics transmitted by the music APP, the terminal device 10 may transmit the first lyrics to the second server 20.
Step S37, after the second server obtains the first lyric, determining a picture corresponding to at least one word included in the first lyric.
In the second server, at least one picture and the corresponding relationship between the picture and the word are stored. In this step, the second server may generally perform word segmentation processing on the first lyrics to obtain at least one word included in the first lyrics, and then determine a picture corresponding to the at least one word according to the corresponding relationship.
Step S38, the second server transmits a picture corresponding to at least one word included in the first lyric to the terminal device.
Step S39, the terminal equipment transmits a picture corresponding to at least one word included in the first lyric to the music APP.
Step S40, the music APP obtains a picture corresponding to at least one word included in the first lyrics transmitted by the terminal device, and in the process of displaying the first lyrics on the playing interface of the music, the picture corresponding to at least one word included in the first lyrics is displayed on the playing interface of the music.
According to the operations from step S37 to step S39, the terminal device may obtain, through interaction with the second server, a picture corresponding to at least one word included in the first lyrics transmitted by the second server, and then transmit the picture corresponding to the at least one word included in the first lyrics to the music APP, so that the music APP displays the picture corresponding to the at least one word included in the first lyrics.
In addition, in this scenario, if the terminal device itself stores the picture and the corresponding relationship between the picture and the words, after acquiring the first lyrics transmitted by the music APP, the terminal device may determine the picture corresponding to at least one word included in the first lyrics by querying the storage of the terminal device itself.
In addition, in the schematic diagram shown in fig. 11, the operation of step S35 is performed after the lyric display mode corresponding to the user's posture is determined. In an actual music playing scene, the first lyrics may be transmitted to the second server before the lyric display mode is determined. Alternatively, the first lyrics may also be transmitted to the second server in the process of determining the lyric display mode, which is not limited in the embodiment of the present application.
In the above scheme, a scheme of displaying a first lyric of the music on a playing interface according to a lyric display mode corresponding to a user's posture in a music playing process and displaying a picture corresponding to at least one word included in the first lyric in the process of displaying the first lyric is described. Furthermore, in the music playing process, the font of the displayed first lyrics can be adjusted. Correspondingly, in the embodiment of the present application, the following steps may also be included:
firstly, determining the font display style of the first lyrics according to the font display elements of the first lyrics, where the font display elements include: at least one of the music style of the music, the font of the album cover of the music, and the accent of the first lyrics.
Then, when the first lyrics are displayed on the playing interface of the music, the font of the first lyrics shown on the playing interface is set or adjusted according to the font display style of the first lyrics.
The font display style may include: the type, thickness, color and corresponding animation effect of the font. The font types may include Song typeface, regular script, clerical script, and the like; of course, other types of fonts may also be included, which is not limited in this embodiment of the present application.
In the embodiment of the present application, a corresponding relationship between the font display element and the font display style may be set, and in this case, the font display style of the first lyric may be determined according to the font display element of the first lyric and the corresponding relationship, and the first lyric may be displayed according to the font display style.
In an embodiment of the application, the font display style of the first lyrics may be determined based on one or more font display elements. The font display element can include the music style of the music; illustratively, music styles can include rock, heavy metal, ancient style, light music, and the like. Music in the rock and heavy-metal styles often contains more accents, so when the music style is rock or heavy metal, the corresponding font display style can include a thicker font; music in the ancient and light styles is usually more relaxed, so the corresponding font display style can include a finer font. Correspondingly, if the music style of the music corresponding to the first lyrics is rock or heavy metal, the first lyrics are displayed in a thick font on the playing interface.
In addition, the font display elements may include the album cover of the music. The album cover is often marked with the album name, and in this case, the font display style of the first lyrics can be determined according to the font used on the album cover. For example, it may be determined that the font display style of the first lyrics is the same as that of the album cover: if the font of the album name marked on the album cover is a bold regular script, the first lyrics are also displayed in bold regular script; and if the font of the album name is red, the first lyrics are also displayed in red.
The font display element may include the accent of the first lyric. Illustratively, when most words in a lyric are accented, the corresponding font display style includes a thicker font, and when most words in a lyric are unaccented, the corresponding font display style includes a finer font. Correspondingly, if most words in the first lyrics are accented, the first lyrics are displayed in a thicker font on the playing interface.
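A minimal sketch combining the font display elements discussed above (music style, album-cover font, and share of accented words); the thresholds, genre names, and style keys are assumptions for illustration:

```python
def font_style(genre=None, album_font=None, accent_ratio=0.0):
    """Derive a font display style from the available font display
    elements: the music style (genre), the font used on the album
    cover, and the fraction of accented words in the lyric."""
    style = {"weight": "normal"}
    if album_font:
        style.update(album_font)  # mirror the album cover's font choices
    if genre in ("rock", "heavy metal") or accent_ratio > 0.5:
        style["weight"] = "bold"
    elif genre in ("ancient style", "light music"):
        style["weight"] = "light"
    return style
```

For instance, a rock track or a mostly accented lyric yields a bold weight, while a light-music track yields a fine weight, and an album cover's color carries over unchanged.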
Further, the font display style may further include an animation effect, and accordingly, in the solution provided in the embodiment of the present application, a corresponding relationship between the font display element and the animation effect may also be set. In this case, the animation effect corresponding to the first lyric may be determined according to the font display element of the first lyric and the corresponding relationship, and the animation effect corresponding to the first lyric may be displayed in the process of displaying the first lyric. Wherein the animation effect can be realized in the forms of video and/or pictures in GIF format.
For example, when most words in the first lyrics are accents, the corresponding animation effect may be an splash-ink animation effect, in which case, when the playing interface displays the first lyrics, the splash-ink animation effect is also displayed around the first lyrics. The splash-ink animation effect may be a video showing the splash-ink, or may be a splash-ink GIF format picture.
In the prior art, the playing interface usually displays the lyrics in a fixed font display style. For example, if the user sets the playing interface to display lyrics in a red Song typeface, the playing interface always displays the lyrics in that red Song typeface until another font adjustment operation is received from the user.
According to the scheme provided by the embodiment of the application, the font display style of the first lyrics in the display process can be correspondingly adjusted according to the font display element of the first lyrics, so that the font display style of the lyrics displayed on the playing interface changes along with the change of the font display element. Therefore, compared with the prior art, the scheme provided by the embodiment of the application can realize diversified display of the lyrics on the playing interface.
In addition, if the system for executing the music playing method provided by the embodiment of the present application includes a music APP, the music APP may determine the font display style of the first lyric through the correspondence between the font display elements stored in the music APP and the font display styles. Or, the music APP can inquire a remote server, the remote server can store the corresponding relation between the font display element and the font display style, and the music APP can determine the font display style of the first lyrics by inquiring the remote server.
In another scenario, a system for executing the music playing method provided by the embodiment of the present application may include a terminal device, where a music APP for playing music is installed in the terminal device. Under the condition, the terminal equipment can obtain the font display element of the first lyric through interaction with the music APP, determine the font display style of the first lyric according to the font display element of the first lyric, and transmit the font display style of the first lyric to the music APP, so that the music APP can display the first lyric on a playing interface through the font display style of the first lyric.
The terminal device may store a corresponding relationship between the font display element and the font display style, or the terminal device may access a remote server, and the remote server may store a corresponding relationship between the font display element and the font display style. After determining the font display element of the first lyric through interaction with the music APP, the terminal device may determine the font display style corresponding to the font display element of the first lyric by querying the storage of the terminal device itself or by querying a remote server.
Further, in the solution provided in the embodiment of the present application, the following operations are further included:
determining an animation effect corresponding to the tone of the playing music time interval;
and displaying the animation effect while displaying the first lyrics on the playing interface of the music, wherein the animation effect is positioned between the background of the playing interface and the display layer of the first lyrics.
That is to say, the animation effect is located above the background of the playing interface and below the display layer of the first lyrics.
Different sounding bodies produce sounds of different timbres because their materials and structures differ. In this case, in the embodiment of the present application, a correspondence between different timbres and animation effects may be set, and through this correspondence, the animation effect corresponding to the timbre of the playing music period may be determined.
If the system for executing the music playing method provided by the embodiment of the application comprises the music APP, the music APP can determine the animation effect corresponding to the playing music time interval through the corresponding relation between the tone and the animation effect stored by the music APP. Or, the music APP may query a remote server, the remote server may store a correspondence between timbres and animation effects, and the music APP may determine the animation effects corresponding to the playing music period by querying the remote server.
Or, the system for executing the music playing method provided by the embodiment of the present application may include a terminal device, where a music APP for playing music is installed in the terminal device. In this case, the terminal device may determine the music time interval being played through interaction with the music APP, and determine the timbre of the music time interval through analysis of this music time interval. And determining corresponding animation effects according to the corresponding relation between the tone and the animation effects, and transmitting the animation effects to the music APP so that the music APP can display the animation effects on a playing interface.
The terminal device may store the corresponding relationship between the timbre and the animation effect, or the terminal device may access a remote server, and the remote server may store the corresponding relationship between the timbre and the animation effect. Correspondingly, the terminal device can determine the animation effect corresponding to the tone of the playing music time interval by inquiring the storage of the terminal device or inquiring a remote server.
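The local-store-first, remote-server-fallback lookup described above can be sketched as below; the table contents and names are illustrative:

```python
# Timbre→animation correspondence stored on the device itself.
LOCAL_TIMBRE_ANIMATIONS = {
    "drum": "circular ripples",
    "strings": "wavy lines",
}

def animation_for_timbre(timbre, remote_lookup=None):
    """Check the device's own store first; otherwise fall back to the
    remote server, represented here by a callable."""
    if timbre in LOCAL_TIMBRE_ANIMATIONS:
        return LOCAL_TIMBRE_ANIMATIONS[timbre]
    return remote_lookup(timbre) if remote_lookup else None
```

A timbre present in the local store is resolved without any network traffic; only an unknown timbre triggers the remote query.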
Through the scheme of the embodiment of the application, the first lyrics can be displayed, meanwhile, the animation effect corresponding to the tone of the playing music time interval is displayed, the display content of the playing interface is enriched, and diversified display of the playing interface is realized.
The timbres of different instruments are often different, and in a feasible design, the animation effect corresponding to the timbre of the music time interval can be set as a dynamic picture corresponding to the instrument emitting the timbre. If the melody of the playing music time interval is sent out by a drum and the animation effect corresponding to the drum is a dynamic picture with circular ripples, the dynamic picture with circular ripples can be displayed on a playing interface as shown in fig. 13; if the melody of the playing music time interval is sent out by the bowstring and the animation effect corresponding to the bowstring is a dynamic picture of a wave line, the dynamic picture of the circular ripple can be displayed on the playing interface.
Further, in the embodiment of the present application, the animation effect may be adjusted according to the rhythm speed of the playing music time interval. In this case, in the solution provided in the embodiment of the present application, the following steps may be further included:
and adjusting the evolution effect of the animation effect according to the rhythm speed of the playing music time interval.
For example, when the tempo of the playing music period is fast, the evolution of the animation effect is fast, matching the fast tempo and presenting an intense, sharp-style animation; when the tempo is slow, the evolution of the animation effect is slow, matching the slow tempo and presenting a gentle-style animation.
In this case, after the music APP determines the animation effect corresponding to the playing music period, the music APP may further adjust the evolution of the animation effect according to the speed of the tempo, and display the adjusted animation effect.
For example, the animation effect corresponding to the playing music period is set to be a dynamic picture of rising bubbles, and the evolution of the animation effect is the dynamic process of the bubbles rising. Referring to the example diagram shown in fig. 14, in this example, the faster the tempo of the music period, the greater the number of bubbles shown on the playing interface and the faster the bubbles rise, so that the evolution of the animation effect shown on the playing interface matches the tempo of the playing music period.
In another example, the animation effect corresponding to the playing music period is set to be a dynamic picture in which the transparency of the bubbles changes, and the evolution of the animation effect is the dynamic process of that transparency change. In this case, the faster the tempo of the music period, the faster the transparency of the bubbles displayed on the playing interface changes, so that the evolution of the animation effect matches the tempo of the music period.
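A sketch of scaling the animation's evolution with tempo, using the bubble example above; the 120 BPM reference point and parameter names are assumptions:

```python
def bubble_parameters(tempo_bpm, base_count=10, base_rise_speed=1.0):
    """Scale the number of bubbles and their rise speed with the tempo
    of the playing music period, so a faster passage shows more
    bubbles rising more quickly."""
    factor = tempo_bpm / 120.0  # 120 BPM taken as the reference tempo
    return {"count": round(base_count * factor),
            "rise_speed": base_rise_speed * factor}
```

At the reference tempo the animation runs with its base parameters; doubling the tempo doubles both the bubble count and the rise speed, which is the matching behavior the paragraphs above describe.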
Through the scheme, the evolution effect of the animation effect displayed on the playing interface corresponds to the rhythm speed of the music time interval, so that better watching experience can be brought to the user, the immersive experience of the user is enhanced, and the diversification of the display style of the playing interface is further improved.
The following are embodiments of an apparatus of the present application that may be used to perform embodiments of the methods of the present application. For details which are not disclosed in the device embodiments of the present application, reference is made to the method embodiments of the present application.
As an implementation of the foregoing embodiments, an embodiment of the present application discloses a music playing device. Referring to the schematic structural diagram shown in fig. 15, the music playing apparatus includes: a processor 210 and a display 220.
Wherein, the processor 210 is configured to determine a lyric display mode corresponding to the user's posture;
the display 220 is configured to display a playing interface of the music, and in the playing process of the music, the playing interface of the music displays first lyrics according to the lyric display mode, where the first lyrics include lyrics corresponding to a playing music time interval.
In an embodiment of the present application, the posture of the user may include: a local limb movement of the user or a full body movement of the user.
The local limb action of the user may include at least one of a single-hand action, a two-hand action, a finger action, and a head action. For example, the single-hand action may be a wave-like swing of one hand or a waving of one hand up and down; the two-hand action may be a wave-like swing of both hands or a waving of both hands up and down; the finger action may be a preset finger gesture, for example a thumbs-up gesture; and the head action may be a side-to-side or back-and-forth swing of the head.
In addition, a whole-body action of the user is an action in which the user's whole body participates. In one possible implementation, the whole-body action may include at least one of jumping, whole-body swinging, and walking.
In the embodiment of the present application, the lyric display mode includes at least one of lyric jumping, lyric wavy swinging, and lyric left-right swinging. Lyric jumping means that some or all of the words in the lyrics jump on the playing interface; lyric wavy swinging means that some or all of the words swing in a wave pattern on the playing interface; and lyric left-right swinging means that some or all of the words swing in the left-right direction on the playing interface.
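The correspondence between a user's posture and a lyric display mode can be sketched as a lookup table; the concrete pairings below are invented for illustration, since the embodiment leaves the mapping open.

```python
# Illustrative correspondence between a recognized user posture and a
# lyric display mode; the pairings are assumptions, not specified above.
POSTURE_TO_DISPLAY_MODE = {
    "single_hand_wave": "lyric_wavy_swing",
    "two_hand_swing": "lyric_left_right_swing",
    "head_swing": "lyric_left_right_swing",
    "jumping": "lyric_jumping",
}

def lyric_display_mode(posture, default="lyric_jumping"):
    """Return the lyric display mode for a posture, falling back to a
    default when the posture has no entry in the correspondence table."""
    return POSTURE_TO_DISPLAY_MODE.get(posture, default)
```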
Further, in this embodiment of the application, the processor is specifically configured to determine the user's posture according to at least two images that include the user and were captured at different moments, and to determine the lyric display mode corresponding to that posture according to the correspondence between user postures and lyric display modes.
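A minimal sketch of determining the user's posture from two images taken at different moments might compare body keypoints between the frames; the keypoint format (normalized (x, y) pairs with y increasing downward), the threshold, and the two-way classification are assumptions for illustration only.

```python
def classify_posture(keypoints_t0, keypoints_t1, jump_threshold=0.15):
    """Compare body keypoints from two images taken at different
    moments. If the body as a whole moved upward beyond a threshold,
    classify the posture as jumping; otherwise as whole-body swinging.
    Thresholds and the classification scheme are illustrative."""
    # Average vertical displacement across corresponding keypoints.
    dy = sum(p1[1] - p0[1] for p0, p1 in zip(keypoints_t0, keypoints_t1))
    dy /= len(keypoints_t0)
    if dy < -jump_threshold:   # moved up (y decreases upward in images)
        return "jumping"
    return "whole_body_swinging"
```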
Alternatively, the processor is specifically configured to determine the lyric display mode corresponding to the user's posture according to first information transmitted by the first server, where the first information indicates the lyric display mode corresponding to the user's posture.
Further, in this embodiment of the application, the processor is further configured to, when the first lyrics are displayed on the playing interface of the music, adjust the variation amplitude of the first lyrics during display according to the amplitude of the user's action.
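The adjustment of the variation amplitude of the first lyrics according to the user's action amplitude could be sketched as a simple clamped scaling; the 0.0–1.0 input range, the pixel units, and the linear law are assumptions.

```python
def lyric_swing_amplitude(user_action_amplitude, max_px=40):
    """Scale the on-screen swing amplitude of the displayed lyrics by
    the user's action amplitude (assumed normalized to 0.0-1.0),
    clamped to a maximum pixel offset. Units and scaling are
    illustrative."""
    a = min(max(user_action_amplitude, 0.0), 1.0)  # clamp to [0, 1]
    return a * max_px
```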
Additionally, the processor is further configured to determine a picture corresponding to at least one word included in the first lyrics, where the picture includes: a dynamic picture and/or a static picture;
the display is further configured to display the picture on a playing interface of the music in a process of displaying the first lyrics, where the picture is located between a background of the playing interface and a display layer of the first lyrics.
Further, in this embodiment of the application, the display is further configured to, if a difference between a time for displaying the first lyric and a time for displaying the second lyric is smaller than a first time difference, display a picture corresponding to at least one word included in the second lyric on a playing interface of the music while displaying a picture corresponding to at least one word included in the first lyric, where the picture corresponding to at least one word included in the second lyric is located between a background of the playing interface and a display layer of the first lyric.
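The time-difference condition above can be sketched as follows; the threshold value and the list representation of the pictures are illustrative.

```python
def pictures_to_show(t_first, t_second, first_pics, second_pics,
                     first_time_difference=2.0):
    """Decide which word pictures to layer between the playing-interface
    background and the lyric display layer: when the second lyrics are
    displayed within the first time difference of the first lyrics,
    both sets of pictures are shown together. The threshold of 2.0
    seconds is an illustrative assumption."""
    if abs(t_second - t_first) < first_time_difference:
        return first_pics + second_pics
    return first_pics
```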
In the embodiment of the present application, the picture corresponding to at least one word included in the first lyrics may be determined in various ways. In a possible implementation, the processor is specifically configured to transmit the first lyrics to a second server, so that the second server determines, according to the first lyrics, the picture corresponding to at least one word included in the first lyrics, and to obtain that picture from the second server.
Or, in another possible implementation manner, the processor is specifically configured to determine, by querying a database, a picture corresponding to at least one word included in the first lyric, where the database includes at least one picture and a correspondence between pictures and words.
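The database branch can be sketched with a local correspondence table; the table entries and file names below are invented for illustration.

```python
# Illustrative local database holding the correspondence between words
# and pictures; the entries are invented, not taken from the embodiment.
WORD_PICTURE_DB = {
    "rain": "rain_animation.gif",   # dynamic picture
    "star": "star_static.png",      # static picture
}

def pictures_for_lyric(lyric_words):
    """Return the pictures corresponding to those words of the first
    lyrics that have an entry in the database; words without an entry
    contribute no picture."""
    return [WORD_PICTURE_DB[w] for w in lyric_words if w in WORD_PICTURE_DB]
```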
The processor is further configured to determine a font display style of the first lyrics according to font display elements of the first lyrics, where the font display elements include: at least one of a style of the music, a font of album art of the music, and a subwoofer of the first lyrics;
and, when the first lyrics are displayed on the playing interface of the music, the processor is further configured to display or adjust the font of the first lyrics shown on the playing interface according to the font display style of the first lyrics.
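The selection of a font display style from the font display elements could be sketched as follows; the priority order (album-art font first) and the style-to-font table are assumptions, since the embodiment only lists the elements without fixing how they combine.

```python
def font_display_style(music_style=None, album_font=None):
    """Pick a font for the first lyrics from the available font display
    elements, preferring the album-art font when one is present. The
    priority order and the style-to-font table are illustrative."""
    if album_font:
        return album_font
    # Fall back to a font derived from the style of the music.
    style_fonts = {"rock": "bold-sans", "ballad": "serif-italic"}
    return style_fonts.get(music_style, "default-sans")
```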
Further, in the solution provided in this embodiment of the application, the processor is further configured to determine an animation effect corresponding to a tone of the playing music time interval;
the display is further configured to display the animation effect on a playing interface of the music while displaying the first lyrics, where the animation effect is located between a background of the playing interface and a display layer of the first lyrics.
Further, in the solution provided in this embodiment of the application, the processor is further configured to adjust an evolution effect of the animation effect according to the speed of the melody of the music time interval being played.
Correspondingly, the embodiment of the application also discloses a terminal device corresponding to the music playing method. Referring to the schematic structural diagram shown in fig. 16, the terminal apparatus includes:
at least one processor 1101 and a memory,
wherein the memory is to store program instructions;
the processor is configured to call and execute the program instructions stored in the memory, so as to cause the terminal device to perform all or part of the steps in the embodiments corresponding to fig. 3, fig. 6 to fig. 8, and fig. 10 to fig. 11.
Further, the terminal device may further include: a transceiver 1102, a bus 1103, a random access memory 1104, and a read-only memory 1105.
The processor is coupled to the transceiver, the random access memory, and the read-only memory through the bus. When the terminal device needs to run, it is guided into a normal operating state by the basic input/output system solidified in the read-only memory or, in an embedded system, by a bootloader. After the device enters the normal operating state, the application program and the operating system run in the random access memory, so that the terminal device performs all or part of the steps in the embodiments of fig. 3, fig. 6 to fig. 8, and fig. 10 to fig. 11.
The apparatus according to the embodiment of the present invention may correspond to the music playing method in the embodiments corresponding to fig. 3, fig. 6 to fig. 8, and fig. 10 to fig. 11, and the processor in the terminal apparatus may implement the functions of the music playing apparatus and/or the various steps and methods implemented in the embodiments corresponding to fig. 3, fig. 6 to fig. 8, and fig. 10 to fig. 11, which are not described herein again for brevity.
In particular implementations, an embodiment of the present application further provides a computer-readable storage medium that includes instructions. When the instructions are run on a computer, the computer may perform all or part of the steps in the embodiments corresponding to fig. 3, fig. 6 to fig. 8, and fig. 10 to fig. 11. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
In addition, another embodiment of the present application also discloses a computer program product containing instructions, which when run on an electronic device, enables the electronic device to implement all or part of the steps in the embodiments corresponding to fig. 3, fig. 6 to fig. 8, and fig. 10 to fig. 11.
In a possible implementation, the computer program product containing the instructions may be a music APP. In this implementation, the music APP may apply the music playing method provided in the embodiments of the present application while playing music, so that during playback the lyrics of the music are displayed in the music playing interface according to the lyric display mode corresponding to the user's posture.
Correspondingly, in the embodiment of the present application, a music playing system is provided, which may include a terminal device and a music APP, where the music APP is installed in the terminal device, and the terminal device may implement playing of music by running the music APP.
In this case, the music playing system may display the lyrics of the music in a playing interface of the music according to a lyric display mode corresponding to the user's posture during the music playing process by executing the music playing method provided by the present application.
Further, in this embodiment of the application, the music playing system may obtain at least two images including the user at different times through an imaging device built in the terminal device or an imaging device connected to the terminal device, so as to determine the posture of the user according to the images.
In addition, in the embodiment of the present application, the terminal device in the music playing system may further be connected to a remote server, and determine information required in the music playing process through interaction with the remote server.
For example, in the embodiment of the present application, the remote server may store a corresponding relationship between the body state of the user and the lyric display manner, determine the lyric display manner corresponding to the body state of the user according to the corresponding relationship and the body state of the user, and transmit the lyric display manner to the music playing system, so that the music playing system determines the lyric display manner corresponding to the body state of the user.
The various illustrative logical units and circuits described in this application may be implemented or operated by a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example, a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a UE. In the alternative, the processor and the storage medium may reside in different components of the UE.
It should be understood that, in the various embodiments of the present application, the size of the serial number of each process does not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application occur in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state disk (SSD)), among others.
The same and similar parts among the various embodiments of the present specification may be referred to, and each embodiment is described with emphasis on differences from the other embodiments. In particular, as to the apparatus and system embodiments, since they are substantially similar to the method embodiments, the description is relatively simple and reference may be made to the description of the method embodiments in relevant places.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, for the music playing apparatus embodiments disclosed in the present application, since they are substantially similar to the method embodiments, the description is simple, and for relevant points reference can be made to the description in the method embodiments.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (26)

1. A music playing method, comprising:
determining a lyric display mode corresponding to the posture of the user;
and in the music playing process, displaying first lyrics on a playing interface of the music according to the lyric displaying mode, wherein the first lyrics comprise lyrics corresponding to the playing music time interval.
2. The method of claim 1, wherein determining the lyric presentation mode corresponding to the user's posture comprises:
determining the posture of the user according to at least two images which comprise the user and are not at the same moment;
and determining the lyric display mode corresponding to the body state of the user according to the corresponding relation between the body state of the user and the lyric display mode.
3. The method of claim 1, wherein determining the lyric presentation mode corresponding to the user's posture comprises:
and determining a lyric display mode corresponding to the body posture of the user according to first information transmitted by the first server, wherein the first information is used for indicating the lyric display mode corresponding to the body posture of the user.
4. The method according to any one of claims 1 to 3,
the user's posture includes: a local limb motion of the user or a full body motion of the user;
the local limb action of the user comprises: at least one of a one-handed motion, a two-handed motion, a finger motion, and a head motion;
the user's whole body actions include: at least one of jumping, whole body swinging, and walking;
the lyric display mode comprises the following steps: at least one of lyric jumping, lyric wavy swing, and lyric left and right swing.
5. The method of any of claims 1 to 4, further comprising:
and when the first lyrics are displayed on the music playing interface, adjusting the variation amplitude of the first lyrics in the displaying process according to the action amplitude of the user.
6. The method of any of claims 1 to 5, further comprising:
determining a picture corresponding to at least one word included in the first lyrics, wherein the picture includes: a dynamic picture and/or a static picture;
and displaying the picture in the process of displaying the first lyrics on the playing interface of the music, wherein the picture is positioned between the background of the playing interface and the display layer of the first lyrics.
7. The method of claim 6, further comprising:
if the difference between the time for displaying the first lyrics and the time for displaying the second lyrics is smaller than the first time difference, displaying a picture corresponding to at least one word included in the second lyrics while displaying a picture corresponding to at least one word included in the first lyrics, wherein the picture corresponding to at least one word included in the second lyrics is located between the background of the playing interface and the display layer of the first lyrics.
8. The method of claim 6, wherein determining the picture corresponding to the at least one word included in the first lyrics comprises:
transmitting the first lyrics to a second server so that the second server determines a picture corresponding to at least one word included in the first lyrics according to the first lyrics;
and acquiring a picture corresponding to at least one word included in the first lyrics transmitted by the second server.
9. The method of claim 6, wherein determining the picture corresponding to the at least one word included in the first lyrics comprises:
determining a picture corresponding to at least one word included in the first lyrics by querying a database, wherein the database includes at least one picture and a corresponding relation between the picture and the word.
10. The method of any one of claims 1 to 9, further comprising:
determining the font display style of the first lyrics according to the font display elements of the first lyrics, wherein the font display elements comprise: at least one of a style of the music, a font of album art of the music, and a subwoofer of the first lyrics;
and when the first lyrics are displayed on the playing interface of the music, displaying or adjusting the font of the first lyrics shown on the playing interface according to the font display style of the first lyrics.
11. The method of any one of claims 1 to 10, further comprising:
determining an animation effect corresponding to the tone of the playing music time interval;
and displaying the animation effect while displaying the first lyrics on the playing interface of the music, wherein the animation effect is positioned between the background of the playing interface and the display layer of the first lyrics.
12. The method of claim 11, further comprising:
and adjusting the evolution effect of the animation effect according to the rhythm speed of the playing music time interval.
13. A music playing apparatus, comprising:
a processor and a display;
the processor is used for determining a lyric display mode corresponding to the posture of the user;
the display is used for displaying a music playing interface, and in the music playing process, the music playing interface displays first lyrics according to the lyric display mode, wherein the first lyrics comprise lyrics corresponding to the playing music time interval.
14. The apparatus of claim 13,
the processor is specifically configured to determine a body state of the user according to at least two images including the user, the images not being at the same time, and determine a lyric display mode corresponding to the body state of the user according to a correspondence between the body state of the user and the lyric display mode.
15. The apparatus of claim 13,
the processor is specifically configured to determine the lyric display mode corresponding to the user's posture according to first information transmitted by the first server, where the first information is used to indicate the lyric display mode corresponding to the user's posture.
16. The apparatus of any one of claims 13 to 15,
the user's posture includes: a local limb motion of the user or a full body motion of the user;
the local limb action of the user comprises: at least one of a one-handed motion, a two-handed motion, a finger motion, and a head motion;
the user's whole body actions include: at least one of jumping, whole body swinging, and walking;
the lyric display mode comprises the following steps: at least one of lyric jumping, lyric wavy swing, and lyric left and right swing.
17. The apparatus according to any one of claims 13 to 16,
the processor is further configured to adjust a variation amplitude of the first lyric in a display process according to the action amplitude of the user when the first lyric is displayed on the music playing interface.
18. The apparatus of any one of claims 13 to 17,
the processor is further configured to determine a frame corresponding to at least one word included in the first lyrics, the frame including: a dynamic picture and/or a static picture;
the display is further configured to display the picture on a playing interface of the music in a process of displaying the first lyrics, where the picture is located between a background of the playing interface and a display layer of the first lyrics.
19. The apparatus of claim 18,
the display is further configured to, if a difference between a time for displaying the first lyrics and a time for displaying the second lyrics is smaller than a first time difference, display a picture corresponding to at least one word included in the second lyrics on a playing interface of the music while displaying a picture corresponding to at least one word included in the first lyrics, where the picture corresponding to at least one word included in the second lyrics is located between a background of the playing interface and a display layer of the first lyrics.
20. The apparatus of claim 18,
the processor is specifically configured to transmit the first lyric to a second server, so that the second server determines, according to the first lyric, a picture corresponding to at least one word included in the first lyric, and obtains the picture corresponding to the at least one word included in the first lyric transmitted by the second server.
21. The apparatus of claim 18,
the processor is specifically configured to determine, by querying a database, a picture corresponding to at least one word included in the first lyric, where the database includes at least one picture and a correspondence between pictures and words.
22. The apparatus of any one of claims 13 to 21,
the processor is further configured to determine a font display style of the first lyric according to a font display element of the first lyric, where the font display element includes: at least one of a style of the music, a font of album art of the music, and a subwoofer of the first lyrics;
and when the first lyrics are displayed on the playing interface of the music, the processor is further configured to display or adjust the font of the first lyrics shown on the playing interface according to the font display style of the first lyrics.
23. The apparatus of any one of claims 13 to 22,
the processor is further used for determining an animation effect corresponding to the tone of the playing music time interval;
the display is further configured to display the animation effect on a playing interface of the music while displaying the first lyrics, where the animation effect is located between a background of the playing interface and a display layer of the first lyrics.
24. The apparatus of claim 23,
the processor is further configured to adjust an evolution effect of the animation effect according to the melody speed of the playing music time interval.
25. A terminal device, comprising:
at least one processor and a memory, wherein the memory,
the memory to store program instructions;
the processor is configured to call and execute the program instructions stored in the memory to cause the terminal device to execute the music playing method according to any one of claims 1 to 12.
26. A computer-readable storage medium, characterized in that,
the computer-readable storage medium has stored therein instructions that, when run on a computer, cause the computer to execute the music playing method according to any one of claims 1 to 12.
CN202011474741.5A 2020-12-14 2020-12-14 Music playing method and device Pending CN112507161A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011474741.5A CN112507161A (en) 2020-12-14 2020-12-14 Music playing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011474741.5A CN112507161A (en) 2020-12-14 2020-12-14 Music playing method and device

Publications (1)

Publication Number Publication Date
CN112507161A true CN112507161A (en) 2021-03-16

Family

ID=74973386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011474741.5A Pending CN112507161A (en) 2020-12-14 2020-12-14 Music playing method and device

Country Status (1)

Country Link
CN (1) CN112507161A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596241A (en) * 2021-06-24 2021-11-02 荣耀终端有限公司 Sound processing method and device
CN115134643A (en) * 2021-03-24 2022-09-30 腾讯科技(深圳)有限公司 Bullet screen display method and device for vehicle-mounted terminal, terminal and medium


Similar Documents

Publication Publication Date Title
US20220130360A1 (en) Song Recording Method, Audio Correction Method, and Electronic Device
CN112714214A (en) Content connection method and electronic equipment
CN111742539B (en) Voice control command generation method and terminal
CN112783330A (en) Electronic equipment operation method and device and electronic equipment
CN114710640A (en) Video call method, device and terminal based on virtual image
CN114489533A (en) Screen projection method and device, electronic equipment and computer readable storage medium
CN112507161A (en) Music playing method and device
CN111552451A (en) Display control method and device, computer readable medium and terminal equipment
CN113643728A (en) Audio recording method, electronic device, medium, and program product
CN111930335A (en) Sound adjusting method and device, computer readable medium and terminal equipment
CN113938720A (en) Multi-device cooperation method, electronic device and multi-device cooperation system
CN113593567B (en) Method for converting video and sound into text and related equipment
CN114449333B (en) Video note generation method and electronic equipment
CN112269554B (en) Display system and display method
CN114724055A (en) Video switching method, device, storage medium and equipment
CN111142767B (en) User-defined key method and device of folding device and storage medium
CN109285563B (en) Voice data processing method and device in online translation process
CN114120987B (en) Voice wake-up method, electronic equipment and chip system
CN115730091A (en) Comment display method and device, terminal device and readable storage medium
CN115249364A (en) Target user determination method, electronic device and computer-readable storage medium
CN115544296A (en) Audio data storage method and related equipment
CN114466238A (en) Frame demultiplexing method, electronic device and storage medium
CN114079694B (en) Control labeling method and device
WO2023284591A1 (en) Video capture method and apparatus, electronic device, and storage medium
CN114268689B (en) Electric quantity display method of Bluetooth device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination