CN110750675A - Lyric sharing method and device and storage medium - Google Patents

Lyric sharing method and device and storage medium

Info

Publication number
CN110750675A
Authority: CN (China)
Prior art keywords: lyric, information, marked, lines, line
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN201910986713.2A
Other languages
Chinese (zh)
Inventor
段小磊
Current Assignee (the listed assignees may be inaccurate)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201910986713.2A
Publication of CN110750675A
Legal status: Pending

Classifications

    • G06F16/635 — Information retrieval of audio data; querying; filtering based on additional data, e.g. user or group profiles
    • G06F16/335 — Information retrieval of unstructured textual data; querying; filtering based on additional data, e.g. user or group profiles
    • G06F16/338 — Information retrieval of unstructured textual data; querying; presentation of query results
    • G06F16/685 — Information retrieval of audio data; retrieval characterised by metadata automatically derived from the content, e.g. an automatically derived transcript of audio data such as lyrics

Abstract

The application discloses a lyric sharing method and device and a storage medium, belonging to the field of computer technology. The method comprises the following steps: in the process of playing a target song by a terminal, acquiring environment information of the position where the terminal is located; marking a plurality of lyric lines to be selected of the target song according to the environment information to obtain a plurality of marked lyric lines; generating a target lyric fragment according to the plurality of marked lyric lines; and sharing the target lyric fragment. The method and device help simplify the lyric sharing process and are used for lyric marking and sharing.

Description

Lyric sharing method and device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a lyric sharing method and apparatus, and a storage medium.
Background
With the development of computer technology, terminals such as mobile phones and tablet computers have become increasingly popular, and more and more terminals can play songs and share lyrics.
At present, while playing a song, a terminal may display the song's lyrics line by line; a user may trigger a selection instruction for at least one lyric line of the song according to personal preference, and the terminal may combine the selected lyric lines into a lyric fragment according to the selection instruction and share the fragment.
However, this approach requires the user to select lyrics manually, which makes the lyric sharing process cumbersome.
Disclosure of Invention
The application provides a lyric sharing method and device and a storage medium. The technical solution is as follows:
in one aspect, a lyric sharing method is provided, and the method includes:
in the process of playing a target song by a terminal, acquiring environmental information of the position of the terminal;
according to the environment information, marking a plurality of lyric lines to be selected of the target song to obtain a plurality of marked lyric lines;
generating a target lyric fragment according to the plurality of marked lyric lines;
and sharing the target lyric fragment.
Optionally, the generating a target lyric fragment from the plurality of marked lyric lines comprises:
determining at least one target lyric line according to the plurality of marked lyric lines;
generating the target lyric fragment according to the at least one target lyric line.
Optionally, the environment information includes: at least one of location information, weather information, time information, or sound information;
the acquiring the environmental information of the position where the terminal is located includes:
when the environment information comprises the position information, acquiring the position information through a positioning component in the terminal;
when the environment information comprises the weather information, acquiring the position information through a positioning component in the terminal, and acquiring the weather information according to the position information;
when the environment information comprises the time information, acquiring the time information through a clock component in the terminal;
and when the environment information comprises the sound information, acquiring the sound information through a sound collection component in the terminal.
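The four acquisition cases above can be sketched as follows. This is a minimal Python illustration, not an implementation from the patent: the `EnvironmentInfo` type, the `sensors` mapping, and all component names are assumptions introduced here. Note that the weather case first obtains the location, mirroring the second case in the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EnvironmentInfo:
    location: Optional[str] = None
    weather: Optional[str] = None
    time: Optional[str] = None
    sound: Optional[bytes] = None

def collect_environment_info(wanted, sensors):
    """Gather only the requested kinds of environment information.

    `sensors` maps component names to callables (all names illustrative);
    weather requires the location first, as in the second case of the text.
    """
    info = EnvironmentInfo()
    if "location" in wanted or "weather" in wanted:
        info.location = sensors["positioning"]()   # positioning component
    if "weather" in wanted:
        info.weather = sensors["weather"](info.location)
    if "time" in wanted:
        info.time = sensors["clock"]()             # clock component
    if "sound" in wanted:
        info.sound = sensors["microphone"]()       # sound collection component
    if "location" not in wanted:
        info.location = None  # fetched only as an input to the weather lookup
    return info
```

For example, requesting only weather and time still consults the positioning component internally but does not report the location.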
Optionally, the marking a plurality of lyric lines to be selected of the target song according to the environment information to obtain a plurality of marked lyric lines includes:
determining, from the plurality of lyric lines to be selected of the target song, a plurality of lyric lines to be marked that contain words to be marked matching the environment information;
and marking, according to the environment information, the words to be marked that match the environment information in the plurality of lyric lines to be marked, to obtain the plurality of marked lyric lines.
Optionally, the marking a plurality of lyric lines to be selected of the target song according to the environment information to obtain a plurality of marked lyric lines includes:
acquiring associative words corresponding to the environment information through semantic association;
determining, from the plurality of lyric lines to be selected of the target song, a plurality of lyric lines to be marked that contain words to be marked matching the associative words;
and marking, according to the environment information, the words to be marked that match the associative words in the plurality of lyric lines to be marked, to obtain the plurality of marked lyric lines.
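The associative-word matching above can be sketched roughly as follows. This is a hypothetical illustration: the association table, the word-level tokenization, and all names are assumptions, and a real semantic-association step would be far richer than a lookup table.

```python
def mark_lyric_lines(candidate_lines, env_terms, associations):
    """Return (line, marked_words) pairs for candidate lines that contain a
    word matching an environment term or one of its associative words."""
    marked = []
    for line in candidate_lines:
        words = set(line.lower().split())
        hits = set()
        for term in env_terms:
            matchable = {term} | set(associations.get(term, []))
            hits |= words & matchable
        if hits:
            marked.append((line, sorted(hits)))
    return marked
```

A line with no matching word is simply not marked, which is why only some of the candidate lyric lines become marked lyric lines.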
Optionally, the marked lyric line includes at least one environment marked word, and the environment marked word is obtained by marking the word to be marked according to the environment information;
after marking a plurality of lyric lines to be selected of the target song according to the environment information to obtain a plurality of marked lyric lines, the method further comprises:
and when a mark display instruction of the environment mark word is detected, displaying word mark information of the environment mark word.
Optionally, the word tag information includes the environment information, and the environment information includes: at least one of location information, weather information, time information, or sound information;
when a mark display instruction for the environment mark word is detected, displaying word mark information of the environment mark word, including:
when a mark display instruction for a position mark word is detected, displaying the position information;
when a mark display instruction for a weather mark word is detected, displaying the weather information;
when a mark display instruction for a time mark word is detected, displaying the time information;
and when a mark display instruction of the sound mark word is detected, playing the sound information, wherein the playing volume of the sound information is less than that of the target song.
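The dispatch on the kind of tapped mark word can be sketched as below. This is an assumed event handler, not the patent's implementation; in particular, the 0.5 volume ratio is an arbitrary choice, since the text only requires that the sound play quieter than the target song.

```python
def respond_to_mark_tap(mark_kind, env, song_volume):
    """Dispatch on the kind of environment tag word that was tapped
    (names and the return convention are illustrative)."""
    if mark_kind in ("location", "weather", "time"):
        return ("display", env[mark_kind])
    if mark_kind == "sound":
        # the captured sound is played back quieter than the target song
        return ("play", env["sound"], song_volume * 0.5)
    raise ValueError(f"unknown mark kind: {mark_kind}")
```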
Optionally, the environment information includes: at least one of position information, weather information, time information or sound information, wherein at least one environment marking word exists in the marked lyric line and is marked according to the environment information;
the determining at least one target lyric line from the plurality of marked lyric lines comprises:
performing deduplication screening on the plurality of marked lyric lines to obtain at least one lyric line to be composed;
when the at least one lyric line to be composed contains all the environment marking words corresponding to the environment information, determining the at least one lyric line to be composed as the at least one target lyric line;
and when the at least one lyric line to be composed does not contain all the environment marking words corresponding to the environment information, acquiring, according to the environment information missing from the at least one lyric line to be composed, a filling word matching the missing environment information as a filling lyric line, and determining the at least one lyric line to be composed and the filling lyric line as the at least one target lyric line.
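The coverage check and filling step can be sketched as below. All names are assumptions; the filler lines stand in for the filling words that, per the description, may be fetched from a server for the missing environment categories.

```python
def select_target_lines(deduped, required_tags, fill_lines):
    """deduped: list of (line, tags) after deduplication screening;
    required_tags: environment-tag categories the fragment should cover;
    fill_lines: category -> filler lyric line (e.g. obtained from a server)."""
    covered = set()
    for _, tags in deduped:
        covered |= set(tags)
    targets = [line for line, _ in deduped]
    for missing in sorted(set(required_tags) - covered):
        targets.append(fill_lines[missing])  # fill the uncovered category
    return targets
```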
Optionally, the performing deduplication screening on the plurality of marked lyric lines to obtain at least one lyric line to be composed includes:
determining marked lyric lines with the same semantics among the plurality of marked lyric lines, and screening the at least one lyric line to be composed from the marked lyric lines with the same semantics; or,
determining marked lyric lines with the same line marking information among the plurality of marked lyric lines, and screening the at least one lyric line to be composed from the marked lyric lines with the same line marking information; or,
determining at least one reference lyric line having the most line marking information among the plurality of marked lyric lines, and screening out the marked lyric lines whose line marking information belongs to the line marking information of the reference lyric line, to obtain the at least one lyric line to be composed.
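Two of the screening strategies above can be sketched as follows. These are assumed readings of the patent text (the third strategy in particular is ambiguous about whether subset lines are kept or dropped; the sketch drops them), and the tag-set representation of "line marking information" is an assumption.

```python
def dedupe_same_tags(marked):
    """Strategy two: among lines carrying identical line-marking
    information, keep only the first occurrence."""
    seen, kept = set(), []
    for line, tags in marked:
        key = frozenset(tags)
        if key not in seen:
            seen.add(key)
            kept.append((line, tags))
    return kept

def screen_by_reference(marked):
    """Strategy three (one reading): take the line with the most marking
    information as the reference, and drop lines whose marking information
    is contained in the reference line's."""
    ref_line, ref_tags = max(marked, key=lambda lt: len(lt[1]))
    kept = [(ref_line, ref_tags)]
    for line, tags in marked:
        if line != ref_line and not set(tags) <= set(ref_tags):
            kept.append((line, tags))
    return kept
```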
Optionally, the generating the target lyric fragment according to the at least one target lyric line comprises:
determining a combination priority of each target lyric line in the at least one target lyric line;
and combining the at least one target lyric line according to the combination priority to obtain the target lyric fragment.
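The combination step above can be sketched as a priority-ordered join. The numeric-priority representation ("lower value combines earlier") is an assumption; the patent does not say how the combination priority is encoded.

```python
def combine_by_priority(target_lines, priority):
    """Join the target lyric lines into one fragment, ordered by combination
    priority (assumed: lower number = earlier; unknown lines sort last)."""
    ordered = sorted(target_lines, key=lambda line: priority.get(line, float("inf")))
    return "\n".join(ordered)
```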
In another aspect, an apparatus for sharing lyrics is provided, the apparatus comprising:
the acquisition module is used for acquiring the environmental information of the position of the terminal in the process of playing the target song by the terminal;
the marking module is used for marking a plurality of lyric lines to be selected of the target song according to the environment information to obtain a plurality of marked lyric lines;
a generating module for generating a target lyric fragment according to the plurality of marked lyric lines;
and the sharing module is used for sharing the target lyric fragment.
Optionally, the generating module includes:
a determining submodule for determining at least one target lyric line from the plurality of marked lyric lines;
and the generation sub-module is used for generating the target lyric fragment according to the at least one target lyric line.
Optionally, the environment information includes: at least one of location information, weather information, time information, or sound information; the obtaining module is configured to:
when the environment information comprises the position information, acquiring the position information through a positioning component in the terminal;
when the environment information comprises the weather information, acquiring the position information through a positioning component in the terminal, and acquiring the weather information according to the position information;
when the environment information comprises the time information, acquiring the time information through a clock component in the terminal;
and when the environment information comprises the sound information, acquiring the sound information through a sound collection component in the terminal.
Optionally, the marking module is configured to:
determine, from the plurality of lyric lines to be selected of the target song, a plurality of lyric lines to be marked that contain words to be marked matching the environment information;
and mark, according to the environment information, the words to be marked that match the environment information in the plurality of lyric lines to be marked, to obtain the plurality of marked lyric lines.
Optionally, the marking module is configured to:
acquire associative words corresponding to the environment information through semantic association;
determine, from the plurality of lyric lines to be selected of the target song, a plurality of lyric lines to be marked that contain words to be marked matching the associative words;
and mark, according to the environment information, the words to be marked that match the associative words in the plurality of lyric lines to be marked, to obtain the plurality of marked lyric lines.
Optionally, the marked lyric line includes at least one environment marked word, and the environment marked word is obtained by marking the word to be marked according to the environment information; the device further comprises:
and the display module is used for displaying the word mark information of the environment mark words when the mark display instruction of the environment mark words is detected.
Optionally, the word tag information includes the environment information, and the environment information includes: at least one of location information, weather information, time information, or sound information; the display module is used for:
when a mark display instruction for a position mark word is detected, displaying the position information;
when a mark display instruction for a weather mark word is detected, displaying the weather information;
when a mark display instruction for a time mark word is detected, displaying the time information;
and when a mark display instruction of the sound mark word is detected, playing the sound information, wherein the playing volume of the sound information is less than that of the target song.
Optionally, the environment information includes: at least one of position information, weather information, time information or sound information, wherein at least one environment marking word exists in the marked lyric line and is marked according to the environment information; the determination submodule includes:
the first determining unit is used for carrying out duplication elimination screening on the plurality of marked lyric lines to obtain at least one lyric line to be composed;
the second determining unit is used for determining the at least one word line to be lyriced as the at least one target lyric line when the at least one word line to be lyriced contains all environment marking words corresponding to the environment information;
a third determining unit, configured to, when the at least one to-be-lyric line does not include all environment tag words corresponding to the environment information, obtain, from the server, a filling word matched with the missing environment information as a filling lyric line according to the missing environment information in the at least one to-be-lyric line, and determine the at least one to-be-lyric line and the filling lyric line as the at least one target lyric line.
Optionally, the first determining unit is configured to:
determine marked lyric lines with the same semantics among the plurality of marked lyric lines, and screen the at least one lyric line to be composed from the marked lyric lines with the same semantics; or,
determine marked lyric lines with the same line marking information among the plurality of marked lyric lines, and screen the at least one lyric line to be composed from the marked lyric lines with the same line marking information; or,
determine at least one reference lyric line having the most line marking information among the plurality of marked lyric lines, and screen out the marked lyric lines whose line marking information belongs to the line marking information of the reference lyric line, to obtain the at least one lyric line to be composed.
Optionally, the generating sub-module is configured to:
determine a combination priority of each target lyric line in the at least one target lyric line;
and combining the at least one target lyric line according to the combination priority to obtain the target lyric fragment.
In another aspect, a lyric sharing apparatus is provided, including a processor and a memory, wherein:
the memory for storing a computer program;
the processor is configured to execute the computer program stored in the memory to implement the lyric sharing method according to any one of the above aspects.
In still another aspect, there is provided a storage medium in which a program is stored, the program being capable of implementing the lyric sharing method according to any one of the above aspects when executed by a processor.
The beneficial effects of the technical solutions provided by the present application are as follows:
according to the lyric sharing method and device and the storage medium, in the process of playing the target song by the terminal, the environmental information of the position where the terminal is located is obtained, a plurality of lyric lines to be selected of the target song are marked according to the environmental information, a plurality of marked lyric lines are obtained, the target lyric fragments are generated according to the plurality of marked lyric lines, and the target lyric fragments are shared. After the lyric lines are marked according to the environment information, the target lyric segment is generated according to the marked lyric lines, and the target lyric segment is shared, so that lyric sharing can be realized without manually selecting lyrics by a user, and the lyric sharing process is facilitated to be simplified.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic illustration of an implementation environment to which various embodiments of the present application relate;
fig. 2 is a flowchart of a method for sharing lyrics according to an embodiment of the present disclosure;
fig. 3 is a flowchart of another lyric sharing method according to an embodiment of the present disclosure;
fig. 4 is a flowchart of a method for marking words to be tagged matched with environmental information in a word line to be selected according to an embodiment of the present application;
fig. 5 is a flowchart of a method for marking words to be tagged that are matched with associative words in a line of words to be selected according to an embodiment of the present application;
FIG. 6 is a flow chart of a method for determining a target lyric line according to a marked lyric line provided by an embodiment of the present application;
FIG. 7 is a flowchart of a method for generating a target lyric fragment according to a target lyric line according to an embodiment of the present application;
fig. 8 is a block diagram of a lyric sharing device according to an embodiment of the present application;
FIG. 9 is a block diagram of a generation module provided by an embodiment of the present application;
fig. 10 is a block diagram of another lyric sharing device according to an embodiment of the present application;
FIG. 11 is a block diagram of a determination submodule provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of a lyric sharing device according to an embodiment of the present application.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic diagram of an implementation environment according to various embodiments of the present application. Referring to fig. 1, the implementation environment may include: a server 120 and at least two terminals.
The at least two terminals may be connected to each other through a wired network or a wireless network, and at least one terminal may be connected to the server 120 through a wired network or a wireless network. The wireless network may include, but is not limited to: a Wireless Fidelity (WiFi) network, a Bluetooth network, an infrared network, a ZigBee network, or a cellular data network; the wired network may be a Universal Serial Bus (USB) connection.
Each terminal may be an electronic device having a song playing function and a lyric sharing function, such as a smartphone, a tablet computer, a smart television, a smart watch, a Moving Picture Experts Group Audio Layer V (MP5) player, a laptop portable computer, or a desktop computer. Optionally, a music player having the song playing and lyric sharing functions may be installed in the terminal, for example an "xx music" client or a "zz music" client. In the embodiments of the present application, the at least two terminals may be electronic devices of the same type or of different types. As shown in fig. 1, the at least two terminals include a terminal 140 and a terminal 160, both of which are smartphones (that is, electronic devices of the same type), and the terminal 140 is connected to the server 120. It is easy to understand that the terminal 160 may also be connected to the server 120, which is not limited in the embodiments of the present application.
The server 120 may be one server, a server cluster composed of several servers, or a cloud computing service center. The server may provide the terminal with any data the terminal needs; for example, the server 120 may provide the file of the target song to the terminal 140, and may provide associative words and the like to the terminal 140 according to the environment information of the position where the terminal 140 is located.
The terminal 140 may play a target song, acquire the environment information of the position where the terminal 140 is located while playing the target song, mark a plurality of lyric lines to be selected of the target song according to that environment information to obtain a plurality of marked lyric lines, generate a target lyric fragment according to the plurality of marked lyric lines, and share the target lyric fragment with the terminal 160. The environment information includes at least one of location information, weather information, time information, or sound information. Because lyric sharing can be achieved without manual selection by the user, the lyric sharing process can be simplified.
Please refer to fig. 2, which shows a flowchart of a lyric sharing method according to an embodiment of the present application, where the lyric sharing method may be executed by a terminal. Referring to fig. 2, the method may include the following steps:
step 201, in the process of playing the target song by the terminal, obtaining the environmental information of the position of the terminal.
Wherein the environment information may include at least one of location information, weather information, time information, or sound information.
Step 202, according to the environment information, marking a plurality of lyric lines to be selected of the target song to obtain a plurality of marked lyric lines.
Optionally, the terminal may determine a word to be tagged that is matched with the environment information from a plurality of lyric lines to be selected of the target song according to the environment information, tag the word to be tagged to obtain an environment tag word, and determine the lyric line to be selected where the environment tag word is located as the tagged lyric line.
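The word tagging in step 202 can be sketched as below. This is a minimal illustration under assumed names: environment words are represented as a plain set, and whole-word matching stands in for whatever matching the terminal actually performs.

```python
def tag_candidate_lines(candidate_lines, env_words):
    """Mark words in the candidate lyric lines that match environment words;
    a line containing at least one environment tag word becomes a marked line."""
    marked = []
    for line in candidate_lines:
        tag_words = [w for w in line.split() if w.lower() in env_words]
        if tag_words:
            marked.append({"line": line, "env_tag_words": tag_words})
    return marked
```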
Step 203, generating a target lyric fragment according to the plurality of marked lyric lines.
Optionally, the terminal may determine at least one target lyric line from the plurality of marked lyric lines, and generate a target lyric fragment from the at least one target lyric line.
And step 204, sharing the target lyric fragment.
In summary, in the lyric sharing method provided by this embodiment of the present application, the environment information of the position where the terminal is located is acquired while the terminal plays the target song; a plurality of lyric lines to be selected of the target song are marked according to the environment information to obtain a plurality of marked lyric lines; a target lyric fragment is generated according to the marked lyric lines; and the target lyric fragment is shared. Since lyric sharing does not require the user to select lyrics manually, the lyric sharing process is simplified.
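Steps 201 to 204 compose into a single pipeline, which can be sketched end to end as follows. The `share` callback and the word-match marking are assumptions standing in for the terminal's actual marking and sharing mechanisms.

```python
def share_lyrics(candidate_lines, env_words, share):
    """End-to-end sketch of steps 201-204: mark the candidate lines against
    the environment words, build a fragment, and hand it to a share callback."""
    marked = [line for line in candidate_lines
              if any(w.lower() in env_words for w in line.split())]  # step 202
    fragment = "\n".join(marked)   # step 203: generate the target fragment
    share(fragment)                # step 204: share it
    return fragment
```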
Referring to fig. 3, which shows a flowchart of another lyric sharing method provided by an embodiment of the present application; this embodiment takes the application of the method to the implementation environment shown in fig. 1 as an example. Referring to fig. 3, the method may include the following steps:
step 301, in the process of playing the target song by the terminal, obtaining the environment information of the position where the terminal is located.
Optionally, a music player may be installed in the terminal, the terminal may play the target song through the music player, and in the process of playing the target song, the terminal may acquire the environment information of the location where the terminal is located. Wherein the environment information may include at least one of location information, weather information, time information, or sound information. In the embodiment of the present application, the environment information of the location where the terminal is located obtained by the terminal may be obtained in the following four cases:
in the first case: when the environment information includes the location information, the location information is acquired by a positioning component in the terminal.
The positioning component may be based on the Global Positioning System (GPS) of the United States, China's BeiDou system, or Europe's Galileo system.
Optionally, the terminal may have a positioning switch key through which the user can turn the positioning function of the terminal on or off. When the positioning function is on, the terminal can acquire the location information of its position through the positioning component, and by default the environment information of the terminal's position includes the location information. When the positioning function is off, the terminal cannot acquire the location information, and by default the environment information does not include it.
In the second case: when the environment information comprises weather information, position information is obtained through a positioning component in the terminal, and the weather information is obtained according to the position information.
The terminal may obtain the location information of its position through the positioning component in the terminal, and obtain the weather information from a weather system according to the location information; the weather system may be a server.
Optionally, the terminal may generate a weather acquisition request carrying the location information and send it to the server. After receiving the request, the server obtains the weather information corresponding to the carried location information and sends the weather information to the terminal; the terminal thus obtains the weather information by receiving it.
It is easy to understand that when the positioning function of the terminal is on, the terminal can acquire the weather information, and the environment information of the terminal's location is taken by default to include the weather information; when the positioning function is off, the terminal cannot acquire the weather information, and the environment information is taken by default not to include it.
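As a non-limiting illustration of the request flow just described, the following Python sketch models the terminal and server sides as plain functions. The request format, the field names, and the in-memory weather table are assumptions made for this sketch only and are not part of the application.

```python
import json

def build_weather_request(location):
    """Terminal side: build a weather acquisition request carrying the location information."""
    return json.dumps({"type": "weather_request", "location": location})

def handle_weather_request(request, weather_table):
    """Server side: look up the weather corresponding to the location carried in the request."""
    location = json.loads(request)["location"]
    return weather_table.get(location, "unknown")

# Terminal sends the request; the server answers with the weather information.
weather_table = {"Guangzhou": "rainy"}  # assumed server-side data
request = build_weather_request("Guangzhou")
weather_info = handle_weather_request(request, weather_table)
```

In a real deployment the request would travel over the network rather than a function call, but the carry-location / look-up / return-weather shape is the same.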
In the third case: when the environment information includes time information, the time information is acquired by a clock component in the terminal.
The clock component may be the terminal's system clock, or a client installed in the terminal that can acquire time information; the time information may be, for example, Beijing time.
In the fourth case: when the environment information includes sound information, the sound information is acquired through a sound collection component in the terminal.
The sound information of the terminal's location may be sound information within a preset distance of the terminal. The sound information may include human voices, animal sounds, vehicle sounds, sirens, and the like; human voices may include the voice of a single person, the voices of multiple people, female voices, male voices, and so on. The preset distance may be determined according to the collection range of the sound collection component, which may be, for example, a microphone with a range of 50 meters.
Optionally, the terminal may provide an audio capture switch button through which a user can turn the audio capture function of the terminal on or off. When the audio capture function is on, the terminal can acquire the sound information of its location through the sound collection component, and the environment information of the terminal's location includes the sound information. When the audio capture function is off, the terminal cannot acquire the sound information, and the environment information of the terminal's location does not include the sound information.
Those skilled in the art will readily understand that the above four cases can be implemented individually or in combination. Illustratively, when the environment information includes only one of location information, weather information, time information, or sound information, the corresponding case is implemented alone; when it includes location information and weather information, the first and second cases are implemented together; when it includes location, weather, and time information, the first, second, and third cases are implemented together; and when it includes all four kinds, all four cases are implemented together. The same applies to the other combinations, which are not described again here.
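The four cases above can be summarized in a minimal sketch: which kinds of environment information end up in the collected set depends on which terminal functions are enabled. The function and parameter names below are assumptions for illustration; in particular, location and weather both depend on the positioning switch, while sound depends on the audio capture switch.

```python
def collect_environment_info(positioning_on, audio_capture_on,
                             location=None, weather=None,
                             time_info=None, sound=None):
    """Collect environment information of the terminal's location.

    Location and weather require the positioning function (cases one and
    two); time comes from the clock component and is always available
    (case three); sound requires the audio capture function (case four).
    """
    env = {}
    if positioning_on:
        if location is not None:
            env["location"] = location
        if weather is not None:
            env["weather"] = weather
    if time_info is not None:
        env["time"] = time_info
    if audio_capture_on and sound is not None:
        env["sound"] = sound
    return env

# Positioning on, audio capture off: sound information is not collected.
env = collect_environment_info(True, False,
                               location="Starbucks", weather="rainy",
                               time_info="23:30:00", sound="birdsong")
```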
And 302, marking a plurality of lyric lines to be selected of the target song according to the environment information to obtain a plurality of marked lyric lines.
The target song can comprise a plurality of lyric lines to be selected, and the terminal can display the lyric lines to be selected line by line in the process of playing the target song. After the terminal obtains the environment information of the position where the terminal is located, the plurality of lyric lines to be selected can be marked according to the environment information, and a plurality of marked lyric lines are obtained.
In this embodiment of the present application, the marking, by the terminal, of the multiple lyric lines to be selected of the target song according to the environment information may include the following two possible implementation manners:
The first implementation manner: the terminal marks words to be marked that match the environment information in the multiple lyric lines to be selected of the target song.
For example, please refer to fig. 4, which shows a flowchart of a method for tagging a to-be-tagged word matching with environment information in a to-be-selected song line according to an embodiment of the present application, and referring to fig. 4, the method may include:
Substep 3021a: determine, from the multiple lyric lines to be selected of the target song, multiple lyric lines to be marked in which words to be marked matching the environment information exist.
Optionally, the terminal determines a word to be marked matched with the environment information from a plurality of lyric lines to be selected of the target song according to the environment information, and determines the lyric line to be selected where the word to be marked is located as the lyric line to be marked.
Optionally, the terminal determines words to be marked matched with the position information from a plurality of lyrics lines to be selected of the target song according to the position information; the terminal determines words to be marked matched with the weather information from a plurality of lyrics lines to be selected of the target song according to the weather information; the terminal determines words to be marked matched with the time information from a plurality of lyrics lines to be selected of the target song according to the time information; and the terminal determines words to be marked matched with the sound information from a plurality of words to be selected of the target song according to the sound information.
Exemplarily, if the location information is Starbucks, the terminal determines words to be marked matching Starbucks from the multiple lyric lines to be selected of the target song; if the location information is XXX mansion, the terminal determines words to be marked matching XXX mansion. If the weather information is rainy, the terminal determines words to be marked matching rainy weather. If the time information is a hours, b minutes, and c seconds, the terminal determines words to be marked matching that time. If the sound information is a human voice, the terminal determines words to be marked matching the human voice; if it is birdsong, words to be marked matching the birdsong; if it is a vehicle sound, words to be marked matching the vehicle sound; and if it is a siren, words to be marked matching the siren.
Substep 3022a: mark, according to the environment information, the words to be marked in the multiple lyric lines to be marked that match the environment information, to obtain multiple marked lyric lines.
The terminal can mark words to be marked matched with the environment information in the lyric lines to be marked according to the environment information to obtain environment marking words, and the lyric lines to be marked where the environment marking words are located are determined as marked lyric lines.
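A minimal sketch of this first implementation manner follows, assuming for simplicity that "matching" is plain substring containment between an environment information value and a lyric line; a real implementation could use word segmentation or semantic matching instead. The function name and data shapes are assumptions of this sketch.

```python
def mark_lyric_lines(candidate_lines, environment_info):
    """Return (lyric line, word tag info) pairs for every candidate line
    containing a word that matches some environment information value.
    The tag info records which kind of environment information matched."""
    marked = []
    for line in candidate_lines:
        tags = {kind: value for kind, value in environment_info.items()
                if value in line}
        if tags:
            marked.append((line, tags))
    return marked

lines = ["the rain falls on the window",
         "walking past the coffee shop",
         "nothing matches here"]
marked = mark_lyric_lines(lines, {"weather": "rain", "location": "coffee shop"})
```

Here the first two lines become marked lyric lines, each carrying word tag information naming the environment information it matched, while the third line stays unmarked; this also illustrates why, as noted later in this step, the number of marked lines can be smaller than the number of candidate lines.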
The second implementation manner: the terminal acquires associated words corresponding to the environment information, and marks words to be marked that match the associated words in the multiple lyric lines to be selected of the target song.
For example, please refer to fig. 5, which shows a flowchart of a method for tagging a to-be-tagged word matching with an associated word corresponding to environment information in a to-be-selected song line according to an embodiment of the present application, and referring to fig. 5, the method may include:
Substep 3021b: acquire associated words corresponding to the environment information through semantic association.
Optionally, the terminal may obtain an associated word corresponding to the environment information from the target word bank through semantic association. The terminal can query the target word bank according to the environment information to determine the associated words associated with the environment information and acquire the associated words corresponding to the environment information.
Optionally, the target word bank may be located in the terminal or in the server. If it is located in the terminal, the terminal can query the locally stored target word bank according to the environment information to obtain the associated words. If it is located in the server, the terminal can generate a word acquisition request carrying the environment information and send it to the server; after receiving the request, the server queries its locally stored target word bank according to the environment information carried in the request, obtains the associated words corresponding to the environment information, and sends them to the terminal, which obtains them by receiving them.
For example, if the location information is Starbucks, the associated word corresponding to it may be coffee shop; if the location information is XXX mansion, the associated word may be high-rise building or building. If the weather information is rainy, the associated word may be rainstorm. If the time information is a hours, b minutes, and c seconds, the associated word may be early morning, dawn, noon, afternoon, dusk, evening, midnight, late night, or the like, depending on the values of a, b, and c. If the sound information is a human voice, the associated word may be person; if it is birdsong, the associated word may be bird; and if it is a vehicle sound, the associated word may be automobile. It is easy to understand that this paragraph describes the environment information and associated words only by way of example; in practical applications, environment information is various, and the same environment information may correspond to multiple different associated words, which is not described again here.
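A minimal sketch of the target word bank query, assuming a simple dictionary-backed bank whose entries follow the examples above; real banks would be far larger, could live on the server, and could map one value to several associated words.

```python
# Assumed dictionary-backed target word bank; entries follow the
# examples in the text and are illustrative only.
ASSOCIATION_BANK = {
    "Starbucks": ["coffee shop"],
    "rainy": ["rainstorm"],
    "birdsong": ["bird"],
}

def get_association_words(environment_value, bank=ASSOCIATION_BANK):
    """Query the target word bank for the associated words of a value;
    an unknown value simply yields no associated words."""
    return bank.get(environment_value, [])

words = get_association_words("Starbucks")
```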
Substep 3022b: determine, from the multiple lyric lines to be selected of the target song, multiple lyric lines to be marked in which words to be marked matching the associated words exist.
Optionally, the terminal determines a word to be marked matched with the association word from a plurality of lyric lines to be selected of the target song according to the association word corresponding to each piece of environment information, and determines the lyric line to be selected where the word to be marked is located as the lyric line to be marked.
Optionally, the terminal determines words to be marked matching the associated words from the multiple lyric lines to be selected of the target song according to the associated words corresponding to the location information; likewise according to the associated words corresponding to the weather information; likewise according to the associated words corresponding to the time information; and likewise according to the associated words corresponding to the sound information.
For example, if the location information is Starbucks and its associated word is coffee shop, the terminal determines words to be marked matching coffee shop from the multiple lyric lines to be selected of the target song; if the location information is XXX mansion and its associated word is high-rise building, the terminal determines words to be marked matching high-rise building. If the weather information is rainy and its associated word is rainstorm, the terminal determines words to be marked matching rainstorm. If the time information is a hours, b minutes, and c seconds and its associated word is early morning, the terminal determines words to be marked matching early morning. If the sound information is a human voice and its associated word is person, the terminal determines words to be marked matching person; if the sound information is birdsong and its associated word is bird, the terminal determines words to be marked matching bird; and if the sound information is a vehicle sound and its associated word is automobile, the terminal determines words to be marked matching automobile.
Substep 3023b: mark, according to the environment information, the words to be marked in the multiple lyric lines to be marked that match the associated words, to obtain multiple marked lyric lines.
The terminal can mark words to be marked matched with the association words in the lyric lines to be marked according to the environment information to obtain environment marking words, and the lyric lines to be marked where the environment marking words are located are determined as marked lyric lines.
Optionally, in either of the two implementation manners, when the terminal marks a word to be marked according to the environment information, it may further obtain word tag information for the resulting environment marked word. The word tag information may be the environment information according to which the word was marked. For example, marking a word according to location information yields a location marked word whose word tag information is the location information; marking according to weather information yields a weather marked word whose word tag information is the weather information; marking according to time information yields a time marked word whose word tag information is the time information; and marking according to sound information yields a sound marked word whose word tag information is the sound information.
In the embodiment of the application, after the terminal marks the words to be marked according to the environment information, the marked environment marked words can be displayed differently from unmarked words, so that a user can easily distinguish them. For example, the terminal may set the color, the font, or the font size of the environment marked words to differ from that of the unmarked words, which is not limited in this embodiment of the application.
It should be noted that, as is readily understood from the description of this step 302, in this embodiment of the present application the lyric lines to be selected that are related to the environment information of the terminal's location are marked among the multiple lyric lines to be selected of the target song. Since some candidate lines may be related to the environment information and others not, the number of marked lyric lines obtained may be less than or equal to the number of lyric lines to be selected.
Step 303, determining at least one target lyric line according to the plurality of marked lyric lines.
As is readily understood from the above description, at least one environment marked word exists in each marked lyric line. The environment marked words are marked according to the environment information, which may include at least one of location information, weather information, time information, or sound information; accordingly, the environment marked words may include at least one of location marked words, weather marked words, time marked words, or sound marked words.
Optionally, referring to fig. 6, which shows a flowchart of a method for determining a target lyric line according to a marked lyric line provided in an embodiment of the present application, referring to fig. 6, the method may include the following steps:
Substep 3031: perform de-duplication screening on the multiple marked lyric lines to obtain at least one lyric line to be composed.
Optionally, the terminal performs de-duplication screening on the plurality of marked lyric lines, including three possible implementation manners:
The first implementation manner: the terminal determines, among the multiple marked lyric lines, marked lyric lines having the same semantics, and screens out at least one lyric line to be composed from them.
Optionally, the terminal may determine the semantics of each marked lyric line through semantic analysis, grouping the marked lyric lines by semantics. For the multiple marked lyric lines of each semantic, the terminal may select one as a lyric line to be composed, thereby determining at least one lyric line to be composed.
Optionally, for the multiple marked lyric lines of each semantic, the terminal determines the marked lyric line closest to the starting position of the lyrics of the target song as the lyric line to be composed.
For example, the lyrics of the target song contain two marked lyric lines with the same semantics, "the wind is cold and it is raining" and "the wind is cold and a fine rain is falling". Of the two, "the wind is cold and it is raining" is closest to the starting position of the lyrics, so the terminal determines it as the lyric line to be composed.
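A minimal sketch of this first de-duplication manner, under the simplifying assumption that the semantic analysis is given as a `semantic_key` function mapping each line to a group key (a real system would use an actual semantic model); iterating in lyric order means the first line seen in each group is the one closest to the start.

```python
def dedupe_by_semantics(marked_lines, semantic_key):
    """Keep, for each semantic group, the marked lyric line closest to
    the start of the lyrics. `semantic_key` stands in for real semantic
    analysis and is an assumption of this sketch."""
    kept = {}
    for line in marked_lines:          # marked_lines are in lyric order
        key = semantic_key(line)
        if key not in kept:            # first occurrence = closest to start
            kept[key] = line
    return list(kept.values())

lines = ["the wind is cold and it is raining",
         "the wind is cold and a fine rain falls"]
# Toy semantic key: both lines share the prefix "the wind is cold".
to_compose = dedupe_by_semantics(lines, lambda s: s[:16])
```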
The second implementation manner: the terminal determines, among the multiple marked lyric lines, marked lyric lines having the same line tag information, and screens out at least one lyric line to be composed from them.
Each marked lyric line has line tag information, which may be composed of the word tag information of all the environment marked words in that line. It is easy to understand that the line tag information may include at least one kind of environment information, that is, it may be a single kind of environment information or a combination of at least two kinds.
Optionally, the terminal may analyze the line tag information of the multiple marked lyric lines, grouping the marked lyric lines by their line tag information. For the multiple marked lyric lines corresponding to each kind of line tag information, the terminal screens out one as a lyric line to be composed, thereby determining at least one lyric line to be composed.
Optionally, for the multiple marked lyric lines corresponding to each kind of line tag information, the terminal determines the marked lyric line closest to the starting position of the lyrics of the target song as the lyric line to be composed.
For example, the lyrics of the target song contain two marked lyric lines whose line tag information is weather information, "the wind is cold and it is raining" and "it is raining outside the window". Of the two, "the wind is cold and it is raining" is closest to the starting position of the lyrics, so the terminal determines it as the lyric line to be composed.
The third implementation manner: the terminal determines, among the multiple marked lyric lines, at least one reference lyric line having the most line tag information, and screens out the marked lyric lines whose line tag information is contained in the line tag information of a reference lyric line, obtaining at least one lyric line to be composed.
Each marked lyric line has a line mark information, the line mark information can be composed of the word mark information of all the environment mark words in the marked lyric line, and the line mark information can be single environment information or the combination of at least two environment information.
Optionally, the terminal may analyze the line tag information of the multiple marked lyric lines and take the marked lyric lines with the most line tag information as reference lyric lines. For each reference lyric line, the terminal screens out those marked lyric lines whose line tag information is a proper subset of the reference line's tag information, and takes the remaining marked lyric lines as lyric lines to be composed, thereby obtaining at least one lyric line to be composed.
For example, the line tag information of the reference lyric line may be time information + weather information + location information. The terminal then screens out the marked lyric lines whose line tag information is one of the following six types: time information + weather information, time information + location information, weather information + location information, time information, weather information, or location information, and takes the remaining marked lyric lines as lyric lines to be composed.
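The subset screening in this third manner maps naturally onto set operations. The sketch below represents each line's tag information as a set of environment information kinds and drops the lines whose set is a proper subset of a reference line's set; the function name and data shapes are assumptions of this sketch.

```python
def filter_subset_lines(marked_lines):
    """marked_lines: list of (lyric line, tag kind set) pairs.
    Reference lines are those with the largest tag sets; lines whose
    tag set is a proper subset of a reference set are screened out."""
    max_size = max(len(tags) for _, tags in marked_lines)
    references = [tags for _, tags in marked_lines if len(tags) == max_size]
    return [(line, tags) for line, tags in marked_lines
            if not any(tags < ref for ref in references)]

lines = [("line A", {"time", "weather", "location"}),
         ("line B", {"time", "weather"}),
         ("line C", {"sound"})]
to_compose = filter_subset_lines(lines)
```

Line B is screened out because time + weather is contained in the reference's time + weather + location; line C survives because sound information is not part of the reference's tag information, matching the six-type example above.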
Substep 3032: when the at least one lyric line to be composed contains environment marked words corresponding to all the environment information, determine the at least one lyric line to be composed as the at least one target lyric line.
Each lyric line to be composed may include at least one environment marked word, where the environment marked word is obtained by marking according to the environment information; the environment marked words corresponding to the environment information in this substep 3032 are likewise those obtained by marking according to the environment information.
Optionally, for each lyric line to be composed, the terminal may determine the word tag information of each environment marked word in the line, and from it determine the environment information corresponding to that marked word. The terminal then determines whether the environment information so determined covers all the environment information acquired in step 301. If it does, the terminal determines that the at least one lyric line to be composed contains environment marked words corresponding to all the environment information, and accordingly determines the at least one lyric line to be composed as the at least one target lyric line.
Exemplarily, assuming that the terminal determines 3 lyric lines to be composed in substep 3031, when the environment information includes location information, weather information, time information, and sound information, if the 3 lines include location marked words, weather marked words, time marked words, and sound marked words, the terminal determines the 3 lyric lines to be composed as target lyric lines.
Substep 3033: when the at least one lyric line to be composed does not contain environment marked words corresponding to all the environment information, acquire, according to the environment information missing from the at least one lyric line to be composed, filler words matching the missing environment information as a filler lyric line.
Each lyric line to be composed may include at least one environment marked word, where the environment marked word is obtained by marking according to the environment information; the environment marked words corresponding to the environment information in this substep 3033 are likewise those obtained by marking according to the environment information.
Optionally, for each lyric line to be composed, the terminal may determine the word tag information of each environment marked word in the line, and from it determine the environment information corresponding to that marked word. The terminal then determines whether the environment information so determined covers all the environment information acquired in step 301. If it does not, the terminal acquires, according to the missing environment information (i.e., the environment information not covered), filler words matching the missing environment information as a filler lyric line. The process of acquiring the filler words may refer to the process of acquiring the associated words in substep 3021b, which is not described again here.
Exemplarily, assuming that the terminal determines 3 lyric lines to be composed in substep 3031, when the environment information includes location information, weather information, time information, and sound information, if the 3 lines include location marked words, weather marked words, and time marked words but no sound marked words, the terminal acquires filler words matching the sound information collected in step 301 as a filler lyric line.
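A minimal sketch of substeps 3032 and 3033 taken together: compute which environment kinds are missing from the lines to be composed, and fetch a filler line per missing kind. The `filler_bank` lookup stands in for the associated-word query of substep 3021b and is an assumption of this sketch, as are the function names.

```python
def find_missing_env(to_compose_tags, all_env_kinds):
    """Return the environment kinds with no marked word in any line.
    to_compose_tags: one tag-kind set per lyric line to be composed."""
    covered = set().union(*to_compose_tags) if to_compose_tags else set()
    return all_env_kinds - covered

def fill_lines(missing, filler_bank):
    """Fetch one filler lyric line per missing environment kind;
    filler_bank is an assumed lookup (cf. substep 3021b)."""
    return [filler_bank[kind] for kind in sorted(missing)]

tags = [{"location", "weather"}, {"time"}]
missing = find_missing_env(tags, {"location", "weather", "time", "sound"})
fillers = fill_lines(missing, {"sound": "I hear the birds sing"})
```

An empty `missing` set corresponds to substep 3032 (the lines already cover all environment information); a non-empty set triggers the filler acquisition of substep 3033.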
Substep 3034: determine the at least one lyric line to be composed and the filler lyric line as the at least one target lyric line.
Step 304, generating a target lyric fragment according to the at least one target lyric line.
Optionally, the terminal may generate a target lyric fragment according to the at least one target lyric line through semantic analysis, so that the semantics of the target lyric fragment are smoother. Referring to fig. 7, which is a flowchart illustrating a method for generating a target lyric fragment according to a target lyric line according to an embodiment of the present application, referring to fig. 7, the method may include the following steps:
sub-step 3041 determines a combined priority for each of the at least one target lyric line.
Optionally, the terminal may determine a combination priority of each target lyric line according to the line marking information of each target lyric line in the at least one target lyric line. As is readily understood from step 303, the line tag information of the target lyric line is composed of the word tag information of all the environment tag words in the target lyric line, and the line tag information may be a single kind of environment information or a combination of at least two kinds of environment information.
Alternatively, the terminal may set a marking level for each environmental information, determine a marking priority of each line marking information through semantic analysis according to the marking level of the environmental information, and determine the marking priority of the line marking information of each target lyric line as a combined priority of the target lyric line.
Optionally, taking the case where the environment information includes location information, weather information, time information, and sound information as an example, the terminal may set the marking level of the time information as a first-level mark, that of the weather information as a second-level mark, that of the location information as a third-level mark, and that of the sound information as a fourth-level mark. According to normal semantic analysis, the terminal may then determine the marking priority of the line tag information as: time information + other environment information > time information > weather information + other environment information > weather information > location information + other environment information > location information > sound information. For line tag information items sharing the same highest marking level, the marking priority is determined by the lower marking levels present in the line tag information.
Illustratively, for two pieces of line marking information, namely time information + weather information + location information, and time information + location information + sound information, since the marking level of the weather information is higher than that of the location information, the marking priorities are: time information + weather information + location information > time information + location information + sound information. As a further example, for time information + weather information + location information, and time information + weather information + sound information, since the marking level of the location information is higher than that of the sound information, the marking priorities are: time information + weather information + location information > time information + weather information + sound information.
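The level-based ordering described above can be sketched as a lexicographic comparison of sorted mark levels. This is a minimal illustrative sketch, not the patent's implementation; the numeric level values and the function name are assumptions:

```python
# Minimal sketch of the mark-priority ordering described above.
# Level values are assumptions (lower number = higher mark level):
# time = first-class, weather = second, location = third, sound = fourth.
LEVELS = {"time": 1, "weather": 2, "location": 3, "sound": 4}
PAD = max(LEVELS.values()) + 1  # padding so "time" alone ranks below "time + X"

def priority_key(marks):
    """Sort key for a set of line marks: a smaller key means a higher priority.
    Comparing sorted level lists lexicographically reproduces the chain
    time+X > time > weather+X > weather > location+X > location > sound,
    and breaks ties on the highest level by the next-lower level present."""
    levels = sorted(LEVELS[m] for m in marks)
    return levels + [PAD] * (len(LEVELS) - len(levels))
```

For instance, time + weather + location yields the key [1, 2, 3, 5], which sorts before time + location + sound with [1, 3, 4, 5], matching the example above.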
Sub-step 3042, combining the at least one target lyric line according to the combination priority to obtain a target lyric fragment.
Optionally, for any two target lyric lines, the terminal places the line with the higher combination priority before the line with the lower combination priority, thereby obtaining the target lyric fragment.
For example, suppose that the terminal determines four target lyric lines in step 303: "the midnight before dawn", "the wind is cold and it is raining", "I hear you say he is gentler than me", and "in a corner of the beverage shop, the library, or the classroom". The line marking information of "the midnight before dawn" is time information, that of "the wind is cold and it is raining" is weather information, that of "I hear you say he is gentler than me" is sound information, and that of "in a corner of the beverage shop, the library, or the classroom" is location information. The combination priorities of the four target lyric lines can be determined according to the priorities of the line marking information in sub-step 3041 as: "the midnight before dawn" > "the wind is cold and it is raining" > "in a corner of the beverage shop, the library, or the classroom" > "I hear you say he is gentler than me". The target lyric fragment obtained by combining the four target lyric lines according to the combination priority may then be "the midnight before dawn/the wind is cold and it is raining/in a corner of the beverage shop, the library, or the classroom/I hear you say he is gentler than me".
As a further example, suppose that the terminal determines three target lyric lines in step 303: "the wind is cold and it is raining", "I hear you say he is gentler than me", and "the cafe at midnight is very quiet". The line marking information of "the wind is cold and it is raining" is weather information, that of "I hear you say he is gentler than me" is sound information, and that of "the cafe at midnight is very quiet" is time information + location information. The combination priorities of the three target lyric lines can be determined according to the priorities of the line marking information in sub-step 3041 as: "the cafe at midnight is very quiet" > "the wind is cold and it is raining" > "I hear you say he is gentler than me", and the target lyric fragment obtained by combining the three target lyric lines according to the combination priority may be "the cafe at midnight is very quiet/the wind is cold and it is raining/I hear you say he is gentler than me".
As a further example, suppose that the terminal determines three target lyric lines in step 303: "the midnight before dawn", "in a corner of the beverage shop, the coffee shop, or the classroom", and "the rain accompanies me as I cry". The line marking information of "the midnight before dawn" is time information, that of "in a corner of the beverage shop, the coffee shop, or the classroom" is location information, and that of "the rain accompanies me as I cry" is weather information + sound information. The combination priorities of the three target lyric lines can be determined according to the priorities of the line marking information in sub-step 3041 as: "the midnight before dawn" > "the rain accompanies me as I cry" > "in a corner of the beverage shop, the coffee shop, or the classroom", and the target lyric fragment obtained by combining them according to the combination priority may be "the midnight before dawn/the rain accompanies me as I cry/in a corner of the beverage shop, the coffee shop, or the classroom".
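Sub-steps 3041 and 3042 can be sketched end to end as follows. This is an illustrative sketch under assumed names; the level values and the "/" separator follow the examples above, and the lyric strings are only sample data:

```python
# Sketch of sub-steps 3041-3042: rank marked lines, then join them.
LEVELS = {"time": 1, "weather": 2, "location": 3, "sound": 4}
PAD = max(LEVELS.values()) + 1

def priority_key(marks):
    # Lower key = higher combination priority (see sub-step 3041).
    levels = sorted(LEVELS[m] for m in marks)
    return levels + [PAD] * (len(LEVELS) - len(levels))

def combine_lines(lines):
    """lines: list of (lyric_text, marks) pairs. Returns the target lyric
    fragment: the lines joined with '/' in descending combination priority."""
    ordered = sorted(lines, key=lambda lm: priority_key(lm[1]))
    return "/".join(text for text, _ in ordered)
```

Applied to sample lines marked weather, sound, and time + location, the time + location line leads the fragment, mirroring the quiet-cafe example above.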
It should be noted that, in the embodiment of the present application, performing de-duplication screening on the plurality of marked lyric lines reduces the redundant information in the generated target lyric fragment to a certain extent and helps ensure that the target lyric fragment conforms to normal semantics. In addition, the embodiment of the present application takes determining the combination priority of the target lyric lines in step 304 as an example; in practical applications, the priority of each marked lyric line may instead be determined immediately after the lyric lines to be selected are marked according to the environment information, which is not limited in the embodiment of the present application.
And 305, sharing the target lyric fragment.
Optionally, the terminal may send the target lyric fragment to another terminal through a communication connection with the other terminal, so as to implement sharing of the target lyric fragment.
And step 306, when a mark display instruction for an environment mark word in a marked lyric line is detected, displaying the word marking information of the environment mark word.
Each marked lyric line comprises at least one environment marked word, and the environment marked word is obtained by marking the to-be-marked word according to the environment information. The environment information may include at least one of location information, weather information, time information, or sound information, and accordingly, the environment tag word may include at least one of a location tag word, a weather tag word, a time tag word, or a sound tag word. Moreover, it is easy to know from the description of step 302 that each environmental tag word further has word tag information, which may be environmental information corresponding to the environmental tag word.
In the embodiment of the application, when the terminal detects a mark display instruction for an environment mark word, the terminal can display the word marking information of the environment mark word. Optionally, the user may click an environment mark word in the lyrics of the target song to trigger the mark display instruction for that environment mark word; when the terminal detects the instruction, it obtains the word marking information of the environment mark word and displays it. Depending on the type of the environment mark word, the terminal displaying the word marking information may involve the following four possible cases:
in the first case: and when a mark display instruction for the position mark words is detected, the terminal displays the position information.
The position mark words are obtained by marking the to-be-marked words according to the position information, and the word marking information of a position mark word may be the position information. Optionally, the terminal may display the position information near the position mark word, for example above or below it, with the position information and the position mark word on the same layer; or the terminal may overlay the position information on the position mark word in a stacked manner, in which case the terminal may set the transparency of the position information so that it does not obscure the position mark word.
In the second case: and when a mark display instruction of the weather mark words is detected, the terminal displays weather information.
The weather marking words are obtained by marking the to-be-marked words according to the weather information, and the word marking information of the weather marking words can be the weather information.
In the third case: and when a mark display instruction for the time mark words is detected, the terminal displays the time information.
The time-stamped words are obtained by stamping the to-be-stamped words according to the time information, and the word stamping information of the time-stamped words can be the time information.
The implementation processes of the second case and the third case may refer to the first case, and are not described herein again in this embodiment of the application.
In the fourth case: when a mark display instruction for a sound mark word is detected, the terminal plays the sound information, where the playing volume of the sound information is smaller than that of the target song.
The sound mark words are obtained by marking the to-be-marked words according to the sound information, and the word marking information of a sound mark word may be the sound information. Optionally, the terminal may play the target song and the sound information synchronously, with the playing volume of the sound information smaller than that of the target song, so as to prevent the sound information from interfering with the target song.
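The four display cases above amount to a small dispatch on the type of the environment mark word. The sketch below is illustrative only: the returned tuples stand in for UI actions, and the 0.5 volume factor is an assumption, since the text only requires the sound information to play more quietly than the target song:

```python
def show_word_mark(kind, mark_info, song_volume=1.0):
    """Hypothetical dispatch over the four display cases.
    Returns a description of the action rather than driving a real UI."""
    if kind in ("location", "weather", "time"):
        return ("display", mark_info)       # cases 1-3: show the info text
    if kind == "sound":
        # case 4: play the sound information below the song's volume
        return ("play", mark_info, song_volume * 0.5)
    raise ValueError(f"unknown environment mark word kind: {kind}")
```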
In summary, according to the lyric sharing method provided by the embodiment of the application, the environment information of the position where the terminal is located is obtained while the terminal plays the target song, a plurality of lyric lines to be selected of the target song are marked according to the environment information to obtain a plurality of marked lyric lines, a target lyric fragment is generated from the plurality of marked lyric lines, and the target lyric fragment is shared. Because lyrics can be shared without the user manually selecting them, the lyric sharing process is simplified and the trigger rate of lyric sharing is improved.
It should be noted that the order of the steps of the lyric sharing method provided in the embodiment of the present application may be appropriately adjusted, and steps may be added or removed as circumstances require. Any variation readily conceived by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application, and is therefore not described again.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 8, a block diagram of a lyric sharing apparatus 800 according to an embodiment of the present application is shown, where the lyric sharing apparatus 800 may be a program component in a terminal, and is configured to execute the lyric sharing method according to the embodiment. Referring to fig. 8, the lyric sharing device 800 may include, but is not limited to:
the obtaining module 810 is configured to obtain environment information of a location where the terminal is located in a process of playing the target song by the terminal;
a marking module 820, configured to mark multiple lyric lines to be selected of a target song according to environment information, so as to obtain multiple marked lyric lines;
a generating module 830 for generating a target lyric fragment according to the plurality of marked lyric lines;
the sharing module 840 is configured to share the target lyric fragment.
To sum up, in the lyric sharing apparatus provided in the embodiment of the present application, the obtaining module obtains the environment information of the position where the terminal is located while the terminal plays the target song, the marking module marks a plurality of lyric lines to be selected of the target song according to the environment information to obtain a plurality of marked lyric lines, the generating module generates a target lyric fragment from the plurality of marked lyric lines, and the sharing module shares the target lyric fragment. Lyric sharing is achieved without the user manually selecting the lyrics, which helps simplify the lyric sharing process.
Optionally, referring to fig. 9, which shows a block diagram of a generating module 830 provided in an embodiment of the present application, referring to fig. 9, the generating module 830 may include:
a determining submodule 831 for determining at least one target lyric line based on the plurality of marked lyric lines;
a generation submodule 832 for generating a target lyric fragment from the at least one target lyric line.
Optionally, the environment information includes: at least one of location information, weather information, time information, or sound information; an obtaining module 810 configured to:
when the environment information comprises position information, acquiring the position information through a positioning component in the terminal;
when the environment information comprises weather information, acquiring position information through a positioning component in the terminal, and acquiring the weather information according to the position information;
when the environment information comprises time information, acquiring the time information through a clock component in the terminal;
and when the environment information comprises sound information, acquiring the sound information through a sound acquisition component in the terminal.
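The four acquisition branches handled by the obtaining module can be sketched as below. This is a minimal sketch under assumed names: the callable parameters stand in for the terminal's positioning component, a weather lookup keyed by position, the clock component, and the sound-collection component:

```python
def get_environment_info(kinds, locate, query_weather, now, record):
    """Gather only the requested kinds of environment information.
    As described above, weather information is derived from the position,
    so the positioning component is consulted even when only weather
    information is requested."""
    info = {}
    if "location" in kinds or "weather" in kinds:
        position = locate()                 # positioning component
        if "location" in kinds:
            info["location"] = position
        if "weather" in kinds:
            info["weather"] = query_weather(position)
    if "time" in kinds:
        info["time"] = now()                # clock component
    if "sound" in kinds:
        info["sound"] = record()            # sound-collection component
    return info
```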
Optionally, a marking module 820 for:
determining, from a plurality of lyric lines to be selected of the target song, a plurality of lyric lines to be marked that contain to-be-marked words matching the environment information;
and marking, according to the environment information, the to-be-marked words matching the environment information in the plurality of lyric lines to be marked, to obtain a plurality of marked lyric lines.
Optionally, a marking module 820 for:
acquiring associated words corresponding to the environment information through semantic association;
determining, from a plurality of lyric lines to be selected of the target song, a plurality of lyric lines to be marked that contain to-be-marked words matching the associated words;
and marking, according to the environment information, the to-be-marked words matching the associated words in the plurality of lyric lines to be marked, to obtain a plurality of marked lyric lines.
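One way to realize the association-based marking described above is a word-level lookup. The sketch below is illustrative: the association table stands in for the semantic-association step, which the text leaves open, and all names are assumptions:

```python
def mark_lyric_lines(candidate_lines, associations):
    """associations: maps an environment-information kind to the set of
    associated words it may match (e.g. rainy weather -> {'rain', 'raining'}).
    Returns (line, word_marks) pairs for every candidate lyric line that
    contains at least one matching to-be-marked word."""
    marked = []
    for line in candidate_lines:
        word_marks = {word: kind
                      for kind, assoc in associations.items()
                      for word in line.split() if word in assoc}
        if word_marks:
            marked.append((line, word_marks))
    return marked
```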
Optionally, the marked lyric line comprises at least one environment marked word, and the environment marked word is obtained by marking the to-be-marked word according to the environment information; referring to fig. 10, a block diagram of another lyric sharing device 800 according to an embodiment of the present application is shown, where on the basis of fig. 8, the lyric sharing device 800 further includes:
and the display module 850 is configured to display the word tagging information of the environment tagging word when a tag display instruction for the environment tagging word is detected.
Optionally, the word tag information includes environment information, and the environment information includes: at least one of location information, weather information, time information, or sound information; a presentation module 850 for:
when a mark display instruction for the position mark word is detected, displaying position information;
when a mark display instruction for a weather mark word is detected, displaying weather information;
when a mark display instruction for the time mark word is detected, displaying time information;
and when a mark display instruction of the sound mark word is detected, playing sound information, wherein the playing volume of the sound information is less than that of the target song.
Optionally, the environment information includes: at least one of position information, weather information, time information, or sound information; each marked lyric line includes at least one environment mark word, and the environment mark word is obtained by marking a to-be-marked word according to the environment information; referring to fig. 11, which shows a block diagram of a determining sub-module 831 provided in an embodiment of the present application, the determining sub-module 831 may include:
a first determining unit 8311, configured to perform de-duplication screening on the plurality of marked lyric lines to obtain at least one lyric line to be combined;
a second determining unit 8312, configured to determine, when the at least one lyric line to be combined includes environment mark words corresponding to all of the environment information, the at least one lyric line to be combined as at least one target lyric line;
a third determining unit 8313, configured to, when the at least one lyric line to be combined does not include environment mark words corresponding to all of the environment information, obtain filling words matching the missing environment information as a filling lyric line, and determine the at least one lyric line to be combined and the filling lyric line as at least one target lyric line.
Optionally, the first determining unit 8311 is configured to:
determining a plurality of marked lyric lines with the same semantics among the plurality of marked lyric lines, and screening out at least one lyric line to be combined from the marked lyric lines with the same semantics; or,
determining a plurality of marked lyric lines with the same line marking information among the plurality of marked lyric lines, and screening out at least one lyric line to be combined from the marked lyric lines with the same line marking information; or,
determining at least one reference lyric line with the most line marking information among the plurality of marked lyric lines, and screening out the marked lyric lines whose line marking information is contained in the line marking information of a reference lyric line, to obtain at least one lyric line to be combined.
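The third screening strategy above (keep the lines whose line marking information falls within that of a most-marked reference line) might be sketched as follows; this is an illustrative sketch with assumed names, not the patent's implementation:

```python
def screen_by_reference(marked_lines):
    """marked_lines: list of (lyric_text, mark_set) pairs. A reference
    line is one carrying the most kinds of line marking information;
    every line whose marks are a subset of some reference line's marks
    is kept as a lyric line to be combined."""
    most = max(len(marks) for _, marks in marked_lines)
    references = [set(marks) for _, marks in marked_lines if len(marks) == most]
    return [(line, marks) for line, marks in marked_lines
            if any(set(marks) <= ref for ref in references)]
```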
Optionally, generating submodule 832 is configured to:
determining a combined priority for each of the at least one target lyric line;
and combining at least one target lyric line according to the combination priority to obtain a target lyric fragment.
The embodiment of the application provides a lyric sharing device, including: a processor and a memory, wherein:
the memory is used for storing a computer program.
The processor is configured to execute the computer program stored in the memory, and implement the lyric sharing method provided in the foregoing embodiment.
Please refer to fig. 12, which illustrates a schematic structural diagram of a lyric sharing apparatus 1200 according to an embodiment of the present application. The apparatus 1200 may be a portable mobile terminal, such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The apparatus 1200 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the apparatus 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1202 may include one or more computer-readable storage media, which may be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1202 is configured to store at least one instruction for execution by the processor 1201 to implement the lyric sharing method provided by an embodiment of the present application.
In some embodiments, the apparatus 1200 may further include: a peripheral interface 1203 and at least one peripheral. The processor 1201, memory 1202, and peripheral interface 1203 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, display 1205, camera assembly 1206, audio circuitry 1207, positioning assembly 1208, or power source 1209.
The peripheral interface 1203 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, memory 1202, and peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices by electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1204 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1204 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to acquire touch signals on or over the surface of the display screen 1205. The touch signal may be input to the processor 1201 as a control signal for processing. At this point, the display 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1205 may be one, providing the front panel of the apparatus 1200; in other embodiments, the display 1205 may be at least two, respectively disposed on different surfaces of the apparatus 1200 or in a folded design; in still other embodiments, the display 1205 may be a flexible display disposed on a curved surface or on a folded surface of the device 1200. Even further, the display screen 1205 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The display panel 1205 may be an OLED (Organic Light-Emitting Diode) display panel.
Camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1206 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1201 for processing or inputting the electric signals into the radio frequency circuit 1204 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the apparatus 1200. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The positioning component 1208 is used to locate the current geographic location of the apparatus 1200 to implement navigation or LBS (Location Based Service). The positioning component 1208 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1209 is used to power the various components in the apparatus 1200. The power source 1209 may be alternating current, direct current, disposable or rechargeable. When the power source 1209 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the device 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, fingerprint sensor 1214, optical sensor 1215, and proximity sensor 1216.
The acceleration sensor 1211 may detect magnitudes of accelerations on three coordinate axes of a coordinate system established with the apparatus 1200. For example, the acceleration sensor 1211 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1201 may control the touch display 1205 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1212 may detect a body direction and a rotation angle of the apparatus 1200, and the gyro sensor 1212 may collect a 3D motion of the apparatus 1200 by the user in cooperation with the acceleration sensor 1211. The processor 1201 can implement the following functions according to the data collected by the gyro sensor 1212: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1213 may be disposed on a side bezel of device 1200 and/or an underlying layer of touch display 1205. When the pressure sensors 1213 are disposed on the side frames of the device 1200, the user's holding signal of the device 1200 can be detected, and the processor 1201 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensors 1213. When the pressure sensor 1213 is disposed at a lower layer of the touch display screen 1205, the processor 1201 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1205. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1214 is used for collecting a fingerprint of the user, and the processor 1201 identifies the user according to the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 1201 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1214 may be disposed on the front, back, or side of the device 1200. When a physical key or vendor Logo is provided on the device 1200, the fingerprint sensor 1214 may be integrated with the physical key or vendor Logo.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the touch display 1205 according to the ambient light intensity collected by the optical sensor 1215: when the ambient light intensity is high, the display brightness of the touch display 1205 is increased; when the ambient light intensity is low, the display brightness of the touch display 1205 is decreased. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera 1206 based on the ambient light intensity collected by the optical sensor 1215.
The proximity sensor 1216, also known as a distance sensor, is typically disposed on the front panel of the apparatus 1200 and is used to collect the distance between the user and the front of the apparatus 1200. In one embodiment, when the proximity sensor 1216 detects that this distance gradually decreases, the processor 1201 controls the touch display 1205 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 1216 detects that the distance gradually increases, the processor 1201 controls the touch display 1205 to switch from the dark-screen state back to the bright-screen state.
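The proximity-driven screen switching described above can be sketched as a small state update. This is a hypothetical simplification for illustration only; the actual driver logic is device-specific and not part of this application:

```python
def update_screen_state(prev_distance, distance, screen_on):
    """Proximity-driven screen switching: dark screen while the user
    approaches the front panel, bright screen while they move away."""
    if distance < prev_distance:
        return False  # approaching: switch to the dark-screen state
    if distance > prev_distance:
        return True   # moving away: switch back to the bright-screen state
    return screen_on  # unchanged distance: keep the current state

# Phone raised to the ear, then lowered again
assert update_screen_state(10.0, 3.0, True) is False
assert update_screen_state(3.0, 10.0, False) is True
```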
Those skilled in the art will appreciate that the configuration shown in fig. 8 does not limit the apparatus 1200, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The embodiment of the present application provides a storage medium, and when a program in the storage medium is executed by a processor, the lyric sharing method provided in the above embodiment can be implemented.
The term "at least one of A or B" in this application describes an association relationship between associated objects and indicates that three relationships may exist; for example, "at least one of A or B" may mean: A alone, B alone, or both A and B. Similarly, "at least one of A, B or C" indicates that seven relationships may exist: A alone, B alone, C alone, A and B, A and C, B and C, or A, B and C together. Similarly, "at least one of A, B, C or D" indicates that fifteen relationships may exist: A alone, B alone, C alone, D alone, A and B, A and C, A and D, B and C, B and D, C and D, A, B and C, A, B and D, A, C and D, B, C and D, or A, B, C and D together.
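The counts above (3, 7, 15) are just the non-empty subsets of the listed objects, i.e. 2^n − 1 for n objects. A quick check:

```python
from itertools import combinations

def at_least_one(items):
    """Enumerate all non-empty subsets of items, i.e. the cases covered
    by the phrase "at least one of ..." as used in this application."""
    subsets = []
    for r in range(1, len(items) + 1):
        subsets.extend(combinations(items, r))
    return subsets

# "at least one of A or B"       -> 2**2 - 1 = 3 cases
assert len(at_least_one(["A", "B"])) == 3
# "at least one of A, B or C"    -> 2**3 - 1 = 7 cases
assert len(at_least_one(["A", "B", "C"])) == 7
# "at least one of A, B, C or D" -> 2**4 - 1 = 15 cases
assert len(at_least_one(["A", "B", "C", "D"])) == 15
```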
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (13)

1. A method for sharing lyrics, the method comprising:
in the process of playing a target song by a terminal, acquiring environmental information of the position of the terminal;
according to the environment information, marking a plurality of lyric lines to be selected of the target song to obtain a plurality of marked lyric lines;
generating a target lyric fragment according to the plurality of marked lyric lines;
and sharing the target lyric fragment.
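As a minimal end-to-end illustration of the four steps of claim 1 (acquire environment information, mark candidate lines, generate a fragment, share), the sketch below uses a plain substring match as a hypothetical stand-in for the marking strategies; all names are illustrative and not part of the claimed method:

```python
def mark_lines(lyric_lines, env_words):
    """Step 2 (sketch): mark candidate lyric lines that contain any word
    matching the environment information (hypothetical substring match)."""
    marked = []
    for line in lyric_lines:
        hits = [w for w in env_words if w in line]
        if hits:
            marked.append({"text": line, "marks": hits})
    return marked

def generate_fragment(lyric_lines, env_words):
    """Step 3 (sketch): join the marked lines into the target lyric
    fragment; step 4 would then share this fragment."""
    marked = mark_lines(lyric_lines, env_words)
    return "\n".join(m["text"] for m in marked)

lines = ["rain falls on the city", "you smiled at me", "under the summer sun"]
fragment = generate_fragment(lines, ["rain", "sun"])
assert fragment == "rain falls on the city\nunder the summer sun"
```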
2. The method of claim 1, wherein generating a target lyric fragment from the plurality of marked lyric lines comprises:
determining at least one target lyric line according to the plurality of marked lyric lines;
generating the target lyric fragment according to the at least one target lyric line.
3. The method of claim 1, wherein the context information comprises: at least one of location information, weather information, time information, or sound information;
the acquiring the environmental information of the position where the terminal is located includes:
when the environment information comprises the position information, acquiring the position information through a positioning component in the terminal;
when the environment information comprises the weather information, acquiring the position information through a positioning component in the terminal, and acquiring the weather information according to the position information;
when the environment information comprises the time information, acquiring the time information through a clock component in the terminal;
and when the environment information comprises the sound information, acquiring the sound information through a sound acquisition assembly in the terminal.
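The per-kind acquisition in claim 3 is a simple dispatch: each kind of environment information is read from a different terminal component, with weather additionally requiring the location first. A sketch, where `fake_terminal` and its component callables are hypothetical stand-ins for the device's positioning, clock, and sound-acquisition components:

```python
def acquire_environment_info(kinds, terminal):
    """Dispatch acquisition by the kinds of environment info requested
    (terminal is a hypothetical mapping of component callables)."""
    info = {}
    if "location" in kinds:
        info["location"] = terminal["positioning"]()     # positioning component
    if "weather" in kinds:
        loc = terminal["positioning"]()                  # weather needs location first
        info["weather"] = terminal["weather_lookup"](loc)
    if "time" in kinds:
        info["time"] = terminal["clock"]()               # clock component
    if "sound" in kinds:
        info["sound"] = terminal["microphone"]()         # sound acquisition component
    return info

fake_terminal = {
    "positioning": lambda: "Guangzhou",
    "weather_lookup": lambda loc: f"rainy in {loc}",
    "clock": lambda: "2019-10-17 20:00",
    "microphone": lambda: "street noise",
}
info = acquire_environment_info(["location", "weather", "time"], fake_terminal)
assert info["weather"] == "rainy in Guangzhou"
```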
4. The method of claim 1, wherein the marking a plurality of lyric lines to be selected of the target song according to the environment information to obtain a plurality of marked lyric lines comprises:
determining, from the plurality of lyric lines to be selected of the target song, a plurality of lyric lines to be marked that contain words to be marked matching the environment information;
and marking, according to the environment information, the words to be marked that match the environment information in the plurality of lyric lines to be marked, to obtain the plurality of marked lyric lines.
5. The method of claim 1, wherein the marking a plurality of lyric lines to be selected of the target song according to the environment information to obtain a plurality of marked lyric lines comprises:
acquiring association words corresponding to the environment information through semantic association;
determining, from the plurality of lyric lines to be selected of the target song, a plurality of lyric lines to be marked that contain words to be marked matching the association words;
and marking, according to the environment information, the words to be marked that match the association words in the plurality of lyric lines to be marked, to obtain the plurality of marked lyric lines.
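Claim 5 differs from claim 4 only in that matching goes through association words derived from the environment information. A sketch with a hypothetical association table standing in for semantic association:

```python
# Hypothetical association table standing in for semantic association
ASSOCIATIONS = {
    "rainy": ["rain", "umbrella", "storm"],
    "night": ["moon", "stars", "dark"],
}

def mark_by_association(lyric_lines, env_value):
    """Match lyric lines against the association words of the
    environment value rather than the value itself."""
    assoc = ASSOCIATIONS.get(env_value, [])
    marked = []
    for line in lyric_lines:
        hits = [w for w in assoc if w in line]
        if hits:
            marked.append({"text": line, "marks": hits})
    return marked

lines = ["the moon hangs low", "dancing in the rain", "a sunny afternoon"]
marked = mark_by_association(lines, "rainy")
assert [m["text"] for m in marked] == ["dancing in the rain"]
```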
6. The method according to claim 4 or 5, wherein the marked lyric line comprises at least one environment marked word, and the environment marked word is obtained by marking the word to be marked according to the environment information;
after marking a plurality of lyric lines to be selected of the target song according to the environment information to obtain a plurality of marked lyric lines, the method further comprises:
and when a mark display instruction of the environment mark word is detected, displaying word mark information of the environment mark word.
7. The method of claim 6, wherein the word tagging information comprises the context information, and wherein the context information comprises: at least one of location information, weather information, time information, or sound information;
when a mark display instruction for the environment mark word is detected, displaying word mark information of the environment mark word, including:
when a mark display instruction for a position mark word is detected, displaying the position information;
when a mark display instruction for a weather mark word is detected, displaying the weather information;
when a mark display instruction for a time mark word is detected, displaying the time information;
and when a mark display instruction of the sound mark word is detected, playing the sound information, wherein the playing volume of the sound information is less than that of the target song.
8. The method of claim 2, wherein the context information comprises: at least one of position information, weather information, time information or sound information, wherein at least one environment marking word exists in the marked lyric line and is marked according to the environment information;
the determining at least one target lyric line from the plurality of marked lyric lines comprises:
carrying out duplication elimination screening on the plurality of marked lyric lines to obtain at least one lyric line to be composed;
when the at least one lyric line to be composed contains all environment marking words corresponding to the environment information, determining the at least one lyric line to be composed as the at least one target lyric line;
and when the at least one lyric line to be composed does not contain all environment marking words corresponding to the environment information, acquiring, according to the environment information missing from the at least one lyric line to be composed, a filling word matching the missing environment information as a filling lyric line, and determining the at least one lyric line to be composed and the filling lyric line as the at least one target lyric line.
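The two branches of claim 8, after de-duplication, amount to: if every environment marking word is already covered, use the lines as-is; otherwise pad with filler lines for the missing environment information. A sketch, where `make_filler` is a hypothetical hook for obtaining the filling word:

```python
def choose_target_lines(marked_lines, env_words, make_filler):
    """De-duplicate marked lines, then append a filler line for any
    environment word not yet covered (make_filler is a hypothetical hook)."""
    seen, candidates = set(), []
    for line in marked_lines:                # de-duplication screening
        if line["text"] not in seen:
            seen.add(line["text"])
            candidates.append(line)
    covered = {w for line in candidates for w in line["marks"]}
    missing = [w for w in env_words if w not in covered]
    targets = list(candidates)
    for w in missing:                        # pad missing env info with fillers
        targets.append({"text": make_filler(w), "marks": [w]})
    return targets

marked = [{"text": "rain on my window", "marks": ["rain"]},
          {"text": "rain on my window", "marks": ["rain"]}]
targets = choose_target_lines(marked, ["rain", "night"],
                              lambda w: f"({w} tonight)")
assert [t["text"] for t in targets] == ["rain on my window", "(night tonight)"]
```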
9. The method of claim 8, wherein the de-duplication screening of the marked lyric lines to obtain at least one lyric line to be composed comprises:
determining a plurality of marked lyric lines with the same semantics among the plurality of marked lyric lines, and screening the at least one lyric line to be composed from the plurality of marked lyric lines with the same semantics; or,
determining a plurality of marked lyric lines with the same line marking information among the plurality of marked lyric lines, and screening out the at least one lyric line to be composed from the plurality of marked lyric lines with the same line marking information; or,
determining at least one reference lyric line with the most line marking information among the plurality of marked lyric lines, and screening out the marked lyric lines whose line marking information belongs to the line marking information of the reference lyric line, to obtain the at least one lyric line to be composed.
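The third screening strategy of claim 9 (keep the reference line with the most marking information and drop lines whose marks it already covers) can be sketched with set containment; the data layout is a hypothetical simplification:

```python
def screen_by_most_marks(marked_lines):
    """Keep the reference line(s) with the most mark information, and drop
    any line whose marks are a subset of a reference line's marks."""
    max_marks = max(len(l["marks"]) for l in marked_lines)
    references = [l for l in marked_lines if len(l["marks"]) == max_marks]
    ref_sets = [set(r["marks"]) for r in references]
    kept = []
    for line in marked_lines:
        if line in references:
            kept.append(line)                # reference lines always survive
        elif not any(set(line["marks"]) <= rs for rs in ref_sets):
            kept.append(line)                # marks not covered by a reference
    return kept

marked = [{"text": "rain and moon", "marks": ["rain", "moon"]},
          {"text": "just rain", "marks": ["rain"]},
          {"text": "sunlight", "marks": ["sun"]}]
kept = screen_by_most_marks(marked)
assert [l["text"] for l in kept] == ["rain and moon", "sunlight"]
```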
10. The method of claim 2, wherein generating the target lyric fragment from the at least one target lyric line comprises:
determining a combined priority for each of the at least one target lyric line;
and combining the at least one target lyric line according to the combination priority to obtain the target lyric fragment.
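The combination step of claim 10 reduces to ordering the target lines by their combination priority and joining them. A sketch, where the priority function (here, the number of environment marks on a line) is a hypothetical example of how a combination priority might be computed:

```python
def combine_by_priority(target_lines, priority):
    """Order target lyric lines by a combination priority (a caller-supplied
    scoring function) and join them into the target lyric fragment."""
    ordered = sorted(target_lines, key=priority, reverse=True)
    return "\n".join(l["text"] for l in ordered)

lines = [{"text": "b-line", "marks": ["x"]},
         {"text": "a-line", "marks": ["x", "y"]}]
# e.g. prioritize lines carrying more environment marks
fragment = combine_by_priority(lines, lambda l: len(l["marks"]))
assert fragment == "a-line\nb-line"
```

Since `sorted` is stable, lines of equal priority keep their original relative order.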
11. A lyric sharing apparatus, comprising:
the acquisition module is used for acquiring the environmental information of the position of the terminal in the process of playing the target song by the terminal;
the marking module is used for marking a plurality of lyric lines to be selected of the target song according to the environment information to obtain a plurality of marked lyric lines;
a generating module for generating a target lyric fragment according to the plurality of marked lyric lines;
and the sharing module is used for sharing the target lyric fragment.
12. A lyric sharing apparatus, comprising: a processor and a memory, wherein:
the memory for storing a computer program;
the processor is configured to execute the computer program stored in the memory to implement the lyric sharing method according to any one of claims 1 to 10.
13. A storage medium, characterized in that a program in the storage medium, when executed by a processor, is capable of implementing the lyric sharing method according to any one of claims 1 to 10.
CN201910986713.2A 2019-10-17 2019-10-17 Lyric sharing method and device and storage medium Pending CN110750675A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910986713.2A CN110750675A (en) 2019-10-17 2019-10-17 Lyric sharing method and device and storage medium


Publications (1)

Publication Number Publication Date
CN110750675A 2020-02-04

Family

ID=69278645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910986713.2A Pending CN110750675A (en) 2019-10-17 2019-10-17 Lyric sharing method and device and storage medium

Country Status (1)

Country Link
CN (1) CN110750675A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951554A (en) * 2015-06-29 2015-09-30 浙江大学 Method for matching landscape with verses according with artistic conception of landscape
CN105554273A (en) * 2015-12-15 2016-05-04 魅族科技(中国)有限公司 Alarm clock reminding method and terminal
CN106446048A (en) * 2016-08-31 2017-02-22 维沃移动通信有限公司 Song recommendation method and mobile terminal
US20170262256A1 (en) * 2016-03-10 2017-09-14 Panasonic Automotive Systems Company of America, Division of Panasonic Corporation of North Americ Environment based entertainment
CN108108338A (en) * 2018-01-05 2018-06-01 维沃移动通信有限公司 A kind of method for processing lyric, lyric display method, server and mobile terminal


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112423107A (en) * 2020-11-18 2021-02-26 北京字跳网络技术有限公司 Lyric video display method and device, electronic equipment and computer readable medium
CN112423107B (en) * 2020-11-18 2022-05-17 北京字跳网络技术有限公司 Lyric video display method and device, electronic equipment and computer readable medium

Similar Documents

Publication Publication Date Title
CN107885533B (en) Method and device for managing component codes
CN109982102B (en) Interface display method and system for live broadcast room, live broadcast server and anchor terminal
CN109327608B (en) Song sharing method, terminal, server and system
CN109144346B (en) Song sharing method and device and storage medium
CN112181572A (en) Interactive special effect display method and device, terminal and storage medium
CN113411680B (en) Multimedia resource playing method, device, terminal and storage medium
CN110750734A (en) Weather display method and device, computer equipment and computer-readable storage medium
CN111935516B (en) Audio file playing method, device, terminal, server and storage medium
CN111092991B (en) Lyric display method and device and computer storage medium
CN110677713B (en) Video image processing method and device and storage medium
CN109660876B (en) Method and device for displaying list
CN113556481B (en) Video special effect generation method and device, electronic equipment and storage medium
CN111083554A (en) Method and device for displaying live gift
CN111192072A (en) User grouping method and device and storage medium
CN112770177B (en) Multimedia file generation method, multimedia file release method and device
CN110750675A (en) Lyric sharing method and device and storage medium
CN111063372B (en) Method, device and equipment for determining pitch characteristics and storage medium
CN111641853B (en) Multimedia resource loading method and device, computer equipment and storage medium
CN112132472A (en) Resource management method and device, electronic equipment and computer readable storage medium
CN113936240A (en) Method, device and equipment for determining sample image and storage medium
CN109275015B (en) Method, device and storage medium for displaying virtual article
CN113613028A (en) Live broadcast data processing method, device, terminal, server and storage medium
CN108763182B (en) Method and device for rendering lyrics
CN108831423B (en) Method, device, terminal and storage medium for extracting main melody tracks from audio data
CN112560903A (en) Method, device and equipment for determining image aesthetic information and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200204