CN112699269A - Lyric display method, device, electronic equipment and computer readable storage medium


Info

Publication number
CN112699269A
Authority
CN
China
Prior art keywords
melody
original
target
song
lyrics
Prior art date
Legal status
Pending
Application number
CN202011631075.1A
Other languages
Chinese (zh)
Inventor
陈纯
马小坤
张馨予
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011631075.1A priority Critical patent/CN112699269A/en
Publication of CN112699269A publication Critical patent/CN112699269A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/685 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using automatically derived transcript of audio data, e.g. lyrics

Abstract

The disclosure relates to a lyric display method, apparatus, electronic device, computer-readable storage medium, and computer program product. The method includes the following steps: displaying a melody editing page of an original song, wherein the original song comprises an original melody and original lyrics; acquiring target melody configuration information in response to a trigger operation on a melody configuration control in the melody editing page; acquiring a target melody generated according to the target melody configuration information; and displaying the lyrics according to the playing times of the characters of the original lyrics in the target melody. By automatically adjusting the melody based on the target melody configuration information, the fit between the melody and the lyrics can be improved, and the quality of song creation is improved.

Description

Lyric display method, device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying lyrics, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the development of computer technology, more and more applications have appeared to support users in composing songs. The creation of songs includes lyric creation and melody creation.
In the related art, the user can be assisted in creating original songs in the following ways: the user selects different chords through an application program and freely combines them to form a song melody, or the user adds sound effects and performs similar operations on an existing chord template provided by the application program to form a song melody. The user can then write and fill in lyrics based on the resulting song melody, or the system randomly recommends lyrics based on the resulting song melody, thereby generating a complete song fragment. The user can manually adjust how well the melody and the lyrics in the song fragment fit together, so that the adjusted song fragment better matches the effect the user wants.
However, the manual adjustment method in the related art requires a user to have certain professional knowledge of music. Therefore, a processing method that can adjust songs more efficiently is needed.
Disclosure of Invention
The present disclosure provides a lyric display method, apparatus, electronic device, computer-readable storage medium, and computer program product, so as to offer a more efficient way of adjusting songs. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a lyric display method, including:
displaying a melody editing page of an original song, the original song including an original melody and original lyrics;
acquiring target melody configuration information in response to a trigger operation on a melody configuration control in the melody editing page;
acquiring a target melody generated according to the target melody configuration information;
and displaying the lyrics according to the playing time of the characters in the original lyrics in the target melody.
In one embodiment, the obtaining the target melody generated according to the target melody configuration information includes:
determining the duration of the target melody according to the configuration information of the target melody;
and adjusting the original melody according to the target melody duration to obtain the target melody.
In one embodiment, the adjusting the original melody according to the target melody duration to obtain the target melody includes:
acquiring original melody attribute information of the original melody;
acquiring an original prelude duration corresponding to the original melody attribute information and a target prelude duration corresponding to the target melody attribute information;
acquiring a difference value between the original prelude time length and the target prelude time length;
and adjusting the duration of the original prelude in the original melody according to the difference to obtain the target melody.
In one embodiment, the adjusting the duration of the original prelude in the original melody according to the difference to obtain the target melody includes:
adjusting the pronunciation starting time of the original lyrics according to the difference to obtain a target prelude;
and obtaining the target melody according to the original melody and the target prelude.
In one embodiment, the obtaining the target melody generated according to the target melody configuration information includes:
acquiring target melody configuration parameters according to the target melody configuration information;
and regenerating the new target melody according to the target melody configuration parameter.
In one embodiment, the displaying the lyrics according to the playing time of the characters in the original lyrics in the target melody comprises:
playing the target melody and the original lyrics;
highlighting one character or one sentence of lyrics which is being played at the current playing moment.
In one embodiment, the method further comprises:
acquiring the duration of the target melody;
changing the displayed duration of the original melody to the duration of the target melody in the melody editing page.
In one embodiment, the obtaining of the target melody configuration information in response to the triggering operation of the melody configuration control in the melody editing page includes:
responding to the trigger operation of the melody configuration control, and displaying attribute association information of melody attributes;
and acquiring the target melody configuration information in response to the triggering operation of the attribute association information.
In one embodiment, the attribute association information includes at least one of a melody attribute name and a preset image corresponding to the melody attribute.
In one embodiment, the original melody and the original lyric are matched according to at least one of the original song theme and the original melody attribute information configured in advance.
According to a second aspect of the embodiments of the present disclosure, there is provided a lyric display apparatus including:
a first display module configured to execute a melody editing page displaying an original song, the original song including an original melody and original lyrics;
a first obtaining module configured to perform acquiring target melody configuration information in response to a trigger operation on a melody configuration control in the melody editing page;
a second obtaining module configured to perform obtaining a target melody generated according to the target melody configuration information;
and the second display module is configured to perform lyric display according to the playing time of the characters in the original lyrics in the target melody.
In one embodiment, the second obtaining module includes:
a duration determination unit configured to perform determining a target melody duration according to the target melody configuration information;
and the adjusting unit is configured to adjust the original melody according to the target melody duration to obtain the target melody.
In one embodiment, the adjusting unit includes:
a first obtaining sub-unit configured to perform obtaining original melody attribute information of the original melody;
a second obtaining subunit configured to perform obtaining an original prelude duration corresponding to the original melody attribute information and a target prelude duration corresponding to the target melody attribute information;
a third obtaining subunit configured to perform obtaining a difference between the original prelude duration and the target prelude duration;
and the adjusting subunit is configured to adjust the duration of the original prelude in the original melody according to the difference to obtain the target melody.
In one embodiment, the adjusting subunit is configured to perform adjusting the pronunciation start time of the original lyrics according to the difference to obtain a target prelude; and obtaining the target melody according to the original melody and the target prelude.
In one embodiment, the second obtaining module includes:
a first obtaining unit configured to perform obtaining a target melody configuration parameter according to the target melody configuration information;
a melody generating unit configured to perform a regeneration of the new target melody according to the target melody configuration parameter.
In one embodiment, the second display module includes:
a playing unit configured to perform playing the target melody and the original lyrics;
a first display unit configured to perform highlighting of one character or one sentence of lyrics being played at a current play time.
In one embodiment, the apparatus further comprises:
a third obtaining module configured to execute obtaining a duration of the target melody;
a changing module configured to perform changing the displayed duration of the original melody to the duration of the target melody in the melody editing page.
In one embodiment, the first obtaining module includes:
a second display unit configured to perform displaying attribute association information of melody attributes in response to a trigger operation on the melody configuration control;
a second obtaining unit configured to perform a trigger operation in response to the attribute associated information, and obtain the target melody configuration information.
In one embodiment, the attribute association information includes at least one of a melody attribute name and a preset image corresponding to the melody attribute.
In one embodiment, the original melody and the original lyric are matched according to at least one of the original song theme and the original melody attribute information configured in advance.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the lyric display method according to any embodiment of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the lyric display method according to any one of the embodiments of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program is configured to implement the lyric display method according to any one of the embodiments of the first aspect when executed by a processor.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
displaying a melody editing page of an original song, wherein the original song comprises an original melody and original lyrics; acquiring target melody configuration information in response to a trigger operation on a melody configuration control in the melody editing page; acquiring a target melody generated according to the target melody configuration information; and displaying the lyrics according to the playing times of the characters of the original lyrics in the target melody. By deploying the melody editing page, the user is supported in independently adjusting the melody in the song, which improves song creation efficiency and reduces the professional music knowledge required for song creation; by automatically adjusting the melody based on the target melody configuration information, the fit between the melody and the lyrics can be improved, and the quality of song creation is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a diagram illustrating an application environment for a method of displaying lyrics, according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating a method of lyric display according to an exemplary embodiment.
Fig. 3 is a flow chart illustrating a method of generating a target melody according to an exemplary embodiment.
Fig. 4 is a diagram illustrating a processing of an original melody according to an exemplary embodiment.
Fig. 5 is a diagram illustrating creation of an original song, according to an example embodiment.
FIG. 6 is a flow chart illustrating a method of lyric display according to an exemplary embodiment.
FIG. 7 is a block diagram illustrating a lyric display apparatus according to an exemplary embodiment.
Fig. 8 is an internal block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The lyric display method provided by the present disclosure may be applied to an application environment as shown in fig. 1. The terminal 110 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. An application program supporting the melody editing function may be installed in the terminal 110. The application may be a social-type application, a short-video-type application, an instant messaging-type application, a music composition-type application, and the like. The melody editing function may be deployed in these applications in the form of a plug-in, applet, or the like. The terminal 110 may provide the melody editing page to the user through the application program so that the user can individually edit the melody in the song through the melody editing page.
In a specific implementation, the terminal 110 displays a melody editing page of an original song, where the original song includes an original melody and original lyrics; responding to the trigger operation of a melody configuration control in a melody editing page, and acquiring target melody configuration information; acquiring a target melody generated according to the target melody configuration information; and displaying the lyrics according to the playing time of the characters in the original lyrics in the target melody.
In another exemplary embodiment, the lyric display method provided by the present disclosure may be applied to an application environment including a terminal and a server. The terminal and the server interact with each other through a network. The terminal may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers. The server may be deployed with intelligent melody generation logic. The melody generation logic may be implemented based on deep learning models, search algorithms, and the like. The deep learning model may be any model that can be used to generate melodies, such as a linear model, a neural network model, a support vector machine, and the like. The lookup algorithm may be a sequential lookup, a binary lookup, or the like.
In the specific implementation, the terminal displays a melody editing page of an original song; and responding to the triggering operation of the melody configuration control in the melody editing page to acquire target melody configuration information. The terminal sends a melody generation request to the server, the melody generation request carrying target melody configuration information to request the server to generate a target melody matching the target melody configuration information based on the melody generation logic. The terminal acquires a target melody sent by the server; and displaying the lyrics according to the playing time of the characters in the original lyrics in the target melody.
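As an illustration only, the terminal-to-server exchange described above could be carried over a simple JSON request; the endpoint path, field names, and transport in the following sketch are hypothetical assumptions and are not taken from the disclosure.

```python
import json
import urllib.request

def request_target_melody(server_url, target_config):
    """Send the target melody configuration information to the server and
    return the generated target melody description. The endpoint and payload
    layout are illustrative assumptions."""
    payload = json.dumps({"melody_config": target_config}).encode("utf-8")
    req = urllib.request.Request(
        server_url + "/melody/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Hypothetical usage:
# melody = request_target_melody("https://example.invalid",
#                                {"rhythm": "fast", "style": "pop"})
```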
Fig. 2 is a flowchart illustrating a lyric display method according to an exemplary embodiment, and as shown in fig. 2, the lyric display method is used in a terminal, including the following steps.
In step S210, a melody editing page of an original song including an original melody and original lyrics is displayed.
The original song may be a song that has not yet been modified. The original song may be an existing song stored in a local database or pre-stored on a server. In this case, the client may provide a song upload function to acquire an original song uploaded by the user through the upload function. The original song may also be a song that the user authored autonomously. In this case, the client may provide a song composition page for the user to compose a song; for example, the song composition page may let the user autonomously edit the original melody and/or the original lyrics, or let the user define song attribute information so that the client or the server intelligently generates the original melody and/or the original lyrics based on the song attribute information.
The melody editing page supports the user to independently edit the original melody in the original song to obtain a new song. The melody editing page can display the contents of original lyrics, a lyric editing control, a melody configuration control, a playing control key, song duration and the like.
In step S220, in response to a trigger operation on the melody configuration control in the melody editing page, target melody configuration information is acquired.
The melody configuration control is an object with which the user can interact to trigger the configuration of melody configuration information. The melody configuration control may, without limitation, be displayed at any position of the melody editing page as a fixed control, or be presented flexibly in the melody editing page as a floating button or the like. The melody configuration information may be basic music element information such as rhythm information, pitch information, tempo, or music style, or category information formed by combining several basic music elements; for example, rhythm: fast and pitch: male high may be combined to form one piece of melody configuration information.
Specifically, the client may display a configuration page of the melody configuration information in response to a trigger operation on the melody configuration control in the melody editing page. The configuration page may display selection items of predefined melody configuration information so that the user obtains target melody configuration information by selecting one of them. Additionally or alternatively, an input area for the melody configuration information may be displayed in the melody editing page so that the user can manually enter the target melody configuration information.
In step S230, the target melody generated according to the target melody configuration information is acquired.
In some possible embodiments, the target melody may be obtained by re-editing the original melody based on the target melody configuration information. In this case, after obtaining the target melody configuration information, the client may adjust the original melody toward the target melody configuration information based on preset melody editing logic. Illustratively, the target melody configuration information is rhythm: fast, and the beat duration corresponding to rhythm: fast is configured in advance. Then, after obtaining the target melody configuration information, the client may adjust the original melody according to the beat duration corresponding to rhythm: fast to obtain the target melody.
In some possible embodiments, the target melody may be a new melody that is regenerated based on the target melody configuration information. In this case, the client may determine the melody matching the target melody configuration information as the target melody based on the pre-configured melody generation logic after acquiring the target melody configuration information.
In step S240, lyrics are displayed according to the playing time of the characters in the original lyrics in the target melody.
Specifically, the playing times of the original lyrics in the target melody may be adjusted based on the playing times of the original lyrics in the original melody. Illustratively, if the target melody is faster than the original melody by a factor a, the playing times of the original lyrics in the original melody are correspondingly advanced by the same factor, yielding the playing times of the original lyrics in the target melody. Alternatively, the playing times of the characters of the original lyrics in the target melody may be determined based on the composition of the target melody. For example, the target melody is divided in advance into a prelude, an interlude, a lyric singing part, an outro, and the like, and the playing times of the characters of the original lyrics in the target melody are determined according to the beat of the lyric singing part.
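For instance, when the target melody is simply a uniformly faster version of the original melody, the per-character playing times can be rescaled by the tempo factor. The following is a minimal sketch under that assumption; the (character, start time) list representation and the function name are illustrative, not part of the disclosure.

```python
def rescale_lyric_times(char_times, speed_factor):
    """Rescale per-character playing times when the target melody is
    speed_factor times faster than the original melody.
    char_times: list of (character, start_time_in_seconds) pairs."""
    return [(ch, t / speed_factor) for ch, t in char_times]

# Original lyrics start at 2.0 s, 2.5 s, 3.0 s; target melody is 1.25x faster.
original = [("la", 2.0), ("la", 2.5), ("la", 3.0)]
print(rescale_lyric_times(original, 1.25))  # -> start times 1.6, 2.0, 2.4
```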
After the client acquires the target melody, a new song is obtained based on the target melody and the original lyrics. The client can play the new song automatically, or play it in response to a trigger operation on the playing control key in the melody editing page. During playback, the client acquires the current playing time and shows, in the in-playback display state, the character of the original lyrics whose playing time has been reached, or the sentence of lyrics containing that character, or the lyrics within a preset period after the current playing time, and so on. The in-playback display state may be highlighting, a larger font size, a preset color (e.g., red), and the like.
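A minimal sketch of picking the character to highlight at the current playing moment, assuming the per-character start times are available as a sorted list (an assumed representation used only for illustration):

```python
import bisect

def character_to_highlight(start_times, current_time):
    """Return the index of the character whose start time was most recently
    reached, or None if playback has not reached the first character yet."""
    idx = bisect.bisect_right(start_times, current_time) - 1
    return idx if idx >= 0 else None

starts = [1.6, 2.0, 2.4, 3.1]
print(character_to_highlight(starts, 2.2))  # -> 1 (the second character is playing)
```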
In the lyric display method, a melody editing page of an original song is displayed, and the original song comprises an original melody and original lyrics; target melody configuration information is acquired in response to a trigger operation on a melody configuration control in the melody editing page; a target melody generated according to the target melody configuration information is acquired; and the lyrics are displayed according to the playing times of the characters of the original lyrics in the target melody. By deploying the melody editing page, the user is supported in independently adjusting the melody in the song, which improves song creation efficiency and reduces the professional music knowledge required for song creation; by automatically adjusting the melody based on the target melody configuration information, the fit between the melody and the lyrics can be improved, and the quality of song creation is improved.
In an exemplary embodiment, in step S230, acquiring the target melody generated according to the target melody configuration information includes: determining the duration of the target melody according to the configuration information of the target melody; and adjusting the original melody according to the target melody duration to obtain the target melody.
The target melody duration may belong to at least one category, such as the total melody duration, or the duration of a part of the melody, for example, the prelude duration, the interlude duration, the outro duration, and the like. The corresponding part of the original melody may be adjusted according to the category of the target melody duration. For example, when the target melody duration is the total duration, the whole original melody may be adjusted accordingly; when the target melody duration is the prelude duration, the prelude part of the original melody may be adjusted accordingly.
Specifically, the correspondence between melody configuration information and melody duration may be predefined and stored in a local database or on the server. After the client acquires the target melody configuration information, the client queries the target melody duration corresponding to the target melody configuration information from this correspondence, and adjusts the corresponding part of the original melody according to the category of the target melody duration to obtain the target melody.
For example, the original melody duration is a total duration T1 and the target melody duration is a total duration T2. The ratio T1/T2 may be calculated: if the ratio is greater than 1, the original melody can be sped up according to the ratio; if the ratio is less than 1, the original melody can be slowed down according to the ratio. Alternatively, the difference T1 - T2 may be calculated: if the difference is positive, the original melody can be clipped according to the difference; if the difference is negative, new melody content may be added to the original melody, for example, part of the original melody may be repeated.
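A minimal sketch of the two adjustment strategies just described (tempo scaling by the ratio T1/T2, or clipping/extending by the difference T1 - T2); the melody is represented only by its total duration here, which is a simplification made for illustration:

```python
def adjust_total_duration(original_duration, target_duration):
    """Decide how to adapt the original melody to the target melody duration,
    using the two strategies described above."""
    ratio = original_duration / target_duration
    difference = original_duration - target_duration
    if difference > 0:
        # Either speed the melody up by `ratio`, or clip `difference` seconds.
        return ("speed_up_or_clip", ratio, difference)
    if difference < 0:
        # Either slow the melody down by `ratio`, or repeat a section
        # to add the missing `-difference` seconds.
        return ("slow_down_or_extend", ratio, -difference)
    return ("keep", 1.0, 0.0)

print(adjust_total_duration(54.0, 48.0))  # -> ('speed_up_or_clip', 1.125, 6.0)
```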
In this embodiment, the original melody is adjusted according to the target melody duration corresponding to the target melody configuration information, so that the target melody expected by the user can be quickly obtained, and the editing efficiency of the target melody is accelerated.
In an exemplary embodiment, as shown in fig. 3, the original melody is adjusted according to the duration of the target melody to obtain the target melody, and the method may include the following steps:
in step S310, the original melody attribute information of the original melody is obtained.
In step S320, an original prelude duration corresponding to the original melody attribute information and a target prelude duration corresponding to the target melody attribute information are obtained.
In step S330, a difference between the original prelude duration and the target prelude duration is obtained.
In step S340, the duration of the original prelude in the original melody is adjusted according to the difference to obtain the target melody.
In this embodiment, the melody configuration information may include melody attribute information. A melody generally refers to an organized, rhythmic sequence of musical sounds shaped by artistic conception. A melody is formed by combining several basic music elements, such as mode, rhythm, beat, timbre, and performance technique. The melody attributes may be any one or more of these basic music elements. The target melody configuration information may reflect the user's expectation for the melody style of the song, such as rhythm information, pitch information, tempo, and music style. The melody duration category may be the prelude duration, in which case the correspondence between melody configuration information and melody duration can be expressed as a correspondence between melody attribute information and prelude duration.
Specifically, the original melody attribute information may be carried in song association information of the original song. After the client acquires the original song, the original melody attribute information can be directly searched from the song association information. The original melody attribute information can also be obtained by detecting an original song, for example, intelligently identifying the original song based on a deep learning model. It can be understood that the deep learning model in the present embodiment has been trained using several song samples, and has the capability of detecting and identifying the original song.
After obtaining the original melody attribute information of the original melody, the client can look up the correspondence between melody attribute information and prelude duration to obtain the original prelude duration corresponding to the original melody attribute information and the target prelude duration corresponding to the target melody attribute information. The client calculates the difference between the original prelude duration and the target prelude duration, and adjusts the duration of the original prelude in the original melody according to the difference, for example, by clipping the prelude of the original melody or adding content to it.
In a possible embodiment, after obtaining the difference between the original prelude duration and the target prelude duration, the client may adjust the pronunciation start time of the original lyric according to the difference to obtain the target prelude. The manner of adjusting the pronunciation start time of the original lyrics may be various, for example, by speeding up or slowing down the original prelude, or by cropping the original prelude or adding new content to the original prelude, thereby changing the pronunciation start time of the original lyrics. After the target prelude is obtained, the client side can generate the target melody according to the target prelude and other parts except the original prelude in the original melody.
For example, the original prelude duration is a and the target prelude duration is b. The difference a - b can be calculated. If the difference is positive, the original prelude can be sped up and the pronunciation start time of the original lyrics advanced by a - b; if the difference is negative, the original prelude can be slowed down and the pronunciation start time of the original lyrics delayed by b - a.
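A minimal sketch of this prelude adjustment, assuming the lyrics are represented by a list of per-character start times in seconds (an illustrative representation, not taken from the disclosure):

```python
def adjust_prelude(original_prelude, target_prelude, lyric_start_times):
    """Shift per-character pronunciation start times when the prelude duration
    changes. A positive difference (prelude shortened) moves the lyrics
    earlier; a negative difference (prelude lengthened) delays them."""
    difference = original_prelude - target_prelude
    return [t - difference for t in lyric_start_times]

# The prelude shrinks from 8.0 s to 5.0 s, so every lyric starts 3 s earlier.
print(adjust_prelude(8.0, 5.0, [8.0, 8.5, 9.0]))  # -> [5.0, 5.5, 6.0]
```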
In this embodiment, the prelude not only sets a specific style, feeling, and mood for the song, but also gives the singer cues for emotion, speed, pitch accuracy, tone, rhythm, and dynamics. By adjusting the prelude part of the melody, the adjusted melody can therefore more easily meet the user's expectation for the style of the song, which improves the accuracy of song editing.
In an exemplary embodiment, in step S230, acquiring the target melody generated according to the target melody configuration information includes: acquiring target melody configuration parameters according to the target melody configuration information; and regenerating a new target melody according to the target melody configuration parameter.
The target melody configuration parameters may be relevant parameters required for generating a new target melody, such as rhythm, tempo, timbre performance method, pitch, and the like.
Specifically, the correspondence between melody configuration information and melody configuration parameters may be predefined. After acquiring the target melody configuration information, the client obtains the target melody configuration parameters from this correspondence. For example, the target melody configuration information is configuration A, and the melody configuration parameters found for configuration A are rhythm: fast, pitch: male high, style: pop. The client then determines the melody matching the target melody configuration parameters as the target melody based on preset melody generation logic. For example, the client is configured with a correspondence between melody configuration parameters and melodies; after acquiring the target melody configuration parameters, the client retrieves the song melody matching the target melody configuration parameters from this correspondence and uses it as the target melody.
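A minimal sketch of the two lookup tables just described, mapping configuration information to configuration parameters and parameters to a stored melody; the concrete keys, values, and melody identifier are illustrative assumptions:

```python
# Correspondence between melody configuration information and configuration parameters.
CONFIG_TO_PARAMS = {
    "configuration_a": {"rhythm": "fast", "pitch": "male_high", "style": "pop"},
}

# Correspondence between parameter sets and stored melody identifiers.
PARAMS_TO_MELODY = {
    (("pitch", "male_high"), ("rhythm", "fast"), ("style", "pop")): "melody_017",
}

def retrieve_target_melody(config_info):
    """Look up the configuration parameters, then the matching melody id."""
    params = CONFIG_TO_PARAMS[config_info]
    key = tuple(sorted(params.items()))
    return PARAMS_TO_MELODY[key]

print(retrieve_target_melody("configuration_a"))  # -> 'melody_017'
```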
In this embodiment, deploying the melody editing page supports the user in independently adjusting the melody in the song, which improves song creation efficiency and reduces the professional music knowledge required for song creation; automatically adjusting the melody based on the melody configuration information improves the fit between the melody and the lyrics and the quality of song creation.
In an exemplary embodiment, the obtaining of the target melody configuration information in response to the triggering operation of the melody configuration control in the melody editing page includes: responding to the trigger operation of the melody configuration control, and displaying attribute association information of the melody attributes; and acquiring target melody configuration information in response to the triggering operation of the attribute association information.
The attribute association information may be used to uniquely identify the melody attribute, and may be one or more of a melody attribute name, a preset image, attribute introduction information, and the like.
Specifically, attribute-related information corresponding to the melody attribute is previously assigned. And after the client detects the trigger operation of the melody configuration control, displaying a configuration page of the melody attributes. The configuration page may be displayed in the form of a popup layer page, a next level page of a melody editing page, a popup window, or the like. The configuration page comprises at least one attribute associated information under the melody attribute. The at least one property association information may be presented in the form of a list, a control, or the like. And after the client detects the triggering operation of the user on the attribute associated information, acquiring target melody attribute information corresponding to the triggered attribute associated information.
Fig. 4 is a diagram illustrating an example of configuring target melody configuration information, taking the melody attribute "style" as an example. As shown in FIG. 4, the melody editing page 410 includes a melody configuration control 412. After detecting the trigger operation on the melody configuration control 412, the client displays a configuration page 420, which is shown as a popup layer above the melody editing page. The configuration page 420 displays several pieces of attribute association information under the melody attribute "style", each consisting of a melody attribute name and a corresponding preset image. The user may trigger any one piece of attribute association information, so that the client acquires the target melody attribute information corresponding to the triggered attribute association information. The target melody is generated based on the acquired target melody attribute information, and the melody editing page 430 continues to be displayed.
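As an illustration only, each piece of attribute association information can be modeled as a small record pairing a melody attribute name with its preset image; the field names and image paths below are assumptions, not taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class AttributeAssociation:
    """One selectable entry shown in the configuration page for a melody attribute."""
    attribute: str     # the melody attribute, e.g. "style"
    name: str          # melody attribute value shown to the user
    preset_image: str  # path or URL of the image displayed next to the name

style_options = [
    AttributeAssociation("style", "pop", "images/style_pop.png"),
    AttributeAssociation("style", "folk", "images/style_folk.png"),
]
print(style_options[0].name)  # -> 'pop'
```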
In the embodiment, the attribute associated information of the melody attribute is directly displayed, so that visual reminding can be given to the user; by displaying the attribute associated information to be in a state of being capable of interacting with the user, the difficulty of configuring melody configuration information by the user is reduced, and the efficiency of editing the song melody by the user is improved.
In an exemplary embodiment, the melody editing page also displays the duration of the melody (which may be regarded as the duration of the song). After the client obtains the target melody, the duration of the original melody displayed in the melody editing page can be changed to the duration of the target melody. The duration of the original melody may be carried in the song association information of the original song, in which case the client queries it directly from the song association information after acquiring the original song. Alternatively, the client may be pre-deployed with melody duration detection logic; after obtaining the original song, the client detects the original song based on this logic to obtain the duration of the original melody. Similarly, the duration of the target melody may be obtained in the same ways as the duration of the original melody, which is not detailed here.
With continued reference to fig. 4, before the original melody is edited, the melody editing page 410 displays the duration "00:54" of the original melody. After the original melody is edited, the melody editing page 430 displays the duration "00:48" of the target melody.
In this embodiment, displaying the melody duration in the melody editing page lets the user directly see related information about the song; after the melody is adjusted, the duration displayed in the melody editing page is updated, which ensures the accuracy of the displayed information.
In an exemplary embodiment, the original melody and the original lyric are matched according to at least one of a pre-configured original song theme and original melody attribute information.
Specifically, the original lyrics may be intelligently generated based on the original song theme selected by the user. Song themes may be used to reflect the type of the original lyrics, for example, youth, inspirational, love, praise, and campus themes. Multiple levels of song themes may be predefined. For example, the primary song themes include youth, inspirational, love, praise, and campus, and the primary song theme "youth" may further include secondary song themes such as girl next door, post-70s, post-80s, and the like. The original song theme may be used to reflect the user's expectation for the core content of the lyrics, and there may be at least one original song theme.
The client may display a song theme configuration page and obtain the original song theme in response to an adding operation on a song theme performed through that page. The adding operation on a song theme may be, without limitation, a single-click operation, a double-click operation, a long-press operation, a sliding operation, a gesture operation, an operation on a preset operation control, and the like. For example, an add-theme control is deployed in the song theme page so that the user can trigger the adding operation by clicking the control; or an operation menu is displayed in response to a long-press operation so that the user can trigger the adding operation from the menu; or an input prompt box for the song theme is arranged in the page so that the user triggers the adding operation by clicking the input prompt box.
The client may display a theme addition page of the song theme in response to the addition operation of the song theme. One or more of a theme input area, a theme recommendation area, a history area, a material uploading area and the like can be included in the theme addition page. The user may enter the song title through either of these regions.
Illustratively, the theme input area displays a text entry box. The user may manually enter text information through the text entry box so that the client determines an original song theme from a plurality of predefined song themes based on the text information. For example, the client may search, based on a search algorithm, the predefined song themes for a song theme containing the entered text information and use it as the original song theme.
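A minimal sketch of that substring search over predefined song themes (the theme strings are examples only):

```python
def match_song_themes(entered_text, predefined_themes):
    """Return the predefined song themes whose names contain the entered text."""
    return [theme for theme in predefined_themes if entered_text in theme]

themes = ["youth", "love", "praise", "campus"]
print(match_song_themes("camp", themes))  # -> ['campus']
```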
The topic recommendation area may display a plurality of recommendation topics. The recommendation theme can be obtained by recommending by the recommendation system based on recommendation logic. The recommendation logic may be deployed based on similarity between the user account and the song theme, search popularity of the song theme, and the like, for example, the song theme with a larger search volume or a higher popularity in a period of time, the song theme more matching the behavior data of the user account, and the like. The client can respond to the triggering operation of any one or more recommendation themes in the theme recommendation area, and the triggered recommendation themes are used as original song themes.
The history area displays the history theme records which are searched by the user account. The client can respond to the triggering operation of any one or more history themes in the history recording area, and the triggered history themes are used as original song themes.
A material upload control can be included in the material upload area. The client can respond to the triggering operation of the material uploading control to acquire the material uploaded by the user. The material may be a picture, a video, music, or the like. The client can intelligently identify the materials uploaded by the user based on a deep learning model and other modes to obtain the original song theme. It can be understood that the deep learning model in the embodiment has been trained by using several material samples, and has the capability of detecting and identifying the material uploaded by the user.
Under the condition that the original song theme is added, the client can respond to the lyric generation instruction and send a lyric generation request to the server, wherein the lyric generation request carries the original song theme. The server may be configured with a correspondence of song themes and lyrics. After the server receives the lyric generation request, at least one original lyric matched with the original song theme is obtained through retrieval based on the corresponding relation between the song theme and the lyrics.
In some possible embodiments, the correspondence between song themes and lyrics may be obtained as follows. A number of lyrics are obtained in advance; the lyrics may come from one or more of lyrics collected from existing songs, lyrics created independently by users, lyrics assembled by a deep learning model, and the like. Each lyric is analyzed based on a text topic model to obtain its song theme. A lyric library corresponding to each song theme is created and the lyrics are stored in the lyric library of their song theme; or each lyric is labeled with its corresponding song theme, thereby forming the correspondence between song themes and lyrics.
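A minimal sketch of building such a theme-to-lyrics correspondence; the keyword rule below merely stands in for the text topic model mentioned above and is purely an assumption made to keep the sketch runnable:

```python
from collections import defaultdict

def classify_theme(lyric_text):
    """Stand-in for the text topic model: a trivial keyword rule used only
    for illustration."""
    return "campus" if "school" in lyric_text else "love"

def build_lyric_library(lyrics):
    """Group lyrics by their detected song theme."""
    library = defaultdict(list)
    for lyric in lyrics:
        library[classify_theme(lyric)].append(lyric)
    return dict(library)

print(build_lyric_library(["back to school days", "you and me forever"]))
# -> {'campus': ['back to school days'], 'love': ['you and me forever']}
```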
Specifically, the original melody may be intelligently generated based on the original melody attribute information selected by the user. The client may display a song melody editing page. The user may configure desired original melody attribute information through the song melody editing page.
The song melody editing page may include any one or more of a melody attribute selection area, a melody attribute input area, a melody attribute recommendation area, a history area, a material upload area, and the like. The user may input the original melody attribute information through any one of these areas. For example, at least one melody attribute and the attribute information corresponding to that melody attribute may be displayed in the melody attribute selection area. The attribute information of each melody attribute may be presented in the form of a list, a control, or the like, so that the user can configure the original melody attribute information through a pull-down menu of the list, a click on a button, or the like. For the implementations of the melody attribute input area, the melody attribute recommendation area, the history area, and the material upload area, reference may be made to the descriptions of the theme input area, the theme recommendation area, the history area, and the material upload area, which are not detailed here.
And under the condition that the original melody attribute information configuration is finished, the client sends a melody generation request to the server in response to the melody generation instruction, wherein the melody generation request carries the original melody attribute information so as to request the server to generate at least one original melody matched with the original melody attribute information.
In some possible embodiments, the at least one original melody matching the original melody attribute information may be obtained as follows: the server is configured with a correspondence between melody attribute information and song melodies. After receiving the melody generation request, the server retrieves at least one original melody matching the original melody attribute information based on this correspondence.
In some possible embodiments, the correspondence between the melody attribute information and the song melody may be obtained by: several melodies are obtained in advance. The number of melodies may be derived from one or more of an existing song melody, a melody authored by the user autonomously, a concatenation of existing melodies, etc. Each melody can be analyzed and processed based on the deep learning model, and melody attribute information of each melody is obtained. Creating a melody library corresponding to the melody attribute information; or labeling the corresponding melody attribute information labels for each melody, thereby forming the corresponding relation between the melody attribute information and the song melody.
Further, after acquiring the original melody attribute information, the client may filter the obtained at least one original lyric based on the original melody attribute information. The screening may be performed in several ways. For example, a correspondence between melody attribute information and lyrics may be configured in advance, and the lyrics corresponding to the original melody attribute information are searched for among the at least one original lyric. Alternatively, a character-count interval corresponding to the melody attribute information may be configured in advance, and the original lyrics are screened based on this interval. In a specific implementation, after the original lyrics are obtained, the number of characters of each original lyric is detected and compared with the character-count interval corresponding to the original melody attribute information. If the number of characters of an original lyric falls within the interval, the lyric is kept; otherwise, the lyric is discarded.
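A minimal sketch of the character-count screening, assuming the interval is given as an inclusive (min, max) pair (an assumed representation):

```python
def filter_lyrics_by_length(lyrics, char_interval):
    """Keep only the lyrics whose character count falls inside the interval
    configured for the melody attribute information."""
    low, high = char_interval
    return [lyric for lyric in lyrics if low <= len(lyric) <= high]

candidates = ["short line", "a much longer candidate lyric line here"]
print(filter_lyrics_by_length(candidates, (5, 20)))  # -> ['short line']
```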
Further, the at least one original melody obtained may be adjusted based on the original song theme, for example, the duration, pitch, rhythm speed and the like of the phonemes in the original melody may be adjusted based on the original song theme.
After the original lyrics and the original melody are obtained, the server performs pairing synthesis on the original lyrics and the original melody to obtain the original song. The number of the original lyrics and/or the original melody may be plural, in which case, the original song may be obtained by a random pairing, a sequential combination pairing, and the like.
Of course, the original song may also be obtained by the client. Whether the original song is determined by the server or by the client, the only difference lies in the execution body; the implementation principle and process are similar.
Fig. 5 illustrates a schematic diagram of composing an original song. As shown in fig. 5, the song theme configuration page 512 and the song melody editing page 514 may be displayed together in the song configuration page 510. The song theme configuration page 512 includes a theme addition control to enable the client to obtain the original song theme in response to a trigger operation of the theme addition control. The song melody editing page 514 includes a property selection control corresponding to each melody property. The client may respond to the trigger operation on the attribute selection control to obtain the original melody attribute information.
The lyric generation instruction and the melody generation instruction may be acquired in response to a trigger operation of the key 516. After the client detects the trigger operation of the key 516, a song selection page 520 is displayed. The song selection page 520 includes at least one original song, each of which includes an original lyric matching the theme of the original song and an original melody matching the original melody attribute information. After the client detects the trigger operation of the button 522 in the song selection page 520, the currently displayed original song is acquired, and the melody editing page 530 is displayed.
In this embodiment, by letting the user autonomously configure the song theme and the melody attribute information, the system can intelligently generate the original song accordingly, help the user finish song creation quickly, and greatly improve the efficiency of song creation; intelligent song generation also greatly lowers the threshold of music creation, so that users without professional music knowledge can easily create personalized songs.
Fig. 6 is a flowchart illustrating a lyric display method according to an exemplary embodiment, and as shown in fig. 6, the lyric display method is used in a terminal, including the following steps.
In step S602, an original song is acquired. The original song includes an original melody and original lyrics.
The original song may be an existing song stored in a local database or pre-stored on a server, or a song created autonomously by the user; for the specific implementation of a song created autonomously by the user, reference may be made to the foregoing embodiments and fig. 5, which are not detailed here.
In step S604, the melody editing page of the original song is displayed. The melody editing page can display the contents of a melody configuration control, a playing control key, original lyrics and the like. A specific implementation of the melody editing page may refer to fig. 4, which is not specifically described herein.
In step S606, in response to the trigger operation on the melody configuration control in the melody editing page, the target melody configuration information is acquired. The target melody configuration information includes any one of target melody attribute information, target melody configuration parameters, and the like. The following description will be given taking the target melody attribute information as an example.
In step S608, the original introduction duration corresponding to the original melody attribute information and the target introduction duration corresponding to the target melody attribute information are acquired. The specific manner of obtaining the original prelude time length and the target prelude time length can refer to the above embodiments, and is not specifically described herein.
In step S610, a difference between the original prelude duration and the target prelude duration is acquired.
In step S612, the pronunciation start time of the original lyric is adjusted according to the difference to obtain the target prelude. And obtaining the target melody according to the other parts of the original melody except the original prelude and the target prelude. The specific implementation manner of adjusting the pronunciation start time of the original lyric according to the difference value can refer to the above embodiments, and is not specifically described herein.
In step S614, the target melody and the original lyrics are played, and during playback the display state shows, for example, the character of the original lyrics that has reached the current playing time, the line of lyrics containing that character, or the lyrics within a preset time period after the current playing time.
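As a simple illustration of the display logic in step S614, the sketch below picks out the character that has reached the current playing time and the characters within a preset look-ahead window; the per-character timing format is an assumption carried over from the previous sketch.

```python
# Hypothetical display-state selection for step S614, assuming each lyric
# character carries a pronunciation start time in seconds.

def display_state(lyrics, current_time_s, lookahead_s=2.0):
    reached = [char for char, start in lyrics if start <= current_time_s]
    upcoming = [char for char, start in lyrics
                if current_time_s < start <= current_time_s + lookahead_s]
    current_char = reached[-1] if reached else None  # character being played now
    return current_char, upcoming

lyrics = [("snow", 5.0), ("falls", 5.5), ("slow", 6.25)]
print(display_state(lyrics, current_time_s=5.6))  # ('falls', ['slow'])
```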
It should be understood that, although the steps in the above flowchart are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to this order and may be performed in other orders. Moreover, at least some of the steps in the above flowchart may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
FIG. 7 is a block diagram illustrating a lyric display apparatus 700 according to an exemplary embodiment. Referring to FIG. 7, the apparatus includes a first display module 702, a first obtaining module 704, a second obtaining module 706, and a second display module 707.
The first display module 702 is configured to display a melody editing page of an original song, the original song including an original melody and original lyrics; the first obtaining module 704 is configured to acquire target melody configuration information in response to a trigger operation on the melody configuration control in the melody editing page; the second obtaining module 706 is configured to acquire a target melody generated according to the target melody configuration information; and the second display module 707 is configured to display lyrics according to the playing time of the characters of the original lyrics in the target melody.
In an exemplary embodiment, the second obtaining module 706 includes: a duration determination unit configured to determine a target melody duration according to the target melody configuration information; and an adjusting unit configured to adjust the original melody according to the target melody duration to obtain the target melody.
In an exemplary embodiment, the adjusting unit includes: a first obtaining subunit configured to acquire original melody attribute information of the original melody; a second obtaining subunit configured to acquire an original prelude duration corresponding to the original melody attribute information and a target prelude duration corresponding to the target melody attribute information; a third obtaining subunit configured to acquire a difference between the original prelude duration and the target prelude duration; and an adjusting subunit configured to adjust the duration of the original prelude in the original melody according to the difference to obtain the target melody.
In an exemplary embodiment, the adjusting subunit is configured to adjust the pronunciation start time of the original lyrics according to the difference to obtain a target prelude, and to obtain the target melody according to the original melody and the target prelude.
In an exemplary embodiment, the second obtaining module 706 includes: a first obtaining unit configured to acquire target melody configuration parameters according to the target melody configuration information; and a melody generating unit configured to regenerate a new target melody according to the target melody configuration parameters.
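The two acquisition paths described for the second obtaining module 706 can be sketched as a simple dispatch. The function bodies below are placeholders under assumed data shapes, not the actual adjustment or generation algorithms.

```python
# Hypothetical dispatch between the two acquisition paths: regenerate a new
# melody from configuration parameters, or adjust the original melody.
# The bodies are placeholders, not the actual algorithms.

def adjust_melody(original_melody: dict, attribute_info: dict) -> dict:
    adjusted = dict(original_melody)
    if "prelude_duration_s" in attribute_info:
        adjusted["prelude_duration_s"] = attribute_info["prelude_duration_s"]
    return adjusted

def regenerate_melody(parameters: dict) -> dict:
    return {"notes": [], "source": "regenerated", **parameters}

def acquire_target_melody(original_melody: dict, config: dict) -> dict:
    if config.get("configuration_parameters"):
        return regenerate_melody(config["configuration_parameters"])
    return adjust_melody(original_melody, config.get("attribute_info", {}))

original = {"notes": ["C4", "E4", "G4"], "prelude_duration_s": 8.0}
print(acquire_target_melody(original, {"attribute_info": {"prelude_duration_s": 5.0}}))
```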
In an exemplary embodiment, the second display module 707 includes: a playing unit configured to play the target melody and the original lyrics; and a first display unit configured to highlight the character or the line of lyrics being played at the current playing time.
In an exemplary embodiment, the apparatus 700 further includes: a third obtaining module configured to acquire the duration of the target melody; and a changing module configured to change the displayed duration of the original melody in the melody editing page to the duration of the target melody.
In an exemplary embodiment, the first obtaining module 704 includes: a second display unit configured to display attribute association information of the melody attribute in response to a trigger operation on the melody configuration control; and a second obtaining unit configured to acquire the target melody configuration information in response to a trigger operation on the attribute association information.
In an exemplary embodiment, the attribute association information includes at least one of a melody attribute name and a preset image corresponding to the melody attribute.
In an exemplary embodiment, the original melody and the original lyric are matched according to at least one of a pre-configured original song theme and original melody attribute information.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 8 is a block diagram illustrating an electronic device Z00 for lyric display in accordance with an exemplary embodiment. For example, electronic device Z00 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 8, electronic device Z00 may include one or more of the following components: a processing component Z02, a memory Z04, a power component Z06, a multimedia component Z08, an audio component Z10, an interface for input/output (I/O) Z12, a sensor component Z14 and a communication component Z16.
The processing component Z02 generally controls the overall operation of the electronic device Z00, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component Z02 may include one or more processors Z20 to execute instructions to perform all or part of the steps of the method described above. Further, the processing component Z02 may include one or more modules that facilitate interaction between the processing component Z02 and other components. For example, the processing component Z02 may include a multimedia module to facilitate interaction between the multimedia component Z08 and the processing component Z02.
The memory Z04 is configured to store various types of data to support operations at the electronic device Z00. Examples of such data include instructions for any application or method operating on electronic device Z00, contact data, phonebook data, messages, pictures, videos, and the like. The memory Z04 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component Z06 provides power to the various components of the electronic device Z00. The power component Z06 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device Z00.
The multimedia component Z08 comprises a screen providing an output interface between the electronic device Z00 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component Z08 includes a front facing camera and/or a rear facing camera. When the electronic device Z00 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component Z10 is configured to output and/or input an audio signal. For example, the audio component Z10 includes a Microphone (MIC) configured to receive external audio signals when the electronic device Z00 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory Z04 or transmitted via the communication component Z16. In some embodiments, the audio component Z10 further includes a speaker for outputting audio signals.
The I/O interface Z12 provides an interface between the processing component Z02 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly Z14 includes one or more sensors for providing status assessments of various aspects of the electronic device Z00. For example, the sensor assembly Z14 may detect the open/closed state of the electronic device Z00 and the relative positioning of components such as the display and keypad of the electronic device Z00, and may also detect a change in the position of the electronic device Z00 or of one of its components, the presence or absence of user contact with the electronic device Z00, the orientation or acceleration/deceleration of the electronic device Z00, and a change in the temperature of the electronic device Z00. The sensor assembly Z14 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly Z14 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly Z14 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component Z16 is configured to facilitate wired or wireless communication between the electronic device Z00 and other devices. The electronic device Z00 may have access to a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component Z16 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component Z16 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device Z00 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, for example a memory Z04 comprising instructions executable by the processor Z20 of the electronic device Z00 to perform the above method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by a processor, implements the lyric display method of any of the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A lyric display method, comprising:
displaying a melody editing page of an original song, the original song including an original melody and original lyrics;
responding to the trigger operation of the melody configuration control in the melody editing page, and acquiring target melody configuration information;
acquiring a target melody generated according to the target melody configuration information;
and displaying the lyrics according to the playing time of the characters in the original lyrics in the target melody.
2. The lyric display method of claim 1, wherein said obtaining the target melody generated according to the target melody configuration information comprises:
determining the duration of the target melody according to the configuration information of the target melody;
and adjusting the original melody according to the target melody duration to obtain the target melody.
3. The lyric display method of claim 2, wherein said adjusting the original melody according to the target melody duration to obtain the target melody comprises:
acquiring original melody attribute information of the original melody;
acquiring an original prelude duration corresponding to the original melody attribute information and a target prelude duration corresponding to the target melody attribute information;
acquiring a difference value between the original prelude time length and the target prelude time length;
and adjusting the duration of the original introduction in the original melody according to the difference to obtain the target melody.
4. The lyric display method of claim 3, wherein said adjusting the duration of the original introduction in the original melody according to the difference to obtain the target melody comprises:
adjusting the pronunciation starting time of the original lyrics according to the difference value to obtain a target prelude;
and obtaining the target melody according to the original melody and the target introduction.
5. The lyric display method of claim 1, wherein said obtaining the target melody generated according to the target melody configuration information comprises:
acquiring target melody configuration parameters according to the target melody configuration information;
and regenerating the new target melody according to the target melody configuration parameter.
6. The lyric display method of claim 1, wherein the displaying the lyrics according to the playing time of the characters in the original lyrics in the target melody comprises:
playing the target melody and the original lyrics;
highlighting one character or one sentence of lyrics which is being played at the current playing moment.
7. A lyric display apparatus, comprising:
a first display module configured to execute a melody editing page displaying an original song, the original song including an original melody and original lyrics;
a first obtaining module configured to execute a trigger operation of a melody configuration control in the melody editing page to obtain target melody configuration information;
a second obtaining module configured to perform obtaining a target melody generated according to the target melody configuration information;
and the second display module is configured to perform lyric display according to the playing time of the characters in the original lyrics in the target melody.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the lyric display method of any one of claims 1 to 6.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the lyric display method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the lyrics display method of any one of claims 1 to 6 when executed by a processor.
CN202011631075.1A 2020-12-30 2020-12-30 Lyric display method, device, electronic equipment and computer readable storage medium Pending CN112699269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011631075.1A CN112699269A (en) 2020-12-30 2020-12-30 Lyric display method, device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011631075.1A CN112699269A (en) 2020-12-30 2020-12-30 Lyric display method, device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112699269A true CN112699269A (en) 2021-04-23

Family

ID=75513560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011631075.1A Pending CN112699269A (en) 2020-12-30 2020-12-30 Lyric display method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112699269A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113611267A (en) * 2021-08-17 2021-11-05 网易(杭州)网络有限公司 Word and song processing method and device, computer readable storage medium and computer equipment

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101313477A (en) * 2005-12-21 2008-11-26 Lg电子株式会社 Music generating device and operating method thereof
US20100162879A1 (en) * 2008-12-29 2010-07-01 International Business Machines Corporation Automated generation of a song for process learning
CN101916240A (en) * 2010-07-08 2010-12-15 福建天晴在线互动科技有限公司 Method for generating new musical melody based on known lyric and musical melody
CN105637586A (en) * 2014-07-26 2016-06-01 华为技术有限公司 Method and apparatus for editing audio files
CN105702240A (en) * 2014-11-25 2016-06-22 腾讯科技(深圳)有限公司 Method and device for enabling intelligent terminal to adjust song accompaniment music
CN106373580A (en) * 2016-09-05 2017-02-01 北京百度网讯科技有限公司 Singing synthesis method based on artificial intelligence and device
CN107659725A (en) * 2017-09-26 2018-02-02 维沃移动通信有限公司 A kind of audio-frequency processing method and mobile terminal
CN108806655A (en) * 2017-04-26 2018-11-13 微软技术许可有限责任公司 Song automatically generates
CN109543064A (en) * 2018-11-30 2019-03-29 北京微播视界科技有限公司 Lyrics display processing method, device, electronic equipment and computer storage medium
CN110164481A (en) * 2019-05-21 2019-08-23 北京字节跳动网络技术有限公司 A kind of song recordings method, apparatus, equipment and storage medium
CN111092991A (en) * 2019-12-20 2020-05-01 广州酷狗计算机科技有限公司 Lyric display method and device and computer storage medium
US20200234684A1 (en) * 2019-04-02 2020-07-23 Beijing Dajia Internet Information Technology Co., Ltd. Live stream processing method, apparatus, system, electronic apparatus and storage medium
CN111639226A (en) * 2020-05-13 2020-09-08 腾讯音乐娱乐科技(深圳)有限公司 Lyric display method, device and equipment
CN111862911A (en) * 2020-06-11 2020-10-30 北京时域科技有限公司 Song instant generation method and song instant generation device


Similar Documents

Publication Publication Date Title
CN106024009B (en) Audio processing method and device
CN104166689B (en) The rendering method and device of e-book
WO2022142772A1 (en) Lyric processing method and apparatus
CN110958386B (en) Video synthesis method and device, electronic equipment and computer-readable storage medium
CN105335414B (en) Music recommendation method and device and terminal
CN112445395B (en) Music piece selection method, device, equipment and storage medium
CN113365134B (en) Audio sharing method, device, equipment and medium
CN112632906A (en) Lyric generation method, device, electronic equipment and computer readable storage medium
CN112188266A (en) Video generation method and device and electronic equipment
CN110362711A (en) Song recommendations method and device
CN113411516B (en) Video processing method, device, electronic equipment and storage medium
WO2022198934A1 (en) Method and apparatus for generating video synchronized to beat of music
CN111583972B (en) Singing work generation method and device and electronic equipment
CN112068711A (en) Information recommendation method and device of input method and electronic equipment
CN112837664B (en) Song melody generation method and device and electronic equipment
CN108156506A (en) The progress adjustment method and device of barrage information
CN113986574A (en) Comment content generation method and device, electronic equipment and storage medium
CN112699269A (en) Lyric display method, device, electronic equipment and computer readable storage medium
CN113157972A (en) Recommendation method and device for video cover documents, electronic equipment and storage medium
CN109756783A (en) The generation method and device of poster
CN111615007A (en) Video display method, device and system
CN113364999B (en) Video generation method and device, electronic equipment and storage medium
CN113407275A (en) Audio editing method, device, equipment and readable storage medium
CN113923517A (en) Background music generation method and device and electronic equipment
CN112596695A (en) Song guide method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination