CN113539217A - Lyric creation navigation method and device, equipment, medium and product thereof - Google Patents


Publication number: CN113539217A
Application number: CN202111016989.1A
Authority: CN (China)
Legal status: Pending
Prior art keywords: lyric, note, melody, music, text
Other languages: Chinese (zh)
Inventors: 黄不群, 彭学杰
Current assignee: Guangzhou Kugou Computer Technology Co Ltd
Original assignee: Guangzhou Kugou Computer Technology Co Ltd
Application filed by Guangzhou Kugou Computer Technology Co Ltd

Classifications

    • G10H 1/0025: Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • G10H 1/0033: Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H 1/38: Chord (accompaniment arrangements)
    • H04L 67/51: Discovery or management of network services, e.g. service location protocol [SLP] or web services
    • G10H 2210/101: Music composition or musical creation; tools or processes therefor
    • G10H 2210/115: Automatic composing using a random process to generate a musical note, phrase, sequence or structure
    • G10H 2210/151: Music composition or musical creation using templates, i.e. incomplete musical sections, as a basis for composing

Abstract

The application discloses a lyric creation navigation method and a corresponding device, equipment, medium and product. The method comprises the following steps: in response to a word filling instruction of a user, acquiring a music melody pre-generated for the musical composition in a composition interface; displaying a word filling interface, in which an editing area for receiving lyric text corresponding to the music melody is displayed together with associated lyric prompt information; in response to a word filling confirmation instruction, acquiring the lyric text input by the user in the editing area and correspondingly storing it in a lyric cache area; and in response to the storage event of the lyric cache area, synchronously updating with the lyric text the characters in the note zone where each corresponding selected note is located in the composition interface. The method and the device can efficiently guide users to create lyrics and form musical works, enrich the means of assisted music creation, and improve the efficiency of assisted music creation.

Description

Lyric creation navigation method and device, equipment, medium and product thereof
Technical Field
The present application relates to the field of audio processing technologies, and in particular, to a lyric creation navigation method, and a corresponding apparatus, computer device, computer-readable storage medium, and computer program product.
Background
Most assisted music creation technologies in the prior art merely automate input and output: users no longer need pen, paper and musical instruments to compose, which does provide some convenience, but they must still rely on music theory knowledge to create an effective musical work.
Existing software related to assisted music creation lacks business logic that incorporates music theory knowledge to guide users toward effective composition; technically it relies on acoustic models and synthesis models from artificial intelligence. Songs created by artificial intelligence depend on big data, and their "creative ability" is obtained only after training and learning, so the resulting works are of uneven quality, often lack individuality and lose artistry. Moreover, because musical works created by artificial intelligence leave no room for user participation, no corresponding user experience is gained, user traffic cannot be effectively attracted to participate in creation and sharing, and no new ecosystem can take shape; the limitations are therefore obvious.
These defects of the prior art lead to low efficiency of assisted music creation and leave the intelligent capabilities of computer equipment untapped, while considerable room for development remains in the related fields; the present application therefore attempts the related exploration.
Disclosure of Invention
A primary object of the present application is to solve at least one of the above problems and provide a lyric creation navigation method and a corresponding apparatus, computer device, computer readable storage medium, computer program product to implement assisted music creation.
In order to meet various purposes of the application, the following technical scheme is adopted in the application:
the lyric creation navigation method adapted to one of the purposes of the application comprises the following steps:
responding to a word filling instruction of a user, and acquiring a music melody pre-generated in a music composition of a composition interface;
displaying a word filling interface, displaying an editing area for receiving a lyric text corresponding to the music melody in the word filling interface, and displaying lyric prompt information corresponding to the editing area in a correlated manner;
responding to a word filling confirmation instruction, acquiring a lyric text input by a user in the editing area, and correspondingly storing the lyric text in a lyric cache area;
and responding to the storage event of the lyric cache region, and synchronously updating the characters in the note region where each corresponding selected note is located in the composition interface by the lyric text.
In a further embodiment, the method for obtaining a musical melody pre-generated from a musical composition in a composition interface in response to a word-filling command from a user comprises the steps of:
monitoring a music melody pre-generated in a music composition of a composition interface;
and detecting whether the number of selected notes of the music melody generated in the composition interface reaches the number required to form at least one lyric single sentence marked by the preset melody rhythm information of the music melody; if so, activating a word filling control used for triggering the word filling instruction of the user, and otherwise keeping the word filling control in an inactivated state.
In a further embodiment, displaying a word-filling interface includes the steps of:
triggering and displaying a word filling interface in response to a word filling instruction of a user;
correspondingly dividing a plurality of lyric single sentences according to sentence dividing information in the melody rhythm information corresponding to the music melody, and establishing a corresponding relation between the lyric single sentences and the melody single sentences in the music melody;
providing a corresponding editing area in the word filling interface to adapt to the display of each lyric single sentence, and initializing and displaying characters of the corresponding lyric single sentence in the lyric cache area in the editing area;
determining the maximum word number information of the lyrics to be input of the corresponding lyric single sentence according to the melody single sentence, wherein the maximum word number information is used for restricting the input word number of the corresponding editing area;
displaying, in association with each editing area, the maximum word count information and the count of words already entered therein.
In a further embodiment, the maximum word count information of the lyrics to be input for the corresponding lyric single sentence is determined according to the melody single sentence and is used to constrain the number of words that can be input in the corresponding editing area, comprising the following steps:
determining the total duration formed by the durations of the selected notes of the melody single sentence;
dividing the total duration by a preset unit duration and taking the quotient as the maximum word count information of the lyrics to be input for the corresponding lyric single sentence;
configuring the maximum word count information as the maximum input word count of the text box in the corresponding editing area.
In a further embodiment, the method for obtaining the lyric text input by the user in the editing area in response to the word filling confirmation instruction and correspondingly storing the lyric text in the lyric cache area comprises the following steps:
responding to the word filling confirmation instruction, acquiring the lyric texts input by each lyric single sentence in each editing area one by one, and correspondingly replacing the corresponding lyric single sentences in the lyric cache area to realize storage;
and adaptively establishing a rhythm synchronization relation between the lyric single sentence and the selected notes in the corresponding melody single sentence, so that each single character in the lyric single sentence is rhythm-synchronized with the selected notes in that melody single sentence.
In a further embodiment, the method for synchronously updating the words in the note zone of each corresponding selected note in the composition interface with the lyric text in response to the storage event in the lyric cache zone comprises the following steps:
responding to a storage event of the lyric cache region, and acquiring an updated lyric text;
and correspondingly updating the characters in the note zone of each corresponding selected note in the composition interface according to the updated lyric text and the rhythm synchronization relation of the music melody.
In an expanded embodiment, after the word filling interface is displayed, the method comprises the following steps:
responding to an intelligent reference instruction triggered by a single sentence text in the lyric text, and entering an intelligent search interface;
displaying one or more recommended texts matched with the keywords in response to the keywords input from the intelligent search interface;
and responding to a selection instruction of one of the recommended texts, and replacing the single sentence text with the selected recommended text.
In an expanded embodiment, after the word filling interface is displayed, the method comprises the following steps:
responding to input of a prior lyric text, and acquiring one or more intelligent automatic association texts of a subsequent lyric text;
displaying the intelligent automatic association text on the word filling interface;
and in response to the selection operation for one of the intelligent automatic association texts, using the selected intelligent automatic association text as the subsequent lyric text.
In a preferred embodiment, a text box is arranged in the editing area and used for receiving the input of the lyric text; the note zone location where the selected note is located is loaded with a lyric editing control for receiving input of lyric text.
In an expanded embodiment, the method comprises the following steps:
responding to an editing event of characters in the lyric editing control acting on the note zone of the selected note, and replacing the corresponding edited characters to update corresponding lyric texts in the lyric cache zone according to the word number corresponding relation;
and responding to the content updating event of the lyric cache area, and refreshing the characters in the lyric editing control corresponding to the selected notes of the music melody in the composition interface by using the updated lyric text.
In an expanded embodiment, after the word filling interface is displayed, the method comprises the following steps:
responding to an automatic word filling instruction triggered by a control in a word filling interface or a vibration sensor of the local equipment, and automatically completing the lyric text according to the selected notes of the music melody.
In an expanded embodiment, before the music melody pre-generated in the music composition of the composition interface is acquired in response to the word filling instruction of the user, the method comprises the following steps:
displaying a composition interface, wherein the composition interface displays an array of note locations organized along rhythm and scale dimensions, and each note location corresponds, under the indication of the rhythm dimension, to a note of the corresponding scale dimension at a certain time;
formatting the occupied width of the corresponding note locations on the composition interface according to the duration corresponding to each undetermined note in the preset melody rhythm information;
sequentially acquiring the selected note corresponding to each undetermined note of the music melody to be obtained, and, for the undetermined note currently in turn, displaying a corresponding note prompt area in the composition interface so as to list, for selection, the note zones corresponding to the selectable notes that change relative to the previously sequenced selected note;
constructing said musical melody comprising selected notes that have been determined in sequence.
In an expanded embodiment, the method comprises the following steps:
and responding to a work playing instruction, and playing the music work of the vocal singing version synthesized by the lyric text and the music melody.
The lyric creation navigation device adapted to one of the objectives of the present application comprises: a melody loading module, an interface construction module, a lyric input module and a lyric updating module. The melody loading module is used for responding to a word filling instruction of a user and acquiring a music melody pre-generated in a musical composition of a composition interface; the interface construction module is used for displaying a word filling interface, displaying in the word filling interface an editing area for receiving lyric text corresponding to the music melody, and displaying lyric prompt information associated with the editing area; the lyric input module is used for responding to a word filling confirmation instruction, acquiring the lyric text input by the user in the editing area and correspondingly storing it in a lyric cache area; and the lyric updating module is used for responding to the storage event of the lyric cache area and synchronously updating, with the lyric text, the characters in the note zone where each corresponding selected note is located in the composition interface.
The computer device comprises a central processing unit and a memory, wherein the central processing unit is used for calling and running a computer program stored in the memory to execute the steps of the lyric creation navigation method.
A computer-readable storage medium, which stores in the form of computer-readable instructions a computer program implemented according to the lyric creation navigation method, and which, when invoked by a computer, performs the steps comprised in the method.
A computer program product, provided to adapt to another object of the present application, comprises computer programs/instructions which, when executed by a processor, implement the steps of the method described in any of the embodiments of the present application.
Compared with the prior art, the application has the following advantages:
firstly, the method allows a user to create lyrics according to a given music melody, and forms lyric prompt information required by the lyric creation according to melody rhythm information corresponding to the music melody, so that a creation navigation function is exerted for the lyric creation process of the user through a word filling interface.
And secondly, the synchronization of the words filling interface and the lyric text in the composition interface is realized, the lyric text input by the user can be displayed in the composition interface in time, the rhythm synchronization relation between each word in the lyric text and each corresponding note in the music melody is displayed, the user can conveniently and visually grasp the lyric rhythm, and the lyric auxiliary creation efficiency is improved.
In addition, the implementation of the technical scheme of the application realizes the decoupling between the lyric creation and the music melody creation, and allows the lyric creation based on the music melody created by other people, thereby promoting the collaborative creation among users, activating the user flow, promoting the creation and sharing of user works, redefining the internet music ecology, and realizing that people are all musicians.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram illustrating an exemplary embodiment of a lyric authoring navigation method of the present application;
FIG. 2 is a schematic illustration of a word-filling interface of the present application;
FIG. 3 is a schematic layout of a composition interface of the present application;
FIG. 4 is a flowchart illustrating a specific process of displaying a word-filling interface according to the present application;
FIG. 5 is a schematic flow chart illustrating a process for determining maximum word count information for a lyric single sentence according to the present application;
FIG. 6 is a schematic flow chart of adaptive adjustment of rhythm correspondence between lyric characters and notes according to the present application;
FIG. 7 is a schematic flow chart illustrating the updating of a composition interface in response to a lyric storage event according to the present application;
FIG. 8 is a schematic flow chart illustrating the intelligent determination of recommended texts for lyric creation;
fig. 9 and 10 are corresponding user graphical interfaces in the process of intelligently determining the recommended text according to the present application, which respectively show an intelligent search interface and a recommended text display page;
FIG. 11 is a flow chart illustrating the intelligent association of text for lyric creation;
FIG. 12 is a schematic flow chart illustrating the implementation of association updates in response to changes in the lyric content of the composition interface;
FIG. 13 is a schematic flowchart illustrating a music melody retrieval process according to the present application;
FIG. 14 is a diagram of a note-hinting area displayed on a composition interface of the present application;
FIG. 15 is a schematic flow chart of a formatting composition interface according to the present application;
FIG. 16 is a flowchart illustrating the process of obtaining an accompaniment template according to the present application;
FIG. 17 is a schematic view of an interface corresponding to acquiring an accompaniment template according to the present application;
FIG. 18 is a schematic flowchart illustrating a procedure for obtaining a music melody according to the present application;
FIGS. 19 and 20 are schematic diagrams of a composition interface, together illustrating the process of identifying a selected note in the composition interface;
FIG. 21 is a flowchart illustrating a process of displaying lyrics in the note zone in response to note selection according to the present application;
FIG. 22 is a schematic flow chart illustrating a process for playing a musical composition according to the present application;
FIG. 23 is a schematic layout view of a composition interface of the present application in a state, mainly illustrating a control of a preset sound type;
FIG. 24 is a schematic flow chart illustrating audio synthesis according to voice type according to the present application;
FIG. 25 is a schematic flow chart of a method for synthesizing a musical composition according to the present application;
FIG. 26 is a schematic flow chart illustrating the support of cross-user collaborative authoring according to the present application;
FIGS. 27 and 28 are schematic block diagrams of exemplary embodiments of a lyric composition navigation device and a musical piece composition device, respectively, according to the present application;
fig. 29 is a schematic structural diagram of a computer device used in the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by those skilled in the art, "client," "terminal," and "terminal device" as used herein include both devices that are wireless signal receivers, which are devices having only wireless signal receivers without transmit capability, and devices that are receive and transmit hardware, which have receive and transmit hardware capable of two-way communication over a two-way communication link. Such a device may include: cellular or other communication devices such as personal computers, tablets, etc. having single or multi-line displays or cellular or other communication devices without multi-line displays; PCS (Personal Communications Service), which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "client," "terminal device" can be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "client", "terminal Device" used herein may also be a communication terminal, a web terminal, a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a Mobile phone with music/video playing function, and may also be a smart tv, a set-top box, and the like.
The hardware referred to by the names "server", "client", "service node", etc. is essentially an electronic device with the performance of a personal computer, and is a hardware device having necessary components disclosed by the von neumann principle such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, an output device, etc., a computer program is stored in the memory, and the central processing unit calls a program stored in an external memory into the internal memory to run, executes instructions in the program, and interacts with the input and output devices, thereby completing a specific function.
It should be noted that the concept of "server" as referred to in this application can be extended to the case of a server cluster. According to the network deployment principle understood by those skilled in the art, the servers should be logically divided, and in physical space, the servers may be independent from each other but can be called through an interface, or may be integrated into one physical computer or a set of computer clusters. Those skilled in the art will appreciate this variation and should not be so limited as to restrict the implementation of the network deployment of the present application.
One or more technical features of the present application, unless expressly specified otherwise, may be deployed to a server for implementation by a client remotely invoking an online service interface provided by a capture server for access, or may be deployed directly and run on the client for access.
Unless specified in clear text, the neural network model referred to or possibly referred to in the application can be deployed in a remote server and used for remote call at a client, and can also be deployed in a client with qualified equipment capability for direct call.
Various data referred to in the present application may be stored in a server remotely or in a local terminal device unless specified in the clear text, as long as the data is suitable for being called by the technical solution of the present application.
The person skilled in the art will know this: although the various methods of the present application are described based on the same concept so as to be common to each other, they may be independently performed unless otherwise specified. In the same way, for each embodiment disclosed in the present application, it is proposed based on the same inventive concept, and therefore, concepts of the same expression and concepts of which expressions are different but are appropriately changed only for convenience should be equally understood.
The embodiments to be disclosed herein can be flexibly constructed by cross-linking related technical features of the embodiments unless the mutual exclusion relationship between the related technical features is stated in the clear text, as long as the combination does not depart from the inventive spirit of the present application and can meet the needs of the prior art or solve the deficiencies of the prior art. Those skilled in the art will appreciate variations therefrom.
In order to provide a musical composition creation environment for composing and word-filling for a user of the computer program product of the present application, after the computer program product implemented according to the technical solution of the present application is operated, it is necessary to determine related information required for creating a music melody in a musical composition, including at least melody rhythm information, and in some embodiments, it is necessary to include accompaniment chord information, so as to implement a navigation process for composing and word-filling for the user as required.
The musical composition comprises a pure musical type or a singing type. The pure music type musical composition generally comprises background music and music melody, wherein the background music is chord music generally formed by playing according to chords, other auxiliary music sounds such as drumbeats and the like can be added to the background music, and the music melody can be flexibly determined by a person skilled in the art and is the main melody in the musical composition. In comparison with the former, the musical melody of the singing type musical composition is generally an acoustic melody formed by vocal singing. In the application, the background music is played according to the chord predefined in the accompaniment chord information.
The accompaniment chord information predefines one or more chord progressions expanded in time sequence, each chord progression comprising a plurality of chords. The accompaniment chord information can be compiled manually in advance, and the background music can be prepared according to the accompaniment chord information to form corresponding background music data stored for later use, so that the background music data and the music melody produced by the user subsequently form the corresponding musical work.
The rhythm information of the melody predefines the rhythm of each undetermined note required to be acquired by the music melody in the music composition, specifically, the music melody is composed of a plurality of notes organized in sequence, each note is in an undetermined state before the user creation, and the rhythm information of the melody is used for predefining the time value of each undetermined note, so that the limitation on the rhythm of the music melody is formed. When each note of the musical melody is selected, a plurality of sequentially organized selected notes are constructed to form the complete musical melody.
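For illustration only, the predefined melody rhythm information and accompaniment chord information described above might be represented as in the following minimal TypeScript sketch; all type and field names are assumptions of this description, not a schema prescribed by the application.

// A minimal sketch of the predefined authoring data; all names are illustrative assumptions.

/** Duration of one undetermined note, expressed in beats (e.g. 0.25 for a quarter beat). */
type NoteDuration = number;

/** Melody rhythm information: the duration of every note the user must still select,
 *  plus sentence-split indices that group the notes into melody single sentences. */
interface MelodyRhythmInfo {
  noteDurations: NoteDuration[];   // one entry per undetermined note, in playing order
  sentenceBreaks: number[];        // indices into noteDurations at which a new single sentence starts
}

/** Accompaniment chord information: chord progressions expanded along the timeline. */
interface AccompanimentChordInfo {
  progressions: { startBeat: number; chords: string[] }[];
}

/** Group the note durations into melody single sentences using the break indices. */
function splitIntoSentences(info: MelodyRhythmInfo): NoteDuration[][] {
  const bounds = [...info.sentenceBreaks, info.noteDurations.length].filter(i => i > 0);
  const sentences: NoteDuration[][] = [];
  let start = 0;
  for (const end of bounds) {
    if (end > start) sentences.push(info.noteDurations.slice(start, end));
    start = end;
  }
  return sentences;
}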
For the lyric creation navigation method of the application, the lyric creation navigation method can be performed on the basis of the music melody created by the user, and the music melody is constructed according to the rhythm information of the melody, so in some embodiments, the lyric creation can be performed directly according to the music melody, but in other embodiments, the lyric creation navigation method can be performed by means of the well-prepared music melody and the rhythm information of the melody at the same time. That is, the lyric creation navigation method of the present application can guide the user to complete the lyric creation as long as the user works on the basis of the minimum necessary information resources, and thus, the following embodiments of the present application will be described separately.
The lyric creation navigation method can be programmed into a computer program product and is realized by being deployed in terminal equipment and/or a server to run, so that a client can access an open user interface of the computer program product after running in a webpage program or application program mode to realize man-machine interaction. Referring to fig. 1 and 2, in an exemplary embodiment, the method includes the steps of:
step S1100, responding to the word filling instruction of the user, and acquiring the music melody pre-generated in the music composition of the composition interface:
referring to fig. 2, an entry for triggering the word filling command of the user may be provided in a graphical user interface presented when the computer program product of the present application runs, for example, a word filling control is added in a function selection area of the composition interface shown in fig. 2, so as to allow the user to touch the word filling control to trigger the word filling command, and thus, a word filling interface may be displayed for receiving an input of a lyric text. Of course, other more convenient entry setting modes can be adopted, as long as the word-filling interface can be accessed through a certain entry, and the implementation of the application is not influenced.
And responding to the word filling instruction of the user, acquiring a music melody pre-generated in the music works of the composition interface, wherein the music melody can be created by the current user or extracted from draft information acquired from a remote server and issued by other users, and the draft information can realize subsequent auxiliary creation of the unfinished music works of other users.
As mentioned above, the music melody has been composed under the constraint of the melody rhythm information, so each note follows that constraint and the duration of each note is determined; these durations can be parsed and mapped to the composition area constructed in the composition interface.
The music composing interface is constructed with a two-dimensional coordinate system and is presented in a graphical user interface of the client, two dimensions of the music composing interface are rhythm dimension and musical scale dimension respectively, specifically, a transverse axis of the music composing interface is unfolded according to the rhythm dimension in a time sequence, and the unfolding direction of the transverse axis represents the sequential unfolding direction of each note in the music melody; the longitudinal axis of which is expanded according to the scale dimension and exhibits various scale levels, typically including at least the corresponding scale levels of bass, mid-range and treble.
As shown in fig. 3, according to the two-dimensional coordinate structure, the whole composition interface can be divided by auxiliary lines into columns of note locations, one column for each note in the sequence, and each column contains a plurality of note locations corresponding to the degrees of the overall scale, each note location corresponding to one note. It can be appreciated that, since the horizontal axis represents time, the horizontal width of each note location essentially represents the duration of the note at that location.
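As an illustration of this coordinate structure, the sketch below models one column of note locations; the identifiers and the pixels-per-beat scaling are assumptions made for the example, not values fixed by the application.

// One cell of the composition grid: a note location addressed by rhythm column and scale degree.
interface NoteLocation {
  column: number;        // index along the rhythm (time) axis, one column per note in the sequence
  scaleDegree: string;   // row along the scale axis, e.g. "C3" ... "C5" across bass, middle and treble
  widthPx: number;       // on-screen width, proportional to the duration of the note at this location
  selected: boolean;     // true once the user has picked this location for the music melody
  lyricChar?: string;    // character shown by the lyric editing control loaded on this note zone
}

/** Build the column of note locations for one undetermined note of the given duration. */
function buildColumn(column: number, durationBeats: number, scaleDegrees: string[],
                     pxPerBeat = 120): NoteLocation[] {
  return scaleDegrees.map(scaleDegree => ({
    column,
    scaleDegree,
    widthPx: durationBeats * pxPerBeat,  // the zone width is formatted by the note's rhythm
    selected: false,
  }));
}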
Therefore, after the music melody is acquired in response to the word filling instruction of the user, the melody rhythm information corresponding to the acquired music melody can be analyzed or associated, and the corresponding relation between the lyrics to be authored and the music melody in rhythm can be further determined subsequently.
Step S1200, displaying a word filling interface, displaying an editing area for receiving the lyric text corresponding to the music melody, and displaying the lyric prompt information corresponding to the editing area in an associated manner:
after responding to the word filling instruction of the user, displaying a word filling interface as shown in fig. 2, wherein an editing area can be provided in the word filling interface, and lyric prompt information is displayed in the editing area for prompting the user to follow a certain rule to fill words, so as to play a role in word filling navigation. The lyric prompt message may include word number information corresponding to a lyric text, may also include word number information already entered in the editing area, and may also include other information such as an automatic association text for which a user has entered lyrics. Typically, the lyric prompt information is provided in response to a sequence of selected notes in the musical melody that have been determined by the user, and may be absent for selected notes that have not been determined by the user.
Step S1300, responding to the word filling confirmation instruction, acquiring the lyric text input by the user in the editing area, and correspondingly storing the lyric text in a lyric cache area:
after the user inputs the lyric text in the word filling interface, the user can touch a 'completion' storage control provided in the word filling interface, so as to trigger a corresponding word filling confirmation instruction, and in response to the word filling confirmation instruction, the embodiment stores the lyrics input by the user in a lyric cache area.
The lyrics input by the user in the word filling interface usually obey the constraint of the lyric prompt information, so that the lyric texts input by the user are obtained from the editing area and then are correspondingly replaced with the corresponding contents in the lyric cache area, therefore, if the user only modifies the single-sentence texts of individual lyric single sentences, only the corresponding single-sentence texts in the lyric cache area need to be replaced and stored.
The lyric text input by the user can also be generated by means of artificial intelligence, and the recommendation text obtained by searching or correlating by the user through an artificial intelligence interface based on deep semantic learning can be used as the lyric text required by the application.
In some embodiments, the lyric text input by the user is allowed to change the corresponding relationship between the number of words and the number of selected notes in the music melody, so that the corresponding relationship between the selected notes and the lyric text needs to be rearranged according to a certain preset rule, which will be described in other embodiments later.
Step S1400, responding to the storage event of the lyric cache area, and synchronously updating the characters in the note area where each corresponding selected note is located in the composition interface by the lyric text:
after the user saves the lyric text, the lyric cache region triggers a storage event, namely a content updating event, so that the words in the lyric editing control of the note region corresponding to each rhythm in the composition interface are triggered to be updated according to the updated word content, and the synchronization between the lyric text input by the user in the word filling interface and the lyric text displayed in the note region in the composition interface is realized.
And a lyric editing control is loaded in the note zone corresponding to the selected note of the music melody displayed in the composition interface and used for displaying lyric characters corresponding to the selected note indicated by the note zone, so that after the characters in the lyric cache area are synchronized to the lyric editing control of the composition interface in response to the storage event, a user can read the input lyric text in the composition interface, and the effect of navigation creation is achieved.
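The synchronization between the lyric cache area and the note zones can be pictured with the following sketch; the event mechanism and all identifiers are assumptions used for illustration, not an implementation mandated by the application.

// Lyric cache area: one string per lyric single sentence; saving raises a storage event.
type LyricListener = (sentences: string[]) => void;

class LyricCache {
  private sentences: string[] = [];
  private listeners: LyricListener[] = [];

  onStore(listener: LyricListener): void { this.listeners.push(listener); }

  /** Replace the cached lyric single sentences and notify subscribers (the storage event). */
  store(sentences: string[]): void {
    this.sentences = [...sentences];
    this.listeners.forEach(listener => listener(this.sentences));
  }
}

/** Composition-side handler: spread each sentence's characters over its selected notes' zones. */
function syncNoteZones(sentences: string[],
                       zonesBySentence: { setText(text: string): void }[][]): void {
  sentences.forEach((sentence, i) => {
    (zonesBySentence[i] ?? []).forEach((zone, j) => zone.setText(sentence[j] ?? ""));
  });
}

// Wiring: the composition interface subscribes once, then every save refreshes its lyric controls.
// const cache = new LyricCache();
// cache.onStore(updated => syncNoteZones(updated, noteZonesBySentence));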
Therefore, the embodiment provides a word filling interface for a user to create lyrics, provides lyric prompt information in the word filling interface, and plays a role in guiding the user to create lyrics, so as to improve the word filling efficiency of the user.
Through the disclosure of the typical embodiment of the lyric creation navigation method and various variation embodiments thereof, it can be understood that the user-defined music melody becomes more convenient and efficient, and particularly, the following advantages are embodied:
firstly, the method allows a user to create lyrics according to a given music melody, and forms lyric prompt information required by the lyric creation according to melody rhythm information corresponding to the music melody, so that a creation navigation function is exerted for the lyric creation process of the user through a word filling interface.
And secondly, the synchronization of the words filling interface and the lyric text in the composition interface is realized, the lyric text input by the user can be displayed in the composition interface in time, the rhythm synchronization relation between each word in the lyric text and each corresponding note in the music melody is displayed, the user can conveniently and visually grasp the lyric rhythm, and the lyric auxiliary creation efficiency is improved.
In addition, the implementation of the technical scheme of the application realizes the decoupling between the lyric creation and the music melody creation, and allows the lyric creation based on the music melody created by other people, thereby promoting the collaborative creation among users, activating the user flow, promoting the creation and sharing of user works, redefining the internet music ecology, and realizing that people are all musicians.
In a modified embodiment of the lyric composition navigation method, the step S1100 of obtaining the musical melody pre-generated in the musical composition from the composition interface in response to the word filling command of the user includes the following steps:
step S1110, monitoring a music melody pre-generated in the musical composition of the composition interface:
in order to highlight the effect of creation navigation and avoid the user adding lyrics in a disordered manner, the word filling control may be activated only when the music melody meets certain conditions. Therefore, a monitoring service can be arranged in the background to analyze in real time the composition of the music melody developed in the composition interface, so as to determine whether the music melody meets the preset requirement.
Step S1120, detecting whether the number of selected notes of the music melody generated in the composition interface reaches the number required to form at least one lyric single sentence marked by the preset melody rhythm information of the music melody; if so, activating a word filling control for triggering the word filling instruction of the user, and otherwise keeping the word filling control in an inactive state:
one of the ways may be to require that the musical melody at least constructs a lyric single sentence determined according to the melody rhythm information thereof, so that whether the determined selected notes in the composition interface reach the number can be judged according to the number of the selected notes required by the first lyric single sentence of the musical melody, and when the determined selected notes reach the number, the word filling control in the function selection area is activated so as to further trigger the user word filling instruction; otherwise, keeping the word filling control in an inactivated state, so that the user cannot trigger a word filling instruction of the user.
For the selected note quantity needed by judging a single lyric sentence, if the music melody is fused with the sentence information in the preset melody rhythm information, the judgment can be carried out according to the music melody, otherwise, the corresponding preset melody rhythm information can be directly called to obtain the sentence information for determination.
The embodiment strengthens the navigation function of lyric auxiliary creation, avoids the distraction of users, guides the users to start the lyric creation function after finishing the music melody under certain conditions, and further improves the creation navigation efficiency.
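A minimal sketch of this activation check is given below, reusing the MelodyRhythmInfo shape assumed earlier; the control interface and the threshold logic are illustrative assumptions rather than the application's prescribed implementation.

// Enable the word filling control only once the selected notes cover the first lyric single sentence.
interface FillWordsControl { setEnabled(enabled: boolean): void; }

function refreshFillWordsControl(selectedNoteCount: number,
                                 rhythmInfo: MelodyRhythmInfo,
                                 control: FillWordsControl): void {
  // Notes required by the first single sentence: up to the first sentence break,
  // or the whole note sequence if no break is marked.
  const firstBreak = rhythmInfo.sentenceBreaks.find(i => i > 0)
      ?? rhythmInfo.noteDurations.length;
  control.setEnabled(selectedNoteCount >= firstBreak);
}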
Referring to fig. 4, in an alternative embodiment of the lyric creation navigation method of the present application, in step S1200, a word filling interface is displayed, which includes the following steps:
step S1210, responding to a word filling instruction of a user and triggering and displaying a word filling interface:
as described above, after the user touches the word filling control in the function selection area, the word filling interface shown in fig. 2 is triggered and displayed.
Step S1220, correspondingly dividing a plurality of lyric single sentences according to the sentence dividing information in the melody rhythm information corresponding to the music melody, and establishing a corresponding relationship between the lyric single sentences and the melody single sentences in the music melody:
in order to facilitate word filling for the user, the melody rhythm information included in or corresponding to the music melody may further include sentence segmentation information of the music melody so as to instruct the user to divide the lyric text into a plurality of lyric single sentences. The sentence dividing information is also a lyric prompting information in nature, and does not necessarily need to be marked by a clear text, and the lyric prompting information can be presented by dividing a single sentence of lyrics.
Specifically, the selected notes of the musical melody may be punctuated according to the sentence splitting information in the melody rhythm information, thereby obtaining a plurality of subsequences of the selected notes, each subsequence corresponding to one lyric single sentence, thereby determining a plurality of lyric single sentences of the lyric text; the total number of selected notes corresponding to each lyric single sentence is also determined accordingly, so that the word count information of each lyric single sentence can likewise be determined. For example, if it is specified that each selected note corresponds to one syllable, the maximum input word count (in characters) of the lyric single sentence should be the total number of selected notes of its corresponding subsequence, whereby the word count information for each lyric single sentence is a fixed value. Under the prompt of this fixed value, the user's train of thought for lyric creation is clearer.
In this embodiment, the division of the lyric single sentences according to the sentence dividing information in the melody rhythm information is performed logically, and may be configured by, for example, a character string array in the background, where each element in the array is used to store one lyric single sentence, and the corresponding relationship between each lyric single sentence in the lyric cache area and the contents in the text box in the editing area in the composition interface and the filling interface may be realized by the array subscript, so as to establish the corresponding relationship between the lyric single sentence and the melody single sentence in the music melody.
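The subscript-based correspondence described above can be sketched as follows; the EditingArea interface and the function names are assumptions introduced only for illustration.

// Index i ties together: lyric cache entry i, editing area i in the word filling interface,
// and melody single sentence i in the composition interface.
interface EditingArea { getText(): string; setText(text: string): void; }

/** Initialize each editing area's text box from the cached lyric single sentence with the same index. */
function initEditingAreas(lyricBuffer: string[], areas: EditingArea[]): void {
  areas.forEach((area, i) => area.setText(lyricBuffer[i] ?? ""));
}

/** On confirmation, rebuild the buffer sentence by sentence, recording which entries actually changed. */
function saveEditedSentences(lyricBuffer: string[], areas: EditingArea[]):
    { buffer: string[]; changed: number[] } {
  const changed: number[] = [];
  const buffer = areas.map((area, i) => {
    const edited = area.getText();
    if (edited !== (lyricBuffer[i] ?? "")) changed.push(i);
    return edited;
  });
  return { buffer, changed };
}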
Step S1230, providing a corresponding editing area in the word filling interface to accommodate each lyric single sentence display, and initializing the words of the corresponding lyric single sentence in the lyric cache area in the editing area:
in order to display the lyric prompt information of the sentence dividing information in the melody rhythm information, only one editing area is correspondingly arranged in a word filling interface corresponding to each lyric single sentence according to the lyric single sentence, and each editing area is used for independently receiving the input of the corresponding lyric single sentence. Therefore, sentence dividing information of the lyric text can be displayed, and the function of lyric prompt information is achieved.
Therefore, the layout of the word filling interface is divided into a plurality of editing areas, a text box is added in each editing area for displaying and inputting the corresponding lyric single sentence, and because default text or historical editing text may exist in the lyric cache area, the corresponding lyric single sentence in the lyric cache area can be correspondingly loaded into the text box of the corresponding editing area in the initial creation stage of the text box.
Step S1240, determining the maximum word number information of the lyrics to be input of the corresponding lyrics single sentence according to the melody single sentence, wherein the maximum word number information is used for restricting the input word number of the corresponding editing area:
in order to facilitate the input of the enhanced navigation function for the user, the corresponding single sentence text is displayed and edited by loading the text box in each editing area, a prompt area can be arranged in each editing area, the maximum word number information of the corresponding lyric single sentence and the total number information which can be input in the current text box are displayed in the prompt area, the lyric prompt information is played, and the user is guided to fill words more efficiently and dynamically. In an improved embodiment, the text box may be further constrained such that the maximum number of words input does not exceed the total number of selected notes corresponding to the lyric single sentence corresponding to the text box, i.e. does not exceed the constant value defined by the maximum word number information, so as to avoid the user from inputting too many words by mistake.
In this embodiment, the maximum number of words that can be input per lyric single sentence is controlled, and this maximum word number information is used to restrict the maximum number of input words for the text box of each editing area. As mentioned above, the melody clause is also determined according to the clause information in the melody rhythm information, and is in one-to-one correspondence with the lyric clause, and the word number of each lyric clause should be effectively controlled, otherwise, when the same selected note corresponds to a plurality of lyric characters, the situation of inconsistent pronunciation is easily caused, so that the maximum input word number of the text box in the editing area is restricted, which is helpful for ensuring that a user produces lyrics meeting certain specifications and ensuring the quality of the lyrics created by the user.
Step S1250, displaying the maximum word count information and the word count information input therein for each editing area in association:
because default generated lyric texts may exist in a lyric cache area for storing lyric texts, and the lyrics edited in a word filling interface are finally updated into the lyric cache area, after editing areas corresponding to single lyric sentences are arranged, single sentence texts for displaying the corresponding single lyric sentences in the lyric cache area are loaded for each editing area, and further, the maximum word number information of the lyric to be input of the single lyric sentences corresponding to the editing areas can be further displayed in each editing area as the lyric prompt information so as to prompt a user that the number of words input aiming at the single lyric texts should not exceed a fixed value prompted by the maximum word number information.
In order to ensure the navigation function and provide sufficient lyric prompt information for the user, in response to the setting of the text box in the editing area, the maximum number of words that can be input by the text box, that is, the maximum number of words that can be received by the corresponding lyric single sentence, and the real-time input number of words in the text box may be displayed in the prompting area corresponding to the text box in the editing area as shown in fig. 2 for the comparison of the user.
According to the embodiment, the word filling interface is more scientifically arranged according to the sentence segmentation information contained in the rhythm information of the melody and the word number information of the selected notes in the music melody, and the word filling interface is suitable for displaying the editing area and the lyric prompt information of each lyric single sentence in a one-to-one correspondence mode, so that the word filling interface has a stronger navigation function, can be suitable for the user-defined music melody by virtue of the fact that the corresponding navigation information is flexibly provided, the user can conveniently and quickly fill words, and the auxiliary music creation efficiency is improved.
According to the method and the device, the presentation mode of the lyric prompt information is standardized by refining the flow of the display word filling interface, sufficient information required by lyric creation is presented for the user through more scientific and reasonable interface layout, the user is prevented from inputting the lyrics randomly and disorderly, and the auxiliary lyric creation efficiency is improved.
Referring to fig. 5, in an alternative embodiment of the lyric creation navigation method of the present application, in step S1220, the maximum word number information of the lyrics to be input of the corresponding lyric single sentence is determined according to the melody single sentence, and is used to constrain the input word number of the corresponding editing area, including the following steps:
step S1221, determining a total duration value formed by the duration values of the selected notes of the melody single sentence:
In a simple embodiment, each selected note may be assigned exactly one word that is rhythm-synchronized with it, which simplifies the rhythm correspondence between the selected notes in the music melody and the words in the lyric text. Such a rigid mapping, however, struggles to express the liveliness of a musical composition, so the present embodiment instead accommodates a flexible correspondence between the number of words in a lyric single sentence and the total number of selected notes in the corresponding melody single sentence.
As mentioned above, when too many words correspond to one selected note, the melody and the lyrics easily fall out of step; whether this happens depends on the normal pronunciation length of the human voice relative to the duration of the corresponding selected note, and it can be judged on scientific grounds or from experience. For example, if a 1/4-beat selected note corresponds to one word, specifically one syllable, the pronunciation is generally natural; with two syllables it becomes slightly compact; with three syllables the quality of the music suffers. In view of this, a unit duration value may be specified as the duration reasonably occupied when each word of the lyric text is pronounced (one syllable can be regarded as one word, so lyrics of different languages are covered), and the maximum word number information of the corresponding lyric single sentence is then determined from the total duration value of the selected notes in the melody single sentence.
Step S1222, dividing the total time value by a predetermined unit time value to obtain the maximum word number information of the lyrics to be input of the corresponding lyrics single sentence:
The quotient obtained by dividing the total duration value of a melody single sentence by the preset unit duration value is the maximum number of words, specifically the number of syllables, that the lyric single sentence corresponding to that melody single sentence can accommodate, and this yields the maximum word number information of the lyrics to be input for the corresponding lyric single sentence.
Step S1223, configuring the maximum word number information as the maximum input word number of the corresponding edit area text box:
Once the maximum word number information of a lyric single sentence has been determined, a constraint can be attached to the text box of the corresponding editing area, limiting its maximum input word number to the value given by the maximum word number information, so that the number of words the user enters in the text box cannot exceed that value.
Through the flexible processing of this embodiment, a rhythm correspondence is preliminarily established, on the basis of a unit duration value, between the selected notes of the melody single sentence and the syllable-level words of the lyric single sentence, and the maximum number of words that each lyric single sentence can accept is determined. This realizes a lyric prompting function that guides the user to enter an appropriate amount of lyrics so that voice and music remain in harmony, and it refines the means of assisted creation.
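Purely as an illustrative sketch of steps S1221 to S1223, assuming note durations expressed in beats and a configurable unit duration of 1/4 beat per syllable (all identifiers and values below are assumptions for illustration, not part of the disclosure), the constraint could be computed roughly as follows:

```python
# Sketch: derive the maximum word (syllable) count of a lyric single sentence
# from the durations of the selected notes of the corresponding melody single
# sentence, then use it as the input limit of the editing area's text box.

UNIT_DURATION_BEATS = 0.25  # assumed duration comfortably sung per syllable


def max_word_count(note_durations_beats):
    """Total duration of the melody single sentence divided by the unit duration."""
    total = sum(note_durations_beats)
    return int(round(total / UNIT_DURATION_BEATS))


class EditArea:
    """Toy stand-in for one lyric single sentence's editing area."""

    def __init__(self, note_durations_beats):
        self.max_words = max_word_count(note_durations_beats)
        self.text = ""

    def input_text(self, text):
        # Constrain the text box to the maximum input word number.
        if len(text) > self.max_words:
            raise ValueError(f"at most {self.max_words} words allowed")
        self.text = text

    def prompt(self):
        # Word-count prompt shown next to the text box, e.g. "5/6".
        return f"{len(self.text)}/{self.max_words}"


if __name__ == "__main__":
    area = EditArea([0.5, 0.5, 0.25, 0.25])  # 1.5 beats -> at most 6 syllables
    area.input_text("我们一起唱")
    print(area.prompt())  # 5/6
```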
Referring to fig. 6, in an alternative embodiment of the lyric creation navigation method of the present application, in step S1300, in response to a word filling confirmation instruction, obtaining a lyric text input by a user in the editing area, and correspondingly storing the lyric text in a lyric cache area, the method includes the following steps:
step S1310, in response to the word filling confirmation instruction, acquiring, one by one, the lyric text entered for each lyric single sentence in its editing area, and replacing the corresponding lyric single sentence in the lyric cache area so as to store it;
After the user triggers the word filling confirmation instruction to confirm that word filling is complete, this embodiment processes the editing areas one by one in units of lyric single sentences, replacing the lyric single sentence at the corresponding address in the lyric cache area with the lyric text in the text box of each editing area so as to store it. At the program level, the replacement may be performed on the string array described above, as those skilled in the art will understand.
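At the program level, the replacement can be pictured roughly as below, modeling the lyric cache area as a list of single-sentence strings indexed in sentence order; this is a simplification for illustration and the identifiers are not from the original disclosure.

```python
# Sketch: lyric cache area modeled as a list of single-sentence strings.
# On the word filling confirmation instruction, each editing area's text
# replaces the cached sentence stored at the same index.

lyric_cache = ["默认歌词一", "默认歌词二", "默认歌词三"]  # default generated lyrics


def confirm_word_filling(edit_area_texts, cache):
    """Replace each cached single sentence with the text entered in the
    editing area of the same order, then return the updated cache."""
    for index, text in enumerate(edit_area_texts):
        if index < len(cache):
            cache[index] = text
    return cache


print(confirm_word_filling(["第一句新词", "第二句新词", "第三句新词"], lyric_cache))
```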
Step S1320, adaptively establishing a rhythm synchronization relationship between the lyric single sentence and the selected notes of the corresponding melody single sentence, so that each single character in the lyric single sentence has a selected note in the melody single sentence that is rhythm-synchronized with it:
As described above, taking a monosyllabic language such as Chinese as an example, the exemplary embodiment of the present application may specify that one selected note corresponds in rhythm to one Chinese character. Under this condition, a one-to-one numerical correspondence can be established between the selected notes and the number of words that can be entered in the lyric single sentence according to the total number of selected notes in the melody single sentence, so when the maximum word number information of the lyrics to be input for a lyric single sentence needs to be determined and displayed, it can be taken directly from that total. Establishing the rhythm synchronization relationship in this way is direct and simple, but lacks compatibility.
In an improved embodiment, to enhance compatibility, the maximum word number information adapted to the corresponding lyric single sentence may instead be determined from the total duration value formed by the selected notes, for example a total formed by 2/4-beat and 1/4-beat selected notes, according to a preset correspondence between duration and word count, and the correspondence between the selected notes and the words of the lyric single sentence is then established according to the same duration rule.
Theoretically, the user enters a lyric single sentence according to the maximum word number information given by the lyric prompt information, but for compatibility the user is also allowed to enter slightly fewer words than that maximum. For example, suppose the single sentence prompt asks for six Chinese characters and the corresponding melody single sentence has four selected notes whose durations are 2/4 beat, 2/4 beat, 1/4 beat and 1/4 beat. If the user enters only five Chinese characters, they can be fitted flexibly to the selected notes, for example the first 2/4-beat selected note carries two characters and the other three selected notes carry one character each. If the user enters only four characters, the four characters are matched one by one to the four selected notes. If the user enters only three characters, each character can still be matched to a 2/4-beat span: the last character corresponds to the first 1/4-beat note, and the following 1/4-beat note is treated as a prolonged, synchronized continuation of that last character. Such adjustment is performed adaptively by the program, so the total number of selected notes is not required to equal the number of words in the lyric single sentence exactly, which shows the flexibility of assisted lyric creation.
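One possible reading of this adaptive adjustment is sketched below, under the assumption that surplus words are absorbed by the longest notes and surplus notes prolong the last word; the rule set and all identifiers are illustrative rather than prescribed by the disclosure.

```python
# Sketch: adaptively map the words of a lyric single sentence onto the
# selected notes of the corresponding melody single sentence.

def map_words_to_notes(words, note_durations):
    """Return one entry per note: the word(s) it carries, or None when the
    note only prolongs the previous word."""
    mapping = [None] * len(note_durations)
    if len(words) <= len(note_durations):
        # One word per note; any trailing notes sustain the last word.
        for i, word in enumerate(words):
            mapping[i] = word
        return mapping
    # More words than notes: the longest notes absorb the surplus words.
    surplus = len(words) - len(note_durations)
    capacity = [1] * len(note_durations)
    for i in sorted(range(len(note_durations)),
                    key=lambda i: -note_durations[i])[:surplus]:
        capacity[i] += 1
    cursor = 0
    for i, cap in enumerate(capacity):
        mapping[i] = "".join(words[cursor:cursor + cap])
        cursor += cap
    return mapping


durations = [0.5, 0.5, 0.25, 0.25]  # 2/4, 2/4, 1/4, 1/4 beats
print(map_words_to_notes(list("风吹过山岗"), durations))  # ['风吹', '过', '山', '岗']
print(map_words_to_notes(list("风吹过山"), durations))    # ['风', '吹', '过', '山']
print(map_words_to_notes(list("风吹过"), durations))      # ['风', '吹', '过', None]
```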
In the present application, the quantitative correspondence between the total number of words in the lyric text entered by the user and the number of selected notes of the corresponding melody single sentence is adjusted adaptively on the basis of the duration relationship, so the user can write lyrics flexibly, lyric creation becomes more forgiving, the means of assisted lyric creation are enriched, and the efficiency of assisted lyric creation can be improved.
Referring to fig. 7, in an alternative embodiment of the lyric creation navigation method of the present application, in step S1400, the method for synchronously updating the characters in the note zone where each corresponding selected note is located in the composition interface with the lyric text in response to the storage event in the lyric cache zone includes the following steps:
step S1410, responding to the storage event in the lyric cache area, obtaining updated lyric text:
As mentioned above, after the lyrics entered in the word filling interface are stored in the lyric cache area, a storage event of the lyric cache area is triggered. In response to this event, the updated lyric text can be obtained from the lyric cache area and processed either in units of lyric single sentences or as the full lyric text, or the scope can be determined by reference to the selected notes whose durations correspond to the updated words.
In this embodiment, a rhythm correspondence is pre-established between the lyric single sentences of the lyric text and the melody single sentences of the music melody, so a whole lyric single sentence can be taken as the updated lyric text and processed as a unit, and the text content of the lyric editing controls in the note zones of the selected notes of the corresponding melody single sentence can then be modified adaptively according to the updated lyric text.
Step S1420, updating the characters in the note zone of each corresponding selected note in the composition interface with the updated lyric text according to the rhythm synchronization relationship between the updated lyric text and the music melody:
Because a rhythm correspondence has been established between the characters of the lyric single sentence and the selected notes of the music melody, the note zone in the composition interface to which each character belongs is unambiguous, and each character of the updated lyric text is accordingly placed, one by one, in its corresponding note zone for display according to that rhythm correspondence.
It should be noted that even if the word count of an updated lyric single sentence differs from the original word count, the rhythm correspondence between the lyric single sentence and the selected notes of the corresponding melody single sentence is re-established when the storage event is triggered, so the same process applies to the new rhythm correspondence and the characters of the updated lyric text are still displayed in the note zones of the rhythm-corresponding selected notes.
The embodiment embodies the dynamic association from the lyric text edited in the word filling interface to the composition interface, so that the lyric text edited in the word filling interface can be reflected in the composition interface in time, the synchronization of the lyric text contents displayed in the two interfaces is realized, a user can edit the melody according to the change of the lyrics in time, and the music auxiliary creation efficiency is further enhanced.
Referring to fig. 8, in an expanded embodiment of the lyric creation navigation method of the present application, after the step S1200 and the display of the word filling interface, the method includes the following steps:
step S1501, responding to an intelligent reference instruction triggered by a single sentence text in the lyric texts, and entering an intelligent search interface:
referring to fig. 2, the intelligent quote instruction for a single sentence text can be triggered in various ways. One of the common ways can be that when the cursor is positioned to a text box corresponding to a single sentence text, an "AI word filling" control is displayed on the word filling interface, and after the user touches the control, the intelligent exploration interface shown in fig. 8 is entered. For other ways, one skilled in the art can implement it flexibly.
Step S1502, in response to the keyword input from the intelligent search interface, displaying one or more recommended texts matching the keyword:
As shown in fig. 9, the user enters one or more keywords in the search box provided in the intelligent search interface, which may trigger the background to retrieve, through a preset search engine or an artificial intelligence word filling interface, one or more recommended texts semantically associated with the keywords, as shown in fig. 10. The recommended texts may be generated according to preset rules; for example, they may obey the maximum input word number of the corresponding single sentence text, or the several recommended texts may be constrained by the final vowel of the keyword so that each recommended text rhymes, making the lyrics sound more vivid when sung by a human voice.
The search engine or the artificial intelligence word filling interface is pre-configured to determine the recommended text semantically associated with the keyword according to the keyword of the user, and can be flexibly implemented by adopting tools well known to those skilled in the art, which is not repeated herein.
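A rough sketch of such candidate selection is given below; `rhyme_key` is a hypothetical helper standing in for a real final-vowel lookup, and the whole example is an illustration under assumptions, not a definitive implementation.

```python
# Sketch: filter candidate recommended texts so that each one respects the
# maximum input word number of the current single sentence and, optionally,
# rhymes with the keyword.

def rhyme_key(text):
    # Placeholder: a real implementation would look up the pinyin final
    # (vowel) of the last character; here we simply use the last character.
    return text[-1]


def select_recommendations(candidates, max_words, keyword=None):
    result = []
    for text in candidates:
        if len(text) > max_words:
            continue  # obey the single sentence's word-count constraint
        if keyword is not None and rhyme_key(text) != rhyme_key(keyword):
            continue  # optionally keep only rhyming candidates
        result.append(text)
    return result


candidates = ["晚风吹过山岗", "夜色温柔", "星光落在山岗"]
print(select_recommendations(candidates, max_words=6, keyword="月光洒满山岗"))
# ['晚风吹过山岗', '星光落在山岗']
```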
Step S1503, in response to a selection instruction of one of the recommended texts, replacing the single sentence text with the selected recommended text:
After the user selects one of the recommended texts, a selection instruction is triggered. In response to it, the selected recommended text replaces the display content in the text box of the corresponding editing area and is also synchronized to the lyric cache area to replace the corresponding single sentence text. Of course, the replacement of the lyric single sentence in the lyric cache area can instead be carried out when the user triggers the word filling confirmation instruction, without affecting the creative spirit of the invention.
It will be further appreciated that as the lyric phrase in the lyric cache is replaced, the text in the note zone where the selected note corresponding to the lyric phrase is located in the composition interface is also refreshed accordingly.
According to the embodiment, an intelligent reference function for a single lyric sentence is provided for a user word filling process, and auxiliary creation means are enriched, so that when the user creates lyrics, the advantages of big data can be fully utilized to further improve the word filling efficiency, shorten the word filling time and improve the production efficiency of music works.
Referring to fig. 11, in an expanded embodiment of the lyric creation navigation method of the present application, after the step S1200 and the display of the word filling interface, the method includes the following steps:
step S1601, in response to the input of the prior lyric text, acquiring one or more intelligent automatic association texts of the subsequent lyric text:
This embodiment further introduces an input auto-association function for lyric creation. When the user enters lyrics in the text box corresponding to a lyric single sentence, the background can automatically call a pre-configured search engine or artificial intelligence word filling interface to obtain corresponding intelligent auto-association texts. The search engine or artificial intelligence word filling interface is configured to generate subsequent lyric text from the lyric text the user has already entered, can be implemented by those skilled in the art, and preferably returns several such association texts for the user to choose from.
Step S1602, displaying the intelligent automatic association text on the word filling interface:
The one or more intelligent auto-association texts acquired from the remote server are displayed on the word filling interface, or on another layer from which the user can conveniently select.
Step S1603, in response to a selection operation for one of the intelligent auto-association texts, taking the selected intelligent auto-association text as the following lyric text:
when the user selects one of the intelligent auto-association texts listed in the interface, the corresponding intelligent auto-association text can be added into the corresponding editing area as the lyrics text subsequent to the previous lyrics text.
Preferably, the auto-association function may operate on lyric single sentences, that is, the search engine or artificial intelligence word filling interface may be configured to generate an intelligent auto-association text for the next lyric single sentence from the single sentence text of the previous lyric single sentence already entered by the user, or from all of the preceding lyric text, so that single-sentence selection gives the user more personalized room to operate.
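As a minimal illustration of this flow, the search engine or artificial intelligence word filling interface can be abstracted as an injected callable; every name below is an assumption for illustration.

```python
# Sketch: input auto-association for the next lyric text. The real backend is
# abstracted as `suggest_fn`, a callable returning candidate continuations.

def auto_associate(previous_lyrics, suggest_fn, limit=3):
    """Ask the backend for candidate continuations of the lyrics entered so
    far and return up to `limit` of them for the user to pick from."""
    candidates = suggest_fn("\n".join(previous_lyrics))
    return candidates[:limit]


def pick(candidates, choice_index):
    """The user selects one candidate, which becomes the following lyric text."""
    return candidates[choice_index]


# Dummy backend standing in for the real search engine / AI interface.
fake_backend = lambda prompt: ["候选续写一", "候选续写二", "候选续写三", "候选续写四"]

options = auto_associate(["第一句歌词"], fake_backend)
print(pick(options, 0))  # 候选续写一
```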
Therefore, the lyric creation efficiency of the user can be further improved by expanding the automatic association function of the lyric text input, and the personalized creation space can be reserved, so that the lyric creation assisting efficiency is improved.
In an extended embodiment of the lyric creation navigation method, the method comprises the following steps:
step S1601, in response to an editing event on the characters in the lyric editing control of a selected note's note zone, replacing the corresponding edited characters so as to update the corresponding lyric text in the lyric cache area according to the word-count correspondence:
Based on the embodiments of the present application, the user is allowed to edit the characters in the lyric editing control of a selected note's note zone; as described in some embodiments, the text length of the lyric editing control can be restricted to a single character. As a functional extension, the user may also enter several characters in one lyric editing control, which this embodiment then handles accordingly.
In this embodiment, when the user edits the text in the lyric editing control of a selected note's note zone, a corresponding editing event is triggered.
In response to the editing event, the text content of the lyric editing control can be obtained; according to its word count, the rhythm-synchronized characters of the same count are located in the lyric text of the lyric cache area, and the latter are replaced with the former. Modifying the characters in the lyric editing control thus also replaces and modifies the rhythm-synchronized character content in the lyric cache area.
Step S1602, responding to the content update event of the lyric cache area, and refreshing the words in the lyric editing control corresponding to the selected note of the music melody in the composition interface with the updated lyric text:
Modifying the content of the lyric cache area triggers a content update event of the lyric cache area. In response to this event, the updated text content can be obtained, and the text content of the lyric editing controls in the note zones of the several consecutive selected notes concerned can then be modified adaptively according to it.
Different from the foregoing embodiments, this embodiment handles the case where the lyrics are modified from the lyric editing area, and only the word-count correspondence needs to be considered for the replacement. Specifically, however many characters the user enters in the lyric editing control of a selected note, the position in the lyric text of the lyric cache area corresponding to that selected note can be determined; starting from that position, target characters equal in number to the characters entered in the lyric editing control are taken in order, and they are replaced one by one with the character string entered in the lyric editing control.
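A simplified sketch of this write-back is shown below, modeling the lyric cache area as a single string and the rhythm mapping as a precomputed index from selected note to character position; both are assumptions made only for illustration.

```python
# Sketch: a multi-character edit made in the lyric editing control of one
# note zone is written back into the lyric cache area starting at the
# character synchronized with that selected note.

def apply_note_zone_edit(cache_text, note_to_char_pos, note_index, edited_text):
    """Replace, starting at the character synchronized with `note_index`,
    as many following characters as were typed into the editing control."""
    start = note_to_char_pos[note_index]
    end = min(start + len(edited_text), len(cache_text))
    kept = edited_text[:end - start]
    return cache_text[:start] + kept + cache_text[end:]


cache = "风吹过山岗月光洒满窗"
note_to_char_pos = {i: i for i in range(len(cache))}  # one note per character here
print(apply_note_zone_edit(cache, note_to_char_pos, 2, "入梦乡"))
# 风吹入梦乡月光洒满窗
```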
Thus, even if each lyric editing control is pre-specified to correspond to only one single character, when more characters are entered from one lyric editing control they replace the corresponding character content in the lyric cache area, which in turn replaces character content in the subsequent lyric text. Because the lyric cache area has a mechanism that responds to storage events by refreshing the characters at the note zones of the composition interface, this refresh mechanism keeps the characters displayed by the lyric editing control at each selected note's note zone synchronized with the rhythm-synchronized characters in the lyric cache area. The user can therefore modify character content through the lyric editing control of a single note zone, entering several characters or a whole phrase, and the lyric text displayed in the note zones of the subsequent selected notes is replaced accordingly, further improving the efficiency of editing the lyric text.
In an alternative embodiment, if one selected note is allowed to correspond to two single characters, the two characters are processed and displayed against that selected note in the same way. If a selected note needs to be defined as the sustained continuation of the character of the previous selected note, a placeholder can represent the character of that selected note; for example, characters such as "+" or "#" can serve as placeholders, and once such a character appears in a lyric editing control, this embodiment parses it so that the selected note is voiced according to the character of the previous selected note.
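A minimal sketch of how such placeholders might be resolved when deciding what each selected note should voice, assuming "+" and "#" as the placeholder set (an assumption for illustration):

```python
# Sketch: interpret placeholder characters in note zones as "sustain the
# previous syllable" when resolving what each selected note should sound.

PLACEHOLDERS = {"+", "#"}


def resolve_voicing(note_zone_texts):
    """Return, per selected note, the syllable it should actually voice."""
    voiced = []
    previous = ""
    for text in note_zone_texts:
        if text in PLACEHOLDERS and previous:
            voiced.append(previous)  # prolonged sound of the prior character
        else:
            voiced.append(text)
            previous = text
    return voiced


print(resolve_voicing(["风", "吹", "过", "+"]))  # ['风', '吹', '过', '过']
```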
With this embodiment, modification of the content of the lyric text can be conveniently carried out from the composition interface, and a whole sentence or a long string of lyrics can be modified through the lyric editing control of a single note zone. This provides a more flexible and convenient means of control for word filling during music creation and a more convenient way for users to form musical works, enriching the technical means of assisted music creation and improving operating efficiency accordingly.
In an extended embodiment of the lyric creation navigation method, after the step S1400 and the display of the word filling interface, the method includes the following steps:
step S1700, responding to an automatic word filling instruction triggered by a control in a word filling interface or a vibration sensor of the local equipment, and automatically completing the lyric text according to the selected notes of the music melody:
Further, this embodiment can provide a quickly generated lyric text for the music melody composed by the user. Specifically, in the word filling interface, the lyric text can be created automatically for the user's music melody upon recognizing an automatic word filling instruction generated when the user touches a preset control, or one generated when the vibration sensor detects the user performing a "shake-shake" operation.
In one implementation, drawing on the logic of the related embodiments, recommended texts are retrieved one by one for the single sentence text of each lyric single sentence corresponding to the music melody, and a randomly chosen recommended text then replaces the content in the text box of that lyric single sentence's editing area.
In this way the user does not need to write the lyric text unaided: the lyric text can be obtained with a touch or a shake, after which only minor adjustments to each recommended text are needed, so the lyric text is generated quickly and the efficiency of assisted music creation is further improved.
Referring to fig. 13 in combination with fig. 3, in an extended embodiment of the lyric creation navigation method of the present application, before step S1100 of responding to the user's word filling instruction and obtaining the pre-generated music melody of the musical work in the composition interface, the method includes the following steps:
step S8100, a composition interface is displayed, the composition interface displays a list of note locations with rhythm and scale as dimensions, wherein each note location corresponds to one note in the scale dimension corresponding to a certain time under the indication of the rhythm dimension:
step S8200, formatting the occupation width of the corresponding note zone on the composition interface according to the rhythm corresponding to each undetermined note in the preset melody rhythm information:
As shown in fig. 3, since the melody rhythm information already defines, as described above, the duration of each pending note of the music melody, the width occupied by each pending note's note zone in the composition interface can be adjusted according to that duration. The layout of the composition interface is thus formatted according to the durations in the melody rhythm information, a composition area for producing the music melody is formed in part of the composition interface, and the duration information of the pending notes of the music melody is displayed in the composition interface, so the user can intuitively grasp the duration of each note of the music melody, which helps in understanding the musical rhythm and in carrying out the composition.
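As an illustrative sketch, the occupation width of each note zone could simply be made proportional to the pending note's duration; the pixel scale below is an arbitrary assumption, not a value from the disclosure.

```python
# Sketch: format the occupation width of each pending note's zone in the
# composition interface in proportion to its duration.

PIXELS_PER_BEAT = 120  # assumed horizontal scale of the rhythm dimension


def note_zone_widths(pending_note_durations_beats):
    """Width, in pixels, of each pending note's column in the composition area."""
    return [round(d * PIXELS_PER_BEAT) for d in pending_note_durations_beats]


# Melody rhythm information for four pending notes: 2/4, 2/4, 1/4, 1/4 beats.
print(note_zone_widths([0.5, 0.5, 0.25, 0.25]))  # [60, 60, 30, 30]
```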
Step S8300, sequentially advancing the acquisition of the selected note corresponding to each pending note of the music melody to be obtained, and, for the pending note of the current order, displaying a corresponding note prompt area in the composition interface that lists, for selection, the note zones of the selectable notes, which vary in relation to the previously ordered selected notes:
The melody rhythm information specifies the order and duration of each pending note of the music melody, so the music melody can be built up in the order of the pending notes, and each pending note can be mapped into the two-dimensional coordinate system of the composition interface. Starting from the first pending note of the music melody, the selected note of each position is then obtained in order in the composition interface, specifically in the composition area.
This embodiment focuses on the association between the one or more selected notes already determined in earlier orders and the selectable notes of the current order. Those skilled in the art can flexibly set, according to their own understanding of music theory and with the aim of improving assisted-creation efficiency, the preset rule used to determine that association, so that the selectable notes of the current order are determined from the one or more previously determined selected notes and are displayed in the corresponding note prompt area of the composition interface, as shown in fig. 14, for the user to choose from. By obtaining the corresponding selected note for each pending note one by one in this way, the whole music melody can be constructed. It is understood that the preset rule referred to here may be theoretical or empirical.
Step S8400, constructing the music melody, which includes the selected notes that have been determined in sequence:
In the process of obtaining the selected notes of the music melody, each obtained selected note may be appended to the melody buffer corresponding to the music melody, so that the melody buffer holds, in a timely manner, the selected note sequence formed by the determined selected notes, and this selected note sequence essentially constitutes the music melody.
Through the disclosure of the typical embodiment of the lyric creation navigation method and various variation embodiments thereof, it can be understood that the user-defined music melody becomes more convenient and efficient, and particularly, the following advantages are embodied:
First, the present application provides the melody rhythm information required for music melody creation, formats the duration information of each pending note in the composition interface according to the rhythm defined by that information, and guides the user to select each pending note of the music melody in the composition interface according to the melody rhythm information, determining every selected note the music melody needs, so that the music melody is constructed and the musical work can be formed.
Secondly, the technical means adopted here are easy to implement and cheap to operate; for example, obtaining the melody rhythm information and obtaining the music melody at the composition interface do not depend on big data, and the scheme can run at the client, at the server, or be divided between the two, so implementation is flexible, implementation cost is low, and creation time is greatly reduced. The computing resources of computer equipment are thus fully exploited, the functions of music-assisted creation products are improved, and the production efficiency of assisted music creation is greatly increased.
Referring to fig. 15, an embodiment of the lyric creation navigation method according to the present application, in step S8200, the occupying width of the corresponding note zone on the composition interface is formatted according to the rhythm corresponding to each undetermined note in the preset melody rhythm information, including the following steps:
step S8210, obtaining preset accompaniment chord information and preset melody rhythm information corresponding to the target accompaniment template, wherein the accompaniment chord information comprises a plurality of chords, and the melody rhythm information is used for limiting the rhythm of to-be-determined notes which are synchronous with the chords in the music melody to be obtained:
This embodiment may determine the information required to produce the music melody from an accompaniment template prepared in advance. Each accompaniment template corresponds to one piece of background music, to the accompaniment chord information on which that background music is based, and to melody rhythm information that prescribes the rhythm correspondence between the melody and the accompaniment chord information; the accompaniment chord information and melody rhythm information corresponding to an accompaniment template can therefore be obtained, and the background music as well if necessary.
The rhythm correspondence between the melody rhythm information and the accompaniment chord information follows the relevant conventions of music theory, so once the music melody prepared according to the melody rhythm information has been produced, it remains rhythm-synchronized with the background music played according to the accompaniment chord information, and the two can be synthesized according to that rhythm correspondence. Similarly, the melody rhythm information can also serve as the basis for filling words into the music melody, so that the lyrics and the background music keep a rhythm-synchronized relationship.
It should be noted that a chord may be a columnar (block) chord or a broken (arpeggiated) chord, or, within one musical work, some chords may be columnar and others broken. Alternatively, for the same chord progression, broken chords may be applied to the verse of the music melody and columnar chords to the refrain. These are all flexible applications of composition principles during music creation, and such flexibility does not take them outside the scope covered by the creative spirit of the present application.
As for the rhythm correspondence maintained between the chords and the melody, the chords in the chord progression and the pending notes of the music melody, both expressed in time, can also correspond to each other flexibly; for example, at a 4/4 time signature each chord may correspond to 4 beats, 2 beats or 1 beat. The rhythm correspondence between chords and melody notes referred to in this application should therefore not be understood as a strict equal-duration relationship: the duration of one chord may cover the durations of the notes of one or more parts of the music melody, and the details can be laid down in the melody rhythm information.
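A rough sketch of this time-based correspondence, assuming chords and notes are both described by a starting beat and a duration (the data layout and names are illustrative only):

```python
# Sketch: find the chord that is rhythm-synchronized with a pending note by
# comparing time ranges, so that one chord may cover several notes.

def chord_for_note(note_start_beat, chords):
    """`chords` is a list of (start_beat, duration_beats, chord_name); return
    the chord whose duration covers the note's starting beat, if any."""
    for start, duration, name in chords:
        if start <= note_start_beat < start + duration:
            return name
    return None  # e.g. a rest, or a note with no rhythm-synchronized chord


chord_progression = [(0, 4, "C"), (4, 4, "G"), (8, 4, "Am"), (12, 4, "F")]
print(chord_for_note(5.0, chord_progression))   # G
print(chord_for_note(17.0, chord_progression))  # None
```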
It should also be noted that in a musical work the chords and the melody notes do not always need one-to-one synchronization. For example, for rests in the melody there is no need to determine selected notes; conversely, one or several melody notes may be sung without an accompanying chord, in which case the corresponding selected notes still need to be determined even though no chord is rhythm-synchronized with them. In view of this, rest marks may be added to the melody rhythm information as needed, or pending notes belonging to an unaccompanied passage may be represented in a specific format. Those skilled in the art can adapt this flexibly.
The accompaniment chord information and the melody rhythm information can be stored locally or on a remote server, so that after the client user designates an accompaniment template, the accompaniment chord information and melody rhythm information corresponding to it can be called up. Because both are associated with the same accompaniment template, they can be packaged into a single message body for storage and/or transmission, for example a message structure in the same XML or TXT format, which the client then calls; of course they can also be stored and transmitted separately, without affecting the creative spirit of the present application.
In addition, it can be understood that once the accompaniment chord information corresponding to the accompaniment template has been determined and the background music formed, the corresponding key signature, time signature and tempo are also determined; this information can likewise be recorded explicitly in the melody rhythm information and shown in the function selection area 210 of the composition interface of fig. 3 for the client user's reference, highlighting the navigation function of the present application.
Step S8220, adjusting the occupation width of the corresponding note zone in the composition interface according to the rhythm corresponding to each undetermined note in the preset melody rhythm information to show the duration information of the undetermined note of the music melody:
The client device needs to present a composition interface, which may take various suitable layout forms, for example the two-dimensional coordinate layout of an embodiment disclosed later in this application, as shown in fig. 3 and fig. 14. In the composition interface, a note prompt area 220 can be laid out for each pending note of the whole music melody, from which the user selects that pending note, and a description prompt area 240 can be provided if necessary, so that the series of selected notes corresponding to the music melody is obtained in the composition interface and the whole music melody is constructed.
In the melody rhythm information, the rhythm of each sequentially arranged pending note can be marked with a duration value, so in the composition interface the note prompt area 220 can be marked according to the duration of each pending note, displaying the pending note's duration information. The form in which the duration is marked can be designed flexibly for different layouts of the composition interface, as long as the design conveys the duration information and thereby provides navigation value for the user's composition.
In the embodiment, the duration information of the undetermined notes of the music melody is automatically and visually expressed into the composition interface according to the melody rhythm information corresponding to the accompaniment template, so that the navigation effect of the auxiliary creation is enhanced, the user can understand the rhythm of the music melody more conveniently, and the user can create the music melody more efficiently by combining singing.
Referring to fig. 16, in an embodiment of a variation of the lyric creation navigation method according to the present application, in step S8210, the obtaining of the preset accompaniment chord information and the preset melody rhythm information corresponding to the target accompaniment template includes the following steps:
step S8211, displaying a plurality of candidate accompaniment templates in response to the accompaniment template acquisition command:
A database can be built in advance to hold the information related to the accompaniment templates; as mentioned above, each accompaniment template corresponds to background music, accompaniment chord information and melody rhythm information stored in the background. To expose the accompaniment templates in the foreground, an entry control can be placed in the composition interface. When the user triggers this control, as shown in fig. 17, several accompaniment templates are listed to form the candidate accompaniment templates, each with an audition play control; when the play control of a candidate accompaniment template is touched, the corresponding background music is played, so the user can perceive the musical style of that candidate accompaniment template by listening and decide whether to select it.
Step S8212, receiving a user selection instruction to determine a target accompaniment template from the candidate accompaniment templates:
After settling on a candidate accompaniment template, the user can touch its "Use" control to select it, whereupon it is determined to be the target accompaniment template.
Step S8213, obtaining the accompaniment chord information and the melody rhythm information corresponding to the target accompaniment template:
If the corresponding accompaniment chord information and melody rhythm information are stored on a remote server, they can be acquired directly over the network.
This embodiment provides multiple accompaniment templates for the user to choose from and decouples the accompaniment templates from the music melody, allowing templates to be prepared in advance and selected as needed. Richer template sources can thus be offered for music creation, accompaniment template resources can be shared and re-used in new creations, user-defined accompaniment templates are no longer required, composing musical works becomes more convenient for the user, and assisted-creation efficiency is improved.
Referring to fig. 18, in a variant embodiment of the lyric creation navigation method of the present application, step S8300, in which the acquisition of the selected note for each pending note of the music melody to be obtained is advanced in order and a note prompt area listing the note zones of the selectable notes (which vary in relation to the previously ordered selected notes) is displayed in the composition interface for the pending note of the current order for selection, includes the following steps performed in response to each pending note:
step S8310, corresponding to a current order undetermined note of the music melody, determining a plurality of selectable notes according to a preset rule:
As previously mentioned, for the pending note of each current order a corresponding set of selectable notes needs to be determined. The selectable notes may be determined according to the preset rule from one or more previously ordered selected notes; for the first pending note of the music melody they may be provided by random recommendation or by some other rule, and for every subsequent pending note they are recommended in order according to the preset rule for the user to choose from.
The preset rule may be based on whether the selectable notes of the current order harmonize acoustically with the selected notes of one or more earlier orders, so that the selectable notes of the current order vary as those earlier selected notes vary. As mentioned above, the rule may not apply to the first pending note of the music melody, which has no prior selected note.
Applying the preset rule typically results in removing at least some notes from the plurality of candidate notes according to the previously ordered selected notes, further refining the candidates and yielding the final selectable notes. Of course, depending on the content of the specific rule, it is sometimes unnecessary to remove any note from the candidates.
Step S8320, displaying a note zone corresponding to the selectable note at a position corresponding to the current sequence on a composition interface to form a note prompt zone:
As shown in fig. 3, fig. 19 and fig. 20, the pending note of the current order has, as mentioned above, a corresponding position in the composition interface. Specifically, in the earlier example of representing the composition area in a two-dimensional coordinate system, the composition area contains, along the horizontal axis, a column of note locations used to determine the pending note of the current order, so the several selectable notes can be represented visually in that column, for example by coloring the note zones corresponding to the selectable notes, and together these define the note prompt area 220 corresponding to the pending note of the current order. The note prompt area 220 thus contains several colored note locations, each indicating a selectable note, and the selectable notes may be continuous or discontinuous on the scale.
Step S8330, receiving the selected note determined from the plurality of selectable notes in the note-hinting area, advancing the music melody to the next order to cyclically determine its subsequent selected note:
On the composition interface, the user of the client device can, for each pending note of the music melody, determine a selected note from its corresponding note prompt area 220, and the finished music melody is constructed by determining the sequence of selected notes. It will be appreciated that the several selected notes forming the music melody are arranged in order, presented in the note selection area 230 used to display them, and are generally acquired in chronological order, although individual adjustment of them is not excluded.
The user may determine selected notes for only part of the pending notes specified by the melody rhythm information and discard the rest, forming what the user regards as a complete music melody; generally, however, the user is expected to determine a corresponding selected note for all of the rhythm information so that the music melody matches the background music more completely.
For the pending note of the current order, the user can select one of the note locations provided in the note prompt area 220 of that order, thereby determining a selected note, and the selected note forms the note required at the corresponding order of the music melody. As mentioned above, in some embodiments a melody buffer may store the selected note sequence generated during the creation of the music melody, and when the selected note of a pending note is produced it is appended to the end of the sequence, as shown in the note selection area 230 of fig. 19 and fig. 20.
As shown in fig. 19 and fig. 20, after a selected note has been determined, the music melody advances to the next order, the pending note of that next order becomes the new current order, and the process of this embodiment is repeated to determine its selected note. This continues until every pending note of the music melody has been determined and the creation of the music melody is complete.
This embodiment realizes full navigation of the user's composition of the music melody through a simple flow; it requires little code, its logic is simple, and its performance is high, and the user only needs to select the notes required by the music melody step by step without mastering rich music theory knowledge, which greatly lowers the threshold for composing a music melody.
In the process of obtaining the selected notes of the music melody, the selected note of each pending note is constrained by the previously ordered selected notes, so harmonic coordination between adjacent selected notes in the music melody is achieved and the quality of the music melody created by the user is improved.
In addition, in the process of guiding the user to create the music melody, the note prompt area 220 is displayed in the composition interface, and the note zone corresponding to the selectable note is displayed in the note prompt area 220, so that the navigation effect of the auxiliary creation is highlighted, and the efficiency of creating the music composition by the user is further improved.
In an alternative embodiment of the lyric creation navigation method, in step S8310, a plurality of selectable notes are determined according to a preset rule corresponding to a currently positioned pending note of the music melody, including the steps of:
step S8311, according to the chord in the preset accompaniment chord information synchronized with the current pending note in rhythm, determining the note in the harmony interval corresponding to the chord as a candidate note:
This embodiment further exploits the accompaniment chord information provided together with the melody rhythm information. Because the accompaniment chord information and melody rhythm information corresponding to the accompaniment template are determined through that template, and the background music prepared from the accompaniment chord information is associated with it accordingly, this embodiment uses the preset accompaniment chord information to further improve the navigation quality of the music melody.
As mentioned above, the melody rhythm information specifies the rhythm synchronization relationship between each note of the music melody and each chord of the accompaniment chord information. For the pending note currently being handled in the music melody, this embodiment can use the timing information provided by the melody rhythm information to determine the chord in the accompaniment chord information that is rhythm-synchronized with that pending note, then determine the notes in the harmony intervals of that chord and take them as candidate notes.
The harmony intervals include: the perfectly consonant intervals, namely the perfect unison and the perfect octave, which blend completely with the chord tones; the consonant intervals, namely the perfect fifth and the perfect fourth, which blend well with the chord tones; and the imperfectly consonant intervals, namely the major and minor thirds and the major and minor sixths, which blend less completely. Following this principle, notes within these intervals can be preferred when determining the corresponding selected notes.
In the present application, the notes rhythm-synchronized with a chord are restricted so that the finally determined selected note lies within a harmony interval of the chord synchronized with it in rhythm, which lowers the demand on the user's music theory knowledge and allows pleasing, well-blended musical works to be produced more efficiently.
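Sketched below, under simplifying assumptions, is one way to enumerate candidate notes lying in consonant intervals with a chord, using MIDI pitch numbers and, for brevity, measuring intervals from the chord root only; the identifiers are illustrative.

```python
# Intervals (in semitones, modulo 12) treated as consonant with the chord:
# 0 = unison/octave, 7 = perfect fifth, 5 = perfect fourth,
# 4/3 = major/minor third, 9/8 = major/minor sixth.
CONSONANT_SEMITONES = {0, 3, 4, 5, 7, 8, 9}


def candidate_notes(chord_root, low=60, high=72):
    """All MIDI pitches in [low, high] forming a consonant interval with the root."""
    return sorted(p for p in range(low, high + 1)
                  if (p - chord_root) % 12 in CONSONANT_SEMITONES)


print(candidate_notes(60))  # C chord root: [60, 63, 64, 65, 67, 68, 69, 72]
```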
Step S8312, applying a preset rule, filtering the candidate notes according to the selected notes sorted in advance to obtain the remaining selectable notes:
The notes within the harmony intervals usually number more than a few, so they may be further filtered to improve creation efficiency and achieve a better creative result.
The skilled person can flexibly set the preset rules required by the filtering according to self understanding of the music theory knowledge and for the purpose of improving the creation assistance efficiency, so that the candidate notes remaining after the filtering are used as the selectable notes to be finally displayed. It will therefore be appreciated that the setting of these preset rules may be theoretical or empirical.
After this embodiment is executed, several selectable notes are offered in the note prompt area 220 of the composition interface for the user to choose from and confirm. The note prompt area 220 essentially corresponds in rhythm to one chord of the accompaniment chord information; that is, the playing duration of a selected note determined from the note prompt area 220 is covered by the playing duration of that chord, and in this sense the chord and the selected note are rhythm-synchronized.
As previously mentioned, the preset rules may be configured to remove at least individual notes from the plurality of candidate notes according to the selected note ranked first.
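As one illustrative example of such a preset rule (not prescribed by the disclosure), the candidates could be filtered by limiting the melodic leap from the previously selected note:

```python
# Sketch: refine the candidate notes of the current order by discarding
# candidates more than a fifth (7 semitones) away from the previous
# selected note; the first pending note is left unfiltered.

MAX_LEAP_SEMITONES = 7


def selectable_notes(candidates, previous_selected=None):
    if previous_selected is None:
        return list(candidates)  # first pending note: nothing to filter against
    return [p for p in candidates
            if abs(p - previous_selected) <= MAX_LEAP_SEMITONES]


print(selectable_notes([60, 63, 64, 65, 67, 68, 69, 72], previous_selected=60))
# [60, 63, 64, 65, 67]
```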
In a variant embodiment, during the cyclic acquisition of selected notes, if several consecutive pending notes are rhythm-synchronized with the same chord, the candidate notes of the later pending notes need not be determined again: the candidate notes determined for the earlier pending note are reused and associated with the selected note of the previous order, from which the selectable notes of the current order are preferred, which improves program efficiency.
In this embodiment, the notes in the harmony intervals of the chord that is rhythm-synchronized with the music melody, taken from the preset accompaniment chord information, are further determined as candidate notes, and the preset rule is then applied to those candidates to prefer the selectable notes. This further narrows the range from which the user picks selectable notes in line with music theory or prior knowledge, clearly further improves the efficiency with which the user creates the music melody, and raises the probability that the user produces a high-quality musical work.
In step S8320, a note zone corresponding to the selectable note is displayed at a position corresponding to the current order on the composition interface to form a note prompt zone, which includes the following steps:
step S8321, coloring the note zone corresponding to the selectable note at the position corresponding to the current sequence on the composition interface to form a note prompt zone so as to finish the representation and display of the selectable note:
As mentioned above, after the selectable notes corresponding to the current order of the music melody have been determined by the foregoing process for determining the selected note sequence, the note locations corresponding to them at the current order are identified to form the note prompt area 220, and those note locations are colored to complete the representation and display of the selectable notes.
Step S8322, moving the composition interface along the rhythm dimension direction to move the note prompt area corresponding to the current sequence to a preset important position:
For the convenience of user operation, the composition interface may shift automatically. As can be seen from the layout change between fig. 19 and fig. 20, the composition interface is moved automatically along the rhythm dimension of the two-dimensional coordinate system so that the note prompt area 220 of the current order is moved to a preset key position where continuous touch selection is most convenient; for example, the key position may be placed near the vertical axis of the mobile terminal acting as the client, so that the user can keep tapping with a finger and quickly select the note zones of successive orders.
This embodiment further takes into account cultivating and adapting to the user's operating habits: by displaying the note zones in color and automatically moving the composition interface, the user can select the note zones required by the music melody more conveniently, does not need to move the composition interface manually, and can quickly read the colored note zones to make a selection, further improving assisted-creation efficiency.
In an embodiment of the lyric creation navigation method, the method further includes the following steps:
Step S8500, in response to a reset event of any selected note, starting an update of the selected notes of the later orders, so that the selected notes after that order are automatically re-determined on the composition interface according to the selected notes before them; where the re-determined selectable notes of a position no longer include the selected note originally determined there, a selectable note is chosen at random from the re-determined selectable notes and the selected note of that position is re-determined.
After the user has determined some of the pending notes of the music melody, if one of the already determined selected notes needs to be reset, the user may choose another note location in the note selection area 230 to which that selected note belongs, thereby determining a new selected note and triggering a reset event.
A reset event for a selected note that is not in the final position of the music melody could, in theory, make the harmonic relationship between it and the following selected notes contradict the preset rules set in the earlier embodiments, so in response to the reset event it is appropriate to rearrange the selected notes that follow the reset one.
To re-determine the later selected notes, each selected note after the reset one can be reset in turn according to the procedure for setting selected notes in the earlier embodiments. If the note prompt area 220 of the current order still contains the note location of the previously determined selected note, neither the note location nor the selected note of that order needs to change. Conversely, if the note prompt area 220 of the current order no longer contains the note location of the previously determined selected note, the selected note is automatically determined again, choosing only among the newly determined note locations.
Indeed, in other embodiments, one could select a selectable note according to other rules, such as selecting the selectable note indicated in the note zone closest to the original note zone as the new selected note.
The embodiment further allows the user to individually adjust the selected notes after the user completes the determination of the selected notes, and the changes caused thereby automatically determine the selected notes of the subsequent music melody for the user again in a chain reaction mode through the automatic processing of the embodiment without setting the selected notes one by one again by the user, thereby simplifying the process of modifying the music melody by the user, greatly improving the editing efficiency of the music melody and enriching the auxiliary music creation means.
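The chained re-determination can be sketched as follows. The selectable_notes routine is the hypothetical screening helper from the earlier sketch, and the flat list/dict data structures are simplifying assumptions about the composition interface state.

```python
import random

def redetermine_after_reset(selected, reset_index, scale_notes, chords_by_order):
    """After the note at reset_index is reset, walk the later orders and keep
    each existing selected note if it is still selectable; otherwise pick a
    new one at random from the re-determined selectable notes."""
    for order in range(reset_index + 1, len(selected)):
        # selectable_notes: screening helper defined in the earlier sketch
        options = selectable_notes(scale_notes, chords_by_order[order],
                                   prev_note=selected[order - 1])
        if not options:            # fall back to the plain candidate set if the
            options = scale_notes  # preset rule filters everything out
        if selected[order] not in options:
            selected[order] = random.choice(options)
    return selected
```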
In an embodiment of the lyric creation navigation method, the method further includes the following steps:
Step S8600, in response to an automatic composition instruction triggered by a control in the composition interface or by a vibration sensor of the local device, automatically completing the note prompt areas corresponding to the undetermined notes of the music melody and the selected notes therein:
Building on the foregoing embodiments, the present application can also realize automatic creation of a music melody: after the user triggers an automatic composition instruction, a corresponding selected note is determined for every pending note of the music melody that has not yet been determined.
The automatic composition instruction may be triggered by touching a control in the composition interface, or by recognizing the data pattern of a vibration sensor, for example by recognizing that the vibration sensor is in a reciprocating, shaking state, an interaction commonly referred to as "shake-shake".
After the automatic composition instruction is triggered, in response to that instruction, this embodiment follows the flow for determining selected notes in sequence described above: for each current order of the music melody it determines the notes within the harmony interval of the chord synchronized with the rhythm at that order, applies the preset rules to screen out the selectable notes, displays the note zones corresponding to those selectable notes, randomly determines the selectable note indicated by one of the note zones as the selected note of the current order, and then advances to the next order to obtain the following selected note, and so on, until every pending note has been determined as a corresponding selected note. The automatic composition process is thereby realized and the whole music melody is determined.
The automatic creation process of this embodiment differs from the prior-art logic of automatic generation by artificial intelligence: the music melody can be completed automatically purely from the relationship between the melody rhythm information and the chords, together with the preset rule relating each selected note to the selected note before it. The whole process is rule-based, requires little computation, and is computationally efficient, further enriching the means of assisted music composition, improving the efficiency of automatic assisted creation, opening the way for everyone to become a music creator, and facilitating the construction of a new Internet music ecosystem.
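A compact sketch of this rule-based automatic composition loop is shown below; it reuses the hypothetical selectable_notes helper from the earlier sketch and simply draws one selectable note at random per order, which is the only behaviour the description above requires.

```python
import random

def auto_compose(num_orders, scale_notes, chords_by_order, first_note=None):
    """Rule-based automatic composition: for each order, screen selectable
    notes from the rhythm-synchronized chord and pick one at random."""
    melody, prev = [], first_note
    for order in range(num_orders):
        # selectable_notes: screening helper defined in the earlier sketch
        options = selectable_notes(scale_notes, chords_by_order[order], prev)
        if not options:            # fall back if the preset rule filters all candidates
            options = scale_notes
        prev = random.choice(options)
        melody.append(prev)
    return melody
```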
Referring to fig. 21, in a variant embodiment of the lyric creation navigation method of the present application, step S8330, receiving a selected note determined from the plurality of selectable notes in the note prompt area, includes the following steps:
Step S8331, in response to a selection operation on one of the note zones corresponding to the plurality of selectable notes in the note prompt area, receiving the note indicated by the selected note zone as the selected note of the current order of the music melody:
When the user performs a selection operation on a note zone in the note prompt area 220 of the current order, a note selection event is triggered, and the selectable note indicated by the selected note zone is determined as the selected note corresponding to the pending note at the current order of the music melody.
Step S8332, highlighting the selected note zone, adding a lyric editing control to that note zone, and displaying in the lyric editing control the characters of the lyric text in the lyric cache area that are rhythm-synchronized with that note zone.
For the selected note zone, this embodiment highlights it relative to the other, unselected note zones; the highlighting may also be implemented, for example, by rendering the other note zones in grayscale while keeping the selected note zone as a highlighted area.
Further, a lyric editing control, which may be a text box, is added to the selected note zone, and a character of the lyric text is displayed in it to indicate the character whose rhythm is synchronized with the selected note of that note zone. The lyric text may be stored in advance in a lyric cache area, so that after the lyric editing control is added, the rhythm-synchronized character is loaded into it from the lyric cache area. If no such lyric text exists in the lyric cache area, a default placeholder may be used, for example filler syllables such as "la" or "o", which may likewise be synchronized into the lyric cache area. How the lyric text is authored or edited is disclosed in other embodiments later in this application and is set aside for now.
By default, each lyric editing control displays a single word, such as a single Chinese character for Chinese lyrics or a single word for English lyrics; other embodiments may allow several characters, but generally no more than one word, so as to fit the duration of the note. In this embodiment, taking Chinese lyrics as an example, a single character is displayed in each lyric editing control, and the character displayed in a lyric editing control indicates that the character at that position is to be sung at the pitch of the selected note indicated by that position. In some embodiments, when the text in the lyric editing control of a note zone is cleared, the selected note there is sung as a sustained continuation of the character corresponding to the previous selected note. The selected notes and the sung characters of the lyric text may therefore correspond one to one, or follow more complex, flexibly varied relationships, which the skilled person can implement according to this principle.
In this embodiment, when the user determines a selected note, the corresponding note zone is highlighted and a lyric editing control is added to it for displaying the rhythm-synchronized character of the lyric text. The correspondence between the music melody and the lyric text is thus shown more intuitively, providing a more intuitive navigation effect, and the user can later modify the corresponding character individually through the lyric editing control with accurate alignment, further improving the efficiency of assisted music composition.
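The following sketch illustrates, under simplifying assumptions, how a selected note zone might be bound to a lyric editing control and filled from the lyric cache area; the LyricCache and NoteZone classes are hypothetical, while the "la" placeholder default is taken from the description above.

```python
from dataclasses import dataclass, field

@dataclass
class LyricCache:
    """Lyric cache area: one character per rhythm position (order)."""
    chars: dict = field(default_factory=dict)

    def char_for(self, order: int, placeholder: str = "la") -> str:
        # Use the rhythm-synchronized character if present, otherwise a filler
        # syllable, which is also written back into the cache.
        return self.chars.setdefault(order, placeholder)

@dataclass
class NoteZone:
    order: int
    note: int
    highlighted: bool = False
    lyric_text: str = ""

def on_note_selected(zone: NoteZone, cache: LyricCache) -> NoteZone:
    """Highlight the selected note zone and load the rhythm-synchronized
    character of the lyric text into its lyric editing control."""
    zone.highlighted = True
    zone.lyric_text = cache.char_for(zone.order)
    return zone

zone = on_note_selected(NoteZone(order=3, note=67), LyricCache({0: "你", 1: "好"}))
print(zone.lyric_text)   # no cached character at order 3 -> "la"
```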
In an embodiment of the lyric creation navigation method, step S8400, constructing the music melody, which includes the selected notes determined in sequence, comprises the following steps:
Step S8410, in response to the determination event of each selected note, adding it to a melody cache area:
As mentioned above, a melody cache area may be configured in memory for storing the sequence of selected notes determined by the user. When the user determines a selected note and a determination event is triggered, that selected note is added to the melody cache area in response to the event; the selected notes stored in order in the melody cache area constitute the sequence of selected notes determined by the user and may be used to construct the corresponding music melody.
Step S8420, in response to the reset event of any selected note, replacing the corresponding selected note in the melody cache area:
As mentioned above, when the user resets a selected note, a corresponding reset event is triggered, and in accordance with the preceding embodiments the selected note reset by the user replaces the corresponding selected note in the melody cache area.
Step S8430, in response to a composition playing instruction, constructing the music melody from the sequence of selected notes stored in the melody cache area and starting playback:
After the user touches the playing control in the composition interface, the music melody can be constructed from the sequence of selected notes stored in the melody cache area; naturally, each selected note of the music melody is constrained by the corresponding duration given in the melody rhythm information, and on this basis the music melody can be played automatically.
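A minimal sketch of this melody cache area and the three operations on it (add on determination, replace on reset, build on play) is given below; the class name and the (note, duration) pairing are assumptions made for illustration.

```python
class MelodyCache:
    """Melody cache area holding the user's sequence of selected notes."""

    def __init__(self, durations):
        self.durations = durations   # durations from the melody rhythm information
        self.notes = []

    def on_determined(self, note):
        """Determination event: append the newly determined selected note."""
        self.notes.append(note)

    def on_reset(self, order, note):
        """Reset event: replace the selected note at the given order."""
        self.notes[order] = note

    def build_melody(self):
        """Composition playing instruction: pair each selected note with the
        duration constrained by the melody rhythm information."""
        return list(zip(self.notes, self.durations))

cache = MelodyCache(durations=[0.5, 0.5, 1.0])
for n in (60, 64, 67):
    cache.on_determined(n)
cache.on_reset(1, 65)
print(cache.build_melody())   # [(60, 0.5), (65, 0.5), (67, 1.0)]
```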
In some embodiments, if a preset lyric text needs to be sung to the music melody, the music melody and the lyric text may be sent to a remote server for synthesis; or, if the local client has synthesis capability, the two may be synthesized directly at the client and played after synthesis.
Before the music melody is played, it may also be synthesized with the background music corresponding to the accompaniment template according to preset business logic, and then played.
In some embodiments, the composition playing instruction may be triggered when the user submits a publication submission instruction for publishing the music melody.
This embodiment provides auxiliary control means for the operations that follow the making of the music melody, so that the user can edit the music melody he or she has created, further improving the efficiency of assisted music composition.
In an embodiment of the lyric creation navigation method, step S8400, constructing the music melody, which includes the selected notes determined in sequence, comprises the following steps:
Step S8440, in response to a publication submission instruction, submitting draft information including the music melody to a remote server for storage, driving the remote server to synthesize a song from the music melody, and pushing the song to a browsable page for display:
In the present application, this step may be added so as to publish the musical work created by the user to a database accessible to the public or to authorized users. Specifically, a publishing control may be provided in a graphical user interface of the computer program product of the present application, and touching it triggers the corresponding publication submission instruction. The graphical user interface may be the composition interface, for example its function selection area, or any other interface not mentioned here, as long as it is accessible to the client user.
In response to the publication submission instruction, a corresponding publication editing interface is displayed by pop-up or by switching, in which the user enters the text information to be published, typically the title and introduction of the musical work he or she has created, and then confirms submission, whereby the musical work and the edited text information are submitted to the remote server. The message sent to the remote server when the user submits the musical work generally includes the music melody with its sound effect applied, the accompaniment template or feature information from which the server can obtain it, the preset sound type and other information, and may even include the corresponding lyric text, depending on the implementation logic of the remote server.
The remote server obtains the information submitted by the user, synthesizes a playable musical work from the specific content provided, constructs a summary message body in a preset format that embeds the musical work together with the corresponding text information, possibly including the lyrics, and then publishes the summary message body to a database accessible to users.
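Purely for illustration, such a summary message body could take a form like the following; every field name here is hypothetical, since the patent only requires that the work, its text information and optionally its lyrics be packaged in a preset format.

```python
import json

# Hypothetical summary message body assembled by the remote server after synthesis.
summary_message_body = {
    "work_id": "demo-0001",                                   # identifier assigned by the server (assumed)
    "audio_url": "https://example.com/works/demo-0001.mp3",   # synthesized, playable song (assumed)
    "title": "示例歌曲",                                        # title entered in the publication editing interface
    "introduction": "A short demo piece.",
    "lyrics": ["第一句歌词", "第二句歌词"],                      # optional lyric text, one sentence per entry
}
print(json.dumps(summary_message_body, ensure_ascii=False, indent=2))
```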
The remote server may provide a browsable interface through the computer program product of the present application, through which a user can load the summary message bodies produced by the remote server. After obtaining a summary message body, the computer program product constructs a corresponding control on the browsable interface, embeds a player in that control for automatically playing the song packaged in the summary message body, and displays in the control the lyric text and other text information contained in the summary message body. The control may also respond to the user's touch by entering a detail page, which the skilled person can implement flexibly.
This embodiment allows a user to publish his or her work to the public network for other users to access, so that the created musical works can be browsed and shared. This can activate the user traffic of the computer program product realized by the present application, improve the operating efficiency of the remote server deployed for it, deepen and expand the ecology of Internet music services, and help everyone become a music creator.
Similarly, while the user is composing the music melody, the draft information corresponding to the musical work may also be submitted to the server in response to the publication submission instruction. The draft information includes the accompaniment template corresponding to the musical work, the preset sound type, the music melody created by the user, and optionally the lyric text created by the user.
The remote server obtains the draft information submitted by the user and stores it in the user's draft box, that is, the user's personal editing library, for subsequent retrieval.
This embodiment archives and thereby protects the user's musical work and creative output, allows the user to keep improving the created work, and, by sharing the draft information, even enables multiple users to create a musical work collaboratively, further promoting sharing and interaction between users and improving the efficiency of assisted creation.
In an embodiment of the lyric creation navigation method, the method further includes the following step: step S1800, in response to a work playing instruction, playing the vocal-singing version of the musical work synthesized from the lyric text and the music melody:
After the user at the client has constructed the music melody and the lyric text by determining a series of selected notes, a musical work may further be constructed on the basis of the music melody and played.
A corresponding work playing control or work publishing control may be configured in the composition interface; after the user triggers the corresponding control, a work playing instruction is triggered, and the musical work is played in response to it.
In an improved embodiment, the selected notes of the music melody may be played in sequence with a preset instrument sound effect, thereby playing the music melody as if performed on that instrument; in this case the played audio neither carries nor is synthesized with the background music corresponding to the accompaniment template.
In another improved embodiment, the background music corresponding to the accompaniment template and the music melody may be merged into one according to the processing of the preceding embodiments, so that the played audio contains both the background music and the music melody, the two being kept rhythmically coordinated as prescribed by the melody rhythm information.
In another improved embodiment, before the musical work is played, a corresponding lyric text may be provided for the music melody, and a vocal rendition of the lyric text sung to the music melody is then synthesized from the lyric text and the music melody and played.
In a further improved embodiment, the audio containing the vocal singing of the lyric text may additionally be merged with the background music of the accompaniment template, yielding a musical work with both chords and a singing effect for playback.
In the above improved embodiments, where synthesis of the musical work is involved, the step of synthesizing the singing effect from the music melody and the lyric text, the step of mixing with the background music, and the like may be implemented with various pre-trained acoustic models known to those skilled in the art. Synthesis of the musical work is generally deployed on a remote server, with the client calling the relevant functions through an interface provided by the server; in some cases it may also be realized through a server-side page. However, as the performance of computer devices improves and transfer learning of the neural network modules used for synthesis becomes feasible, the client itself is often capable of performing the synthesis, in which case the musical work can be synthesized directly at the client.
In one embodiment, the musical work may be played at a uniformly set tempo. Since the background music corresponding to the accompaniment template is fixed, the melody rhythm information can be defined by, and displayed to the user as, a constant tempo value. Likewise, the key and time signature required by the musical work are determined by the background music and can be displayed for the user's reference.
This embodiment provides multiple forms of backing for playing the musical work, so that the user can flexibly audition the produced music, enriching the means of assisted music composition and improving its efficiency.
Referring to fig. 22, in a variant embodiment of the lyric creation navigation method of the present application, step S1800 further includes the following steps:
Step S1810, in response to a work playing instruction, acquiring the preset sound type:
In this embodiment, a playing control is provided on the composition interface, and the user can trigger a work playing instruction by touching it. As shown in fig. 23, the function selection area of the composition interface provides a preset control for the sound type to be applied to the music melody. By default this control may preset the sound type to that of an instrument such as a piano, indicating that the music melody created by the user will subsequently be played with a piano sound effect. The control likewise allows the user to select other instrument types, or a human voice type; when the human voice type is selected, it indicates that the music melody created by the user will be sung, with a vocal sound effect, according to the lyric text in the lyric cache area.
Step S1820, obtaining the music melody with the corresponding sound effect applied, according to the preset sound type:
It will be understood that for each preset sound type a corresponding sound effect can be generated, with which the music melody is played or sung. Instrument sound effect data is generally simple and can be stored at the client and invoked directly there, whereas a vocal sound effect must be produced by audio synthesis technology: it may be synthesized locally if the client supports this, otherwise the corresponding lyric text and the music melody governed by the melody rhythm information are sent to a remote server for generation and then retrieved. Either way, a music melody with the sound effect of the preset sound type applied is finally obtained.
Step S1830, playing the musical work containing the music melody:
After the music melody with the sound effect applied has been obtained, playback can begin, presenting the musical work of the present application. If the music melody has additionally been synthesized in advance with the background music of the selected accompaniment template, the musical work necessarily presents the sound-effect-applied music melody and the background music simultaneously, the two being rhythm-synchronized as defined by the melody rhythm information; otherwise the music melody with the corresponding sound effect is simply played on its own. Generally, when the preset sound type is an instrument type, no background music need be synthesized in advance, giving a purer playback; when it is the human voice type, the background music may be synthesized in as well, giving a more expressive playback. The skilled person may of course configure this flexibly. Note that whether or not the finally played content includes background music, it constitutes the musical work referred to in the present application as long as it contains the music melody created according to the present application.
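The dispatch between instrument playback and vocal singing could be organized roughly as in the sketch below; all four back-end functions are placeholders standing in for the local sound bank, the local or remote vocal synthesis and the mixing step, none of which are specified concretely by the patent.

```python
# Placeholder back-ends; in a real system these would call the local sound
# bank, the remote synthesis service and an audio mixer (all hypothetical).
def render_instrument(melody, instrument):       return f"{instrument} rendering of {melody}"
def synthesize_vocals_locally(melody, lyrics):   return f"local vocals: {lyrics} @ {melody}"
def synthesize_vocals_remotely(melody, lyrics):  return f"remote vocals: {lyrics} @ {melody}"
def mix_with_background(audio, background):      return f"{audio} + {background}"

def build_playable_work(melody, sound_type, lyric_text=None,
                        background_music=None, client_can_synthesize=False):
    """Obtain the music melody with the sound effect of the preset sound
    type applied, optionally mixed with the accompaniment background music."""
    if sound_type == "voice":
        synth = synthesize_vocals_locally if client_can_synthesize else synthesize_vocals_remotely
        audio = synth(melody, lyric_text)
        if background_music is not None:   # mixing in the backing is typical for vocals
            audio = mix_with_background(audio, background_music)
    else:
        audio = render_instrument(melody, instrument=sound_type)
    return audio

print(build_playable_work([60, 64, 67], "piano"))
print(build_playable_work([60, 64, 67], "voice", "la la la", background_music="backing"))
```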
This embodiment allows the music melody created by the user to be matched with a corresponding sound type, so that the corresponding sound effect can conveniently be generated and applied to the music melody, forming the musical work the user expects and providing rich means of assisted music composition.
Referring to fig. 24, in an embodiment of the lyric creation navigation method of the present application, step S1820, obtaining the music melody with the corresponding sound effect applied according to the preset sound type, includes the following steps:
Step S1821, determining that the preset sound type is the human voice type, representing singing of the lyrics by a human voice:
It is recognized that the preset sound type set by the user in the function selection area is the human voice type, representing that the lyrics are to be sung by a human voice, so as to start the business logic for that type.
Step S1822, obtaining the preset lyric text corresponding to the human voice type:
As described in the preceding embodiments, the lyric text of the music melody of the present application is generally stored in the lyric cache area, from which it can be retrieved directly. In some embodiments, if no corresponding lyric text exists in the lyric cache area, the user may even be allowed to import a pre-composed lyric text, which the skilled person can implement flexibly.
Step S1823, constructing, for the human voice type, the sound effect of a human voice pronouncing the lyric text, applying it to the music melody, and synthesizing for the music melody the background music corresponding to the accompaniment template, the background music being music played according to the chord information:
As mentioned above, the lyric text and the music melody governed by the melody rhythm information may be sent to the remote server according to the human voice type; the remote server calls a pre-trained acoustic model, obtains the vocal sound effect from the lyric text, and processes it into singing audio data according to the music melody, thereby applying the vocal singing sound effect to the music melody.
Further, the music melody with the sound effect applied is synthesized with the background music using a preset audio synthesis model, that is, the singing audio data is mixed with the background music. Since the two are rhythm-synchronized through the melody rhythm information, the synthesized product is a vocally coordinated musical work that can be played.
This embodiment meets the need for singing with the human voice type: according to the user's settings, the lyric text, the music melody and the background music can be synthesized, creating the corresponding song work for the user in one step, simplifying the user's song creation process and improving the efficiency of assisted song creation.
Referring to fig. 25, the musical composition synthesis method of the present application may be programmed into a computer program product and is mainly deployed in a server for operation, supporting the operation of the computer program product implemented according to the lyric creation navigation method of the present application. In an exemplary embodiment, the method includes the following steps:
Step S2100, in response to a music synthesis instruction submitted by an originating user, determining the draft information of the originating user, the draft information including the accompaniment template specified in the instruction, the preset sound type, the music melody whose selected notes were determined by the originating user, and the lyric text created by the originating user, the durations of the selected notes of the music melody being determined according to the melody rhythm information corresponding to the accompaniment template:
After an originating user has created a corresponding musical work using the lyric creation navigation method of the present application, it can be submitted to the remote server, that is, the server running the musical composition synthesis method of the present application, thereby triggering a music synthesis instruction. In one embodiment, the music synthesis instruction may also be triggered after the originating user triggers a publication submission instruction.
In response to the music synthesis instruction triggered by the originating user, the originating user's draft information is extracted from the instruction; it includes the accompaniment template specified in the instruction, the preset sound type and the music melody whose selected notes were determined by the originating user, the durations of the selected notes being determined according to the melody rhythm information corresponding to the accompaniment template. Since the accompaniment template is usually stored at the remote server, the originating user's draft information may carry only the accompaniment template's feature identifier.
In other optional embodiments, the draft information may further include the lyric text submitted by the originating user, which may also have been generated automatically for the originating user by the computer program product.
Step S2200, storing the draft information into the originating user's personal editing library for subsequent retrieval:
The server creates in advance a corresponding personal editing library, i.e. a draft box, for each user, so that the draft information can be stored in the personal editing library corresponding to the originating user and retrieved later.
Step S2300, synthesizing the corresponding sound effect into the music melody according to the preset sound type:
The technology for synthesizing the musical work is deployed in the server of this method, so the server is responsible for preparing, according to the preset sound type specified in the draft information, the sound effect data of the played or sung version of the music melody, and then synthesizing that sound effect data into the music melody.
In one embodiment, when the preset sound type is the human voice type, a pre-trained acoustic model is called to synthesize the lyric text carried in the draft information into a sound effect with a predetermined timbre, which is then synthesized into the music melody. The timbre may also be designated by the user during creation in the lyric creation navigation method of the present application, with the corresponding timbre data provided by the server, so that the server finally applies the corresponding timbre data for synthesis.
Step S2400, synthesizing the background music, played according to the accompaniment chord information, with the music melody into a playable musical work, and pushing the musical work to the user:
The background music corresponding to the accompaniment template is stored at the server and is played according to the accompaniment chord information, so that it has a rhythm-synchronized relationship with the music melody submitted by the user. Once the corresponding sound effect has been synthesized into the user's music melody, the melody is further synthesized with the background music, and the corresponding musical work is obtained at the server side and can then be pushed to the originating user for playback.
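The server-side flow of steps S2100 to S2400 can be summarized in a sketch such as the one below; every function name, field name and the in-memory draft store are assumptions standing in for the server's actual draft box, acoustic model and mixer, which the patent does not specify.

```python
# Hypothetical end-to-end sketch of the server-side synthesis pipeline.

DRAFT_BOXES = {}   # personal editing libraries, keyed by user id (assumed store)

def handle_music_synthesis(user_id, instruction):
    # S2100: determine the originating user's draft information from the instruction.
    draft = {
        "accompaniment_template": instruction["accompaniment_template"],
        "sound_type": instruction["sound_type"],
        "melody": instruction["melody"],          # selected notes with durations
        "lyrics": instruction.get("lyrics"),
    }
    # S2200: store the draft information in the user's personal editing library.
    DRAFT_BOXES.setdefault(user_id, []).append(draft)
    # S2300: synthesize the corresponding sound effect into the music melody.
    if draft["sound_type"] == "voice":
        melody_audio = f"vocals({draft['lyrics']}, {draft['melody']})"   # acoustic model (placeholder)
    else:
        melody_audio = f"{draft['sound_type']}({draft['melody']})"       # instrument rendering (placeholder)
    # S2400: mix with the background music played from the accompaniment chord
    # information and push the playable work back to the user.
    background = f"background({draft['accompaniment_template']})"
    return {"user_id": user_id, "work": f"mix({melody_audio}, {background})"}

print(handle_music_synthesis("u1", {
    "accompaniment_template": "tpl-01", "sound_type": "voice",
    "melody": [(60, 0.5), (64, 0.5), (67, 1.0)], "lyrics": "la la la",
}))
```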
This exemplary embodiment provides, at the server side, the synthesis support and archiving support for the musical works created by originating users, so that an originating user can obtain the corresponding musical work simply by customizing the music melody, improving the efficiency of assisted music creation.
Referring to fig. 26, in an optional embodiment, the musical composition synthesis method of the present application can realize collaboration among multiple users, allowing them to complete the creation of a musical work jointly; the method further includes the following steps:
Step S2500, in response to an authorized access instruction, pushing the draft information to an authorized user authorized by the originating user:
The originating user can share the draft information with an authorized user, who is thereby granted the right to access it; when the authorized user accesses the draft information, an authorized access instruction is triggered.
In response to the authorized access instruction, after the server passes authentication, the draft information can be pushed to the authorized user, who may then edit it, including modifying the music melody and/or the lyric text, within the scope authorized by the originating user.
Step S2600, receiving an updated version of the draft information submitted by the authorized user to replace its original version, regenerating the playable musical work according to the updated version, and pushing the musical work to the originating user:
After the authorized user finishes editing and modifying the music melody and/or lyric text, an updated version of the draft information is formed and submitted to the server. On receiving it, the server replaces the corresponding original version in the originating user's draft box, then regenerates the playable musical work according to the updated version and pushes it to the originating user.
This embodiment further provides a technical path for collaborative creation of musical works among multiple users, so that several users can edit the same musical work collaboratively and complete it together. Professional and non-professional users can thus cooperate on song creation, which further activates exchanges among user groups and user interaction on the music platform, improves the operating efficiency of the associated server cluster, and is more conducive to developing the Internet music ecosystem.
Referring to fig. 27, the lyric creation navigation apparatus provided by the present application, functionally deployed in accordance with the lyric creation navigation method of the present application, includes a melody loading module, an interface construction module, a lyric input module and a lyric updating module. The melody loading module is used for acquiring, in response to a user's word-filling instruction, the music melody pre-generated in the musical work of the composition interface; the interface construction module is used for displaying a word-filling interface, displaying in it an editing area for receiving the lyric text corresponding to the music melody, and displaying the corresponding lyric prompt information in association with the editing area; the lyric input module is used for acquiring, in response to a word-filling confirmation instruction, the lyric text entered by the user in the editing area and storing it correspondingly in the lyric cache area; and the lyric updating module is used for synchronously updating, in response to a storage event of the lyric cache area, the characters in the note zones of the corresponding selected notes in the composition interface with the lyric text.
In a further embodiment, the melody loading module includes: a composition monitoring submodule for monitoring the music melody pre-generated in the musical work of the composition interface; and a progress detection submodule for detecting whether the number of selected notes of the music melody generated in the composition interface reaches the number required by at least one lyric single sentence marked by the preset melody rhythm information of the music melody, activating the word-filling control used to trigger the user's word-filling instruction if it does, and keeping the word-filling control inactive otherwise.
In a further embodiment, the interface construction module includes: a trigger display submodule for triggering display of the word-filling interface in response to the user's word-filling instruction; a sentence division submodule for dividing the lyrics into a plurality of lyric single sentences according to the sentence division information in the melody rhythm information corresponding to the music melody and establishing the correspondence between the lyric single sentences and the melody single sentences of the music melody; a single-sentence loading submodule for providing a corresponding editing area in the word-filling interface for the display of each lyric single sentence and initially displaying in it the characters of the corresponding lyric single sentence in the lyric cache area; a word count constraint submodule for determining, from the melody single sentence, the maximum word count information of the lyrics to be entered for the corresponding lyric single sentence, so as to constrain the number of words entered in the corresponding editing area; and a navigation prompt submodule for displaying, in association with each editing area, the maximum word count information and the number of words entered.
In a further embodiment, the word count constraint submodule includes: a word count determination secondary module for determining the total duration formed by the durations of the selected notes of the melody single sentence; a word count adjustment secondary module for taking the quotient of the total duration divided by a preset unit duration as the maximum word count information of the lyrics to be entered for the corresponding lyric single sentence; and a text box constraint secondary module for configuring the maximum word count information as the maximum input word count of the text box of the corresponding editing area.
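Assuming the durations are expressed in beats and the preset unit duration is the duration allotted to one sung character (an interpretation not fixed by the patent), the word count constraint could be computed as in this small sketch.

```python
import math

def max_word_count(note_durations, unit_duration=0.5):
    """Maximum number of lyric characters for one melody single sentence:
    the quotient of the total duration of its selected notes divided by a
    preset unit duration (assumed here to be beats per character)."""
    total = sum(note_durations)
    return math.floor(total / unit_duration)

# Example: a melody single sentence of durations 0.5 + 0.5 + 1 + 1 = 3 beats,
# at 0.5 beats per character, allows at most 6 characters.
print(max_word_count([0.5, 0.5, 1.0, 1.0]))   # 6
```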
In a further embodiment, the lyric input module includes: a word-filling confirmation submodule for acquiring, in response to the word-filling confirmation instruction, the lyric text entered for each lyric single sentence in each editing area one by one, and replacing the corresponding lyric single sentence in the lyric cache area accordingly to effect storage; and a relation establishment secondary module for adaptively establishing the rhythm synchronization relation between each lyric single sentence and the selected notes of the corresponding melody single sentence, so that each character of the lyric single sentence is rhythm-synchronized with a selected note of the melody single sentence.
In a further embodiment, the lyric updating module includes: an update acquisition submodule for acquiring the updated lyric text in response to the storage event of the lyric cache area; and a character replacement submodule for correspondingly updating the characters in the note zones of the corresponding selected notes in the composition interface according to the updated lyric text and the rhythm synchronization relation of the music melody.
In an extended embodiment, the lyric creation navigation apparatus further includes: an intelligent reference module for entering an intelligent search interface in response to an intelligent reference instruction triggered on a single-sentence text of the lyric text; an intelligent association module for displaying, in response to a keyword entered in the intelligent search interface, one or more recommended texts matching the keyword; and a recommendation updating module for replacing the single-sentence text with the selected recommended text in response to a selection instruction for one of the recommended texts.
In an extended embodiment, the lyric creation navigation apparatus further includes: an intelligent acquisition module for acquiring, in response to the input of a preceding lyric text, one or more intelligently suggested texts for the following lyric text; an association display module for displaying the intelligently suggested texts on the word-filling interface; and an association confirmation module for taking, in response to a selection operation on one of the intelligently suggested texts, the selected text as the following lyric text.
In a preferred embodiment, a text box is provided in the editing area for receiving input of the lyric text, and the note zone of each selected note is loaded with a lyric editing control for receiving input of the lyric text.
In an extended embodiment, the lyric creation navigation apparatus further includes: an individual editing module for replacing, in response to an editing event on the characters in the lyric editing control of a selected note's note zone, the corresponding characters of the lyric text in the lyric cache area with the edited characters according to the word-position correspondence; and an editing synchronization module for refreshing, in response to a content update event of the lyric cache area, the characters in the lyric editing controls corresponding to the selected notes of the music melody in the composition interface with the updated lyric text.
In an extended embodiment, the lyric creation navigation apparatus further includes: an automatic word-filling module for automatically completing the lyric text according to the selected notes of the music melody, in response to an automatic word-filling instruction triggered by a control in the word-filling interface or by a vibration sensor of the local device.
In an extended embodiment, the lyric creation navigation apparatus further includes: an interface display module for displaying a composition interface which presents a list of note zones arranged along a rhythm dimension and a scale dimension, each note zone corresponding, at a given time indicated along the rhythm dimension, to one note along the scale dimension; an interface formatting module for formatting the occupied width of the corresponding note zone on the composition interface according to the rhythm corresponding to each pending note in the preset rhythm information; a melody acquisition module for advancing, in sequence, the acquisition of the selected note corresponding to each pending note of the music melody to be acquired, and displaying, for the pending note of the current order, a corresponding note prompt area in the composition interface that lists for selection the note zones of the selectable notes determined in relation to the previously ordered selected notes; and a melody construction module for constructing the music melody comprising the selected notes determined in sequence.
In an extended embodiment, the lyric creation navigation apparatus further includes: a song synthesis module for playing, in response to a work playing instruction, the vocal-singing version of the musical work synthesized from the lyric text and the music melody.
Referring to fig. 28, the musical composition synthesis apparatus provided by the present application, functionally deployed in accordance with the musical composition synthesis method of the present application, includes a draft acquisition module, a draft storage module, a sound effect synthesis module and a music synthesis module. The draft acquisition module is used for determining, in response to a music synthesis instruction triggered and submitted by an originating user, the originating user's draft information, which includes the accompaniment template specified in the instruction, the preset sound type, the music melody whose selected notes were determined by the originating user, and the lyric text created by the originating user, the durations of the selected notes of the music melody being determined according to the melody rhythm information corresponding to the accompaniment template; the draft storage module is used for storing the draft information into the originating user's personal editing library for subsequent retrieval; the sound effect synthesis module is used for synthesizing the corresponding sound effect into the music melody according to the preset sound type, so that when the preset sound type is the human voice type, the sound effect of singing the lyric text to the music melody is synthesized; and the music synthesis module is used for synthesizing the background music, played according to the accompaniment chord information, with the music melody into a playable musical work and pushing the musical work to the user.
In an extended embodiment, the apparatus further includes: an authorized access module for pushing, in response to an authorized access instruction, the draft information to an authorized user authorized by the originating user; and a draft replacement module for receiving an updated version of the draft information submitted by the authorized user to replace its original version, regenerating the playable musical work according to the updated version, and pushing the musical work to the originating user.
In a preferred embodiment, the updated version of the draft information includes lyric text corresponding to the music melody.
In a preferred embodiment, in the sound effect synthesis module, when the preset sound type is the human voice type, a pre-trained acoustic model is called to synthesize the lyric text carried in the draft information into a sound effect of a predetermined timbre, which is then synthesized into the music melody.
In order to solve the above technical problems, an embodiment of the present application further provides a computer device. Fig. 29 schematically shows the internal structure of the computer device. The computer device includes a processor, a computer-readable storage medium, a memory and a network interface connected by a system bus. The computer-readable storage medium stores an operating system, a database and computer-readable instructions; the database may store sequences of control information, and the computer-readable instructions, when executed by the processor, cause the processor to implement a lyric creation navigation method or a musical composition synthesis method. The processor provides the computing and control capability supporting the operation of the whole computer device. The memory may store computer-readable instructions which, when executed by the processor, cause the processor to perform the lyric creation navigation method or the musical composition synthesis method of the present application. The network interface is used for connecting to and communicating with terminals. Those skilled in the art will appreciate that the structure shown in fig. 29 is merely a block diagram of part of the structure relevant to the solution of the present application and does not limit the computer devices to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange components differently.
In this embodiment, the processor executes the specific functions of each module and submodule in fig. 27 and fig. 28, and the memory stores the program code and data required to execute them. The network interface is used for data transmission with a user terminal or a server. The memory in this embodiment stores the program code and data required to execute all modules and submodules of the lyric creation navigation apparatus or the musical composition synthesis apparatus of the present application, and the server can call them to execute the functions of all the submodules.
The present application further provides a storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the lyric creation navigation method or musical composition synthesis method of any embodiment of the present application.
The present application also provides a computer program product comprising computer programs/instructions which, when executed by one or more processors, implement the steps of the method as described in any of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments of the present application can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when the computer program is executed, the processes of the embodiments of the methods can be included. The storage medium may be a computer-readable storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
In conclusion, the present application can efficiently guide the user in creating lyrics to form musical works, enriching the means of assisted music creation and improving its efficiency.
Those skilled in the art will appreciate that the various operations, methods and steps, measures and schemes in the processes discussed in this application can be interchanged, modified, combined or deleted. Other steps, measures or schemes in the various operations, methods or processes discussed in this application, as well as steps, measures and schemes in the prior art that contain the operations, methods and processes disclosed in this application, can likewise be interchanged, modified, rearranged, decomposed, combined or deleted.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principle of the present application, and such improvements and refinements shall also fall within the scope of protection of the present application.

Claims (17)

1. A lyric creation navigation method is characterized by comprising the following steps:
responding to a word filling instruction of a user, and acquiring a music melody pre-generated in a music composition of a composition interface;
displaying a word filling interface, displaying an editing area for receiving a lyric text corresponding to the music melody in the word filling interface, and displaying lyric prompt information corresponding to the editing area in a correlated manner;
responding to a word filling confirmation instruction, acquiring a lyric text input by a user in the editing area, and correspondingly storing the lyric text in a lyric cache area;
and responding to a storage event of the lyric cache area, and synchronously updating the characters in the note zone where each corresponding selected note is located in the composition interface with the lyric text.
2. The lyric creation navigation method of claim 1, wherein the step of acquiring a music melody pre-generated in a musical composition of a composition interface in response to a user word-filling instruction comprises the following steps:
monitoring a music melody pre-generated in a music composition of a composition interface;
and detecting whether the number of selected notes of the music melody generated in the composition interface reaches the number required by at least one lyric single sentence marked by preset melody rhythm information of the music melody, and if so, activating a word-filling control used for triggering a word-filling instruction of the user, and otherwise keeping the word-filling control in an inactivated state.
3. The lyric creation navigation method of claim 1, wherein displaying a word-filling interface comprises the steps of:
triggering and displaying a word filling interface in response to a word filling instruction of a user;
correspondingly dividing a plurality of lyric single sentences according to sentence dividing information in the melody rhythm information corresponding to the music melody, and establishing a corresponding relation between the lyric single sentences and the melody single sentences in the music melody;
providing a corresponding editing area in the word filling interface to adapt to the display of each lyric single sentence, and initializing and displaying characters of the corresponding lyric single sentence in the lyric cache area in the editing area;
determining the maximum word number information of the lyrics to be input of the corresponding lyric single sentence according to the melody single sentence, wherein the maximum word number information is used for restricting the input word number of the corresponding editing area;
displaying the maximum word count information and the word count information entered therein for each edit section association.
4. The lyric creation navigation method of claim 3, wherein determining, according to the melody single sentence, the maximum word count information of the lyrics to be input for the corresponding lyric single sentence, for restricting the input word count of the corresponding editing area, comprises the following steps:
determining a total duration formed by the durations of the selected notes of the melody single sentence;
taking the quotient of the total duration divided by a preset unit duration as the maximum word count information of the lyrics to be input for the corresponding lyric single sentence;
configuring the maximum word count information as the maximum input word count of the text box of the corresponding editing area.
5. The lyric creation navigation method of claim 3, wherein the lyric text inputted by the user in the editing area is acquired in response to a word-filling confirmation command and is correspondingly stored in the lyric cache area, comprising the steps of:
responding to the word filling confirmation instruction, acquiring the lyric texts input by each lyric single sentence in each editing area one by one, and correspondingly replacing the corresponding lyric single sentences in the lyric cache area to realize storage;
and adaptively establishing a rhythm synchronization relation between the lyric single sentence and the selected notes in the corresponding melody single sentence, so that each single character in the lyric single sentence is rhythm-synchronized with a selected note in the melody single sentence.
6. The lyric creation navigation method of claim 4, wherein in response to a stored event in the lyric buffer, synchronously updating the text in the note zone in which each corresponding selected note is located in the composition interface with the lyric text, comprising the steps of:
responding to a storage event of the lyric cache region, and acquiring an updated lyric text;
and correspondingly updating the characters in the note zone of each corresponding selected note in the composition interface according to the updated lyric text and the rhythm synchronization relation of the music melody.
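A minimal sketch of this update step, assuming the rhythm synchronization relation of one sentence is kept as (note zone id, character index) pairs and that the interface maps note zone ids to displayed characters; both structures are assumptions for illustration.

```python
# Assumed structure: the synchronization relation of one sentence is a list of
# (note_zone_id, character_index) pairs, and the interface keeps a mapping
# from note zone id to the character displayed there.

def update_note_zones(note_zone_texts, sync_relation, updated_sentence):
    """Write each character of the updated lyric single sentence into the note
    zone of the selected note it is synchronized with."""
    for note_zone_id, char_index in sync_relation:
        if char_index < len(updated_sentence):
            note_zone_texts[note_zone_id] = updated_sentence[char_index]
    return note_zone_texts

zones = {"z1": "", "z2": "", "z3": ""}
sync = [("z1", 0), ("z2", 1), ("z3", 2)]
print(update_note_zones(zones, sync, "夜未央"))  # {'z1': '夜', 'z2': '未', 'z3': '央'}
```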
7. The lyric creation navigation method of claim 1, further comprising, after displaying the word-filling interface, the steps of:
in response to an intelligent reference instruction triggered on a single sentence text in the lyric text, entering an intelligent search interface;
in response to a keyword input in the intelligent search interface, displaying one or more recommended texts matching the keyword;
and in response to a selection instruction for one of the recommended texts, replacing the single sentence text with the selected recommended text.
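The intelligent search of claim 7 is not tied to any particular retrieval technique; a toy stand-in that filters a small candidate corpus by keyword containment looks like the following, where the corpus and scoring rule are invented for illustration.

```python
# Toy retrieval only: rank an assumed candidate corpus by keyword containment.

def recommend_texts(keyword, corpus, top_k=3):
    """Return up to top_k candidate texts that contain the keyword."""
    return [text for text in corpus if keyword in text][:top_k]

corpus = ["月光洒在海面上", "晚风吹过旧街角", "海浪轻拍着沙滩"]
print(recommend_texts("海", corpus))  # ['月光洒在海面上', '海浪轻拍着沙滩']
```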
8. The lyric creation navigation method of claim 1, further comprising, after displaying the word-filling interface, the steps of:
in response to input of a prior lyric text, acquiring one or more intelligent automatic association texts as candidates for the subsequent lyric text;
displaying the intelligent automatic association texts in the word-filling interface;
and in response to a selection operation on one of the intelligent automatic association texts, using the selected text as the subsequent lyric text.
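The association step of claim 8 could be backed by anything from a rhyme table to a language model; the sketch below uses a crude last-character match over an assumed candidate pool, purely to show the input and output shape, not the patent's method.

```python
# Crude continuation suggester over an assumed candidate pool: prefer lines
# that end on the same character as the prior line, as a stand-in for rhyme.

def associate_next_lines(prior_line, candidates, top_k=3):
    """Return up to top_k candidate continuations for the prior lyric line."""
    last_char = prior_line[-1]
    preferred = [c for c in candidates if c.endswith(last_char)]
    return (preferred or candidates)[:top_k]

candidates = ["心事落在窗前", "旧梦停在街角", "晚风吹过街角"]
print(associate_next_lines("你走过旧街角", candidates))
# ['旧梦停在街角', '晚风吹过街角']
```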
9. The lyric creation navigation method of any one of claims 1 to 8, wherein a text box for receiving input of lyric text is provided in the editing area, and a lyric editing control for receiving input of lyric text is loaded at the note zone where the selected note is located.
10. The lyric creation navigation method of any one of claims 1 to 8, further comprising the steps of:
in response to an editing event acting on the characters in the lyric editing control of the note zone of a selected note, replacing the corresponding edited characters so as to update the corresponding lyric text in the lyric cache area according to the word-count correspondence;
and in response to a content update event of the lyric cache area, refreshing, with the updated lyric text, the characters in the lyric editing controls corresponding to the selected notes of the music melody in the composition interface.
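The two-way synchronization of claim 10 can be pictured with a per-sentence lyric cache and note-zone controls addressed by (sentence index, character index); the model below is an assumption for illustration, not taken from the patent.

```python
# Assumed model: the lyric cache keeps one string per lyric single sentence,
# and each note-zone editing control is addressed by (sentence, character) index.

def on_note_zone_edited(lyric_cache, sentence_index, char_index, new_char):
    """A character edited directly in a note zone replaces the character at the
    same position of the cached lyric single sentence."""
    sentence = lyric_cache[sentence_index]
    lyric_cache[sentence_index] = sentence[:char_index] + new_char + sentence[char_index + 1:]

def refresh_note_zone_controls(lyric_cache, sentence_index):
    """On a cache update, return one character per note-zone control of that sentence."""
    return list(lyric_cache[sentence_index])

lyric_cache = ["晚风轻轻吹"]
on_note_zone_edited(lyric_cache, 0, 1, "雨")
print(lyric_cache[0])                              # 晚雨轻轻吹
print(refresh_note_zone_controls(lyric_cache, 0))  # ['晚', '雨', '轻', '轻', '吹']
```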
11. The lyric creation navigation method of claim 1, further comprising, after displaying the word-filling interface, the step of:
in response to an automatic word-filling instruction triggered by a control in the word-filling interface or by a vibration sensor of the local device, automatically completing the lyric text according to the selected notes of the music melody.
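A toy version of the automatic word filling in claim 11, assuming the trigger (interface control or shake gesture) has already fired, that each melody single sentence needs one character per selected note, and that a small candidate pool of lines is available; the pool, the length rule, and the fallback filler are all illustrative assumptions.

```python
# Assumed setup: the trigger (interface control or shake gesture) has already
# fired; each melody single sentence needs one character per selected note,
# and a small candidate pool of lines is available.

def auto_fill(notes_per_sentence, current_lines, candidate_pool):
    """Fill every still-empty editing area with a candidate line of the
    required character count; keep text the user already typed."""
    filled = []
    for count, line in zip(notes_per_sentence, current_lines):
        if line:
            filled.append(line)
            continue
        match = next((c for c in candidate_pool if len(c) == count), "啦" * count)
        filled.append(match)  # fall back to a placeholder syllable if nothing fits
    return filled

print(auto_fill([5, 4], ["", "吹过街角"], ["晚风轻轻吹", "夜色正好"]))
# ['晚风轻轻吹', '吹过街角']
```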
12. The lyric creation navigation method of claim 1, wherein, before acquiring the music melody pre-generated in the musical composition of the composition interface in response to the user's word-filling instruction, the method comprises the steps of:
displaying the composition interface, wherein the composition interface displays an array of note zones with rhythm and scale as its dimensions, each note zone corresponding, at a given time indicated by the rhythm dimension, to a note of the scale dimension;
formatting the width occupied by the corresponding note zones on the composition interface according to the rhythm corresponding to each undetermined note in preset rhythm information;
sequentially acquiring the selected note corresponding to each undetermined note of the music melody to be acquired, and, for the undetermined note currently in sequence, displaying a corresponding note prompt area in the composition interface which lists for selection the note zones of the selectable notes obtained by variation relative to the previously determined selected notes;
and constructing the music melody from the selected notes determined in sequence.
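The composition interface of claim 12 suggests a grid of note zones indexed by time and pitch, with each column's width scaled from the note's duration. The data model below is an assumed sketch of that layout; every name and the pixel scaling are illustrative, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class NoteZone:
    time_index: int   # position along the rhythm dimension
    pitch: str        # note of the scale dimension, e.g. "C4"
    width_px: int     # occupied width, scaled from the note's duration

def layout_note_zones(rhythm_durations, scale, px_per_beat=80):
    """Lay out one column of selectable note zones per undetermined note, with
    the column width proportional to that note's duration."""
    grid = []
    for t, duration in enumerate(rhythm_durations):
        width = int(duration * px_per_beat)
        grid.append([NoteZone(t, pitch, width) for pitch in scale])
    return grid

grid = layout_note_zones([1.0, 0.5, 0.5, 2.0], ["C4", "D4", "E4", "G4", "A4"])
print(len(grid), len(grid[0]), grid[3][0].width_px)  # 4 5 160
```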
13. The lyric creation navigation method of claim 1, further comprising the step of:
in response to a work playing instruction, playing the musical work of the vocal singing version synthesized from the lyric text and the music melody.
14. A lyric creation navigation device, comprising:
a melody loading module, configured to acquire, in response to a user's word-filling instruction, the music melody pre-generated in the musical composition of the composition interface;
an interface construction module, configured to display a word-filling interface, display in the word-filling interface an editing area for receiving lyric text corresponding to the music melody, and display, in association with the editing area, corresponding lyric prompt information;
a lyric input module, configured to acquire, in response to a word-filling confirmation instruction, the lyric text input by the user in the editing area and correspondingly store it in a lyric cache area;
and a lyric updating module, configured to synchronously update, in response to a storage event of the lyric cache area, the characters in the note zone in which each corresponding selected note is located in the composition interface with the lyric text.
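For orientation, the four modules of claim 14 might be arranged as in the following skeleton; the claim defines them functionally, so the class shape, method names, and dictionary keys here are assumptions only.

```python
# Functional skeleton only; the claim names the modules but not any concrete API.

class LyricCreationNavigationDevice:
    def __init__(self):
        self.lyric_cache = []  # one string per lyric single sentence

    def melody_loading_module(self, composition):
        """Acquire the pre-generated music melody in response to a word-filling instruction."""
        return composition["melody"]

    def interface_construction_module(self, melody):
        """Display the word-filling interface: one editing area per lyric single sentence."""
        return [{"sentence_index": i, "text": ""} for i in range(len(melody["sentences"]))]

    def lyric_input_module(self, editing_areas):
        """On a word-filling confirmation, store the entered lyric text in the cache."""
        self.lyric_cache = [area["text"] for area in editing_areas]

    def lyric_updating_module(self, composition):
        """On a cache storage event, push the lyric text back into the note zones."""
        composition["note_zone_texts"] = list("".join(self.lyric_cache))
```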
15. A computer device comprising a central processor and a memory, characterized in that the central processor is adapted to invoke execution of a computer program stored in the memory to perform the steps of the method according to any one of claims 1 to 14.
16. A computer-readable storage medium, characterized in that it stores, in the form of computer-readable instructions, a computer program implemented according to the method of any one of claims 1 to 14, which, when invoked by a computer, performs the steps comprised by the corresponding method.
17. A computer program product comprising computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the steps of the method as claimed in any one of claims 1 to 14.
CN202111016989.1A 2021-06-29 2021-08-31 Lyric creation navigation method and device, equipment, medium and product thereof Pending CN113539217A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021107299885 2021-06-29
CN202110729988 2021-06-29

Publications (1)

Publication Number Publication Date
CN113539217A (en) 2021-10-22

Family

ID=78122891

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202111016988.7A Pending CN113539216A (en) 2021-06-29 2021-08-31 Melody creation navigation method and device, equipment, medium and product thereof
CN202111016989.1A Pending CN113539217A (en) 2021-06-29 2021-08-31 Lyric creation navigation method and device, equipment, medium and product thereof
CN202111015092.7A Active CN113611268B (en) 2021-06-29 2021-08-31 Musical composition generating and synthesizing method and device, equipment, medium and product thereof

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202111016988.7A Pending CN113539216A (en) 2021-06-29 2021-08-31 Melody creation navigation method and device, equipment, medium and product thereof

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111015092.7A Active CN113611268B (en) 2021-06-29 2021-08-31 Musical composition generating and synthesizing method and device, equipment, medium and product thereof

Country Status (1)

Country Link
CN (3) CN113539216A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200907874A (en) * 2007-08-14 2009-02-16 Deansoft Co Ltd Karaoke system providing user with self-learning function
CN105513607A (en) * 2015-11-25 2016-04-20 网易传媒科技(北京)有限公司 Method and apparatus for music composition and lyric writing
CN108039184A (en) * 2017-12-28 2018-05-15 腾讯音乐娱乐科技(深圳)有限公司 Lyrics adding method and device
CN108492817A (en) * 2018-02-11 2018-09-04 北京光年无限科技有限公司 A kind of song data processing method and performance interactive system based on virtual idol
CN109741724A (en) * 2018-12-27 2019-05-10 歌尔股份有限公司 Make the method, apparatus and intelligent sound of song
US20200413003A1 (en) * 2018-12-25 2020-12-31 Beijing Microlive Vision Technology Co., Ltd Method and device for processing multimedia information, electronic equipment and computer-readable storage medium
CN112632327A (en) * 2020-12-30 2021-04-09 北京达佳互联信息技术有限公司 Lyric processing method, device, electronic equipment and computer readable storage medium
CN112632906A (en) * 2020-12-30 2021-04-09 北京达佳互联信息技术有限公司 Lyric generation method, device, electronic equipment and computer readable storage medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3533974B2 (en) * 1998-11-25 2004-06-07 ヤマハ株式会社 Song data creation device and computer-readable recording medium recording song data creation program
JP3580210B2 (en) * 2000-02-21 2004-10-20 ヤマハ株式会社 Mobile phone with composition function
JP3709821B2 (en) * 2001-10-02 2005-10-26 ヤマハ株式会社 Music information editing apparatus and music information editing program
WO2006112585A1 (en) * 2005-04-18 2006-10-26 Lg Electronics Inc. Operating method of music composing device
KR100658869B1 (en) * 2005-12-21 2006-12-15 엘지전자 주식회사 Music generating device and operating method thereof
CN101719366A (en) * 2009-12-16 2010-06-02 德恩资讯股份有限公司 Method for editing and displaying musical notes and music marks and accompanying video system
CN101800046B (en) * 2010-01-11 2014-08-20 北京中星微电子有限公司 Method and device for generating MIDI music according to notes
US9620092B2 (en) * 2012-12-21 2017-04-11 The Hong Kong University Of Science And Technology Composition using correlation between melody and lyrics
KR20170137526A (en) * 2016-06-03 2017-12-13 서나희 Composing method using composition program
EP3631789A4 (en) * 2017-05-22 2021-07-07 Zya Inc. System and method for automatically generating musical output
CN107644630B (en) * 2017-09-28 2020-07-28 北京灵动音科技有限公司 Melody generation method and device based on neural network and storage medium
GB201802440D0 (en) * 2018-02-14 2018-03-28 Jukedeck Ltd A method of generating music data
CN113012665B (en) * 2021-02-19 2024-04-19 腾讯音乐娱乐科技(深圳)有限公司 Music generation method and training method of music generation model

Also Published As

Publication number Publication date
CN113611268B (en) 2024-04-16
CN113539216A (en) 2021-10-22
CN113611268A (en) 2021-11-05

Similar Documents

Publication Publication Date Title
US11651757B2 (en) Automated music composition and generation system driven by lyrical input
JP7041270B2 (en) Modular automatic music production server
JP5895740B2 (en) Apparatus and program for performing singing synthesis
CN104050961A (en) Voice synthesis device, voice synthesis method, and recording medium having a voice synthesis program stored thereon
CN107430849A (en) Sound control apparatus, audio control method and sound control program
CN112164379A (en) Audio file generation method, device, equipment and computer readable storage medium
Berliner The art of Mbira: Musical inheritance and legacy
CN108922505B (en) Information processing method and device
CN113539217A (en) Lyric creation navigation method and device, equipment, medium and product thereof
JP3843953B2 (en) Singing composition data input program and singing composition data input device
May Going Up to God by Musical Process: A Structural Analysis of David Lang's the little match girl passion (2007)
Potter ‘New Chaconnes for Old?’Steve Reich’s Sketches for Variations for Winds, Strings and Keyboards, with Some Thoughts on Their Significance for the Analysis of the Composer’s Harmonic Language in the Late 1970s
CN114495873A (en) Song recomposition method and device, equipment, medium and product thereof
KR20240021753A (en) System and method for automatically generating musical pieces having an audibly correct form
Puckette et al. Between the Tracks: Musicians on Selected Electronic Music
Osterkamp Leonard Bernstein's Piano Music: A Comparative Study of Selected Works
Palmese John Adams Composing Through Others: Modeling, Innovation, and Recomposition
Bruce Change of the" Guard": Charlie Rouse, Steve Lacy, and the Music of Thelonious Monk

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination