CN113077770A - Buddha music generation method, device, equipment and storage medium - Google Patents
Buddha music generation method, device, equipment and storage medium
- Publication number
- CN113077770A CN113077770A CN202110301852.4A CN202110301852A CN113077770A CN 113077770 A CN113077770 A CN 113077770A CN 202110301852 A CN202110301852 A CN 202110301852A CN 113077770 A CN113077770 A CN 113077770A
- Authority
- CN
- China
- Prior art keywords
- preset
- music
- buddha
- buddha music
- variable
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 48
- 238000003860 storage Methods 0.000 title claims abstract description 23
- 239000012634 fragment Substances 0.000 claims abstract description 70
- 230000033764 rhythmic process Effects 0.000 claims abstract description 53
- 238000012545 processing Methods 0.000 claims abstract description 16
- 238000009826 distribution Methods 0.000 claims description 21
- 230000008569 process Effects 0.000 claims description 10
- 238000006243 chemical reaction Methods 0.000 claims description 8
- 230000006870 function Effects 0.000 claims description 3
- 238000013507 mapping Methods 0.000 claims description 3
- 238000013473 artificial intelligence Methods 0.000 abstract 1
- 238000010586 diagram Methods 0.000 description 6
- 238000010276 construction Methods 0.000 description 5
- 238000013528 artificial neural network Methods 0.000 description 4
- 230000010354 integration Effects 0.000 description 4
- 238000005457 optimization Methods 0.000 description 3
- 238000013500 data storage Methods 0.000 description 2
- 238000000354 decomposition reaction Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005259 measurement Methods 0.000 description 2
- 230000008439 repair process Effects 0.000 description 2
- 238000012549 training Methods 0.000 description 2
- 238000012546 transfer Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000002085 persistent effect Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000012827 research and development Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000001052 transient effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/046—Forward inferencing; Production systems
- G06N5/047—Pattern matching networks; Rete networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
- G10H1/0025—Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/021—Background music, e.g. for video sequences or elevator music
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/101—Music Composition or musical creation; Tools or processes therefor
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/101—Music Composition or musical creation; Tools or processes therefor
- G10H2210/105—Composing aid, e.g. for supporting creation, edition or modification of a piece of music
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Acoustics & Sound (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Electrophonic Musical Instruments (AREA)
Abstract
The invention relates to the field of artificial intelligence, and discloses a Buddha music generation method, device, equipment and storage medium, applied to the field of intelligent education and used for generating Buddha music works that better match user expectations from a preset Buddha music generation model and Buddha music fragments, thereby improving Buddha music generation efficiency. The Buddha music generation method comprises the following steps: acquiring a Buddha music fragment to be created; calling a preset variational auto-encoder VAE to convert the Buddha music fragment to be created into a latent variable and decompose the latent variable into a pitch variable and a rhythm variable; calling a preset melody repairer Inpainter to obtain an intermediate Buddha music fragment; processing the intermediate Buddha music fragment based on a random mask-free code to generate a target Buddha music fragment, and calling a preset Connector to combine the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable; and calling a preset decoder to decode the target latent variable to generate the final Buddha music work.
Description
Technical Field
The invention relates to the field of audio conversion, and in particular to a Buddha music generation method, device, equipment and storage medium.
Background
Automatic music generation has had a great influence on research into human expressive creativity. In recent years, neural network technology has achieved good results in the field of automatic music generation. Previous related work supports various forms of music generation: some studies provide a constraint mechanism that allows users to restrict the generated results to match a composition style, others can compose an accompaniment for an existing classical melody, and so on. However, these methods all require the user's preference to be specified as a relatively complete track, which is difficult for people without composing experience.
In existing schemes, the task most relevant to music generation is music inpainting, i.e., generating a series of missing measures from the surrounding musical context to complete a composition. However, such methods hardly consider the user's preference. Alternatively, several inpainted fragments are randomly generated for the same musical context and the user selects the one they prefer, but the user's optimal music fragment cannot be generated directly from the user's preference settings.
Disclosure of Invention
The invention provides a Buddha music generation method, device, equipment and storage medium, which are used for generating Buddha music works that better match user expectations from a preset Buddha music generation model and a Buddha music fragment, reducing the difficulty for a user to participate in Buddha music creation and improving Buddha music generation efficiency.
The invention provides a Buddha music generation method in a first aspect, which comprises the following steps: obtaining a Buddha music fragment to be created, wherein the Buddha music fragment to be created comprises a first Buddha music fragment and a second Buddha music fragment, and the starting time of the second Buddha music fragment is later than the ending time of the first Buddha music fragment; calling a preset variational auto-encoder VAE, converting the Buddha music fragment to be created into a latent variable, and decomposing the latent variable into a pitch variable and a rhythm variable; calling a preset melody repairer Inpainter, predicting a corresponding Buddha music fragment based on the background of the Buddha music and the latent variable, and obtaining an intermediate Buddha music fragment; processing the intermediate Buddha music fragment based on a random mask-free code to generate a target Buddha music fragment, calling a preset Connector, and combining the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable; and calling a preset decoder to decode the target latent variable to generate the final Buddha music work.
Optionally, in a first implementation manner of the first aspect of the present invention, the invoking a preset variational auto-encoder VAE, converting the Buddha music fragment to be created into a latent variable, and decomposing the latent variable into a pitch variable and a rhythm variable includes: converting the Buddha music fragment to be created into a subsequence consisting of a pitch sequence P and a rhythm sequence R, wherein the pitch sequence P consists of the pitch types present in the Buddha music fragment to be created and the rhythm sequence R consists of the duration types present in the Buddha music fragment to be created; inputting the pitch sequence P and the rhythm sequence R into the preset variational auto-encoder VAE to generate a latent variable; and decomposing the latent variable into a pitch variable and a rhythm variable based on a preset factorized inference network.
Optionally, in a second implementation manner of the first aspect of the present invention, the invoking a preset melody repairer Inpainter and predicting a corresponding Buddha music fragment based on the background of the Buddha music and the latent variable to obtain an intermediate Buddha music fragment includes: calling the preset melody repairer Inpainter to read the latent variable; inputting the latent variable into a pitch gated recurrent unit GRU and a rhythm gated recurrent unit GRU to obtain a basic Buddha music fragment; and generating an intermediate Buddha music fragment based on the background of the Buddha music and the basic Buddha music fragment.
Optionally, in a third implementation manner of the first aspect of the present invention, the processing the intermediate Buddha music fragment based on a random mask-free code to generate a target Buddha music fragment, invoking a preset Connector, and combining the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable includes: controlling and modifying the intermediate Buddha music fragment based on a preset random mask-free code to generate a target Buddha music fragment; calling the preset Connector to read a preset Buddha music summary sketch, wherein the preset Buddha music summary sketch comprises pitch and rhythm information input by a user; and combining the target Buddha music fragment with the preset Buddha music summary sketch based on the preset Connector to generate a target latent variable and sending the target latent variable to a preset decoder.
Optionally, in a fourth implementation manner of the first aspect of the present invention, the invoking a preset decoder to decode the target latent variable to generate a final Buddha music work includes: calling the preset decoder to read the target latent variable; and decoding the target latent variable based on the preset decoder to generate the final Buddha music work.
Optionally, in a fifth implementation manner of the first aspect of the present invention, after the obtaining the Buddha music fragment to be created, wherein the Buddha music fragment to be created comprises a past Buddha music fragment and a future Buddha music fragment, and before the invoking a preset variational auto-encoder VAE, converting the Buddha music fragment to be created into a latent variable, and decomposing the latent variable into a pitch part and a rhythm part, the method further comprises: receiving a preset Buddha music summary sketch, wherein the preset Buddha music summary sketch comprises pitch and rhythm information input by a user.
Optionally, in a sixth implementation manner of the first aspect of the present invention, after the invoking a preset decoder to decode the target latent variable to generate a final Buddha music work, the method further comprises: calculating a loss function l_i(θ, φ), where θ is a parameter of the preset variational auto-encoder VAE and φ is a parameter of the preset decoder; θ refers to the mapping from x to z, and φ refers to the reconstruction from z to x; q_θ(z|x_i) is the posterior distribution of z derived from x, and p(z) is the prior distribution of z, assumed to be the Gaussian distribution N(0,1) with mean 0 and variance 1.
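The formula image itself does not survive in this text. Given the definitions of θ, φ, q_θ(z|x_i) and p(z) above, the loss is presumably the standard per-sample VAE objective (a reconstruction term plus the KL regularizer), which can be written as:

```latex
% Assumed form of the per-sample VAE loss; the patent's own formula
% image is not reproduced in this text.
l_i(\theta, \phi) =
  -\,\mathbb{E}_{z \sim q_\theta(z \mid x_i)}\!\left[ \log p_\phi(x_i \mid z) \right]
  + \mathrm{KL}\!\left( q_\theta(z \mid x_i) \,\middle\|\, p(z) \right),
\qquad p(z) = \mathcal{N}(0, 1)
```

This matches the text's description of adding a KL-divergence regularizer between the encoder's inferred distribution and the standard Gaussian prior.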
A second aspect of the present invention provides a Buddha music generation device, comprising: an acquisition module, used for obtaining a Buddha music fragment to be created, wherein the Buddha music fragment to be created comprises a first Buddha music fragment and a second Buddha music fragment, and the starting time of the second Buddha music fragment is later than the ending time of the first Buddha music fragment; a conversion module, used for calling a preset variational auto-encoder VAE, converting the Buddha music fragment to be created into a latent variable, and decomposing the latent variable into a pitch variable and a rhythm variable; a prediction module, used for calling a preset melody repairer Inpainter, predicting a corresponding Buddha music fragment based on the background of the Buddha music and the latent variable, and obtaining an intermediate Buddha music fragment; a processing module, used for processing the intermediate Buddha music fragment based on a random mask-free code to generate a target Buddha music fragment, calling a preset Connector, and combining the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable; and a decoding module, used for calling a preset decoder to decode the target latent variable to generate the final Buddha music work.
Optionally, in a first implementation manner of the second aspect of the present invention, the conversion module includes: a conversion unit, configured to convert the Buddha music fragment to be created into a subsequence consisting of a pitch sequence P and a rhythm sequence R, wherein the pitch sequence P consists of the pitch types present in the Buddha music fragment to be created and the rhythm sequence R consists of the duration types present in the Buddha music fragment to be created; a first input unit, configured to input the pitch sequence P and the rhythm sequence R into the preset variational auto-encoder VAE to generate a latent variable; and a decomposition unit, configured to decompose the latent variable into a pitch variable and a rhythm variable based on a preset factorized inference network.
Optionally, in a second implementation manner of the second aspect of the present invention, the prediction module includes: a first reading unit, configured to call the preset melody repairer Inpainter to read the latent variable; a second input unit, configured to input the latent variable into a pitch gated recurrent unit GRU and a rhythm gated recurrent unit GRU to obtain a basic Buddha music fragment; and a generating unit, configured to generate an intermediate Buddha music fragment based on the background of the Buddha music and the basic Buddha music fragment.
Optionally, in a third implementation manner of the second aspect of the present invention, the processing module includes: a modifying unit, configured to control and modify the intermediate Buddha music fragment based on a preset random mask-free code to generate a target Buddha music fragment; a second reading unit, configured to call the preset Connector to read a preset Buddha music summary sketch, wherein the preset Buddha music summary sketch comprises pitch and rhythm information input by a user; and a combining unit, configured to combine the target Buddha music fragment with the preset Buddha music summary sketch based on the preset Connector, generate a target latent variable, and send the target latent variable to a preset decoder.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the decoding module includes: a third reading unit, configured to call the preset decoder to read the target latent variable; and a decoding unit, configured to decode the target latent variable based on the preset decoder to generate the final Buddha music work.
Optionally, in a fifth implementation manner of the second aspect of the present invention, after the obtaining the Buddha music fragment to be created, wherein the Buddha music fragment to be created comprises a past Buddha music fragment and a future Buddha music fragment, and before the invoking a preset variational auto-encoder VAE, converting the Buddha music fragment to be created into a latent variable, and decomposing the latent variable into a pitch part and a rhythm part, the apparatus further includes: a receiving module, configured to receive a preset Buddha music summary sketch, wherein the preset Buddha music summary sketch comprises pitch and rhythm information input by a user.
Optionally, in a sixth implementation manner of the second aspect of the present invention, after the invoking a preset decoder to decode the target latent variable to generate a final Buddha music work, the apparatus is further configured to: calculate a loss function l_i(θ, φ), where θ is a parameter of the preset variational auto-encoder VAE and φ is a parameter of the preset decoder; θ refers to the mapping from x to z, and φ refers to the reconstruction from z to x; q_θ(z|x_i) is the posterior distribution of z derived from x, and p(z) is the prior distribution of z, assumed to be the Gaussian distribution N(0,1) with mean 0 and variance 1.
A third aspect of the present invention provides Buddha music generation equipment, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the Buddha music generation equipment to perform the Buddha music generation method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to execute the above-described Buddha music generation method.
According to the technical scheme provided by the invention, a Buddha music fragment to be created is obtained, wherein the Buddha music fragment to be created comprises a first Buddha music fragment and a second Buddha music fragment, and the starting time of the second Buddha music fragment is later than the ending time of the first Buddha music fragment; a preset variational auto-encoder VAE is called to convert the Buddha music fragment to be created into a latent variable and decompose the latent variable into a pitch variable and a rhythm variable; a preset melody repairer Inpainter is called to predict a corresponding Buddha music fragment based on the background of the Buddha music and the latent variable, obtaining an intermediate Buddha music fragment; the intermediate Buddha music fragment is processed based on a random mask-free code to generate a target Buddha music fragment, and a preset Connector is called to combine the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable; and a preset decoder is called to decode the target latent variable to generate the final Buddha music work. In the embodiment of the invention, Buddha music works that better match user expectations are generated according to the preset Buddha music generation model and the Buddha music fragment, so that the difficulty for a user to participate in Buddha music creation is reduced and Buddha music generation efficiency is improved.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a Buddha music generation method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of another embodiment of a Buddha music generation method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an embodiment of a Buddha music generation device according to an embodiment of the invention;
FIG. 4 is a schematic diagram of another embodiment of a Buddha music generation device according to an embodiment of the invention;
FIG. 5 is a schematic diagram of an embodiment of Buddha music generation equipment according to an embodiment of the invention.
Detailed Description
The embodiment of the invention provides a Buddha music generation method, device, equipment and storage medium, which are used for generating Buddha music works that better match user expectations from a preset Buddha music generation model and a Buddha music fragment, reducing the difficulty for a user to participate in Buddha music creation and improving Buddha music generation efficiency.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention is described below. Referring to fig. 1, an embodiment of the Buddha music generation method in the embodiment of the present invention includes:
101. Obtaining a Buddha music fragment to be created, wherein the Buddha music fragment to be created comprises a first Buddha music fragment and a second Buddha music fragment, and the starting time of the second Buddha music fragment is later than the ending time of the first Buddha music fragment.
The server acquires a Buddha music fragment to be created, wherein the Buddha music fragment to be created comprises a first Buddha music fragment and a second Buddha music fragment, and the starting time of the second Buddha music fragment is later than the ending time of the first Buddha music fragment. A preset time length exists between the first Buddha music fragment and the second Buddha music fragment, and this gap is used for inserting the pitch and rhythm information input by the user to generate the final Buddha music work.
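The two-fragment input described above can be sketched as a small data structure. This is an illustrative sketch only; the class and field names are invented, not taken from the patent.

```python
from dataclasses import dataclass

# Illustrative model of the "Buddha music fragment to be created":
# a first (past) fragment and a second (future) fragment, where the
# second must start after the first ends, leaving a preset-length gap
# to be filled from the user's pitch and rhythm information.

@dataclass
class Fragment:
    start: float  # seconds
    end: float    # seconds

@dataclass
class FragmentToCreate:
    first: Fragment
    second: Fragment

    def __post_init__(self) -> None:
        if self.second.start <= self.first.end:
            raise ValueError("second fragment must start after the first ends")

    @property
    def gap(self) -> float:
        """Preset time length between the two fragments."""
        return self.second.start - self.first.end

piece = FragmentToCreate(Fragment(0.0, 8.0), Fragment(12.0, 20.0))
print(piece.gap)  # 4.0
```

The validation in `__post_init__` encodes the constraint that the second fragment's starting time is later than the first fragment's ending time.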
It is to be understood that the execution subject of the present invention may be a Buddha music generation device, and may also be a terminal or a server, which is not limited herein. The embodiment of the present invention is described by taking a server as the execution subject.
102. Calling a preset variational auto-encoder VAE, converting the Buddha music fragment to be created into a latent variable, and decomposing the latent variable into a pitch variable and a rhythm variable.
The server calls a preset variational auto-encoder VAE, converts the Buddha music fragment to be created into a latent variable, and decomposes the latent variable into a pitch variable and a rhythm variable. Specifically, the server converts the Buddha music fragment to be created into a subsequence consisting of a pitch sequence P and a rhythm sequence R, wherein the pitch sequence P consists of the pitch types present in the Buddha music fragment to be created and the rhythm sequence R consists of the duration types present in the Buddha music fragment to be created; the server inputs the pitch sequence P and the rhythm sequence R into the preset variational auto-encoder VAE to generate a latent variable; and the server decomposes the latent variable into a pitch variable and a rhythm variable based on a preset factorized inference network.
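The split into a pitch sequence P and a rhythm sequence R can be illustrated concretely. This is a hypothetical sketch (the function name and note encoding are invented); durations are counted in sixteenth notes, the minimum unit mentioned in the text.

```python
# Hypothetical illustration of splitting a note list into the pitch
# sequence P and rhythm sequence R described above. Each note is a
# (pitch_name, duration_in_sixteenths) tuple.

def to_pitch_rhythm_sequences(notes):
    """Return (P, R): the pitch types and duration types of a fragment."""
    p_sequence = [pitch for pitch, _ in notes]
    r_sequence = [duration for _, duration in notes]
    return p_sequence, r_sequence

# Example fragment using the pitches named later in the text.
fragment = [("D5", 4), ("A4", 2), ("B5", 2), ("G4", 8)]
P, R = to_pitch_rhythm_sequences(fragment)
print(P)  # ['D5', 'A4', 'B5', 'G4']
print(R)  # [4, 2, 2, 8]
```

The two sequences would then be fed to the VAE's pitch and rhythm encoders respectively.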
For example, the pitch sequence P may use D5, A4, B5 and G4 to represent the pitch of each note in the score, and the rhythm sequence R is represented by the duration of each note, with a minimum duration unit of one sixteenth note. The preset variational auto-encoder VAE comprises a learnable embedding layer, a pitch gated recurrent unit GRU, a rhythm gated recurrent unit GRU and two linear layers. The latent variable is obtained through a normal distribution: the VAE assumes that the hidden representation encoded by the neural network follows a standard Gaussian distribution, samples a feature from this distribution, and decodes it, expecting to obtain a result identical to the original input. Compared with an ordinary auto-encoder, the VAE adds a regularization term: the KL divergence between the distribution inferred by the encoder and the standard Gaussian distribution, where KL divergence refers to relative entropy, an asymmetric measure of the difference between two probability distributions.
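For diagonal Gaussians, the KL regularizer just described has a simple closed form. A minimal sketch (illustrative only, not the patent's code; the function name is hypothetical):

```python
import math

# KL divergence between the encoder's inferred Gaussian q = N(mu, sigma^2)
# and the standard Gaussian prior p = N(0, 1), the regularization term a
# VAE adds on top of an ordinary auto-encoder. For diagonal Gaussians:
#   KL(q || p) = 0.5 * sum(mu^2 + sigma^2 - log(sigma^2) - 1)

def kl_to_standard_normal(mu, sigma):
    """KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions."""
    return 0.5 * sum(
        m * m + s * s - math.log(s * s) - 1.0
        for m, s in zip(mu, sigma)
    )

# A latent whose inferred distribution already matches the prior
# contributes zero regularization loss...
print(kl_to_standard_normal([0.0, 0.0], [1.0, 1.0]))  # 0.0
# ...while a distribution far from N(0, 1) is penalized.
print(kl_to_standard_normal([2.0, -1.0], [0.5, 1.5]) > 0)  # True
```

Note the asymmetry mentioned in the text: KL(q || p) generally differs from KL(p || q).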
103. Calling a preset melody repairer Inpainter, predicting a corresponding Buddha music fragment based on the background of the Buddha music and the latent variable, and obtaining an intermediate Buddha music fragment.
The server calls a preset melody repairer Inpainter and predicts a corresponding Buddha music fragment based on the background of the Buddha music and the latent variable to obtain an intermediate Buddha music fragment. Specifically, the server calls the preset melody repairer Inpainter to read the latent variable; the server inputs the latent variable into a pitch gated recurrent unit GRU and a rhythm gated recurrent unit GRU to obtain a basic Buddha music fragment; and the server generates an intermediate Buddha music fragment based on the background of the Buddha music and the basic Buddha music fragment.
The melody repairer Inpainter comprises a preset melody inpainting algorithm and the preset gradient optimization algorithm Adam. It predicts corresponding Buddha music fragments based on the background style characteristics of the Buddha music, and specifies the style of the Buddha music by controlling pitch and rhythm rather than requiring a complete music track, thereby reducing the difficulty of creating Buddha music.
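The Adam optimizer mentioned above maintains exponential moving averages of the gradient and of its square, with bias correction. A minimal single-parameter sketch (illustrative, not the patent's training code; hyperparameters are the common defaults):

```python
import math

# Minimal Adam update for one scalar parameter, shown here minimizing
# the toy objective f(x) = x^2 via its gradient f'(x) = 2x.
def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g       # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g   # second-moment estimate
        m_hat = m / (1 - beta1 ** t)          # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Starting from x0 = 3.0, the parameter is driven close to the minimum at 0.
x_final = adam_minimize(lambda x: 2 * x, 3.0)
print(abs(x_final) < 1.0)  # True
```

In the Inpainter, the same update rule would be applied to the network's weights rather than a single scalar.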
104. Processing the intermediate Buddha music fragment based on a random mask-free code to generate a target Buddha music fragment, calling a preset Connector, and combining the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable.
The server processes the intermediate Buddha music fragment based on the random mask-free code to generate a target Buddha music fragment, calls the preset Connector, and combines the target Buddha music fragment with the preset Buddha music summary sketch to generate a target latent variable. Specifically, the server controls and modifies the intermediate Buddha music fragment based on a preset random mask-free code to generate a target Buddha music fragment; the server calls the preset Connector to read a preset Buddha music summary sketch, wherein the preset Buddha music summary sketch comprises pitch and rhythm information input by a user; and the server combines the target Buddha music fragment with the preset Buddha music summary sketch based on the preset Connector, generates a target latent variable, and sends the target latent variable to the preset decoder.
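One plausible reading of "combining" the target fragment with the summary sketch is concatenating the fragment's latent vector with features derived from the user's pitch and rhythm hints. A toy sketch under that assumption (the concatenation strategy, function name, and scaling are invented for illustration, not taken from the patent):

```python
# Toy illustration of combining a target-fragment latent with a user
# summary sketch (pitch + rhythm hints) into one target latent vector.

def combine_latents(fragment_latent, sketch_pitches, sketch_durations):
    """Concatenate the fragment latent with sketch-derived features."""
    # Scale sketch pitches (MIDI numbers) and durations (in sixteenths)
    # into small floats comparable in magnitude to the latent entries.
    pitch_feats = [p / 127.0 for p in sketch_pitches]
    rhythm_feats = [d / 16.0 for d in sketch_durations]
    return fragment_latent + pitch_feats + rhythm_feats

z_fragment = [0.12, -0.4, 0.9]   # latent for the target fragment
sketch_pitches = [62, 69]        # user sketch: D4, A4 as MIDI numbers
sketch_durations = [4, 8]        # quarter and half note, in sixteenths
z_target = combine_latents(z_fragment, sketch_pitches, sketch_durations)
print(len(z_target))  # 7
```

The resulting target latent would then be handed to the preset decoder, as described in step 105.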
The preset Connector combines a user-input Buddha music summary sketch containing pitch and rhythm information with the target Buddha music fragment. It may include a Kafka connector for providing streaming integration between a data store and a Kafka queue; the Kafka connector has a rich application program interface API and also a representational state transfer application program interface REST API for configuring and managing connectors. The Kafka connector itself is modular, and its key components include connectors, which define the set of JAR files associated with data storage integration, and converters, which handle serialization and deserialization of data.
105. Calling a preset decoder to decode the target latent variable to generate the final Buddha music work.
The server calls the preset decoder to decode the target latent variable and generate the final Buddha music work. Specifically, the server calls the preset decoder to read the target latent variable; the server then decodes the target latent variable based on the preset decoder to generate the final Buddha music work.
In the embodiment of the invention, a Buddha music work that better matches the user's expectation is generated according to the preset Buddha music generation model and the Buddha music fragment, so that the difficulty for users to participate in Buddha music creation is reduced and the efficiency of Buddha music generation is improved. This scheme can be applied to the smart education field to promote the construction of smart cities.
Referring to fig. 2, another embodiment of the Buddha music generation method according to the embodiment of the present invention includes:
201. Obtaining a Buddha music segment to be created, where the Buddha music segment to be created comprises a first Buddha music segment and a second Buddha music segment, and the starting time of the second Buddha music segment is later than the ending time of the first Buddha music segment.
The server acquires the Buddha music segment to be created, which comprises a first Buddha music segment and a second Buddha music segment, the starting time of the second being later than the ending time of the first. A preset time length exists between the first and second Buddha music segments; this interval is reserved for the pitch and rhythm information input by the user when generating the final Buddha music work.
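The structure just described — two fragments separated by a gap reserved for inpainting — can be sketched as follows. The `Fragment` type, the note representation, and the concrete times are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Fragment:
    start: float                    # start time in seconds
    end: float                      # end time in seconds
    notes: List[Tuple[str, float]]  # (pitch name, duration) pairs

# The first fragment ends before the second begins; the gap between
# them is the preset time length reserved for the user's pitch and
# rhythm input. Times and notes here are made-up illustrations.
first = Fragment(0.0, 8.0, [("D5", 0.5), ("A4", 0.25)])
second = Fragment(12.0, 20.0, [("B5", 0.5), ("G4", 1.0)])

assert second.start > first.end  # the constraint stated in step 201
gap = second.start - first.end   # preset duration to be inpainted
print(gap)  # 4.0
```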
It should be understood that the execution subject of the present invention may be a Buddha music generation apparatus, and may also be a terminal or a server, which is not limited herein. The embodiment of the present invention is described taking a server as the execution subject.
202. Calling a preset variational auto-encoder VAE, converting the Buddha music segment to be created into a latent variable, and decomposing the latent variable into a pitch variable and a rhythm variable.
The server calls the preset variational auto-encoder VAE, converts the Buddha music segment to be created into a latent variable, and decomposes the latent variable into a pitch variable and a rhythm variable. Specifically, the server converts the Buddha music segment to be created into a subsequence consisting of a pitch sequence P and a rhythm sequence R, where the pitch sequence P consists of the pitch types present in the segment and the rhythm sequence R consists of the duration types present in the segment; the server inputs the pitch sequence P and the rhythm sequence R into the preset variational auto-encoder VAE to generate a latent variable; the server then decomposes the latent variable into a pitch variable and a rhythm variable based on a preset factorized inference network.
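The split into a pitch sequence P and a rhythm sequence R can be sketched in a few lines. The `(pitch, duration)` note representation is an assumption, with durations counted in the sixteenth-note minimum unit used in the text.

```python
def to_sequences(notes):
    """Split a note list into the pitch sequence P and rhythm sequence R.

    Each note is assumed to be a (pitch name, duration) pair, with the
    duration counted in sixteenth-note units; this note representation
    itself is an assumption, not taken from the patent.
    """
    P = [pitch for pitch, _ in notes]
    R = [duration for _, duration in notes]
    return P, R

notes = [("D5", 4), ("A4", 2), ("B5", 4), ("G4", 8)]  # (pitch, sixteenths)
P, R = to_sequences(notes)
print(P)  # ['D5', 'A4', 'B5', 'G4']
print(R)  # [4, 2, 4, 8]
```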
For example, the pitch sequence P may use D5, A4, B5 and G4 to represent the pitch of each note in a score, and the rhythm sequence R is represented by note durations, with the sixteenth note as the minimum duration unit. The preset variational auto-encoder VAE comprises a learnable embedding layer, a pitch gated recurrent unit GRU, a rhythm gated recurrent unit GRU and two linear layers. The latent variable is obtained through a normal distribution: the VAE assumes that the hidden layer encoded by the neural network follows a standard Gaussian distribution, samples a feature from this distribution, and decodes that feature, expecting a result identical to the original input. Compared with an ordinary auto-encoder, the VAE adds a regularization term, the KL divergence between the inferred encoding distribution and the standard Gaussian distribution; KL divergence, also called relative entropy, is an asymmetric measure of the difference between two probability distributions.
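The sampling-plus-regularization idea described above can be made concrete with the reparameterization trick and the closed-form KL divergence between a diagonal Gaussian and the standard Gaussian. This is a generic numpy sketch of those two formulas, not the patent's network.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) via the reparameterization trick."""
    return mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ) for a diagonal Gaussian,
    the regularization term that distinguishes the VAE from a plain
    auto-encoder."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(8), np.zeros(8)     # posterior equal to the prior
z = reparameterize(mu, log_var, rng)       # a sampled latent feature
print(kl_to_standard_normal(mu, log_var))  # 0.0 (no divergence from N(0, 1))
```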
203. Calling the preset melody repairer Inpainter to read the latent variable.
The server calls the preset melody repairer Inpainter to read the latent variable. The melody repairer Inpainter comprises a preset melody repairing Inpaint algorithm and a preset gradient optimization algorithm Adam, where Adam is a first-order optimization algorithm that can replace the traditional stochastic gradient descent process and iteratively update the weights of a neural network based on training data.
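Adam's update rule — exponential moving averages of the gradient and its square, with bias correction — can be sketched as below; minimizing f(x) = x² for a few steps illustrates the iterative weight update the text refers to. The hyperparameter values are the commonly used defaults, not values taken from the patent.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: moving averages of the gradient (m) and its
    square (v), bias-corrected, then a scaled parameter step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Iteratively minimize f(x) = x^2 (gradient 2x) starting from x = 1.0.
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 201):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t, lr=0.05)
print(abs(float(theta[0])) < 0.5)  # True: the parameter has moved toward 0
```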
204. Inputting the latent variable into a pitch gated recurrent unit GRU and a rhythm gated recurrent unit GRU to obtain a basic Buddha music fragment.
The server inputs the latent variable into a pitch gated recurrent unit GRU and a rhythm gated recurrent unit GRU to obtain a basic Buddha music fragment.
The gated recurrent unit GRU is a model that preserves the effect of the long short-term memory network LSTM while having a simpler structure, fewer parameters and better convergence. It consists of an update gate and a reset gate: the update gate controls how strongly the hidden layer output at the previous moment influences the current hidden layer, with a larger update-gate value meaning greater influence; the reset gate controls how much of the previous moment's hidden-layer information is ignored, with a smaller reset-gate value meaning more is ignored. The GRU is simpler in construction, having one gate fewer than the LSTM and fewer matrix multiplications, so the GRU can save considerable time when the training data is large.
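A minimal GRU cell implementing the two gates described above might look as follows; the weight shapes and random initialization are illustrative assumptions. A larger update gate z keeps more of the previous hidden state, and a smaller reset gate r discards more of it when forming the candidate state, matching the description above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step with an update gate z and a reset gate r."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate: larger z keeps more of h
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate: smaller r ignores more of h
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate hidden state
    return z * h + (1 - z) * h_tilde          # blend previous state and candidate

rng = np.random.default_rng(0)
d_in, d_hid = 4, 8
x, h = rng.normal(size=d_in), np.zeros(d_hid)
Wz, Wr, Wh = (rng.normal(size=(d_hid, d_in)) * 0.1 for _ in range(3))
Uz, Ur, Uh = (rng.normal(size=(d_hid, d_hid)) * 0.1 for _ in range(3))
h_new = gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
print(h_new.shape)  # (8,)
```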
205. Generating an intermediate Buddha music segment based on the background of the Buddha music and the basic Buddha music segment.
The server generates an intermediate Buddha music segment based on the background of the Buddha music and the basic Buddha music segment. The background of Buddha music comprises the creation, transmission and development history of Buddha music, as well as its protection, inheritance and representative forms.
206. The server processes the intermediate Buddha music fragment based on random mask-free encoding to generate a target Buddha music fragment, calls the preset Connector, and combines the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable. Specifically, the server controls and modifies the intermediate Buddha music fragment based on preset random mask-free encoding to generate the target Buddha music fragment; the server calls the preset Connector to read a preset Buddha music summary sketch, where the preset Buddha music summary sketch comprises pitch and rhythm information input by the user; the server combines the target Buddha music fragment with the preset Buddha music summary sketch based on the preset Connector, generates a target latent variable, and sends the target latent variable to the preset decoder.
The preset Connector combines a user-input Buddha music summary sketch containing pitch and rhythm information with the target Buddha music fragment. It comprises a Kafka Connector for providing streaming integration between a data store and a Kafka queue; the Kafka Connector has a rich application program interface API, and further has a representational state transfer application program interface REST API for configuring and managing the Connector. The Kafka Connector is itself modular: its key components include connectors, which define a set of JAR files for integrating with a given data store, and converters, which handle serialization and deserialization of data.
207. Calling a preset decoder to decode the target latent variable to generate the final Buddha music work.
The server calls the preset decoder to decode the target latent variable and generate the final Buddha music work. Specifically, the server calls the preset decoder to read the target latent variable; the server then decodes the target latent variable based on the preset decoder to generate the final Buddha music work.
In the embodiment of the invention, a Buddha music work that better matches the user's expectation is generated according to the preset Buddha music generation model and the Buddha music fragment, so that the difficulty for users to participate in Buddha music creation is reduced and the efficiency of Buddha music generation is improved. This scheme can be applied to the smart education field to promote the construction of smart cities.
With reference to fig. 3, the Buddha music generation method in the embodiment of the present invention has been described above; the Buddha music generation device in the embodiment of the present invention is described below. An embodiment of the Buddha music generation device in the embodiment of the present invention includes:
an obtaining module 301, configured to obtain a Buddha music segment to be created, where the Buddha music segment to be created includes a first Buddha music segment and a second Buddha music segment, and the starting time of the second Buddha music segment is later than the ending time of the first Buddha music segment;
a conversion module 302, configured to call a preset variational auto-encoder VAE, convert the Buddha music segment to be created into a latent variable, and decompose the latent variable into a pitch variable and a rhythm variable;
a prediction module 303, configured to call a preset melody repairer Inpainter, predict a corresponding Buddha music segment based on the background of the Buddha music and the latent variable, and obtain an intermediate Buddha music segment;
a processing module 304, configured to process the intermediate Buddha music fragment based on random mask-free encoding, generate a target Buddha music fragment, call a preset Connector, and combine the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable;
and a decoding module 305, configured to call a preset decoder to decode the target latent variable and generate the final Buddha music work.
In the embodiment of the invention, a Buddha music work that better matches the user's expectation is generated according to the preset Buddha music generation model and the Buddha music fragment, so that the difficulty for users to participate in Buddha music creation is reduced and the efficiency of Buddha music generation is improved. This scheme can be applied to the smart education field to promote the construction of smart cities.
Referring to fig. 4, another embodiment of the Buddha music generation device according to the embodiment of the present invention includes:
an obtaining module 301, configured to obtain a Buddha music segment to be created, where the Buddha music segment to be created includes a first Buddha music segment and a second Buddha music segment, and the starting time of the second Buddha music segment is later than the ending time of the first Buddha music segment;
a conversion module 302, configured to call a preset variational auto-encoder VAE, convert the Buddha music segment to be created into a latent variable, and decompose the latent variable into a pitch variable and a rhythm variable;
a prediction module 303, configured to call a preset melody repairer Inpainter, predict a corresponding Buddha music segment based on the background of the Buddha music and the latent variable, and obtain an intermediate Buddha music segment;
a processing module 304, configured to process the intermediate Buddha music fragment based on random mask-free encoding, generate a target Buddha music fragment, call a preset Connector, and combine the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable;
and a decoding module 305, configured to call a preset decoder to decode the target latent variable and generate the final Buddha music work.
Optionally, the converting module 302 includes:
a conversion unit 3021, configured to convert the Buddha music segment to be created into a subsequence consisting of a pitch sequence P and a rhythm sequence R, where the pitch sequence P consists of the pitch types present in the Buddha music segment to be created and the rhythm sequence R consists of the duration types present in the Buddha music segment to be created;
a first input unit 3022, configured to input the pitch sequence P and the rhythm sequence R into the preset variational auto-encoder VAE to generate a latent variable;
a decomposition unit 3023, configured to decompose the latent variable into a pitch variable and a rhythm variable based on a preset factorized inference network.
Optionally, the prediction module 303 includes:
a first reading unit 3031, configured to call the preset melody repairer Inpainter to read the latent variable;
a second input unit 3032, configured to input the latent variable into a pitch gated recurrent unit GRU and a rhythm gated recurrent unit GRU to obtain a basic Buddha music fragment;
a generating unit 3033, configured to generate an intermediate Buddha music segment based on the background of the Buddha music and the basic Buddha music fragment.
Optionally, the processing module 304 includes:
a modifying unit 3041, configured to control and modify the intermediate Buddha music fragment based on preset random mask-free encoding to generate a target Buddha music fragment;
a second reading unit 3042, configured to call the preset Connector to read a preset Buddha music summary sketch, where the preset Buddha music summary sketch includes pitch and rhythm information input by the user;
a combining unit 3043, configured to combine the target Buddha music fragment with the preset Buddha music summary sketch based on the preset Connector, generate a target latent variable, and send the target latent variable to a preset decoder.
Optionally, the decoding module 305 includes:
a third reading unit 3051, configured to call a preset decoder to read the target latent variable;
and a decoding unit 3052, configured to decode the target latent variable based on the preset decoder to generate the final Buddha music work.
In the embodiment of the invention, a Buddha music work that better matches the user's expectation is generated according to the preset Buddha music generation model and the Buddha music fragment, so that the difficulty for users to participate in Buddha music creation is reduced and the efficiency of Buddha music generation is improved. This scheme can be applied to the smart education field to promote the construction of smart cities.
Fig. 3 and 4 describe the Buddha music generation device in the embodiment of the present invention in detail from the perspective of modular functional entities; the Buddha music generation device in the embodiment of the present invention is described in detail below from the perspective of hardware processing.
Fig. 5 is a schematic structural diagram of a Buddha music generation device 500 according to an embodiment of the present invention, which may include one or more processors (CPUs) 510 (e.g., one or more processors), a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) storing applications 533 or data 532. The memory 520 and the storage media 530 may be transient or persistent storage. The program stored on the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations in the Buddha music generation device 500. Still further, the processor 510 may be configured to communicate with the storage medium 530 to execute, on the Buddha music generation device 500, the series of instruction operations in the storage medium 530.
The Buddha music generation device 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input-output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. Those skilled in the art will appreciate that the device configuration shown in fig. 5 does not constitute a limitation of the Buddha music generation device, which may include more or fewer components than those shown, combine some components, or arrange the components differently.
The present invention also provides a Buddha music generation device, comprising a memory and a processor, where the memory stores computer-readable instructions that, when executed by the processor, cause the processor to execute the steps of the Buddha music generation method in the above embodiments.
The present invention also provides a computer-readable storage medium, which may be a non-volatile or a volatile computer-readable storage medium, having stored therein instructions that, when run on a computer, cause the computer to perform the steps of the Buddha music generation method.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A Buddha music generation method, comprising:
obtaining a Buddha music segment to be created, wherein the Buddha music segment to be created comprises a first Buddha music segment and a second Buddha music segment, and the starting time of the second Buddha music segment is later than the ending time of the first Buddha music segment;
calling a preset variational auto-encoder VAE, converting the Buddha music segment to be created into a latent variable, and decomposing the latent variable into a pitch variable and a rhythm variable;
calling a preset melody repairer Inpainter, predicting a corresponding Buddha music segment based on the background of the Buddha music and the latent variable, and obtaining an intermediate Buddha music segment;
processing the intermediate Buddha music fragment based on random mask-free encoding to generate a target Buddha music fragment, calling a preset Connector, and combining the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable;
and calling a preset decoder to decode the target latent variable to generate the final Buddha music work.
2. The Buddha music generation method according to claim 1, wherein said calling a preset variational auto-encoder VAE, converting the Buddha music segment to be created into a latent variable, and decomposing the latent variable into a pitch variable and a rhythm variable comprises:
converting the Buddha music segment to be created into a subsequence consisting of a pitch sequence P and a rhythm sequence R, the pitch sequence P consisting of the pitch types present in the Buddha music segment to be created and the rhythm sequence R consisting of the duration types present in the Buddha music segment to be created;
inputting the pitch sequence P and the rhythm sequence R into the preset variational auto-encoder VAE to generate a latent variable;
and decomposing the latent variable into a pitch variable and a rhythm variable based on a preset factorized inference network.
3. The Buddha music generation method according to claim 1, wherein said calling a preset melody repairer Inpainter to predict a corresponding Buddha music segment based on the background of the Buddha music and the latent variable to obtain an intermediate Buddha music segment comprises:
calling the preset melody repairer Inpainter to read the latent variable;
inputting the latent variable into a pitch gated recurrent unit GRU and a rhythm gated recurrent unit GRU to obtain a basic Buddha music fragment;
and generating an intermediate Buddha music segment based on the background of the Buddha music and the basic Buddha music fragment.
4. The Buddha music generation method according to claim 1, wherein said processing the intermediate Buddha music fragment based on random mask-free encoding to generate a target Buddha music fragment, calling a preset Connector, and combining the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable comprises:
controlling and modifying the intermediate Buddha music fragment based on preset random mask-free encoding to generate a target Buddha music fragment;
calling the preset Connector to read a preset Buddha music summary sketch, the preset Buddha music summary sketch comprising pitch and rhythm information input by a user;
and combining the target Buddha music fragment with the preset Buddha music summary sketch based on the preset Connector to generate a target latent variable and sending the target latent variable to a preset decoder.
5. The Buddha music generation method according to claim 1, wherein said calling a preset decoder to decode the target latent variable to generate a final Buddha music work comprises:
calling the preset decoder to read the target latent variable;
and decoding the target latent variable based on the preset decoder to generate a final Buddha music work.
6. The Buddha music generation method according to any one of claims 1 to 5, wherein after the obtaining of the Buddha music segment to be created, which includes a past Buddha music segment and a future Buddha music segment, and before the calling of a preset variational auto-encoder VAE, converting the Buddha music segment to be created into a latent variable, and decomposing the latent variable into a pitch part and a rhythm part, the method further comprises:
receiving a preset Buddha music summary sketch, the preset Buddha music summary sketch comprising pitch and rhythm information input by a user.
7. The Buddha music generation method according to claim 1, wherein after said calling a preset decoder to decode said target latent variable to generate a final Buddha music work, the method further comprises:
calculating a loss function l_i(θ, φ), the specific formula being:
l_i(θ, φ) = −E_{z∼q_θ(z|x_i)}[log p_φ(x_i|z)] + KL(q_θ(z|x_i) ‖ p(z))
where θ is a parameter of the preset variational auto-encoder VAE and φ is a parameter of the preset decoder, θ referring to the mapping from x to z and φ to the reconstruction from z to x; q_θ(z|x_i) is the posterior distribution of z derived from x, and p(z) is the prior distribution of z, assumed to be a Gaussian distribution N(0, 1) with mean 0 and variance 1.
8. A Buddha music generation apparatus, characterized by comprising:
an obtaining module, configured to obtain a Buddha music segment to be created, where the Buddha music segment to be created comprises a past Buddha music segment and a future Buddha music segment;
a conversion module, configured to call a preset variational auto-encoder VAE, convert the Buddha music segment to be created into a latent variable, and decompose the latent variable into a pitch variable and a rhythm variable;
a prediction module, configured to call a preset melody repairer Inpainter, predict a corresponding Buddha music segment based on the background of the Buddha music and the latent variable, and obtain an intermediate Buddha music segment;
a processing module, configured to process the intermediate Buddha music fragment based on random mask-free encoding to generate a target Buddha music fragment, call a preset Connector, and combine the target Buddha music fragment with a preset Buddha music summary sketch to generate a target latent variable;
and a decoding module, configured to call a preset decoder to decode the target latent variable and generate the final Buddha music work.
9. A Buddha music generation device, characterized in that the Buddha music generation device comprises: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the Buddha music generation device to perform the Buddha music generation method of any one of claims 1-7.
10. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the Buddha music generation method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110301852.4A CN113077770B (en) | 2021-03-22 | 2021-03-22 | Buddha music generation method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110301852.4A CN113077770B (en) | 2021-03-22 | 2021-03-22 | Buddha music generation method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113077770A true CN113077770A (en) | 2021-07-06 |
CN113077770B CN113077770B (en) | 2024-03-05 |
Family
ID=76613396
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110301852.4A Active CN113077770B (en) | 2021-03-22 | 2021-03-22 | Buddha music generation method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113077770B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03203785A (en) * | 1989-12-30 | 1991-09-05 | Casio Comput Co Ltd | Music part generating device |
US6297439B1 (en) * | 1998-08-26 | 2001-10-02 | Canon Kabushiki Kaisha | System and method for automatic music generation using a neural network architecture |
CN102610222A (en) * | 2007-02-01 | 2012-07-25 | 缪斯亚米有限公司 | Music transcription method, system and device |
US20130262096A1 (en) * | 2011-09-23 | 2013-10-03 | Lessac Technologies, Inc. | Methods for aligning expressive speech utterances with text and systems therefor |
CN109671416A (en) * | 2018-12-24 | 2019-04-23 | 成都嗨翻屋科技有限公司 | Music rhythm generation method, device and user terminal based on enhancing study |
CN110853604A (en) * | 2019-10-30 | 2020-02-28 | 西安交通大学 | Automatic generation method of Chinese folk songs with specific region style based on variational self-encoder |
CN112331170A (en) * | 2020-10-28 | 2021-02-05 | 平安科技(深圳)有限公司 | Method, device and equipment for analyzing similarity of Buddha music melody and storage medium |
-
2021
- 2021-03-22 CN CN202110301852.4A patent/CN113077770B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03203785A (en) * | 1989-12-30 | 1991-09-05 | Casio Comput Co Ltd | Music part generating device |
US6297439B1 (en) * | 1998-08-26 | 2001-10-02 | Canon Kabushiki Kaisha | System and method for automatic music generation using a neural network architecture |
CN102610222A (en) * | 2007-02-01 | 2012-07-25 | 缪斯亚米有限公司 | Music transcription method, system and device |
US20130262096A1 (en) * | 2011-09-23 | 2013-10-03 | Lessac Technologies, Inc. | Methods for aligning expressive speech utterances with text and systems therefor |
CN109671416A (en) * | 2018-12-24 | 2019-04-23 | 成都嗨翻屋科技有限公司 | Music rhythm generation method, device and user terminal based on enhancing study |
CN110853604A (en) * | 2019-10-30 | 2020-02-28 | 西安交通大学 | Automatic generation method of Chinese folk songs with specific region style based on variational self-encoder |
CN112331170A (en) * | 2020-10-28 | 2021-02-05 | 平安科技(深圳)有限公司 | Method, device and equipment for analyzing similarity of Buddha music melody and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113077770B (en) | 2024-03-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111061847A (en) | Dialogue generation and corpus expansion method and device, computer equipment and storage medium | |
Datta et al. | A binary-real-coded differential evolution for unit commitment problem | |
JP6871809B2 (en) | Information processing equipment, information processing methods, and programs | |
CN109727590B (en) | Music generation method and device based on recurrent neural network | |
CN111310436B (en) | Text processing method and device based on artificial intelligence and electronic equipment | |
BR112019014822A2 (en) | NEURAL NETWORKS FOR ATTENTION-BASED SEQUENCE TRANSDUCTION | |
WO2019083519A1 (en) | Natural language processing with an n-gram machine | |
Gaussier et al. | Online tuning of EASY-backfilling using queue reordering policies | |
Xu et al. | A multiple priority queueing genetic algorithm for task scheduling on heterogeneous computing systems | |
CN112131888B (en) | Method, device, equipment and storage medium for analyzing semantic emotion | |
CN117194056A (en) | Large language model reasoning optimization method, device, computer equipment and storage medium | |
CN110297885B (en) | Method, device and equipment for generating real-time event abstract and storage medium | |
EP1570427A1 (en) | Forward-chaining inferencing | |
CN112560456A (en) | Generation type abstract generation method and system based on improved neural network | |
CN113641447A (en) | Online learning type scheduling method based on container layer dependency relationship in edge calculation | |
CN111401037A (en) | Natural language generation method and device, electronic equipment and storage medium | |
WO2023114661A1 (en) | A concept for placing an execution of a computer program | |
CN117312559A (en) | Method and system for extracting aspect-level emotion four-tuple based on tree structure information perception | |
CN113077770A (en) | Fole generation method, device, equipment and storage medium | |
CN113421646A (en) | Method and device for predicting duration of illness, computer equipment and storage medium | |
CN112597777A (en) | Multi-turn dialogue rewriting method and device | |
CN116168666A (en) | Music estimation device, music estimation method, and model generation device | |
CN112420002A (en) | Music generation method, device, electronic equipment and computer readable storage medium | |
CN115602139A (en) | Automatic music generation method and device based on two-stage generation model | |
CN113066457B (en) | Fan-exclamation music generation method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |