CN109192187A - Composing method, system, computer equipment and storage medium based on artificial intelligence - Google Patents

Composing method, system, computer equipment and storage medium based on artificial intelligence Download PDF

Info

Publication number
CN109192187A
CN109192187A
Authority
CN
China
Prior art keywords
lstm
network
music
single layer
time series
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810561621.5A
Other languages
Chinese (zh)
Inventor
王义文
刘奡智
王健宗
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201810561621.5A priority Critical patent/CN109192187A/en
Priority to PCT/CN2018/104715 priority patent/WO2019232959A1/en
Publication of CN109192187A publication Critical patent/CN109192187A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/0008 - Associated control or indicating means
    • G10H1/0025 - Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/101 - Music Composition or musical creation; Tools or processes therefor
    • G10H2210/111 - Automatic composing, i.e. using predefined musical rules

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an artificial-intelligence-based composing method, system, computer device and storage medium. The method comprises the steps of: obtaining note information, the note information including the play start time, playing duration and pitch value of each note; creating a musical time series and performing autoregressive integrated moving average (ARIMA) model prediction to obtain a time-series prediction result; building a music prediction model according to the note information and the time-series prediction result; and determining the topological structure of the music prediction model, then combining the note information and the time-series prediction result with the music prediction model and the determined topological structure to obtain the predicted music, thereby realizing automatic composition. The method greatly simplifies the production process; the user can participate in creation, and different inputs generate different songs without mid-course intervention, making the method flexible and versatile.

Description

Composing method, system, computer equipment and storage medium based on artificial intelligence
Technical field
The present invention relates to the field of information technology, and more particularly to an artificial-intelligence-based composing method, system, computer device and storage medium.
Background technique
With the application of computer technology to music processing, computer music has come into being. As a newly emerging art form, computer music has gradually penetrated many aspects of music, such as composition, instrument performance, education and entertainment. Automatic composition using artificial intelligence technology, as a relatively new research direction in computer music, has in recent years received great attention from researchers in related fields.
Existing automatic composing methods based on artificial intelligence technology mainly include automatic composition based on heuristic search and automatic composition based on genetic algorithms. However, automatic composition based on heuristic search is only applicable when the melody is short: search efficiency declines exponentially as the melody length increases, so for longer melodies this method is of poor feasibility. The automatic composing method based on genetic algorithms inherits some typical shortcomings of genetic algorithms, such as heavy dependence on the initial population and the difficulty of precisely selecting genetic operators.
Summary of the invention
In view of this, it is necessary to provide an artificial-intelligence-based composing method, system, computer device and storage medium that address the drawbacks of existing automatic composing methods.
An artificial-intelligence-based composing method, the composing method comprising: obtaining note information, the note information including the play start time, playing duration and pitch value of each note; creating a musical time series and performing autoregressive integrated moving average (ARIMA) model prediction to obtain a time-series prediction result; building a music prediction model according to the note information and the time-series prediction result; and determining the topological structure of the music prediction model, combining the note information and the time-series prediction result with the music prediction model and the determined topological structure to obtain the predicted music, thereby realizing automatic composition.
In one of the embodiments, performing autoregressive integrated moving average (ARIMA) model prediction includes:
examining the audio, sound frames and their patterns of change according to the autocorrelation function and the partial autocorrelation function, and identifying the stationarity of the time series;
performing stationarization processing on a non-stationary time series;
selecting an ARIMA model according to the identification rules for the time series, and estimating the parameters of the ARIMA model;
performing hypothesis testing to diagnose whether the residual sequence is white noise;
performing predictive analysis using the model that has passed the test.
In one of the embodiments, building the music prediction model according to the note information and the time-series prediction result includes:
establishing a single-layer long short-term memory (LSTM) network, and building a hidden layer using the Dropout mechanism;
copying the single-layer network into a two-layer network according to the single-layer LSTM;
initializing weight parameters, adjusting the weight parameters in the two-layer network layer by layer in reverse using the back-propagation mechanism, optimizing the loss function by iteratively improving the training precision of the network, and constructing the music prediction model.
In one of the embodiments, establishing the single-layer long short-term memory (LSTM) network and building the hidden layer using the Dropout mechanism includes:
establishing a single-layer LSTM network, the single-layer LSTM containing an LSTM block, the LSTM block including a gate that determines whether an input is important enough to be remembered and whether it can be output;
randomly initializing the initial state, with one input value per step, the input value being the word vector corresponding to each word;
randomly deleting units of the hidden layer using the Dropout mechanism, while keeping the input and output neurons unchanged.
In one of the embodiments, copying the single-layer network into a two-layer network according to the single-layer LSTM includes:
aggregating multiple basic LSTM cells into one using a multilayer RNN cell;
controlling the state of the RNN cell through gates, and deleting information from or adding information to it, re-invoking a basic LSTM cell each time an RNN cell is added;
setting initial values for the initial state of each layer of the network.
An artificial-intelligence-based composing system, the composing system comprising:
an obtaining module, for obtaining note information, the note information including the play start time, playing duration and pitch value of each note;
a creation module, for creating a musical time series and performing autoregressive integrated moving average (ARIMA) model prediction to obtain a time-series prediction result;
a construction module, for building a music prediction model according to the note information and the time-series prediction result;
a composition module, for determining the topological structure of the music prediction model, and combining the note information and the time-series prediction result with the music prediction model and the determined topological structure to obtain the predicted music, thereby realizing automatic composition.
In one of the embodiments, the construction module further includes:
an establishing unit, for establishing a single-layer long short-term memory (LSTM) network and building a hidden layer using the Dropout mechanism;
a copying unit, for copying the single-layer network into a two-layer network according to the single-layer LSTM;
an optimization unit, for initializing weight parameters, adjusting the weight parameters in the two-layer network layer by layer in reverse using the back-propagation mechanism, optimizing the loss function by iteratively improving the training precision of the network, and constructing the music prediction model.
In one of the embodiments, the establishing unit further includes:
an establishing subunit, for establishing a single-layer long short-term memory (LSTM) network, the single-layer LSTM containing an LSTM block, the LSTM block including a gate that determines whether an input is important enough to be remembered and whether it can be output;
a correspondence subunit, for randomly initializing the initial state, with one input value per step, the input value being the word vector corresponding to each word;
a deletion subunit, for randomly deleting units of the hidden layer using the Dropout mechanism while keeping the input and output neurons unchanged;
the copying unit further includes:
an aggregation subunit, for aggregating multiple basic LSTM cells into one using a multilayer RNN cell;
a deletion subunit, for controlling the state of the RNN cell through gates and deleting information from or adding information to it, re-invoking a basic LSTM cell each time an RNN cell is added;
a setting subunit, for setting initial values for the initial state of each layer of the network.
A computer device, including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to execute the steps of the above composing method.
A storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to execute the steps of the above composing method.
In the above artificial-intelligence-based composing method, system, computer device and storage medium, note information is obtained, the note information including the play start time, playing duration and pitch value of each note; a musical time series is created and autoregressive integrated moving average (ARIMA) model prediction is performed to obtain a time-series prediction result; a single-layer long short-term memory (LSTM) network is established, the single-layer LSTM containing an LSTM block, the LSTM block including a gate that determines whether an input is important enough to be remembered and whether it can be output; the initial state is randomly initialized, with one input value per step, the input value being the word vector corresponding to each word; units of the hidden layer are randomly deleted using the Dropout mechanism while keeping the input and output neurons unchanged; multiple basic LSTM cells are aggregated into one using a multilayer RNN cell; the state of the RNN cell is controlled through gates, and information is deleted from or added to it, with a basic LSTM cell re-invoked each time an RNN cell is added; initial values are set for the initial state of each layer of the network; weight parameters are initialized and adjusted layer by layer in reverse in the two-layer network using the back-propagation mechanism; the loss function is optimized by iteratively improving the training precision of the network, and the music prediction model is constructed; the topological structure of the music prediction model is determined, and the note information and the time-series prediction result are combined with the music prediction model and the determined topological structure to obtain the predicted music, thereby realizing automatic composition. The production process is greatly simplified; the user can participate in creation, and different inputs generate different songs without mid-course intervention, making the method flexible and versatile.
Detailed description of the invention
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention.
Fig. 1 is a flowchart of the artificial-intelligence-based composing method in one embodiment;
Fig. 2 is a flowchart of building the music prediction model according to the note information and the time-series prediction result in one embodiment;
Fig. 3 is a flowchart of establishing the single-layer long short-term memory (LSTM) network and building the hidden layer using the Dropout mechanism in one embodiment;
Fig. 4 is a flowchart of copying the single-layer network into a two-layer network according to the single-layer LSTM in one embodiment;
Fig. 5 is a structural block diagram of the artificial-intelligence-based composing system in one embodiment;
Fig. 6 is a structural block diagram of the construction module in one embodiment;
Fig. 7 is a structural block diagram of the establishing unit in one embodiment;
Fig. 8 is a structural block diagram of the copying unit in one embodiment.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include the plural forms. It should be further understood that the wording "comprising" used in the specification of the present invention indicates the presence of the stated features, integers, steps, operations, elements and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
As a preferred embodiment, as shown in Fig. 1, an artificial-intelligence-based composing method comprises the following steps:
Step S101: obtain note information, the note information including the play start time, playing duration and pitch value of each note;
Long short-term memory (LSTM, Long Short-Term Memory) is a kind of recurrent neural network (RNN) that solves the problem that long-term "memory" in conventional recurrent neural networks is unusable, giving it the ability to learn over long distances. This technical solution provides an artificial-intelligence composing system based on a dual-channel long short-term memory (LSTM) network, which performs intelligent creation on the basis of expressing musical features in the two dimensions of pitch and duration. After simple notes are input into the system, the machine can learn from the arrangement features of the notes and write a richer, more complete song, imitating a musician's performance so that the played song sounds natural rather than stiff. Note information is obtained, the note information including the play start time, playing duration and pitch value of each note; the corresponding frame-level audio features are extracted; the obtaining module then combines the frame-level audio features with a pre-built music frequency-band feature binding model to obtain frame-level audio features carrying band information; and the frame-level audio features carrying band information are fed to the pre-built music prediction model to obtain the predicted music, thereby realizing automatic composition.
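As a minimal illustration (not code from the patent), the note information described above could be represented as a small record type; the field names and MIDI-style pitch numbers are assumptions chosen for the sketch:

```python
from dataclasses import dataclass

# Each note carries a play start time, a playing duration and a pitch
# value, as described above. Units and pitch encoding are illustrative.

@dataclass
class Note:
    start: float      # play start time, in seconds
    duration: float   # playing duration, in seconds
    pitch: int        # pitch value (MIDI-style: 60 = middle C)

melody = [
    Note(start=0.0, duration=0.5, pitch=60),
    Note(start=0.5, duration=0.5, pitch=64),
    Note(start=1.0, duration=1.0, pitch=67),
]
total_time = melody[-1].start + melody[-1].duration
print(total_time)    # -> 2.0
```

A sequence of such records is what the time-series and model-building steps below would consume.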
Step S102: create a musical time series, and perform autoregressive integrated moving average (ARIMA) model prediction to obtain a time-series prediction result;
The stationarity of a musical time-series model is generally checked with a statistical test. If a time series is non-stationary, operations such as taking logarithms or differencing can be applied to make it stationary, after which ARIMA model prediction is performed to obtain a prediction result for the stationary time series. The full name of the ARIMA model is the autoregressive integrated moving average model (Autoregressive Integrated Moving Average Model); it treats the data sequence formed by the prediction target over time as a random sequence and approximately describes this sequence with a certain mathematical model. Once identified, this model can predict future values from the past and present values of the time series.
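The difference-then-predict idea can be sketched in plain Python. This is a deliberately minimal illustration of an ARIMA(1,1,0)-style forecast, not a full ARIMA implementation (a real fit would also select orders and check residuals); the pitch series is an invented example:

```python
# Difference the series once to make it stationary, fit an AR(1)
# coefficient by least squares, forecast the next difference, and
# integrate back to the original scale.

def arima_110_forecast(series):
    """One-step-ahead forecast for an ARIMA(1,1,0)-style model."""
    # First-order differencing: d[t] = x[t+1] - x[t]
    diffs = [b - a for a, b in zip(series, series[1:])]
    # Least-squares AR(1) coefficient phi: d[t] ~ phi * d[t-1]
    num = sum(d0 * d1 for d0, d1 in zip(diffs, diffs[1:]))
    den = sum(d * d for d in diffs[:-1])
    phi = num / den if den else 0.0
    next_diff = phi * diffs[-1]        # predicted next difference
    return series[-1] + next_diff      # integrate back

# A steadily rising pitch sequence: differences are constant, so the
# fitted phi is 1 and the forecast continues the trend.
pitches = [60, 62, 64, 66, 68]
print(arima_110_forecast(pitches))     # -> 70.0
```

In practice a library such as statsmodels would be used for the full identify/estimate/diagnose cycle described in the embodiments.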
Step S103: build a music prediction model according to the note information and the time-series prediction result;
Recurrent neural networks (Recurrent Neural Networks, RNN) are currently popular neural network models and have shown excellent results in many natural-language-processing tasks. Long short-term memory (LSTM, Long Short-Term Memory) is a kind of RNN that solves the problem that long-term "memory" in conventional recurrent neural networks is unusable, giving it the ability to learn over long distances. The advantage of the dual-channel long short-term memory (LSTM) network used in this technical solution is that it can "cleverly" forget long-unused memories, learn new useful information, and store the useful information back into long-term memory. For example, movement information of previously played passages that does not affect the current passage is removed, while the melody of the latest passage is recorded into "long-term memory". For instance, suppose the input of the LSTM is xt and the output is yt; the weight from the input layer to the hidden layer is W, and the weight from hidden layer to hidden layer is U. The LSTM's memory capability comes from the weights between hidden layers, which summarize previous input states as auxiliary information for the next input. The process specifically comprises the following steps:
First, a single-layer long short-term memory (LSTM) network is established, the Dropout mechanism is set up, and the hidden layer is built. LSTM is a kind of time-recurrent neural network containing LSTM blocks; an LSTM block may be described as an intelligent network element because it can remember values over indefinite time spans. There is a gate in the block that determines whether an input is important enough to be remembered and whether it can be output. The initial state is randomly initialized, with one input per step, the actual input being the word vector corresponding to each word. For the hidden state and hidden nodes, the state of the final step can be taken as the output, or the states of all steps can be weighted or directly averaged as the output, flexibly adjusted according to the specific task.
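One forward step of the gated LSTM block described above can be sketched in plain Python. This is an illustrative scalar version only: the weight values are placeholder assumptions, and a real network would use vectors and learned matrices:

```python
import math

# Forget, input and output gates (sigmoid) plus a tanh candidate decide
# what the block remembers and what it emits at each step.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step with scalar state; w maps gate name -> (wx, wh, b)."""
    def gate(name, squash):
        wx, wh, b = w[name]
        return squash(wx * x + wh * h_prev + b)
    f = gate("forget", sigmoid)          # which old memory to keep
    i = gate("input", sigmoid)           # which new information to store
    g = gate("candidate", math.tanh)     # candidate memory content
    o = gate("output", sigmoid)          # which memory to expose
    c = f * c_prev + i * g               # updated cell state
    h = o * math.tanh(c)                 # new hidden output
    return h, c

# Placeholder weights; a trained network would learn these.
weights = {name: (0.5, 0.5, 0.0) for name in ("forget", "input", "candidate", "output")}
h, c = 0.0, 0.0
for x in (1.0, -1.0, 1.0):              # a toy input sequence
    h, c = lstm_step(x, h, c, weights)
print(round(h, 4))
```

The gating structure is what lets the block keep or discard "memory" selectively, as the surrounding text describes.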
The training process of a neural network is to conduct the input forward through the network and then back-propagate the error. The Dropout mechanism randomly deletes units of the hidden layer during this process and then proceeds as described above. Taken together, the above process may include: randomly erasing some hidden neurons in the network while keeping the input and output neurons unchanged; propagating the input forward through the modified network, then back-propagating the error through the modified network; and repeating these operations for other batches of training samples, achieving the effect of a kind of voting mechanism. For a fully connected neural network, training five different neural networks on the same data may yield multiple different results, and a voting mechanism can decide that the majority wins, thereby comparatively improving the precision and robustness of the network. For a single neural network trained in batches, different sub-networks may overfit to different degrees, but because they share a common loss function they are optimized simultaneously, which amounts to averaging them and can therefore more effectively prevent overfitting. This reduces the complex co-adaptation between neurons. After hidden-layer neurons are randomly erased, the fully connected network acquires a certain sparsity, which effectively alleviates the synergistic effect of different features. That is, some features may depend on the joint action of hidden nodes in fixed relationships; the Dropout mechanism effectively prevents the situation in which certain features are effective only in the presence of other features, increasing the robustness of the neural network. Robustness (Robust) means health and strength; it is the key to a system's survival in abnormal and dangerous situations. For example, whether computer software avoids crashing or collapsing under input errors, disk failures, network overload or deliberate attack is the robustness of that software. So-called "robustness" means that a control system maintains certain other performance characteristics under parameter perturbations of a certain kind (in structure or magnitude). According to the different definitions of performance, it can be divided into stability robustness and performance robustness. A static controller designed with closed-loop system robustness as the target is called a robust controller.
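The random deletion of hidden units can be sketched as follows. This is a minimal illustration of (inverted) dropout under assumed values: the keep probability, layer size and seed are arbitrary choices, not parameters from the patent:

```python
import random

# Hidden units are randomly deleted (zeroed) during training while the
# input and output neurons stay unchanged; survivors are rescaled so the
# expected activation is preserved (inverted dropout).

def dropout(hidden, keep_prob, rng):
    """Zero each hidden unit with probability 1 - keep_prob."""
    return [h * (1.0 / keep_prob) if rng.random() < keep_prob else 0.0
            for h in hidden]

rng = random.Random(0)                  # fixed seed for reproducibility
hidden_layer = [0.2, -0.5, 0.9, 0.1, -0.3, 0.7]
dropped = dropout(hidden_layer, keep_prob=0.5, rng=rng)
print(dropped)                          # some units zeroed, survivors doubled
```

Each training batch erases a different random subset, which is what produces the ensemble-like "voting" effect described above.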
Next, according to the established LSTM, the single-layer network is copied into a two-layer network. An LSTM has the form of a chain of repeating neural network modules, and the replicated modules have different structures: four neural network layers interacting in a particular manner. A horizontal line represents the cell state, with linear interactions ensuring that information is passed along. Information is selectively allowed through, implemented by a sigmoid neural network layer and a point-wise multiplication. The sigmoid layer maps a variable to between 0 and 1 to describe whether each component should pass through the gate. In the sigmoid layer, the neural network algorithm selects the sigmoid function as the activation function; to describe the neural network, the sigmoid layer applies the sigmoid function to each input datum, and a hyperbolic tangent function (tanh) can also be chosen. By default, 0 represents "let no component through" and 1 represents "let the component through".
An LSTM has three similar gates: a "forget gate" sigmoid layer that decides which information should be discarded from the cell state; an "input gate" layer that decides which new information needs to be stored in the cell state, together with a "tanh layer" that adds values to the state; and a sigmoid layer plus tanh function that decide which part of the cell state should be output. A dual-channel long short-term memory (LSTM) network uses two LSTM networks together, which can enhance the "learning" ability of the neural network, allowing the intelligent system to fully and explicitly "comprehend" the features in the music and acquire its playing style. For some complex sequences, a multilayer network is needed. To build the model, a multilayer RNN cell (Multi-RNN Cell) is first used to aggregate multiple basic LSTM cells (Basic LSTM Cell) into one; the LSTM deletes or adds information to the cell state through the structures called gates. It is worth noting that each added cell needs to re-invoke a basic LSTM cell, because this function declares an internal variable each time; otherwise these variables would be reused, producing errors. Initial values are set for the initial state of each layer. The initial values can also be generated with the zero_state method, but then the intermediate states cannot be explicitly controlled; the choice depends on the practical application. A multilayer LSTM outperforms a single layer, which conforms to the general law that as the data dimensionality grows and nonlinear factors increase, a multilayer neural network is needed for modeling.
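The stacking described above can be sketched in plain Python: the hidden output of layer 1 becomes the input of layer 2, each layer keeps its own (h, c) initial state, and a fresh cell (with its own weights) is created per layer, mirroring the note that each added cell must re-invoke a basic LSTM cell. All weights here are illustrative placeholders:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])
    c = f * c_prev + i * g
    return o * math.tanh(c), c

def make_cell():
    """A fresh cell with its own weight variables, never shared."""
    return {k: (0.5, 0.5, 0.0) for k in ("f", "i", "g", "o")}

def run_stacked(inputs, num_layers=2):
    cells = [make_cell() for _ in range(num_layers)]
    states = [(0.0, 0.0)] * num_layers    # per-layer initial (h, c)
    outputs = []
    for x in inputs:
        for layer, cell in enumerate(cells):
            h, c = lstm_step(x, *states[layer], cell)
            states[layer] = (h, c)
            x = h                          # layer 1's output feeds layer 2
        outputs.append(x)
    return outputs

outs = run_stacked([1.0, -1.0, 1.0])
print(len(outs))    # -> 3
```

In a framework this is what a Multi-RNN Cell wrapper does; the per-layer initial states here correspond to the explicitly set initial values mentioned in the text.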
Finally, the weight parameters are initialized and a back-propagation mechanism is established to optimize the loss function. Before training starts, all weights are initialized with different small random numbers. The network model minimizes the loss function by gradient descent, adjusting the weight parameters in the network layer by layer in reverse, and improves the precision of the network through frequent iterations. This involves data preprocessing, parameter initialization, batch normalization (BN) regularization, random deactivation (Dropout) and the loss function; classification problems, regression problems and gradient checks; model checks before and during training; and parameter updates. If weights were identical, back-propagation would compute the same gradient for each and thus perform the same parameter update; from the perspective of the cost function, the parameter initialization also cannot be too large. It is therefore desirable that the initial weight values be very close to, but not equal to, 0.
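The training loop described above can be sketched with a toy model. The linear model, data and learning rate are illustrative stand-ins (the patent trains an LSTM, not a linear fit); the point is small non-zero random initialization plus iterative gradient descent on a loss:

```python
import random

# Weights start as small random numbers (close to, but not equal to, 0)
# and gradient descent iteratively reduces a squared loss.

rng = random.Random(42)
w = rng.uniform(-0.01, 0.01)            # small random init, not exactly 0

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # target relation y = 2x
lr = 0.05

def loss(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

initial_loss = loss(w)
for _ in range(200):                    # frequent iterations improve precision
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                      # reverse (gradient) adjustment
print(round(w, 3))                      # -> 2.0
```

Starting all weights at exactly 0 (or any identical value) would make every gradient identical in a multi-weight network, which is why the text insists on small distinct random values.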
Step S104: determine the topological structure of the music prediction model, combine the note information and the time-series prediction result with the music prediction model and the determined topological structure, obtain the predicted music, and realize automatic composition.
The topological structure of the music prediction model is determined; the note information and the time-series prediction result are combined with the music prediction model and the determined topological structure to obtain the predicted music, realizing automatic composition. The topological structure refers to a neural network structure. Taking the recurrent neural network (Recurrent Neural Networks, RNN) of this embodiment as an example, the topological structure includes two independent RNNs and a connection unit; the two independent RNNs are named LF_RNN and HF_RNN, and are used for low-frequency-band multi-frequency feature combination and high-frequency-band multi-frequency feature combination, respectively. The frame-level audio features corresponding to the music in the music file are extracted; the obtaining module then combines the frame-level audio features with the pre-built music frequency-band feature combination model to obtain frame-level audio features carrying band information; and the frame-level audio features carrying band information are fed to the pre-built music prediction model to obtain the predicted music, thereby realizing automatic composition.
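The two-channel topology can be sketched schematically. The trivial running-average "RNNs" and feature values below are placeholders standing in for LF_RNN and HF_RNN; only the shape of the structure (two independent channels plus a connection unit) follows the text:

```python
# Two independent recurrent units each summarize one frequency band, and
# a connection unit joins their outputs.

def simple_rnn(features):
    """A toy recurrent unit: the state is a decayed running summary."""
    state = 0.0
    for f in features:
        state = 0.5 * state + 0.5 * f     # recurrence over the sequence
    return state

def dual_channel(low_band, high_band):
    lf = simple_rnn(low_band)             # LF_RNN channel (low band)
    hf = simple_rnn(high_band)            # HF_RNN channel (high band)
    return (lf, hf)                       # connection unit: join channels

combined = dual_channel([0.2, 0.4], [0.8, 0.6])
print(combined)
```

The combined pair is what a downstream prediction model would consume as the band-aware feature representation.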
In one of the embodiments, performing autoregressive integrated moving average (ARIMA) model prediction includes:
examining the audio, sound frames and their patterns of change according to the autocorrelation function and the partial autocorrelation function, and identifying the stationarity of the time series;
performing stationarization processing on a non-stationary time series;
selecting an ARIMA model according to the identification rules for the time series, and estimating the parameters of the ARIMA model;
performing hypothesis testing to diagnose whether the residual sequence is white noise;
performing predictive analysis using the model that has passed the test.
The full name of the ARIMA model is the autoregressive integrated moving average model (Autoregressive Integrated Moving Average Model); it treats the data sequence formed by the prediction target over time as a random sequence and approximately describes this sequence with a certain mathematical model. Once identified, this model can predict future values from the past and present values of the time series. The ARIMA model prediction procedure is: examine the audio, sound frames and their patterns of change according to the autocorrelation function and the partial autocorrelation function, and identify the stationarity of the time series; perform stationarization processing on a non-stationary time series; select an ARIMA model according to the identification rules for the time series and estimate its parameters; perform hypothesis testing to diagnose whether the residual sequence is white noise; and perform predictive analysis using the model that has passed the test.
As shown in Fig. 2, in one embodiment, building the music prediction model according to the note information and the time-series prediction result includes:
Step S201: establish a single-layer long short-term memory (LSTM) network, and build a hidden layer using the Dropout mechanism;
LSTM (long short-term memory) is a kind of time-recurrent neural network containing LSTM blocks; an LSTM block may be described as an intelligent network element because it can remember values over indefinite time spans. There is a gate in the block that determines whether an input is important enough to be remembered and whether it can be output. The initial state is randomly initialized, with one input per step, the actual input being the word vector corresponding to each word. For the hidden state and hidden nodes, the state of the last step can be taken as the output, or the states of all steps can be weighted or directly averaged as the output, flexibly adjusted according to the specific task.
Step S202: copy the single-layer network into a double-layer network according to the single-layer LSTM;
An LSTM takes the form of a chain of repeating neural network modules, and the repeating module has a particular structure: four neural network layers that interact in a specific way. A horizontal line carries the cell state, on which only linear interactions occur, ensuring that information is passed along. Information is let through selectively by gates, each composed of a sigmoid neural network layer and a pointwise multiplication operation. The sigmoid layer maps variables to between 0 and 1, describing whether each component should pass the threshold; the sigmoid function is selected as the activation function and a sigmoid operation is performed on each input datum, although the hyperbolic tangent function (tanh) may also be chosen. By convention, 0 represents "let no component through" and 1 represents "let the component through". An LSTM has three such gates: a "forget gate sigmoid layer" that decides which information is to be discarded from the cell state; an "input gate layer" that decides which new information is to be stored in the cell state, together with a "tanh layer" that adds candidate values to the state; and a sigmoid layer and tanh function that determine which part of the cell state is to be output. A dual-channel long short-term memory network uses two LSTM networks together, which enhances the "learning" ability of the neural network and allows the intelligent system to fully "learn" the features in the music and acquire its playing style. For some complex sequences, a multilayer network is needed. To establish the model, multiple basic LSTM units are first aggregated into one using a multilayer RNN cell; the LSTM controls the state of the cell through a structure called a gate, and deletes information from, or adds information to, the cell state. It is worth noting that adding a unit requires re-calling a basic LSTM unit each time, because the function declares an internal variable on every call; otherwise these variables would be reused, producing errors. An initial value is set for the initial state of each layer. The initial values may also be generated using the zero_state method, but the intermediate states then cannot be explicitly controlled; the choice depends on the practical application. A multilayer LSTM outperforms a single layer, which conforms to the general law that as the data dimensionality grows and nonlinear factors increase, a multilayer neural network is needed for modeling.
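The aggregation of several basic LSTM units into a multilayer stack, with a fresh unit instantiated per layer and an explicit initial state per layer, can be sketched as follows. This is illustrative NumPy code, not the embodiment's implementation, and the weight layout inside each cell is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_lstm_cell(in_dim, hid_dim):
    """Each call instantiates a *fresh* basic LSTM unit with its own
    variables -- mirroring the note that every added layer must re-call
    a basic LSTM unit rather than reuse another layer's variables."""
    W = rng.normal(0.0, 0.1, (4 * hid_dim, in_dim + hid_dim))
    b = np.zeros(4 * hid_dim)
    def step(x, h, c):
        z = W @ np.concatenate([x, h]) + b
        f = sigmoid(z[:hid_dim])                    # forget gate
        i = sigmoid(z[hid_dim:2 * hid_dim])         # input gate
        o = sigmoid(z[2 * hid_dim:3 * hid_dim])     # output gate
        g = np.tanh(z[3 * hid_dim:])                # candidate values
        c_new = f * c + i * g
        return o * np.tanh(c_new), c_new
    return step

def stacked_step(cells, x, states):
    """One time step through the stack: layer k's hidden output becomes
    layer k+1's input; `states` holds one (h, c) pair per layer."""
    new_states, inp = [], x
    for cell, (h, c) in zip(cells, states):
        h, c = cell(inp, h, c)
        new_states.append((h, c))
        inp = h
    return inp, new_states

# Two-layer stack with an explicit (here zero) initial state per layer.
H = 4
cells = [make_lstm_cell(3, H), make_lstm_cell(H, H)]
states = [(np.zeros(H), np.zeros(H)) for _ in cells]
out, states = stacked_step(cells, rng.normal(size=3), states)
```

Because `make_lstm_cell` is called once per layer, each layer closes over its own `W` and `b`, which is the point of "re-calling a basic LSTM unit" for every added layer.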
Step S203: initialize the weight parameters, adjust the weight parameters in the double-layer network layer by layer in reverse using the back-propagation mechanism, optimize the loss function and improve the precision of the network through iterative training, and construct the music prediction model.
Before training starts, all weights are initialized with different small random numbers. The network model minimizes the loss function by gradient descent, adjusting the weight parameters in the network layer by layer in reverse, and improves the precision of the network through repeated iterative training. The relevant aspects include data preprocessing, parameter initialization, batch normalization (BN) regularization, random deactivation (Dropout), the loss function, classification and regression problems, gradient checking, model checking before and during training, and parameter updates. The same gradients are computed in back-propagation so that the same parameter updates are performed. From the perspective of the cost function, the initial parameters must not be too large either; it is therefore desirable that the initial weight values be very close to, but not equal to, zero.
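The points about initialization and iterative training can be made concrete with a toy example: the weights start as small random numbers close to, but not equal to, zero, and repeated updates with the back-propagated gradient of the loss drive the loss down. This is a hypothetical single-layer illustration, not the embodiment's network.

```python
import numpy as np

rng = np.random.default_rng(42)

# Small random initial weights: near zero but not all equal, so that
# different units receive different gradients (symmetry breaking).
w = rng.normal(0.0, 0.01, 3)

# Toy linear-regression target.
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

loss_before = mse(w)
for _ in range(500):
    grad = 2.0 * X.T @ (X @ w - y) / len(y)   # gradient from the backward pass
    w -= 0.1 * grad                           # iterative weight update
loss_after = mse(w)
print(loss_before, "->", loss_after)          # loss decreases over iterations
```

The same minimize-the-loss-by-gradient-descent loop, applied layer by layer through the chain rule, is what the back-propagation mechanism performs in the double-layer LSTM.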
As shown in Fig. 3, in one embodiment, establishing the single-layer long short-term memory network LSTM and establishing the hidden layer using the Dropout mechanism include:
Step S301: establish a single-layer long short-term memory network LSTM, the single-layer LSTM containing an LSTM block, the LSTM block including a gate, and the gate determining whether an input is important enough to be remembered and whether it may be output;
A long short-term memory (LSTM) network is a type of time-recurrent neural network containing LSTM blocks. An LSTM block may be described as an intelligent network unit, because it can remember values over an indefinite time span: a gate in the block determines whether an input is important enough to be remembered and whether it may be output. The initial state is randomly initialized, and each step takes one input, which in practice is the word vector corresponding to each word. For the hidden state, the hidden nodes may take the state of the last step as the output, or may take a weighted or direct average of the states of all steps as the output, adjusted flexibly according to the specific task.
The training process of a neural network conducts the input forward through the network and then back-propagates the error. The Dropout mechanism randomly deletes units of the hidden layer precisely during this process. In summary, the process may include: randomly deleting some hidden neurons in the network while keeping the input and output neurons unchanged; forward-propagating the input through the modified network and then back-propagating the error through the modified network; and repeating these operations for further batches of training samples, achieving the effect of a voting mechanism. For a fully connected neural network, training five different neural networks with the same data may yield several different results, and a voting mechanism can decide that the majority wins, which comparatively improves the precision and robustness of the network. For a single neural network, if training proceeds in batches, the differently thinned networks may produce different degrees of overfitting, but through their shared loss function they are optimized simultaneously, which amounts to averaging and can therefore more effectively prevent overfitting. This reduces the complex co-adaptation among neurons. After hidden-layer neurons are randomly deleted, the fully connected network acquires a certain sparsity, which effectively alleviates the synergistic effects of different features. That is, some features may depend on the joint action of hidden nodes in a fixed relationship, and the Dropout mechanism effectively prevents the situation in which certain features are effective only in the presence of other features, increasing the robustness of the neural network. Robustness means being healthy and strong; it is the key to a system's survival under abnormal and dangerous conditions. For example, whether computer software avoids crashing or freezing under input errors, disk failures, network overload or deliberate attack is the robustness of that software. So-called "robustness" means that a control system maintains certain other performance characteristics under perturbations of certain parameters (in structure or magnitude). According to the different definitions of performance, it can be divided into stability robustness and performance robustness. A static controller designed with the robustness of the closed-loop system as the objective is called a robust controller.
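The deletion of hidden units while keeping the input and output neurons unchanged can be sketched as an inverted-dropout mask. This is an illustrative sketch; the rate of 0.5 is an assumed value, not one the embodiment specifies.

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(h, rate=0.5, train=True):
    """Inverted dropout on a hidden-layer activation: randomly zero a
    fraction `rate` of the units and rescale the survivors so the expected
    activation is unchanged.  The vector's size -- and hence the input and
    output neurons around the hidden layer -- stays the same."""
    if not train:
        return h                              # no deletion at inference time
    mask = (rng.random(h.shape) >= rate).astype(h.dtype)
    return h * mask / (1.0 - rate)

h = np.ones(1000)
dropped = dropout(h)
print(dropped.shape, float(np.mean(dropped == 0.0)))  # same shape, ~half zeroed
```

Each training batch draws a fresh mask, so each batch effectively trains a different thinned sub-network, which is the "voting" effect described above.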
Step S302: randomly initialize the initial state, each step corresponding to one input value, the input value being the word vector corresponding to each word;
The initial state is randomly initialized, and each step takes one input, which in practice is the word vector corresponding to each word. For the hidden state, the hidden nodes may take the state of the last step as the output, or may take a weighted or direct average of the states of all steps as the output, adjusted flexibly according to the specific task.
Step S303: randomly delete the units of the hidden layer using the Dropout mechanism, keeping the input and output neurons unchanged.
The Dropout mechanism randomly deletes units of the hidden layer precisely during the training process. In summary, the process may include: randomly deleting some hidden neurons in the network while keeping the input and output neurons unchanged; forward-propagating the input through the modified network and then back-propagating the error through the modified network; and repeating these operations for further batches of training samples, achieving the effect of a voting mechanism.
As shown in Fig. 4, in one embodiment, copying the single-layer network into a double-layer network according to the single-layer LSTM includes:
Step S401: aggregate multiple basic LSTM units into one using a multilayer RNN unit;
An LSTM takes the form of a chain of repeating neural network modules, and the repeating module has a particular structure: four neural network layers that interact in a specific way. A horizontal line carries the cell state, on which only linear interactions occur, ensuring that information is passed along. Information is let through selectively by gates, each composed of a sigmoid neural network layer and a pointwise multiplication operation. The sigmoid layer maps variables to between 0 and 1, describing whether each component should pass the threshold; the sigmoid function is selected as the activation function and a sigmoid operation is performed on each input datum, although the hyperbolic tangent function (tanh) may also be chosen. By convention, 0 represents "let no component through" and 1 represents "let the component through". An LSTM has three such gates: a "forget gate sigmoid layer" that decides which information is to be discarded from the cell state; an "input gate layer" that decides which new information is to be stored in the cell state, together with a "tanh layer" that adds candidate values to the state; and a sigmoid layer and tanh function that determine which part of the cell state is to be output. A dual-channel long short-term memory network uses two LSTM networks together, which enhances the "learning" ability of the neural network and allows the intelligent system to fully "learn" the features in the music and acquire its playing style. For some complex sequences, a multilayer network is needed. To establish the model, multiple basic LSTM units are first aggregated into one using a multilayer RNN cell; the LSTM controls the state of the cell through a structure called a gate, and deletes information from, or adds information to, the cell state. It is worth noting that adding a unit requires re-calling a basic LSTM unit each time, because the function declares an internal variable on every call; otherwise these variables would be reused, producing errors. An initial value is set for the initial state of each layer. The initial values may also be generated using the zero_state method, but the intermediate states then cannot be explicitly controlled; the choice depends on the practical application. A multilayer LSTM outperforms a single layer, which conforms to the general law that as the data dimensionality grows and nonlinear factors increase, a multilayer neural network is needed for modeling.
Step S402: control the state of the RNN unit through a gate, delete information from it or add information to it, and re-call a basic LSTM unit each time an RNN unit is added;
To establish the model, multiple basic LSTM units are first aggregated into one using a multilayer RNN cell; the LSTM controls the state of the cell through a structure called a gate, and deletes information from, or adds information to, the cell state. It is worth noting that adding a unit requires re-calling a basic LSTM unit each time, because the function declares an internal variable on every call; otherwise these variables would be reused, producing errors.
Step S403: set an initial value for the initial state of each layer of the network.
An initial value is set for the initial state of each layer. The initial values may also be generated using the zero_state method, but the intermediate states then cannot be explicitly controlled; the choice depends on the practical application. A multilayer LSTM outperforms a single layer, which conforms to the general law that as the data dimensionality grows and nonlinear factors increase, a multilayer neural network is needed for modeling.
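The choice between a zero_state-style default and explicitly controlled per-layer initial states can be sketched with a hypothetical helper (the function name and signature are illustrative, not from the embodiment):

```python
import numpy as np

def initial_states(layer_sizes, values=None):
    """One (h, c) initial-state pair per layer.  With no `values` this
    mimics a zero_state-style default; passing explicit values lets the
    caller inspect and control each layer's starting state individually."""
    if values is None:
        return [(np.zeros(n), np.zeros(n)) for n in layer_sizes]
    return [(np.full(n, v), np.full(n, v)) for n, v in zip(layer_sizes, values)]

default = initial_states([4, 4])               # zero_state-like default
custom = initial_states([2, 3], [0.5, -1.0])   # explicitly controlled per layer
```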
As shown in Fig. 5, in one embodiment, a composing system based on artificial intelligence is provided, the composing system based on artificial intelligence including:
an acquisition module, configured to acquire note information, the note information including the playing start time, playing duration and pitch value of each note;
a creation module, configured to create a music time series and perform autoregressive integrated moving average (ARIMA) model prediction to obtain a time series prediction result;
a construction module, configured to construct a music prediction model according to the note information and the time series prediction result; and
a composition module, configured to determine the topological structure of the music prediction model, and to combine the music prediction model with the note information and the time series prediction result, together with the determined topological structure, to obtain the predicted music and realize automatic composition.
As shown in Fig. 6, in one embodiment, the construction module further includes:
an establishing unit, configured to establish a single-layer long short-term memory network LSTM and establish a hidden layer using the Dropout mechanism;
a copying unit, configured to copy the single-layer network into a double-layer network according to the single-layer LSTM; and
an optimization unit, configured to initialize weight parameters, adjust the weight parameters in the double-layer network layer by layer in reverse using the back-propagation mechanism, optimize the loss function and improve the precision of the network through iterative training, and construct the music prediction model.
As shown in Fig. 7, in one embodiment, the establishing unit further includes:
an establishing subunit, configured to establish a single-layer long short-term memory network LSTM, the single-layer LSTM containing an LSTM block, the LSTM block including a gate, and the gate determining whether an input is important enough to be remembered and whether it may be output;
a correspondence subunit, configured to randomly initialize the initial state, each step corresponding to one input value, the input value being the word vector corresponding to each word; and
a deletion subunit, configured to randomly delete the units of the hidden layer using the Dropout mechanism, keeping the input and output neurons unchanged.
As shown in Fig. 8, the copying unit further includes:
an aggregation subunit, configured to aggregate multiple basic LSTM units into one using a multilayer RNN unit;
a deletion subunit, configured to control the state of the RNN unit through a gate, delete information from it or add information to it, and re-call a basic LSTM unit each time an RNN unit is added; and
a setting subunit, configured to set an initial value for the initial state of each layer of the network.
In one embodiment, a computer device is proposed, the computer device including a memory and a processor, computer-readable instructions being stored in the memory, and the computer-readable instructions, when executed by the processor, causing the processor to perform the following steps: acquiring note information, the note information including the playing start time, playing duration and pitch value of each note; creating a music time series and performing autoregressive integrated moving average (ARIMA) model prediction to obtain a time series prediction result; constructing a music prediction model according to the note information and the time series prediction result; and determining the topological structure of the music prediction model, and combining the music prediction model with the note information and the time series prediction result, together with the determined topological structure, to obtain the predicted music and realize automatic composition.
In one embodiment, performing the autoregressive integrated moving average (ARIMA) model prediction includes:
identifying the stationarity of the time series by examining the audio, the sound frames and their variation rules according to the autocorrelation function and the partial autocorrelation function;
performing stationarization processing on a non-stationary time series;
selecting an ARIMA model according to the identification rules of the time series, and estimating the parameters of the ARIMA model;
performing hypothesis testing to diagnose whether the residual sequence is white noise; and
performing prediction analysis using the model that has passed the test.
In one embodiment, constructing the music prediction model according to the note information and the time series prediction result includes:
establishing a single-layer long short-term memory network LSTM, and establishing a hidden layer using the Dropout mechanism;
copying the single-layer network into a double-layer network according to the single-layer LSTM; and
initializing weight parameters, adjusting the weight parameters in the double-layer network layer by layer in reverse using the back-propagation mechanism, optimizing the loss function and improving the precision of the network through iterative training, and constructing the music prediction model.
In one embodiment, establishing the single-layer long short-term memory network LSTM and establishing the hidden layer using the Dropout mechanism include:
establishing a single-layer long short-term memory network LSTM, the single-layer LSTM containing an LSTM block, the LSTM block including a gate, and the gate determining whether an input is important enough to be remembered and whether it may be output;
randomly initializing the initial state, each step corresponding to one input value, the input value being the word vector corresponding to each word; and
randomly deleting the units of the hidden layer using the Dropout mechanism, keeping the input and output neurons unchanged.
In one embodiment, copying the single-layer network into a double-layer network according to the single-layer LSTM includes:
aggregating multiple basic LSTM units into one using a multilayer RNN unit;
controlling the state of the RNN unit through a gate, deleting information from it or adding information to it, and re-calling a basic LSTM unit each time an RNN unit is added; and
setting an initial value for the initial state of each layer of the network.
In one embodiment, a storage medium storing computer-readable instructions is proposed, the computer-readable instructions, when executed by one or more processors, causing the one or more processors to perform the following steps: acquiring note information, the note information including the playing start time, playing duration and pitch value of each note; creating a music time series and performing autoregressive integrated moving average (ARIMA) model prediction to obtain a time series prediction result; constructing a music prediction model according to the note information and the time series prediction result; and determining the topological structure of the music prediction model, and combining the music prediction model with the note information and the time series prediction result, together with the determined topological structure, to obtain the predicted music and realize automatic composition.
In one embodiment, performing the autoregressive integrated moving average (ARIMA) model prediction includes:
identifying the stationarity of the time series by examining the audio, the sound frames and their variation rules according to the autocorrelation function and the partial autocorrelation function;
performing stationarization processing on a non-stationary time series;
selecting an ARIMA model according to the identification rules of the time series, and estimating the parameters of the ARIMA model;
performing hypothesis testing to diagnose whether the residual sequence is white noise; and
performing prediction analysis using the model that has passed the test.
In one embodiment, constructing the music prediction model according to the note information and the time series prediction result includes:
establishing a single-layer long short-term memory network LSTM, and establishing a hidden layer using the Dropout mechanism;
copying the single-layer network into a double-layer network according to the single-layer LSTM; and
initializing weight parameters, adjusting the weight parameters in the double-layer network layer by layer in reverse using the back-propagation mechanism, optimizing the loss function and improving the precision of the network through iterative training, and constructing the music prediction model.
In one embodiment, establishing the single-layer long short-term memory network LSTM and establishing the hidden layer using the Dropout mechanism include:
establishing a single-layer long short-term memory network LSTM, the single-layer LSTM containing an LSTM block, the LSTM block including a gate, and the gate determining whether an input is important enough to be remembered and whether it may be output;
randomly initializing the initial state, each step corresponding to one input value, the input value being the word vector corresponding to each word; and
randomly deleting the units of the hidden layer using the Dropout mechanism, keeping the input and output neurons unchanged.
In one embodiment, copying the single-layer network into a double-layer network according to the single-layer LSTM includes:
aggregating multiple basic LSTM units into one using a multilayer RNN unit;
controlling the state of the RNN unit through a gate, deleting information from it or adding information to it, and re-calling a basic LSTM unit each time an RNN unit is added; and
setting an initial value for the initial state of each layer of the network.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, they should all be considered within the scope of this specification.
The above embodiments express only some exemplary implementations of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present invention. It should be pointed out that, for a person of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A composing method based on artificial intelligence, characterized by comprising the following steps:
acquiring note information, the note information comprising the playing start time, playing duration and pitch value of each note;
creating a music time series and performing autoregressive integrated moving average (ARIMA) model prediction to obtain a time series prediction result;
constructing a music prediction model according to the note information and the time series prediction result; and
determining the topological structure of the music prediction model, and combining the music prediction model with the note information and the time series prediction result, together with the determined topological structure, to obtain the predicted music and realize automatic composition.
2. The composing method based on artificial intelligence according to claim 1, characterized in that performing the autoregressive integrated moving average ARIMA model prediction comprises:
identifying the stationarity of the time series by examining the audio, the sound frames and their variation rules according to the autocorrelation function and the partial autocorrelation function;
performing stationarization processing on a non-stationary time series;
selecting an ARIMA model according to the identification rules of the time series, and estimating the parameters of the ARIMA model;
performing hypothesis testing to diagnose whether the residual sequence is white noise; and
performing prediction analysis using the model that has passed the test.
3. The composing method based on artificial intelligence according to claim 1, characterized in that constructing the music prediction model according to the note information and the time series prediction result comprises:
establishing a single-layer long short-term memory network LSTM, and establishing a hidden layer using the Dropout mechanism;
copying the single-layer network into a double-layer network according to the single-layer LSTM; and
initializing weight parameters, adjusting the weight parameters in the double-layer network layer by layer in reverse using the back-propagation mechanism, optimizing the loss function and improving the precision of the network through iterative training, and constructing the music prediction model.
4. The composing method based on artificial intelligence according to claim 3, characterized in that establishing the single-layer long short-term memory network LSTM and establishing the hidden layer using the Dropout mechanism comprise:
establishing a single-layer long short-term memory network LSTM, the single-layer LSTM containing an LSTM block, the LSTM block including a gate, and the gate determining whether an input is important enough to be remembered and whether it may be output;
randomly initializing the initial state, each step corresponding to one input value, the input value being the word vector corresponding to each word; and
randomly deleting the units of the hidden layer using the Dropout mechanism, keeping the input and output neurons unchanged.
5. The composing method based on artificial intelligence according to claim 3, characterized in that copying the single-layer network into a double-layer network according to the single-layer LSTM comprises:
aggregating multiple basic LSTM units into one using a multilayer RNN unit;
controlling the state of the RNN unit through a gate, deleting information from it or adding information to it, and re-calling a basic LSTM unit each time an RNN unit is added; and
setting an initial value for the initial state of each layer of the network.
6. A composing system based on artificial intelligence, characterized in that the composing system based on artificial intelligence comprises:
an acquisition module, configured to acquire note information, the note information comprising the playing start time, playing duration and pitch value of each note;
a creation module, configured to create a music time series and perform autoregressive integrated moving average (ARIMA) model prediction to obtain a time series prediction result;
a construction module, configured to construct a music prediction model according to the note information and the time series prediction result; and
a composition module, configured to determine the topological structure of the music prediction model, and to combine the music prediction model with the note information and the time series prediction result, together with the determined topological structure, to obtain the predicted music and realize automatic composition.
7. The composing system based on artificial intelligence according to claim 6, characterized in that the construction module further comprises:
an establishing unit, configured to establish a single-layer long short-term memory network LSTM and establish a hidden layer using the Dropout mechanism;
a copying unit, configured to copy the single-layer network into a double-layer network according to the single-layer LSTM; and
an optimization unit, configured to initialize weight parameters, adjust the weight parameters in the double-layer network layer by layer in reverse using the back-propagation mechanism, optimize the loss function and improve the precision of the network through iterative training, and construct the music prediction model.
8. The composing system based on artificial intelligence according to claim 7, characterized in that the establishing unit further comprises:
an establishing subunit, configured to establish a single-layer long short-term memory network LSTM, the single-layer LSTM containing an LSTM block, the LSTM block including a gate, and the gate determining whether an input is important enough to be remembered and whether it may be output;
a correspondence subunit, configured to randomly initialize the initial state, each step corresponding to one input value, the input value being the word vector corresponding to each word; and
a deletion subunit, configured to randomly delete the units of the hidden layer using the Dropout mechanism, keeping the input and output neurons unchanged;
and the copying unit further comprises:
an aggregation subunit, configured to aggregate multiple basic LSTM units into one using a multilayer RNN unit;
a deletion subunit, configured to control the state of the RNN unit through a gate, delete information from it or add information to it, and re-call a basic LSTM unit each time an RNN unit is added; and
a setting subunit, configured to set an initial value for the initial state of each layer of the network.
9. A computer device, characterized by comprising a memory and a processor, computer-readable instructions being stored in the memory, and the computer-readable instructions, when executed by the processor, causing the processor to perform the steps of the method according to any one of claims 1 to 5.
10. A storage medium storing computer-readable instructions, characterized in that the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the steps of the method according to any one of claims 1 to 5.
CN201810561621.5A 2018-06-04 2018-06-04 Composing method, system, computer equipment and storage medium based on artificial intelligence Pending CN109192187A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810561621.5A CN109192187A (en) 2018-06-04 2018-06-04 Composing method, system, computer equipment and storage medium based on artificial intelligence
PCT/CN2018/104715 WO2019232959A1 (en) 2018-06-04 2018-09-08 Artificial intelligence-based composing method and system, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810561621.5A CN109192187A (en) 2018-06-04 2018-06-04 Composing method, system, computer equipment and storage medium based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN109192187A true CN109192187A (en) 2019-01-11

Family

ID=64948568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810561621.5A Pending CN109192187A (en) 2018-06-04 2018-06-04 Composing method, system, computer equipment and storage medium based on artificial intelligence

Country Status (2)

Country Link
CN (1) CN109192187A (en)
WO (1) WO2019232959A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104952448A (en) * 2015-05-04 2015-09-30 张爱英 Method and system for enhancing features by aid of bidirectional long-term and short-term memory recurrent neural networks
CN107045867A (en) * 2017-03-22 2017-08-15 科大讯飞股份有限公司 Automatic composing method, device and terminal device
CN107102969A (en) * 2017-04-28 2017-08-29 湘潭大学 The Forecasting Methodology and system of a kind of time series data
CN107121679A (en) * 2017-06-08 2017-09-01 湖南师范大学 Recognition with Recurrent Neural Network predicted method and memory unit structure for Radar Echo Extrapolation
CN107123415A (en) * 2017-05-04 2017-09-01 吴振国 A kind of automatic music method and system
CN107622329A (en) * 2017-09-22 2018-01-23 深圳市景程信息科技有限公司 The Methods of electric load forecasting of Memory Neural Networks in short-term is grown based on Multiple Time Scales
CN107769972A (en) * 2017-10-25 2018-03-06 武汉大学 A kind of power telecom network equipment fault Forecasting Methodology based on improved LSTM

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160379105A1 (en) * 2015-06-24 2016-12-29 Microsoft Technology Licensing, Llc Behavior recognition and automation using a mobile device
CN107481048A (en) * 2017-08-08 2017-12-15 哈尔滨工业大学深圳研究生院 A kind of financial kind price expectation method and system based on mixed model
CN107644630B (en) * 2017-09-28 2020-07-28 北京灵动音科技有限公司 Melody generation method and device based on neural network and storage medium
CN107993636B (en) * 2017-11-01 2021-12-31 天津大学 Recursive neural network-based music score modeling and generating method

Patent Citations (7) — listed under "Citations" above.

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
方积乾: "《医学统计学手册》", 31 May 2018, 中国统计出版社, pages: 166 *
黄孝平: "当代机器深度学习方法与应用研究", 电子科技大学出版社, pages: 077 - 079 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120212A (en) * 2019-04-08 2019-08-13 华南理工大学 Piano auxiliary compositing system and method based on user's demonstration audio genre
CN110120212B (en) * 2019-04-08 2023-05-23 华南理工大学 Piano auxiliary composition system and method based on user demonstration audio frequency style
CN110288965A (en) * 2019-05-21 2019-09-27 北京达佳互联信息技术有限公司 A kind of music synthesis method, device, electronic equipment and storage medium
CN110288965B (en) * 2019-05-21 2021-06-18 北京达佳互联信息技术有限公司 Music synthesis method and device, electronic equipment and storage medium
CN111583891A (en) * 2020-04-21 2020-08-25 华南理工大学 Automatic musical note vector composing system and method based on context information
CN111583891B (en) * 2020-04-21 2023-02-14 华南理工大学 Automatic musical note vector composing system and method based on context information

Also Published As

Publication number Publication date
WO2019232959A1 (en) 2019-12-12

Similar Documents

Publication Publication Date Title
Burgess et al. A revised model of short-term memory and long-term learning of verbal sequences
Schmidhuber et al. Long short-term memory
Denoyer et al. Deep sequential neural network
CN105160249B (en) Virus detection method based on an improved artificial neural network ensemble
Mozer Connectionist music composition based on melodic, stylistic, and psychophysical constraints
CN107239825A (en) Deep neural network compression method considering load balancing
CN109192187A (en) Composing method, system, computer equipment and storage medium based on artificial intelligence
CN108630198A (en) Method and apparatus for training acoustic model
Suryo et al. Improved time series prediction using LSTM neural network for smart agriculture application
Shi et al. Symmetry in computer-aided music composition system with social network analysis and artificial neural network methods
CN114692310A (en) Virtual-real integration-two-stage separation model parameter optimization method based on Dueling DQN
CN115511069A (en) Neural network training method, data processing method, device and storage medium
Siphocly et al. Top 10 artificial intelligence algorithms in computer music composition
CN116128060A (en) Chess game method based on opponent modeling and Monte Carlo reinforcement learning
CN108470212B (en) Efficient LSTM design method capable of utilizing event duration
Zhang et al. Minicolumn-based episodic memory model with spiking neurons, dendrites and delays
Kotecha Bach2Bach: generating music using a deep reinforcement learning approach
Nayak et al. Optimizing a higher order neural network through teaching learning based optimization algorithm
Nikitin et al. Automated sound generation based on image colour spectrum with using the recurrent neural network
Fernandes et al. Enhanced deep hierarchal GRU & BILSTM using data augmentation and spatial features for tamil emotional speech recognition
Peterson et al. Modulating stdp with back-propagated error signals to train snns for audio classification
Mohanty et al. Temporally conditioning of generative adversarial networks with lstm for music generation
Szelogowski Generative deep learning for virtuosic classical music: Generative adversarial networks as renowned composers
Weyde et al. Design and optimization of neuro-fuzzy-based recognition of musical rhythm patterns
Rahal et al. Separated Feature Learning for Music Composition Using Memory-Based Neural Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190111