CN116013228A - Music generation method and device, electronic equipment and storage medium thereof - Google Patents

Music generation method and device, electronic equipment and storage medium thereof

Info

Publication number
CN116013228A
Authority
CN
China
Prior art keywords
emotion
data
music
current
moment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211696464.1A
Other languages
Chinese (zh)
Inventor
高毅 (Gao Yi)
胡卿瑞 (Hu Qingrui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202211696464.1A priority Critical patent/CN116013228A/en
Publication of CN116013228A publication Critical patent/CN116013228A/en
Pending legal-status Critical Current

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a music generation method and device, electronic equipment and a storage medium thereof. The method comprises: acquiring electroencephalogram data of a target object at the current moment, and determining current emotion data corresponding to the current electroencephalogram data; acquiring target emotion data, and performing prediction processing on the current emotion data and the target emotion data based on a preset prediction model to obtain an emotion parameter at the current moment; and determining music data based on the emotion parameter at the current moment, and playing the music data. The music system generated by this scheme can analyze and regulate the emotion of the target object in real time. It solves the problem that generated music deviates because the emotion of the participant cannot be objectively evaluated, avoids requiring the participant to actively report his or her emotion by filling in forms or the like, improves the accuracy of generating music that regulates the emotion of the target object according to that emotion, and makes the regulation of the target object's emotion faster and more effective.

Description

Music generation method and device, electronic equipment and storage medium thereof
Technical Field
The present invention relates to the field of computer-generated music, and in particular, to a music generating method, apparatus, electronic device, and storage medium thereof.
Background
With the development of computer technology, computer technology has also been applied in the field of music composition. Moreover, music has important significance for people's daily life and artistic enjoyment: music can express the emotion of a composer and can also influence the emotion of a listener.
Existing music generation methods typically screen pieces from a limited music library and work in units of whole pieces. On the one hand, the time resolution of music as an influencing factor is therefore very low; on the other hand, the melody cannot be dynamically adjusted according to the subject's real-time emotion, and the discrete pieces in the music library cannot meet the requirements of different users.
Disclosure of Invention
The invention provides a music generation method and device, electronic equipment and a storage medium thereof, which aim to generate, in a targeted way, the music data currently needed by a target object, so as to adjust the emotion of the target object.
According to an aspect of the present invention, there is provided a music generating method including:
acquiring electroencephalogram data of a target object at the current moment, and determining current emotion data corresponding to the current electroencephalogram data;
acquiring target emotion data, and carrying out prediction processing on the current emotion data and the target emotion data based on a preset prediction model to obtain emotion parameters at the current moment;
determining music data based on the emotion parameter at the current moment, and playing the music data.
Optionally, determining current emotion data corresponding to the current electroencephalogram data includes:
and carrying out emotion analysis processing on the current electroencephalogram data based on a preset emotion analysis model to obtain the current emotion data.
Optionally, determining music data based on the emotion parameter at the current moment, playing the music data, including:
inputting the emotion parameter at the current moment into a preset music generator to obtain music data output by the music generator; or,
and matching in a music database based on the emotion parameters at the current moment to obtain matched music data, wherein the music database comprises a plurality of candidate music data and emotion parameter ranges corresponding to the candidate music data.
Optionally, the training method of the prediction model includes:
creating an initial prediction model, and iteratively executing the following steps until the training ending condition is met, so as to obtain a trained prediction model:
acquiring electroencephalogram data of a training object at a first moment, and determining first emotion data corresponding to the electroencephalogram data at the first moment;
predicting the first emotion data and the target emotion data based on the initial prediction model to obtain emotion parameters at the first moment, and determining training music data based on the emotion parameters at the first moment;
Playing training music data, collecting electroencephalogram data of a training object at a second moment in the playing process of the training music data, and determining second emotion data corresponding to the electroencephalogram data at the second moment;
generating a first loss function based on the first emotion data and the second emotion data, and/or generating a second loss function based on the second emotion data and the target emotion data; and performing parameter adjustment on the initial prediction model based on the first loss function and/or the second loss function.
Optionally, generating the first loss function based on the first emotion data and the second emotion data includes:
determining an emotion difference value between the first emotion data and the second emotion data, and if the emotion difference value is smaller than a first preset threshold value, or if the emotion difference value is larger than the first preset threshold value but the change is in a direction different from the emotion adjustment direction corresponding to the target emotion data, generating a punishment function as the first loss function;
and if the emotion difference value is larger than the first preset threshold value and the change is in the same direction as the emotion adjustment direction corresponding to the target emotion data, generating a reward function as the first loss function.
Optionally, after obtaining the emotion parameter at the current moment, the method further comprises:
determining a data difference value between the emotion parameter at the current moment and the emotion parameter at the previous moment;
If the data difference value is larger than a preset threshold value, smoothing the emotion parameter at the current moment based on the emotion parameter at the previous moment to obtain updated emotion data at the current moment;
correspondingly, inputting emotion parameters at the current moment into a preset music generation model to obtain music data output by the music generation model, wherein the method comprises the following steps:
and inputting the updated emotion data at the current moment into a preset music generation model to obtain music data output by the music generation model.
Optionally, smoothing the emotion parameter at the current moment based on the emotion parameter at the previous moment to obtain updated emotion data at the current moment, including:
determining an emotion adjustment direction based on the current emotion data and the target emotion data;
the updated mood data at the current moment is determined based on the mood adjustment value in the mood adjustment direction and the mood parameter at the previous moment.
Optionally, after obtaining the emotion parameter at the current moment, the method further comprises:
determining a data difference value between the emotion parameter at the current moment and the emotion parameter at the previous moment;
if the data difference value is larger than the preset threshold value, generating smooth music data based on the data difference value before determining the music data based on the emotion parameter at the current moment, and playing the smooth music data.
Optionally, the method further comprises:
displaying an emotion visualization page, and performing emotion rendering in the emotion visualization page based on the emotion data at each moment; and/or,
acquiring target emotion data, including:
and responding to the triggering operation of the emotion visualization page, and determining target emotion data corresponding to the triggering operation.
According to another aspect of the present invention, there is provided a music generating apparatus comprising:
the emotion data determining module is used for acquiring electroencephalogram data of the target object at the current moment and determining current emotion data corresponding to the current electroencephalogram data;
the emotion parameter determining module is used for acquiring target emotion data, and predicting the current emotion data and the target emotion data based on a preset prediction model to obtain emotion parameters at the current moment;
and the music data playing module is used for determining the music data based on the emotion parameters at the current moment and playing the music data.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the music generating method of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to execute a music generating method according to any one of the embodiments of the present invention.
According to the technical scheme of the embodiments of the invention, music generated from the real-time emotion of the target object is obtained through a closed-loop emotion adjustment process in which the music stimulates the emotion of the target object and the resulting emotion change in turn updates the music. This adjusts the emotion of the target object, solves the problem that generated music deviates because emotion parameters obtained through the target object's self-evaluation cannot be objectively evaluated, enables emotion adjustment by music generated from the target object's real-time electroencephalogram data, and achieves accurate adjustment of a person's emotion in a designated direction.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a music generating method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a music generating method according to a second embodiment of the present invention;
fig. 3 is a flowchart of a music generating method according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a music generating apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device implementing a music generating method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a music generating method according to a first embodiment of the present invention, where the method may be applied to emotion adjustment by music, and the method may be performed by a music generating device, which may be implemented in hardware and/or software, and the music generating device may be configured in an electronic device such as a computer, a mobile phone, a tablet computer, or the like. As shown in fig. 1, the method includes:
S110, acquiring electroencephalogram data of the target object at the current moment, and determining current emotion data corresponding to the current electroencephalogram data.
The electroencephalogram data may be specifically understood as data obtained by performing signal processing on an electroencephalogram signal of a target object acquired by an electroencephalogram acquisition device, and the electroencephalogram data may include, but is not limited to, electroencephalogram (EEG). The electroencephalogram data acquisition device may include, but is not limited to, wearable devices such as electrode caps, and the electroencephalogram data acquisition device may be electrically connected or in communication with the electronic device in the embodiment, and the electroencephalogram data acquisition device transmits the acquired electroencephalogram signals to the electronic device. The acquisition of the electroencephalogram data of the target object can be performed at a plurality of moments, and the electroencephalogram acquisition device can be set, for example, to acquire the electroencephalogram data once every 10 seconds, 30 seconds or 40 seconds, and the acquisition time interval of the electroencephalogram data can be preset, for example, can be determined according to the setting operation of an operator of the target object or other devices. It should be noted that, the electroencephalogram signal is acquired based on the authorization of the target object.
The emotion data may be understood as emotional state data, which may include, but is not limited to, data representing sadness, happiness and the like, and may take the form of a data value, coordinate data or a data vector, which is not limited here. A correspondence between emotion data and emotional states is preset, so that the corresponding emotional state can be determined from the emotion data. The current emotion data can be obtained by processing the electroencephalogram data with an emotion analysis model, or through a database of mapping relations between electroencephalogram data and emotional states, and the like.
Specifically, the electroencephalogram acquisition device is worn on the head of the target object, the current electroencephalogram data of the target object is acquired, and the acquired electroencephalogram data is input into the emotion analysis model as an input parameter for analysis, obtaining the current emotion data corresponding to the electroencephalogram data. Optionally, the analyzed electroencephalogram signals are deleted, so as to avoid problems such as leakage of electroencephalogram information.
Optionally, performing emotion analysis processing on the current electroencephalogram data based on a preset emotion analysis model to obtain the current emotion data.
The emotion analysis model is a neural network model which takes electroencephalogram data as input and emotion data as output, and can be obtained by training on historical electroencephalogram data of training objects and the corresponding emotion data labels as training data. The emotion analysis model can map the electroencephalogram data at each moment into one emotional state and represent it as one scalar or a group of scalars, the number of output scalars depending on the dimension of the emotion model. For example, discrete emotions can be mapped onto a two-dimensional orthogonal space with arousal and valence as axes; this way of mapping discrete emotions onto a continuous space allows a specific emotional state to be represented by a quantity.
Specifically, an emotion analysis model is preset, a model in which electroencephalogram data is mapped on a two-dimensional orthogonal space can be adopted, current electroencephalogram data of a target object collected in real time is used as input of the emotion model, and after model analysis, current emotion data corresponding to the electroencephalogram data of the target object is output.
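For illustration only, the following sketch shows how such an analysis step could look in Python, assuming a pre-trained regressor emotion_model with a scikit-learn-style predict method that maps band-power features of one EEG window to a (valence, arousal) point; the feature extraction, sampling rate and model interface are hypothetical stand-ins rather than the emotion analysis model defined by the invention.

```python
import numpy as np

def extract_band_power(eeg_window: np.ndarray, fs: int = 250) -> np.ndarray:
    """Crude per-channel band-power features (delta/theta/alpha/beta) via FFT.

    eeg_window: array of shape (n_channels, n_samples).
    """
    freqs = np.fft.rfftfreq(eeg_window.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg_window, axis=1)) ** 2
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]   # Hz
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in bands]
    return np.concatenate(feats)                  # shape: (4 * n_channels,)

def analyze_emotion(eeg_window: np.ndarray, emotion_model) -> np.ndarray:
    """Map one EEG window to a point (valence, arousal) in the 2-D emotion space."""
    features = extract_band_power(eeg_window)
    return emotion_model.predict(features[None, :])[0]   # -> array([valence, arousal])
```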
S120, acquiring target emotion data, and carrying out prediction processing on the current emotion data and the target emotion data based on a preset prediction model to obtain emotion parameters at the current moment.
The target emotion data may be specifically understood as emotion data corresponding to an expected emotion state of the target object, and is emotion data corresponding to an expected emotion state after emotion adjustment is performed on the target object. The emotion parameter at the current moment can be understood as an emotion parameter required in the process of adjusting the emotion state corresponding to the acquired brain electrical data of the target object to the target emotion state, namely, the emotion parameter required for adjusting the current emotion data to the target emotion data, and the emotion parameter at the current moment is used as a reference for generating the music data.
Specifically, the collected emotion data of the target object at the current moment is used as an input parameter of the prediction model, and a designated target emotional state is also input into the prediction model. The prediction model is trained with the emotion data at the current moment and the target emotion data. During training, the prediction model outputs predicted emotion parameters from the emotion data at the current moment and the target emotion data; if the training music data generated from the predicted emotion parameters cannot adjust the emotion data of the training object at the current moment to be consistent with the target emotion data, a loss function is generated and fed back to the prediction model, and reinforcement learning is performed on the prediction model until the emotion parameters it generates can guide the generation of music data that accurately adjusts the emotion of the training object. The training process of the prediction model then ends, and the trained prediction model is obtained.
The predictive model training process is, illustratively, a closed-loop interactive training process in which a target emotional state is specified and the emotion of the training object is moved toward that target state. The input parameter of the music generation needs to be calculated from the emotional state of the training object: if the state of the training object at time t is denoted S_t, the input parameter to be calculated at that moment is X_t, the music piece generated at time t is denoted M_t, and the state at time t+1 is denoted S_{t+1}. The closed-loop interaction of the system can be expressed as:
X_t = f(S_t)
M_t = Gen(X_t)
S_{t+1} = α(S_t, M_t)
The idea of finding the input emotion parameter X_t of the music generation system can be described by a greedy algorithm: for the target state S_T, the function f needs to satisfy:
X_t = f(S_t) = argmin_X ‖S_T - S_{t+1}‖
and the music can not be changed rapidly, and the change value delta X can be set t The method comprises the following steps:
ΔX t =‖X t+1 -X t ‖≤δ
If the target state S is a scalar or can be mapped onto a scalar, denoted |S|, then the parameter X_t should change in the gradient direction that maximizes the emotion (the sign depending on the relative magnitudes of |S_t| and |S_T|), i.e.:
[gradient-update equation, shown in the original as image BDA0004022535280000081]
under the assumption that emotion is a simple one-dimensional problem, we can directly calculate the required input parameters according to the emotion analysis model and the music generation network.
Optionally, the training method of the prediction model includes: creating an initial prediction model, and iteratively executing the following steps 1-4 until a training ending condition is met, so as to obtain a trained prediction model:
step 1: and acquiring the electroencephalogram data of the training object at the first moment, and determining first emotion data corresponding to the electroencephalogram data at the first moment.
The electroencephalogram data at the first moment can be specifically understood as electroencephalogram data of a training object acquired by the electroencephalogram acquisition device at a certain moment, which can be the electroencephalogram data at the current moment, the electroencephalogram data at some past moment, and the like. The first emotion data may be an emotional state having a mapping relationship with the electroencephalogram data at the first moment. In addition, the training object may be the same as or different from the target object.
Specifically, electroencephalogram data of a training object at a first moment is acquired through an electroencephalogram acquisition device, and an emotion state corresponding to the electroencephalogram data at the first moment, namely first emotion data, is obtained through a mapping relation between the electroencephalogram data and the emotion state.
Step 2: and carrying out prediction processing on the first emotion data and the target emotion data based on the initial prediction model to obtain emotion parameters at the first moment, and determining training music data based on the emotion parameters at the first moment.
The emotion parameter at the first moment can be specifically understood as the emotion parameter required to move the first emotion data toward the target emotion data; the first emotion data and the target emotion data can be learned and analyzed by the initial prediction model to obtain the emotion parameter at the first moment. The training music data can be generated by a pre-built music generator, with the obtained emotion parameter at the first moment used as the input parameter of the music generator to generate the corresponding music piece. The music generator may be one that is continuous over its logical space, built using techniques such as a generative adversarial network.
Step 3: and playing the training music data, collecting the electroencephalogram data of the training object at the second moment in the playing process of the training music data, and determining second emotion data corresponding to the electroencephalogram data at the second moment.
The training music can be played by a music player in the electroencephalogram acquisition equipment; if no such player is available, the electroencephalogram acquisition equipment can be externally connected to a music player. The electroencephalogram data of the training object at the second moment can be understood as the electroencephalogram data collected after the training object hears the training music data, or the electroencephalogram data collected after the training music data has been played. The second moment is the next moment adjacent to the first moment. The second emotion data can be specifically understood as the corresponding emotional state data obtained by mapping the electroencephalogram data at the second moment.
Specifically, after the training object hears the training music data, the electroencephalogram data of the training object, i.e. the electroencephalogram data at the second moment, is collected again, and the emotional state data corresponding to the electroencephalogram data at the second moment, i.e. the second emotion data, is obtained through the mapping relation between electroencephalogram data and emotion data. It can be understood that the second emotion data is the emotion data obtained after the training object, starting from the first emotion data, has had its emotion adjusted by the training music data.
Step 4: generating a first penalty function based on the first and second mood data and/or generating a second penalty function based on the second mood data and the target mood data; parameter adjustments are made to the initial predictive model based on the first loss function and/or the second loss function.
The loss function is understood to mean, in particular, a function measuring the degree of difference between the predicted value and the actual value, used in the model training phase. The first loss function is a function for characterizing the degree of difference between the first emotion data and the second emotion data. The second loss function is a function for characterizing the degree of difference between the second emotion data and the target emotion data.
Optionally, an emotion difference value between the first emotion data and the second emotion data is determined; if the emotion difference value is smaller than the first preset threshold value, or if the emotion difference value is larger than the first preset threshold value but the change is in a direction different from the emotion adjustment direction corresponding to the target emotion data, a punishment function is generated as the first loss function; and if the emotion difference value is larger than the first preset threshold value and the change is in the same direction as the emotion adjustment direction corresponding to the target emotion data, a reward function is generated as the first loss function.
The first preset threshold value is a critical value characterizing emotion change and is used to measure the emotion difference value between the first emotion data and the second emotion data. If the emotion difference value is larger than the first preset threshold value, the training music data has adjusted the emotional state of the training object to a certain extent; if it is smaller than the first preset threshold value, the training music data has adjusted the emotional state of the training object only slightly. Setting the first preset threshold value thus makes it possible to judge whether the emotional state of the training object changes during the playing of the training music data and whether the amount of change meets the requirement. The first preset threshold value may be determined based on a large amount of experimental data. The emotion adjustment direction corresponding to the target emotion data may be obtained by taking the difference between the position vector corresponding to the target emotion data and the position vector corresponding to the first emotion data.
In this embodiment, the target emotion data is set to determine the emotion adjustment direction of the subject (i.e. the training object or the target object), and the music data is generated in a targeted manner to accurately adjust the emotion change of the subject. In the model training process, judging the emotion adjustment direction allows the model parameters to be adjusted accurately, so that the trained prediction model can generate emotion parameters that drive the current emotion data to change in the emotion adjustment direction of the target emotion data.
The loss function measures the degree of difference between the predicted value and the actual value of the model and is mainly used in the training stage of the model. The punishment function is determined by the relationship between the set preset threshold and the emotion data.
Specifically, a difference value judgment is performed on the first emotion data and the second emotion data output through the prediction model. If the difference value of the two emotion data is smaller than the first preset threshold value, the emotion adjustment does not meet the adjustment requirement, i.e. the generated training music data does not meet the adjustment requirement, and a punishment function is generated as the first loss function. If the difference value of the two emotion data is larger than the first preset threshold value but the adjustment direction is inconsistent with the emotion adjustment direction corresponding to the target emotion, the training music data has adjusted the emotion of the training object to a certain extent but in an inaccurate direction, and a punishment function is generated as the first loss function. If the difference value of the two emotion data is larger than the first preset threshold value and the adjustment direction is consistent with the emotion adjustment direction corresponding to the target emotion, the training music data has adjusted the emotion of the training object to a certain extent and in the right direction, and a reward function is generated as the first loss function. Learning and parameter adjustment of the prediction model then continue based on the first loss function, finally obtaining the trained prediction model. The reward function and the punishment function may be preset and called directly.
The second loss function can be determined through the difference value between the second emotion data and the target emotion data output by the prediction model, with an error value set for the two emotion data. If the difference value between the second emotion data and the target emotion data is smaller than the error value, the emotion adjustment meets the adjustment requirement, i.e. the generated training music data meets the adjustment requirement, and a reward function is generated as the second loss function. If the difference value of the two emotion data is larger than the error value, or the adjustment direction is inconsistent with the emotion adjustment direction corresponding to the target emotion, the training music data has adjusted the emotion of the training object to a certain extent but not accurately, and a punishment function is generated as the second loss function. Learning and parameter adjustment of the prediction model then continue based on the second loss function, finally obtaining the trained prediction model.
The first loss function or the second loss function can each be used on its own to adjust the model so that the target emotion parameters are generated, or the first loss function and the second loss function can be combined to adjust it jointly; for example, the first loss function and the second loss function can be added to obtain a target loss function, and learning and parameter adjustment of the prediction model can continue based on the target loss function, finally obtaining the trained prediction model.
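As one hedged reading of the reward/penalty rule above, the two loss terms could be computed as follows; the fixed reward and penalty magnitudes and the dot-product test for "same adjustment direction" are illustrative choices, not the patent's exact definition.

```python
import numpy as np

def first_loss(first_emotion: np.ndarray, second_emotion: np.ndarray,
               target_emotion: np.ndarray, threshold: float = 0.1) -> float:
    """Reward (negative loss) when emotion changed enough and toward the target,
    penalty (positive loss) otherwise; magnitudes are placeholders."""
    change = second_emotion - first_emotion
    target_direction = target_emotion - first_emotion
    moved_enough = np.linalg.norm(change) > threshold
    same_direction = float(np.dot(change, target_direction)) > 0.0
    if moved_enough and same_direction:
        return -float(np.linalg.norm(change))   # reward: adjusted in the right direction
    return 1.0                                  # penalty: too little change or wrong direction

def second_loss(second_emotion: np.ndarray, target_emotion: np.ndarray) -> float:
    """Distance of the post-stimulus emotion from the target emotion."""
    return float(np.linalg.norm(target_emotion - second_emotion))
```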
In this embodiment, through a training process of a prediction model of closed-loop interaction, by designating a target emotional state and moving the emotional state of the training object toward the target emotional state, in a continuous training process, an emotional parameter that can enable the emotional state of the training object to reach the target emotional state, that is, an input parameter of a music generator, is obtained. By adopting a closed-loop process of stimulating the emotion change of the participants and inducing the change of the music generator by the emotion change of the participants, the emotion prediction model and the music generator can adaptively compensate individual differences in the process, and the interactive process is also a continuous reinforcement learning process of the closed-loop real-time interactive system, so that the state of a person can be moved towards the direction of maximizing a certain emotion state.
S130, determining music data based on the emotion parameters at the current moment, and playing the music data.
Wherein the determination of the music data can be accomplished by constructing a music generator; a music generator whose logical space is continuous with respect to the emotion data can be obtained by using techniques such as a generative adversarial network.
Specifically, emotion parameters output by the prediction model are used as input parameters of a music generator, corresponding music data are further obtained, and then the music data are played through a music playing device.
Alternatively, the music data may be generated by: and inputting the emotion parameters at the current moment into a preset music generator to obtain music data output by the music generator.
The music data may be generated by: and matching in a music database based on the emotion parameters at the current moment to obtain matched music data, wherein the music database comprises a plurality of candidate music data and emotion parameter ranges corresponding to the candidate music data.
The matching of music data is specifically understood to be a method for acquiring music data in combination with a screening algorithm, that is, a matching method for selecting melodies with a great influence on a training object from a music database through a step-by-step test. The melodies in the music database cannot be too discrete from one another.
Specifically, in the case of determining the emotion parameter at the current moment, the emotion parameter may be input as an input parameter of the music generator, so as to generate a corresponding music piece. The music database with a large number of melody libraries can be tested step by a screening algorithm, and melodies with larger influence on the emotion of the training object can be screened out.
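A small sketch of the database-matching alternative follows, assuming each candidate clip is stored together with the emotion-parameter range it suits; the field names and the nearest-range fallback are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CandidateMusic:
    path: str            # location of the audio clip
    param_low: float     # lower bound of the emotion-parameter range it suits
    param_high: float    # upper bound of that range

def match_music(emotion_param: float,
                database: List[CandidateMusic]) -> Optional[CandidateMusic]:
    """Return the first candidate whose range covers the emotion parameter,
    falling back to the candidate whose range centre is closest."""
    for clip in database:
        if clip.param_low <= emotion_param <= clip.param_high:
            return clip
    if not database:
        return None
    return min(database,
               key=lambda c: abs((c.param_low + c.param_high) / 2.0 - emotion_param))
```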
Further, the method further comprises: and displaying an emotion visualization page, and performing emotion rendering in the emotion visualization page based on emotion data of each moment.
The electroencephalogram data acquisition equipment is externally connected with a display device, where the display device includes smart devices such as smartphones, computers and tablets. The collected electroencephalogram data, the emotion data corresponding to the electroencephalogram data, and the emotional state at each moment adjusted through the prediction model can be displayed on the display interface of the external display device. The emotional state can thus be fed back to the target object in real time through the visualization page.
On the basis of the above embodiment, the obtaining of the target emotion data may be: and responding to the triggering operation of the emotion visualization page, and determining target emotion data corresponding to the triggering operation.
The triggering operation may be triggered by a button on the visual interface, or by an instruction input from an external device, which is not limited here. To acquire the target emotion, the emotional state data that the training object wants to reach can be input as target emotion data through the external input device of the visual interface.
Specifically, the electroencephalogram data of the target object at each moment is collected and displayed on the visual interface, the emotional state of the target object is adjusted through the trained model, and the emotion change at each moment is displayed on the visual interface.
In this embodiment, the initial emotion data of the training process and the emotion change data at each moment are displayed by adding the visual interface, so that the emotion of the target object can be better guided to move towards the target emotion state, and if deviation exists in the training process, the deviation can be timely changed through the visual interface, so that the difference of individuals can be better adaptively compensated.
According to the technical scheme of this embodiment, electroencephalogram signals are collected in real time and emotion is analyzed quantitatively in real time. Through a closed-loop process in which the music stimulates the emotion change of the target object and the emotion change of the target object in turn drives the change of the music generator, an emotion analyzer is constructed, and emotion parameters are then calculated that move the emotional state of the target object toward the direction that maximizes the designated emotional state. Music is then generated by the music generator to adjust the emotional state of the target object. The real-time interactive system uses the posterior information obtained by analyzing the person's emotional state, so the emotion adjustment system can make better decisions from more information, giving the constructed music generation system the capability to adjust the emotion of the target object more rapidly and effectively.
Example two
Fig. 2 is a flowchart of a music generating method according to a second embodiment of the present invention, where the method of the foregoing embodiment is optimized, and optionally, a data difference between an emotion parameter at a current time and an emotion parameter at a previous time is determined; if the data difference value is larger than a preset threshold value, smoothing the emotion parameter at the current moment based on the emotion parameter at the previous moment to obtain updated emotion data at the current moment; and inputting the updated emotion data at the current moment into a preset music generation model to obtain music data output by the music generation model. As shown in fig. 2, the method includes:
S210, acquiring electroencephalogram data of the target object at the current moment, and determining current emotion data corresponding to the current electroencephalogram data.
S220, acquiring target emotion data, and carrying out prediction processing on the current emotion data and the target emotion data based on a preset prediction model to obtain emotion parameters at the current moment.
S230, determining a data difference value between the emotion parameter at the current moment and the emotion parameter at the previous moment.
The data difference value can be understood as a difference value between the emotion parameters of two adjacent moments of the prediction model output.
For example, the emotion parameter at the current moment is X_{t+1} and the emotion parameter at the previous moment is X_t. The difference between the emotion parameters at the two moments is ΔX_t:
ΔX_t = ‖X_{t+1} - X_t‖
And S240, if the data difference value is larger than a preset threshold value, smoothing the emotion parameter at the current moment based on the emotion parameter at the previous moment to obtain updated emotion data at the current moment.
Here, smoothing is specifically understood as keeping the difference between the two pieces of music generated from the emotion parameters at two adjacent moments small. In particular, the smoothing process may insert a smooth interpolation at the transition between the two emotion parameters, which may be performed using a generative adversarial network (GAN). The emotion parameter at the current moment is then updated with the obtained smooth interpolation, so that the updated emotion parameter at the current moment is obtained.
Optionally, determining an emotion adjustment direction based on the current emotion data and the target emotion data; the updated mood data at the current moment is determined based on the mood adjustment value in the mood adjustment direction and the mood parameter at the previous moment.
The emotion adjustment direction can be understood as the direction in which the current emotion data is adjusted toward the target emotion data, and can be determined by the emotion analysis model. The current emotion data and the target emotion data are taken as input parameters of the emotion analysis model, and the emotion adjustment direction is determined through continuous analysis and learning of the emotion analysis model. For the emotion adjustment value, the prediction model can learn from and analyze the emotion data at the previous moment and the emotion adjustment direction to generate an emotion adjustment value, and further determine the corresponding emotion parameter increment ΔX, so that the emotion parameter at the current moment X_{t+1} is the sum of the emotion parameter at the previous moment X_t and ΔX, i.e. X_{t+1} = X_t + ΔX.
In this embodiment, when the difference value between the emotion parameter at the current moment and the emotion parameter at the previous moment is larger than the preset threshold, the emotion parameter at the current moment is adjusted in time to obtain the updated emotion parameter, so as to reduce the difference between the two pieces of music generated at adjacent moments.
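A minimal sketch of one way to realise this smoothing is shown below, replacing the GAN-based interpolation mentioned above with a simple clamp on the step toward the newly predicted parameter; the clamp is an assumed simplification, not the patent's method.

```python
def smooth_emotion_parameter(x_prev: float, x_pred: float, max_step: float) -> float:
    """Limit the change of the music-generator input between adjacent moments.

    x_prev:   emotion parameter used at the previous moment (X_t)
    x_pred:   raw prediction for the current moment (X_{t+1} before smoothing)
    max_step: largest allowed |X_{t+1} - X_t| (the preset threshold)
    """
    delta = x_pred - x_prev
    if abs(delta) <= max_step:
        return x_pred
    # Step in the adjustment direction, but no further than max_step.
    return x_prev + max_step * (1.0 if delta > 0 else -1.0)
```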
S250, inputting the updated emotion data at the current moment into a preset music generation model to obtain music data output by the music generation model.
The music generation model may be specifically understood as a model taking emotion parameters as input and music pieces as output, and may use a generative adversarial network (GAN) to process the emotion parameters.
Specifically, the updated emotion parameters are used as input parameters of a music generation model, and updated music data at the current moment is generated, wherein the difference between the updated music data at the current moment and the music piece generated at the last moment is small.
And S260, determining music data based on the emotion parameters at the current moment, and playing the music data.
According to the technical scheme of this embodiment, when the difference value between the emotion parameter at the current moment and the emotion parameter at the previous moment is larger than the preset threshold value, the emotion parameter is adjusted toward the target emotion data to obtain the updated emotion parameter at the current moment, which is then input into the music generation model. This generates a music piece that differs little from the music piece generated at the previous moment, achieving a smooth transition between music pieces.
Example III
Fig. 3 is a flowchart of a music generating method according to a third embodiment of the present invention, where the method of the foregoing embodiment is optimized, and optionally, a data difference between an emotion parameter at a current time and an emotion parameter at a previous time is determined; if the data difference value is larger than the preset threshold value, generating smooth music data based on the data difference value before determining the music data based on the emotion parameter at the current moment, and playing the smooth music data. As shown in fig. 3, the method includes:
S310, acquiring electroencephalogram data of the target object at the current moment, and determining current emotion data corresponding to the current electroencephalogram data.
S320, acquiring target emotion data, and carrying out prediction processing on the current emotion data and the target emotion data based on a preset prediction model to obtain emotion parameters at the current moment.
S330, determining a data difference value between the emotion parameter at the current moment and the emotion parameter at the previous moment.
And S340, if the data difference value is larger than a preset threshold value, generating smooth music data based on the data difference value before determining the music data based on the emotion parameter at the current moment, and playing the smooth music data.
Specifically, when it is determined that the difference between the emotion parameters at the current time and the emotion parameters at the previous time is greater than a preset threshold, generating music by using the difference as an input parameter of a music generation model, generating a smooth music piece conforming to the difference, and playing the smooth music by a music player.
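The control flow of this embodiment can be sketched as follows, assuming a callable music_generator that turns an emotion parameter into a clip and a play function for output; the choice of an intermediate parameter for the transition clip is an illustrative assumption.

```python
def play_with_transition(x_prev: float, x_curr: float,
                         music_generator, play, threshold: float) -> None:
    """Play a smoothing clip first when the parameter jump exceeds the threshold."""
    diff = x_curr - x_prev
    if abs(diff) > threshold:
        # Transition clip conditioned on an intermediate parameter (illustrative choice).
        play(music_generator(x_prev + diff / 2.0))
    play(music_generator(x_curr))
```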
S350, determining music data based on the emotion parameters at the current moment, and playing the music data.
Specifically, after smooth music is played, the electroencephalogram acquisition equipment continues to acquire current emotion data of the target object, the steps are continuously executed, emotion parameters at the current moment are generated, the current emotion parameters are used as input through the music generation model, corresponding music data are output, and an external music player is used for playing the music data.
According to the technical scheme of this embodiment, when the difference value between the emotion parameter at the current moment and the emotion parameter at the previous moment is judged to be larger than the preset threshold value, reinforcement learning can be conducted directly on the calculated difference value of the emotion data, the corresponding emotion parameter is generated, and smooth music is generated from that parameter to adjust the emotion of the target object. Generating smooth music from the data difference value of the emotion parameters before determining the music data based on the emotion parameter at the current moment is an effective way of transitioning smoothly between differing music pieces, so that the music pieces finally generated by the music generator are spliced together more seamlessly.
Example IV
Fig. 4 is a schematic structural diagram of a music generating apparatus according to a fourth embodiment of the present invention. As shown in fig. 4, the apparatus includes:
The emotion data determining module 410 is configured to obtain electroencephalogram data of the target object at the current moment, and determine current emotion data corresponding to the current electroencephalogram data;
the emotion parameter determining module 420 is configured to obtain target emotion data, and predict current emotion data and target emotion data based on a preset prediction model to obtain an emotion parameter at the current moment;
the music data playing module 430 is configured to determine music data based on the emotion parameter at the current time, and play the music data.
Optionally, the emotion data determination module 410 is specifically configured to:
and carrying out emotion analysis processing on the current electroencephalogram data based on a preset emotion analysis model to obtain the current emotion data.
The method further comprises the steps of:
displaying an emotion visualization page, and performing emotion rendering in the emotion visualization page based on the emotion data at each moment; and/or,
acquiring target emotion data, including:
and responding to the triggering operation of the emotion visualization page, and determining target emotion data corresponding to the triggering operation.
Optionally, the emotion parameter determination module 420 is specifically configured to:
the training method of the prediction model comprises the following steps: creating an initial prediction model, and iteratively executing the following steps until the training ending condition is met, so as to obtain a trained prediction model:
Acquiring electroencephalogram data of a training object at a first moment, and determining first emotion data corresponding to the electroencephalogram data at the first moment;
predicting the first emotion data and the target emotion data based on the initial prediction model to obtain emotion parameters at the first moment, and determining training music data based on the emotion parameters at the first moment;
playing training music data, collecting electroencephalogram data of a training object at a second moment in the playing process of the training music data, and determining second emotion data corresponding to the electroencephalogram data at the second moment;
generating a first loss function based on the first emotion data and the second emotion data, and/or generating a second loss function based on the second emotion data and the target emotion data; and performing parameter adjustment on the initial prediction model based on the first loss function and/or the second loss function.
Generating the first loss function based on the first emotion data and the second emotion data includes:
determining an emotion difference value between the first emotion data and the second emotion data, and if the emotion difference value is smaller than a first preset threshold value, or if the emotion difference value is larger than the first preset threshold value but the change is in a direction different from the emotion adjustment direction corresponding to the target emotion data, generating a punishment function as the first loss function;
and if the emotion difference value is larger than the first preset threshold value and the change is in the same direction as the emotion adjustment direction corresponding to the target emotion data, generating a reward function as the first loss function.
After obtaining the emotion parameter at the current moment, the method further comprises:
determining a data difference value between the emotion parameter at the current moment and the emotion parameter at the previous moment;
if the data difference value is larger than a preset threshold value, smoothing the emotion parameter at the current moment based on the emotion parameter at the previous moment to obtain updated emotion data at the current moment;
correspondingly, inputting emotion parameters at the current moment into a preset music generation model to obtain music data output by the music generation model, wherein the method comprises the following steps:
and inputting the updated emotion data at the current moment into a preset music generation model to obtain music data output by the music generation model.
Smoothing the emotion parameters at the current moment based on the emotion parameters at the previous moment to obtain updated emotion data at the current moment, wherein the smoothing comprises the following steps:
determining an emotion adjustment direction based on the current emotion data and the target emotion data;
the updated mood data at the current moment is determined based on the mood adjustment value in the mood adjustment direction and the mood parameter at the previous moment.
After obtaining the emotion parameter at the current moment, the method further comprises:
determining a data difference value between the emotion parameter at the current moment and the emotion parameter at the previous moment;
if the data difference value is larger than the preset threshold value, generating smooth music data based on the data difference value before determining the music data based on the emotion parameter at the current moment, and playing the smooth music data.
Optionally, the music data playing module 430 is specifically configured to:
inputting the emotion parameter at the current moment into a preset music generator to obtain music data output by the music generator; or,
and matching in a music database based on the emotion parameters at the current moment to obtain matched music data, wherein the music database comprises a plurality of candidate music data and emotion parameter ranges corresponding to the candidate music data.
The music generating device provided by the embodiment of the invention can execute the music generating method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Example five
Fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention. The electronic device 10 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in fig. 5, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the respective methods and processes described above, for example, a music generation method.
In some embodiments, the music generation method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the music generation method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the music generation method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor capable of receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
The computer program for implementing the music generation method of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
Example six
The sixth embodiment of the present invention also provides a computer-readable storage medium storing computer instructions for causing a processor to execute a music generating method, the method comprising:
acquiring brain electricity data of a target object at the current moment, and determining current emotion data corresponding to the current brain electricity data;
acquiring target emotion data, and carrying out prediction processing on the current emotion data and the target emotion data based on a preset prediction model to obtain emotion parameters at the current moment;
music data is determined based on the emotion parameter at the current time, and the music data is played.
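A minimal sketch of how these three steps could be chained into a closed loop is given below. Every interface name (`read`, `analyze`, `predict`, `render_and_play`) and the acquisition interval are placeholders; the application does not prescribe concrete APIs or timing.

```python
import time

def run_closed_loop(eeg_source, emotion_model, prediction_model, music_module,
                    target_emotion, interval_s=5.0):
    """Acquire EEG data, estimate the current emotion, predict the emotion
    parameter and render music, repeating at a fixed interval
    (all four objects are placeholder interfaces)."""
    while True:
        eeg = eeg_source.read()                                   # EEG data at the current moment
        current_emotion = emotion_model.analyze(eeg)              # current emotion data
        param = prediction_model.predict(current_emotion, target_emotion)
        music_module.render_and_play(param)                       # generator or database matching
        time.sleep(interval_s)                                    # wait for the next moment
```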
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and virtual private server (VPS) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (12)

1. A music generation method, comprising:
acquiring brain electricity data of a target object at the current moment, and determining current emotion data corresponding to the current brain electricity data;
acquiring target emotion data, and carrying out prediction processing on the current emotion data and the target emotion data based on a preset prediction model to obtain emotion parameters at the current moment;
and determining music data based on the emotion parameters at the current moment, and playing the music data.
2. The method of claim 1, wherein determining current mood data corresponding to the current brain electrical data comprises:
and carrying out emotion analysis processing on the current electroencephalogram data based on a preset emotion analysis model to obtain current emotion data.
3. The method of claim 1, wherein the determining music data based on the emotion parameters at the current moment and playing the music data comprises:
inputting the emotion parameters at the current moment into a preset music generator to obtain music data output by the music generator; or
matching in a music database based on the emotion parameters at the current moment to obtain matched music data, wherein the music database comprises a plurality of candidate music data and emotion parameter ranges corresponding to the candidate music data.
4. The method of claim 1, wherein the training method of the predictive model comprises:
creating an initial prediction model, and iteratively executing the following steps until the training ending condition is met, so as to obtain a trained prediction model:
acquiring electroencephalogram data of a training object at a first moment, and determining first emotion data corresponding to the electroencephalogram data at the first moment;
predicting the first emotion data and the target emotion data based on the initial prediction model to obtain emotion parameters at a first moment, and determining training music data based on the emotion parameters at the first moment;
playing the training music data, collecting electroencephalogram data of the training object at a second moment during the playing of the training music data, and determining second emotion data corresponding to the electroencephalogram data at the second moment;
generating a first loss function based on the first emotion data and the second emotion data, and/or generating a second loss function based on the second emotion data and the target emotion data; and performing parameter adjustment on the initial prediction model based on the first loss function and/or the second loss function.
5. The method of claim 4, wherein the generating a first loss function based on the first emotion data and the second emotion data comprises:
determining an emotion difference value between the first emotion data and the second emotion data, and if the emotion difference value is smaller than a first preset threshold value, or if the emotion difference value is larger than the first preset threshold value but its direction differs from the emotion adjustment direction corresponding to the target emotion data, generating a punishment function as the first loss function;
and if the emotion difference value is larger than the first preset threshold value and its direction is the same as the emotion adjustment direction corresponding to the target emotion data, generating a reward function as the first loss function.
6. The method of claim 1, wherein after obtaining the mood parameter for the current time, the method further comprises:
determining a data difference value between the emotion parameter at the current moment and the emotion parameter at the previous moment;
if the data difference value is larger than a preset threshold value, smoothing the emotion parameter at the current moment based on the emotion parameter at the previous moment to obtain updated emotion data at the current moment;
correspondingly, inputting the emotion parameters at the current moment into a preset music generation model to obtain music data output by the music generation model, wherein the method comprises the following steps:
inputting the updated emotion data at the current moment into the preset music generation model to obtain the music data output by the music generation model.
7. The method of claim 6, wherein smoothing the emotion parameter at the current time based on the emotion parameter at the previous time to obtain updated emotion data at the current time, comprises:
determining an emotion adjustment direction based on the current emotion data and the target emotion data;
and determining updated emotion data at the current moment based on the emotion adjustment value in the emotion adjustment direction and the emotion parameter at the previous moment.
8. The method of claim 1, wherein after obtaining the mood parameter for the current time, the method further comprises:
determining a data difference value between the emotion parameter at the current moment and the emotion parameter at the previous moment;
and if the data difference value is larger than a preset threshold value, generating smooth music data based on the data difference value before determining the music data based on the emotion parameter at the current moment, and playing the smooth music data.
9. The method according to claim 1, wherein the method further comprises:
displaying an emotion visualization page, and performing emotion rendering in the emotion visualization page based on the emotion data at each moment; and/or
the obtaining the target emotion data includes:
and responding to the triggering operation of the emotion visualization page, and determining target emotion data corresponding to the triggering operation.
10. A music generating apparatus, comprising:
the emotion data determining module is used for acquiring the brain electrical data of the target object at the current moment and determining the current emotion data corresponding to the current brain electrical data;
the emotion parameter determining module is used for acquiring target emotion data, and predicting the current emotion data and the target emotion data based on a preset prediction model to obtain emotion parameters at the current moment;
and the music data playing module is used for determining music data based on the emotion parameters at the current moment and playing the music data.
11. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the music generation method of any one of claims 1-9.
12. A computer readable storage medium storing computer instructions for causing a processor to implement the music generation method of any one of claims 1-9 when executed.
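For orientation only, the reward/penalty construction described in claims 4 and 5 might look like the following sketch, in which emotion data are numeric vectors and the returned scalar is fed to the parameter update of the initial prediction model. The concrete scalar values, the threshold, and the vector representation are all assumptions and are not fixed by this application.

```python
import numpy as np

def first_loss(first_emotion, second_emotion, target_emotion, threshold=0.1):
    """Illustrative first loss: reward when the emotion change between the two
    moments is large enough and points toward the target emotion, penalise
    otherwise (scalars chosen only for illustration)."""
    first = np.asarray(first_emotion, dtype=float)
    second = np.asarray(second_emotion, dtype=float)
    target = np.asarray(target_emotion, dtype=float)

    change = second - first                      # emotion difference between the two moments
    toward_target = target - first               # emotion adjustment direction
    large_enough = np.linalg.norm(change) > threshold
    same_direction = float(np.dot(change, toward_target)) > 0.0

    return -1.0 if (large_enough and same_direction) else 1.0
```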
CN202211696464.1A 2022-12-28 2022-12-28 Music generation method and device, electronic equipment and storage medium thereof Pending CN116013228A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211696464.1A CN116013228A (en) 2022-12-28 2022-12-28 Music generation method and device, electronic equipment and storage medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211696464.1A CN116013228A (en) 2022-12-28 2022-12-28 Music generation method and device, electronic equipment and storage medium thereof

Publications (1)

Publication Number Publication Date
CN116013228A true CN116013228A (en) 2023-04-25

Family

ID=86036776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211696464.1A Pending CN116013228A (en) 2022-12-28 2022-12-28 Music generation method and device, electronic equipment and storage medium thereof

Country Status (1)

Country Link
CN (1) CN116013228A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116825060A (en) * 2023-08-31 2023-09-29 小舟科技有限公司 AI generation music optimization method based on BCI emotion feedback and related device
CN116825060B (en) * 2023-08-31 2023-10-27 小舟科技有限公司 AI generation music optimization method based on BCI emotion feedback and related device
CN117056554A (en) * 2023-10-12 2023-11-14 杭州般意科技有限公司 Music configuration method, device, terminal and medium for brain stem pre-stimulation
CN117056554B (en) * 2023-10-12 2024-01-30 深圳般意科技有限公司 Music configuration method, device, terminal and medium for brain stem pre-stimulation

Similar Documents

Publication Publication Date Title
CN116013228A (en) Music generation method and device, electronic equipment and storage medium thereof
US20180365557A1 (en) Information processing method and information processing apparatus
CN109447156B (en) Method and apparatus for generating a model
CN109949286A (en) Method and apparatus for output information
CN110222649B (en) Video classification method and device, electronic equipment and storage medium
CN110503074A (en) Information labeling method, apparatus, equipment and the storage medium of video frame
CN107766946B (en) Method and system for generating combined features of machine learning samples
CN107995428A (en) Image processing method, device and storage medium and mobile terminal
JP2021170313A (en) Method and device for generating videos
TW201928709A (en) Method and apparatus for merging model prediction values, and device
CN113240778A (en) Virtual image generation method and device, electronic equipment and storage medium
CN110413510A (en) A kind of data processing method, device and equipment
CN113160819B (en) Method, apparatus, device, medium, and product for outputting animation
CN108573306A (en) Export method, the training method and device of deep learning model of return information
CN114429767A (en) Video generation method and device, electronic equipment and storage medium
CN107729144B (en) Application control method and device, storage medium and electronic equipment
CN110490389A (en) Clicking rate prediction technique, device, equipment and medium
CN110069991A (en) Feedback information determines method, apparatus, electronic equipment and storage medium
JP2020160551A (en) Analysis support device for personnel item, analysis support method, program, and recording medium
CN115378890B (en) Information input method, device, storage medium and computer equipment
CN115618232A (en) Data prediction method, device, storage medium and electronic equipment
Fernandes et al. Enhanced deep hierarchal GRU & BILSTM using data augmentation and spatial features for tamil emotional speech recognition
CN114419182A (en) Image processing method and device
CN116997866A (en) Generating virtual sensors for use in industrial machines
Cheng et al. Edge4emotion: An edge computing based multi-source emotion recognition platform for human-centric software engineering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination