CN116564252A - Training method of sound effect parameter adjustment model and related equipment - Google Patents

Training method of sound effect parameter adjustment model and related equipment

Info

Publication number
CN116564252A
CN116564252A
Authority
CN
China
Prior art keywords
sound effect
parameter adjustment
processing result
adjustment model
parameters
Prior art date
Legal status
Pending
Application number
CN202310383729.0A
Other languages
Chinese (zh)
Inventor
Li Nan (李楠)
Zhang Chen (张晨)
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202310383729.0A
Publication of CN116564252A
Legal status: Pending

Classifications

    • G10H 1/0008 — Details of electrophonic musical instruments; associated control or indicating means
    • G10L 25/30 — Speech or voice analysis techniques characterised by the analysis technique using neural networks
    • G10H 2210/155 — Musical effects
    • G10H 2250/005 — Algorithms for electrophonic musical instruments or musical processing, e.g. for automatic composition or resource allocation
    • G10H 2250/311 — Neural networks for electrophonic musical instruments or musical processing, e.g. for musical recognition or control, automatic composition or improvisation
    • Y02T 90/00 — Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation


Abstract

The embodiment of the disclosure provides a training method of a sound effect parameter adjustment model and related equipment. The method comprises the following steps: acquiring sample audio and first sound effect parameters; inputting the first sound effect parameters into a sound effect parameter adjustment model to be trained to obtain second sound effect parameters; inputting the sample audio to a first sound effect processor configured with the first sound effect parameters to obtain a first processing result of the sample audio; inputting the sample audio to a second sound effect processor configured with the second sound effect parameters to obtain a second processing result of the sample audio; and training the to-be-trained sound effect parameter adjustment model according to the approximation degree between the first processing result and the second processing result to obtain the trained sound effect parameter adjustment model. The method can make the processing results of different sound effect processors on the same audio basically consistent, achieve adaptive matching of the output results of different sound effect processors, and improve the efficiency of sound effect parameter adjustment.

Description

Training method of sound effect parameter adjustment model and related equipment
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a training method for a sound effect parameter adjustment model, a sound effect parameter adjustment method, a training device for a sound effect parameter adjustment model, a sound effect parameter adjustment device, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of sound effect processing technology, sound effect post-processing has been widely applied in the production of entertainment works such as songs, movies, variety shows, and short videos.
Because sound effect processors come in many different implementations, in some scenarios the processing results of different sound effect processors need to be aligned so that their outputs are consistent. For example, in an online virtual karaoke room, a singer may choose to have the singing voice processed by a particular sound effect processor before it is sent to audience terminals, while monitoring the processed sound effect audio in real time through in-ear monitors.
In the related art, the parameters of one of the sound effect processors are adjusted manually so that its effect approaches the result of the other processor; this approach is inefficient.
Disclosure of Invention
The embodiment of the disclosure provides a training method for a sound effect parameter adjustment model, a sound effect parameter adjustment method, a training device for a sound effect parameter adjustment model, a sound effect parameter adjustment device, an electronic device, and a computer-readable storage medium.
The embodiment of the disclosure provides a training method of a sound effect parameter adjustment model, which comprises the following steps: acquiring sample audio and first sound effect parameters; inputting the first sound effect parameters into a sound effect parameter adjustment model to be trained to obtain second sound effect parameters; inputting the sample audio to a first sound effect processor configured with the first sound effect parameters to obtain a first processing result of the sample audio; inputting the sample audio to a second sound effect processor configured with the second sound effect parameters to obtain a second processing result of the sample audio; and training the to-be-trained sound effect parameter adjustment model according to the approximation degree between the first processing result and the second processing result to obtain a trained sound effect parameter adjustment model.
In some exemplary embodiments of the present disclosure, training the to-be-trained sound effect parameter adjustment model according to the approximation degree between the first processing result and the second processing result to obtain a trained sound effect parameter adjustment model includes: determining a degree of approximation between the first processing result and the second processing result; and, when the approximation degree is greater than a preset threshold, adjusting the model parameters of the to-be-trained sound effect parameter adjustment model until the approximation degree between the first processing result and the adjusted second processing result obtained according to the adjusted sound effect parameter adjustment model is less than or equal to the preset threshold, and determining the adjusted sound effect parameter adjustment model as the trained sound effect parameter adjustment model.
In some exemplary embodiments of the present disclosure, when the approximation degree is greater than a preset threshold, adjusting the model parameters of the to-be-trained sound effect parameter adjustment model until the approximation degree between the first processing result and the adjusted second processing result obtained according to the adjusted sound effect parameter adjustment model is less than or equal to the preset threshold, and determining the adjusted sound effect parameter adjustment model as the trained sound effect parameter adjustment model, includes: adjusting the model parameters of the to-be-trained sound effect parameter adjustment model when the approximation degree is greater than the preset threshold; inputting the first sound effect parameters into the adjusted sound effect parameter adjustment model to obtain adjusted second sound effect parameters; inputting the sample audio to a second sound effect processor configured with the adjusted second sound effect parameters to obtain an adjusted second processing result; and determining the adjusted sound effect parameter adjustment model as the trained sound effect parameter adjustment model when the approximation degree between the first processing result and the adjusted second processing result is less than or equal to the preset threshold.
In some exemplary embodiments of the present disclosure, obtaining sample audio includes: acquiring sample singing voice data and sample voice data; adjusting a reverberation parameter value and an equalization parameter value of the sample singing voice data, and adjusting a reverberation parameter value and an equalization parameter value of the sample voice data; and generating the sample audio according to the sample singing voice data, the sample voice data, the adjusted sample singing voice data and the adjusted sample voice data.
In some exemplary embodiments of the present disclosure, the approximation degree includes at least one of a waveform distance, a spectral distance, and a signal-to-noise ratio between the first processing result and the second processing result.
The embodiment of the disclosure provides a sound effect parameter adjustment method, which comprises the following steps: acquiring audio to be processed and first sound effect parameters of a first sound effect processor; inputting the first sound effect parameters into a trained sound effect parameter adjustment model to obtain second sound effect parameters, wherein the trained sound effect parameter adjustment model is obtained by training according to any one of the methods above; and inputting the audio to be processed to a second sound effect processor configured with the second sound effect parameters to obtain a sound effect processing result of the audio to be processed.
The embodiment of the disclosure provides a training device for a sound effect parameter adjustment model, comprising: an acquisition module configured to acquire sample audio and first sound effect parameters; an obtaining module configured to input the first sound effect parameters into a sound effect parameter adjustment model to be trained to obtain second sound effect parameters; the obtaining module further configured to input the sample audio to a first sound effect processor configured with the first sound effect parameters to obtain a first processing result of the sample audio, and to input the sample audio to a second sound effect processor configured with the second sound effect parameters to obtain a second processing result of the sample audio; and a training module configured to train the to-be-trained sound effect parameter adjustment model according to the approximation degree between the first processing result and the second processing result to obtain a trained sound effect parameter adjustment model.
The embodiment of the disclosure provides a sound effect parameter adjustment device, comprising: an acquisition module configured to acquire audio to be processed and first sound effect parameters of a first sound effect processor; an obtaining module configured to input the first sound effect parameters into a trained sound effect parameter adjustment model to obtain second sound effect parameters, wherein the trained sound effect parameter adjustment model is obtained by training according to any one of the methods above; the obtaining module further configured to input the audio to be processed to a second sound effect processor configured with the second sound effect parameters to obtain a sound effect processing result of the audio to be processed.
An embodiment of the present disclosure provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the executable instructions to implement the training method of the sound effect parameter adjustment model according to any one of the above, or the sound effect parameter adjustment method described above.
Embodiments of the present disclosure provide a computer-readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the training method of the sound effect parameter adjustment model according to any one of the above, or the sound effect parameter adjustment method described above.
Embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the training method of the sound effect parameter adjustment model according to any one of the above, or the sound effect parameter adjustment method described above.
According to the training method of the sound effect parameter adjustment model provided by the embodiment of the disclosure, the first sound effect parameters are input into the sound effect parameter adjustment model to be trained to obtain the second sound effect parameters; the sample audio is input, respectively, to a first sound effect processor configured with the first sound effect parameters and to a second sound effect processor configured with the second sound effect parameters to obtain a first processing result and a second processing result of the sample audio; and the to-be-trained sound effect parameter adjustment model is trained according to the approximation degree between the first processing result and the second processing result. The trained sound effect parameter adjustment model can therefore automatically output the second sound effect parameters of the second sound effect processor from the first sound effect parameters of the first sound effect processor, so that the processing results of different sound effect processors on the same audio are basically consistent, the output results of different sound effect processors are adaptively matched, and the efficiency of sound effect parameter adjustment is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which the training method of a sound effect parameter adjustment model or the sound effect parameter adjustment method of embodiments of the present disclosure may be applied.
FIG. 2 is a flowchart illustrating a method of training a sound effect parameter adjustment model, according to an exemplary embodiment.
FIG. 3 is a diagram illustrating the training process and application process of a sound effect parameter adjustment model, according to an exemplary embodiment.
FIG. 4 is a flowchart illustrating another method of training a sound effect parameter adjustment model, according to an exemplary embodiment.
FIG. 5 is a flowchart illustrating a sound effect parameter adjustment method, according to an exemplary embodiment.
FIG. 6 is a block diagram illustrating a training apparatus for a sound effect parameter adjustment model, according to an exemplary embodiment.
FIG. 7 is a block diagram illustrating a sound effect parameter adjustment device, according to an exemplary embodiment.
FIG. 8 is a schematic diagram illustrating a structure of an electronic device suitable for use in implementing an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced with one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The drawings are merely schematic illustrations of the present disclosure, in which like reference numerals denote like or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in at least one hardware module or integrated circuit or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and not necessarily all of the elements or steps are included or performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In the present specification, the terms "a," "an," "the," "said" and "at least one" are used to indicate the presence of at least one element/component/etc.; the terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc., in addition to the listed elements/components/etc.; the terms "first," "second," and "third," etc. are used merely as labels, and do not limit the number of their objects.
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which the training method of a sound effect parameter adjustment model or the sound effect parameter adjustment method of embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture may include a server 101, a network 102, a terminal device 103, a terminal device 104, and a terminal device 105. Network 102 is the medium used to provide communication links between terminal device 103, terminal device 104, or terminal device 105 and server 101. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
The server 101 may be a server providing various services, such as a background management server providing support for devices operated by a user with the terminal device 103, the terminal device 104, or the terminal device 105. The background management server may perform analysis and other processing on the received data such as the request, and feed back the processing result to the terminal device 103, the terminal device 104, or the terminal device 105.
The terminal device 103, the terminal device 104, and the terminal device 105 may be, but are not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a wearable smart device, a virtual reality device, an augmented reality device, and the like.
The terminal device 103, the terminal device 104, and the terminal device 105 may be configured with different sound effect processors. In the following, an example is described in which the terminal device 103 is configured with a first sound effect processor and the terminal device 104 is configured with a second sound effect processor, the first sound effect processor and the second sound effect processor having different sound effect processing parameters; however, the disclosure is not limited thereto.
The sound effect parameter adjustment method provided by the embodiment of the disclosure can be applied to an online virtual karaoke room scenario. In such a scenario, the sound effect processors used by the singer and the audience may differ; for example, the sound effect processor used by the singer can serve as the first sound effect processor and the sound effect processor used by the audience as the second sound effect processor, and the sound effect parameter adjustment method provided by the disclosure can automatically align the processing result of the first sound effect processor with the processing result of the second sound effect processor.
In the embodiment of the present disclosure, the server 101 may: acquire sample audio and first sound effect parameters; input the first sound effect parameters into a sound effect parameter adjustment model to be trained to obtain second sound effect parameters; input the sample audio to a first sound effect processor configured with the first sound effect parameters to obtain a first processing result of the sample audio; input the sample audio to a second sound effect processor configured with the second sound effect parameters to obtain a second processing result of the sample audio; and train the sound effect parameter adjustment model to be trained according to the approximation degree between the first processing result and the second processing result to obtain the trained sound effect parameter adjustment model.
In the embodiment of the present disclosure, the server 101 may obtain, from the terminal device 103, the audio to be processed and the first sound effect parameters of the first sound effect processor; input the first sound effect parameters into the sound effect parameter adjustment model trained by the above method to obtain second sound effect parameters; configure the second sound effect processor of the terminal device 104 with the second sound effect parameters; and input the audio to be processed into the second sound effect processor to obtain a sound effect processing result of the audio to be processed.
It should be understood that the numbers of the terminal device 103, the terminal device 104, the terminal device 105, the network 102, and the server 101 in fig. 1 are only illustrative. The server 101 may be a single physical server, a server cluster formed by a plurality of servers, or a cloud server, and there may be any number of terminal devices, networks, and servers according to actual needs.
Hereinafter, each step of the training method of the sound effect parameter adjustment model in the exemplary embodiment of the present disclosure will be described in more detail with reference to the accompanying drawings and embodiments. The method provided by the embodiments of the present disclosure may be performed by any electronic device, for example, the server and/or the terminal device in fig. 1 described above, but the present disclosure is not limited thereto.
FIG. 2 is a flowchart illustrating a method of training a sound effect parameter adjustment model, according to an exemplary embodiment.
As shown in fig. 2, the method provided by the embodiments of the present disclosure may include the following steps.
In step S210, sample audio and first sound effect parameters are acquired.
In the embodiment of the disclosure, the first sound effect parameters refer to the sound effect processing parameters of the first sound effect processor, and the second sound effect parameters refer to the sound effect processing parameters of the second sound effect processor. The first sound effect processor and the second sound effect processor are different sound effect processors; different sound effect processors differ in the number of parameters, parameter value ranges, parameter meanings, and the like, and the configuration of these parameters determines the sound effect produced by the processor. The first sound effect processor and the second sound effect processor can each process the sample audio to obtain a sound effect processing result of the sample audio.
In the embodiment of the disclosure, the first sound effect processor may be used as a target sound effect processor, the second sound effect processor may be used as a fitting sound effect processor, and the training task of the sound effect parameter adjustment model is to fit the output sound effect of the second sound effect processor to the output sound effect of the first sound effect processor by using the neural network model.
The sound effect parameters in embodiments of the present disclosure may include, but are not limited to: the parameters of different frequency bands of an equalization effector, and the reverberation time, dry/wet ratio, distortion degree, room size, early reverberation time, reverberation bandwidth, and the like of a reverberation effector.
In the embodiment of the disclosure, the first sound effect parameters may include manually tuned parameters, randomly fine-tuned parameters, and fully random parameters. Manually tuned parameters are parameters adjusted on the first sound effect processor by a mixing engineer or experienced audio staff to achieve high-quality sound effects on that processor. Randomly fine-tuned parameters are obtained by randomly adjusting some details of the manually tuned parameters, so as to increase data diversity, prevent the neural network from overfitting, and enhance its generalization ability. Fully random parameters are a data augmentation that is completely independent of manual tuning, used to further increase data diversity.
In the embodiment of the disclosure, the proportions of the manually tuned parameters, the randomly fine-tuned parameters, and the fully random parameters within the first sound effect parameters may be set according to actual conditions, for example, 10% manually tuned, 80% randomly fine-tuned, and 10% fully random; the numbers of the three kinds of parameters may also be set according to actual conditions, for example, their total number may exceed 1000 groups.
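As a minimal, non-authoritative sketch of how such a mix of first sound effect parameters could be assembled: the 10%/80%/10% split, the perturbation scale, the `(0, 1)` default parameter ranges, and the function and argument names (`build_first_params`, `manual_params`) are illustrative assumptions, not values or interfaces taken from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_first_params(manual_params, total=1000, ranges=None):
    """Assemble first sound effect parameters from manually tuned,
    randomly fine-tuned, and fully random groups (assumed 10%/80%/10%)."""
    dim = len(manual_params[0])
    ranges = ranges or [(0.0, 1.0)] * dim
    lo = np.array([r[0] for r in ranges])
    hi = np.array([r[1] for r in ranges])

    n_manual = int(0.1 * total)
    n_finetune = int(0.8 * total)
    n_random = total - n_manual - n_finetune

    # Manually tuned parameter groups, sampled as-is.
    manual = np.asarray(manual_params)[rng.integers(0, len(manual_params), n_manual)]

    # Random fine-tuning: small perturbations around manually tuned parameters.
    base = np.asarray(manual_params)[rng.integers(0, len(manual_params), n_finetune)]
    finetuned = np.clip(base + rng.normal(0.0, 0.05 * (hi - lo), base.shape), lo, hi)

    # Fully random parameters drawn uniformly from the allowed ranges.
    random = rng.uniform(lo, hi, (n_random, dim))

    return np.concatenate([manual, finetuned, random], axis=0)
```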
In the embodiment of the disclosure, the sample audio may include sample singing voice data and sample voice data, and their proportions within the sample audio may be set according to actual conditions, for example, 50% sample singing voice data and 50% sample voice data; the amount of data may also be set according to actual conditions, for example, a total duration of 100 hours or more of sample singing voice data and sample voice data.
In an exemplary embodiment, acquiring sample audio may include: acquiring sample singing voice data and sample voice data; adjusting a reverberation parameter value and an equalization parameter value of the sample singing voice data, and adjusting a reverberation parameter value and an equalization parameter value of the sample voice data; and generating sample audio according to the sample singing voice data, the sample voice data, the adjusted sample singing voice data and the adjusted sample voice data.
In the embodiment of the disclosure, data augmentation can be performed on the sample singing voice data and the sample voice data by adding a certain amount of reverberation and equalization, to simulate data acquired by different acquisition devices in different environments. Specifically, the reverberation parameter value and the equalization parameter value of the sample singing voice data can be adjusted to obtain adjusted sample singing voice data, and the reverberation parameter value and the equalization parameter value of the sample voice data can be adjusted to obtain adjusted sample voice data; the sample singing voice data, the sample voice data, the adjusted sample singing voice data, and the adjusted sample voice data are then used as the sample audio to be input to the first sound effect processor and the second sound effect processor, respectively.
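A hedged sketch of this augmentation step follows, assuming mono floating-point waveforms; the synthetic decaying-noise impulse response for reverberation, the high-pass-based equalization, and the specific decay time, cutoff, gain, and mix values are illustrative assumptions rather than parameters specified by the disclosure.

```python
import numpy as np
from scipy.signal import butter, lfilter, fftconvolve

def augment(wave, sr, rt60=0.4, eq_cutoff=2000.0, eq_gain_db=3.0, seed=0):
    """Add light reverberation and equalization to a mono waveform to
    simulate different capture devices and environments (illustrative only)."""
    rng = np.random.default_rng(seed)

    # Synthetic exponentially decaying noise impulse response as a simple reverb.
    n = int(rt60 * sr)
    ir = rng.standard_normal(n) * np.exp(-6.9 * np.arange(n) / n)  # ~-60 dB at rt60
    ir /= np.sqrt(np.sum(ir ** 2)) + 1e-8
    wet = fftconvolve(wave, ir)[: len(wave)]
    out = 0.8 * wave + 0.2 * wet  # dry/wet mix

    # Crude equalization: boost (or cut) the band above the cutoff frequency.
    b, a = butter(2, eq_cutoff / (sr / 2), btype="high")
    high = lfilter(b, a, out)
    out = out + (10 ** (eq_gain_db / 20.0) - 1.0) * high

    peak = np.max(np.abs(out)) + 1e-8
    return out / max(1.0, peak)  # normalize only if needed, to avoid clipping
```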
In step S220, the first sound effect parameter is input to the sound effect parameter adjustment model to be trained, and the second sound effect parameter is obtained.
Referring to fig. 3, the first sound effect parameter is represented by, for example, parameter A, the second sound effect parameter by parameter B, the first sound effect processor by sound effect processor A, the second sound effect processor by sound effect processor B, the first processing result by output audio A, and the second processing result by output audio B. The sound effect parameter adjustment model to be trained may employ any neural network model, such as a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network), or a combination of such structures, and the disclosure is not limited in this regard.
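For concreteness, the following is a minimal sketch of a parameter adjustment model mapping parameter A to parameter B, written in PyTorch as an assumed framework; the MLP structure, layer sizes, and sigmoid output normalization are illustrative assumptions, since the disclosure only requires some neural network such as a CNN or RNN.

```python
import torch
import torch.nn as nn

class ParamAdjustModel(nn.Module):
    """Maps parameter A (dimension dim_a) to parameter B (dimension dim_b)."""
    def __init__(self, dim_a: int, dim_b: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim_b),
            nn.Sigmoid(),  # keep outputs in [0, 1]; rescale to real parameter ranges outside
        )

    def forward(self, param_a: torch.Tensor) -> torch.Tensor:
        return self.net(param_a)

# Usage sketch:
# model = ParamAdjustModel(dim_a=8, dim_b=12)
# param_b = model(torch.rand(1, 8))
```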
With continued reference to fig. 3, the solid line portion represents the part required in both the training phase and the application phase of the sound effect parameter adjustment model, and the dotted line portion represents the part required only in the training phase. Parameter A is input into the neural network model, which outputs parameter B; sound effect processor A is configured with parameter A, and sound effect processor B is configured with parameter B.
In step S230, the sample audio is input to a first sound effect processor configured with the first sound effect parameters, and a first processing result of the sample audio is obtained; the sample audio is also input to a second sound effect processor configured with the second sound effect parameters, and a second processing result of the sample audio is obtained.
Continuing to refer to fig. 3, the sample audio is input to sound effect processor A configured with parameter A to obtain output audio A, and the sample audio is input to sound effect processor B configured with parameter B to obtain output audio B.
In step S240, the to-be-trained sound effect parameter adjustment model is trained according to the approximation degree between the first processing result and the second processing result, and the trained sound effect parameter adjustment model is obtained.
In the embodiment of the disclosure, according to the approximation degree between the first processing result and the second processing result, the model parameters of the to-be-trained sound effect parameter adjustment model are adjusted, so that the approximation degree between the first processing result and the second processing result is smaller than or equal to a preset threshold value, and the trained sound effect parameter adjustment model is obtained.
In an exemplary embodiment, the approximation degree includes at least one of a waveform distance, a spectrum distance, and a signal-to-noise ratio between the first processing result and the second processing result.
Referring to fig. 3, the sample audio is processed by sound effect processor A configured with parameter A and by sound effect processor B configured with parameter B, yielding a pair of processing results, output audio A and output audio B. A loss function between output audio A and output audio B is then computed; the loss function can be defined as a quantity describing their degree of approximation, such as the time-domain waveform distance, the spectral distance, or the signal-to-noise ratio between output audio A and output audio B. The loss is back-propagated through the neural network model using gradient descent, finally achieving the optimization goal that output audio A and output audio B, output by sound effect processor A and sound effect processor B respectively, gradually approach each other.
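The sketch below shows one way such a loss could be combined in PyTorch; the weighting of the waveform distance, spectral distance, and (negative) signal-to-noise-ratio terms, the STFT size, and the function name `approximation_loss` are assumptions for illustration, since the disclosure only states that the loss describes the degree of approximation between output audio A and output audio B.

```python
import torch

def approximation_loss(out_a: torch.Tensor, out_b: torch.Tensor,
                       n_fft: int = 1024, w_wave: float = 1.0,
                       w_spec: float = 1.0, w_snr: float = 0.1) -> torch.Tensor:
    """Combine waveform distance, spectral distance, and negative SNR
    between output audio A and output audio B (assumed 1-D waveforms)."""
    # Time-domain waveform distance (L1).
    wave_dist = (out_a - out_b).abs().mean()

    # Spectral distance between STFT magnitudes.
    win = torch.hann_window(n_fft, device=out_a.device)
    spec_a = torch.stft(out_a, n_fft, window=win, return_complex=True).abs()
    spec_b = torch.stft(out_b, n_fft, window=win, return_complex=True).abs()
    spec_dist = (spec_a - spec_b).abs().mean()

    # Negative SNR term, treating output audio A as the reference signal.
    noise = out_a - out_b
    snr = 10.0 * torch.log10(out_a.pow(2).mean() / (noise.pow(2).mean() + 1e-8) + 1e-8)

    return w_wave * wave_dist + w_spec * spec_dist - w_snr * snr
```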
In an exemplary embodiment, training the to-be-trained sound effect parameter adjustment model according to the approximation degree between the first processing result and the second processing result to obtain a trained sound effect parameter adjustment model includes: determining a degree of approximation between the first processing result and the second processing result; and, when the approximation degree is greater than a preset threshold, adjusting the model parameters of the to-be-trained sound effect parameter adjustment model until the approximation degree between the first processing result and the adjusted second processing result obtained according to the adjusted sound effect parameter adjustment model is less than or equal to the preset threshold, and determining the adjusted sound effect parameter adjustment model as the trained sound effect parameter adjustment model.
With continued reference to fig. 3, the approximation degree between output audio A and output audio B may be calculated using the loss function. When the approximation degree is greater than the preset threshold, the model parameters of the neural network model to be trained are adjusted, and parameter A is input into the adjusted neural network model to obtain an adjusted parameter (represented, for example, by parameter B'). Sound effect processor B is configured with the adjusted parameter B', and the sample audio is input to sound effect processor B to obtain adjusted output audio (represented, for example, by output audio B'). The approximation degree between output audio A and the adjusted output audio B' is then calculated with the loss function again; if the approximation degree is still greater than the preset threshold, the model parameters of the neural network model continue to be adjusted in the same way. When the approximation degree is less than or equal to the preset threshold, the adjusted neural network model is determined as the trained sound effect parameter adjustment model.
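A simplified training-step sketch is given below, building on the `ParamAdjustModel` and `approximation_loss` sketches above. It assumes that the second sound effect processor can be evaluated as a differentiable function of its parameters (or replaced by a differentiable proxy) so that gradient descent can flow through it; `effect_proc_a`, `effect_proc_b`, the optimizer choice, and the threshold handling are all illustrative assumptions rather than elements stated in the disclosure.

```python
import torch

def train(model, sample_audio, first_params, effect_proc_a, effect_proc_b,
          epochs=100, threshold=0.05, lr=1e-3):
    """Train the parameter adjustment model so that processor B's output
    approximates processor A's output on the same sample audio."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        max_loss = 0.0
        for audio, param_a in zip(sample_audio, first_params):
            param_b = model(param_a)                   # second sound effect parameters
            with torch.no_grad():
                out_a = effect_proc_a(audio, param_a)  # first processing result (fixed target)
            out_b = effect_proc_b(audio, param_b)      # second processing result (differentiable)
            loss = approximation_loss(out_a, out_b)
            opt.zero_grad()
            loss.backward()
            opt.step()
            max_loss = max(max_loss, loss.item())
        if max_loss <= threshold:                      # approximation within the preset threshold
            break
    return model
```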
According to the training method of the sound effect parameter adjustment model provided by the embodiment of the disclosure, the first sound effect parameters are input into the sound effect parameter adjustment model to be trained to obtain the second sound effect parameters; the sample audio is input, respectively, to a first sound effect processor configured with the first sound effect parameters and to a second sound effect processor configured with the second sound effect parameters to obtain a first processing result and a second processing result of the sample audio; and the to-be-trained sound effect parameter adjustment model is trained according to the approximation degree between the first processing result and the second processing result. The sound effect parameter adjustment model obtained through training can therefore automatically output the second sound effect parameters of the second sound effect processor from the first sound effect parameters of the first sound effect processor, so that the processing results of different sound effect processors on the same audio are basically consistent, the output results of different sound effect processors are adaptively matched, and the efficiency of sound effect parameter adjustment is improved.
FIG. 4 is a flowchart illustrating another method of training a sound effect parameter adjustment model, according to an exemplary embodiment. Fig. 4 shows that "when the approximation degree is greater than the preset threshold, adjusting the model parameters of the sound effect parameter adjustment model to be trained until the approximation degree between the first processing result and the adjusted second processing result obtained according to the adjusted sound effect parameter adjustment model is less than or equal to the preset threshold, and determining the adjusted sound effect parameter adjustment model as the trained sound effect parameter adjustment model" may include the following steps.
In step S410, in the case where the approximation degree is greater than the preset threshold, the model parameters of the sound effect parameter adjustment model to be trained are adjusted.
In the embodiment of the present disclosure, the preset threshold may be set according to an actual situation, which is not limited by the present disclosure.
In step S420, the first sound effect parameter is input to the adjusted sound effect parameter adjustment model to obtain an adjusted second sound effect parameter.
In an embodiment of the disclosure, the second sound effect processor is reconfigured using the adjusted second sound effect parameter.
In step S430, the sample audio is input to a second sound processor configured with the adjusted second sound parameter, and an adjusted second processing result is obtained.
In the embodiment of the disclosure, the sample audio is re-input to the reconfigured second sound effect processor, and the adjusted second processing result is obtained.
In step S440, in the case where the approximation degree between the first processing result and the adjusted second processing result is less than or equal to the preset threshold, the adjusted sound effect parameter adjustment model is determined as the trained sound effect parameter adjustment model.
In the embodiment of the present disclosure, the relationship between the preset threshold and the approximation degree between the first processing result and the adjusted second processing result continues to be determined; steps S410 to S430 are repeated if the approximation degree is still greater than the preset threshold. When the approximation degree is less than or equal to the preset threshold, indicating that the training of the sound effect parameter adjustment model is complete, the adjusted sound effect parameter adjustment model is determined as the trained sound effect parameter adjustment model.
According to the training method of the sound effect parameter adjustment model provided by the embodiment of the disclosure, when the approximation degree between the first processing result and the second processing result is greater than the preset threshold, the model parameters of the sound effect parameter adjustment model to be trained are adjusted; the first sound effect parameters are input into the adjusted sound effect parameter adjustment model to obtain adjusted second sound effect parameters; the sample audio is input to a second sound effect processor configured with the adjusted second sound effect parameters to obtain an adjusted second processing result; the relationship between the preset threshold and the approximation degree between the first processing result and the adjusted second processing result continues to be determined, and the model parameters continue to be adjusted if the approximation degree is still greater than the preset threshold. When the approximation degree is less than or equal to the preset threshold, indicating that training is complete, the adjusted sound effect parameter adjustment model is determined as the trained sound effect parameter adjustment model. This method can improve the accuracy of training the sound effect parameter adjustment model.
Fig. 5 is a flowchart illustrating a sound effect parameter adjustment method according to an exemplary embodiment. Fig. 5 shows the application process of the trained sound effect parameter adjustment model after it has been obtained using the method provided in the above embodiments.
In step S510, the audio to be processed and the first sound effect parameters of the first sound effect processor are acquired.
In step S520, the first sound effect parameter is input into the trained sound effect parameter adjustment model to obtain a second sound effect parameter; wherein the trained sound effect parameter adjustment model is obtained through training according to the embodiment.
Referring to fig. 3, in the model application stage, parameter A is input into the trained neural network model, and parameter B is output; sound effect processor B is then configured with parameter B.
In step S530, the audio to be processed is input to the second sound effect processor configured with the second sound effect parameters, and a sound effect processing result of the audio to be processed is obtained.
Referring to fig. 3, in the model application stage, the audio to be processed is input to sound effect processor B configured with parameter B to obtain its sound effect processing result; this result is basically consistent with the sound effect processing result obtained by inputting the audio to be processed to sound effect processor A configured with parameter A.
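At inference time the flow reduces to a single forward pass plus one processor call, as in the sketch below, which reuses the hypothetical `ParamAdjustModel` and `effect_proc_b` names introduced in the earlier sketches; it is an illustrative assumption, not the disclosed implementation.

```python
import torch

def adjust_and_process(model, audio, param_a, effect_proc_b):
    """Application stage: derive parameter B from parameter A with the trained
    model, then process the audio with sound effect processor B."""
    model.eval()
    with torch.no_grad():
        param_b = model(param_a)                 # second sound effect parameters
        return effect_proc_b(audio, param_b), param_b
```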
According to the sound effect parameter adjustment method provided by the embodiment of the disclosure, the first sound effect parameters are input into the trained sound effect parameter adjustment model to obtain the second sound effect parameters, and the audio to be processed is processed by the second sound effect processor configured with the second sound effect parameters. The method can make the processing results of different sound effect processors on the same audio to be processed basically consistent, so that the output results of different sound effect processors are adaptively matched and the efficiency of sound effect parameter adjustment is improved.
It should also be understood that the above is only intended to help those skilled in the art better understand the embodiments of the present disclosure, and is not intended to limit the scope of the embodiments of the present disclosure. It will be apparent to those skilled in the art that various equivalent modifications or variations can be made based on the foregoing examples; for example, some steps of the methods described above may be unnecessary, some steps may be added, or any two or more of the above may be combined. Such modifications, variations, or combinations are also within the scope of the embodiments of the present disclosure.
It should also be understood that the foregoing description of the embodiments of the present disclosure focuses on highlighting differences between the various embodiments and that the same or similar elements not mentioned may be referred to each other and are not repeated here for brevity.
It should also be understood that the sequence numbers of the above processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
It is also to be understood that in the various embodiments of the disclosure, terms and/or descriptions of the various embodiments are consistent and may be referenced to one another in the absence of a particular explanation or logic conflict, and that the features of the various embodiments may be combined to form new embodiments in accordance with their inherent logic relationships.
Examples of training methods for the sound effect parameter adjustment model provided by the present disclosure are described above in detail. It will be appreciated that the computer device, in order to carry out the functions described above, comprises corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the method of the present disclosure.
FIG. 6 is a block diagram illustrating a training apparatus for a sound effect parameter adjustment model, according to an exemplary embodiment. Referring to fig. 6, the apparatus 600 may include an acquisition module 610, an obtaining module 620, and a training module 630.
The acquisition module 610 is configured to acquire sample audio and first sound effect parameters; the obtaining module 620 is configured to input the first sound effect parameters into a sound effect parameter adjustment model to be trained to obtain second sound effect parameters; the obtaining module 620 is further configured to input the sample audio to a first sound effect processor configured with the first sound effect parameters to obtain a first processing result of the sample audio, and to input the sample audio to a second sound effect processor configured with the second sound effect parameters to obtain a second processing result of the sample audio; and the training module 630 is configured to train the to-be-trained sound effect parameter adjustment model according to the approximation degree between the first processing result and the second processing result to obtain a trained sound effect parameter adjustment model.
In some exemplary embodiments of the present disclosure, the training module 630 is configured to perform: determining a degree of approximation between the first processing result and the second processing result; and, when the approximation degree is greater than a preset threshold, adjusting the model parameters of the to-be-trained sound effect parameter adjustment model until the approximation degree between the first processing result and the adjusted second processing result obtained according to the adjusted sound effect parameter adjustment model is less than or equal to the preset threshold, and determining the adjusted sound effect parameter adjustment model as the trained sound effect parameter adjustment model.
In some exemplary embodiments of the present disclosure, training module 630 is configured to perform: adjusting model parameters of the to-be-trained sound effect parameter adjustment model under the condition that the approximation degree is larger than a preset threshold value; inputting the first sound effect parameters into an adjusted sound effect parameter adjustment model to obtain adjusted second sound effect parameters; inputting the sample audio to a second sound effect processor configured with the adjusted second sound effect parameters to obtain an adjusted second processing result; and under the condition that the approximation degree between the first processing result and the adjusted second processing result is smaller than or equal to the preset threshold value, determining the adjusted sound effect parameter adjustment model as the trained sound effect parameter adjustment model.
In some exemplary embodiments of the present disclosure, the acquisition module 610 is configured to perform: acquiring sample singing voice data and sample voice data; adjusting a reverberation parameter value and an equalization parameter value of the sample singing voice data, and adjusting a reverberation parameter value and an equalization parameter value of the sample voice data; and generating the sample audio according to the sample singing voice data, the sample voice data, the adjusted sample singing voice data and the adjusted sample voice data.
In some exemplary embodiments of the present disclosure, the approximation degree includes at least one of a waveform distance, a spectral distance, and a signal-to-noise ratio between the first processing result and the second processing result.
Fig. 7 is a block diagram illustrating a sound effect parameter adjustment device according to an exemplary embodiment. Referring to fig. 7, the apparatus 700 may include an acquisition module 710 and an obtaining module 720.
The acquisition module 710 is configured to acquire the audio to be processed and the first sound effect parameters of the first sound effect processor; the obtaining module 720 is configured to input the first sound effect parameters into the trained sound effect parameter adjustment model to obtain second sound effect parameters, where the trained sound effect parameter adjustment model is obtained through training as in the foregoing embodiments; the obtaining module 720 is further configured to input the audio to be processed to a second sound effect processor configured with the second sound effect parameters to obtain a sound effect processing result of the audio to be processed.
It should be noted that the block diagrams shown in the above figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor terminals and/or microcontroller terminals.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
An electronic device 800 according to such an embodiment of the present disclosure is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 8, the electronic device 800 is embodied in the form of a general purpose computing device. Components of electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one storage unit 820, a bus 830 connecting the different system components (including the storage unit 820 and the processing unit 810), and a display unit 840.
Wherein the storage unit stores program code that is executable by the processing unit 810 such that the processing unit 810 performs steps according to various exemplary embodiments of the present disclosure described in the above section of the present description of exemplary methods. For example, the processing unit 810 may perform the various steps as shown in fig. 2.
As another example, the electronic device may implement the various steps shown in fig. 2.
Storage unit 820 may include readable media in the form of volatile storage units such as Random Access Memory (RAM) 821 and/or cache memory unit 822, and may further include Read Only Memory (ROM) 823.
The storage unit 820 may also include a program/utility 824 having a set (at least one) of program modules 825, such program modules 825 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 870 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 800, and/or any device (e.g., router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. Also, electronic device 800 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 860. As shown, network adapter 860 communicates with other modules of electronic device 800 over bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 800, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (such as a personal computer, a server, a terminal device, or a network device) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment, a computer readable storage medium is also provided, e.g., a memory, comprising instructions executable by a processor of an apparatus to perform the above method. Alternatively, the computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program/instruction which, when executed by a processor, implements the training method of the sound effect parameter adjustment model in the above embodiment.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A training method of a sound effect parameter adjustment model, comprising:
acquiring sample audio and first sound effect parameters;
inputting the first sound effect parameters into a sound effect parameter adjustment model to be trained to obtain second sound effect parameters;
inputting the sample audio to a first sound effect processor configured with the first sound effect parameters to obtain a first processing result of the sample audio; inputting the sample audio to a second sound effect processor configured with the second sound effect parameters to obtain a second processing result of the sample audio;
and training the to-be-trained sound effect parameter adjustment model according to an approximation degree between the first processing result and the second processing result to obtain a trained sound effect parameter adjustment model.
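For illustration only and not as part of the claims, the following is a minimal sketch of how one training step of this method might be implemented, assuming the two sound effect processors are differentiable callables and the adjustment model is a PyTorch module; every name here (param_adjust_model, first_processor, second_processor, and so on) is hypothetical, and an L1 waveform distance merely stands in for the approximation degree.

import torch

def training_step(param_adjust_model, first_processor, second_processor,
                  sample_audio, first_params, optimizer):
    # Map the first sound effect parameters to second sound effect parameters.
    second_params = param_adjust_model(first_params)

    # Process the same sample audio with both sound effect processors.
    first_result = first_processor(sample_audio, first_params)
    second_result = second_processor(sample_audio, second_params)

    # Treat the approximation degree as an L1 waveform distance (smaller = closer)
    # and minimize it so the second processor reproduces the first processor's output.
    approximation = torch.mean(torch.abs(first_result - second_result))

    optimizer.zero_grad()
    approximation.backward()
    optimizer.step()
    return approximation.item()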
2. The method of claim 1, wherein training the to-be-trained sound effect parameter adjustment model according to the approximation degree between the first processing result and the second processing result to obtain the trained sound effect parameter adjustment model comprises:
determining the approximation degree between the first processing result and the second processing result;
and in a case where the approximation degree is greater than a preset threshold, adjusting model parameters of the to-be-trained sound effect parameter adjustment model until an approximation degree between the first processing result and an adjusted second processing result obtained according to the adjusted sound effect parameter adjustment model is less than or equal to the preset threshold, and determining the adjusted sound effect parameter adjustment model as the trained sound effect parameter adjustment model.
3. The method according to claim 2, wherein, in the case where the approximation degree is greater than the preset threshold, adjusting the model parameters of the to-be-trained sound effect parameter adjustment model until the approximation degree between the first processing result and the adjusted second processing result obtained according to the adjusted sound effect parameter adjustment model is less than or equal to the preset threshold, and determining the adjusted sound effect parameter adjustment model as the trained sound effect parameter adjustment model, comprises:
adjusting the model parameters of the to-be-trained sound effect parameter adjustment model in the case where the approximation degree is greater than the preset threshold;
inputting the first sound effect parameters into an adjusted sound effect parameter adjustment model to obtain adjusted second sound effect parameters;
inputting the sample audio to a second sound effect processor configured with the adjusted second sound effect parameters to obtain an adjusted second processing result;
and determining the adjusted sound effect parameter adjustment model as the trained sound effect parameter adjustment model in the case where the approximation degree between the first processing result and the adjusted second processing result is less than or equal to the preset threshold.
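Under the same illustrative assumptions, the threshold-controlled adjustment described in claims 2 and 3 could be sketched as a loop around the hypothetical training_step above, stopping once the approximation degree (interpreted as a distance) falls to the preset threshold:

def train_until_threshold(param_adjust_model, first_processor, second_processor,
                          sample_audio, first_params, optimizer,
                          preset_threshold=1e-3, max_steps=10000):
    # Keep adjusting the model parameters while the approximation degree
    # exceeds the preset threshold, as in claims 2 and 3.
    for _ in range(max_steps):
        approximation = training_step(param_adjust_model, first_processor,
                                      second_processor, sample_audio,
                                      first_params, optimizer)
        if approximation <= preset_threshold:
            break
    return param_adjust_model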
4. The method according to any one of claims 1-3, wherein acquiring the sample audio comprises:
acquiring sample singing voice data and sample voice data;
adjusting a reverberation parameter value and an equalization parameter value of the sample singing voice data, and adjusting a reverberation parameter value and an equalization parameter value of the sample voice data;
and generating the sample audio according to the sample singing voice data, the sample voice data, the adjusted sample singing voice data and the adjusted sample voice data.
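As a hedged illustration of how the sample audio of claim 4 might be assembled, the sketch below uses deliberately crude stand-ins (a single delayed echo for reverberation, a broadband gain for equalization); a real implementation would use proper reverberation and equalizer filters, and all function names are hypothetical:

import numpy as np

def apply_reverb(audio: np.ndarray, decay: float, sr: int = 44100) -> np.ndarray:
    # Crude reverberation stand-in: add a single attenuated echo after a 50 ms delay.
    delay = int(0.05 * sr)
    tail = np.zeros_like(audio)
    tail[delay:] = audio[:-delay] * decay
    return audio + tail

def apply_eq(audio: np.ndarray, gain_db: float) -> np.ndarray:
    # Crude one-band equalization stand-in: apply a broadband gain specified in dB.
    return audio * (10.0 ** (gain_db / 20.0))

def build_sample_audio(singing: np.ndarray, speech: np.ndarray,
                       reverb_decay: float, eq_gain_db: float) -> list:
    adjusted_singing = apply_eq(apply_reverb(singing, reverb_decay), eq_gain_db)
    adjusted_speech = apply_eq(apply_reverb(speech, reverb_decay), eq_gain_db)
    # The sample audio covers both the original and the adjusted clips.
    return [singing, speech, adjusted_singing, adjusted_speech]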
5. The method according to any one of claims 1-3, wherein the approximation degree comprises at least one of a waveform distance, a spectral distance, and a signal-to-noise ratio between the first processing result and the second processing result.
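Purely as an illustration of claim 5, the three candidate measures could be computed as follows; the concrete definitions (L1 distances and a magnitude-spectrum comparison) are assumptions made for this sketch, not definitions taken from the disclosure:

import numpy as np

def approximation_measures(first: np.ndarray, second: np.ndarray) -> dict:
    # Waveform distance: mean absolute difference in the time domain.
    waveform_distance = float(np.mean(np.abs(first - second)))

    # Spectral distance: mean absolute difference of magnitude spectra.
    spec_first = np.abs(np.fft.rfft(first))
    spec_second = np.abs(np.fft.rfft(second))
    spectral_distance = float(np.mean(np.abs(spec_first - spec_second)))

    # Signal-to-noise ratio in dB, treating the first result as the reference signal.
    noise = first - second
    snr_db = float(10.0 * np.log10(np.sum(first ** 2) / (np.sum(noise ** 2) + 1e-12)))

    return {"waveform_distance": waveform_distance,
            "spectral_distance": spectral_distance,
            "snr_db": snr_db}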
6. A sound effect parameter adjustment method, comprising:
acquiring audio to be processed and first sound effect parameters of a first sound effect processor;
inputting the first sound effect parameters into a trained sound effect parameter adjustment model to obtain second sound effect parameters, wherein the trained sound effect parameter adjustment model is obtained through training according to the method of any one of claims 1-5;
inputting the audio to be processed to a second sound effect processor configured with the second sound effect parameters, and obtaining a sound effect processing result of the audio to be processed.
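A minimal sketch of the inference-time flow of claim 6, again with hypothetical names and assuming the trained model and the second sound effect processor are simple callables:

def adjust_and_process(trained_model, second_processor, audio_to_process, first_params):
    # Map the first sound effect parameters to the second processor's parameter space.
    second_params = trained_model(first_params)
    # Process the audio with the second processor configured with the mapped parameters.
    return second_processor(audio_to_process, second_params)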
7. A training apparatus for a sound effect parameter adjustment model, comprising:
an acquisition module configured to acquire sample audio and first sound effect parameters;
an obtaining module configured to input the first sound effect parameters into a sound effect parameter adjustment model to be trained to obtain second sound effect parameters;
wherein the obtaining module is further configured to input the sample audio to a first sound effect processor configured with the first sound effect parameters to obtain a first processing result of the sample audio, and to input the sample audio to a second sound effect processor configured with the second sound effect parameters to obtain a second processing result of the sample audio;
and a training module configured to train the to-be-trained sound effect parameter adjustment model according to an approximation degree between the first processing result and the second processing result to obtain a trained sound effect parameter adjustment model.
8. A sound effect parameter adjustment apparatus, comprising:
an acquisition module configured to acquire audio to be processed and first sound effect parameters of a first sound effect processor;
an obtaining module configured to input the first sound effect parameters into a trained sound effect parameter adjustment model to obtain second sound effect parameters, wherein the trained sound effect parameter adjustment model is obtained through training according to the method of any one of claims 1-5;
wherein the obtaining module is further configured to input the audio to be processed to a second sound effect processor configured with the second sound effect parameters to obtain a sound effect processing result of the audio to be processed.
9. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the executable instructions to implement the training method of the sound effect parameter adjustment model of any one of claims 1 to 5 or the sound effect parameter adjustment method of claim 6.
10. A computer readable storage medium having stored thereon instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the training method of the sound effect parameter adjustment model of any one of claims 1 to 5, or the sound effect parameter adjustment method of claim 6.
CN202310383729.0A 2023-04-11 2023-04-11 Training method of sound effect parameter adjustment model and related equipment Pending CN116564252A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310383729.0A CN116564252A (en) 2023-04-11 2023-04-11 Training method of sound effect parameter adjustment model and related equipment


Publications (1)

Publication Number Publication Date
CN116564252A true CN116564252A (en) 2023-08-08

Family

ID=87490664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310383729.0A Pending CN116564252A (en) 2023-04-11 2023-04-11 Training method of sound effect parameter adjustment model and related equipment

Country Status (1)

Country Link
CN (1) CN116564252A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination