CN110853606A - Sound effect configuration method and device and computer readable storage medium


Info

Publication number
CN110853606A
Authority
CN
China
Prior art keywords
sound effect
audio
audio file
playing
target audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911171717.1A
Other languages
Chinese (zh)
Inventor
顾正明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911171717.1A priority Critical patent/CN110853606A/en
Publication of CN110853606A publication Critical patent/CN110853606A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 Information retrieval of audio data
    • G06F 16/61 Indexing; Data structures therefor; Storage structures
    • G06F 16/63 Querying
    • G06F 16/635 Filtering based on additional data, e.g. user or group profiles
    • G06F 16/636 Filtering based on additional data by using biological or physiological data
    • G06F 16/65 Clustering; Classification
    • G06F 16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/683 Retrieval using metadata automatically derived from the content
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 Details of electrophonic musical instruments
    • G10H 1/0091 Means for obtaining special acoustic effects
    • G10H 1/02 Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
    • G10H 2210/00 Aspects or methods of musical processing having intrinsic musical character, as applied in electrophonic musical tools or instruments
    • G10H 2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H 2210/041 Musical analysis based on MFCC (mel-frequency cepstral coefficients)
    • G10H 2210/155 Musical effects
    • G10H 2250/00 Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H 2250/311 Neural networks for electrophonic musical instruments or musical processing, e.g. for musical recognition or control, automatic composition or improvisation
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10 Digital recording or reproducing
    • G11B 20/10009 Improvement or modification of read or write signals
    • G11B 20/10018 Improvement or modification of read or write signals, analog processing for digital recording or reproduction

Abstract

The application provides a sound effect configuration method, a sound effect configuration device, and a computer-readable storage medium. The method extracts audio feature information from a target audio file, determines the music style of the target audio file based on that information, and configures the playing sound effect of the target audio file according to the music style. By classifying audio based on its features and dynamically applying the corresponding sound effect according to the classification result, the scheme effectively reduces the complexity of sound effect configuration, improves its efficiency and adaptability, and enhances the user's audio-visual listening experience.

Description

Sound effect configuration method and device and computer readable storage medium
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a sound effect configuration method and apparatus, and a computer-readable storage medium.
Background
With the continuous development of terminal technology, users frequently use terminals for video and audio entertainment. To improve the listening experience of played audio-video programs, terminals provide a sound effect configuration function in which the user manually adjusts a sound effect equalizer. However, a user without professional knowledge of sound effect configuration can hardly configure the optimal sound effect. In addition, the existing sound effect configuration is global: the manually configured sound effect is applied to the playback of all audio-video files. Since different audio-video files usually differ in style, a globally configured sound effect cannot be well adapted to every program, so the adaptability of sound effect configuration is low.
Disclosure of Invention
The embodiments of the present application provide a sound effect configuration method and apparatus, and a computer-readable storage medium, which can at least solve the problems in the related art that global sound effect configuration by manually adjusting a sound effect equalizer makes sound effect configuration complex and leaves its efficiency and adaptability low.
The first aspect of the embodiment of the present application provides a sound effect configuration method, including:
extracting audio characteristic information of a target audio file;
determining a music style of the target audio file based on the audio feature information;
and configuring the playing sound effect of the target audio file according to the music style.
A second aspect of the embodiments of the present application provides a sound effect configuration apparatus, including:
the extraction module is used for extracting the audio characteristic information of the target audio file;
a determining module, configured to determine a music style of the target audio file based on the audio feature information;
and the configuration module is used for configuring the playing sound effect of the target audio file according to the music style.
A third aspect of the embodiments of the present application provides an electronic apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the sound effect configuration method provided by the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the sound effect configuration method provided by the first aspect of the embodiments of the present application.
As can be seen from the above, according to the sound effect configuration method, device, and computer-readable storage medium provided by the present application, the audio feature information of the target audio file is first extracted, the music style of the target audio file is then determined based on the audio feature information, and finally the playing sound effect of the target audio file is configured according to the music style. By classifying audio based on its features and dynamically applying the corresponding sound effect according to the classification result, the scheme effectively reduces the complexity of sound effect configuration, improves its efficiency and adaptability, and enhances the user's audio-visual listening experience.
Drawings
FIG. 1 is a schematic diagram illustrating a basic flow of a sound effect configuration method according to a first embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a sound effect enhancement configuration method according to a first embodiment of the present application;
fig. 3 is a schematic flowchart of a method for acquiring emotional state data of a user according to a first embodiment of the present application;
FIG. 4 is a detailed flowchart of a sound effect configuration method according to a second embodiment of the present application;
FIG. 5 is a schematic diagram of program modules of a sound effect configuration apparatus according to a third embodiment of the present application;
FIG. 6 is a schematic diagram illustrating program modules of another sound effect configuration apparatus according to a third embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present invention more apparent and understandable, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to overcome the drawbacks of the related art, in which global sound effect configuration by manually adjusting a sound effect equalizer makes sound effect configuration complex and leaves its efficiency and adaptability low, a first embodiment of the present application provides a sound effect configuration method. Fig. 1 is a basic flowchart of the sound effect configuration method provided by this embodiment, and the method includes the following steps:
step 101, extracting audio characteristic information of a target audio file.
Specifically, in this embodiment, when configuring the sound effect of the target audio-video program, it is first required to extract the audio features of the audio file of the audio-video program, where the audio-video program of this embodiment may be a song, and in other embodiments, may also be a movie, a recording, and the like. In addition, the audio file for extracting the audio features in this embodiment may be an entire audio file of a single audio-video program, or may be one or more audio file segments of an audio-video program.
Optionally, extracting the audio feature information of the target audio file specifically includes: decoding the target audio file into Pulse Code Modulation (PCM) format, and extracting the Mel-Frequency Cepstral Coefficients (MFCCs) of the decoded target audio file.
Specifically, in practical applications, the audio feature information may include time-domain features, frequency-domain features, and the like, where the time-domain features include any one or any combination of amplitude, short-time energy, short-time average amplitude, and short-time zero-crossing rate, and the frequency-domain features include any one or any combination of frequency spectrum, fundamental, harmonics, cepstrum, formants, and resonant frequency. The MFCCs of this embodiment characterize how the energy of the audio signal is distributed over different frequency ranges and are therefore representative audio features.
Further, the specific process of extracting mel-frequency cepstrum coefficients includes: pre-emphasis processing is carried out on the audio signal through a high-pass filter; performing frame division processing on the pre-emphasized audio signal; windowing the audio signal subjected to the framing processing by adopting a preset window function; performing fast Fourier transform on the audio signal subjected to windowing processing to obtain an energy spectrum of each frame; enabling the energy spectrum to pass through a group of Mel-scale triangular band-pass filter banks, and calculating the logarithmic energy output by each filter; discrete cosine transform is carried out based on the logarithmic energy to obtain a Mel cepstrum coefficient.
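The extraction pipeline above (pre-emphasis, framing, windowing, FFT, mel filter bank, log energy, DCT) can be sketched with NumPy alone. The frame length, hop size, FFT size, and filter count below are illustrative choices, not values prescribed by the method:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512, n_mels=26, n_ceps=13):
    # 1. Pre-emphasis: first-order high-pass filter.
    x = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # 2. Framing.
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])
    # 3. Windowing with a Hamming window (the preset window function).
    frames = frames * np.hamming(frame_len)
    # 4. Fast Fourier transform -> per-frame energy (power) spectrum.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 5. Mel-scale triangular band-pass filter bank; log energy of each filter output.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)
    # 6. Discrete cosine transform (DCT-II) of the log energies -> MFCCs.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return log_energy @ dct.T
```

Calling `mfcc` on one second of 16 kHz audio yields a matrix of 13 coefficients per frame, which can then serve as the audio feature information fed to the style classifier.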
In an optional implementation manner of this embodiment, extracting the audio feature information of the target audio file includes: when the current playing mode is the list-loop playing mode, determining, from the playlist, the audio-video program that follows the one corresponding to the currently played target audio file; acquiring the audio file corresponding to that next audio-video program as the target audio file; and extracting the audio feature information of the target audio file.
Specifically, in practical applications, the target audio file may be the audio file of the currently played audio-video program or of an audio-video program to be played. However, adjusting the sound effect while an audio-video program is playing makes the sound before and after the adjustment inconsistent, which may feel abrupt to the user. In the list-loop playing mode, the next audio-video program in the list can be determined while the current program is still playing, the sound effect configuration process of this embodiment can then be executed on that program's audio file, and the sound effect of the program to be played is thereby configured in advance. It should be noted that, in practice, in the online playing mode the audio file of the next audio-video program may be cached in advance, while in the offline playing mode it may be obtained directly from local storage.
In an optional implementation manner of this embodiment, before extracting the audio feature information of the target audio file, the method further includes: and detecting whether the sound effect global configuration function is in a closed state, if so, executing the step of extracting the audio characteristic information of the target audio file.
Specifically, in practical applications, some users have a special preference for a certain sound effect and expect that specific sound effect to be used for audio-video programs of any music style; the dynamic configuration of this embodiment is therefore performed only when the global sound effect configuration function is turned off.
In addition, it should be noted that, in practical applications, not all audio-video programs require sound effect configuration. In this embodiment, a sound effect configuration whitelist may be set: when an audio-video program is played, whether the program identifier of the current program is in the whitelist is detected first, and only when it is does the audio file of the current program become the target audio file on which the step of extracting audio feature information is performed.
And 102, determining the music style of the target audio file based on the audio characteristic information.
Specifically, in this embodiment the music styles may include pop, country, rock, blues, and the like. In practical applications, a user listening to rock expects a powerful, loud sound, while pop music emphasizes the rhythm of the instruments and the singer's voice; this embodiment therefore determines the music style based on the audio features so that the sound effect can be adapted dynamically.
In an optional implementation manner of this embodiment, determining the music style of the target audio file based on the audio feature information includes: inputting the audio characteristic information into a style classification model to classify the music style, so as to obtain the music style corresponding to the target audio file; the style classification model is obtained by training a neural network based on preset training samples, and each training sample comprises a classification label and audio characteristic information corresponding to the classification label.
Specifically, in the present embodiment the model is trained through a supervised learning algorithm, i.e., on input data consisting of audio feature information with explicit classification labels. The training samples may be expert data and/or the user's habit data, and the neural network adopted may be any one of a deep neural network (DNN), a convolutional neural network (CNN), or a recurrent neural network (RNN). Training is performed on the training samples with a chosen optimization algorithm in a specific training environment; the learning rate and the number of training iterations can be determined according to actual requirements and are not limited here.
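As a minimal illustration of the supervised training described above, the sketch below trains a small NumPy multilayer perceptron with softmax cross-entropy. The three style labels and the synthetic 13-dimensional feature vectors (standing in for per-file MFCC statistics) are assumptions for demonstration; a real system would use one of the networks named above on labeled expert or user data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_feat, n_hidden = 3, 13, 32

# Synthetic training set: one Gaussian cluster per hypothetical style label.
means = rng.normal(scale=3.0, size=(n_classes, n_feat))
y = np.repeat(np.arange(n_classes), 100)
X = means[y] + rng.normal(size=(len(y), n_feat))
onehot = np.eye(n_classes)[y]

# One-hidden-layer network trained by full-batch gradient descent.
W1 = rng.normal(scale=0.1, size=(n_feat, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes)); b2 = np.zeros(n_classes)
lr = 0.5
for _ in range(500):
    h = np.maximum(0.0, X @ W1 + b1)                  # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)                      # softmax probabilities
    g = (p - onehot) / len(X)                         # cross-entropy gradient
    gh = (g @ W2.T) * (h > 0)                         # backprop through ReLU
    W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

pred = (np.maximum(0.0, X @ W1 + b1) @ W2 + b2).argmax(1)
train_acc = (pred == y).mean()
```

The `argmax` over the output logits plays the role of the style classification: each audio file's feature vector is mapped to the music style with the highest predicted score.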
And 103, configuring the playing sound effect of the target audio file according to the music style.
Specifically, audio is classified based on its audio features, and the corresponding sound effect is dynamically applied according to the classification result. The configured playing sound effect is then loaded onto the audio file when the corresponding audio-video program is played, the audio is processed by the analog circuit, and the finally played sound has enhanced audibility, which is embodied in greater realism and atmosphere. Through the dynamic sound effect configuration of this embodiment, the complexity of sound effect configuration is effectively reduced, its efficiency and adaptability are improved, and the user's audio-visual listening experience is enhanced.
In an optional implementation manner of this embodiment, after configuring the play sound effects of the target audio file according to the music style, the method further includes: associating the playing sound effect of the target audio file with the program identification of the audio-video program corresponding to the target audio file; and storing the association relation obtained by association into a play sound effect index table.
Specifically, after sound effect configuration for the target audio file is completed, in order to save terminal processing resources and improve efficiency when the corresponding audio-video program is played again, this embodiment associates the program played for the first time with its configured playing sound effect, so that the same playing sound effect can be called directly the next time the same program is played, without executing the whole configuration process again.
In an optional implementation manner of this embodiment, before extracting the audio feature information of the target audio file, the method further includes: querying a preset playing sound effect index table, which maps program identifiers to playing sound effects, based on the program identifier of the audio-video program corresponding to the target audio file; when the query succeeds, directly configuring the queried playing sound effect as the playing sound effect of the target audio file; and when the query fails, executing the step of extracting the audio feature information of the target audio file.
Specifically, the playing sound effect index table in this embodiment may be generated by associating configured sound effects with the corresponding audio-video programs as described above; in other embodiments it may also be generated from ready-made expert data acquired from a server, or by associating a sound effect manually configured by the terminal user with the corresponding program. When a sound effect needs to be configured, it is first determined whether a ready-made playing sound effect can be found in the index table for the program to be configured: if so, that sound effect is used directly; if not, the step of extracting the audio feature information of the target audio file is executed.
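The look-up-then-fall-back logic above can be sketched as follows. The table layout, the sample entry, and the `configure_by_style` callback are illustrative assumptions, not an interface defined by the method:

```python
# Hypothetical playing sound effect index table: program id -> effect parameters.
sound_effect_index = {
    "song-001": {"bass": 4, "treble": 2, "surround": 3},
}

def configure_sound_effect(program_id, configure_by_style):
    """Reuse an indexed effect if present; otherwise run the full pipeline and cache it."""
    effect = sound_effect_index.get(program_id)
    if effect is not None:                        # query succeeded: ready-made effect
        return effect
    effect = configure_by_style(program_id)       # query failed: extract features, classify, configure
    sound_effect_index[program_id] = effect       # associate program id with the new effect
    return effect
```

On a cache hit the feature extraction and classification steps are skipped entirely, which is exactly the processing-cost saving the embodiment describes.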
As shown in fig. 2, which is a flowchart illustrating a sound effect enhancement configuration method provided in this embodiment, further, in an optional implementation manner of this embodiment, configuring the playing sound effect of the target audio file according to the music style specifically includes the following steps:
step 201, determining a corresponding playing sound effect according to the music style and the mapping relation between the music style and the playing sound effect;
step 202, acquiring user emotion state data, and performing sound effect strengthening processing on the determined playing sound effect when the user emotion state data exceeds a normal emotion state data range;
step 203, configuring the playing sound effect after the sound effect strengthening processing as the playing sound effect of the target audio file.
In particular, since a playing sound effect determined only from the music style must suit the listening preferences of the whole user population, it may feel relatively weak for extreme user groups or extreme emotional states of a user. Therefore, when configuring the sound effect, the sound effect determined from the music style can additionally be reinforced according to attributes of the audio file and external user factors. The reinforcement may consist of increasing or decreasing the relevant sound effect parameters by a certain proportion; for example, when the user's emotional agitation index exceeds the normal range, the bass index and the surround intensity index among the sound effect parameters are increased by a preset proportion, so that the resulting sound conforms both to the music style and to the user's emotional state.
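The reinforcement step can be sketched as below. The parameter names, the agitation-index range, and the 20% boost ratio are all illustrative assumptions; the patent only specifies boosting by a preset proportion when the emotional state data leave the normal range:

```python
NORMAL_AGITATION_RANGE = (0.0, 0.7)   # hypothetical normal range of the agitation index
BOOST_RATIO = 0.2                     # hypothetical preset reinforcement proportion

def reinforce_effect(effect, agitation_index):
    """Boost bass and surround intensity when the user's agitation exceeds the normal range."""
    low, high = NORMAL_AGITATION_RANGE
    if low <= agitation_index <= high:
        return dict(effect)           # within the normal range: keep the style-based effect
    boosted = dict(effect)
    boosted["bass"] = effect["bass"] * (1 + BOOST_RATIO)
    boosted["surround"] = effect["surround"] * (1 + BOOST_RATIO)
    return boosted
```

Other parameters (here, `treble`) are left untouched, so the reinforced effect still reflects the music-style configuration from step 201.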
As shown in fig. 3, which is a schematic flow chart of a method for acquiring emotional state data of a user according to this embodiment, further, in an optional implementation manner of this embodiment, acquiring emotional state data of the user specifically includes the following steps:
Step 301, acquiring the music styles of all audio files played within a historical time period;
step 302, performing statistical processing on the acquired music style to obtain music style statistical data;
step 303, determining the emotional state data of the user based on the music style statistical data.
Specifically, the user's music listening habits reflect the user's emotional state to a certain extent. Based on this, in this embodiment the user's emotional state data are determined from statistics on the music styles of the audio files the user listened to during a historical period: for example, if the user has listened mostly to classical music in the last two hours, the user's current mood is likely calm, whereas if almost all of it was metal, the user's mood is likely restless. In addition, in other embodiments the user's emotional state may also be obtained from the user's physiological parameters; for example, the terminal may receive characteristic parameters (e.g., heartbeat, pulse) collected by a wearable device (e.g., a smart band) over a communication connection with that device, and then determine the emotional state based on those parameters.
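Steps 301 to 303 can be sketched as a simple tally over the recent play history. The mapping from dominant style to mood label is an illustrative assumption based on the classical/metal example above:

```python
from collections import Counter

# Hypothetical mapping from the dominant recent style to an emotional-state label.
STYLE_TO_MOOD = {"classical": "calm", "metal": "restless"}

def emotion_from_history(recent_styles):
    """Tally the styles played in the historical period and map the dominant one to a mood."""
    stats = Counter(recent_styles)                  # step 302: music style statistics
    dominant, _ = stats.most_common(1)[0]
    return STYLE_TO_MOOD.get(dominant, "neutral")   # step 303: emotional state data
```

A production implementation would return numerical emotional state data (so it can be compared against a normal range, as in step 202) rather than a label, but the statistical structure is the same.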
In an optional implementation manner of this embodiment, configuring the playing sound effect of the target audio file according to the music style specifically includes: determining a corresponding playing sound effect according to the music style and the mapping relation between the music style and the playing sound effect; acquiring the limit working frequency of an audio functional component playing the target audio file under each frequency component; when the frequency of the corresponding frequency component in the playing sound effect exceeds the limit working frequency, adjusting the frequency of the corresponding frequency component in the playing sound effect to the limit working frequency; and configuring the playing sound effect after the frequency adjustment as the playing sound effect of the target audio file.
Specifically, in practical applications, the audio output components (e.g., earphones and terminal speakers) differ in configuration, and a sound effect matched purely by music style may not be universal: some lower-performance earphones or terminals may be overdriven or produce distorted sound under such an effect, making the playing quality even worse than with the default sound effect. This embodiment therefore takes the performance differences of the actual audio components into account, adjusts the frequency portions of the playing sound effect that are not adapted to the hardware, and finally configures the adjusted sound effect as the playing sound effect of the target audio file. It should be understood that each frequency component corresponds to a different frequency band, that the limit operating frequencies are the highest and the lowest operating frequency, and that exceeding the limit operating frequency means being greater than the highest or less than the lowest operating frequency, respectively.
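The frequency adjustment amounts to clamping each configured frequency into the component's operating range. The band names and limit frequencies below are illustrative assumptions about a particular audio component:

```python
# Hypothetical limit operating frequencies (Hz) of the audio component per band.
HW_LIMITS = {"bass": (40, 250), "mid": (250, 4000), "treble": (4000, 16000)}

def clamp_effect_frequencies(effect_freqs):
    """Pull each band's configured frequency back inside the component's limits."""
    adjusted = {}
    for band, freq in effect_freqs.items():
        low, high = HW_LIMITS[band]
        adjusted[band] = min(max(freq, low), high)   # exceeding either limit -> set to that limit
    return adjusted
```

Frequencies already inside the range pass through unchanged, so only the non-adapted portions of the playing sound effect are modified.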
Based on the technical solution of this embodiment of the application, the audio feature information of the target audio file is first extracted, the music style of the target audio file is then determined based on the audio feature information, and finally the playing sound effect of the target audio file is configured according to the music style. By classifying audio based on its features and dynamically applying the corresponding sound effect according to the classification result, the scheme effectively reduces the complexity of sound effect configuration, improves its efficiency and adaptability, and enhances the user's audio-visual listening experience.
Fig. 4 shows a detailed sound effect configuration method provided in the second embodiment of the present application. The sound effect configuration method includes:
Step 401, determine, from the playlist, the next audio-video program located after the currently played audio-video program.
Specifically, to avoid the abrupt feeling caused by configuring a sound effect for an audio-video program while that program is playing, the sound effect configuration for the next audio-video program is triggered during playback of the current one.
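In list-loop mode, "next" wraps around to the head of the playlist. A minimal sketch, assuming the playlist is a list of program identifiers (the representation is illustrative):

```python
def next_program(playlist, current_id):
    """Return the program after the current one in list-loop mode,
    wrapping to the start when the current program is the last."""
    i = playlist.index(current_id)
    return playlist[(i + 1) % len(playlist)]
```

For example, `next_program(["p1", "p2", "p3"], "p3")` returns `"p1"`, so the first program's sound effect can be prepared while the last one is still playing.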
Step 402, obtain the audio feature information of the audio file corresponding to the next audio-video program.
Specifically, the audio feature information of this embodiment may include time-domain features, frequency-domain features, and the like. The time-domain features include any one or any combination of amplitude, short-time energy, short-time average amplitude, and short-time zero-crossing rate; the frequency-domain features include any one or any combination of frequency spectrum, fundamental, harmonics, cepstrum, formants, and resonant frequency.
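Two of the time-domain features listed above, short-time energy and short-time zero-crossing rate, can be computed per frame as in this sketch (the frame length and the absence of windowing/overlap are simplifications for illustration):

```python
def frame_features(samples, frame_len=256):
    """For each non-overlapping frame, compute (short-time energy,
    short-time zero-crossing rate)."""
    feats = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        # Short-time energy: sum of squared samples in the frame.
        energy = sum(s * s for s in frame)
        # Zero-crossing rate: fraction of adjacent pairs changing sign.
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
        ) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats
```

A percussive or noisy signal yields high zero-crossing rates, while a sustained tonal signal yields low ones, which is what makes these features useful for style classification.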
Step 403, input the audio feature information into the style classification model for music style classification, obtaining the music style corresponding to the next audio-video program.
In this embodiment, the style classification model is obtained by training a neural network on preset training samples, where each training sample includes a classification label and the audio feature information corresponding to that label. The model is trained with a supervised learning algorithm among deep learning algorithms, that is, the input data carries definite classification labels alongside the audio feature information; music style classification is then performed by the trained model, which effectively ensures the accuracy of the classification result.
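The train-then-classify flow can be sketched as below. Purely for illustration, a nearest-centroid rule stands in for the neural network; the feature vectors and labels are invented:

```python
class StyleClassifier:
    """Stand-in for the style classification model: fit on
    (audio_feature_vector, style_label) training samples, then
    predict a style for new feature vectors."""

    def fit(self, samples):
        sums, counts = {}, {}
        for feats, label in samples:
            acc = sums.setdefault(label, [0.0] * len(feats))
            for j, v in enumerate(feats):
                acc[j] += v
            counts[label] = counts.get(label, 0) + 1
        # One centroid (mean feature vector) per classification label.
        self.centroids = {
            lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()
        }
        return self

    def predict(self, feats):
        # Label whose centroid is closest in squared distance.
        return min(
            self.centroids,
            key=lambda lab: sum(
                (a - b) ** 2 for a, b in zip(feats, self.centroids[lab])
            ),
        )

model = StyleClassifier().fit(
    [([0.9, 0.1], "rock"), ([0.8, 0.2], "rock"),
     ([0.1, 0.9], "classical"), ([0.2, 0.8], "classical")]
)
print(model.predict([0.85, 0.15]))  # prints: rock
```

A production system would replace this with a trained network, but the interface (labeled features in, style label out) is the same.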
Step 404, determine the playing sound effect corresponding to the obtained music style according to the preset mapping relationship between music styles and playing sound effects.
Step 405, acquire the user emotional state data and judge whether it exceeds the normal emotional state data range; if yes, go to step 406; otherwise, go to step 408.
Specifically, the user emotional state data represents the emotional or psychological state of the user; in practical applications it may be one or a combination of the user's heartbeat data, pulse data, and body temperature data. Since the playing sound effect determined from the music style in this embodiment generally needs to suit the user population as a whole, its fit may be relatively weak for a particular user group or a particular emotional state. Therefore, when configuring the sound effect, external user factors can be considered in addition to the attributes of the audio file itself.
Step 406, perform sound effect enhancement processing on the determined playing sound effect.
Step 407, configure the sound effect after the enhancement processing as the playing sound effect of the next audio-video program.
In this embodiment, when the user's emotional state is in an extreme state beyond the normal range (for example, emotions that are too high or too low, or emotional fluctuations that are too frequent), the sound effect determined from the music style is enhanced before being configured. The enhancement may increase or decrease the relevant sound effect parameters by a certain proportion, so that the listening experience matches not only the music style itself but also the user's emotional state.
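The decision in steps 405–408 might look like the following sketch. The parameter names, the 20% gain, and the scalar emotion value are illustrative assumptions, not values taken from the disclosure:

```python
def enhance_effect(effect_params, gain=1.2):
    """Sound effect enhancement: scale the relevant effect parameters
    by a fixed proportion (the 20% here is an invented value)."""
    return {name: value * gain for name, value in effect_params.items()}

def configure_effect(style, emotion, normal_range, style_to_effect):
    """Pick the effect mapped from the music style; enhance it only
    when the emotional state data falls outside the normal range."""
    effect = dict(style_to_effect[style])  # copy so the mapping stays intact
    lo, hi = normal_range
    if not (lo <= emotion <= hi):
        effect = enhance_effect(effect)
    return effect
```

For instance, with `style_to_effect = {"rock": {"bass_gain": 5.0}}` and a normal heart-rate range of `(60, 100)`, an emotion value of 120 triggers enhancement while 80 leaves the mapped effect unchanged.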
Step 408, directly configure the determined playing sound effect as the playing sound effect of the next audio-video program.
When the user's emotional state is normal, the playing sound effect is configured directly as the sound effect associated with the music style, which meets the user's ordinary listening needs.
It should be understood that the numbering of the steps in this embodiment does not imply an execution order; the execution order of the steps is determined by their function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The embodiments of the present application disclose a sound effect configuration method: inputting the audio feature information of the audio file corresponding to the next audio-video program into the style classification model for music style classification to obtain the music style corresponding to the next audio-video program; determining the playing sound effect corresponding to the obtained music style according to the preset mapping relationship between music styles and playing sound effects; when the user emotional state data exceeds the normal emotional state data range, performing sound effect enhancement processing on the determined playing sound effect and configuring the enhanced sound effect as the playing sound effect of the next audio-video program; and when the user emotional state data is within the normal emotional state data range, directly configuring the determined playing sound effect as the playing sound effect of the next audio-video program. Through this solution, sound effects are classified based on audio features and the corresponding sound effect is applied dynamically according to the classification result, which effectively reduces the complexity of sound effect configuration, improves its efficiency and applicability, and enhances the user's listening experience.
Fig. 5 is a sound effect configuration apparatus according to a third embodiment of the present application. The sound effect configuration device can be used for realizing the sound effect configuration method in the embodiment. As shown in fig. 5, the sound effect configuration apparatus mainly includes:
an extracting module 501, configured to extract audio feature information of a target audio file;
a determining module 502, configured to determine a music style of the target audio file based on the audio feature information;
the configuration module 503 is configured to configure the playing sound effect of the target audio file according to the music style.
In an optional implementation manner of this embodiment, the determining module 502 is specifically configured to: inputting the audio characteristic information into a style classification model to classify the music style, so as to obtain the music style corresponding to the target audio file; the style classification model is obtained by training a neural network based on preset training samples, and each training sample comprises a classification label and audio characteristic information corresponding to the classification label.
In an optional implementation of this embodiment, the extraction module 501 is specifically configured to: when the current playing mode is the list-loop playing mode, determine from the playlist the next audio-video program after the audio-video program corresponding to the currently played target audio file; acquire the audio file corresponding to that next audio-video program as the target audio file; and extract the audio feature information of the target audio file.
As shown in fig. 6, in an optional implementation of this embodiment, another sound effect configuration apparatus provided by this embodiment further includes: a query module 504 configured to query a preset playing-sound-effect index table based on the program identifier of the audio-video program corresponding to the target audio file, where the index table records the correspondence between program identifiers and playing sound effects. Correspondingly, when the query by the query module 504 succeeds, the configuration module 503 is further configured to directly configure the queried playing sound effect as the playing sound effect of the target audio file; when the query fails, the extraction module 501 performs the function of extracting the audio feature information of the target audio file.
Referring to fig. 6 again, in an alternative implementation manner of this embodiment, the sound effect configuration apparatus further includes: the association module 505 is configured to, after configuring the playing sound effect of the target audio file according to the music style, associate the playing sound effect of the target audio file with the program identifier of the audio-video program corresponding to the target audio file; and storing the association relation obtained by association into a play sound effect index table.
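The lookup by the query module 504 and the write-back by the association module 505 amount to a cache-style pattern, sketched below; the function and variable names are illustrative, not taken from the disclosure:

```python
def get_play_effect(program_id, effect_index, classify_and_configure):
    """Query the playing-sound-effect index table by program
    identifier; fall back to feature extraction + classification
    only on a miss, then store the association for reuse."""
    effect = effect_index.get(program_id)
    if effect is not None:
        return effect                    # hit: reuse the stored effect
    effect = classify_and_configure(program_id)  # the full pipeline
    effect_index[program_id] = effect    # associate and store (module 505)
    return effect
```

On a repeat playback the stored association short-circuits the whole classification pipeline, which is the efficiency gain the index table exists for.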
In addition, in an optional implementation manner of this embodiment, the configuration module 503 is specifically configured to: determining a corresponding playing sound effect according to the music style and the mapping relation between the music style and the playing sound effect; acquiring user emotion state data, and performing sound effect strengthening processing on the determined playing sound effect when the user emotion state data exceeds a normal emotion state data range; configuring the playing sound effect after the sound effect strengthening processing into the playing sound effect of the target audio file.
Further, in an optional implementation manner of this embodiment, when acquiring the emotional state data of the user, the configuration module 503 is specifically configured to: acquiring the music styles of all audio files played in a historical time period; carrying out statistical processing on the acquired music style to obtain music style statistical data; user emotional state data is determined based on the music style statistics.
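One way to read this "emotion from play-history statistics" step: count the styles played over the historical period and map the distribution to a scalar state value. The per-style score table below is an invented illustration, not part of the disclosure:

```python
from collections import Counter

def emotion_from_history(style_history, style_score):
    """Estimate user emotional state data from the music styles played
    in a historical period: tally the styles, then take the
    play-count-weighted average of a per-style score."""
    counts = Counter(style_history)          # music style statistics
    total = sum(counts.values())
    return sum(style_score[s] * n for s, n in counts.items()) / total
```

For example, a history dominated by high-energy styles yields a high state value, which the configuration module could then compare against the normal range.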
In an optional implementation of this embodiment, the configuration module 503 is specifically configured to: determine the corresponding playing sound effect according to the music style and the mapping relationship between music styles and playing sound effects; acquire, for each frequency component, the limit operating frequency of the audio functional component that plays the target audio file; when the frequency of a corresponding frequency component in the determined playing sound effect exceeds the limit operating frequency, adjust the frequency of that frequency component to the limit operating frequency; and configure the frequency-adjusted playing sound effect as the playing sound effect of the target audio file.
It should be noted that the sound effect configuration methods of the first and second embodiments can be implemented based on the sound effect configuration apparatus provided by this embodiment. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the sound effect configuration apparatus described in this embodiment may refer to the corresponding process in the foregoing method embodiments, and is not repeated here.
According to the sound effect configuration apparatus provided by this embodiment, the audio feature information of the target audio file is first extracted, the music style of the target audio file is then determined based on the audio feature information, and the playing sound effect of the target audio file is finally configured according to the music style. Through this solution, sound effects are classified based on audio features and the corresponding sound effect is applied dynamically according to the classification result, which effectively reduces the complexity of sound effect configuration, improves its efficiency and applicability, and enhances the user's listening experience.
Referring to fig. 7, fig. 7 is an electronic device according to a fourth embodiment of the present disclosure. The electronic device can be used for realizing the sound effect configuration method in the embodiment. As shown in fig. 7, the electronic device mainly includes:
a memory 701, a processor 702, a bus 703 and a computer program stored on the memory 701 and executable on the processor 702, the memory 701 and the processor 702 being connected by the bus 703. The processor 702, when executing the computer program, implements the sound effect configuration method in the foregoing embodiments. Wherein the number of processors may be one or more.
The memory 701 may be a high-speed random access memory (RAM) or a non-volatile memory, such as disk storage. The memory 701 is used to store executable program code, and the processor 702 is coupled to the memory 701.
Further, an embodiment of the present application also provides a computer-readable storage medium, where the computer-readable storage medium may be provided in an electronic device in the foregoing embodiments, and the computer-readable storage medium may be the memory in the foregoing embodiment shown in fig. 7.
The computer-readable storage medium stores a computer program which, when executed by a processor, implements the sound effect configuration method of the foregoing embodiments. Further, the computer-readable storage medium may be any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, or an optical disk.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a readable storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned readable storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
It should be noted that, for simplicity of description, the foregoing method embodiments are presented as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, as some steps may be performed in other orders or simultaneously. Further, those skilled in the art should also understand that the embodiments described in the specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The sound effect configuration method, apparatus, and computer-readable storage medium provided by the present application have been described above. For those skilled in the art, there may be variations in specific implementations and application scope in accordance with the ideas of the embodiments of the present application; therefore, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A sound effect configuration method is characterized by comprising the following steps:
extracting audio characteristic information of a target audio file;
determining a music style of the target audio file based on the audio feature information;
and configuring the playing sound effect of the target audio file according to the music style.
2. The sound effect configuration method according to claim 1, wherein the extracting the audio feature information of the target audio file comprises:
when the current playing mode is a list-loop playing mode, determining from the playlist the next audio-video program after the audio-video program corresponding to the currently played target audio file;
acquiring an audio file corresponding to the next audio-video program as a target audio file;
and extracting the audio characteristic information of the target audio file.
3. The sound effect configuration method according to claim 1, wherein before extracting the audio feature information of the target audio file, the method further comprises:
inquiring a preset playing sound effect index table based on the program identification of the audio-video program corresponding to the target audio file; wherein, the playing sound effect index table is a corresponding relation table of program identification and playing sound effect;
when the query is successful, directly configuring the queried playing sound effect as the playing sound effect of the target audio file;
and when the query fails, executing the step of extracting the audio characteristic information of the target audio file.
4. The sound effect configuration method according to claim 3, wherein after configuring the playing sound effect of the target audio file according to the music style, the method further comprises:
associating the playing sound effect of the target audio file with the program identification of the audio-video program corresponding to the target audio file;
and storing the association relation obtained by association to the play sound effect index table.
5. The sound effect configuration method according to any one of claims 1 to 4, wherein the configuring the playing sound effect of the target audio file according to the music style comprises:
determining a corresponding playing sound effect according to the music style and the mapping relation between the music style and the playing sound effect;
acquiring user emotion state data, and performing sound effect strengthening processing on the determined playing sound effect when the user emotion state data exceeds a normal emotion state data range;
and configuring the playing sound effect after the sound effect strengthening treatment into the playing sound effect of the target audio file.
6. The sound effect configuration method according to claim 5, wherein the acquiring of the user emotional state data comprises:
acquiring the music styles of all audio files played in a historical time period;
carrying out statistical processing on the acquired music style to obtain music style statistical data;
determining user emotional state data based on the music style statistics.
7. The sound effect configuration method according to any one of claims 1 to 4, wherein the configuring the playing sound effect of the target audio file according to the music style comprises:
determining a corresponding playing sound effect according to the music style and the mapping relation between the music style and the playing sound effect;
acquiring, for each frequency component, the limit operating frequency of the audio functional component playing the target audio file;
when the frequency of the corresponding frequency component in the playing sound effect exceeds the limit operating frequency, adjusting the frequency of that frequency component in the playing sound effect to the limit operating frequency;
and configuring the playing sound effect after the frequency adjustment as the playing sound effect of the target audio file.
8. A sound effect configuration apparatus, comprising:
the extraction module is used for extracting the audio characteristic information of the target audio file;
a determining module, configured to determine a music style of the target audio file based on the audio feature information;
and the configuration module is used for configuring the playing sound effect of the target audio file according to the music style.
9. An electronic device, comprising: the system comprises a memory, a processor and a bus, wherein the bus is used for realizing connection communication between the memory and the processor; the processor is configured to execute a computer program stored on the memory, and when the processor executes the computer program, the processor implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201911171717.1A 2019-11-26 2019-11-26 Sound effect configuration method and device and computer readable storage medium Pending CN110853606A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911171717.1A CN110853606A (en) 2019-11-26 2019-11-26 Sound effect configuration method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911171717.1A CN110853606A (en) 2019-11-26 2019-11-26 Sound effect configuration method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN110853606A true CN110853606A (en) 2020-02-28

Family

ID=69604548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911171717.1A Pending CN110853606A (en) 2019-11-26 2019-11-26 Sound effect configuration method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110853606A (en)

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012004885A (en) * 2010-06-17 2012-01-05 Hoshun Ri Voice speech terminal device, voice speech system, and voice speech method
CN103186527A (en) * 2011-12-27 2013-07-03 北京百度网讯科技有限公司 System for building music classification model, system for recommending music and corresponding method
CN104866091A (en) * 2015-03-25 2015-08-26 百度在线网络技术(北京)有限公司 Method and device for outputting audio-effect information in computer equipment
CN106024034A (en) * 2016-06-16 2016-10-12 广东欧珀移动通信有限公司 Method for adjusting sound effect and terminal
CN107169430A (en) * 2017-05-02 2017-09-15 哈尔滨工业大学深圳研究生院 Reading environment audio strengthening system and method based on image procossing semantic analysis
CN107249080A (en) * 2017-06-26 2017-10-13 维沃移动通信有限公司 A kind of method, device and mobile terminal for adjusting audio
CN107332994A (en) * 2017-06-29 2017-11-07 深圳传音控股有限公司 A kind of tuning effect Self Matching method and system
CN107590152A (en) * 2016-07-07 2018-01-16 乐视控股(北京)有限公司 A kind of method and device for the audio pattern for adjusting audio
CN107633853A (en) * 2017-08-03 2018-01-26 广东小天才科技有限公司 Control method for playing audio and video files and user terminal
CN107689229A (en) * 2017-09-25 2018-02-13 广东小天才科技有限公司 Voice processing method and device for wearable equipment
CN108010512A (en) * 2017-12-05 2018-05-08 广东小天才科技有限公司 Sound effect acquisition method and recording terminal
CN109190652A (en) * 2018-07-06 2019-01-11 中国平安人寿保险股份有限公司 It attends a banquet sort management method, device, computer equipment and storage medium
CN109308912A (en) * 2018-08-02 2019-02-05 平安科技(深圳)有限公司 Music style recognition methods, device, computer equipment and storage medium
CN109492664A (en) * 2018-09-28 2019-03-19 昆明理工大学 A kind of musical genre classification method and system based on characteristic weighing fuzzy support vector machine
CN109697290A (en) * 2018-12-29 2019-04-30 咪咕数字传媒有限公司 Information processing method, information processing equipment and computer storage medium
CN109726309A (en) * 2018-11-22 2019-05-07 百度在线网络技术(北京)有限公司 Audio generation method, device and storage medium
CN109872710A (en) * 2019-03-13 2019-06-11 腾讯音乐娱乐科技(深圳)有限公司 Audio modulator approach, device and storage medium
CN110135355A (en) * 2019-05-17 2019-08-16 吉林大学 A method of utilizing color and audio active control driver's mood
CN110188235A (en) * 2019-05-05 2019-08-30 平安科技(深圳)有限公司 Music style classification method, device, computer equipment and storage medium
CN110189742A (en) * 2019-05-30 2019-08-30 芋头科技(杭州)有限公司 Determine emotion audio, affect display, the method for text-to-speech and relevant apparatus
CN110188356A (en) * 2019-05-30 2019-08-30 腾讯音乐娱乐科技(深圳)有限公司 Information processing method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703711A (en) * 2020-05-20 2021-11-26 阿里巴巴集团控股有限公司 Playing sound effect control method and device, electronic equipment and computer storage medium
WO2021248964A1 (en) * 2020-06-09 2021-12-16 广东美的制冷设备有限公司 Home appliance and control method therefor, and computer-readable storage medium
CN113852521A (en) * 2020-06-09 2021-12-28 广东美的制冷设备有限公司 Household appliance, control method thereof and computer readable storage medium
CN112185325A (en) * 2020-10-12 2021-01-05 上海闻泰电子科技有限公司 Audio playing style adjusting method and device, electronic equipment and storage medium
CN112927665A (en) * 2021-01-22 2021-06-08 咪咕音乐有限公司 Authoring method, electronic device, and computer-readable storage medium
CN112927665B (en) * 2021-01-22 2022-08-30 咪咕音乐有限公司 Authoring method, electronic device, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
US20230056955A1 (en) Deep Learning Based Method and System for Processing Sound Quality Characteristics
CN110853606A (en) Sound effect configuration method and device and computer readable storage medium
US10014002B2 (en) Real-time audio source separation using deep neural networks
CN106898340B (en) Song synthesis method and terminal
CN108305603B (en) Sound effect processing method and equipment, storage medium, server and sound terminal thereof
US10325615B2 (en) Real-time adaptive audio source separation
WO2019109787A1 (en) Audio classification method and apparatus, intelligent device, and storage medium
JP2022173437A (en) Volume leveler controller and controlling method
JP4150798B2 (en) Digital filtering method, digital filter device, digital filter program, and computer-readable recording medium
CN103943104B (en) A kind of voice messaging knows method for distinguishing and terminal unit
US10635389B2 (en) Systems and methods for automatically generating enhanced audio output
US10971125B2 (en) Music synthesis method, system, terminal and computer-readable storage medium
WO2019233361A1 (en) Method and device for adjusting volume of music
WO2011035626A1 (en) Audio playing method and audio playing apparatus
JP2023527473A (en) AUDIO PLAYING METHOD, APPARATUS, COMPUTER-READABLE STORAGE MEDIUM AND ELECTRONIC DEVICE
GB2595222A (en) Digital audio workstation with audio processing recommendations
CN108172241B (en) Music recommendation method and music recommendation system based on intelligent terminal
EP3920049A1 (en) Techniques for audio track analysis to support audio personalization
US20240213943A1 (en) Dynamic audio playback equalization using semantic features
CN114333874A (en) Method for processing audio signal
CN112992167A (en) Audio signal processing method and device and electronic equipment
CN113395577A (en) Sound changing playing method and device, storage medium and electronic equipment
Sandvold et al. Towards a semantic descriptor of subjective intensity in music
WO2024130865A1 (en) Audio signal enhancement method and apparatus, and device and readable storage medium
WO2024103383A1 (en) Audio processing method and apparatus, and device, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200228)