CN115410544A - Sound effect processing method and device and electronic equipment - Google Patents


Info

Publication number
CN115410544A
CN115410544A (application CN202211037097.4A); granted as CN115410544B
Authority
CN
China
Prior art keywords
song
sound effect
processed
audio
style
Prior art date
Legal status
Granted
Application number
CN202211037097.4A
Other languages
Chinese (zh)
Other versions
CN115410544B (en)
Inventor
夏妍
林锋
Current Assignee
Mgjia Beijing Technology Co ltd
Original Assignee
Mgjia Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Mgjia Beijing Technology Co ltd
Priority to CN202211037097.4A
Publication of CN115410544A
Application granted
Publication of CN115410544B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 Details of electrophonic musical instruments
    • G10H 1/0091 Means for obtaining special acoustic effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/65 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/683 Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/685 Retrieval using an automatically derived transcript of audio data, e.g. lyrics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a sound effect processing method, a sound effect processing device, and electronic equipment. The sound effect processing method comprises the following steps: acquiring text information that corresponds to a song to be processed and characterizes the song's type; classifying the style of the song to be processed according to that text information and the song's audio byte array; determining a style sound effect for the song with a preset sound effect decision method according to the style classification result; acquiring audio target features of the song; performing a sound effect compensation operation according to those features to obtain an audio compensation sound effect for the song; and storing, in association, the song's attribute information, its compensation sound effect, and its style sound effect.

Description

Sound effect processing method and device and electronic equipment
Technical Field
The invention relates to the technical field of intelligent sound effect, in particular to a sound effect processing method and device and electronic equipment.
Background
Existing intelligent sound effect technology generally selects sound effects with fairly simple rules, for example matching an effect to music based on the genre information supplied by the song's producer. Because that genre information is very coarse, the effects matched on that basis are rough, and because the producer-supplied genre may itself be inaccurate, a completely unsuitable effect can be assigned to a song, degrading the result of adding the effect.
Disclosure of Invention
Therefore, the technical problem to be solved by the present invention is to overcome the defect of inaccurate sound effect allocation in the prior art, and to provide a sound effect processing method, a sound effect processing device, and an electronic device.
According to a first aspect, an embodiment of the present invention discloses a sound effect processing method, including: acquiring text information that corresponds to a song to be processed and characterizes the song's type; classifying the style of the song to be processed according to that text information and the song's audio byte array; determining a style sound effect for the song with a preset sound effect decision method according to the style classification result; acquiring audio target features of the song; performing a sound effect compensation operation according to those features to obtain an audio compensation sound effect for the song; and storing, in association, the song's attribute information, its compensation sound effect, and its style sound effect.
Optionally, the method further comprises: when a sound effect application request is received, responding to an operation of adding the style sound effect and/or the compensation sound effect to the song to be played.
Optionally, the responding, when a sound effect application request is received, to the operation of adding the style sound effect and/or the compensation sound effect to the song to be played includes: when a sound effect application request is received, carrying out a song matching operation according to the attribute information of the song to be played; when a corresponding song is matched, comparing the similarity between the audio of the song to be played and the audio of the matched song; and when the similarity is greater than a preset threshold, adding the corresponding sound effect to the song to be played.
Optionally, the method further comprises: displaying, at the client, identification information of the currently added sound effect while responding to the operation of adding the sound effect to the song to be played.
Optionally, when a corresponding song is matched, comparing the similarity between the audio of the song to be played and the audio of the matched song includes: obtaining a first frequency distribution vector from the frequency distribution of the audio of the song to be played within a target duration; determining a second frequency distribution vector of the matched song's audio within the target duration; and comparing the similarity of the first frequency distribution vector and the second frequency distribution vector.
According to a second aspect, an embodiment of the present invention further discloses a sound effect processing apparatus, including: a first acquisition module, configured to acquire text information that corresponds to the song to be processed and characterizes the song's type; a classification module, configured to classify the style of the song to be processed according to that text information and the song's audio byte array; a first determining module, configured to determine the style sound effect of the song with a preset sound effect decision method according to the style classification result; a second acquisition module, configured to acquire the audio target features of the song; a second determining module, configured to perform a sound effect compensation operation according to those features to obtain the audio compensation sound effect of the song; and a first storage module, configured to store, in association, the attribute information, compensation sound effect, and style sound effect of the song to be processed.
Optionally, the apparatus further comprises: and the first response module is used for responding to the adding operation of the style sound effect and/or the compensation sound effect of the song to be played when the sound effect application request is received.
Optionally, the first response module includes: the matching sub-module is used for carrying out song matching operation according to the attribute information of the song to be played when a sound effect application request is received; the comparison submodule is used for comparing the similarity of the audio of the song to be played and the matched audio of the song when the corresponding song is matched; and the application submodule is used for adding a corresponding sound effect to the song to be played when the similarity is greater than a preset threshold value.
According to a third aspect, an embodiment of the present invention further discloses an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the sound effect processing method according to the first aspect or any optional embodiment thereof.
According to a fourth aspect, the present invention further discloses a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the sound effect processing method according to the first aspect or any optional implementation manner of the first aspect.
The technical scheme of the invention has the following advantages:
The sound effect processing method/device provided by the invention acquires text information that corresponds to a song to be processed and characterizes the song's type; classifies the style of the song according to that text information and the song's audio byte array; determines a style sound effect for the song with a preset sound effect decision method according to the style classification result; acquires audio target features of the song; performs a sound effect compensation operation on those features to obtain an audio compensation sound effect; and stores, in association, the song's attribute information, its compensation sound effect, and its style sound effect. Classifying the song from both its audio byte array and the text characterizing its type makes the classification richer, so the style sound effect obtained from the preset decision method is better. Deriving a compensation sound effect from the song's audio target features and applying it to the song compensates for deficiencies in the song's own sound.
Storing the compensation sound effect and the style sound effect of the song in association allows a better result when the sound effects are applied later, improving the user experience.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart illustrating an example of an audio processing method according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a specific example of an audio effect processing apparatus according to an embodiment of the present invention;
fig. 3 is a diagram of a specific example of an electronic device in an embodiment of the invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment of the invention discloses a sound effect processing method. As shown in FIG. 1, the method comprises the following steps:
Step 101: acquiring text information that corresponds to the song to be processed and characterizes the song's type.
For example, the text information corresponding to the song to be processed may include, but is not limited to, lyrics, popular comments, playlists the song belongs to, and the like; the cloud acquires the text information that corresponds to the song to be processed and characterizes the song's type.
Step 102: classifying the style of the song to be processed according to the text information characterizing the song's type and the song's audio byte array.
Illustratively, the cloud processes the text information characterizing the song's type together with the song's audio byte array to determine the style classification of the song to be processed. Specifically, the style classification is determined jointly from the classification obtained by feeding the audio byte array into a convolutional neural network (CNN) and from the text information characterizing the song's type. The style classification covers several dimensions: genre (e.g., pop, rock, classical, blues), instrument (e.g., piano, guitar, drums, violin), BPM (e.g., slow, moderate, upbeat), and emotion (e.g., excited, happy, calm, sad).
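The joint audio-and-text classification described above can be sketched as follows. The genre list, the text hint keywords, the boost weight, and the idea of fusing precomputed CNN probabilities with keyword matches are illustrative assumptions for this sketch, not details taken from the patent.

```python
# Hypothetical fusion of CNN audio-based genre probabilities with hints
# mined from text about the song (lyrics, comments, playlist names).
# All names and weights below are assumptions, not the patent's values.

GENRES = ["pop", "rock", "classical", "blues"]

# Assumed keywords that text about a song of each genre may contain.
TEXT_HINTS = {
    "rock": {"guitar solo", "headbang"},
    "classical": {"symphony", "concerto"},
    "blues": {"twelve-bar", "blues"},
}

def classify_style(cnn_probs, text, text_boost=0.2):
    """Combine CNN probabilities with text hints; return the best genre."""
    scores = dict(zip(GENRES, cnn_probs))
    lowered = text.lower()
    for genre, hints in TEXT_HINTS.items():
        if any(h in lowered for h in hints):
            scores[genre] += text_boost  # nudge toward the text evidence
    return max(scores, key=scores.get)

# Audio alone slightly favors pop, but the text tips the result to rock.
print(classify_style([0.40, 0.35, 0.15, 0.10], "amazing guitar solo here"))
```

The same fusion pattern extends to the other dimensions (instrument, BPM, emotion) by running one scorer per dimension.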
Step 103: determining the style sound effect of the song to be processed with a preset sound effect decision method according to the style classification result.
For example, the preset sound effect decision method may determine the corresponding sound effect for a song from its style classification. Specifically, the preset sound effect decision method makes a sound effect decision for the song to be processed with a random forest, based on the classifications in all dimensions.
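The random forest decision can be illustrated with a toy ensemble: each "tree" here is a hand-written rule mapping the multi-dimensional style classification to a sound effect preset, and the forest takes a majority vote. The preset names ("live_hall", "concert_hall", "studio") and the per-dimension rules are invented stand-ins for the learned trees the patent presumably uses.

```python
# Toy stand-in for a random-forest sound-effect decision: each tree votes
# for a preset based on one classification dimension; majority wins.
from collections import Counter

def tree_genre(style):
    return {"rock": "live_hall", "classical": "concert_hall"}.get(
        style["genre"], "studio")

def tree_emotion(style):
    return "concert_hall" if style["emotion"] == "calm" else "live_hall"

def tree_bpm(style):
    return "live_hall" if style["bpm"] == "fast" else "concert_hall"

def decide_effect(style, forest=(tree_genre, tree_emotion, tree_bpm)):
    votes = Counter(tree(style) for tree in forest)
    return votes.most_common(1)[0][0]

style = {"genre": "classical", "emotion": "calm", "bpm": "slow"}
print(decide_effect(style))  # all three trees vote "concert_hall"
```

In a real system the trees would be trained (e.g., on labeled song/effect pairs) rather than hand-written, but the voting structure is the same.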
Step 104: acquiring the audio target features of the song to be processed. Illustratively, the audio target features of the song to be processed include the frequency distribution of the song's audio, the amount of reverberation, and the width of the sound field.
Step 105: performing a sound effect compensation operation according to the audio target features of the song to be processed to obtain the audio compensation sound effect of the song. Illustratively, the compensation operation is chosen from the obtained audio target features and includes boosting low frequencies, cutting low frequencies, widening the sound field, and adding reverberation: when the vocal reverberation is small and the sound field narrow, the sound field is widened and reverberation is added; when the low-frequency energy is low, the low frequencies are boosted; and when the low-frequency energy is too high, the low frequencies are cut. The result is the audio compensation sound effect of the song to be processed.
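The branching logic of this compensation step can be sketched as below. The feature names, the 0-to-1 normalization, and the threshold values are assumptions for illustration; the patent does not specify them.

```python
# Sketch of the compensation decision described in Step 105. Thresholds
# and feature names are illustrative assumptions, not patent values.

def compensation_ops(features,
                     low_freq_lo=0.2, low_freq_hi=0.6,
                     reverb_min=0.3, width_min=0.4):
    """Map audio target features (normalized 0..1) to compensation ops."""
    ops = []
    if features["low_freq_energy"] < low_freq_lo:
        ops.append("boost_low")        # too little bass: raise low end
    elif features["low_freq_energy"] > low_freq_hi:
        ops.append("cut_low")          # too much bass: reduce low end
    if features["vocal_reverb"] < reverb_min:
        ops.append("add_reverb")       # dry vocals: add reverberation
    if features["sound_field_width"] < width_min:
        ops.append("widen_field")      # narrow stage: widen sound field
    return ops

print(compensation_ops({"low_freq_energy": 0.1,
                        "vocal_reverb": 0.2,
                        "sound_field_width": 0.3}))
# -> ['boost_low', 'add_reverb', 'widen_field']
```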
Step 106: storing, in association, the attribute information of the song to be processed, its compensation sound effect, and its style sound effect. Illustratively, the attribute information includes the song audio, song title, artist, and album title. In this embodiment, the sound effect finally stored in the cloud for the audio to be processed is the superposition of the song's audio compensation sound effect and the sound effect of the corresponding style. Existing intelligent sound effect systems apply the same effect to an entire song, which can sound very poor for songs whose style differs greatly between sections. This embodiment classifies the song from its audio itself and applies different effects to different segments, which better suits songs with large stylistic differences between sections; during playback the applied effect is segmented, and the effects of different segments can be switched smoothly.
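Smooth switching between segment effects could, for instance, be realized as a cross-fade of effect weights around each segment boundary. The patent only says segments switch smoothly; the segment list, fade window, and linear weight scheme below are illustrative assumptions.

```python
# Illustrative per-segment effects with smooth switching: near a segment
# boundary, weight is handed over linearly from the outgoing effect to
# the incoming one over a short fade window.

def effect_weights(t, segments, fade=2.0):
    """Return {effect: weight} at time t for a list of (start, effect)."""
    weights = {}
    for i, (start, effect) in enumerate(segments):
        end = segments[i + 1][0] if i + 1 < len(segments) else float("inf")
        if start <= t < end:
            weights[effect] = weights.get(effect, 0.0) + 1.0
            # Within `fade` seconds of the boundary, cross-fade.
            if end - t < fade and i + 1 < len(segments):
                frac = (end - t) / fade
                weights[effect] = frac
                nxt = segments[i + 1][1]
                weights[nxt] = weights.get(nxt, 0.0) + (1.0 - frac)
    return weights

segments = [(0.0, "concert_hall"), (60.0, "live_hall")]
print(effect_weights(59.0, segments))  # halfway through the cross-fade
```

A playback engine would multiply each effect's wet signal by its weight, so the transition at 60 s is inaudible as a switch.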
The sound effect processing method provided by the invention acquires text information that corresponds to the song to be processed and characterizes the song's type; classifies the style of the song according to that text information and the song's audio byte array; determines a style sound effect with a preset sound effect decision method according to the style classification result; acquires audio target features of the song; performs a sound effect compensation operation on those features to obtain an audio compensation sound effect; and stores, in association, the song's attribute information, its compensation sound effect, and its style sound effect. Classifying the song from both its audio byte array and the text characterizing its type makes the classification richer, so the style sound effect obtained from the preset decision method is better. Deriving a compensation sound effect from the song's audio target features and applying it to the song compensates for deficiencies in the song's own sound.
Storing the compensation sound effect and the style sound effect of the song in association allows a better result when the sound effects are applied later, improving the user experience.
As an optional embodiment of the present invention, the method further comprises: when a sound effect application request is received, responding to an operation of adding the style sound effect and/or the compensation sound effect to the song to be played.
Illustratively, when a sound effect application request sent by the client is received, the corresponding sound effect is added to the song to be played. Specifically, when the song to be played hits a song stored in the cloud, the corresponding effect is applied during playback; the applied effect is segmented, and the effects of different segments can be switched smoothly.
As an optional embodiment of the present invention, responding, when a sound effect application request is received, to the operation of adding the style sound effect and/or the compensation sound effect to the song to be played includes: when a sound effect application request is received, carrying out a song matching operation according to the attribute information of the song to be played; when a corresponding song is matched, comparing the similarity between the audio of the song to be played and the audio of the matched song; and when the similarity is greater than a preset threshold, adding the corresponding sound effect to the song to be played.
Illustratively, when a sound effect application request is received, the attribute information of the song to be played, including the song title, artist, and album title, is matched against the songs stored in the cloud. When a corresponding song is matched, the similarity between the song to be played and the matched song is compared, and when the similarity is greater than a preset threshold, the corresponding sound effect is added to the song to be played. The preset threshold is an empirical value accumulated from computation over a large amount of data.
The prior art identifies songs mainly by song title, artist, and album title, ignoring the fact that in practice the audio of a song on the major music platforms is sometimes quietly replaced with a different version, and that it is very difficult for a user to confirm the audio version of a song when playing music from a USB device. By accurately identifying the song version from the audio itself, this embodiment avoids applying the wrong sound effect.
As an optional embodiment of the present invention, the method further comprises: displaying, at the client, identification information of the currently added sound effect while responding to the operation of adding the sound effect to the song to be played.
Illustratively, while the corresponding sound effect is added to the song to be played, identification information of the currently added sound effect is displayed at the client; this identification information includes the name and description of the sound effect.
In existing intelligent sound effect systems, the applied effect is a complete black box to the user, who has no intuitive perception of what the intelligent sound effect actually does. Displaying the identification information of the currently added sound effect at the client lets the user interface show the specific effect applied and its description, improving the user experience.
As an optional embodiment of the present invention, when a corresponding song is matched, comparing the similarity between the audio of the song to be played and the audio of the matched song includes: obtaining a first frequency distribution vector from the frequency distribution of the audio of the song to be played within a target duration; determining a second frequency distribution vector of the matched song's audio within the target duration; and comparing the similarity of the first frequency distribution vector and the second frequency distribution vector.
Illustratively, the first frequency distribution vector of the audio of the song to be played within the target duration is compared with the second frequency distribution vector of the matched song's audio within the target duration to determine the similarity of the two songs.
Specifically, when a user plays a song, it is matched by song title, artist, and album title. When a corresponding song is matched, a histogram of the frequency distribution of the first 20 seconds of the audio of the song to be played is computed and converted into a frequency distribution vector, which is compared for similarity with the frequency distribution vector of the first 20 seconds of the matched song's audio. Only if the similarity is greater than a certain threshold is the sound effect applied to the played song, and a description of the applied effect can be displayed on the interface.
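The 20-second frequency-histogram comparison can be sketched as follows. The bin edges, the use of cosine similarity, and the 0.9 threshold are assumptions; the patent only says the threshold is an empirical value.

```python
# Sketch of the version check: bucket dominant frequencies from the
# first 20 s into a histogram, normalize to a vector, compare by cosine
# similarity. Bin edges and the 0.9 threshold are assumed values.
import math

def freq_vector(freqs, bins=(0, 250, 2000, 8000, 22050)):
    """Histogram of dominant frequencies (Hz) -> normalized vector."""
    counts = [0] * (len(bins) - 1)
    for f in freqs:
        for i in range(len(bins) - 1):
            if bins[i] <= f < bins[i + 1]:
                counts[i] += 1
                break
    total = sum(counts) or 1
    return [c / total for c in counts]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

v1 = freq_vector([100, 440, 440, 3000])   # song to be played
v2 = freq_vector([120, 430, 450, 2900])   # matched cloud song
print(cosine(v1, v2) > 0.9)  # same distribution: treated as same version
```

A real implementation would derive the frequencies from an FFT of the first 20 seconds of each audio stream; only the comparison step is shown here.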
The embodiment of the invention also discloses a sound effect processing apparatus. As shown in FIG. 2, the apparatus comprises: a first obtaining module 201, configured to acquire text information that corresponds to the song to be processed and characterizes the song's type; a classification module 202, configured to classify the style of the song to be processed according to that text information and the song's audio byte array; a first determining module 203, configured to determine the style sound effect of the song with a preset sound effect decision method according to the style classification result; a second obtaining module 204, configured to acquire the audio target features of the song; a second determining module 205, configured to perform a sound effect compensation operation according to those features to obtain the audio compensation sound effect of the song; and a first storage module 206, configured to store, in association, the attribute information, compensation sound effect, and style sound effect of the song to be processed.
The sound effect processing apparatus provided by the invention comprises the modules above: the first obtaining module acquires text information that corresponds to the song to be processed and characterizes the song's type; the classification module classifies the style of the song according to that text information and the song's audio byte array; the first determining module determines the style sound effect with a preset sound effect decision method according to the style classification result; the second obtaining module acquires the audio target features of the song; the second determining module performs a sound effect compensation operation on those features to obtain the audio compensation sound effect; and the first storage module stores, in association, the song's attribute information, compensation sound effect, and style sound effect.
The apparatus classifies the song to be processed from both its audio byte array and the text characterizing its type, then determines the style sound effect with a preset sound effect decision method according to the classification result, so the classification is richer and the resulting style sound effect better. Deriving a compensation sound effect from the song's audio target features and applying it to the song compensates for deficiencies in the song's own sound. Storing the compensation sound effect and the style sound effect in association allows a better result when the sound effects are applied later, improving the user experience.
As an optional embodiment of the present invention, the apparatus further comprises: a first response module, used for responding to the adding operation of the style sound effect and/or the compensation sound effect of the song to be played when a sound effect application request is received.
As an optional embodiment of the present invention, the first response module includes: a matching sub-module, used for performing a song matching operation according to the attribute information of the song to be played when the sound effect application request is received; a comparison sub-module, used for comparing the similarity between the audio of the song to be played and the audio of the matched song when the corresponding song is matched; and an application sub-module, used for adding the corresponding sound effect to the song to be played when the similarity is greater than a preset threshold value.
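The match-then-gate flow of these sub-modules can be sketched as follows. The attribute keys, the injected similarity function, and the 0.9 threshold are illustrative assumptions rather than values specified by the patent.

```python
# Sketch of the response flow: match by attributes, gate on audio
# similarity, then return the stored effects. Keys and threshold are assumed.

def respond_to_effect_request(song, library, similarity_fn, threshold=0.9):
    """Return the stored effects for `song`, or None if no confident match."""
    # Song matching by attribute information (title/artist key is an assumption).
    key = (song["title"], song["artist"])
    entry = library.get(key)
    if entry is None:
        return None  # no stored effects for this song
    # Similarity gate: only apply effects when the audio really matches,
    # guarding against different recordings that share the same metadata.
    if similarity_fn(song["audio"], entry["audio"]) > threshold:
        return {"style": entry["style_effect"], "compensation": entry["compensation"]}
    return None
```

The similarity gate is the design point worth noting: attribute matching alone could pair a live version with effects tuned for the studio recording, so the audio comparison acts as a confirmation step.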
As an optional embodiment of the present invention, the apparatus further comprises: and the second response module is used for displaying the identification information of the currently added sound effect on the client side while responding to the adding operation of the sound effect of the song to be played.
As an optional embodiment of the present invention, the comparison sub-module comprises: a first determining sub-module, used for obtaining a first frequency distribution vector according to the frequency distribution of the audio of the song to be played within the target duration; a second determining sub-module, used for determining a second frequency distribution vector of the audio of the matched song within the target duration; and a third determining sub-module, used for comparing the similarity of the first frequency distribution vector and the second frequency distribution vector.
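One plausible reading of this frequency-distribution comparison is sketched below: bin the DFT magnitude of a window of samples into a small number of bands, normalize, and compare the two vectors with cosine similarity. The band count, the equal-width band edges, and the choice of cosine similarity are assumptions; the patent only specifies that frequency distribution vectors over the target duration are compared.

```python
import math

def frequency_distribution(samples, n_bands=8):
    """Normalized histogram of DFT magnitude across n_bands equal-width bands.
    Naive O(n^2) DFT for clarity; a real system would use an FFT."""
    n = len(samples)
    bands = [0.0] * n_bands
    for k in range(1, n // 2):  # skip DC, use positive frequencies only
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        band = min(int(k * n_bands / (n // 2)), n_bands - 1)
        bands[band] += math.hypot(re, im)
    total = sum(bands) or 1.0
    return [b / total for b in bands]  # normalized distribution vector

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 for identical directions."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

Comparing normalized band distributions rather than raw spectra makes the check robust to overall volume differences between the song to be played and the stored reference.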
An embodiment of the present invention further provides an electronic device, as shown in fig. 3, the electronic device may include a processor 401 and a memory 402, where the processor 401 and the memory 402 may be connected through a bus or in another manner, and fig. 3 takes the connection through the bus as an example.
Processor 401 may be a Central Processing Unit (CPU). Processor 401 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 402 is a non-transitory computer-readable storage medium that can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the sound effect processing method in the embodiment of the present invention. By running the non-transitory software programs, instructions, and modules stored in the memory 402, the processor 401 executes the various functional applications and data processing of the device, thereby implementing the sound effect processing method in the above method embodiment.
The memory 402 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created by the processor 401, and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 402 may optionally include memory located remotely from the processor 401, which may be connected to the processor 401 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 402 and, when executed by the processor 401, perform the sound effects processing method of the embodiment shown in FIG. 1.
The details of the electronic device may be understood with reference to the corresponding description and effects in the embodiment shown in fig. 1, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); the storage medium may also comprise a combination of the above types of memory.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A sound effect processing method is characterized by comprising the following steps:
acquiring text information which corresponds to the song to be processed and is used for representing the type of the song;
classifying the styles of the songs to be processed according to the text information for representing the types of the songs and the audio byte array of the songs to be processed;
determining the style sound effect of the song to be processed by using a preset sound effect decision method according to the style classification result;
acquiring audio target characteristics of the song to be processed;
performing sound effect compensation operation according to the audio target characteristics of the song to be processed to obtain an audio compensation sound effect of the song to be processed;
and performing associated storage on the attribute information of the song to be processed, the compensation sound effect of the song to be processed and the style sound effect of the song to be processed.
2. The method of claim 1, further comprising:
when receiving the sound effect application request, responding to the adding operation of the style sound effect and/or the compensation sound effect of the song to be played.
3. The method according to claim 2, wherein the responding to the adding operation of the style sound effect and/or the compensation sound effect of the song to be played when receiving the sound effect application request comprises:
when receiving a sound effect application request, performing song matching operation according to attribute information of a song to be played;
when the corresponding song is matched, comparing the similarity between the audio of the song to be played and the audio of the matched song;
and when the similarity is greater than a preset threshold value, adding a corresponding sound effect to the song to be played.
4. The method of claim 3, further comprising:
and displaying the identification information of the currently added sound effect at the client side while responding to the adding operation of the sound effect of the song to be played.
5. The method of claim 4, wherein, when the corresponding song is matched, comparing the similarity between the audio of the song to be played and the audio of the matched song comprises:
obtaining a first frequency distribution vector according to the frequency distribution of the audio of the song to be played within the target duration;
determining a second frequency distribution vector of the audio of the matched song within the target duration;
and comparing the similarity of the first frequency distribution vector and the second frequency distribution vector.
6. An audio processing apparatus, comprising:
the first acquisition module is used for acquiring text information which corresponds to the song to be processed and is used for representing the type of the song;
the classification module is used for carrying out style classification on the songs to be processed according to the text information for representing the types of the songs and the audio byte array of the songs to be processed;
the first determining module is used for determining the style sound effect of the song to be processed by using a preset sound effect decision method according to the style classification result;
the second acquisition module is used for acquiring the audio target characteristics of the song to be processed;
the second determining module is used for performing sound effect compensation operation according to the audio target characteristics of the song to be processed to obtain the audio compensation sound effect of the song to be processed;
the first storage module is used for storing the attribute information of the song to be processed, the compensation sound effect of the song to be processed and the style sound effect of the song to be processed in a correlation manner;
and the sending module is used for storing the attribute information of the song to be processed and the corresponding style sound effect.
7. The apparatus of claim 6, further comprising:
and the first response module is used for responding the adding operation of the style sound effect and/or the compensation sound effect of the song to be played when the sound effect application request is received.
8. The apparatus of claim 7, wherein the first response module comprises:
the matching sub-module is used for carrying out song matching operation according to the attribute information of the song to be played when a sound effect application request is received;
the comparison sub-module is used for comparing the similarity between the audio of the song to be played and the audio of the matched song when the corresponding song is matched;
and the application submodule is used for adding a corresponding sound effect to the song to be played when the similarity is greater than a preset threshold value.
9. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the sound effect processing method of any of claims 1-5.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the sound-effect processing method according to any one of claims 1-5.
CN202211037097.4A 2022-08-26 2022-08-26 Sound effect processing method and device and electronic equipment Active CN115410544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211037097.4A CN115410544B (en) 2022-08-26 2022-08-26 Sound effect processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN115410544A true CN115410544A (en) 2022-11-29
CN115410544B CN115410544B (en) 2024-01-30

Family

ID=84162067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211037097.4A Active CN115410544B (en) 2022-08-26 2022-08-26 Sound effect processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115410544B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5542000A (en) * 1993-03-19 1996-07-30 Yamaha Corporation Karaoke apparatus having automatic effector control
CN104978962A (en) * 2014-04-14 2015-10-14 安徽科大讯飞信息科技股份有限公司 Query by humming method and system
CN112002296A (en) * 2020-08-24 2020-11-27 广州小鹏汽车科技有限公司 Music playing method, vehicle, server and storage medium
CN113421585A (en) * 2021-05-10 2021-09-21 云境商务智能研究院南京有限公司 Audio fingerprint database generation method and device
CN113641329A (en) * 2021-08-10 2021-11-12 广州艾美网络科技有限公司 Sound effect configuration method and device, intelligent sound box, computer equipment and storage medium
WO2021248964A1 (en) * 2020-06-09 2021-12-16 广东美的制冷设备有限公司 Home appliance and control method therefor, and computer-readable storage medium
CN114661939A (en) * 2022-03-24 2022-06-24 杭州网易云音乐科技有限公司 Song matching method, medium, device and computing equipment
CN114842820A (en) * 2022-05-18 2022-08-02 北京地平线信息技术有限公司 K song audio processing method and device and computer readable storage medium



Similar Documents

Publication Publication Date Title
US11017010B2 (en) Intelligent playing method and apparatus based on preference feedback
JP7136932B2 (en) Sound range balancing method, device and system based on deep learning
JP2021525493A (en) Sound quality characteristic processing method and system based on deep learning
KR101578279B1 (en) Methods and systems for identifying content in a data stream
WO2018045988A1 (en) Method and device for generating digital music score file of song, and storage medium
US11511200B2 (en) Game playing method and system based on a multimedia file
CN106898339B (en) Song chorusing method and terminal
WO2020155490A1 (en) Method and apparatus for managing music based on speech analysis, and computer device
WO2019137392A1 (en) File classification processing method and apparatus, terminal, server, and storage medium
WO2019233361A1 (en) Method and device for adjusting volume of music
US20150055934A1 (en) Enhancing karaoke systems utilizing audience sentiment feedback and audio watermarking
CN108920585A (en) The method and device of music recommendation, computer readable storage medium
CN110853606A (en) Sound effect configuration method and device and computer readable storage medium
CN105718486A (en) Online query by humming method and system
WO2023128877A2 (en) Video generating method and apparatus, electronic device, and readable storage medium
CN103873003A (en) Gain adjustment method and device for audio signal
KR20160056104A (en) Analyzing Device and Method for User's Voice Tone
US11775070B2 (en) Vibration control method and system for computer device
JP6288197B2 (en) Evaluation apparatus and program
CN113032616B (en) Audio recommendation method, device, computer equipment and storage medium
CN115410544A (en) Sound effect processing method and device and electronic equipment
US9384758B2 (en) Derivation of probabilistic score for audio sequence alignment
CN111008287A (en) Audio and video processing method and device, server and storage medium
EP3920049A1 (en) Techniques for audio track analysis to support audio personalization
JP6589521B2 (en) Singing standard data correction device, karaoke system, program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant