CN109545249B - Method and device for processing music file - Google Patents
- Publication number: CN109545249B (application CN201811409332.XA)
- Legal status: Active
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
Abstract
The application relates to a method and a device for processing music files, belonging to the field of audio processing. The method comprises: analyzing a first music file to be processed to obtain structural feature information of the first music file, the structural feature information comprising the content category of each piece of music content of the first music file and the beats of each piece of music content; and processing the first music file according to an effector library and the structural feature information to obtain a second music file, wherein the effector library comprises at least one mixing effector, and each mixing effector corresponds to a content category and at least one beat of the corresponding music content. The device comprises an analysis module and a processing module. The method and the device can enrich the playing effect of a music file.
Description
Technical Field
The present application relates to the field of audio processing, and in particular, to a method and an apparatus for processing music files.
Background
Listening to music is one of people's main forms of entertainment today, and people play music files on devices such as mobile phones and mp3 players. An existing music file is obtained by recording a singer's live vocals and the live instrumental accompaniment while the singer performs a song. The music file can then be uploaded to a music website for others to download and play.
An existing music file keeps one playing effect from the beginning to the end of playback; that is, the playing effect of the music file is single and cannot meet users' needs.
Disclosure of Invention
In order to enrich the playing effect of music files, embodiments of the present application provide a method and a device for processing music files. The technical solution is as follows:
in a first aspect, the present application provides a method of processing music files, the method comprising:
analyzing a first music file to be processed to obtain structural feature information of the first music file, wherein the structural feature information comprises the content category of each music content of the first music file and the beat of each music content;
and processing the first music file according to an effector library and the structural feature information to obtain a second music file, wherein the effector library comprises at least one mixing effector, and each mixing effector corresponds to a content category and at least one beat of the corresponding music content.
Optionally, the processing the first music file according to the effector library and the structural feature information to obtain a second music file includes:
superimposing, according to a material library and the structural feature information, audio materials included in the material library on the first music file to obtain a third music file, wherein the material library includes at least one audio material, and each audio material corresponds to a content category and at least one beat of the corresponding music content;
and processing the third music file according to the effector library and the structural feature information to obtain a second music file.
Optionally, the superimposing, according to the material library and the structural feature information, the audio material included in the material library in the first music file to obtain a third music file includes:
according to the content category and at least one beat corresponding to a target audio material, determining target content located within the at least one beat in the music content of that content category in the first music file, wherein the target audio material is any audio material in the material library;
superimposing the target audio material on the target content in the first music file.
Optionally, the processing the third music file according to the effector library and the structural feature information to obtain a second music file includes:
according to the content category and at least one beat corresponding to a target mixing effector, determining target content located within the at least one beat in the music content of that content category in the third music file, wherein the target mixing effector is any mixing effector in the effector library;
processing the target content in the third music file using the target mixing effector.
Optionally, the mixing effectors in the effector library include at least one of a high-pass filter, a low-pass filter, a bit distorter, a vibrato, and a beat circulator;
each of the high-pass filter, the low-pass filter, the bit distorter, the vibrato, and the beat circulator corresponds to a content category and at least one beat of the corresponding music content.
Optionally, the content category corresponding to the high-pass filter is the prelude, and the corresponding beats are the first to third beats;
the content category corresponding to the low-pass filter is the master song, and the corresponding beats are the first to second beats;
the content category corresponding to the bit distorter is the master song, and the corresponding beats are the fifth to eighth beats;
the content category corresponding to the vibrato is the refrain, and the corresponding beats are the first to fourth beats;
the content category corresponding to the beat circulator is the interlude, and the corresponding beats are the first to second beats.
In a second aspect, the present application provides an apparatus for processing music files, the apparatus comprising:
the analysis module is used for analyzing a first music file to be processed to obtain structural characteristic information of the first music file, wherein the structural characteristic information comprises the content category of each music content of the first music file and the beat of each music content;
and the processing module is used for processing the first music file to obtain a second music file according to an effector library and the structural characteristic information, wherein the effector library comprises at least one mixing effector, and each mixing effector corresponds to the content category and at least one beat of corresponding music content.
Optionally, the processing module is configured to:
superimposing, according to a material library and the structural feature information, audio materials included in the material library on the first music file to obtain a third music file, wherein the material library includes at least one audio material, and each audio material corresponds to a content category and at least one beat of the corresponding music content;
and processing the third music file according to the effector library and the structural feature information to obtain a second music file.
Optionally, the processing module is configured to:
according to the content category and at least one beat corresponding to a target audio material, determining target content located within the at least one beat in the music content of that content category in the first music file, wherein the target audio material is any audio material in the material library;
superimposing the target audio material on the target content in the first music file.
Optionally, the processing module is configured to:
according to the content category and at least one beat corresponding to a target mixing effector, determining target content located within the at least one beat in the music content of that content category in the third music file, wherein the target mixing effector is any mixing effector in the effector library;
processing the target content in the third music file using the target mixing effector.
Optionally, the mixing effectors in the effector library include at least one of a high-pass filter, a low-pass filter, a bit distorter, a vibrato, and a beat circulator;
each of the high-pass filter, the low-pass filter, the bit distorter, the vibrato, and the beat circulator corresponds to a content category and at least one beat of the corresponding music content.
Optionally, the content category corresponding to the high-pass filter is a prelude, and the corresponding beat is a first beat to a third beat;
the content category corresponding to the low-pass filter is a master song, and the corresponding beat is a first beat to a second beat;
the content category corresponding to the bit distorter is a master song, and the corresponding beats are fifth to eighth beats;
the content category corresponding to the vibrato is chorus, and the corresponding beat is the first beat to the fourth beat;
the content category corresponding to the beat circulator is interlude, and the corresponding beat is the first beat to the second beat.
In a third aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method steps provided by the first aspect or any alternative form of the first aspect.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the structure characteristic information of the first music file is obtained, and the music content in the first music file is processed according to the structure characteristic information and the mixing effect device in the effect device library to obtain the second music file, so that different music content of the second music file has different playing effects, and the effect of playing the music file is enriched.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic structural diagram of a first music file provided in an embodiment of the present application;
FIG. 2 is a flowchart of a method for processing music files according to an embodiment of the present application;
FIG. 3 is a flow chart of another method for processing music files provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of a third music file provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an apparatus for processing music files according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The content of a current music file is divided into four parts: prelude content, master song content (i.e., the verses), refrain content, and interlude content. The prelude content is the content from the start position of the music file to the point where the lyrics begin. A piece of music includes at least one climax passage whose lyrics are usually repeated, and each climax passage in the music file is refrain content. The content between the point where the lyrics begin and the start position of the first refrain content is master song content; the content between the end position of the last refrain content and the end position of the music file is also master song content; and the content between any two adjacent refrain contents is interlude content.
For example, referring to the music file shown in fig. 1, the duration of the music file is 3 minutes. The first 20 seconds are the prelude content, and the lyrics begin at the 20th second. The content from the 60th to the 100th second and the content from the 120th to the 160th second are both climax passages, i.e., both are refrain content, so the music file includes two refrain contents. The content from the 20th to the 60th second and the content from the 160th to the 180th second are both master song contents, so the music file includes two master song contents. The content from the 100th to the 120th second lies between two adjacent refrain contents, i.e., it is the interlude content.
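The segment layout of this example file can be written down as a small data structure; the sketch below is illustrative only (the `Segment` type and the category strings are assumptions, not prescribed by the patent), and checks that the segments partition the whole file with no gaps or overlaps.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    category: str   # "prelude", "master song", "refrain", or "interlude"
    start: int      # start time in seconds
    end: int        # end time in seconds

# The 3-minute example file from fig. 1, as described above.
segments = [
    Segment("prelude",      0,  20),
    Segment("master song", 20,  60),
    Segment("refrain",     60, 100),
    Segment("interlude",  100, 120),
    Segment("refrain",    120, 160),
    Segment("master song", 160, 180),
]

def is_partition(segs, duration):
    """True if the segments cover [0, duration) contiguously, in order."""
    pos = 0
    for s in segs:
        if s.start != pos:
            return False
        pos = s.end
    return pos == duration

print(is_partition(segments, 180))  # True for the layout above
```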
Each part of the content in a music file lasts for a time period that may include several beats, each beat lasting an equal length of time. For example, assuming each beat lasts 5 seconds, in the music file shown in fig. 1 the prelude content includes four beats, A1 to A4; the first master song content includes eight beats, B1 to B8; the first refrain content includes eight beats, C1 to C8; the interlude content includes four beats, D1 to D4; the second refrain content includes eight beats, E1 to E8; and the second master song content includes four beats, F1 to F4.
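Under the same 5-seconds-per-beat assumption, the beat labels above can be generated mechanically from the segment lengths; the helper below is an illustrative sketch, not part of the patent.

```python
BEAT_SECONDS = 5  # assumed beat duration, as in the example above

def label_beats(category_prefixes, segment_lengths):
    """Return one label per beat, e.g. 'A1'..'A4' for a 20-second prelude."""
    labels = []
    for prefix, seconds in zip(category_prefixes, segment_lengths):
        n_beats = seconds // BEAT_SECONDS
        labels.extend(f"{prefix}{i}" for i in range(1, n_beats + 1))
    return labels

# Segment lengths in seconds for the fig. 1 example file.
beats = label_beats(["A", "B", "C", "D", "E", "F"],
                    [20, 40, 40, 20, 40, 20])
print(len(beats))   # 36 beats in the 180-second file
print(beats[:4])    # ['A1', 'A2', 'A3', 'A4']
```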
As introduced above, the content of a music file is often not rich enough, and its playing effect is single: a music file usually contains lyrics and/or the sound of one or more musical instruments, and from its start position to its end position it is always played with the same, unchanging effect. In order to enrich the content and/or the playing effect of music, the present application processes the music file through the following embodiments.
Referring to fig. 2, an embodiment of the present application provides a method for processing a music file, where the method includes:
step 101: the method comprises the steps of analyzing a first music file to be processed to obtain structural characteristic information of the first music file, wherein the structural characteristic information comprises the content type of each music content of the first music file and the beat of each music content.
Step 102: processing the first music file according to an effector library and the structural feature information to obtain a second music file, wherein the effector library comprises at least one mixing effector, and each mixing effector corresponds to a content category and at least one beat.
The mixing effect device is used for changing the playing effect of the music content in the first music file.
In the embodiment of the present application, because the structural feature information of the first music file is obtained and the music content in the first music file is processed according to the structural feature information and the mixing effectors in the effector library to obtain the second music file, different music contents of the second music file have different playing effects, which enriches the playing effect of the music file.
Referring to fig. 3, an embodiment of the present application provides a method for processing a music file, including:
step 201: the method comprises the steps of analyzing a first music file to be processed to obtain structural characteristic information of the first music file, wherein the structural characteristic information comprises the content type of each music content of the first music file and the beat of each music content.
The music content of the first music file may include at least one of prelude content, master song content, refrain content, interlude content, and the like.
For example, assuming the first music file is the music file shown in fig. 1, analyzing it yields the following structural feature information: the prelude content has the content category 'prelude' and includes beats A1 to A4; the first master song content has the content category 'master song' and includes beats B1 to B8; the first refrain content has the content category 'refrain' and includes beats C1 to C8; the interlude content has the content category 'interlude' and includes beats D1 to D4; the second refrain content has the content category 'refrain' and includes beats E1 to E8; and the second master song content has the content category 'master song' and includes beats F1 to F4.
Optionally, the first music file may be analyzed by using an algorithm for analyzing a music structure, so as to obtain structural feature information of the first music file.
Step 202: according to the content type and at least one beat of the music content corresponding to the target audio material, determining the target content located in the at least one beat in the music content corresponding to the content type in the first music file, and overlapping the target audio material on the target content to obtain a third music file.
The target audio material is any audio material in the material library.
Optionally, a material library may be set in advance, where the material library includes at least one audio material, each audio material corresponds to a content category of the corresponding music content and at least one beat, and the at least one beat corresponding to the audio material belongs to a duration of the music content of the content category corresponding to the audio material.
The material library may be a correspondence of audio materials, content categories of music contents, and tempos, see the material library shown in table 1 below.
TABLE 1
Audio material | Content category | Beats |
Audio material 1 | Prelude | Third to fourth beats |
Audio material 2 | Master song | First to second beats, fifth to sixth beats |
Audio material 3 | Refrain | First to second beats, fifth to sixth beats |
Audio material 4 | Interlude | First to second beats |
For example, for the first music file shown in fig. 1: for audio material 1, the corresponding content category 'prelude' and the corresponding third to fourth beats are obtained from the material library shown in table 1; the target content within the third to fourth beats, i.e., the content within beats A3 and A4, is determined in the prelude content of the first music file; and audio material 1 is superimposed on that target content.
For audio material 2, the corresponding content category 'master song' and the corresponding first to second and fifth to sixth beats are obtained from the material library shown in table 1. The first target content located within the first to second beats and the second target content located within the fifth to sixth beats are determined in the first master song content of the first music file; the first target content is the content within beats B1 and B2, and the second target content is the content within beats B5 and B6. The third target content located within the first to second beats is determined in the second master song content of the first music file; the third target content is the content within beats F1 and F2. Because the second master song content includes only four beats, i.e., it has no fifth to sixth beats, no content needs to be acquired there. Audio material 2 is then superimposed on the first target content within beats B1 and B2, on the second target content within beats B5 and B6, and on the third target content within beats F1 and F2.
For audio material 3, the corresponding content category 'refrain' and the corresponding first to second and fifth to sixth beats are obtained from the material library shown in table 1. The fourth target content located within the first to second beats and the fifth target content located within the fifth to sixth beats are determined in the first refrain content of the first music file; the fourth target content is the content within beats C1 and C2, and the fifth target content is the content within beats C5 and C6. The sixth target content located within the first to second beats and the seventh target content located within the fifth to sixth beats are determined in the second refrain content of the first music file; the sixth target content is the content within beats E1 and E2, and the seventh target content is the content within beats E5 and E6. Audio material 3 is then superimposed on each of the fourth, fifth, sixth, and seventh target contents.
For audio material 4, the corresponding content category 'interlude' and the corresponding first to second beats are obtained from the material library shown in table 1; the eighth target content located within the first to second beats, i.e., the content within beats D1 and D2, is determined in the interlude content of the first music file; and audio material 4 is superimposed on the eighth target content within beats D1 and D2, resulting in the third music file shown in fig. 4.
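The patent does not specify how superposition is performed; a common reading is sample-wise mixing of the material into the target beats. The sketch below is illustrative (function name, sample values, and the clip to the usual [-1, 1] range are assumptions).

```python
def superimpose(track, material, start_sample, lo=-1.0, hi=1.0):
    """Mix `material` into `track` starting at `start_sample` (floats in [-1, 1])."""
    out = list(track)
    for i, m in enumerate(material):
        j = start_sample + i
        if j >= len(out):
            break  # the material must not run past the end of the track
        # Add the samples and clip so the mix stays in the valid range.
        out[j] = max(lo, min(hi, out[j] + m))
    return out

track = [0.2] * 10
material = [0.5, 0.9, -0.5]
mixed = superimpose(track, material, start_sample=4)
print(mixed[5])  # 1.0 — the 1.1 sum was clipped
```

Only the samples inside the target beat range are touched; the rest of the track is returned unchanged.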
Step 203: according to the content category and at least one beat corresponding to a target mixing effector, determining target content located within the at least one beat in the music content of that content category in the third music file.
The target mixing effector is any mixing effector in the effector library. A mixing effector is configured to change the playing effect of music content in a music file, giving the changed music content a more striking playing effect.
Optionally, an effector library may be set in advance, where the effector library includes at least one mixing effector, each mixing effector corresponds to a content category and at least one beat of corresponding music content, and the at least one beat corresponding to the mixing effector belongs to a duration of the music content in the content category corresponding to the mixing effector.
The effector library may be a correspondence of mixing effectors, content categories of music contents, and beats; see the effector library shown in table 2 below. The high-pass filter filters out, from the music content, components with frequencies lower than a first cut-off frequency; the low-pass filter filters out components with frequencies higher than a second cut-off frequency; both the first cut-off frequency and the second cut-off frequency are preset frequency values. The bit distorter changes the sampling rate of music content, the vibrato changes the rate at which the amplitude of music content varies, and the beat circulator repeatedly adds a piece of music content to the music file.
TABLE 2
Mixing effector | Content category | Beats |
High-pass filter | Prelude | First to third beats |
Low-pass filter | Master song | First to second beats |
Bit distorter | Master song | Fifth to eighth beats |
Vibrato | Refrain | First to fourth beats |
Beat circulator | Interlude | First to second beats |
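Table 2 amounts to a lookup table from effector to (content category, beats); selecting the target beats then reduces to a lookup plus a range check. The mapping structure and names below are illustrative, and the helper skips beats that a short segment does not actually contain, as in the bit distorter case described below.

```python
# Table 2 as a plain mapping; range(1, 4) means the first to third beats.
EFFECTOR_LIBRARY = {
    "high-pass filter": ("prelude",     range(1, 4)),
    "low-pass filter":  ("master song", range(1, 3)),
    "bit distorter":    ("master song", range(5, 9)),
    "vibrato":          ("refrain",     range(1, 5)),
    "beat circulator":  ("interlude",   range(1, 3)),
}

def target_beats(effector, segment_category, segment_beat_count):
    """Beats (1-based, within the segment) the effector should process."""
    category, beats = EFFECTOR_LIBRARY[effector]
    if category != segment_category:
        return []  # this effector does not apply to this content category
    # Skip beats the segment does not contain (e.g. a 4-beat master song
    # has no fifth to eighth beats for the bit distorter).
    return [b for b in beats if b <= segment_beat_count]

print(target_beats("bit distorter", "master song", 8))  # [5, 6, 7, 8]
print(target_beats("bit distorter", "master song", 4))  # []
```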
Step 204: processing the target content in the third music file using the target mixing effector to obtain the second music file.
For example, for the third music file shown in fig. 4: for the high-pass filter, the corresponding content category 'prelude' and the corresponding first to third beats are obtained from the effector library shown in table 2. The first target content located within the first to third beats, i.e., the content within beats A1 to A3, is determined in the prelude content of the third music file. The first target content within beats A1 to A3 is then filtered using the high-pass filter, removing components below the first cut-off frequency, so that only the high-frequency part of the first target content is played; this improves the playing effect of the first target content.
For the low-pass filter, the corresponding content category 'master song' and the corresponding first to second beats are obtained from the effector library shown in table 2. The second target content located within the first to second beats is determined in the first master song content of the third music file, and the third target content located within the first to second beats is determined in the second master song content of the third music file; the second target content is the content within beats B1 and B2, and the third target content is the content within beats F1 and F2. Both are filtered using the low-pass filter to remove components above the second cut-off frequency, so that only the low-frequency part is played when the second and third target contents are played; this improves their playing effect.
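The patent names the filters without giving their form. The one-pole filters below are a minimal illustrative sketch of the idea only: the smoothing factor `alpha` stands in for the preset cut-off frequencies, and the function names are assumptions.

```python
def low_pass(samples, alpha=0.1):
    """Simple exponential smoothing: attenuates content above the cut-off."""
    out, y = [], 0.0
    for x in samples:
        y = y + alpha * (x - y)  # move a fraction alpha toward each new sample
        out.append(y)
    return out

def high_pass(samples, alpha=0.1):
    """Complement of the low-pass: keeps the fast-changing part of the signal."""
    smoothed = low_pass(samples, alpha)
    return [x - s for x, s in zip(samples, smoothed)]

# A constant (0 Hz) signal passes the low-pass and is removed by the high-pass.
dc = [1.0] * 200
print(round(low_pass(dc)[-1], 3))   # approaches 1.0
print(round(high_pass(dc)[-1], 3))  # approaches 0.0
```

A real implementation would use proper biquad filters with cut-off frequencies in Hz; this sketch only shows the complementary low-pass/high-pass behavior the description relies on.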
For the bit distorter, the corresponding content category 'master song' and the corresponding fifth to eighth beats are obtained from the effector library shown in table 2. The fourth target content located within the fifth to eighth beats is determined in the first master song content of the third music file; because the second master song content includes only four beats, i.e., it has no fifth to eighth beats, no content needs to be acquired from it. The fourth target content is the content within beats B5 to B8. The bit distorter changes the sampling rate of the fourth target content, so that the fourth target content is played at a sampling rate different from that of the other content; this improves the playing effect of the fourth target content.
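The patent says only that the bit distorter changes the sampling rate of the target content. Sample-and-hold decimation, sketched below, is one common way to do this; the function name and reduction factor are illustrative assumptions.

```python
def reduce_sample_rate(samples, factor):
    """Hold every `factor`-th sample, giving a coarser, distorted signal."""
    out = []
    for i in range(0, len(samples), factor):
        # Repeat one sample `factor` times (fewer at the very end), so the
        # output has the same length but a lower effective sample rate.
        out.extend([samples[i]] * min(factor, len(samples) - i))
    return out

samples = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
print(reduce_sample_rate(samples, 2))  # [0.0, 0.0, 0.2, 0.2, 0.4, 0.4]
```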
For the vibrato, the corresponding content category 'refrain' and the corresponding first to fourth beats are obtained from the effector library shown in table 2. The fifth target content located within the first to fourth beats is determined in the first refrain content of the third music file, and the sixth target content located within the first to fourth beats is determined in the second refrain content of the third music file; the fifth target content is the content within beats C1 to C4, and the sixth target content is the content within beats E1 to E4. The vibrato changes the rate of amplitude change of the fifth and sixth target contents, so that their amplitude varies at a rate different from that of the other content and a sound-vibration effect is produced when they are played; this improves the playing effect of the fifth and sixth target contents.
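Changing the rate of amplitude change, as the vibrato here does, amounts to amplitude modulation (usually called tremolo in audio terminology). The sketch below is illustrative; the modulation rate and depth values are assumptions.

```python
import math

def amplitude_modulate(samples, sample_rate, mod_hz=5.0, depth=0.5):
    """Multiply each sample by a slow sine so the loudness pulses at mod_hz."""
    out = []
    for n, x in enumerate(samples):
        # Gain swings between 1.0 and 1.0 - depth as the sine oscillates.
        gain = 1.0 - depth * 0.5 * (1.0 + math.sin(2 * math.pi * mod_hz * n / sample_rate))
        out.append(x * gain)
    return out

# A flat tone comes out with a pulsing amplitude.
tone = [1.0] * 8
wobbled = amplitude_modulate(tone, sample_rate=8, mod_hz=1.0)
print(min(wobbled) < max(wobbled))  # True: the amplitude now varies over time
```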
For the beat circulator, the corresponding content category 'interlude' and the corresponding first to second beats are obtained from the effector library shown in table 2, and the seventh target content located within the first to second beats is determined in the interlude content of the third music file; the seventh target content is the content within beats D1 and D2. The beat circulator adds a preset number of copies of the seventh target content between the end position of beat D2 and the start position of beat D3, so that the seventh target content is played repeatedly when the music file is played; this improves the playing effect. The second music file is thereby obtained.
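The beat circulator's repeated addition of target content can be sketched as inserting extra copies of a beat range immediately after it; the helper name and repeat count below are illustrative assumptions.

```python
def loop_beats(samples, start, end, repeats):
    """Insert `repeats` extra copies of samples[start:end] right after `end`."""
    loop = samples[start:end]
    # Everything up to the loop end, then the repeats, then the remainder.
    return samples[:end] + loop * repeats + samples[end:]

song = [0, 1, 2, 3, 4, 5]
looped = loop_beats(song, start=2, end=4, repeats=2)
print(looped)  # [0, 1, 2, 3, 2, 3, 2, 3, 4, 5]
```

Note the operation lengthens the file, which is consistent with the description: the copies are added between the target beats and the following beat.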
In the embodiment of the application, the third music file is obtained by acquiring the structural feature information of the first music file and superimposing audio materials on the first music file according to the structural feature information, so that the music content of the third music file is richer. The music content of the third music file is then processed according to the structural feature information and the mixing effectors in the effector library to obtain the second music file, so that different music content of the second music file has different playing effects, enriching the effect of playing the music file.
The following are apparatus embodiments of the present application, which may be used to perform the method embodiments of the present application. For details not disclosed in the apparatus embodiments, reference is made to the method embodiments of the present application.
Referring to fig. 5, an embodiment of the present application provides an apparatus 300 for processing a music file, where the apparatus 300 includes:
an analysis module 301, configured to analyze a first music file to be processed to obtain structural feature information of the first music file, where the structural feature information includes a content category of each piece of music content of the first music file and a beat included in each piece of music content;
a processing module 302, configured to process the first music file to obtain a second music file according to an effector library and the structural feature information, where the effector library includes at least one mixing effector, and each mixing effector corresponds to a content category and at least one beat of corresponding music content.
Optionally, the processing module 302 is configured to:
according to a material library and the structural feature information, superimpose audio materials included in the material library on the first music file to obtain a third music file, where the material library includes at least one audio material, and each audio material corresponds to a content category and at least one beat of corresponding music content; and
process the third music file according to the effector library and the structural feature information to obtain a second music file.
Optionally, the processing module 302 is configured to:
according to the content category and at least one beat corresponding to a target audio material, determine target content located within the at least one beat in the music content corresponding to the content category in the first music file, where the target audio material is any audio material in the material library; and
superimpose the target audio material on the target content in the first music file.
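The superimposing operation can be sketched as sample-wise addition of the audio material onto the target content. The alignment at the first sample of the target beat, and the use of NumPy arrays, are assumptions made for illustration:

```python
import numpy as np

def superimpose_material(track, material, target_start):
    """Mix an audio material into the track starting at the first sample
    of the target content, clipping the material if it would run past
    the end of the track."""
    out = track.astype(np.float64).copy()
    end = min(target_start + len(material), len(out))
    out[target_start:end] += material[: end - target_start]
    return out
```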
Optionally, the processing module 302 is configured to:
according to the content category and at least one beat corresponding to a target mixing effector, determine target content located within the at least one beat in the music content corresponding to the content category in the third music file, where the target mixing effector is any mixing effector in the effector library; and
process the target content in the third music file using the target mixing effector.
Optionally, the mixing effectors in the effector library include at least one of a high-pass filter, a low-pass filter, a bit distorter, a vibrato, and a beat circulator;
the high-pass filter corresponds to a content category and at least one beat of corresponding music content, the low-pass filter corresponds to a content category and at least one beat of corresponding music content, the bit distorter corresponds to a content category and at least one beat of corresponding music content, the vibrato corresponds to a content category and at least one beat of corresponding music content, and the beat circulator corresponds to a content category and at least one beat of corresponding music content.
Optionally, the content category corresponding to the high-pass filter is a prelude, and the corresponding beats are the first to third beats;
the content category corresponding to the low-pass filter is a master song, and the corresponding beats are the first to second beats;
the content category corresponding to the bit distorter is a master song, and the corresponding beats are the fifth to eighth beats;
the content category corresponding to the vibrato is a refrain, and the corresponding beats are the first to fourth beats;
the content category corresponding to the beat circulator is an interlude, and the corresponding beats are the first to second beats.
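The correspondences listed above can be modelled as a lookup table mapping each mixing effector to its content category and beat range. The dictionary layout and the `structure` format below are hypothetical illustrations of the structural feature information, not the embodiment's actual data structures; category names follow the text's translation ("master song" for verse):

```python
# Hypothetical encoding of the effector library described above.
# Beat ranges are 1-indexed and inclusive.
EFFECTOR_LIBRARY = {
    "high_pass_filter": ("prelude",     (1, 3)),
    "low_pass_filter":  ("master song", (1, 2)),
    "bit_distorter":    ("master song", (5, 8)),
    "vibrato":          ("refrain",     (1, 4)),
    "beat_circulator":  ("interlude",   (1, 2)),
}

def select_target_beats(structure, effector_name):
    """For a given effector, return the beats of the target content in
    every music-content segment whose category matches the effector's.

    `structure` stands in for the structural feature information: a list
    of {"category": ..., "beats": [...]} dicts, one per segment."""
    category, (lo, hi) = EFFECTOR_LIBRARY[effector_name]
    return [seg["beats"][lo - 1:hi]
            for seg in structure
            if seg["category"] == category]
```

For the vibrato example in the specification, a structure containing two refrain segments would yield the beats of the fifth and sixth target content.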
In the embodiment of the application, the analysis module obtains the structural feature information of the first music file, and the processing module superimposes audio materials on the first music file according to the structural feature information to obtain a third music file, so that the music content of the third music file is richer. The music content of the third music file is then processed according to the structural feature information and the mixing effectors in the effector library to obtain a second music file, so that different music content of the second music file has different playing effects, enriching the effect of playing the music file.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 shows a block diagram of a terminal 400 according to an exemplary embodiment of the present application. The terminal 400 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the terminal 400 includes: a processor 401 and a memory 402.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 404 communicates with communication networks and other communication devices via electromagnetic signals. The RF circuit 404 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the RF circuit 404 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 405, providing the front panel of the terminal 400; in other embodiments, there may be at least two display screens 405, respectively disposed on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display screen 405 may be a flexible display disposed on a curved or folded surface of the terminal 400. The display screen 405 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display screen 405 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 406 is used to capture images or video. Optionally, the camera assembly 406 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 406 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input them to the processor 401 for processing, or to the radio frequency circuit 404 for voice communication. For stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 400. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 407 may also include a headphone jack.
The positioning component 408 is used to determine the current geographic position of the terminal 400 for navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 409 is used to supply power to the various components in the terminal 400. The power supply 409 may use alternating or direct current, and may include a disposable or rechargeable battery. When the power supply 409 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
In some embodiments, the terminal 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 400. For example, the acceleration sensor 411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 412 may detect a body direction and a rotation angle of the terminal 400, and the gyro sensor 412 may cooperate with the acceleration sensor 411 to acquire a 3D motion of the terminal 400 by the user. From the data collected by the gyro sensor 412, the processor 401 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 413 may be disposed on a side bezel of the terminal 400 and/or below the touch display screen 405. When the pressure sensor 413 is disposed on the side bezel of the terminal 400, it can detect the user's grip on the terminal 400, and the processor 401 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed below the touch display screen 405, the processor 401 controls the operability controls on the UI according to the pressure the user applies to the touch display screen 405. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 414 is used to collect the user's fingerprint, and the processor 401 or the fingerprint sensor 414 identifies the user according to the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 414 may be disposed on the front, back, or side of the terminal 400. When a physical key or vendor logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical key or vendor logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 405 is turned down. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
The proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the terminal 400. The proximity sensor 416 is used to collect the distance between the user and the front surface of the terminal 400. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front surface of the terminal 400 gradually decreases, the processor 401 controls the touch display screen 405 to switch from the screen-on state to the screen-off state; when the proximity sensor 416 detects that the distance gradually increases, the processor 401 controls the touch display screen 405 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 does not limit the terminal 400, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (10)
1. A method of processing music files, the method comprising:
analyzing a first music file to be processed to obtain structural feature information of the first music file, wherein the structural feature information comprises the content category of each music content of the first music file and the beat of each music content, and the music content of the first music file comprises at least one of prelude content, master song content, refrain content and interlude content;
according to a material library and the structural feature information, superimposing audio materials included in the material library on the first music file to obtain a third music file, wherein the material library comprises at least one audio material, and each audio material corresponds to the content category and at least one beat of corresponding music content;
and processing the third music file to obtain a second music file according to an effector library and the structural feature information, wherein the effector library comprises at least one mixing effector, and each mixing effector corresponds to the content category and at least one beat of corresponding music content.
2. The method of claim 1, wherein superimposing, according to the material library and the structural feature information, the audio materials included in the material library on the first music file to obtain the third music file comprises:
according to the content category and at least one beat corresponding to a target audio material, determining target content located within the at least one beat in the music content corresponding to the content category in the first music file, wherein the target audio material is any audio material in the material library; and
superimposing the target audio material on the target content in the first music file.
3. The method according to claim 1 or 2, wherein processing the third music file according to the effector library and the structural feature information to obtain the second music file comprises:
according to the content category and at least one beat corresponding to a target mixing effector, determining target content located within the at least one beat in the music content corresponding to the content category in the third music file, wherein the target mixing effector is any mixing effector in the effector library; and
processing the target content in the third music file using the target mixing effector.
4. The method of claim 1 or 2, wherein the mixing effectors in the effector library comprise at least one of a high-pass filter, a low-pass filter, a bit distorter, a vibrato, and a beat circulator;
the high-pass filter corresponds to a content category and at least one beat of corresponding music content, the low-pass filter corresponds to a content category and at least one beat of corresponding music content, the bit distorter corresponds to a content category and at least one beat of corresponding music content, the vibrato corresponds to a content category and at least one beat of corresponding music content, and the beat circulator corresponds to a content category and at least one beat of corresponding music content.
5. The method of claim 4, wherein the content category corresponding to the high-pass filter is a prelude, and the corresponding beats are the first to third beats;
the content category corresponding to the low-pass filter is a master song, and the corresponding beats are the first to second beats;
the content category corresponding to the bit distorter is a master song, and the corresponding beats are the fifth to eighth beats;
the content category corresponding to the vibrato is a refrain, and the corresponding beats are the first to fourth beats;
the content category corresponding to the beat circulator is an interlude, and the corresponding beats are the first to second beats.
6. An apparatus for processing music files, the apparatus comprising:
the analysis module is configured to analyze a first music file to be processed to obtain structural feature information of the first music file, wherein the structural feature information comprises the content category of each music content of the first music file and the beat of each music content, and the music content of the first music file comprises at least one of prelude content, master song content, refrain content and interlude content;
the processing module is configured to superimpose, according to a material library and the structural feature information, audio materials included in the material library on the first music file to obtain a third music file, wherein the material library comprises at least one audio material, and each audio material corresponds to the content category and at least one beat of corresponding music content; and to process the third music file according to an effector library and the structural feature information to obtain a second music file, wherein the effector library comprises at least one mixing effector, and each mixing effector corresponds to the content category and at least one beat of corresponding music content.
7. The apparatus of claim 6, wherein the processing module is configured to:
according to the content category and at least one beat corresponding to a target audio material, determine target content located within the at least one beat in the music content corresponding to the content category in the first music file, wherein the target audio material is any audio material in the material library; and
superimpose the target audio material on the target content in the first music file.
8. The apparatus of claim 6 or 7, wherein the processing module is configured to:
according to the content category and at least one beat corresponding to a target mixing effector, determine target content located within the at least one beat in the music content corresponding to the content category in the third music file, wherein the target mixing effector is any mixing effector in the effector library; and
process the target content in the third music file using the target mixing effector.
9. The apparatus of claim 6 or 7, wherein the mixing effectors in the effector library comprise at least one of a high-pass filter, a low-pass filter, a bit distorter, a vibrato, and a beat circulator;
the high-pass filter corresponds to a content category and at least one beat of corresponding music content, the low-pass filter corresponds to a content category and at least one beat of corresponding music content, the bit distorter corresponds to a content category and at least one beat of corresponding music content, the vibrato corresponds to a content category and at least one beat of corresponding music content, and the beat circulator corresponds to a content category and at least one beat of corresponding music content.
10. The apparatus of claim 9, wherein the content category corresponding to the high-pass filter is a prelude, and the corresponding beats are the first to third beats;
the content category corresponding to the low-pass filter is a master song, and the corresponding beats are the first to second beats;
the content category corresponding to the bit distorter is a master song, and the corresponding beats are the fifth to eighth beats;
the content category corresponding to the vibrato is a refrain, and the corresponding beats are the first to fourth beats;
the content category corresponding to the beat circulator is an interlude, and the corresponding beats are the first to second beats.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811409332.XA CN109545249B (en) | 2018-11-23 | 2018-11-23 | Method and device for processing music file |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109545249A CN109545249A (en) | 2019-03-29 |
CN109545249B true CN109545249B (en) | 2020-11-03 |
Family
ID=65850383
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811409332.XA Active CN109545249B (en) | 2018-11-23 | 2018-11-23 | Method and device for processing music file |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109545249B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112711330B (en) * | 2020-12-25 | 2023-03-31 | 瑞声新能源发展(常州)有限公司科教城分公司 | Vibration effect realization method, device, equipment and medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1119318A (en) * | 1994-08-10 | 1996-03-27 | 雅马哈株式会社 | Sound signal producing apparatus |
CN101211643A (en) * | 2006-12-28 | 2008-07-02 | 索尼株式会社 | Music editing device, method and program |
CN102568454A (en) * | 2011-12-13 | 2012-07-11 | 北京百度网讯科技有限公司 | Method and device for analyzing music BPM (Beat Per Minutes) |
CN107332994A (en) * | 2017-06-29 | 2017-11-07 | 深圳传音控股有限公司 | A kind of tuning effect Self Matching method and system |
CN108335688A (en) * | 2017-12-28 | 2018-07-27 | 广州市百果园信息技术有限公司 | Main beat point detecting method and computer storage media, terminal in music |
CN108831425A (en) * | 2018-06-22 | 2018-11-16 | 广州酷狗计算机科技有限公司 | Sound mixing method, device and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20080074975A (en) * | 2005-12-09 | 2008-08-13 | 소니 가부시끼 가이샤 | Music edit device, music edit information creating method, and recording medium where music edit information is recorded |
CN104103300A (en) * | 2014-07-04 | 2014-10-15 | 厦门美图之家科技有限公司 | Method for automatically processing video according to music beats |
CN105468328A (en) * | 2014-09-03 | 2016-04-06 | 联想(北京)有限公司 | Information processing method and electronic device |
CN105045809B (en) * | 2015-06-05 | 2017-03-15 | 广州酷狗计算机科技有限公司 | The method and device of piloting of multimedia file |
GB2581032B (en) * | 2015-06-22 | 2020-11-04 | Time Machine Capital Ltd | System and method for onset detection in a digital signal |
CN107967706B (en) * | 2017-11-27 | 2021-06-11 | 腾讯音乐娱乐科技(深圳)有限公司 | Multimedia data processing method and device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||