CN105264913A - Mixing management device and mixing management method - Google Patents

Mixing management device and mixing management method

Info

Publication number
CN105264913A
CN105264913A (application CN201480031354.0A)
Authority
CN
China
Prior art keywords
data
melody
target characteristic
characteristic
target
Prior art date
Legal status
Pending
Application number
CN201480031354.0A
Other languages
Chinese (zh)
Inventor
木村繁树 (Shigeki Kimura)
Current Assignee
Yamaha Corp
Original Assignee
Yamaha Corp
Priority date
Filing date
Publication date
Application filed by Yamaha Corp filed Critical Yamaha Corp
Publication of CN105264913A

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 - Details of electrophonic musical instruments
    • G10H 1/46 - Volume control

Abstract

A first acquisition unit (32) acquires recording characteristic data (Q) representing the acoustic characteristics of multiple sets of recording data (DA) that represent the sounds of the different performed parts of a target musical piece. A second acquisition unit (34) acquires target characteristic data (R) representing target acoustic characteristics of the target musical piece. A setting unit (36) sets, in accordance with the recording characteristic data (Q) and the target characteristic data (R), a control parameter (X) applied to a mixing process, such that the acoustic characteristics after the mixing process is executed on the multiple sets of recording data (DA) of the piece approximate the acoustic characteristics represented by the target characteristic data (R).

Description

Mixing management device and mixing management method
Technical field
The present invention relates to the mixing of multiple sets of sound data representing acoustic waveforms, and more particularly to the setting of parameters applied to a mixing process.
Background art
Techniques have been proposed for mixing multiple sets of sound data in which the performance sounds of the individual parts of a musical piece have been recorded. For example, Patent Document 1 discloses a technique in which a template reflecting the mixing tendencies of a mixing engineer is prepared in advance and applied to the mixing of multiple sets of sound data.
Patent Document 1: Japanese Patent No. 4079260
Summary of the invention
Problems to be solved by the invention
In practice, multiple sets of sound data are recorded under a variety of conditions (recording environment, equipment used). In the technique of Patent Document 1, a template prepared in advance is applied to the mixing independently of the recording conditions of the sound data, so the appropriate mix originally intended by the template may not be achieved under every set of recording conditions. In view of the above, an object of the present invention is to realize a desired mix by compensating for differences in recording conditions.
Means for solving the problems
To solve the above problems, a mixing management device according to the present invention comprises: a first acquisition unit that acquires recording characteristic data representing the acoustic characteristics of multiple sets of recording data, the multiple sets of recording data representing the sounds of different performed parts of a musical piece; a second acquisition unit that acquires target characteristic data representing target acoustic characteristics of the musical piece; and a setting unit that, in accordance with the recording characteristic data and the target characteristic data, sets a control parameter applied to a mixing process such that the acoustic characteristics after the mixing process is executed on the multiple sets of recording data approximate the acoustic characteristics represented by the target characteristic data.
Likewise, a mixing management method according to the present invention comprises: a first acquisition step of acquiring recording characteristic data representing the acoustic characteristics of multiple sets of recording data, the multiple sets of recording data representing the sounds of different performed parts of a musical piece; a second acquisition step of acquiring target characteristic data representing target acoustic characteristics of the musical piece; and a setting step of setting, in accordance with the recording characteristic data and the target characteristic data, a control parameter applied to a mixing process such that the acoustic characteristics after the mixing process is executed on the multiple sets of recording data approximate the acoustic characteristics represented by the target characteristic data.
Brief description of the drawings
Fig. 1 is a block diagram of an acoustic processing system according to a first embodiment of the present invention.
Fig. 2 is an explanatory diagram of the mixing of multiple sets of recording data.
Fig. 3 is an explanatory diagram of various data.
Fig. 4 is a block diagram of the functional configuration of the acoustic processing system.
Fig. 5 is an explanatory diagram of the operation of the acoustic processing system.
Fig. 6 is a block diagram of the functional configuration of an acoustic processing system according to a second embodiment.
Fig. 7 is an explanatory diagram of the operation of the acoustic processing system of the second embodiment.
Fig. 8 is an explanatory diagram of correction data.
Fig. 9 is an explanatory diagram of the target characteristic data of a third embodiment.
Fig. 10 is an explanatory diagram of the operation of the acoustic processing system of the third embodiment.
Fig. 11 is an explanatory diagram of a list of target characteristic data.
Fig. 12 is a block diagram of the functional configuration of a mixing management device according to a fourth embodiment.
Fig. 13 is a block diagram of the functional configuration of a mixing management device according to a modification.
Description of reference numerals
100 ... acoustic processing system, 12 ... terminal device, 14 ... mixing management device, 16 ... communication network, 22 ... sound processing unit, 121, 142 ... control device, 122, 144 ... storage device, 123, 146 ... communication device, 124 ... display device, 125 ... input device, 126 ... playback device, 32 ... first acquisition unit, 34 ... second acquisition unit, 36 ... setting unit, 42 ... communication control unit, 44 ... update unit
Embodiment
(First embodiment)
Fig. 1 is a block diagram of an acoustic processing system 100 according to a first embodiment of the present invention. As illustrated in Fig. 1, the acoustic processing system 100 is a communication system comprising multiple terminal devices 12 and a mixing management device 14. Each terminal device 12 is a communication terminal such as a mobile phone, smartphone, or tablet computer, and communicates with the mixing management device 14 via a communication network 16 such as a mobile communication network or the Internet.
As representatively illustrated for one terminal device 12 in Fig. 1, each terminal device 12 is realized by a computer system comprising a control device 121, a storage device 122, a communication device 123, a display device 124, an input device 125, and a playback device 126. The control device 121 is an arithmetic processing device that executes various control and arithmetic processes by running a program stored in the storage device 122. The storage device 122 (for example, a semiconductor recording medium) stores the program executed by the control device 121 and the various data used by the control device 121. The communication device 123 communicates with the mixing management device 14 via the communication network 16. Communication between the terminal device 12 and the communication network 16 is typically wireless, but when a stationary information processing device is used as the terminal device 12, the terminal device 12 may also communicate with the communication network 16 over a wired connection.
The display device 124 (for example, a liquid crystal display panel) displays images as instructed by the control device 121. The input device 125 is operating equipment that receives instructions from the user to the terminal device 12, and comprises, for example, multiple operating elements for the user to operate. A touch panel integrated with the display device 124 may also be adopted as the input device 125.
The control device 121 of the first embodiment functions as the sound processing unit 22 of Fig. 2 by executing a program stored in the storage device 122. As illustrated in Fig. 2, the sound processing unit 22 generates mixed sound data DB by mixing multiple sets of recording data DA corresponding to the different performed parts of a musical piece. Each set of recording data DA is sound data representing the time waveform of a previously recorded performance sound (voice or instrumental sound) of one specific part of the piece. The mixed sound data DB, in contrast, is sound data (for example, two-channel stereo sound data) representing the time waveform of the performance sound of the piece composed of the multiple parts.
Specifically, the sound processing unit 22 generates the mixed sound data DB by applying sound processing individually to each of the multiple sets of recording data DA of the piece and then summing the results. The sound processing applied to each set of recording data DA is any of various signal processes that alter its acoustic characteristics. Examples include amplification that raises or lowers the average volume, characteristic adjustment that alters the frequency characteristics (for example, the volume of each frequency band), and delay processing that shifts the sound on the time axis. The mixing of the multiple sets of recording data DA by the sound processing unit 22 is controlled by a set of parameters X (hereinafter, "control parameters"). The playback device 126 plays back the sound corresponding to the mixed sound data DB generated by the sound processing unit 22. For convenience, a D/A converter that converts the mixed sound data DB from a digital to an analog signal is omitted from the illustration.
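As a rough illustration of the mixing just described, the following sketch applies a per-track gain and delay before summing the tracks. All names are hypothetical, and a real implementation would also include equalization and reverberation processing:

```python
# Minimal sketch of the mixing performed by the sound processing unit 22:
# each recording-data track DA is individually gained and delayed according
# to its control parameters, then the tracks are summed into DB.

def mix_tracks(tracks, params):
    """tracks: list of sample lists; params: per-track dicts with
    'gain' (linear factor) and 'delay' (in samples)."""
    length = max(p["delay"] + len(t) for t, p in zip(tracks, params))
    mixed = [0.0] * length
    for samples, p in zip(tracks, params):
        for i, s in enumerate(samples):
            mixed[p["delay"] + i] += p["gain"] * s
    return mixed

da1 = [1.0, 1.0, 1.0]          # e.g. a vocal part
da2 = [0.5, 0.5, 0.5, 0.5]     # e.g. a guitar part
x = [{"gain": 0.8, "delay": 0}, {"gain": 1.0, "delay": 1}]
db = mix_tracks([da1, da2], x)  # the mixed sound data DB
```

The point of the control parameters X in the embodiment is to choose values such as these gains and delays automatically rather than by hand.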
The mixing management device 14 of Fig. 1 manages the mixing of multiple sets of recording data DA performed by the sound processing unit 22 of each terminal device 12. Specifically, the mixing management device 14 of the first embodiment is a server device (typically a web server) that sets the control parameters X preferably applied to the mixing of the multiple sets of recording data DA, and is realized by a computer system comprising a control device 142, a storage device 144, and a communication device 146. The mixing management device 14 may also be realized by multiple mutually separate devices (for example, multiple server devices communicating with one another via the communication network 16). The control device 142 is an arithmetic processing device that executes various control and arithmetic processes by running a program stored in the storage device 144. The communication device 146 communicates with each terminal device 12 via the communication network 16.
The storage device 144 stores the program executed by the control device 142 and the various data used by the control device 142. A well-known recording medium such as a semiconductor recording medium or a magnetic recording medium, or a combination of multiple recording media, may be adopted as the storage device 144. Alternatively, the storage device 144 may be provided in an external device (for example, a server device) separate from the mixing management device 14, with the mixing management device 14 writing information to and reading it from that storage device 144 via the communication network 16.
Fig. 3 is an explanatory diagram of the stored contents of the storage device 144 and the operation of the control device 142. As illustrated in Fig. 3, the storage device 144 of the first embodiment stores multiple sets of music data M. Each set of music data M comprises attribute data MA specifying information about the musical piece, such as its title and singer, together with N sets of recording data DA (N being a natural number of 2 or more) corresponding to the different performed parts of the piece. As mentioned above, the N sets of recording data DA are the sound data to be mixed by the terminal device 12 (sound processing unit 22). The N sets (N tracks) of recording data DA, in which the performance sounds of the N parts of the piece have been recorded in parallel or independently, are stored in the storage device 144. Specifically, the N sets of recording data DA generated by recording the performance sounds in an acoustic space such as a music studio (multitrack recording) are transmitted from a terminal device 12 to the mixing management device 14 and then stored in the storage device 144. Consequently, even when the same piece is shared among multiple sets of music data M, the recording conditions (recording environment, equipment used) of the recording data DA may differ for each set of music data M. In the following description, for convenience, each piece is assumed to consist of N performed parts, but in practice the total number of parts differs from piece to piece.
As also illustrated in Fig. 3, the storage device 144 of the first embodiment stores multiple sets of target characteristic data R corresponding to different musical pieces. Each set of target characteristic data R is data representing the acoustic characteristics of an exemplary performance of the piece (that is, the acoustic characteristics targeted by the mixing) and, as illustrated in Fig. 3, comprises attribute data RA specifying information about the piece together with N sets of unit data r[1] to r[N] corresponding to its different performed parts. The attribute data RA specifies, for example, the title of the piece and its structure (intro, A section, B section, chorus, and so on).
Each set of unit data r[n] (n = 1 to N) contained in one set of target characteristic data R represents the acoustic characteristics (target acoustic characteristics) of the performance sound of the n-th of the N parts composing the piece. Specifically, each set of unit data r[n] specifies acoustic characteristics such as the average volume within the piece, the time variation of the volume, the stereo image position (the position at which a listener localizes the perceived sound image), the frequency characteristics, and the reverberation characteristics. Examples of reverberation characteristics include the reverberation time and, when the reverberation interval is divided on the time axis into an early-reflection interval and a late reverberation interval, the lengths of those intervals.
The kinds of acoustic characteristics specified by the unit data r[n] are set individually for each performed part and may differ between parts. For example, as illustrated in Fig. 3, the unit data r[n] of a part dominated by pitched (harmonic) sound, such as the vocal or guitar part, includes the time variation of frequency characteristics such as the envelope of the harmonic structure (the series formed by the fundamental component and multiple overtone components), whereas the unit data r[n] of a part dominated by unpitched (inharmonic) sound, such as the drum (rhythm) part, includes information related to the rhythm of the piece (for example, its tempo and the period of each beat).
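The per-part layout described above might be represented as follows. Every field name here is an illustrative assumption, not the patent's actual encoding; the sketch only shows that harmonic and rhythmic parts carry different kinds of unit data:

```python
# Hypothetical sketch of one set of target characteristic data R: attribute
# data RA plus one unit-data record r[n] per performed part, where the
# fields depend on whether the part is harmonic or rhythmic.

target_r = {
    "RA": {"title": "Example Song", "structure": ["intro", "A", "B", "chorus"]},
    "units": [
        {   # r[1]: vocal part (harmonic) -> spectral-envelope style fields
            "part": "vocal",
            "avg_volume_db": -12.0,
            "image_position": -0.1,          # -1.0 (left) .. +1.0 (right)
            "harmonic_envelope": [0.9, 0.5, 0.3, 0.1],
        },
        {   # r[2]: drum part (inharmonic) -> rhythm-related fields
            "part": "drums",
            "avg_volume_db": -10.0,
            "tempo_bpm": 120,
            "beat_period_s": 0.5,
        },
    ],
}

def fields_for(part_units, name):
    """Return the kinds of characteristics stored for one part."""
    unit = next(u for u in part_units if u["part"] == name)
    return sorted(k for k in unit if k != "part")
```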
The target characteristic data R of the first embodiment is generated in advance by analyzing existing sound data (hereinafter, "target data") and is stored in the storage device 144. For example, the target characteristic data R is generated by analyzing, as the target data (that is, sound data of an exemplary performance), existing sound data recorded on a recording medium such as a music CD, or MP3-format sound data distributed to the terminal devices 12. Alternatively, a producer such as a sound engineer may generate the target characteristic data R manually.
Fig. 4 is a functional block diagram of the mixing management device 14 of the first embodiment. As illustrated in Fig. 4, the control device 142 of the mixing management device 14, by executing a program stored in the storage device 144, realizes multiple functions (first acquisition unit 32, second acquisition unit 34, setting unit 36, communication control unit 42) for setting the control parameters X applied to the mixing of the N sets of recording data DA of a specific musical piece (hereinafter, "target piece"). A configuration in which the functions of the control device 142 are distributed over multiple integrated circuits, or in which part of those functions is realized by a dedicated electronic circuit (for example, a DSP), may also be adopted.
The first acquisition unit 32 of Fig. 4 acquires recording characteristic data Q representing the acoustic characteristics of the sounds (for example, the actual performance sounds of each user) represented by the N sets of recording data DA of the target piece. As mentioned above, the recording conditions (and hence the acoustic characteristics of the performance sounds) of the recording data DA may differ for each set of music data M, so even for the same piece, the recording characteristic data Q representing the acoustic characteristics of the recording data DA may differ for each set of music data M.
As illustrated in Fig. 3, the recording characteristic data Q comprises N sets of unit data q[1] to q[N] corresponding to the different performed parts (tracks) of the target piece. Each set of unit data q[n] represents the acoustic characteristics of the performance sound of the n-th of the N parts of the target piece. Specifically, like the unit data r[n] of the target characteristic data R, each set of unit data q[n] of the recording characteristic data Q specifies acoustic characteristics such as the average volume within the piece, the time variation of the volume, the stereo image position, the frequency characteristics, and the reverberation characteristics. The kinds of acoustic characteristics specified by the unit data q[n] may differ for each performed part.
The first acquisition unit 32 of the first embodiment generates the recording characteristic data Q by analyzing the N sets of recording data DA in the music data M of the target piece stored in the storage device 144. Specifically, the first acquisition unit 32 identifies the performed part of each of the N sets of recording data DA of the target piece and analyzes the recording data DA of each part, thereby generating the unit data q[n] of each part (that is, the recording characteristic data Q).
To identify the performed part of each set of recording data DA, reference data specifying the time series of the notes of each part of the piece (that is, its score) is preferably used. The reference data is, for example, time-series data in the MIDI (Musical Instrument Digital Interface) format used in karaoke performance. Specifically, the first acquisition unit 32 analyzes the time variation of the pitch represented by each set of recording data DA and identifies, among the multiple parts in the reference data, the part whose note sequence is most similar to that pitch trajectory as the part of that recording data DA. Alternatively, the part may be identified from the pitch range (distribution of pitches) of the sound represented by the recording data DA, or from the presence or absence of a harmonic structure (for example, identifying recording data DA lacking a harmonic structure as the drum part).
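The part-identification idea above can be sketched as follows. This is an assumed, simplified matcher (frame-wise pitch distance against each reference part's note sequence), not the patent's actual analysis; it also shows the fallback of assigning an unpitched track to the drum part:

```python
# Hypothetical sketch of part identification: the pitch trajectory extracted
# from one set of recording data DA is compared with the note sequence of
# each part in the MIDI-style reference data, and the part with the smallest
# average pitch distance is chosen. A track with no detectable pitch (no
# harmonic structure) is assigned to the drum part.

def identify_part(recorded_pitches, reference_parts):
    """recorded_pitches: MIDI note numbers per frame (None = unpitched);
    reference_parts: dict mapping part name -> reference note sequence."""
    pitched = [p for p in recorded_pitches if p is not None]
    if not pitched:                      # no harmonic structure at all
        return "drums"
    best, best_cost = None, float("inf")
    for name, notes in reference_parts.items():
        n = min(len(pitched), len(notes))
        cost = sum(abs(a - b) for a, b in zip(pitched[:n], notes[:n])) / n
        if cost < best_cost:
            best, best_cost = name, cost
    return best

ref = {"vocal": [60, 62, 64, 65], "bass": [36, 38, 40, 41]}
part = identify_part([60, 61, 64, 65], ref)        # -> "vocal"
drum = identify_part([None, None, None], ref)      # -> "drums"
```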
The second acquisition unit 34 of Fig. 4 acquires the target characteristic data R of the target piece. The second acquisition unit 34 of the first embodiment selects, from among the multiple sets of target characteristic data R stored in the storage device 144 for different musical pieces, the set corresponding to the target piece, and acquires it from the storage device 144.
The setting unit 36 sets the control parameters X of the target piece in accordance with the recording characteristic data Q acquired by the first acquisition unit 32 and the target characteristic data R acquired by the second acquisition unit 34. Specifically, the setting unit 36 sets the control parameters X from the recording characteristic data Q and the target characteristic data R such that the acoustic characteristics after the mixing of the N sets of recording data DA of the target piece approximate (ideally, match) the acoustic characteristics represented by the target characteristic data R. For example, an optimization process is preferably used in the setting by the setting unit 36: a tentative set of control parameters X is updated successively so that the acoustic characteristics obtained when the N sets of recording data DA are mixed with the tentative parameters applied approach the acoustic characteristics of the target characteristic data R (that is, the difference between the characteristics is minimized). As mentioned above, the recording conditions of the recording data DA may differ for each set of music data M, so even for the same piece the control parameters X may differ for each set of music data M.
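A minimal sketch of such an optimization, under the assumption of a single gain parameter and a simple proportional update rule (the patent does not specify the update method): a tentative parameter is repeatedly adjusted until the measured characteristic after mixing approaches the target characteristic.

```python
# Successive-update sketch: drive the difference between the measured
# volume (after applying the tentative parameter) and the target volume
# toward zero, mirroring the iterative optimization described above.

def optimize_gain(track_rms, target_rms, steps=100, rate=0.5):
    gain = 1.0                         # tentative control parameter X
    for _ in range(steps):
        measured = gain * track_rms    # characteristic after mixing
        error = target_rms - measured  # difference from target R
        gain += rate * error / track_rms   # shrink the difference
    return gain

g = optimize_gain(track_rms=0.2, target_rms=0.5)
# after convergence, 0.2 * g approximates the target volume 0.5
```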
As illustrated in Fig. 3, the control parameters X of the first embodiment comprise multiple kinds of parameters reflected in the mixing of the N sets of recording data DA: a time code X1, a volume parameter X2, a position parameter X3, a frequency characteristic parameter X4, and a reverberation parameter X5.
The time code X1 is data for aligning each point in time of each set of recording data DA, on the time axis, with the corresponding point in time of the target data (that is, for synchronizing corresponding notes of the piece on the time axis). Specifically, the time code X1 specifies the delay (amount of movement on the time axis) of each point in time of the recording data DA such that it corresponds to the matching point in time of the target data. The setting unit 36 sets the time code X1 by, for example, comparing the rhythm information specified by the unit data q[n] of the rhythm part in the recording characteristic data Q with the rhythm information specified by the unit data r[n] of the rhythm part in the target characteristic data R. For generating the time code X1 (synchronization analysis of the recording data DA and the target data), the technique described in, for example, Japanese Laid-Open Patent Publication No. 2011-053589 is preferably used.
The volume parameter X2 is data for making the volume of each set of recording data DA of the music data M approximate (ideally, match) the volume of the corresponding part of the target data. Specifically, the setting unit 36 sets the volume parameter X2 according to the difference between the volume (the average volume within the piece and the time variation of the volume) specified by each set of unit data q[n] of the recording characteristic data Q and the volume specified by the corresponding unit data r[n] of the target characteristic data R. Likewise, the position parameter X3 is data for making the stereo image position of each set of recording data DA approximate (ideally, match) that of the corresponding part of the target data; the setting unit 36 sets the position parameter X3 according to the difference between the stereo image position specified by each set of unit data q[n] and that specified by the corresponding unit data r[n].
The frequency characteristic parameter X4 is data for making the frequency characteristics of each set of recording data DA of the music data M approximate (ideally, match) those of the corresponding part of the target data; for example, it specifies the gain applied to each frequency band of the sound of the recording data DA (a parametric equalizer). Specifically, the setting unit 36 sets the frequency characteristic parameter X4 according to the difference between the frequency characteristics (for example, the envelope of the harmonic structure) specified by each set of unit data q[n] of the recording characteristic data Q and those specified by the corresponding unit data r[n] of the target characteristic data R. Similarly, the reverberation parameter X5 is data for making the reverberation characteristics of each set of recording data DA approximate (ideally, match) those of the corresponding part of the target data; the setting unit 36 sets the reverberation parameter X5 according to the difference between the reverberation characteristics specified by each set of unit data q[n] and those specified by the corresponding unit data r[n]. The above are concrete examples of the control parameters X.
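The "set according to the difference between q[n] and r[n]" pattern shared by X2 and X4 can be sketched concretely. Field names and the dB representation are assumptions for illustration:

```python
# Sketch of deriving the volume parameter X2 and a per-band frequency
# characteristic parameter X4 from the difference between the recording
# characteristics q[n] and the target characteristics r[n].

def volume_parameter(q_avg_db, r_avg_db):
    """X2: dB gain moving the recorded average volume to the target's."""
    return r_avg_db - q_avg_db

def frequency_parameter(q_band_db, r_band_db):
    """X4: per-band dB gains (parametric-equalizer style)."""
    return [r - q for q, r in zip(q_band_db, r_band_db)]

x2 = volume_parameter(q_avg_db=-18.0, r_avg_db=-12.0)       # +6 dB boost
x4 = frequency_parameter([-20.0, -15.0, -30.0], [-18.0, -15.0, -24.0])
# x4 = [2.0, 0.0, 6.0] dB for the three bands
```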
The communication control unit 42 of Fig. 4 controls communication with each terminal device 12 via the communication device 146. Specifically, the communication control unit 42 transmits, from the communication device 146 to the terminal device 12, the control parameters X set for the target piece by the setting unit 36 together with the N sets of recording data DA of the music data M of the target piece stored in the storage device 144. That is, the control device 142 (communication control unit 42) of the first embodiment functions as an element that transmits the control parameters X and the multiple sets of recording data DA to the terminal device 12.
Fig. 5 is an explanatory diagram of the operation of the first embodiment. The user selects a desired target piece and instructs the start of mixing by operating the input device 125 of the terminal device 12 as appropriate. Upon receiving this instruction, the control device 121 of the terminal device 12 transmits a processing request specifying the target piece selected by the user from the communication device 123 to the mixing management device 14 via the communication network 16 (S1).
When the communication device 146 receives the processing request transmitted from the terminal device 12, the control device 142 (first acquisition unit 32) of the mixing management device 14 retrieves the music data M of the target piece specified by the processing request from the storage device 144 by referring to the attribute data MA (piece title and so on) of each piece, and generates the recording characteristic data Q by analyzing the music data M (each set of recording data DA) of the target piece (S2). The control device 142 (second acquisition unit 34) then retrieves the target characteristic data R of the target piece by referring to the attribute data RA (piece title and so on) of each set of target characteristic data R, and acquires it from the storage device 144 (S3). Further, the control device 142 (setting unit 36) generates the control parameters X (X1 to X5) corresponding to the recording characteristic data Q acquired in step S2 and the target characteristic data R acquired in step S3, such that the acoustic characteristics after the mixing of the N sets of recording data DA of the target piece approximate the acoustic characteristics of the target characteristic data R (S4). The control device 142 (communication control unit 42) transmits, from the communication device 146 to the terminal device 12 that was the source of the processing request (S1), the N sets of recording data DA contained in the music data M of the target piece together with the control parameters X set in step S4 (S5).
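The server-side steps S2 to S5 can be condensed into a sketch. The helper names, storage layout, and the reduction of X to a single per-part volume gain are all simplifying assumptions:

```python
# Condensed sketch of the server-side flow: look up the music data M (S2),
# derive Q by analysis (S2), fetch R (S3), set X from the Q/R difference
# (S4), and return the payload sent back to the requesting terminal (S5).

def handle_processing_request(piece, storage):
    m = storage["music"][piece]                        # S2: retrieve M
    q = {part: {"avg_db": sum(v) / len(v)}             # S2: analyze DA -> Q
         for part, v in m["DA"].items()}
    r = storage["targets"][piece]                      # S3: acquire R
    x = {part: r[part]["avg_db"] - q[part]["avg_db"]   # S4: set X
         for part in q}
    return {"DA": m["DA"], "X": x}                     # S5: payload for S5

store = {
    "music": {"song": {"DA": {"vocal": [-20.0, -16.0],
                              "drums": [-12.0, -10.0]}}},
    "targets": {"song": {"vocal": {"avg_db": -12.0},
                         "drums": {"avg_db": -10.0}}},
}
reply = handle_processing_request("song", store)
# reply["X"] == {"vocal": 6.0, "drums": 1.0}
```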
When the communication device 123 receives the control parameters X and the N sets of recording data DA transmitted from the mixing management device 14, the control device 121 (sound processing unit 22) of the terminal device 12 applies the control parameters X to mix the N sets of recording data DA, thereby generating the mixed sound data DB, and plays back the target piece by supplying the mixed sound data DB to the playback device 126 (S6).
As described above, in the first embodiment the control parameters X are set such that the acoustic characteristics obtained when the N sets of recording data DA are mixed approximate the acoustic characteristics of the target characteristic data R, so the acoustic characteristics of the sound reproduced from the playback device 126 in step S6 of Fig. 5 approximate or match the acoustic characteristics specified by the target characteristic data R. That is, the first embodiment has the advantage that, even when the recording conditions of the recording data DA differ for each set of music data M (in other words, regardless of the recording conditions of the music data M), the differences in recording conditions can be reduced (compensated for) and the desired mix realized. Moreover, in the first embodiment the setting of the control parameters X according to the recording characteristic data Q and the target characteristic data R is performed centrally by the mixing management device 14, and the control parameters X are then transmitted to each terminal device 12. Compared with a configuration in which each terminal device 12 performs the calculation and setting of the control parameters X (the setting unit 36), this has the advantage of reducing the processing load on each terminal device 12.
(Second embodiment)
A second embodiment of the present invention is described below. In each of the forms illustrated hereafter, elements whose operation and function are the same as in the first embodiment retain the reference numerals used in the description of the first embodiment, and detailed description of them is omitted as appropriate.
Fig. 6 is a block diagram of the acoustic processing system 100 of the second embodiment, and Fig. 7 is an explanatory diagram of the operation of the second embodiment. As illustrated in Fig. 7, in the second embodiment, by performing the same operations as in the first embodiment (S1 to S6), the adjusted sound data DB obtained by mixing the N sets of recording data DA according to the control parameter X is played from the playback device 126.
A user who has listened to the playback sound of the target musical piece reproduced from the playback device 126 can instruct a correction of the acoustic characteristics of that playback sound by appropriately operating the input device 125 of the terminal device 12. The control device 121 of the terminal device 12 generates correction data Z corresponding to the instructed correction of the acoustic characteristics (S7) and, as understood from Fig. 6 and Fig. 7, sends the correction data Z from the communication device 123 to the mixing management device 14 via the communication network 16 (S8).
Fig. 8 is an explanatory diagram of the correction of acoustic characteristics. When the playback of the adjusted sound data DB of the target musical piece ends, the control device 121 of the terminal device 12 causes the display device 124 to display the edit image 50 of Fig. 8. The edit image 50 displays, side by side for comparison, the acoustic characteristic CA before the mix is performed (the acoustic characteristic of the recording data DA) and the acoustic characteristic CB after the mix with the control parameter X applied (the acoustic characteristic of the adjusted sound data DB). Fig. 8 exemplifies the time variation of the volume before and after the mix with the control parameter X applied (horizontal axis: time; vertical axis: volume). Taking into account the result of listening to the playback sound of the adjusted sound data DB, and comparing against the pre-mix acoustic characteristic CA, the user can appropriately operate the input device 125 and thereby edit the post-mix acoustic characteristic CB into a desired acoustic characteristic CZ. The control device 121 of the terminal device 12 generates correction data Z representing the corrected acoustic characteristic CZ indicated by the user (S7), and sends the correction data Z from the communication device 123 to the mixing management device 14 (S8).
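One way to represent the correction data Z produced in step S7 is sketched below: the user-edited curve CZ is packaged together with its per-sample deviation from the post-mix curve CB. This representation is an assumption for illustration; the patent does not fix the internal format of Z.

```python
def make_correction_data(cb, cz):
    """Build correction data Z from the post-mix volume curve CB and the
    user-edited desired curve CZ (both sampled at the same time points).
    Z here carries the edited curve plus its deviation from CB, so the
    mixing management device can apply or inspect the requested change."""
    return {
        "desired_curve": list(cz),
        "delta": [z - b for b, z in zip(cb, cz)],
    }

# The user raises the volume of the last two frames by 0.1.
cb = [0.5, 0.5, 0.6, 0.6]   # acoustic characteristic CB after the mix
cz = [0.5, 0.5, 0.7, 0.7]   # desired acoustic characteristic CZ
z = make_correction_data(cb, cz)
```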
As understood from Fig. 6, the communication device 146 of the mixing management device 14 of the second embodiment receives the correction data Z sent from the terminal device 12. The communication control unit 42 acquires the correction data Z received by the communication device 146. That is, the control device 142 (communication control unit 42) of the second embodiment functions as an element (instruction reception unit) that acquires the correction data Z corresponding to an instruction from a user who has listened to the playback sound of the adjusted sound data DB. The editing of the acoustic characteristic CB in the terminal device 12 (the transmission of the correction data Z) is performed repeatedly, each time adjusted sound data DB generated by the mix with the control parameter X applied is played. Therefore, the communication control unit 42 successively acquires, from a plurality of terminal devices 12, a plurality of pieces of correction data Z corresponding to instructions from different users for a common target musical piece.
As illustrated in Fig. 6, the mixing management device 14 of the second embodiment has a configuration in which an update unit 44 is added to the mixing management device 14 of the first embodiment. The update unit 44 updates the target characteristic data R of the target musical piece in the storage device 144 according to the correction data Z that the communication control unit 42 acquires from the terminal devices 12 (S9). Specifically, the update unit 44 updates the target characteristic data R (each unit data r[n]) of the target musical piece so that it approaches the acoustic characteristics represented by the correction data Z. For example, the update unit 44 performs predetermined statistical processing (typically averaging) on the plurality of pieces of correction data Z that the communication control unit 42 acquires from the terminal devices 12, and updates the target characteristic data R of the target musical piece using the correction data Z resulting from that processing.
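The averaging update of step S9 can be sketched as follows. The desired curves from multiple pieces of correction data Z are averaged, and R is moved toward that average; the blend weight is an assumed detail, since the patent only says R is updated "so that it approaches" the corrected characteristics.

```python
def update_target(target_curve, corrections, weight=0.5):
    """Move target characteristic data R toward the average of the desired
    curves carried by the acquired correction data Z (the 'typically
    averaging' statistical processing). `weight` controls how far R moves
    toward the average and is an assumption, not specified in the patent.
    corrections: list of curves, each the same length as target_curve."""
    n = len(corrections)
    avg = [sum(c[i] for c in corrections) / n
           for i in range(len(target_curve))]
    return [(1.0 - weight) * t + weight * a
            for t, a in zip(target_curve, avg)]

r = [0.4, 0.4]                       # current target characteristic data R
zs = [[0.6, 0.6], [0.8, 0.8]]        # desired curves from two users
r_new = update_target(r, zs)         # R moved halfway toward the average 0.7
```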
The second embodiment can also achieve the same effects as the first embodiment. In addition, in the second embodiment, the instructions (correction data Z) from users who have listened to the playback sound of the adjusted sound data DB are reflected in the target characteristic data R; there is therefore the advantage that the target characteristic data R stored in the storage device 144 can be updated to content specifying the acoustic characteristics desired by the users. In particular, in the second embodiment, a plurality of pieces of correction data Z corresponding to instructions from different users are reflected in the target characteristic data R, so there is the special advantage that the target characteristic data R can be updated to content preferred by the majority of users.
(Third Embodiment)
Fig. 9 is an explanatory diagram of the target characteristic data R of the third embodiment. In the first embodiment, a plurality of pieces of target characteristic data R corresponding to different musical pieces were prepared. In the third embodiment, as illustrated in Fig. 9, a plurality of pieces of target characteristic data R representing different acoustic characteristics are stored in the storage device 144 for each musical piece. That is, for any one musical piece, a plurality of pieces of target characteristic data R with mutually different acoustic characteristics are prepared. For example, for each musical piece, the storage device 144 stores, in addition to target characteristic data R representing standard acoustic characteristics, target characteristic data R representing distinctive acoustic characteristics, such as characteristics that favor the instrumental performance or characteristics that emphasize the rhythm.
Fig. 10 is an explanatory diagram of the operation of the third embodiment. As in the first embodiment, when a processing request specifying a target musical piece is received from a terminal device 12 (S1), the control device 142 (communication control unit 42) of the mixing management device 14 notifies the terminal device 12 of the plurality of pieces of target characteristic data R of the target musical piece specified by the processing request, stored in the storage device 144, as candidates for the user's selection (S11). As illustrated in Fig. 11, the control device 121 of the terminal device 12 causes the display device 124 to display a list 52 in which the plurality of pieces of target characteristic data R of the target musical piece are arranged as candidates for the user's selection (S12). The user can select the desired target characteristic data R by appropriately operating the input device 125. When the selection of the target characteristic data R is received (S13), the control device 121 of the terminal device 12 notifies the mixing management device 14 of the target characteristic data R selected by the user (S14).
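The per-piece storage of multiple target characteristic data and the selection flow above can be sketched as a small keyed store. The style labels ("standard", "instrumental", "rhythm") and function names are illustrative only, not terms from the patent.

```python
# Minimal per-piece store of multiple target characteristic data R, keyed by
# an assumed style label. Values are per-part target average volumes.
TARGETS = {
    "song_a": {
        "standard":     {"vocal": 0.5, "guitar": 0.4, "drums": 0.4},
        "instrumental": {"vocal": 0.3, "guitar": 0.6, "drums": 0.4},
        "rhythm":       {"vocal": 0.4, "guitar": 0.3, "drums": 0.6},
    },
}

def list_candidates(piece):
    """Return the selection candidates notified to the terminal (step S11)."""
    return sorted(TARGETS[piece])

def get_target(piece, style):
    """Return the target characteristic data R the user selected (S13/S14)."""
    return TARGETS[piece][style]
```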
When the notification of the target characteristic data R selected by the user is received, the control device 142 of the mixing management device 14 generates the recording characteristic data Q by analyzing the music data M of the target musical piece (S2), and acquires, from among the plurality of pieces of target characteristic data R of the target musical piece in the storage device 144, the target characteristic data R notified from the terminal device 12 in step S14 (S3). The control device 142 (setting unit 36) sets the control parameter X according to the recording characteristic data Q of the target musical piece and the target characteristic data R selected by the user of the terminal device 12 (S4). The subsequent operations are the same as in the first and second embodiments.
The third embodiment can also achieve the same effects as the first embodiment. In addition, in the third embodiment, the plurality of pieces of target characteristic data R prepared for each musical piece are selectively used for setting the control parameter X; therefore, compared with a configuration in which a single piece of target characteristic data R is fixedly applied to the setting of the control parameter X, adjusted sound data DB with a variety of acoustic characteristics can be generated. In particular, in the third embodiment, the target characteristic data R selected by the user from among the plurality of pieces of target characteristic data R is applied to the setting of the control parameter X, so there is the advantage that adjusted sound data DB with acoustic characteristics matching the user's intentions and tastes can be generated.
The configuration of the second embodiment, in which the target characteristic data R is updated according to the correction data Z, can also be adopted in the third embodiment. Specifically, the update unit 44 updates, among the plurality of pieces of target characteristic data R of the target musical piece stored in the storage device 144, the one the user selected from the list 52, according to the correction data Z reflecting the user's editing instruction.
(Fourth Embodiment)
In each of the embodiments described above, the acoustic processing system 100 comprising the terminal devices 12 and the mixing management device 14, which communicate with each other via the communication network 16, was exemplified. The mixing management device 14 of the fourth embodiment realizes, as a single standalone device, the same functions as the acoustic processing system 100 of each of the embodiments described above.
Fig. 12 is a block diagram of the mixing management device 14 of the fourth embodiment. As illustrated in Fig. 12, the mixing management device 14 of the fourth embodiment is realized, like the terminal devices 12 of the embodiments described above, by a computer system comprising a control device 121, a storage device 122, a display device 124, an input device 125, and a playback device 126. For example, an information processing device such as a mobile phone, a smartphone, or a personal computer is used as the mixing management device 14.
The storage device 122 stores, in addition to the program executed by the control device 121, the music data M and the target characteristic data R for each musical piece. By executing the program stored in the storage device 122, the control device 121 realizes each of the functions illustrated in the embodiments described above (the first acquisition unit 32, the second acquisition unit 34, the setting unit 36, the acoustic processing unit 22, and the update unit 44). Specifically, the first acquisition unit 32 generates the recording characteristic data Q by analyzing the music data M of the target musical piece stored in the storage device 122, and the second acquisition unit 34 acquires the target characteristic data R of the target musical piece from the storage device 122. The setting unit 36 generates the control parameter X according to the recording characteristic data Q generated by the first acquisition unit 32 and the target characteristic data R acquired by the second acquisition unit 34. The acoustic processing unit 22 mixes the N sets of recording data DA of the music data M of the target musical piece with the control parameter X set by the setting unit 36 applied, thereby generating adjusted sound data DB, which is played from the playback device 126. The update unit 44 updates the target characteristic data R of the target musical piece in the storage device 122 according to instructions (correction data Z) from users who have listened to the playback sound of the adjusted sound data DB. The specific processing of each element is as illustrated in the embodiments described above.
The fourth embodiment can also achieve the same effects as the first embodiment. In addition, the configuration of the third embodiment, in which a plurality of pieces of target characteristic data R corresponding to the target musical piece are selectively used for setting the control parameter X, can also be adopted in the fourth embodiment. Furthermore, the configuration for updating the target characteristic data R according to the correction data Z (the update unit 44) may be omitted from the fourth embodiment.
<Modifications>
Each of the embodiments described above can be modified in various ways. Specific modifications are illustrated below. Two or more embodiments arbitrarily selected from the following examples may be combined as appropriate.
(1) In each of the embodiments described above, a configuration was exemplified in which the first acquisition unit 32 generates the recording characteristic data Q by analyzing each set of recording data DA of the music data M; however, the recording characteristic data Q of each musical piece may also be stored in advance in the storage device 144 (the storage device 122 in the fourth embodiment). In the configuration in which the recording characteristic data Q is stored in the storage device 144, the first acquisition unit 32 functions as an element that reads the recording characteristic data Q of the target musical piece from the storage device 144. As understood from the above description, the first acquisition unit 32 is generally expressed as an element that acquires recording characteristic data Q representing the acoustic characteristics of a plurality of sets of recording data DA, and encompasses both an element that generates the recording characteristic data Q by analyzing each set of recording data DA (each of the embodiments described above) and an element that reads the recording characteristic data Q stored in advance in the storage device 144.
Similarly, in each of the embodiments described above, a configuration was exemplified in which the second acquisition unit 34 reads the target characteristic data R stored in the storage device 144 (the storage device 122 in the fourth embodiment); however, target data of each musical piece may also be stored in the storage device 144. In the configuration in which the target data is stored in the storage device 144, the second acquisition unit 34 functions as an element that generates the target characteristic data R by analyzing the target data of the target musical piece. As understood from the above description, the second acquisition unit 34 is generally expressed as an element that acquires target characteristic data R representing the target acoustic characteristics of a musical piece, and encompasses both an element that reads the target characteristic data R stored in advance in the storage device 144 (each of the embodiments described above) and an element that generates the target characteristic data R by analyzing the target data.
(2) In the first to third embodiments, a configuration was exemplified in which the control parameter X and the N sets of recording data DA are sent from the mixing management device 14 to the terminal device 12, and the acoustic processing unit 22 of the terminal device 12 mixes the N sets of recording data DA; however, as illustrated in Fig. 13, the acoustic processing unit 22 may also be provided in the mixing management device 14. The acoustic processing unit 22 of Fig. 13 mixes the N sets of recording data DA of the target musical piece with the control parameter X set by the setting unit 36 applied, thereby generating adjusted sound data DB. The communication control unit 42 sends the adjusted sound data DB generated by the acoustic processing unit 22 from the communication device 146 to the terminal device 12 via the communication network 16. With this configuration, there is the advantage that the acoustic processing unit 22 need not be installed in each terminal device 12.
(3) In the second embodiment, a single terminal device 12 performs both the playback of the adjusted sound data DB and the transmission of the correction data Z; however, the device that plays the adjusted sound data DB and the device that sends the correction data Z may also be configured separately. For example, the following configuration is preferable: an information processing device such as a personal computer performs the playback of the adjusted sound data DB, while a portable information processing device such as a mobile phone or smartphone performs the generation and transmission of the correction data Z corresponding to the instructions from the user.
(4) In the second embodiment, a plurality of pieces of correction data Z reflecting the intentions and tastes of various users are sent from the terminal devices 12 to the mixing management device 14. In this configuration, a plurality of pieces of target characteristic data R may also be generated using the plurality of pieces of correction data Z acquired from the terminal devices 12. For example, according to the tendencies of the content instructed by the users, the plurality of pieces of correction data Z acquired from the terminal devices 12 for one musical piece are classified (clustered) into a plurality of sets, and for each of the plurality of sets, the initial (standard) target characteristic data R is updated according to each piece of correction data Z in that set, so that separate target characteristic data R is generated for each set (for each tendency of the correction data Z). For example, the update unit 44 generates target characteristic data R1 by reflecting each piece of correction data Z classified into a set G1 in the initial target characteristic data R, and generates target characteristic data R2, representing acoustic characteristics independent of the target characteristic data R1, by reflecting each piece of correction data Z classified into a set G2 in the initial target characteristic data R. The plurality of pieces of target characteristic data R generated for each musical piece by the above procedure are selectively used for setting the control parameter X according to instructions from users, as in the third embodiment. With this configuration, there is the advantage that, for example, both standard (average) target characteristic data R and distinctive target characteristic data R can be generated.
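The cluster-then-update idea of modification (4) can be sketched as below. The patent says only "classify (cluster)" without fixing an algorithm, so a plain k-means over the correction curves is an assumed stand-in, and blending each cluster mean with the initial R at equal weight is likewise an illustrative choice.

```python
def cluster_and_build_targets(base, corrections, k=2, iters=20):
    """Group correction curves Z into k sets (G1, G2, ...) by k-means on
    the curves themselves, then reflect each set in the initial target
    characteristic data R to obtain one target per tendency (R1, R2, ...).
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centers = [list(c) for c in corrections[:k]]   # naive initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for c in corrections:
            groups[min(range(k), key=lambda j: dist(c, centers[j]))].append(c)
        for j, g in enumerate(groups):
            if g:
                centers[j] = [sum(col) / len(g) for col in zip(*g)]
    # One target per non-empty cluster: midpoint of the initial R and the
    # cluster mean (equal-weight blend, an assumed update rule).
    return [[(b + m) / 2.0 for b, m in zip(base, center)]
            for center, g in zip(centers, groups) if g]

base = [0.5, 0.5]                                   # initial (standard) R
zs = [[0.2, 0.2], [0.3, 0.3], [0.8, 0.8], [0.9, 0.9]]
targets = cluster_and_build_targets(base, zs)       # two tendency-specific R
```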
(5) The content of the processing by the acoustic processing unit 22 is not limited to the examples in the embodiments described above. For example, the acoustic processing unit 22 may also generate adjusted sound data DB approximating the acoustic characteristics of the target characteristic data R by adding specific frequency components (for example, overtone components) to each set of recording data DA. Specifically, various acoustic effects such as distortion and excitation may be applied to the recording data DA.
The disclosure is summarized below.
A mixing management device of the present invention comprises: a first acquisition unit that acquires recording characteristic data representing the acoustic characteristics of a plurality of sets of recording data, the plurality of sets of recording data representing the sounds of different performed parts of a musical piece; a second acquisition unit that acquires target characteristic data representing target acoustic characteristics of the musical piece; and a setting unit that sets a control parameter to be applied to a mixing process targeting the plurality of sets of recording data, according to the recording characteristic data and the target characteristic data, in such a way that the acoustic characteristics after the mixing process is performed approximate the acoustic characteristics represented by the target characteristic data. In this configuration, the control parameter applied to the mixing process is set according to the recording characteristic data and the target characteristic data so that the acoustic characteristics after the mixing process on the plurality of sets of recording data approximate the acoustic characteristics represented by the target characteristic data. It is therefore possible to compensate for the recording conditions of the plurality of sets of recording data and achieve the desired mix.
The mixing management device of the present invention can be realized, for example, as a communication device (for example, a server device) that communicates with terminal devices via a communication network. Specifically, a mixing management device of a preferred aspect of the present invention comprises a communication control unit that sends the control parameter set by the setting unit and the plurality of sets of recording data to a terminal device via the communication network. A mixing management device of another aspect of the present invention comprises: an acoustic processing unit that generates adjusted sound data by mixing the plurality of sets of recording data with the control parameter set by the setting unit applied; and a communication control unit that sends the adjusted sound data to a terminal device via the communication network. This aspect has the advantage that an acoustic processing unit need not be installed in each terminal device.
In a preferred aspect of the present invention, the second acquisition unit acquires the target characteristic data stored in a storage device, and the mixing management device comprises an update unit that updates the target characteristic data in the storage device according to an instruction from a listener, the listener having heard the sound obtained by mixing the plurality of sets of recording data with the control parameter set by the setting unit applied. In this aspect, the target characteristic data is updated according to instructions from users who have heard the sound obtained by the mix with the control parameter applied, so there is the advantage that the target characteristic data can be updated to content specifying the acoustic characteristics desired by the users. Furthermore, by adopting a configuration in which the mixing management device comprises an instruction reception unit that acquires, via the communication network from a plurality of terminal devices used by different listeners, a plurality of pieces of correction data corresponding to the instructions from the respective listeners, and the update unit updates the target characteristic data according to the plurality of pieces of correction data acquired by the instruction reception unit, there is the special advantage that the target characteristic data can be updated to content preferred by the majority of users.
In a preferred aspect of the present invention, the second acquisition unit selects any one of a plurality of pieces of target characteristic data representing different acoustic characteristics for a common musical piece. In this aspect, since the plurality of pieces of target characteristic data representing different acoustic characteristics are selectively applied to the setting of the control parameter for the common musical piece, a variety of mixes can be realized compared with a configuration in which a single piece of target characteristic data is fixedly applied to the setting of the control parameter. Furthermore, with a configuration in which the second acquisition unit acquires the target characteristic data selected by the user from among the plurality of pieces of target characteristic data, there is the advantage that a mix matching the user's intentions and tastes can be realized.
In a preferred aspect of the present invention, the acoustic characteristics represented by the recording characteristic data and the target characteristic data each include, for each of a plurality of performed parts of the musical piece, at least one of the average volume within the musical piece, the time variation of the volume, the sound image (pan) position, the frequency characteristic, and the reverberation characteristic. For example, by adopting a configuration in which not only the average volume and the time variation of the volume but also the sound image position and the reverberation characteristic are reflected in the control parameter, a varied and sophisticated mix can be realized.
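Two of the characteristics listed above — the average volume within the piece and the time variation of the volume — can be computed for one performed part as sketched below. The patent does not fix the analysis method; frame-wise RMS is an assumed, conventional choice, and the frame size is illustrative.

```python
import math

def volume_features(samples, frame=1024):
    """For one performed part, compute the average volume over the piece and
    its time variation as a per-frame RMS curve.
    samples: list of floats; frame: analysis window length in samples."""
    curve = []
    for start in range(0, len(samples), frame):
        block = samples[start:start + frame]
        curve.append(math.sqrt(sum(s * s for s in block) / len(block)))
    average = sum(curve) / len(curve) if curve else 0.0
    return average, curve

# A constant half-scale signal: every frame's RMS is 0.5.
avg, curve = volume_features([0.5] * 4096, frame=1024)
```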
The mixing management device of each of the aspects described above is realized not only by hardware (electronic circuits) such as a DSP (Digital Signal Processor) dedicated to managing the control parameter applied to the mixing process, but also by the cooperation of a general-purpose arithmetic processing device, such as a CPU (Central Processing Unit), with a program. The program of the present invention can be provided in the form of a computer-readable recording medium and installed on a computer. The recording medium is, for example, a non-transitory recording medium, of which optical recording media (optical discs) such as a CD-ROM are good examples, but it may also include recording media of any well-known form, such as semiconductor recording media and magnetic recording media. The program of the present invention can also be provided, for example, in the form of distribution via a communication network and installed on a computer.
The present invention is also specified as a method for managing the control parameter (a mixing management method). In the mixing management method of the present invention, recording characteristic data representing the acoustic characteristics of a plurality of sets of recording data is acquired, the plurality of sets of recording data representing the sounds of different performed parts of a musical piece; target characteristic data representing target acoustic characteristics of the musical piece is acquired; and a control parameter to be applied to a mixing process targeting the plurality of sets of recording data is set according to the recording characteristic data and the target characteristic data, in such a way that the acoustic characteristics after the mixing process is performed approximate the acoustic characteristics represented by the target characteristic data.
The mixing management method of the present invention comprises: a first acquisition step of acquiring recording characteristic data representing the acoustic characteristics of a plurality of sets of recording data, the plurality of sets of recording data representing the sounds of different performed parts of a musical piece; a second acquisition step of acquiring target characteristic data representing target acoustic characteristics of the musical piece; and a setting step of setting a control parameter to be applied to a mixing process targeting the plurality of sets of recording data, according to the recording characteristic data and the target characteristic data, in such a way that the acoustic characteristics after the mixing process is performed approximate the acoustic characteristics represented by the target characteristic data.
For example, the mixing management method further comprises a communication control step of sending the control parameter set in the setting step and the plurality of sets of recording data to a terminal device via a communication network.
For example, the target characteristic data stored in a storage device is acquired in the second acquisition step, and the mixing management method further comprises an update step of updating the target characteristic data in the storage device according to an instruction from a listener, the listener having heard the sound obtained by mixing the plurality of sets of recording data with the control parameter set in the setting step applied.
For example, the mixing management method further comprises an instruction reception step of acquiring, via the communication network from a plurality of terminal devices used by different listeners, a plurality of pieces of correction data corresponding to the instructions from the respective listeners; and in the update step, the target characteristic data is updated according to the plurality of pieces of correction data acquired in the instruction reception step.
For example, in the second acquisition step, any one of a plurality of pieces of target characteristic data representing different acoustic characteristics is selected for a common musical piece.
For example, the acoustic characteristics represented by the recording characteristic data and the target characteristic data each include, for each of a plurality of performed parts of the musical piece, at least one of the average volume within the musical piece, the time variation of the volume, the sound image position, the frequency characteristic, and the reverberation characteristic.
The present invention has been described in detail with reference to specific embodiments, but it is apparent to those skilled in the art that various changes and corrections can be applied without departing from the spirit and scope of the present invention.
This application is based on the Japanese patent application filed on July 2, 2013 (Japanese Patent Application No. 2013-139212), the content of which is incorporated herein by reference.
Industrial Applicability
According to the mixing management device and mixing management method of the present invention, differences in recording conditions can be compensated and the desired mix can be achieved.

Claims (12)

1. A mixing management device comprising:
a first acquisition unit that acquires recording characteristic data representing the acoustic characteristics of a plurality of sets of recording data, said plurality of sets of recording data representing the sounds of different performed parts of a musical piece;
a second acquisition unit that acquires target characteristic data representing target acoustic characteristics of said musical piece; and
a setting unit that sets a control parameter to be applied to a mixing process targeting said plurality of sets of recording data, according to said recording characteristic data and said target characteristic data, in such a way that the acoustic characteristics after the mixing process is performed approximate the acoustic characteristics represented by said target characteristic data.
2. The mixing management device according to claim 1, wherein
the mixing management device further comprises a communication control unit that sends the control parameter set by said setting unit and said plurality of sets of recording data to a terminal device via a communication network.
3. The mixing management device according to claim 1 or 2, wherein
said second acquisition unit acquires the target characteristic data stored in a storage device, and
the mixing management device further comprises an update unit that updates the target characteristic data in said storage device according to an instruction from a listener, said listener having heard the sound obtained by mixing said plurality of sets of recording data with the control parameter set by said setting unit applied.
4. The mixing management device according to claim 3, wherein
the mixing management device comprises an instruction reception unit that acquires, via a communication network from a plurality of terminal devices used by different listeners, a plurality of pieces of correction data corresponding to the instructions from the respective listeners, and
said update unit updates said target characteristic data according to the plurality of pieces of correction data acquired by said instruction reception unit.
5. The mixing management device according to any one of claims 1 to 4, wherein
said second acquisition unit selects any one of a plurality of pieces of target characteristic data representing different acoustic characteristics for a common musical piece.
6. The mixing management device according to any one of claims 1 to 5, wherein
the acoustic characteristics represented by said recording characteristic data and said target characteristic data each include, for each of a plurality of performed parts of the musical piece, at least one of the average volume within the musical piece, the time variation of the volume, the sound image position, the frequency characteristic, and the reverberation characteristic.
7. A mixing management method comprising:
a first acquisition step of acquiring recording characteristic data representing the acoustic characteristics of a plurality of sets of recording data, said plurality of sets of recording data representing the sounds of different performed parts of a musical piece;
a second acquisition step of acquiring target characteristic data representing target acoustic characteristics of said musical piece; and
a setting step of setting a control parameter to be applied to a mixing process targeting said plurality of sets of recording data, according to said recording characteristic data and said target characteristic data, in such a way that the acoustic characteristics after the mixing process is performed approximate the acoustic characteristics represented by said target characteristic data.
8. mixed management method according to claim 7, wherein,
Described mixed management method also possesses the controling parameters set in described setting procedure and describedly multiplely includes data to send to terminal installation Control on Communication step via communication network.
9. the mixed management method according to claim 7 or 8, wherein,
The target characteristic data being stored in storage device is obtained in described second obtaining step,
Described mixed management method also possesses the step of updating that upgrades the target characteristic data of described storage device according to the instruction from listener, and described listener is the described controling parameters that obtains in the described setting procedure of application and to described multiple listener including the sound equipment that data mix.
10. mixed management method according to claim 9, wherein,
Described mixed management method also possesses the instruction receiving step that the multiple terminal installations used from different listeners via communication network obtain the multiple correction data corresponding with the instruction from each listener,
In described step of updating, according to the multiple correction data obtained in described instruction receiving step, described target characteristic data is upgraded.
11. mixed management methods according to any one of claim 7 ~ 10, wherein,
In described second obtaining step, common melody is selected represent any one in multiple target characteristic data of different acoustic characteristic.
12. mixed management methods according to any one of claim 7 ~ 11, wherein,
Described include acoustic characteristic that performance data and described target characteristic data represent respectively for each of multiple performance parts of melody play part comprise in average volume in melody, the time variations of volume, audio-video position, frequency characteristic and reverberation characteristic one of at least.
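The setting step of claim 7 can be illustrated with a minimal sketch: choose a per-part gain (the control parameter) so that each part's average volume, one of the characteristics listed in claim 12, moves toward the target volume after mixing. The patent does not disclose a concrete algorithm or data format; the function name, dictionary shapes, and dB convention below are all illustrative assumptions.

```python
# Illustrative sketch only: the claims do not prescribe this algorithm.
# recording_characteristics corresponds to the recording characteristic data,
# target_characteristics to the target characteristic data, and the returned
# gains to the control parameter applied in the mixing process.

def set_control_parameters(recording_characteristics, target_characteristics):
    """Map each performance part's measured average volume to a gain.

    recording_characteristics: {part_name: measured average volume (dB)}
    target_characteristics:    {part_name: target average volume (dB)}
    Returns {part_name: gain (dB)} to apply to that part during mixing.
    """
    gains = {}
    for part, measured_db in recording_characteristics.items():
        target_db = target_characteristics[part]
        # Applying this gain shifts the part's measured level onto the target,
        # so the post-mix characteristic approximates the target characteristic.
        gains[part] = target_db - measured_db
    return gains


recorded = {"vocal": -18.0, "guitar": -12.0, "drums": -9.0}   # measured (dB)
target   = {"vocal": -10.0, "guitar": -14.0, "drums": -12.0}  # target (dB)
print(set_control_parameters(recorded, target))
# → {'vocal': 8.0, 'guitar': -2.0, 'drums': -3.0}
```

The same subtraction-toward-target idea could in principle be repeated for the other characteristics claim 12 enumerates (pan position, frequency response, reverberation), with the gain replaced by the corresponding mixing parameter.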
CN201480031354.0A 2013-07-02 2014-07-02 Mixing management device and mixing management method Pending CN105264913A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2013139212A JP6201460B2 (en) 2013-07-02 2013-07-02 Mixing management device
JP2013-139212 2013-07-02
PCT/JP2014/067672 WO2015002238A1 (en) 2013-07-02 2014-07-02 Mixing management device and mixing management method

Publications (1)

Publication Number Publication Date
CN105264913A true CN105264913A (en) 2016-01-20

Family

ID=52143810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201480031354.0A Pending CN105264913A (en) 2013-07-02 2014-07-02 Mixing management device and mixing management method

Country Status (4)

Country Link
JP (1) JP6201460B2 (en)
KR (1) KR20150135517A (en)
CN (1) CN105264913A (en)
WO (1) WO2015002238A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111656434A (en) * 2018-02-14 2020-09-11 Yamaha Corp Sound parameter adjusting device, sound parameter adjusting method, and sound parameter adjusting program
CN112912951A (en) * 2018-09-03 2021-06-04 Yamaha Corp Information processing device for data representing motion

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
JP6657713B2 (en) 2015-09-29 2020-03-04 Yamaha Corp Sound processing device and sound processing method
JP6696140B2 (en) * 2015-09-30 2020-05-20 Yamaha Corp Sound processor
CN110867174A (en) * 2018-08-28 2020-03-06 努音有限公司 Automatic sound mixing device
US20220012007A1 (en) * 2020-07-09 2022-01-13 Sony Interactive Entertainment LLC Multitrack container for sound effect rendering
CN112542183B (en) * 2020-12-09 2022-03-18 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Audio data processing method, device, equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101048935A (en) * 2004-10-26 2007-10-03 Dolby Laboratories Licensing Corp Calculating and adjusting the perceived loudness and/or the perceived spectral balance of an audio signal
JP4079260B2 (en) * 2002-12-24 2008-04-23 Japan Science and Technology Agency Music mixing apparatus, method and program
CN101421781A (en) * 2006-04-04 2009-04-29 Dolby Laboratories Licensing Corp Calculating and adjusting the perceived loudness and/or the perceived spectral balance of an audio signal
JP2011075652A (en) * 2009-09-29 2011-04-14 Nec Corp Ensemble system, ensemble device and ensemble method
EP2485213A1 (en) * 2011-02-03 2012-08-08 Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung e.V. Semantic audio track mixer

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP3263546B2 (en) * 1994-10-14 2002-03-04 Sanyo Electric Co Ltd Sound reproduction device
JPH08146951A (en) * 1994-11-25 1996-06-07 Roland Corp Automatic playing device and playing information converting device
JP3900188B2 (en) * 1999-08-09 2007-04-04 Yamaha Corp Performance data creation device
JP5287616B2 (en) * 2009-09-04 2013-09-11 Yamaha Corp Sound processing apparatus and program


Cited By (4)

Publication number Priority date Publication date Assignee Title
CN111656434A (en) * 2018-02-14 2020-09-11 Yamaha Corp Sound parameter adjusting device, sound parameter adjusting method, and sound parameter adjusting program
CN111656434B (en) * 2018-02-14 2023-08-04 Yamaha Corp Sound parameter adjustment device, sound parameter adjustment method, and recording medium
CN112912951A (en) * 2018-09-03 2021-06-04 Yamaha Corp Information processing device for data representing motion
CN112912951B (en) * 2018-09-03 2024-03-29 Yamaha Corp Information processing device for data representing operation

Also Published As

Publication number Publication date
JP2015012592A (en) 2015-01-19
WO2015002238A1 (en) 2015-01-08
JP6201460B2 (en) 2017-09-27
KR20150135517A (en) 2015-12-02

Similar Documents

Publication Publication Date Title
CN105264913A (en) Mixing management device and mixing management method
CN1248135C (en) Network based music playing/song accompanying service system and method
CN108269578B (en) Method and apparatus for handling information
JP2011516907A (en) Music learning and mixing system
JP5200434B2 (en) Sound setting support device
WO2012021799A2 (en) Browser-based song creation
KR100687683B1 (en) Apparatus and method for generating performance control data and storage medium for storing program for executing the method therein
KR20180012397A (en) Management system and method for digital sound source, device and method of playing digital sound source
CN113821189B (en) Audio playing method, device, terminal equipment and storage medium
KR100731232B1 (en) Musical data editing and reproduction apparatus, and portable information terminal therefor
CN111883090A (en) Method and device for making audio file based on mobile terminal
JP6543899B2 (en) Electronic music apparatus and program
WO2024066790A1 (en) Audio processing method and apparatus, and electronic device
US20140281970A1 (en) Methods and apparatus for modifying audio information
JP5731661B2 (en) Recording apparatus, recording method, computer program for recording control, and reproducing apparatus, reproducing method, and computer program for reproducing control
US20240135909A1 (en) Information processing device, information processing method, and non-transitory computer readable recording medium
JP5551983B2 (en) Karaoke performance control system
CN105122360A (en) Device and program for processing separating data
JP3861872B2 (en) Performance control data conversion device and program
KR20180012398A (en) Management system and method for digital sound source, device and method of playing digital sound source
KR101656081B1 (en) System and method for music production and sharing
KR20220112005A Apparatus and method for generating adaptive music based on user's consumption information and context information
JP3925358B2 (en) Performance information editing apparatus and program for realizing performance information editing method
JP4792819B2 (en) Remote editing method and remote editing system
JP2015179116A (en) Server device, karaoke communication terminal device, program and data providing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160120
