WO2015002238A1 - ミキシング管理装置及びミキシング管理方法 - Google Patents
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/46—Volume control
Definitions
- the present invention relates to mixing of a plurality of acoustic data representing acoustic waveforms, and more particularly to setting of parameters applied to mixing.
- Patent Document 1 discloses a technique in which a template that reflects a tendency of mixing by a mixing engineer is prepared in advance and applied to mixing of a plurality of acoustic data.
- an object of the present invention is to realize a desired mixing by compensating for a difference in recording conditions.
- The mixing management device of the present invention includes a first acquisition unit that acquires recording characteristic data representing the acoustic characteristics of a plurality of recorded data representing the sounds of different performance parts of a musical piece, a second acquisition unit that acquires target characteristic data representing the target acoustic characteristics of the musical piece, and a setting unit that sets control parameters applied to the mixing in accordance with the recording characteristic data and the target characteristic data, so that the acoustic characteristics after the mixing is performed on the plurality of recorded data approximate the acoustic characteristics represented by the target characteristic data.
- The mixing management method of the present invention includes a first acquisition step of acquiring recording characteristic data representing the acoustic characteristics of a plurality of recorded data representing the sounds of different performance parts of a musical piece, a second acquisition step of acquiring target characteristic data representing the target acoustic characteristics of the musical piece, and a setting step of setting control parameters applied to the mixing in accordance with the recording characteristic data and the target characteristic data.
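As a rough illustration of the claimed method, the two acquisition steps and the setting step can be sketched as follows. This is a hypothetical sketch: the function names, the scalar "characteristic" (average absolute amplitude), and the per-part gain as the control parameter are all simplifying assumptions, not details from the patent.

```python
# Hypothetical sketch of the claimed method; names and data shapes are
# illustrative, not taken from the patent.

def acquire_recording_characteristics(recorded_data):
    """First acquisition step: one characteristic value per performance part.
    Here the 'characteristic' is simply the average absolute amplitude."""
    return [sum(abs(s) for s in track) / len(track) for track in recorded_data]

def set_control_parameters(recording_q, target_r):
    """Setting step: per-part gains that move the recorded characteristics
    toward the target characteristics."""
    return [r / q if q else 1.0 for q, r in zip(recording_q, target_r)]

# Two performance parts recorded at different levels.
recorded = [[0.1, -0.1, 0.1, -0.1], [0.4, -0.4, 0.4, -0.4]]
q = acquire_recording_characteristics(recorded)   # recording characteristic data Q
r = [0.2, 0.2]                                    # second acquisition step: target data R
x = set_control_parameters(q, r)                  # control parameters X (gains)
```

Applying the gains in `x` during mixing would bring both parts to the common target level, which is the compensation for differing recording conditions that the invention describes.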
- FIG. 1 is a block diagram of a sound processing system according to a first embodiment of the present invention. The subsequent figures are an explanatory diagram of the mixing of a plurality of recorded data, an explanatory diagram of various data, a block diagram of the functional configuration of the sound processing system, and an explanatory diagram of operation.
- FIG. 1 is a block diagram of a sound processing system 100 according to the first embodiment of the present invention.
- the sound processing system 100 is a communication system including a plurality of terminal devices 12 and a mixing management device 14.
- Each terminal device 12 is a communication terminal such as a mobile phone, a smart phone, or a tablet, and communicates with the mixing management device 14 via a communication network 16 such as a mobile communication network or the Internet.
- each terminal device 12 includes a control device 121, a storage device 122, a communication device 123, a display device 124, an input device 125, and a sound emitting device 126. It is realized by a computer system provided.
- the control device 121 is an arithmetic processing device that executes various types of control processing and arithmetic processing by executing a program stored in the storage device 122.
- the storage device 122 (for example, a semiconductor recording medium) stores a program executed by the control device 121 and various data used by the control device 121.
- the communication device 123 communicates with the mixing management device 14 via the communication network 16.
- The communication between the terminal device 12 and the communication network 16 is typically wireless. However, when a stationary information processing device is used as the terminal device 12, for example, the connection between the terminal device 12 and the communication network 16 can also be wired.
- The display device 124 (for example, a liquid crystal display panel) displays images as instructed by the control device 121.
- the input device 125 is an operating device that accepts an instruction from the user to the terminal device 12 and includes, for example, a plurality of operating elements operated by the user.
- a touch panel configured integrally with the display device 124 may be employed as the input device 125.
- the control device 121 functions as the acoustic processing unit 22 in FIG. 2 by executing a program stored in the storage device 122.
- the acoustic processing unit 22 generates adjusted acoustic data DB by mixing a plurality of recorded data DA corresponding to different performance parts of the music.
- Each recorded data DA is acoustic data representing a time waveform of a performance sound (sound or music) recorded in advance for a specific performance part of the music.
- The adjusted acoustic data DB is acoustic data (for example, two-channel left and right acoustic data) representing the time waveform of the performance sound of a musical piece composed of a plurality of performance parts.
- The acoustic processing unit 22 performs acoustic processing individually on each of the plurality of recorded data DA of the musical piece and generates the adjusted acoustic data DB by summing the results.
- The acoustic processing applied to each recorded data DA is any of various signal processes that change acoustic characteristics: for example, amplification processing that increases or decreases the average volume, characteristic adjustment processing that changes the frequency characteristics of the sound (for example, the volume of each band), and delay processing that delays the sound on the time axis.
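The per-track processing and summation described above can be sketched as follows. This is an illustrative sketch only: it shows amplification and delay and omits the equalizer stage; the function names and sample-based delay are assumptions.

```python
# Illustrative sketch of the per-track acoustic processing described above
# (gain and sample delay only; the band-wise equalizer stage is omitted).

def process_track(track, gain=1.0, delay=0):
    """Apply amplification and a delay (in samples) to one recorded track."""
    return [0.0] * delay + [s * gain for s in track]

def mix(tracks, params):
    """Process each track with its own parameters, then sum sample-wise
    to obtain the adjusted acoustic data."""
    processed = [process_track(t, **p) for t, p in zip(tracks, params)]
    length = max(len(p) for p in processed)
    padded = [p + [0.0] * (length - len(p)) for p in processed]
    return [sum(col) for col in zip(*padded)]

tracks = [[1.0, 1.0], [1.0, 1.0]]
out = mix(tracks, [{"gain": 0.5}, {"gain": 0.5, "delay": 1}])
# out: [0.5, 1.0, 0.5]
```

Each entry of `params` plays the role of the control parameter X for one track; the sum corresponds to generating the adjusted acoustic data DB.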
- A control parameter X is applied to the mixing of the plurality of recorded data DA by the acoustic processing unit 22.
- the sound emitting device 126 reproduces sound according to the adjusted sound data DB generated by the sound processing unit 22.
- Illustration of the D/A converter that converts the adjusted acoustic data DB from digital to analog is omitted for convenience.
- the mixing management device 14 in FIG. 1 manages mixing of a plurality of recorded data DA by the sound processing unit 22 of each terminal device 12.
- The mixing management device 14 of the first embodiment is a server device (typically a web server) that sets a suitable control parameter X to be applied to the mixing of the plurality of recorded data DA, and includes a control device 142, a storage device 144, and a communication device 146.
- the mixing management apparatus 14 can be realized by a plurality of apparatuses configured separately from each other (for example, a plurality of server apparatuses communicating with each other via the communication network 16).
- the control device 142 is an arithmetic processing device that executes various types of control processing and arithmetic processing by executing a program stored in the storage device 144.
- the communication device 146 communicates with each terminal device 12 via the communication network 16.
- the storage device 144 stores a program executed by the control device 142 and various data used by the control device 142.
- a known recording medium such as a semiconductor recording medium or a magnetic recording medium or a combination of a plurality of recording media may be employed as the storage device 144.
- It is also possible to install the storage device 144 in an external device (for example, a server device) separate from the mixing management device 14, in which case the mixing management device 14 writes and reads information to and from the storage device 144 of the external device via the communication network 16.
- FIG. 3 is an explanatory diagram of contents stored by the storage device 144 and operation contents of the control device 142.
- the storage device 144 of the first embodiment stores a plurality of music data M.
- Each piece of music data M is composed of attribute data MA that specifies related information such as the music title and singer name, and N pieces of recorded data DA (N is a natural number of 2 or more) corresponding to different performance parts of the musical piece.
- the N pieces of recorded data DA are acoustic data to be mixed by the terminal device 12 (acoustic processing unit 22).
- N pieces (N tracks) of recorded data DA in which performance sounds of N performance parts of the music are recorded in parallel or individually are stored in the storage device 144.
- N pieces of recorded data DA, generated by recording performance sounds (multitrack recording) in an acoustic space such as a music studio, are transmitted from the terminal device 12 to the mixing management device 14 and then stored in the storage device 144. Therefore, even when the musical piece itself is common to a plurality of music data M, the recording conditions (recording environment and equipment used) of each recorded data DA may differ for each music data M.
- the case where each piece of music is composed of N performance parts is illustrated for convenience, but the total number of performance parts may actually differ for each piece of music.
- Each target characteristic data R is data representing the acoustic characteristics of an exemplary performance of a musical piece (that is, the acoustic characteristics that are the target of the mixing), and, as illustrated in FIG. 3, includes attribute data RA that specifies information related to the musical piece and N unit data r[1] to r[N] corresponding to different performance parts of the musical piece.
- the attribute data RA specifies, for example, related information such as a music title and a music composition (structure such as an intro, A melody, B melody, and chorus).
- each unit data r [n] is individually set for each performance part, and may differ between performance parts.
- The unit data r[n] of a performance part in which voiced sound (harmonic sound) is dominant, such as a vocal part or a guitar part, specifies characteristics of the harmonic structure (for example, the envelope of the series of a fundamental component and a plurality of harmonic components) and its time variation, while the unit data r[n] of a performance part in which unvoiced sound (non-harmonic sound) is dominant, such as a drum part (rhythm part), specifies rhythm information (for example, the tempo of the musical piece and the period of each beat point).
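The structure just described (attribute data RA plus part-dependent unit data) could hypothetically be encoded as below. The field names, the example song title, and the specific values are all illustrative assumptions.

```python
# Hypothetical encoding of target characteristic data R: attribute data RA
# plus one unit-data record per performance part, whose fields differ by
# part type (harmonic vs. rhythm part), as the description suggests.

target_r = {
    "attributes": {                                   # attribute data RA
        "title": "Example Song",
        "structure": ["intro", "A", "B", "chorus"],
    },
    "units": [                                        # unit data r[1]..r[N]
        {"part": "vocal",                             # voiced (harmonic) part
         "harmonic_envelope": [1.0, 0.6, 0.3, 0.1]},
        {"part": "drums",                             # unvoiced (rhythm) part
         "tempo_bpm": 120, "beat_period_s": 0.5},
    ],
}
```

Note that the two unit-data records carry different fields, matching the statement that the type of characteristic may differ per performance part.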
- The target characteristic data R of the first embodiment is generated in advance by analyzing existing acoustic data (hereinafter referred to as "target data") and stored in the storage device 144. For example, the target characteristic data R is generated by analyzing, as target data (that is, acoustic data of an exemplary performance), existing acoustic data recorded on a recording medium such as a music CD, or MP3-format acoustic data for distribution to each terminal device 12. It is also possible for a producer such as an acoustic engineer to generate the target characteristic data R manually.
- FIG. 4 is a functional configuration diagram of the mixing management device 14 in the first embodiment.
- The control device 142 of the mixing management device 14 executes the program stored in the storage device 144 to realize a plurality of functions (a first acquisition unit 32, a second acquisition unit 34, a setting unit 36, and a communication control unit 42) for setting and using the control parameter X applied to the mixing of the N pieces of recorded data DA of a specific musical piece (hereinafter referred to as the "target musical piece").
- a configuration in which each function of the control device 142 is distributed over a plurality of integrated circuits, or a configuration in which a dedicated electronic circuit (for example, DSP) realizes a part of the function of the control device 142 may be employed.
- The recording characteristic data Q representing the acoustic characteristics of each recorded data DA may differ for each music data M even when the musical piece itself is common.
- the recording characteristic data Q includes N unit data q [1] to q [N] corresponding to different performance parts (tracks) of the target music.
- Each unit data q [n] represents the acoustic characteristics of the performance sound of the nth performance part among the N performance parts of the target music piece.
- Acoustic characteristics such as the average volume in the musical piece, the time variation of the volume, the sound image position, the frequency characteristics, and the reverberation characteristics are specified by the unit data q[n] of the recording characteristic data Q.
- the type of acoustic characteristic specified by each unit data q [n] can be different for each performance part.
- The first acquisition unit 32 of the first embodiment generates the recording characteristic data Q by analyzing the N pieces of recorded data DA of the music data M of the target musical piece stored in the storage device 144. Specifically, the first acquisition unit 32 determines the performance part of each of the N pieces of recorded data DA of the target musical piece and generates the unit data q[n] (recording characteristic data Q) by analyzing the recorded data DA of each performance part.
- For the determination of the performance parts, reference data defining the time series of notes (that is, the melody) of each performance part of the musical piece is preferably used.
- the reference data is, for example, time series data in the MIDI (Musical Instrument Digital Interface) format used for karaoke performance.
- The first acquisition unit 32 analyzes the time variation of the pitch represented by each recorded data DA and determines, as the performance part of that recorded data DA, the performance part of the reference data whose time series of notes is most similar to the time variation of the pitch of the recorded data DA.
- A configuration that determines the performance part according to the range (pitch distribution) represented by the recorded data DA, or a configuration that discriminates the performance part according to the presence or absence of a harmonic structure in the sound represented by the recorded data DA (for example, a configuration that determines a recorded data DA having no harmonic structure to be the drum part), may also be employed.
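A minimal sketch of the range-based and harmonic-structure-based discrimination just mentioned is given below. The pitch values (MIDI note numbers), the threshold, and the part labels are illustrative assumptions, not details from the patent.

```python
# Minimal sketch of part discrimination: by absence of harmonic structure
# (no detectable pitches -> drum part) or by pitch range. The threshold of
# MIDI note 48 for the low register is an arbitrary illustrative choice.

def discriminate_part(pitches):
    """Guess the performance part of one recorded track from the pitches
    detected in it; a track with no detectable pitch (no harmonic
    structure) is treated as the drum part."""
    if not pitches:
        return "drums"
    if max(pitches) <= 48:   # only low-register pitches: assume bass
        return "bass"
    return "vocal_or_guitar"
```

In the patent's configuration, a melody-matching approach against reference data would refine this further; the sketch only shows the coarse fallback heuristics.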
- The second acquisition unit 34 of the first embodiment acquires the target characteristic data R by selecting the one target characteristic data R corresponding to the target musical piece from the plurality of target characteristic data R stored in the storage device 144.
- The setting unit 36 sets the control parameter X of the target musical piece according to the recording characteristic data Q acquired by the first acquisition unit 32 and the target characteristic data R acquired by the second acquisition unit 34. Specifically, the setting unit 36 sets the control parameter X according to the recording characteristic data Q and the target characteristic data R so that the acoustic characteristics after the mixing is performed on the N pieces of recorded data DA of the target musical piece approximate (ideally, match) the acoustic characteristics represented by the target characteristic data R. For example, an optimization process that sequentially updates a provisional control parameter X so that the acoustic characteristics obtained when the N pieces of recorded data DA are mixed with the provisional control parameter X applied approximate the acoustic characteristics of the target characteristic data R (that is, the difference in acoustic characteristics is minimized) is preferably used for setting the control parameter X by the setting unit 36.
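The sequential-update optimization described above can be sketched for a single track and a single scalar characteristic (mean level). The step size, iteration count, and gradient-style update rule are all assumptions for illustration; the patent does not specify the optimization algorithm.

```python
# Sketch of the optimization described above: the provisional gain X is
# updated until the post-mixing characteristic (here, mean absolute level)
# of one track approximates the target. Step size and loop count are
# illustrative assumptions.

def optimize_gain(track, target_level, steps=200, lr=0.5):
    gain = 1.0
    for _ in range(steps):
        level = sum(abs(s) for s in track) / len(track) * gain
        gain -= lr * (level - target_level)   # shrink the difference
    return gain

g = optimize_gain([0.2, -0.2, 0.2, -0.2], target_level=0.4)
# g converges toward 2.0 (the track's mean level is 0.2)
```

A real implementation would jointly optimize all parameter types (X1 to X5) over all N tracks, but the fixed-point behavior is the same idea: iterate until the characteristic difference is minimized.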
- the control parameter X can be different for each piece of music data M even when the music itself is common.
- The control parameter X of the first embodiment includes a plurality of types of parameters reflected in the mixing of the N pieces of recorded data DA: a time code X1, a volume parameter X2, a localization parameter X3, a frequency characteristic parameter X4, and a reverberation parameter X5.
- the time code X1 is data for adjusting each time point of each recorded data DA to a time point corresponding to the target data on the time axis (synchronizing corresponding musical sounds in the music on the time axis). Specifically, the time code X1 designates a delay amount (movement amount on the time axis) at each time point of the recorded data DA so that each time point of the recorded data DA corresponds to each time point of the target data.
- The setting unit 36 sets the time code X1 by, for example, comparing the rhythm information specified by the rhythm-part unit data q[n] in the recording characteristic data Q with the rhythm information specified by the rhythm-part unit data r[n] in the target characteristic data R. For the generation of the time code X1 (synchronized analysis of the recorded data DA and the target data), for example, the technique described in Japanese Patent Application Laid-Open No. 2011-053589 is preferably used.
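As a toy illustration of time-code generation, one can compute the delay that aligns each recorded track's first beat with the target's first beat. The sample offsets here are invented for illustration; the patent cites JP 2011-053589 for the actual synchronized analysis.

```python
# Minimal sketch of time-code generation: delay each recorded track so its
# first detected beat lines up with the target's first beat. Beat positions
# (in samples) are illustrative assumptions.

def time_code(recorded_beats, target_beats):
    """Delay (in samples) that aligns the first recorded beat with the
    first target beat."""
    return target_beats[0] - recorded_beats[0]

x1 = time_code(recorded_beats=[100, 600, 1100], target_beats=[150, 650, 1150])
# x1 == 50: this track should be delayed by 50 samples
```

A robust implementation would align the full beat sequences (and handle tempo differences), not just the first beat.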
- the volume parameter X2 is data for approximating (ideally matching) the volume of each recorded data DA of the music data M to the volume of each performance part of the target data.
- The setting unit 36 sets the volume parameter X2 according to the difference between the volume (the average volume in the musical piece and the time variation of the volume) specified by each unit data q[n] of the recording characteristic data Q and the volume specified by each unit data r[n] of the target characteristic data R.
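A hypothetical way to express that volume difference as a parameter is a gain in decibels, shown below. Treating X2 as a dB gain is an assumption for illustration; the patent does not specify the unit.

```python
# Hypothetical computation of a volume parameter X2 from the difference
# between recorded and target average volumes, expressed as a dB gain.
import math

def volume_parameter_db(recorded_volume, target_volume):
    """Gain in dB that brings the recorded average volume to the target."""
    return 20 * math.log10(target_volume / recorded_volume)

x2 = volume_parameter_db(0.1, 0.2)   # roughly +6 dB (a doubling of amplitude)
```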
- the localization parameter X3 is data for approximating (ideally matching) the sound image position of each recorded data DA to the sound image position of each performance part of the target data.
- the setting unit 36 responds to the difference between the sound image position specified by each unit data q [n] of the recording characteristic data Q and the sound image position specified by each unit data r [n] of the target characteristic data R. To set the localization parameter X3.
- The frequency characteristic parameter X4 is data for approximating (ideally, matching) the frequency characteristics of each recorded data DA of the music data M to the frequency characteristics of each performance part of the target data, and specifies, for example, a gain for each band (equalizer parameter) applied to the sound of each recorded data DA. Specifically, the setting unit 36 sets the frequency characteristic parameter X4 according to the difference between the frequency characteristics (for example, the envelope of the harmonic structure) specified by each unit data q[n] of the recording characteristic data Q and the frequency characteristics specified by each unit data r[n] of the target characteristic data R.
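The per-band equalizer gains could hypothetically be derived as the band-wise ratio between the target envelope and the recorded envelope, as sketched below. The three-band layout and the envelope values are illustrative assumptions.

```python
# Illustrative derivation of per-band gains (equalizer parameters) for the
# frequency characteristic parameter X4 from band-envelope differences.
# The band layout and values are assumptions, not from the patent.

def eq_gains(recorded_env, target_env):
    """One multiplicative gain per frequency band."""
    return [t / r if r else 1.0 for r, t in zip(recorded_env, target_env)]

# Recorded envelope rolls off faster than the target: boost the upper bands.
x4 = eq_gains(recorded_env=[1.0, 0.5, 0.25], target_env=[1.0, 1.0, 0.5])
# x4 == [1.0, 2.0, 2.0]
```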
- the reverberation parameter X5 is data for approximating (ideally matching) the reverberation characteristic of each recorded data DA to the reverberation characteristic of each performance part of the target data.
- the setting unit 36 sets the reverberation parameter X5 according to the difference between the reverberation characteristic designated by each unit data q [n] of the recorded characteristic data Q and the reverberation characteristic designated by each unit data r [n] of the target characteristic data R.
- the above is a specific example of the control parameter X.
- The communication control unit 42 in FIG. 4 controls communication with each terminal device 12 via the communication device 146. Specifically, the communication control unit 42 transmits, from the communication device 146 to the terminal device 12, the control parameter X set by the setting unit 36 for the target musical piece and the N pieces of recorded data DA of the music data M of the target musical piece stored in the storage device 144. That is, the control device 142 (communication control unit 42) of the first embodiment functions as an element that transmits the control parameter X and the plurality of recorded data DA to the terminal device 12.
- FIG. 5 is an explanatory diagram of the operation of the first embodiment.
- the user appropriately operates the input device 125 of the terminal device 12 to select a desired target music piece and instruct to start mixing.
- The control device 121 of the terminal device 12 transmits a processing request including the designation of the target musical piece selected by the user from the communication device 123 to the mixing management device 14 via the communication network 16 (S1).
- Upon receiving the processing request, the control device 142 (first acquisition unit 32) of the mixing management device 14 searches the storage device 144 for the music data M of the target musical piece designated by the processing request by referring to the attribute data MA (music title, etc.) of each musical piece, and generates the recording characteristic data Q by analyzing the music data M (each recorded data DA) of the target musical piece (S2). In addition, the control device 142 (second acquisition unit 34) searches for the target characteristic data R of the target musical piece by referring to the attribute data RA (music title, etc.) of each target characteristic data R, and acquires the target characteristic data R of the target musical piece from the storage device 144 (S3).
- The control device 142 (setting unit 36) generates control parameters X (X1 to X5) corresponding to the recording characteristic data Q acquired in step S2 and the target characteristic data R acquired in step S3, so that the acoustic characteristics after the mixing is performed on the N pieces of recorded data DA of the target musical piece approximate the acoustic characteristics of the target characteristic data R (S4).
- The control device 142 (communication control unit 42) transmits, from the communication device 146, the N pieces of recorded data DA included in the music data M of the target musical piece and the control parameter X set in step S4 to the terminal device 12 that is the transmission source of the processing request of step S1 (S5).
- The control device 121 (acoustic processing unit 22) of the terminal device 12 generates the adjusted acoustic data DB by mixing the N pieces of recorded data DA with the control parameter X applied, and supplies the adjusted acoustic data DB to the sound emitting device 126 to reproduce the target musical piece (S6).
- the control parameter X is set so that the acoustic characteristics when the N recorded data DA are mixed approximate the acoustic characteristics of the target characteristic data R.
- The acoustic characteristics of the reproduced sound emitted from the sound emitting device 126 in step S6 therefore approximate or match the acoustic characteristics specified by the target characteristic data R. That is, according to the first embodiment, even when the recording conditions of each recorded data DA differ for each music data M (that is, regardless of the recording conditions of the music data M), the difference in recording conditions is reduced (compensated), with the advantage that the desired mixing can be realized.
- The setting of the control parameter X according to the recording characteristic data Q and the target characteristic data R is executed centrally by the mixing management device 14, and the control parameter X is transmitted to each terminal device 12. Therefore, there is the advantage that the processing load on each terminal device 12 is reduced compared with a configuration in which the calculation and setting of the control parameter X (the setting unit 36) is executed on each terminal device 12.
- Second Embodiment A second embodiment of the present invention will be described below.
- Elements whose operations and functions are the same as in the first embodiment are denoted by the reference signs used in the description of the first embodiment, and detailed description of each is omitted as appropriate.
- FIG. 6 is a block diagram of the sound processing system 100 in the second embodiment.
- FIG. 7 is an explanatory diagram of the operation of the second embodiment.
- In the second embodiment, the same operations (S1 to S6) as in the first embodiment are executed, so that the N pieces of recorded data DA are mixed according to the control parameter X and the adjusted acoustic data DB is reproduced from the sound emitting device 126.
- A user who has listened to the reproduced sound of the target musical piece emitted from the sound emitting device 126 can instruct a correction of the acoustic characteristics of the reproduced sound by appropriately operating the input device 125 of the terminal device 12.
- The control device 121 of the terminal device 12 generates correction data Z corresponding to the instruction to correct the acoustic characteristics (S7) and, as understood from the figures, transmits the correction data Z from the communication device 123 to the mixing management device 14 via the communication network 16 (S8).
- FIG. 8 is an explanatory diagram of correction of acoustic characteristics.
- The control device 121 of the terminal device 12 displays the edited image 50 of FIG. 8 on the display device 124. The edited image 50 presents the acoustic characteristic CA before the mixing (the acoustic characteristics of the recorded data DA) and the acoustic characteristic CB after the mixing using the control parameter X (the acoustic characteristics of the adjusted acoustic data DB), for example as time variations of volume (horizontal axis: time, vertical axis: volume).
- In consideration of the result of listening to the reproduced sound of the adjusted acoustic data DB, the user can edit the post-mixing acoustic characteristic CB into a desired acoustic characteristic CZ by appropriately operating the input device 125 while comparing CB with the acoustic characteristic CA before the mixing.
- The control device 121 of the terminal device 12 generates correction data Z representing the corrected acoustic characteristic CZ instructed by the user (S7) and transmits the correction data Z from the communication device 123 to the mixing management device 14 (S8).
- the communication device 146 of the mixing management device 14 of the second embodiment receives the correction data Z transmitted from the terminal device 12.
- The communication control unit 42 acquires the correction data Z received by the communication device 146. That is, the control device 142 (communication control unit 42) of the second embodiment functions as an element (instruction receiving unit) that acquires the correction data Z according to an instruction from a user who has listened to the reproduced sound of the adjusted acoustic data DB.
- the editing of the acoustic characteristic CB (transmission of the correction data Z) in the terminal device 12 is repeated every time the adjusted acoustic data DB generated by mixing to which the control parameter X is applied is reproduced. Therefore, the communication control unit 42 sequentially acquires a plurality of correction data Z corresponding to instructions from different users from a plurality of terminal devices 12 for a common target music piece.
- the mixing management device 14 of the second embodiment has a configuration in which an update unit 44 is added to the mixing management device 14 of the first embodiment.
- the update unit 44 updates the target characteristic data R of the target music in the storage device 144 according to the correction data Z acquired by the communication control unit 42 from the terminal device 12 (S9).
- the updating unit 44 updates the target characteristic data R (each unit data r [n]) of the target music so as to approach the acoustic characteristic represented by the correction data Z.
- The updating unit 44 performs predetermined statistical processing (typically averaging) on the plurality of correction data Z acquired from each terminal device 12 by the communication control unit 42, and updates the target characteristic data R of the target musical piece using the processed correction data Z.
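The averaging update just described can be sketched for a single scalar characteristic as follows. The 50/50 blend between the current target value and the mean of the corrections is an assumed design choice; the patent only says the target is updated so as to approach the corrected characteristics.

```python
# Sketch of the statistical update described above: the target characteristic
# (here a single per-part volume value) is moved toward the average of the
# correction data Z gathered from multiple terminals. The blend ratio is an
# illustrative assumption.

def update_target(current_value, corrections):
    """Average the user corrections and move the target toward them."""
    mean_z = sum(corrections) / len(corrections)
    return 0.5 * current_value + 0.5 * mean_z   # assumed blend ratio

new_r = update_target(0.4, corrections=[0.6, 0.8, 0.7])
# new_r == 0.55: halfway between the old target 0.4 and the mean 0.7
```

Averaging before blending makes a single outlier correction from one user have limited influence, which fits the stated goal of contents suitable for a large number of users.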
- the same effect as in the first embodiment is realized.
- Because the instructions (correction data Z) from users who have listened to the reproduced sound of the adjusted acoustic data DB are reflected in the target characteristic data R stored in the storage device 144, the target characteristic data R can be updated to contents suitable for a large number of users.
- FIG. 9 is an explanatory diagram of the target characteristic data R in the third embodiment.
- In the third embodiment, as in the preceding embodiments, a plurality of target characteristic data R corresponding to different musical pieces are prepared. In addition, a plurality of target characteristic data R representing different acoustic characteristics are stored in the storage device 144 for each musical piece. That is, for a single musical piece, a plurality of target characteristic data R having different acoustic characteristics are prepared. For example, target characteristic data R representing standard acoustic characteristics and target characteristic data R representing individual acoustic characteristics, such as acoustic characteristics suited to highlighting particular musical instruments or acoustic characteristics with an emphasis on rhythm, are stored in the storage device 144 for each musical piece.
- FIG. 10 is an explanatory diagram of the operation of the third embodiment.
- When the processing request is received, the control device 142 (communication control unit 42) of the mixing management device 14 notifies the terminal device 12 of the plurality of target characteristic data R stored in the storage device 144 for the target musical piece specified in the processing request, as candidates for selection by the user (S11).
- the control device 121 of the terminal device 12 causes the display device 124 to display a list 52 in which a plurality of target characteristic data R of the target music are arranged as selection candidates by the user (S12).
- the user can select desired target characteristic data R by appropriately operating the input device 125.
- The control device 121 of the terminal device 12 notifies the mixing management device 14 of the target characteristic data R selected by the user (S14).
- When the notification of the target characteristic data R selected by the user is received, the control device 142 of the mixing management device 14 generates the recording characteristic data Q by analyzing the music data M of the target musical piece (S2), and acquires from the storage device 144, from among the plurality of target characteristic data R of the target musical piece, the target characteristic data R notified from the terminal device 12 in step S14 (S3).
- the control device 142 (setting unit 36) sets the control parameter X according to the recording characteristic data Q of the target music and the target characteristic data R selected by the user of the terminal device 12 (S4). Subsequent operations are the same as those in the first and second embodiments.
- the same effect as in the first embodiment is realized.
- Because a plurality of target characteristic data R prepared for each musical piece are selectively applied to the setting of the control parameter X, adjusted acoustic data DB having various acoustic characteristics can be generated, compared with a configuration in which a single type of target characteristic data R is fixedly applied to the setting of the control parameter X.
- In particular, because the target characteristic data R selected by the user from among the plurality of target characteristic data R is applied to the setting of the control parameter X, adjusted acoustic data DB with acoustic characteristics that match the user's intention and preference can be generated.
- the update unit 44 updates, in accordance with the correction data Z reflecting the user's editing instruction, the target characteristic data R selected by the user from the list 52 among the plurality of target characteristic data R of the target music stored in the storage device 144.
- in the foregoing embodiments, the acoustic processing system 100, in which the terminal device 12 and the mixing management device 14 communicate with each other via the communication network 16, was illustrated.
- the mixing management device 14 according to the fourth embodiment realizes the same function as the acoustic processing system 100 according to each embodiment described above by a single device.
- FIG. 12 is a block diagram of the mixing management device 14 in the fourth embodiment.
- the mixing management device 14 of the fourth embodiment is a computer system including, like the terminal device 12 of each of the above-described embodiments, the control device 121, the storage device 122, the display device 124, the input device 125, and the sound emitting device 126.
- an information processing device such as a mobile phone, a smartphone, or a personal computer is used as the mixing management device 14.
- the storage device 122 stores a program executed by the control device 121, and stores the music data M and the target characteristic data R for each piece of music.
- the control device 121 realizes each element exemplified in the above-described embodiments (the first acquisition unit 32, the second acquisition unit 34, the setting unit 36, the acoustic processing unit 22, the update unit 44, and so on) by executing a program stored in the storage device 122.
- the first acquisition unit 32 generates the recording characteristic data Q by analyzing the music data M of the target music stored in the storage device 122, and the second acquisition unit 34 acquires the target characteristic data R of the target music from the storage device 122.
- the setting unit 36 generates a control parameter X from the recording characteristic data Q generated by the first acquisition unit 32 and the target characteristic data R acquired by the second acquisition unit 34.
- the acoustic processing unit 22 applies the control parameter X set by the setting unit 36 to mix the N pieces of recorded data DA of the music data M of the target music, thereby generating the adjusted acoustic data DB, and causes the sound emitting device 126 to reproduce it.
- the update unit 44 updates the target characteristic data R of the target music in the storage device 122 in accordance with an instruction (correction data Z) from the user who has listened to the reproduced sound of the adjusted acoustic data DB.
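One way to realize the update unit 44's behavior, blending a user's correction data Z into the stored target characteristic data R, is a simple weighted update. This is only a sketch under assumed data shapes (flat dictionaries of per-part levels); the patent does not prescribe a particular update rule.

```python
def update_target(target_r: dict, correction_z: dict, weight: float = 0.25) -> dict:
    """Move each per-part target value a fraction `weight` of the way
    toward the user's corrected value; unmentioned parts stay unchanged."""
    updated = dict(target_r)
    for part, corrected in correction_z.items():
        if part in updated:
            updated[part] += weight * (corrected - updated[part])
    return updated

R = {"vocal": 0.20, "drums": 0.20}
Z = {"vocal": 0.28}          # the listener asked for louder vocals
R2 = update_target(R, Z)
```

A fractional weight keeps one listener's instruction from overwriting the stored target outright, so repeated corrections converge gradually.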
- the specific processing content of each element is as exemplified in the above-described embodiments.
- the same effect as in the first embodiment is realized.
- the configuration that updates the target characteristic data R according to the correction data Z (the update unit 44) can be omitted from the fourth embodiment.
- in each of the above embodiments, the first acquisition unit 32 generates the recording characteristic data Q by analyzing the recording data DA of the music data M; however, the recording characteristic data Q may instead be stored in advance in the storage device 144 (the storage device 122 in the fourth embodiment). In the configuration in which the recording characteristic data Q is stored in the storage device 144, the first acquisition unit 32 functions as an element that reads the recording characteristic data Q of the target music from the storage device 144. As understood from the above description, the first acquisition unit 32 is comprehensively expressed as an element that acquires the recording characteristic data Q representing the acoustic characteristics of the plurality of recording data DA, and includes both an element that generates the recording characteristic data Q by analyzing each recording data DA (each of the above-described embodiments) and an element that reads the recording characteristic data Q stored in advance in the storage device 144.
- in each of the above embodiments, the second acquisition unit 34 reads the target characteristic data R stored in the storage device 144 (the storage device 122 in the fourth embodiment); however, target data from which the target characteristic data R can be generated may also be stored in the storage device 144. In that configuration, the second acquisition unit 34 functions as an element that generates the target characteristic data R by analyzing the target data of the target music. As understood from the above description, the second acquisition unit 34 is comprehensively expressed as an element that acquires the target characteristic data R representing the target acoustic characteristics of the music, and includes both an element that reads the target characteristic data R stored in advance in the storage device 144 (each of the above-described embodiments) and an element that generates the target characteristic data R by analyzing the target data.
- in each of the above embodiments, the control parameter X and the N pieces of recorded data DA are transmitted from the mixing management device 14 to the terminal device 12, and the acoustic processing unit 22 of the terminal device 12 mixes the N pieces of recorded data DA; however, the acoustic processing unit 22 can also be installed in the mixing management device 14, as illustrated in FIG. 13.
- the sound processing unit 22 in FIG. 13 generates the adjusted sound data DB by mixing the N pieces of recorded data DA of the target music by applying the control parameter X set by the setting unit 36.
- the communication control unit 42 transmits the adjusted acoustic data DB generated by the acoustic processing unit 22 from the communication device 146 to the terminal device 12 via the communication network 16.
- in each of the above embodiments, the reproduction of the adjusted acoustic data DB and the transmission of the correction data Z are performed by a single terminal device 12; however, the device that reproduces the adjusted acoustic data DB and the device that transmits the correction data Z can also be configured separately.
- for example, a configuration is suitable in which the adjusted acoustic data DB is reproduced by an information processing apparatus such as a personal computer, while the generation and transmission of the correction data Z in response to the user's instructions are executed by a portable information processing apparatus such as a mobile phone or a smartphone.
- a large number of correction data Z reflecting the intentions and preferences of various users are transmitted from the terminal devices 12 to the mixing management device 14.
- a plurality of target characteristic data R can be generated using a plurality of correction data Z acquired from each terminal device 12.
- specifically, the plurality of correction data Z acquired from the terminal devices 12 for one piece of music are classified (clustered) into a plurality of sets according to the tendency of the users' instructions, and the initial (standard) target characteristic data R is updated according to the correction data Z within each of the plurality of sets, so that individual target characteristic data R is generated for each set (for each tendency of the correction data Z).
- for example, the update unit 44 generates the target characteristic data R1 by reflecting each correction data Z classified into the set G1 in the initial target characteristic data R, and generates the target characteristic data R2, representing acoustic characteristics different from those of the target characteristic data R1, by reflecting each correction data Z classified into the set G2 in the initial target characteristic data R.
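The grouping of correction data Z into sets such as G1 and G2, and the derivation of a per-set target value, could look like the following sketch. It is a deliberately simplified one-dimensional grouping (split by whether a correction raises or lowers the initial target); the patent does not specify a clustering algorithm.

```python
def cluster_corrections(initial_r: float, corrections: list) -> dict:
    """Split corrections into two sets by tendency relative to the initial
    target value, then average within each set to get per-set targets."""
    g1 = [z for z in corrections if z >= initial_r]   # "louder" tendency
    g2 = [z for z in corrections if z < initial_r]    # "quieter" tendency
    variants = {}
    if g1:
        variants["R1"] = sum(g1) / len(g1)
    if g2:
        variants["R2"] = sum(g2) / len(g2)
    return variants

# Four listeners corrected one part's target level of 0.20 in two tendencies.
variants = cluster_corrections(0.20, [0.30, 0.26, 0.10, 0.12])
```

Each resulting value plays the role of one set-specific target characteristic data (R1 for the G1 tendency, R2 for the G2 tendency), to be offered as selectable candidates as in the third embodiment.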
- the plurality of target characteristic data R generated for each piece of music by the above procedure are selectively applied to the setting of the control parameter X in accordance with an instruction from the user, as in the third embodiment. According to the above configuration, there is the advantage that, for example, both standard (average) target characteristic data R and individual target characteristic data R can be generated.
- it is also possible for the acoustic processing unit 22 to generate adjusted acoustic data DB that approximates the acoustic characteristics of the target characteristic data R by adding a specific frequency component (for example, a harmonic component) to each recorded data DA. Specifically, various sound effects such as distortion or an exciter can be added to the recorded data DA.
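Adding a harmonic component in this way is what an "exciter" effect does: a nonlinearity generates harmonics of the input, which are blended back in at low level. Below is a minimal illustrative sketch only (real exciters band-limit the input and filter the generated harmonics before mixing them in).

```python
import math

def excite(samples: list, amount: float = 0.2) -> list:
    """Add harmonics via a tanh waveshaper: tanh of a sinusoid contains
    odd harmonics of that sinusoid, so blending it in enriches the spectrum."""
    return [s + amount * math.tanh(4.0 * s) for s in samples]

# A short 440 Hz tone at 44.1 kHz, before and after enrichment.
tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(64)]
brightened = excite(tone)
```

Because the waveshaper output has the same sign as its input, the blended signal never shrinks a sample toward zero; the added energy appears at harmonic frequencies.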
- the mixing management device of the present invention includes a first acquisition unit that acquires recording characteristic data representing the acoustic characteristics of a plurality of recorded data representing the sounds of different performance parts of a piece of music; a second acquisition unit that acquires target characteristic data representing the target acoustic characteristics of the music; and a setting unit that sets, in accordance with the recording characteristic data and the target characteristic data, the control parameters applied to mixing so that the acoustic characteristics after mixing the plurality of recorded data approximate the acoustic characteristics represented by the target characteristic data.
- in this configuration, the control parameters applied to mixing are set according to the recording characteristic data and the target characteristic data so that the acoustic characteristics after mixing the plurality of recorded data approximate the acoustic characteristics represented by the target characteristic data. Therefore, it is possible to compensate for the recording conditions of the plurality of recorded data and realize the desired mixing.
- the mixing management device can be realized as a communication device (for example, a server device) that communicates with a terminal device via a communication network, for example.
- a mixing management apparatus includes a communication control unit that transmits control parameters set by the setting unit and a plurality of recorded data to a terminal device via a communication network.
- the mixing management device includes an acoustic processing unit that generates adjusted acoustic data by mixing the plurality of recorded data with the control parameter set by the setting unit applied, and a communication control unit that transmits the adjusted acoustic data to the terminal device via a communication network.
- the second acquisition unit acquires target characteristic data stored in a storage device, and an update unit updates the target characteristic data in the storage device in response to an instruction from a listener who has heard the sound obtained by mixing the plurality of recorded data with the control parameter acquired by the setting unit applied.
- in this configuration, the target characteristic data is updated in accordance with an instruction from the user who has listened to the sound obtained by mixing the plurality of recorded data with the control parameter applied.
- an instruction receiving unit acquires, via a communication network, a plurality of correction data according to instructions from different listeners from a plurality of terminal devices used by the respective listeners, and the update unit updates the target characteristic data according to the plurality of correction data acquired by the instruction receiving unit.
- in this configuration, the target characteristic data can be updated to content suitable for a large number of users.
- the second acquisition unit selects any one of a plurality of target characteristic data representing different acoustic characteristics for a common piece of music.
- in this configuration, since a plurality of target characteristic data representing different acoustic characteristics for a common piece of music are selectively used for setting the control parameter, more varied mixing can be realized than in a configuration in which a single type of target characteristic data is fixedly applied to the control parameter setting. Further, according to the configuration in which the second acquisition unit acquires the target characteristic data selected by the user from among the plurality of target characteristic data, there is the advantage that mixing that matches the user's intention and preference can be realized.
- the acoustic characteristics represented by each of the recording characteristic data and the target characteristic data include, for each of a plurality of performance parts of the music, at least one of the average volume within the music, the temporal change in volume, the sound image position, the frequency characteristics, and the reverberation characteristics.
- in a configuration in which the sound image position and the reverberation characteristics are reflected in the control parameters in addition to the average volume and the temporal change in volume, varied and advanced mixing can be realized.
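The characteristics listed above can be extracted per performance part with standard signal measurements. The sketch below computes two of them for one part, the average volume and the temporal change in volume; it is an assumed analysis method, since the patent does not fix how Q is computed.

```python
import math

def analyze_part(samples: list, frame: int = 4) -> dict:
    """Compute two of the listed characteristics for one performance part:
    average volume (RMS over the whole part) and temporal change in
    volume (RMS per fixed-size frame, i.e. a coarse volume envelope)."""
    def rms(xs):
        return math.sqrt(sum(x * x for x in xs) / len(xs))
    frames = [samples[i:i + frame] for i in range(0, len(samples), frame)]
    return {
        "average_volume": rms(samples),
        "volume_envelope": [rms(f) for f in frames],
    }

# A toy part whose second half is twice as loud as its first half.
q = analyze_part([0.0, 0.5, 0.0, -0.5, 0.0, 1.0, 0.0, -1.0])
```

Frequency characteristics would come from a spectrum analysis, the sound image position from inter-channel level differences, and reverberation from decay measurements; all per part, collected into the recording characteristic data Q.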
- the mixing management device is realized by dedicated hardware (an electronic circuit) such as a DSP (Digital Signal Processor) for managing the control parameters applied to mixing, or by cooperation between a general-purpose arithmetic processing device such as a CPU (Central Processing Unit) and a program.
- the program of the present invention can be provided in a form stored in a computer-readable recording medium and installed in the computer.
- the recording medium is, for example, a non-transitory recording medium; an optical recording medium (optical disc) such as a CD-ROM is a good example, but any known type of recording medium, such as a semiconductor recording medium or a magnetic recording medium, can be included.
- the program of the present invention can be provided in the form of distribution via a communication network and installed in a computer.
- the present invention is also specified as a method for managing control parameters (mixing management method).
- the mixing management method of the present invention acquires recording characteristic data representing the acoustic characteristics of a plurality of recorded data representing the sounds of different performance parts of a piece of music, acquires target characteristic data representing the target acoustic characteristics of the music, and sets, in accordance with the recording characteristic data and the target characteristic data, the control parameters applied to mixing so that the acoustic characteristics after mixing the plurality of recorded data approximate the acoustic characteristics represented by the target characteristic data.
- the mixing management method includes a first acquisition step of acquiring recording characteristic data representing the acoustic characteristics of a plurality of recorded data representing the sounds of different performance parts of a piece of music; a second acquisition step of acquiring target characteristic data representing the target acoustic characteristics of the music; and a setting step of setting, in accordance with the recording characteristic data and the target characteristic data, the control parameters applied to mixing so that the acoustic characteristics after mixing the plurality of recorded data approximate the acoustic characteristics represented by the target characteristic data.
- the mixing management method further includes a communication control step of transmitting the control parameter set in the setting step and the plurality of recorded data to a terminal device via a communication network.
- in the second acquisition step, target characteristic data stored in a storage device is acquired, and the mixing management method further includes an update step of updating the target characteristic data in the storage device in response to an instruction from a listener who has heard the sound obtained by mixing the plurality of recorded data with the control parameter acquired in the setting step applied.
- the mixing management method further includes an instruction receiving step of acquiring a plurality of correction data according to instructions from different listeners from a plurality of terminal devices used by each listener via a communication network.
- in the updating step, the target characteristic data is updated according to the plurality of correction data acquired in the instruction receiving step.
- in the second acquisition step, any one of a plurality of target characteristic data representing different acoustic characteristics for a common piece of music is selected.
- the acoustic characteristics represented by each of the recording characteristic data and the target characteristic data include, for each of a plurality of performance parts of the music, at least one of the average volume within the music, the temporal change in volume, the sound image position, the frequency characteristics, and the reverberation characteristics.
- the desired mixing can be realized by compensating for the difference in recording conditions.
- DESCRIPTION OF SYMBOLS: 100 ... acoustic processing system; 12 ... terminal device; 14 ... mixing management device; 16 ... communication network; 22 ... acoustic processing unit; 121, 142 ... control device; 122, 144 ... storage device; 123, 146 ... communication device; 124 ... display device; 125 ... input device; 126 ... sound emitting device; 32 ... first acquisition unit; 34 ... second acquisition unit; 36 ... setting unit; 42 ... communication control unit; 44 ... update unit
Abstract
Description
The mixing management method of the present invention includes a first acquisition step of acquiring recording characteristic data representing the acoustic characteristics of a plurality of recorded data representing the sounds of different performance parts of a piece of music; a second acquisition step of acquiring target characteristic data representing the target acoustic characteristics of the music; and a setting step of setting, in accordance with the recording characteristic data and the target characteristic data, the control parameters applied to mixing so that the acoustic characteristics after mixing the plurality of recorded data approximate the acoustic characteristics represented by the target characteristic data.
FIG. 1 is a block diagram of an acoustic processing system 100 according to a first embodiment of the present invention. As illustrated in FIG. 1, the acoustic processing system 100 is a communication system comprising a plurality of terminal devices 12 and a mixing management device 14. Each terminal device 12 is a communication terminal such as a mobile phone, a smartphone, or a tablet, and communicates with the mixing management device 14 via a communication network 16 such as a mobile communication network or the Internet.
A second embodiment of the present invention is described below. For elements in each of the embodiments exemplified below whose operations and functions are the same as in the first embodiment, the reference numerals used in the description of the first embodiment are reused, and detailed descriptions are omitted as appropriate.
FIG. 9 is an explanatory diagram of the target characteristic data R in the third embodiment. In the first embodiment, a plurality of target characteristic data R corresponding to different pieces of music were prepared. In the third embodiment, as illustrated in FIG. 9, a plurality of target characteristic data R representing different acoustic characteristics are stored in the storage device 144 for each piece of music. That is, for any one piece of music, a plurality of target characteristic data R with different acoustic characteristics are prepared. For example, in addition to target characteristic data R representing standard acoustic characteristics, target characteristic data R representing distinctive acoustic characteristics, such as characteristics suited to singing with instrumental accompaniment or characteristics emphasizing rhythm, are stored in the storage device 144 for each piece of music.
In each of the above embodiments, the acoustic processing system 100 composed of the terminal device 12 and the mixing management device 14, which communicate with each other via the communication network 16, was exemplified. The mixing management device 14 of the fourth embodiment realizes, as a single device, the same functions as the acoustic processing system 100 of each of the above embodiments.
Each of the above embodiments can be modified in various ways. Specific modifications are exemplified below. Two or more aspects arbitrarily selected from the following examples can be combined as appropriate.
The mixing management device of the present invention includes a first acquisition unit that acquires recording characteristic data representing the acoustic characteristics of a plurality of recorded data representing the sounds of different performance parts of a piece of music; a second acquisition unit that acquires target characteristic data representing the target acoustic characteristics of the music; and a setting unit that sets, in accordance with the recording characteristic data and the target characteristic data, the control parameters applied to mixing so that the acoustic characteristics after mixing the plurality of recorded data approximate the acoustic characteristics represented by the target characteristic data. In this configuration, the control parameters applied to mixing are set according to the recording characteristic data and the target characteristic data so that the acoustic characteristics after mixing the plurality of recorded data approximate the acoustic characteristics represented by the target characteristic data. Therefore, it is possible to compensate for the recording conditions of the plurality of recorded data and realize the desired mixing.
This application is based on Japanese Patent Application No. 2013-139212 filed on July 2, 2013, the contents of which are incorporated herein by reference.
Claims (12)
- A mixing management device comprising: a first acquisition unit that acquires recording characteristic data representing the acoustic characteristics of a plurality of recorded data representing the sounds of different performance parts of a piece of music;
a second acquisition unit that acquires target characteristic data representing the target acoustic characteristics of the music; and
a setting unit that sets, in accordance with the recording characteristic data and the target characteristic data, the control parameters applied to mixing so that the acoustic characteristics after mixing the plurality of recorded data approximate the acoustic characteristics represented by the target characteristic data. - The mixing management device of claim 1, further comprising a communication control unit that transmits the control parameters set by the setting unit and the plurality of recorded data to a terminal device via a communication network. - The mixing management device of claim 1 or claim 2, wherein the second acquisition unit acquires target characteristic data stored in a storage device,
the device further comprising an update unit that updates the target characteristic data in the storage device in response to an instruction from a listener who has heard the sound obtained by mixing the plurality of recorded data with the control parameters acquired by the setting unit applied. - The mixing management device of claim 3, comprising an instruction receiving unit that acquires, via a communication network, a plurality of correction data according to instructions from different listeners from a plurality of terminal devices used by the respective listeners,
wherein the update unit updates the target characteristic data according to the plurality of correction data acquired by the instruction receiving unit. - The mixing management device of any one of claims 1 to 4, wherein the second acquisition unit selects any one of a plurality of target characteristic data representing different acoustic characteristics for a common piece of music. - The mixing management device of any one of claims 1 to 5, wherein the acoustic characteristics represented by each of the recording characteristic data and the target characteristic data include, for each of a plurality of performance parts of the music, at least one of the average volume within the music, the temporal change in volume, the sound image position, the frequency characteristics, and the reverberation characteristics. - A mixing management method comprising: a first acquisition step of acquiring recording characteristic data representing the acoustic characteristics of a plurality of recorded data representing the sounds of different performance parts of a piece of music;
a second acquisition step of acquiring target characteristic data representing the target acoustic characteristics of the music; and
a setting step of setting, in accordance with the recording characteristic data and the target characteristic data, the control parameters applied to mixing so that the acoustic characteristics after mixing the plurality of recorded data approximate the acoustic characteristics represented by the target characteristic data. - The mixing management method of claim 7, further comprising a communication control step of transmitting the control parameters set in the setting step and the plurality of recorded data to a terminal device via a communication network. - The mixing management method of claim 7 or claim 8, wherein in the second acquisition step, target characteristic data stored in a storage device is acquired,
the method further comprising an update step of updating the target characteristic data in the storage device in response to an instruction from a listener who has heard the sound obtained by mixing the plurality of recorded data with the control parameters acquired in the setting step applied. - The mixing management method of claim 9, further comprising an instruction receiving step of acquiring, via a communication network, a plurality of correction data according to instructions from different listeners from a plurality of terminal devices used by the respective listeners,
wherein in the updating step, the target characteristic data is updated according to the plurality of correction data acquired in the instruction receiving step. - The mixing management method of any one of claims 7 to 10, wherein in the second acquisition step, any one of a plurality of target characteristic data representing different acoustic characteristics for a common piece of music is selected. - The mixing management method of any one of claims 7 to 11, wherein the acoustic characteristics represented by each of the recording characteristic data and the target characteristic data include, for each of a plurality of performance parts of the music, at least one of the average volume within the music, the temporal change in volume, the sound image position, the frequency characteristics, and the reverberation characteristics.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201480031354.0A CN105264913A (zh) | 2013-07-02 | 2014-07-02 | 混合管理装置及混合管理方法 |
KR1020157031020A KR20150135517A (ko) | 2013-07-02 | 2014-07-02 | 믹싱 관리 장치 및 믹싱 관리 방법 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-139212 | 2013-07-02 | ||
JP2013139212A JP6201460B2 (ja) | 2013-07-02 | 2013-07-02 | ミキシング管理装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015002238A1 true WO2015002238A1 (ja) | 2015-01-08 |
Family
ID=52143810
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/067672 WO2015002238A1 (ja) | 2013-07-02 | 2014-07-02 | ミキシング管理装置及びミキシング管理方法 |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP6201460B2 (ja) |
KR (1) | KR20150135517A (ja) |
CN (1) | CN105264913A (ja) |
WO (1) | WO2015002238A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110867174A (zh) * | 2018-08-28 | 2020-03-06 | 努音有限公司 | 自动混音装置 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6657713B2 (ja) | 2015-09-29 | 2020-03-04 | ヤマハ株式会社 | 音響処理装置および音響処理方法 |
JP6696140B2 (ja) * | 2015-09-30 | 2020-05-20 | ヤマハ株式会社 | 音響処理装置 |
DE112018007079B4 (de) * | 2018-02-14 | 2024-05-29 | Yamaha Corporation | Audioparameter-anpassungsvorrichtung, audioparameter-anpassungsverfahren und audioparameter-anpassungsprogramm |
JP7147384B2 (ja) * | 2018-09-03 | 2022-10-05 | ヤマハ株式会社 | 情報処理方法および情報処理装置 |
US20220012007A1 (en) * | 2020-07-09 | 2022-01-13 | Sony Interactive Entertainment LLC | Multitrack container for sound effect rendering |
CN112542183B (zh) | 2020-12-09 | 2022-03-18 | 阿波罗智联(北京)科技有限公司 | 音频数据处理的方法、装置、设备及存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08115097A (ja) * | 1994-10-14 | 1996-05-07 | Sanyo Electric Co Ltd | 音響再生装置 |
JPH08146951A (ja) * | 1994-11-25 | 1996-06-07 | Roland Corp | 自動演奏装置及び演奏情報変換装置 |
JP2004206747A (ja) * | 2002-12-24 | 2004-07-22 | Japan Science & Technology Agency | 楽曲ミキシング装置、方法およびプログラム |
JP2005173632A (ja) * | 1999-08-09 | 2005-06-30 | Yamaha Corp | 演奏データ作成装置 |
JP2011053589A (ja) * | 2009-09-04 | 2011-03-17 | Yamaha Corp | 音響処理装置およびプログラム |
WO2012104119A1 (en) * | 2011-02-03 | 2012-08-09 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Semantic audio track mixer |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2005299410B2 (en) * | 2004-10-26 | 2011-04-07 | Dolby Laboratories Licensing Corporation | Calculating and adjusting the perceived loudness and/or the perceived spectral balance of an audio signal |
CN101421781A (zh) * | 2006-04-04 | 2009-04-29 | 杜比实验室特许公司 | 音频信号的感知响度和/或感知频谱平衡的计算和调整 |
JP2011075652A (ja) * | 2009-09-29 | 2011-04-14 | Nec Corp | 合奏システム、合奏装置および合奏方法 |
- 2013
- 2013-07-02: JP application JP2013139212 filed (patent JP6201460B2; not in force, expired due to fees)
- 2014
- 2014-07-02: CN application 201480031354.0 filed (CN105264913A, pending)
- 2014-07-02: KR application 1020157031020 filed (KR20150135517A, application discontinued)
- 2014-07-02: PCT application PCT/JP2014/067672 filed (WO2015002238A1, application filing)
Also Published As
Publication number | Publication date |
---|---|
JP6201460B2 (ja) | 2017-09-27 |
CN105264913A (zh) | 2016-01-20 |
KR20150135517A (ko) | 2015-12-02 |
JP2015012592A (ja) | 2015-01-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6201460B2 (ja) | ミキシング管理装置 | |
WO2014003072A1 (ja) | オーディオ波形データを使用する自動演奏技術 | |
US8887051B2 (en) | Positioning a virtual sound capturing device in a three dimensional interface | |
US10497347B2 (en) | Singing voice edit assistant method and singing voice edit assistant device | |
d'Escrivan | Music technology | |
WO2012021799A2 (en) | Browser-based song creation | |
KR20100106598A (ko) | 오디오 플레이어들 간의 출력 볼륨의 유사성을 향상시키기 위한 시스템들 및 방법들 | |
JP6316099B2 (ja) | カラオケ装置 | |
JP5510207B2 (ja) | 楽音編集装置及びプログラム | |
JP5598722B2 (ja) | 音声再生装置、音声再生装置における再生音調整方法 | |
JP5731661B2 (ja) | 記録装置、記録方法、及び記録制御用のコンピュータプログラム、並びに再生装置、再生方法、及び再生制御用のコンピュータプログラム | |
JP2019174526A (ja) | 音楽再生システム、端末装置、音楽再生方法、及びプログラム | |
WO2014142201A1 (ja) | 分離用データ処理装置およびプログラム | |
US20240211201A1 (en) | Acoustic device, acoustic device control method, and program | |
US20140281970A1 (en) | Methods and apparatus for modifying audio information | |
US20240144901A1 (en) | Systems and Methods for Sending, Receiving and Manipulating Digital Elements | |
WO2023062865A1 (ja) | 情報処理装置および方法、並びにプログラム | |
JP2018112725A (ja) | 音楽コンテンツ送信装置、音楽コンテンツ送信プログラムおよび音楽コンテンツ送信方法 | |
JP2014071215A (ja) | 演奏装置、演奏システム、プログラム | |
JP4835433B2 (ja) | 演奏パターン再生装置及びそのコンピュータプログラム | |
JP2017073590A (ja) | 音信号処理装置用プログラム | |
WO2020208811A1 (ja) | 再生制御装置、プログラムおよび再生制御方法 | |
JP2017062340A (ja) | 出力制御装置、出力制御方法及びプログラム | |
JP6492754B2 (ja) | 楽器及び楽器システム | |
JP2023092863A (ja) | カラオケ装置 |
Legal Events
Code | Title | Description |
---|---|---|
WWE | WIPO information: entry into national phase | Ref document number: 201480031354.0; Country of ref document: CN |
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 14819517; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 20157031020; Country of ref document: KR; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | EP: PCT application non-entry in European phase | Ref document number: 14819517; Country of ref document: EP; Kind code of ref document: A1 |