WO2020066681A1 - Information processing device and method, and program - Google Patents
Information processing device and method, and program
- Publication number
- WO2020066681A1 (PCT/JP2019/036032; JP2019036032W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- metadata
- information processing
- processing device
- content
- Prior art date
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04S—STEREOPHONIC SYSTEMS
      - H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
        - H04S7/30—Control circuits for electronic adaptation of the sound field
          - H04S7/305—Electronic adaptation of stereophonic audio signals to reverberation of the listening space
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N20/00—Machine learning
        - G06N20/20—Ensemble learning
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
      - G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
        - G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04S—STEREOPHONIC SYSTEMS
      - H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04S—STEREOPHONIC SYSTEMS
      - H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
        - H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
Definitions
- The present technology relates to an information processing apparatus, an information processing method, and a program, and more particularly to an information processing apparatus, an information processing method, and a program that make it possible to create 3D Audio content easily.
- 3D Audio, which is handled by the MPEG-H 3D Audio standard, can reproduce three-dimensional sound directions, distances, and spreads, enabling more realistic audio playback than conventional stereo playback.
- In 3D Audio, the number of dimensions of the position information of an object, that is, the position information of a sound source, is higher than in stereo (3D Audio is three-dimensional, whereas stereo is two-dimensional).
- In particular, the time cost is high in the task of determining, for each object, the parameters constituting its metadata, such as the horizontal angle and the vertical angle indicating the position of the object, the distance, and the gain of the object.
- The present technology has been made in view of such a situation, and is intended to make it possible to produce 3D Audio content easily.
- The information processing apparatus according to an embodiment of the present technology includes a determination unit that determines one or more parameters constituting metadata of an object based on one or more pieces of attribute information of the object.
- The information processing method or the program according to an embodiment of the present technology includes a step of determining one or more parameters constituting metadata of an object based on one or more pieces of attribute information of the object.
- In an embodiment of the present technology, one or more parameters constituting metadata of an object are determined based on one or more pieces of attribute information of the object.
- FIG. 1 is a diagram for describing determination of metadata using a decision tree.
- FIG. 2 is a diagram for explaining distribution adjustment of metadata.
- FIG. 3 is a diagram for explaining distribution adjustment of metadata.
- FIG. 4 is a diagram illustrating a configuration example of an information processing apparatus. FIG. 5 is a flowchart explaining a metadata determination process.
- FIG. 6 is a diagram illustrating a configuration example of a computer.
- The present technology determines the metadata of each object, more specifically one or more parameters that make up the metadata, so that 3D Audio content of sufficiently high quality can be produced more easily, that is, in a shorter time.
- The present technology has the following features (F1) to (F5).
- By determining the metadata of objects from the information described below, high-quality 3D Audio content can be produced in a short time. This is expected to increase both the number of creators of high-quality 3D Audio content and the amount of such content.
- The object may be any object, such as an audio object or an image object, as long as it has parameters such as position information and gain as metadata.
- For example, the present technology can also be applied to a case where a parameter indicating the position in space of an image object such as a 3D model is used as metadata, and the metadata of the image object is determined based on attribute information indicating an attribute of the image object.
- In that case, the attribute information of the image object can be, for example, the type or the priority of the image object.
- In the following, the description continues on the assumption that the object is an audio object.
- The metadata is composed of one or more parameters (pieces of information) used in processing for sound reproduction based on the audio signal of the object, more specifically in rendering processing of the object.
- Specifically, the metadata includes, for example, a horizontal angle, a vertical angle, and a distance that constitute position information indicating the position of the object in three-dimensional space, as well as a gain for the audio signal of the object.
- Here, it is assumed that the metadata includes a total of four parameters: a horizontal angle, a vertical angle, a distance, and a gain.
- However, any number of metadata parameters may be used as long as there is at least one.
- The horizontal angle is an angle indicating the horizontal position of the object as viewed from a predetermined reference position such as the position of the user, and the vertical angle is an angle indicating the vertical position of the object as viewed from the reference position.
- The distance constituting the position information is the distance from the reference position to the object.
- In producing 3D Audio content, the metadata of an object is often determined based on information about the attributes of the object, such as instrument information, sound effect information, and priority information.
- However, the rules for determining metadata from the musical instrument information and the like differ depending on the creator of the 3D Audio content.
- The musical instrument information is information indicating what kind of object (sound source) the object is, for example, vocal ("vocal"), drums ("drums"), bass ("bass"), guitar ("guitar"), or piano ("piano"), that is, information indicating the sound source type. More specifically, the musical instrument information is information indicating the type of the object, such as a musical instrument, a voice part, or whether a voice is male or female, that is, information indicating an attribute of the object itself as a sound source.
- For example, for an object whose musical instrument information is "vocal", the horizontal angle constituting the metadata is often set to 0 degrees, and the gain tends to be set to a value larger than 1.0.
- Also, for an object whose musical instrument information is "bass", the vertical angle constituting the metadata is often set to a negative value.
- In this way, the creator of 3D Audio content determines, to some extent, the values of the parameters constituting the metadata, or the ranges those values can take, according to the musical instrument information. In such a case, it is possible to determine the metadata of an object from the instrument information.
- The sound effect information is information indicating sound effects such as effects added to the audio signal of the object, that is, effects applied to the audio signal. In other words, the sound effect information is information indicating an attribute related to the sound effect of the object.
- In the following, information indicating a reverberation effect as a sound effect, that is, information indicating reverberation characteristics, is referred to as reverberation information, and information indicating sound effects other than the reverberation effect is referred to as acoustic information.
- The reverberation information is, for example, information indicating the reverberation effect added to the audio signal of the object, that is, the reverberation characteristics of the audio signal, such as "dry", "short reverb", and "long reverb". Note that "dry" indicates that no reverberation effect has been applied to the audio signal.
- For example, depending on the reverberation information of an object, the horizontal angle constituting the metadata is often set to a value within the range of -90 degrees to 90 degrees, and the vertical angle constituting the metadata is often set to a positive value.
- In this way, the values of the metadata parameters and the ranges those values can take may be determined to some extent from the reverberation information, similarly to the instrument information. Therefore, it is possible to determine metadata using the reverberation information as well.
- The acoustic information is information indicating an acoustic effect other than reverberation that is added to the audio signal of the object, such as "natural" or distortion ("dist"). Note that "natural" indicates that no particular effect is applied to the audio signal.
- For example, for an object whose acoustic information is "natural", the horizontal angle constituting the metadata is often set to a value within the range of -90 degrees to 90 degrees, and for an object whose acoustic information is "dist", the vertical angle constituting the metadata is often set to a positive value. Therefore, it is possible to determine the metadata using the acoustic information as well.
- The priority information is information indicating the priority of the object. For example, the priority information takes a value from 0 to 7, and a larger value indicates that the object has a higher priority. Such a priority can also be said to be information indicating an attribute of the object.
- For example, for an object whose priority information value is less than 6, the horizontal angle constituting the metadata is often set to a value outside the range of -30 degrees to 30 degrees, and the vertical angle tends to be less than 0 degrees. Therefore, it is possible to determine the metadata using the priority information as well.
- Metadata is also often determined based on the channel information of the object. The channel information is information indicating the channel corresponding to the speaker to which the audio signal of the object is supplied, such as stereo L and R, or 5.1-channel (5.1ch) C, L, R, Ls, and Rs, that is, an attribute related to the channel of the object.
- In the present technology, the metadata of each object, more specifically the parameters constituting the metadata, is determined based on at least one of the instrument information, the reverberation information, the acoustic information, the priority information, and the channel information.
- Specifically, the metadata is determined using, for example, a decision tree, which is a supervised learning method.
- For the learning of the decision tree, the instrument information, reverberation information, acoustic information, priority information, and channel information of each object, collected in advance for a plurality of pieces of 3D Audio content, together with the parameter values of the metadata, are used as learning (training) data.
- FIG. 1 shows an example of a decision tree for determining the horizontal angle and the vertical angle that constitute the metadata.
- In the decision tree shown in FIG. 1, it is first determined whether the reverberation information is "long reverb". If it is determined that the reverberation information is "long reverb", the horizontal angle of the object is determined to be 0 degrees, the vertical angle is determined to be 30 degrees, and the processing of the decision tree ends.
- Otherwise, determinations based on each piece of information, such as the instrument information, reverberation information, acoustic information, priority information, and channel information, continue down to the end of the decision tree, and the horizontal angle and the vertical angle are determined according to the determination results.
- In this way, it is possible to determine the horizontal angle and the vertical angle constituting the metadata of each object from information given to each object, such as instrument information, reverberation information, acoustic information, priority information, and channel information.
- Note that the metadata determination method is not limited to a decision tree; other supervised learning methods, such as linear discrimination, a support vector machine, or a neural network, may also be used.
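- As an illustrative sketch only (not part of the original disclosure), the decision-tree-based determination could be prototyped as follows, using scikit-learn's DecisionTreeRegressor (assuming a recent scikit-learn) as a stand-in for the learned decision tree model; the attribute values and training pairs below are hypothetical.

```python
# Illustrative sketch (not from the patent text): predicting the horizontal and
# vertical angles of an object from its attribute information with a decision tree.
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeRegressor

# Hypothetical training data gathered from existing 3D Audio content:
# one row per object = [instrument, reverb, acoustic, priority, channel].
X_attr = np.array([
    ["vocal", "dry",          "natural", "7", "C"],
    ["bass",  "short reverb", "natural", "5", "L"],
    ["piano", "long reverb",  "natural", "4", "R"],
    ["drums", "short reverb", "dist",    "6", "Ls"],
], dtype=object)
# Corresponding metadata parameters [horizontal_angle, vertical_angle] in degrees.
y = np.array([[0.0, 0.0], [-30.0, -10.0], [45.0, 20.0], [60.0, 10.0]])

encoder = OneHotEncoder(handle_unknown="ignore", sparse_output=False)
X = encoder.fit_transform(X_attr)

# Stand-in for the learned decision tree model held by the decision tree
# processing unit 31 (multi-output regression: one tree predicts both angles).
model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)

# Determining metadata for a new object from its attribute information.
new_obj = np.array([["guitar", "long reverb", "natural", "3", "Rs"]], dtype=object)
horizontal_angle, vertical_angle = model.predict(encoder.transform(new_obj))[0]
```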
- The metadata of an object may also be determined based on information obtained from the audio signal of the object, such as sound pressure (sound pressure information) and pitch (pitch information). Since information such as the sound pressure and the pitch of the sound indicates the characteristics of the sound of the object, it can be said to indicate an attribute of the object.
- For example, the vertical angle of the metadata is likely to be a negative value for some objects and tends to be a positive value for others, depending on characteristics such as the sound pressure and the pitch. By additionally using such information, the metadata can be determined more accurately.
- Specifically, for example, a feature amount calculated by the method described below may be added to the input of the above-described metadata determination method, that is, to the input of the decision tree or the like.
- For example, the feature amount level(i_obj) calculated by equation (1) may be used as one of the inputs of the metadata determination method.
- Here, i_obj indicates the index of the object, and i_sample indicates the index of a sample of the audio signal. In equation (1), pcm(i_obj, i_sample) indicates the sample value of the sample whose index is i_sample in the audio signal of the object whose index is i_obj, and n_sample indicates the total number of samples of the audio signal.
- Similarly, the feature amount level_sub(i_obj, i_band) calculated by equation (2) may be used as one of the inputs of the metadata determination method.
- In equation (2), the index i_obj, the index i_sample, and n_sample are the same as in equation (1), and i_band is an index indicating a frequency band.
- For example, the audio signal of each object is divided into audio signals of three bands: 0 kHz to 2 kHz, 2 kHz to 8 kHz, and 8 kHz to 15 kHz. The audio signal of each band obtained in this way is represented as pcm_sub(i_obj, i_band, i_sample).
- In this case, the feature amounts level_sub(i_obj, 1), level_sub(i_obj, 2), and level_sub(i_obj, 3) are obtained by equation (2) and are input to the metadata determination method.
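- Since equations (1) and (2) are not reproduced in this text, the following sketch (not from the original disclosure) assumes that they compute RMS-style levels of the full-band signal and of each band-split signal; the band splitting here uses a simple FFT mask rather than the filtering of an actual implementation.

```python
# Illustrative sketch; assumption: equations (1) and (2), which are not reproduced
# in this text, compute RMS-style levels of the full-band and band-split signals.
import numpy as np

BANDS_HZ = [(0, 2000), (2000, 8000), (8000, 15000)]  # the three bands named above

def level(pcm):
    """Overall level of one object's signal, standing in for level(i_obj)."""
    n_sample = len(pcm)
    return float(np.sqrt(np.sum(np.asarray(pcm, dtype=np.float64) ** 2) / n_sample))

def level_sub(pcm, fs):
    """Per-band levels standing in for level_sub(i_obj, i_band), via an FFT split."""
    x = np.asarray(pcm, dtype=np.float64)
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    levels = []
    for lo, hi in BANDS_HZ:
        band = np.where((freqs >= lo) & (freqs < hi), spectrum, 0.0)
        pcm_sub = np.fft.irfft(band, n=len(x))  # band-limited signal pcm_sub
        levels.append(level(pcm_sub))
    return levels

# The four resulting values could be appended to the decision-tree input.
fs = 48000
pcm = np.random.default_rng(0).normal(size=fs)  # one second of placeholder audio
features = [level(pcm), *level_sub(pcm, fs)]
```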
- In addition, the metadata of an object may be determined based on information such as the number of objects in the 3D Audio content, the metadata of other objects, the object name, and the genre of the 3D Audio content composed of the objects. By adding such information to the input of the metadata determination method, the determination accuracy can be improved.
- For example, the object name often contains information that can substitute for the instrument information and the channel information, such as the name of the instrument of the object or the corresponding channel, that is, information indicating an attribute of the object.
- Information indicating the genre of the 3D Audio content, such as jazz for music, and the number of objects, which is the total number of objects constituting the 3D Audio content, are information indicating attributes of the content constituted by the objects. Therefore, information on the attributes of the content, such as the genre and the number of objects, can also be used as attribute information of the objects in determining the metadata.
- For example, if the number of objects is large, the objects are often arranged at irregular intervals in the space; conversely, if the number of objects is small, the objects are often arranged at equal intervals.
- Therefore, the number of objects constituting the 3D Audio content can be added as one of the inputs of the metadata determination method. In this case, the horizontal angle, the vertical angle, and the distance constituting the metadata are determined so that the objects are arranged at equal or irregular intervals in the space.
- Further, the metadata of another object whose metadata has already been determined may be used as an input of the metadata determination method.
- The information given for each object described above, the information obtained from the audio signal, and other information such as the number of objects may each be used alone as an input of the metadata determination method, or may be combined and used as an input of the metadata determination method.
- By the way, the metadata of an object can be determined using each piece of information described above; however, for 3D Audio content with a small number of objects, the determined metadata parameters may end up concentrated at one location. Such an example is shown in FIG. 2.
- In FIG. 2, the horizontal axis indicates the horizontal angle constituting the metadata, and the vertical axis indicates the vertical angle constituting the metadata. One circle indicates one object, and the pattern added to each circle differs according to the musical instrument information assigned to the object corresponding to that circle.
- In FIG. 2, circles C11 and C12 indicate objects to which "vocal" is given as instrument information, circles C13 and C14 indicate objects to which "bass" is given as instrument information, and circles C15 to C20 indicate objects to which "piano" is given as instrument information.
- The size of each circle indicates the magnitude of the sound pressure of the audio signal of the object; the larger the sound pressure, the larger the circle.
- It can be said that FIG. 2 shows the distribution of the parameters (metadata) of the objects in the parameter space whose axes are the horizontal angle and the vertical angle, together with the magnitude of the sound pressure of the audio signal of each object.
- In FIG. 2, circles C11 to C18 are concentrated near the center, and it can be seen that the metadata of the objects corresponding to those circles have close values.
- That is, the distribution of the metadata of the objects is concentrated at positions close to one another in the parameter space. In such a case, if rendering is performed using the determined metadata as it is, the resulting content will be of low quality, with little three-dimensional sound direction, distance, or spread.
- Therefore, in the present technology, by adjusting the distribution of the objects, that is, the distribution of the metadata of the objects, it is possible to obtain high-quality content with three-dimensional sound directions, distances, and spreads.
- In the distribution adjustment, metadata that has already been determined by an input from a creator or the like, or metadata that has been determined by prediction using a decision tree or the like, is used as input. Therefore, the distribution adjustment can be applied independently of the above-described metadata determination method. That is, the distribution of the metadata can be adjusted regardless of how the metadata was determined.
- The distribution of the metadata may be adjusted by either a manual method (hereinafter referred to as a manual adjustment method) or an automatic method (hereinafter referred to as an automatic adjustment method). Each method will be described below.
- In the manual adjustment method, addition of a predetermined value, multiplication by a predetermined value, or both addition and multiplication are performed on the parameter values of the object metadata, whereby the distribution of the metadata is adjusted.
- The value added in the addition processing of the manual adjustment method and the value multiplied in the multiplication processing may be adjusted, for example, by operating a bar or the like on a GUI (Graphical User Interface) of a 3D Audio content creation tool.
- For example, if a negative value is added to a parameter having a negative value among the parameters of the metadata and a positive value is added to a parameter having a positive value, the distribution of the metadata can be adjusted (corrected) to a distribution with more spatial spread.
- When the distribution of the metadata is adjusted only by the addition processing, adding the same value to the corresponding parameter of each object makes it possible to realize a distribution adjustment in which the objects are translated in space while maintaining their positional relationship.
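- A minimal sketch of the manual adjustment method follows (not from the original disclosure); the offset and scale values are hypothetical stand-ins for values set by the creator on the GUI.

```python
# Illustrative sketch of the manual adjustment method: the offsets and scale
# factors (hard-coded here) would come from bars on the content creation tool's GUI.
def adjust_manually(metadata, offset=(0.0, 0.0, 0.0), scale=(1.0, 1.0, 1.0)):
    """metadata: list of (horizontal_angle, vertical_angle, distance) per object."""
    return [tuple(p * s + o for p, s, o in zip(params, scale, offset))
            for params in metadata]

metadata = [(-10.0, 5.0, 1.0), (15.0, -3.0, 1.0), (0.0, 0.0, 1.0)]
# Multiplying the angles by 1.5 pushes negative values further negative and
# positive values further positive, widening the spatial spread.
spread = adjust_manually(metadata, scale=(1.5, 1.5, 1.0))
# Adding the same offset to every object translates the objects in space while
# keeping their relative positions.
shifted = adjust_manually(metadata, offset=(20.0, 0.0, 0.0))
```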
- In the automatic adjustment method, each object is regarded as a vector whose elements are the horizontal angle, the vertical angle, and the distance constituting its metadata. Hereinafter, a vector having such a horizontal angle, vertical angle, and distance as its elements is referred to as an object vector.
- First, the average of the object vectors of all objects is obtained as an object average vector.
- Next, a difference vector between the object average vector and each object vector is obtained, and a vector whose elements are the root-mean-square values of those difference vectors is obtained. That is, for each of the horizontal angle, the vertical angle, and the distance, the root-mean-square value of the differences between the value for each object and the average value is taken as an element.
- The root-mean-square value obtained in this way for each of the horizontal angle, the vertical angle, and the distance corresponds to the variance of that element, and the vector having these root-mean-square values as its elements is called an object variance vector. It can be said that the object variance vector indicates the distribution of the metadata of the plurality of objects.
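- Restated in formula form (a paraphrase of the description above, not numbered equations from the original publication): with theta_i, phi_i, and r_i denoting the horizontal angle, vertical angle, and distance of object i among N objects, the object average vector and the object variance vector are

```latex
% Object vector, object average vector, and object variance vector, paraphrasing
% the description above (these are not numbered equations from the publication).
\[
\mathbf{v}_i = (\theta_i,\ \phi_i,\ r_i), \qquad
\bar{\mathbf{v}} = \frac{1}{N}\sum_{i=1}^{N}\mathbf{v}_i
\]
\[
\mathbf{d} = \left(
\sqrt{\frac{1}{N}\sum_{i=1}^{N}(\theta_i-\bar{\theta})^{2}},\;
\sqrt{\frac{1}{N}\sum_{i=1}^{N}(\phi_i-\bar{\phi})^{2}},\;
\sqrt{\frac{1}{N}\sum_{i=1}^{N}(r_i-\bar{r})^{2}}
\right)
\]
```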
- In the automatic adjustment method, the metadata is adjusted so that the object variance vector obtained by the above calculation takes a desired value, that is, a target variance value.
- At this time, only one of the parameters constituting the metadata, such as the horizontal angle, may be adjusted, or a plurality of parameters may be adjusted; all of the parameters constituting the metadata may also be adjusted.
- The desired target value of the object variance vector may be obtained, for example, by calculating the object variance vector in advance for a plurality of pieces of 3D Audio content and taking the average of those object variance vectors.
- In addition, the metadata may be adjusted so that the object average vector becomes a target value, or the metadata may be adjusted so that both the object average vector and the object variance vector become their respective target values.
- The target values of the object average vector and the object variance vector used in the automatic adjustment method may be obtained in advance, by learning or the like, for each genre or creator of 3D Audio content and for each number of objects in the content. In this way, a distribution adjustment suited to the genre of the content, or a distribution adjustment reflecting the creator's individuality, can be realized.
- Further, the object vector may be weighted by the sound pressure of each object. That is, the object vector obtained for an object may be multiplied by a weight corresponding to the sound pressure of the audio signal of that object, and the resulting vector may be used as the final object vector.
- In this way, the sound pressure distribution can be brought to a desired value, that is, a target sound pressure distribution, and the metadata can be adjusted (corrected) to higher quality. This is because audio content with an appropriate sound pressure distribution is regarded as high-quality content.
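- The following sketch (not from the original disclosure) illustrates the automatic adjustment method with optional sound-pressure weighting, using the coefficient-vector approach (target variance divided by observed variance, applied element by element) that is described in more detail below; the target variance values are hypothetical.

```python
# Illustrative sketch of the automatic adjustment method: scale the metadata so
# that the object variance vector matches a target value (the coefficient-vector
# approach of units 32-34 described later). The target values are hypothetical.
import numpy as np

def adjust_distribution(metadata, target_variance=(45.0, 20.0, 0.5), weights=None):
    """metadata: array of shape (n_objects, 3) = (horizontal, vertical, distance)."""
    vectors = np.asarray(metadata, dtype=np.float64)               # object vectors
    if weights is not None:                                        # optional sound-pressure weighting
        vectors = vectors * np.asarray(weights, dtype=np.float64)[:, None]
    average = vectors.mean(axis=0)                                 # object average vector
    variance = np.sqrt(np.mean((vectors - average) ** 2, axis=0))  # object variance vector
    coeff = np.asarray(target_variance) / np.maximum(variance, 1e-9)  # coefficient vector
    return np.asarray(metadata, dtype=np.float64) * coeff          # element-wise application

# A target variance vector could be, e.g., the average of object variance vectors
# computed in advance over content of the same genre (values here are made up).
adjusted = adjust_distribution([[5.0, 2.0, 1.0], [-4.0, 1.0, 1.1], [3.0, -2.0, 0.9]])
```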
- A specific object may also be excluded from the target of the metadata distribution adjustment. In that case, for example, the metadata of the excluded object is not used in calculating the object average vector; alternatively, the metadata of the excluded object may still be used for calculating the object average vector.
- For example, an object whose instrument information is "vocal" is often important in the content, and the quality may be higher if the distribution of its metadata remains concentrated at one location. In such a case, the object whose musical instrument information is "vocal" may be excluded from the target of the metadata distribution adjustment.
- The object excluded from the target of the metadata distribution adjustment may be an object for which the information given to each object, such as the musical instrument information, indicates a predetermined value or the like, or an object specified by the creator or the user.
- When the distribution adjustment described above is performed, the distribution shown in FIG. 2 becomes, for example, as shown in FIG. 3. Note that in FIG. 3, portions corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof is omitted as appropriate. Also in FIG. 3, the horizontal axis indicates the horizontal angle constituting the metadata, and the vertical axis indicates the vertical angle constituting the metadata.
- The information processing apparatus to which the present technology is applied is configured, for example, as shown in FIG. 4.
- The information processing apparatus 11 illustrated in FIG. 4 includes a metadata determination unit 21 and a distribution adjustment unit 22.
- The metadata determination unit 21 predictively determines, for each object, the metadata of the object based on information on the attributes of the object supplied from the outside, that is, one or more pieces of attribute information of the object, and outputs the determined metadata.
- The number of objects whose metadata is to be determined may be one or more; here, it is assumed that metadata is determined for a plurality of objects.
- The attribute information of an object is, for example, at least one of instrument information, reverberation information, acoustic information, priority information, channel information, the number of objects, metadata of another object, an object name, and information indicating a genre.
- The metadata determination unit 21 is also supplied with an audio signal for calculating feature amounts relating to sound pressure and pitch as attribute information of the object.
- The metadata determination unit 21 includes a decision tree processing unit 31.
- The metadata determination unit 21 appropriately calculates feature amounts relating to sound pressure and pitch as attribute information of the object based on the audio signal, and inputs the calculated feature amounts and the attribute information of the object supplied from the outside to the decision tree processing unit 31.
- The attribute information input to the decision tree processing unit 31 may be one piece or a plurality of pieces.
- The decision tree processing unit 31 performs processing of determining metadata using a decision tree based on the input attribute information of the objects, and supplies the metadata of each object obtained as a result of the determination to the distribution adjustment unit 22.
- The decision tree processing unit 31 holds a decision tree (decision tree model) obtained in advance by learning.
- The parameters determined here may also include a gain; any one or more of the plurality of parameters constituting the metadata may be determined by the decision tree processing unit 31.
- The distribution adjustment unit 22 performs the above-described distribution adjustment on the metadata of the plurality of objects supplied from the decision tree processing unit 31, and supplies (outputs) the metadata after the distribution adjustment to the subsequent stage as the final metadata of each object.
- The distribution adjustment unit 22 includes an object variance vector calculation unit 32, a coefficient vector calculation unit 33, and a coefficient vector application unit 34.
- The object variance vector calculation unit 32 takes, as the object vector of each object, a vector whose elements are the horizontal angle, the vertical angle, and the distance constituting the metadata of that object supplied from the decision tree processing unit 31, and obtains the object average vector based on the object vectors of the objects. Further, the object variance vector calculation unit 32 calculates the object variance vector based on the obtained object average vector and each object vector, and supplies it to the coefficient vector calculation unit 33.
- The coefficient vector calculation unit 33 divides each element of a predetermined-value vector, which has a value determined in advance for each of the horizontal angle, the vertical angle, and the distance, by the corresponding element of the object variance vector supplied from the object variance vector calculation unit 32, thereby calculating a coefficient vector having a coefficient as an element for each of the horizontal angle, the vertical angle, and the distance, and supplies the coefficient vector to the coefficient vector application unit 34.
- The predetermined-value vector obtained in advance is the target object variance vector, and is obtained, for example, by learning for each genre or each creator.
- Specifically, for example, the target value of the object variance vector is a vector whose elements are the averages of the corresponding elements of the object variance vectors obtained for a plurality of pieces of 3D Audio content of the same genre.
- The coefficient vector application unit 34 calculates the distribution-adjusted metadata by multiplying the metadata supplied from the decision tree processing unit 31 by the coefficient vector supplied from the coefficient vector calculation unit 33 element by element, and outputs the obtained metadata to the subsequent stage.
- That is, the coefficient vector application unit 34 adjusts the distribution of the metadata by multiplying the metadata by the coefficient vector element by element. As a result, the distribution of the metadata becomes a distribution corresponding to the target object variance vector.
- In the subsequent stage, for example, rendering processing is performed based on the audio signal and the metadata of each object, or the metadata is further adjusted manually by the creator.
- Note that not only the metadata but also the attribute information of the objects, such as the musical instrument information, may be supplied from the decision tree processing unit 31 to the object variance vector calculation unit 32 and the coefficient vector application unit 34, and an object to be excluded from the distribution adjustment may be determined based on that attribute information.
- In that case, the distribution of the metadata is not adjusted for the excluded object, and the metadata determined by the decision tree processing unit 31 is output as it is as the final metadata of that object.
- In the distribution adjustment unit 22, adjustment of the object average vector may be performed, or both the object variance vector and the object average vector may be adjusted. Further, although an example in which the distribution adjustment unit 22 performs the distribution adjustment by the automatic adjustment method has been described here, the distribution adjustment unit 22 may instead perform the distribution adjustment by the manual adjustment method according to an input from the creator or the like.
- In such a case, the distribution adjustment unit 22 obtains the distribution-adjusted metadata by performing an operation based on a predetermined value specified by the creator or the like and the metadata, such as adding the predetermined value to, or multiplying by it, the metadata of the object.
- Also in this case, an object specified by the creator, or an object determined from the attribute information of the object, may be excluded from the distribution adjustment.
- Next, the metadata determination processing by the information processing apparatus 11 will be described with reference to the flowchart of FIG. 5. In step S11, the decision tree processing unit 31 determines the metadata based on the attribute information of the objects, and supplies the determination result to the object variance vector calculation unit 32 and the coefficient vector application unit 34.
- That is, the metadata determination unit 21 evaluates the above-described equations (1) and (2) based on the audio signal supplied as necessary, thereby calculating the feature amounts relating to the sound pressure and the pitch. Then, the metadata determination unit 21 inputs the calculated feature amounts, the musical instrument information supplied from the outside, and the like to the decision tree processing unit 31 as attribute information of the objects.
- The decision tree processing unit 31 performs the processing of determining the metadata using the decision tree based on the supplied attribute information of the objects. Further, the metadata determination unit 21 also supplies the attribute information of the objects to the object variance vector calculation unit 32 and the coefficient vector application unit 34 as needed.
- In step S12, the object variance vector calculation unit 32 calculates the object average vector based on the metadata of each object supplied from the decision tree processing unit 31, calculates the object variance vector from the object average vector and the object vectors, and supplies it to the coefficient vector calculation unit 33.
- In step S13, the coefficient vector calculation unit 33 calculates the coefficient vector by dividing each element of the vector having values obtained in advance for each of the horizontal angle, the vertical angle, and the distance, that is, the target object variance vector obtained in advance, by the corresponding element of the object variance vector supplied from the object variance vector calculation unit 32, and supplies the coefficient vector to the coefficient vector application unit 34.
- In step S14, the coefficient vector application unit 34 adjusts the distribution of the metadata supplied from the decision tree processing unit 31 based on the coefficient vector supplied from the coefficient vector calculation unit 33, outputs the resulting distribution-adjusted metadata, and the metadata determination process ends.
- That is, the coefficient vector application unit 34 adjusts the distribution of the metadata by multiplying the metadata by the coefficient vector element by element. At this time, a predetermined object may be excluded from the target of the metadata distribution adjustment.
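- As a hypothetical composition of the above steps (not the patent's actual implementation), steps S11 to S14 could be strung together as follows, reusing the encoder and decision-tree model from the earlier sketch and a target variance vector whose length matches the number of predicted parameters.

```python
# Illustrative end-to-end sketch of steps S11 to S14, composing hypothetical
# stand-ins for the units of FIG. 4 (not the actual implementation of the patent).
import numpy as np

def metadata_determination_process(attr_rows, encoder, model, target_variance):
    # Step S11: the decision tree processing unit 31 predicts metadata per object.
    X = encoder.transform(np.asarray(attr_rows, dtype=object))
    metadata = model.predict(X)                       # shape (n_objects, n_params)
    # Step S12: the object variance vector calculation unit 32.
    average = metadata.mean(axis=0)                   # object average vector
    variance = np.sqrt(np.mean((metadata - average) ** 2, axis=0))
    # Step S13: the coefficient vector calculation unit 33.
    coeff = np.asarray(target_variance) / np.maximum(variance, 1e-9)
    # Step S14: the coefficient vector application unit 34.
    return metadata * coeff                           # distribution-adjusted metadata
```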
- As described above, the information processing apparatus 11 determines the metadata of each object based on the attribute information of the objects and adjusts the distribution of the metadata. This eliminates the need for the creator to specify (input) the metadata of each object each time, so that high-quality 3D Audio content can be produced easily, that is, in a short time.
- Although the metadata can be determined by the method described above, the number of determination patterns, that is, the number of decision trees or the like used for determining the metadata, is preferably not one but plural. This is because it is difficult to handle a wide variety of content with a single determination pattern (decision tree or the like), and because allowing the creator to select the most suitable pattern from a plurality of determination patterns makes it possible to produce higher-quality 3D Audio content.
- Specifically, for example, the learning data is divided into a plurality of groups, and a decision tree model is learned using each group of the divided learning data, so that a plurality of decision trees can be obtained. At this time, the advantage differs depending on how the learning data is divided.
- For example, if the learning data is divided for each creator, the accuracy of determining metadata for each creator can be improved. That is, it is possible to obtain a decision tree (decision tree model) for determining metadata that more strongly reflects the characteristics of that creator.
- The characteristics of the creator are one of the most important factors in determining the quality of the content, and dividing the learning data for each creator makes it possible to increase the variety of quality among the determination patterns.
- For the creator, a determination that better reflects his or her own past characteristics can be made, and the production time can be shortened.
- Further, a general user or the like can select the decision tree of a favorite creator from among the decision trees prepared for each of a plurality of creators, and the metadata can then be determined using the selected decision tree. As a result, it is possible to obtain content reflecting the characteristics of the creator of the user's preference.
- Similarly, if the learning data is divided for each genre of content, the accuracy of the metadata determination can be improved. That is, if a decision tree is learned for each genre of content, metadata suitable for that genre can be obtained.
- The target values of the object average vector and the object variance vector used for adjusting the distribution of the metadata can likewise be determined, by learning or the like, for each genre, each creator, and each number of objects constituting the content.
- In a case where the metadata of an object changes over time, the metadata determination processing described with reference to FIG. 5 is performed for each time, and the metadata between two times may be obtained by interpolation or the like as necessary.
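- A minimal sketch of such interpolation (not from the original disclosure), assuming simple linear interpolation between the metadata determined at two times; the parameter values are placeholders.

```python
# Illustrative sketch: metadata determined at two times, with intermediate values
# obtained by linear interpolation as suggested above. The values are placeholders.
def interpolate_metadata(meta_t0, meta_t1, t0, t1, t):
    """Linearly interpolate each metadata parameter of one object at time t."""
    alpha = (t - t0) / (t1 - t0)
    return tuple(a + alpha * (b - a) for a, b in zip(meta_t0, meta_t1))

# (horizontal_angle, vertical_angle, distance, gain) at 0 s and 2 s.
meta_at_0s = (0.0, 10.0, 1.0, 1.0)
meta_at_2s = (30.0, 20.0, 1.5, 1.2)
meta_at_0_5s = interpolate_metadata(meta_at_0s, meta_at_2s, 0.0, 2.0, 0.5)
```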
- Example of computer configuration: The series of processing described above can be executed by hardware or by software.
- When the series of processing is executed by software, a program constituting the software is installed in a computer.
- Here, the computer includes a computer incorporated in dedicated hardware, a general-purpose personal computer capable of executing various functions by installing various programs, and the like.
- FIG. 6 is a block diagram illustrating a configuration example of hardware of a computer that executes the series of processes described above by a program.
- In the computer, a CPU (Central Processing Unit) 501, a ROM (Read Only Memory) 502, and a RAM (Random Access Memory) 503 are connected to one another by a bus 504.
- The input/output interface 505 is further connected to the bus 504.
- An input unit 506, an output unit 507, a recording unit 508, a communication unit 509, and a drive 510 are connected to the input / output interface 505.
- The input unit 506 includes a keyboard, a mouse, a microphone, an image sensor, and the like.
- The output unit 507 includes a display, a speaker, and the like.
- The recording unit 508 includes a hard disk, a nonvolatile memory, and the like.
- The communication unit 509 includes a network interface and the like.
- The drive 510 drives a removable recording medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- In the computer configured as described above, the CPU 501 loads the program recorded in the recording unit 508 into the RAM 503 via the input/output interface 505 and the bus 504 and executes it, whereby the series of processing described above is performed.
- The program executed by the computer (CPU 501) can be provided by being recorded on a removable recording medium 511 such as a package medium, for example. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- In the computer, the program can be installed in the recording unit 508 via the input/output interface 505 by mounting the removable recording medium 511 on the drive 510.
- Alternatively, the program can be received by the communication unit 509 via a wired or wireless transmission medium and installed in the recording unit 508.
- In addition, the program can be installed in the ROM 502 or the recording unit 508 in advance.
- The program executed by the computer may be a program in which processing is performed chronologically in the order described in this specification, or a program in which processing is performed in parallel or at a necessary timing, such as when a call is made.
- Further, the present technology can adopt a configuration of cloud computing in which one function is shared and jointly processed by a plurality of devices via a network.
- Each step described in the above-described flowchart can be executed by a single device, or can be shared and executed by a plurality of devices.
- Furthermore, in a case where one step includes a plurality of processes, the plurality of processes included in that one step may be executed by one device or may be shared and executed by a plurality of devices.
- Note that the present technology may also have the following configurations.
- An information processing apparatus comprising: a determination unit that determines one or more parameters constituting metadata of an object based on one or more pieces of attribute information of the object.
- The information processing device according to (1), wherein the parameter is position information indicating a position of the object.
- The information processing device according to any one of (1) to (3), wherein the attribute information is information indicating a type of the object.
- The information processing device according to (6), wherein the attribute information is information indicating a sound source type of the object.
- The information processing device according to (7), wherein the sound source type is information indicating a musical instrument, a voice part, or the gender of a voice.
- The information processing device according to any one of (6) to (8), wherein the attribute information is information indicating a sound effect applied to an audio signal of the object.
- The information processing device according to (9), wherein the acoustic effect is a reverberation effect.
- The information processing device according to any one of (6) to (10), wherein the attribute information is information relating to a sound pressure or a pitch of an audio signal of the object.
- The information processing device according to any one of (6) to (11), wherein the attribute information is information regarding an attribute of content constituted by the object.
- The information processing apparatus according to (12), wherein the information on the attribute of the content is a genre of the content or the number of the objects constituting the content.
- The information processing apparatus according to any one of (1) to (13), further including a distribution adjustment unit configured to adjust a distribution of the parameters of a plurality of the objects.
- The information processing device according to (14), wherein the distribution adjustment unit adjusts the distribution by adjusting a variance or an average of the parameters.
- The information processing device according to (15), wherein the distribution adjustment unit adjusts the distribution such that the variance or the average of the parameters becomes a value determined for the number of objects constituting content, a content creator, or a genre of the content.
- The information processing apparatus according to any one of (1) to (16), wherein the determination unit determines the parameter using a decision tree that receives the attribute information as input and outputs the parameter.
- The information processing device according to (17), wherein the decision tree is learned for each genre of content constituted by the object or for each content creator.
- An information processing method in which an information processing device determines one or more parameters constituting metadata of an object based on one or more pieces of attribute information of the object.
Abstract
Description
<About the present technology>
The present technology determines, for each object, the metadata, more specifically one or more parameters constituting the metadata, so that 3D Audio content of sufficiently high quality can be produced more easily, that is, in a shorter time.
Feature (F1): metadata is determined from the information given for each object
Feature (F2): metadata is determined from the audio signal of each object
Feature (F3): metadata is determined from other information
Feature (F4): the metadata is corrected so as to obtain a desired distribution
Feature (F5): there are a plurality of metadata determination patterns
First, a method of determining the metadata, more specifically the metadata parameters, from the information given for each object will be described.
Next, a method of determining the metadata from the audio signal of each object will be described.
Further, a method of determining the metadata from other information will be described.
By the way, the metadata of an object can be determined using each piece of the information described above. However, for 3D Audio content with a small number of objects (hereinafter also referred to simply as content), the determined metadata parameters may end up concentrated at one location. Such an example is shown in FIG. 2.
First, the manual adjustment method for the metadata will be described.
In the automatic adjustment method, each object is regarded as a vector indicated by the horizontal angle, the vertical angle, and the distance constituting its metadata. Hereinafter, a vector having such a horizontal angle, vertical angle, and distance as its elements is referred to as an object vector.
Next, an information processing apparatus that determines metadata by the metadata determination method described above and further adjusts the distribution of the determined metadata will be described.
Next, the operation of the information processing apparatus 11 shown in FIG. 4 will be described. That is, the metadata determination processing by the information processing apparatus 11 will be described below with reference to the flowchart of FIG. 5.
Although the metadata can be determined by the method described above, it is preferable that the number of determination patterns, that is, the number of decision trees or the like used for determining the metadata, be not one but plural. This is because it is difficult to handle a wide variety of content with a single determination pattern (decision tree or the like), and because enabling the creator to select the most suitable one from a plurality of determination patterns makes it possible to produce higher-quality 3D Audio content.
The series of processing described above can be executed by hardware or by software. When the series of processing is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware, and a general-purpose personal computer capable of executing various functions by installing various programs, for example.
(1) An information processing apparatus including a determination unit that determines one or more parameters constituting metadata of an object based on one or more pieces of attribute information of the object.
(2) The information processing apparatus according to (1), in which the parameter is position information indicating a position of the object.
(3) The information processing apparatus according to (1) or (2), in which the parameter is a gain of an audio signal of the object.
(4) The information processing apparatus according to any one of (1) to (3), in which the attribute information is information indicating a type of the object.
(5) The information processing apparatus according to any one of (1) to (4), in which the attribute information is priority information indicating a priority of the object.
(6) The information processing apparatus according to any one of (1) to (5), in which the object is an audio object.
(7) The information processing apparatus according to (6), in which the attribute information is information indicating a sound source type of the object.
(8) The information processing apparatus according to (7), in which the sound source type is information indicating a musical instrument, a voice part, or the gender of a voice.
(9) The information processing apparatus according to any one of (6) to (8), in which the attribute information is information indicating a sound effect applied to an audio signal of the object.
(10) The information processing apparatus according to (9), in which the sound effect is a reverberation effect.
(11) The information processing apparatus according to any one of (6) to (10), in which the attribute information is information relating to a sound pressure or a pitch of an audio signal of the object.
(12) The information processing apparatus according to any one of (6) to (11), in which the attribute information is information relating to an attribute of content constituted by the object.
(13) The information processing apparatus according to (12), in which the information relating to the attribute of the content is a genre of the content or the number of the objects constituting the content.
(14) The information processing apparatus according to any one of (1) to (13), further including a distribution adjustment unit that adjusts a distribution of the parameters of a plurality of the objects.
(15) The information processing apparatus according to (14), in which the distribution adjustment unit performs the distribution adjustment by adjusting a variance or an average of the parameters.
(16) The information processing apparatus according to (15), in which the distribution adjustment unit performs the distribution adjustment so that the variance or the average of the parameters becomes a value determined for the number of the objects constituting content, a content creator, or a genre of the content.
(17) The information processing apparatus according to any one of (1) to (16), in which the determination unit determines the parameter by a decision tree that takes the attribute information as input and outputs the parameter.
(18) The information processing apparatus according to (17), in which the decision tree is learned for each genre of content constituted by the object or for each content creator.
(19) An information processing method in which an information processing apparatus determines one or more parameters constituting metadata of an object based on one or more pieces of attribute information of the object.
(20) A program that causes a computer to execute processing including a step of determining one or more parameters constituting metadata of an object based on one or more pieces of attribute information of the object.
Claims (20)
1. An information processing apparatus comprising a determination unit that determines one or more parameters constituting metadata of an object based on one or more pieces of attribute information of the object.
2. The information processing apparatus according to claim 1, wherein the parameter is position information indicating a position of the object.
3. The information processing apparatus according to claim 1, wherein the parameter is a gain of an audio signal of the object.
4. The information processing apparatus according to claim 1, wherein the attribute information is information indicating a type of the object.
5. The information processing apparatus according to claim 1, wherein the attribute information is priority information indicating a priority of the object.
6. The information processing apparatus according to claim 1, wherein the object is an audio object.
7. The information processing apparatus according to claim 6, wherein the attribute information is information indicating a sound source type of the object.
8. The information processing apparatus according to claim 7, wherein the sound source type is information indicating a musical instrument, a voice part, or the gender of a voice.
9. The information processing apparatus according to claim 6, wherein the attribute information is information indicating a sound effect applied to an audio signal of the object.
10. The information processing apparatus according to claim 9, wherein the sound effect is a reverberation effect.
11. The information processing apparatus according to claim 6, wherein the attribute information is information relating to a sound pressure or a pitch of an audio signal of the object.
12. The information processing apparatus according to claim 6, wherein the attribute information is information relating to an attribute of content constituted by the object.
13. The information processing apparatus according to claim 12, wherein the information relating to the attribute of the content is a genre of the content or the number of the objects constituting the content.
14. The information processing apparatus according to claim 1, further comprising a distribution adjustment unit that adjusts a distribution of the parameters of a plurality of the objects.
15. The information processing apparatus according to claim 14, wherein the distribution adjustment unit performs the distribution adjustment by adjusting a variance or an average of the parameters.
16. The information processing apparatus according to claim 15, wherein the distribution adjustment unit performs the distribution adjustment so that the variance or the average of the parameters becomes a value determined for the number of the objects constituting content, a content creator, or a genre of the content.
17. The information processing apparatus according to claim 1, wherein the determination unit determines the parameter by a decision tree that takes the attribute information as input and outputs the parameter.
18. The information processing apparatus according to claim 17, wherein the decision tree is learned for each genre of content constituted by the object or for each content creator.
19. An information processing method in which an information processing apparatus determines one or more parameters constituting metadata of an object based on one or more pieces of attribute information of the object.
20. A program that causes a computer to execute processing comprising a step of determining one or more parameters constituting metadata of an object based on one or more pieces of attribute information of the object.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217007548A KR20210066807A (ko) | 2018-09-28 | 2019-09-13 | 정보 처리 장치 및 방법, 그리고 프로그램 |
JP2020548454A JP7363795B2 (ja) | 2018-09-28 | 2019-09-13 | 情報処理装置および方法、並びにプログラム |
CN201980061183.9A CN112740721A (zh) | 2018-09-28 | 2019-09-13 | 信息处理装置、方法和程序 |
BR112021005241-0A BR112021005241A2 (pt) | 2018-09-28 | 2019-09-13 | dispositivo, método e programa de processamento de informações |
EP19864545.9A EP3860156A4 (en) | 2018-09-28 | 2019-09-13 | DEVICE, PROCESS AND PROGRAM FOR PROCESSING INFORMATION |
US17/278,178 US11716586B2 (en) | 2018-09-28 | 2019-09-13 | Information processing device, method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018184161 | 2018-09-28 | ||
JP2018-184161 | 2018-09-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020066681A1 true WO2020066681A1 (ja) | 2020-04-02 |
Family
ID=69952679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/036032 WO2020066681A1 (ja) | 2018-09-28 | 2019-09-13 | 情報処理装置および方法、並びにプログラム |
Country Status (7)
Country | Link |
---|---|
US (1) | US11716586B2 (ja) |
EP (1) | EP3860156A4 (ja) |
JP (1) | JP7363795B2 (ja) |
KR (1) | KR20210066807A (ja) |
CN (1) | CN112740721A (ja) |
BR (1) | BR112021005241A2 (ja) |
WO (1) | WO2020066681A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023062865A1 (ja) * | 2021-10-15 | 2023-04-20 | ソニーグループ株式会社 | 情報処理装置および方法、並びにプログラム |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000047681A (ja) * | 1998-07-31 | 2000-02-18 | Toshiba Corp | 情報処理方法 |
US20100318911A1 (en) * | 2009-06-16 | 2010-12-16 | Harman International Industries, Incorporated | System for automated generation of audio/video control interfaces |
JP2014187638A (ja) * | 2013-03-25 | 2014-10-02 | Yamaha Corp | デジタルオーディオミキシング装置及びプログラム |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004194108A (ja) * | 2002-12-12 | 2004-07-08 | Sony Corp | 情報処理装置および情報処理方法、記録媒体、並びにプログラム |
CA2748301C (en) * | 2008-12-30 | 2017-06-27 | Karen Collins | Method and system for visual representation of sound |
EP2465114B1 (en) * | 2009-08-14 | 2020-04-08 | Dts Llc | System for adaptively streaming audio objects |
TW202339510A (zh) * | 2011-07-01 | 2023-10-01 | 美商杜比實驗室特許公司 | 用於適應性音頻信號的產生、譯碼與呈現之系統與方法 |
MY172402A (en) * | 2012-12-04 | 2019-11-23 | Samsung Electronics Co Ltd | Audio providing apparatus and audio providing method |
EP2784955A3 (en) | 2013-03-25 | 2015-03-18 | Yamaha Corporation | Digital audio mixing device |
US10063207B2 (en) * | 2014-02-27 | 2018-08-28 | Dts, Inc. | Object-based audio loudness management |
CN112562697A (zh) * | 2015-06-24 | 2021-03-26 | 索尼公司 | 音频处理装置和方法以及计算机可读存储介质 |
EP3145220A1 (en) * | 2015-09-21 | 2017-03-22 | Dolby Laboratories Licensing Corporation | Rendering virtual audio sources using loudspeaker map deformation |
US20170098452A1 (en) * | 2015-10-02 | 2017-04-06 | Dts, Inc. | Method and system for audio processing of dialog, music, effect and height objects |
JP6834971B2 (ja) * | 2015-10-26 | 2021-02-24 | ソニー株式会社 | 信号処理装置、信号処理方法、並びにプログラム |
US11290819B2 (en) * | 2016-01-29 | 2022-03-29 | Dolby Laboratories Licensing Corporation | Distributed amplification and control system for immersive audio multi-channel amplifier |
EP3301951A1 (en) * | 2016-09-30 | 2018-04-04 | Koninklijke KPN N.V. | Audio object processing based on spatial listener information |
JP7160032B2 (ja) * | 2017-04-26 | 2022-10-25 | ソニーグループ株式会社 | 信号処理装置および方法、並びにプログラム |
US10735882B2 (en) * | 2018-05-31 | 2020-08-04 | At&T Intellectual Property I, L.P. | Method of audio-assisted field of view prediction for spherical video streaming |
US11545166B2 (en) * | 2019-07-02 | 2023-01-03 | Dolby International Ab | Using metadata to aggregate signal processing operations |
- 2019-09-13 US US17/278,178 patent/US11716586B2/en active Active
- 2019-09-13 EP EP19864545.9A patent/EP3860156A4/en not_active Withdrawn
- 2019-09-13 WO PCT/JP2019/036032 patent/WO2020066681A1/ja unknown
- 2019-09-13 KR KR1020217007548A patent/KR20210066807A/ko unknown
- 2019-09-13 JP JP2020548454A patent/JP7363795B2/ja active Active
- 2019-09-13 CN CN201980061183.9A patent/CN112740721A/zh active Pending
- 2019-09-13 BR BR112021005241-0A patent/BR112021005241A2/pt unknown
Also Published As
Publication number | Publication date |
---|---|
BR112021005241A2 (pt) | 2021-06-15 |
JPWO2020066681A1 (ja) | 2021-08-30 |
US20220116732A1 (en) | 2022-04-14 |
EP3860156A1 (en) | 2021-08-04 |
EP3860156A4 (en) | 2021-12-01 |
CN112740721A (zh) | 2021-04-30 |
KR20210066807A (ko) | 2021-06-07 |
US11716586B2 (en) | 2023-08-01 |
JP7363795B2 (ja) | 2023-10-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8699727B2 (en) | Visually-assisted mixing of audio using a spectral analyzer | |
CA2887124C (en) | System and method for performing automatic audio production using semantic data | |
Wilmering et al. | A history of audio effects | |
US20170301330A1 (en) | Automatic multi-channel music mix from multiple audio stems | |
JP7230799B2 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
JP2023139188A (ja) | 方向性音源のエンコードおよびデコードのための方法、装置およびシステム | |
WO2022014326A1 (ja) | 信号処理装置および方法、並びにプログラム | |
WO2020066681A1 (ja) | 情報処理装置および方法、並びにプログラム | |
US10630254B2 (en) | Information processing device and information processing method | |
JP6233625B2 (ja) | 音声処理装置および方法、並びにプログラム | |
US20210105571A1 (en) | Sound image reproduction device, sound image reproduction method, and sound image reproduction program | |
EP3920049A1 (en) | Techniques for audio track analysis to support audio personalization | |
WO2021140959A1 (ja) | 符号化装置および方法、復号装置および方法、並びにプログラム | |
WO2022020365A1 (en) | Multi-stage processing of audio signals to facilitate rendering of 3d audio via a plurality of playback devices | |
CN114667563A (zh) | 声学空间的模态混响效果 | |
CN114631142A (zh) | 电子设备、方法和计算机程序 | |
WO2020152264A1 (en) | Electronic device, method and computer program | |
US20230135778A1 (en) | Systems and methods for generating a mixed audio file in a digital audio workstation | |
Ziemer | Goniometers are a powerful acoustic feature for music information retrieval tasks | |
WO2024024468A1 (ja) | 情報処理装置および方法、符号化装置、音声再生装置、並びにプログラム | |
WO2021124919A1 (ja) | 情報処理装置および方法、並びにプログラム | |
GB2561595A (en) | Ambience generation for spatial audio mixing featuring use of original and extended signal | |
WO2023062865A1 (ja) | 情報処理装置および方法、並びにプログラム | |
KR20230091455A (ko) | 사운드 이펙트 효과 설정 방법 | |
CN117501364A (zh) | 用于训练机器学习模型的装置、方法和计算机程序 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19864545; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2020548454; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112021005241; Country of ref document: BR |
| | ENP | Entry into the national phase | Ref document number: 2019864545; Country of ref document: EP; Effective date: 20210428 |
| | ENP | Entry into the national phase | Ref document number: 112021005241; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20210319 |