WO2018190151A1 - Signal processing device, method, and program

Info

Publication number
WO2018190151A1
Authority
WO
WIPO (PCT)
Prior art keywords
ambisonic
spread
gain
signal
audio
Prior art date
Application number
PCT/JP2018/013630
Other languages
English (en)
Japanese (ja)
Inventor
本間 弘幸 (Hiroyuki Homma)
優樹 山本 (Yuki Yamamoto)
Original Assignee
ソニー株式会社 (Sony Corporation)
Priority date
Filing date
Publication date
Application filed by ソニー株式会社 (Sony Corporation)
Priority to US16/500,591 (published as US10972859B2)
Priority to JP2019512429A (published as JP7143843B2)
Priority to BR112019020887A (published as BR112019020887A2)
Priority to KR1020197026586A (published as KR102490786B1)
Priority to RU2019131411A (published as RU2763391C2)
Priority to EP18784930.2A (published as EP3624116B1)
Publication of WO2018190151A1
Priority to US17/200,532 (published as US20210204086A1)
Priority to JP2022145788A (published as JP2022172391A)

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/307 Frequency adjustment, e.g. tone control
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03G CONTROL OF AMPLIFICATION
    • H03G5/00 Tone control or bandwidth control in amplifiers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/04 Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/04 Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11 Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/03 Application of parametric coding in stereophonic audio systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/11 Application of ambisonics in stereophonic audio systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S3/008 Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels

Definitions

  • the present technology relates to a signal processing device, method, and program, and more particularly, to a signal processing device, method, and program that can reduce a calculation load.
  • In addition to the conventional two-channel stereo method and multichannel stereo methods such as 5.1 channel, a moving sound source or the like can be treated as an independent audio object, and the position information of the object can be encoded as metadata along with the audio object signal data.
  • In the standard of Non-Patent Document 1, in addition to the audio objects described above, data such as ambisonics (also called HOA (Higher Order Ambisonics)), which carries spatial acoustic information around the viewer, can also be handled.
  • Since an audio object is assumed to be a point sound source when rendered to a speaker signal, a headphone signal, or the like, an audio object having a size cannot be expressed.
  • For example, in the standard of Non-Patent Document 1, 19 spread audio object signals are newly generated for one audio object at playback time based on the spread, rendered, and output to a playback device such as a speaker. As a result, an audio object having a pseudo size can be expressed.
  • This technology has been made in view of such a situation, and is intended to reduce the calculation load.
  • the signal processing device includes an ambisonic gain calculation unit that obtains an ambisonic gain when the object is at a predetermined position based on spread information of the object.
  • the signal processing device may further include an ambisonic signal generation unit that generates the ambisonic signal of the object based on the audio object signal of the object and the ambisonic gain.
  • Based on the spread information, the ambisonic gain calculation unit can obtain a reference position ambisonic gain for when the object is at a reference position, and can rotate the reference position ambisonic gain based on the object position information indicating the predetermined position to obtain the ambisonic gain.
  • The ambisonic gain calculation unit can determine the reference position ambisonic gain based on the spread information and a gain table.
  • In the gain table, a spread angle and the reference position ambisonic gain can be associated with each other.
  • By performing an interpolation process based on the reference position ambisonic gains associated with each of a plurality of spread angles in the gain table, the ambisonic gain calculation unit can obtain the reference position ambisonic gain corresponding to the spread angle indicated by the spread information.
  • The reference position ambisonic gain can be a sum of values obtained by substituting, into a spherical harmonic function, the angles indicating each of a plurality of spatial positions determined with respect to the spread angle indicated by the spread information.
  • the signal processing method or program according to one aspect of the present technology includes a step of obtaining an ambisonic gain when the object is at a predetermined position based on spread information of the object.
  • an ambisonic gain when the object is at a predetermined position is obtained based on the spread information of the object.
  • the calculation load can be reduced.
  • FIG. 1 is a diagram illustrating an example of a metadata format of an audio object including spread information.
  • the audio object metadata is encoded using a format shown in FIG. 1 at predetermined time intervals.
  • num_objects indicates the number of audio objects included in the bitstream.
  • tcimsbf is an abbreviation for "Two's complement integer, most significant bit first".
  • uimsbf is an abbreviation for "Unsigned integer, most significant bit first".
  • the metadata stores object_priority, spread, position_azimuth, position_elevation, position_radius, and gain_factor for each audio object.
  • object_priority is priority information indicating the priority when rendering an audio object on a playback device such as a speaker. For example, when audio data is played back on a device with fewer computing resources, audio object signals having a larger object_priority can be played back preferentially.
  • spread is metadata (spread information) representing the size of the audio object, and is defined in the MPEG-H Part 3: 3D audio standard as an angle representing the spread of the audio object from its spatial position.
  • gain_factor is gain information indicating the gain of each audio object.
  • position_azimuth, position_elevation, and position_radius are the azimuth angle, elevation angle, and radius (distance) representing the spatial position of the audio object, and the relationship among the azimuth, elevation, and radius is as shown in FIG.
  • the x axis, the y axis, and the z axis that pass through the origin O and are perpendicular to each other are axes of the three-dimensional orthogonal coordinate system.
  • a straight line connecting the origin O and the position of the audio object OB11 in space is a straight line r
  • a straight line obtained by projecting the straight line r on the xy plane is a straight line L.
  • an angle formed by the x axis and the straight line L is an azimuth indicating the position of the audio object OB11, that is, position_azimuth
  • an angle formed by the straight line r and the xy plane is an elevation angle indicating the position of the audio object OB11, that is, position_elevation.
  • the length of the straight line r is a radius indicating the position of the audio object OB11, that is, position_radius.
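  • The coordinate relationship described above can be sketched in code. The following Python snippet is an illustrative conversion only and is not part of the patent; the function name and the choice of degrees for the angle units are assumptions.

```python
import math

def spherical_to_cartesian(position_azimuth, position_elevation, position_radius):
    """Convert the object position metadata to Cartesian coordinates.

    Angles are in degrees: the azimuth is measured from the x axis within
    the xy plane (the straight line L), and the elevation is measured from
    the xy plane toward the z axis (the angle of the straight line r).
    """
    az = math.radians(position_azimuth)
    el = math.radians(position_elevation)
    x = position_radius * math.cos(el) * math.cos(az)
    y = position_radius * math.cos(el) * math.sin(az)
    z = position_radius * math.sin(el)
    return x, y, z

# An object straight ahead at unit distance lies on the x axis,
# and an object directly overhead lies on the z axis.
front = spherical_to_cartesian(0.0, 0.0, 1.0)
up = spherical_to_cartesian(0.0, 90.0, 1.0)
```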
  • the decoding side reads out the object_priority, spread, position_azimuth, position_elevation, position_radius, and gain_factor shown in FIG. 1 and uses them as appropriate.
  • VBAP (Vector Base Amplitude Panning)
  • VBAP is described in, for example, "INTERNATIONAL STANDARD ISO/IEC 23008-3 First edition 2015-10-15 Information technology - High efficiency coding and media delivery in heterogeneous environments - Part 3: 3D audio", so its detailed description is omitted here.
  • Vectors p0 to p18 indicating the positions of the 19 spread audio objects are obtained based on the spread.
  • A vector indicating the position indicated by the metadata of the audio object to be processed is set as the basic vector p0. Further, the angles indicated by position_azimuth and position_elevation of the audio object to be processed are denoted as the angle φ and the angle θ, respectively.
  • the basic vector v and the basic vector u are obtained by the following equations (1) and (2).
  • By normalizing the resulting vectors to obtain the vectors pm, 19 spread audio objects corresponding to the spread (spread information) are generated.
  • Each spread audio object is a virtual object at the spatial position indicated by a vector pm.
  • FIG. 4 is a diagram showing a plot of 19 spread audio objects in a three-dimensional orthogonal coordinate system when the angle indicated by spread is 30 degrees.
  • FIG. 5 is a diagram showing a plot of 19 spread audio objects in a three-dimensional orthogonal coordinate system when the angle indicated by spread is 90 degrees.
  • one circle represents the position indicated by one vector. That is, one circle represents one spread audio object.
  • an audio object composed of these 19 spread audio object signals is reproduced as one audio object signal, thereby expressing an audio object having a size.
  • The coefficient shown in the following equation (5) is set as a proration (interpolation) ratio.
  • In the rendering result when the angle indicated by spread is 90 degrees, all speakers have a constant gain.
  • In the present technology, for an audio object having spread information, the calculation load is reduced by obtaining the ambisonic gain directly from the spread information, without generating 19 spread audio objects at rendering time.
  • FIG. 6 is a diagram illustrating a configuration example of an embodiment of a signal processing device to which the present technology is applied.
  • the signal processing apparatus 11 shown in FIG. 6 has an ambisonic gain calculation unit 21, an ambisonic rotation unit 22, an ambisonic matrix application unit 23, an addition unit 24, and an ambisonic rendering unit 25.
  • The signal processing device 11 is supplied with an input ambisonic signal, which is an audio signal in ambisonic format, and an input audio object signal, which is the audio signal of an audio object, as the audio signals for reproducing the sound of the content.
  • The input ambisonic signal is a signal of the ambisonic channel Cn,m corresponding to the order n and the degree m of the spherical harmonic function Sn,m(θ, φ). That is, the input ambisonic signal of each ambisonic channel Cn,m is supplied to the signal processing device 11.
  • the input audio object signal is a monaural audio signal for reproducing the sound of one audio object, and the input audio object signal of each audio object is supplied to the signal processing device 11.
  • the signal processing device 11 is supplied with object position information and spread information as metadata for each audio object.
  • the object position information is information including the above-described position_azimuth, position_elevation, and position_radius.
  • position_azimuth indicates the azimuth indicating the position of the audio object in space
  • position_elevation indicates the elevation angle indicating the position of the audio object in space
  • position_radius indicates the radius indicating the position of the audio object in space.
  • the spread information is the above-described spread, and is angle information indicating the size of the audio object, that is, the degree of spread of the sound image of the audio object.
  • the present invention is not limited to this, and the signal processing device 11 may of course be supplied with input audio object signals, object position information, and spread information for a plurality of audio objects.
  • the ambisonic gain calculation unit 21 obtains an ambisonic gain when the audio object is at the front position based on the supplied spread information, and supplies the ambisonic gain to the ambisonic rotation unit 22.
  • the front position is a position in the front direction when viewed from the user position serving as a reference in space, and is a position at which position_azimuth and position_elevation as object position information are each 0 degrees.
  • In particular, the ambisonic gain of the ambisonic channel Cn,m when the audio object is at the front position is also referred to as the front position ambisonic gain Gn,m.
  • The front position ambisonic gain Gn,m of each ambisonic channel Cn,m is as follows.
  • Suppose that the signals obtained by multiplying the input audio object signal by the front position ambisonic gain Gn,m of each ambisonic channel Cn,m are the ambisonic signals of the respective ambisonic channels Cn,m, that is, signals in ambisonic format.
  • the sound image of the sound of the audio object is localized at the front position.
  • the sound of the audio object is a sound having an angular spread indicated by the spread information. That is, it is possible to express the same sound spread as when 19 spread audio objects are generated using spread information.
  • The relationship between the angle indicated by the spread information (hereinafter also referred to as the spread angle) and the front position ambisonic gain Gn,m of each ambisonic channel Cn,m is as shown in FIG.
  • In the figure, the vertical axis indicates the value of the front position ambisonic gain Gn,m, and the horizontal axis indicates the spread angle.
  • Curves L11 to L17 indicate the front position ambisonic gains Gn,m of the ambisonic channels Cn,m for each spread angle.
  • Among these, the front position ambisonic gain G1,1 of the ambisonic channel C1,1 and the front position ambisonic gain G3,1 of the ambisonic channel C3,1 are each shown by one of the curves.
  • The curve L17 shows the front position ambisonic gains of the ambisonic channels C1,-1, C1,0, C2,1, C2,-1, C2,-2, C3,0, C3,-1, C3,2, C3,-2, and C3,-3.
  • That is, the front position ambisonic gain indicated by the curve L17 is 0 regardless of the spread angle.
  • The spherical harmonic function Sn,m(θ, φ) is described in detail in, for example, chapter F.1.3 of "INTERNATIONAL STANDARD ISO/IEC 23008-3 First edition 2015-10-15 Information technology - High efficiency coding and media delivery in heterogeneous environments - Part 3: 3D audio", so its explanation is omitted here.
  • Here, the elevation angle and the azimuth angle indicating the three-dimensional spatial position of a spread audio object determined in accordance with the spread angle are denoted as θ and φ, respectively.
  • In particular, the elevation angle and the azimuth angle of the i-th (0 ≤ i ≤ 18) of the 19 spread audio objects are denoted as θi and φi.
  • the elevation angle ⁇ i and the azimuth angle ⁇ i correspond to the above-described position_elevation and position_azimuth, respectively.
  • The front position ambisonic gain Gn,m can be obtained by substituting the elevation angle θi and the azimuth angle φi of each spread audio object into the spherical harmonic function Sn,m(θ, φ) and adding up the resulting values Sn,m(θi, φi) for the 19 spread audio objects. That is, the front position ambisonic gain Gn,m can be obtained by calculating the following equation (6): Gn,m = Σi Sn,m(θi, φi), where the sum runs over i = 0 to 18.
  • In equation (6), the sum of the 19 spherical harmonic function values Sn,m(θi, φi) obtained for the same ambisonic channel Cn,m is the front position ambisonic gain Gn,m of that ambisonic channel Cn,m.
  • In other words, a plurality of spread audio objects, in this case 19, are positioned in the space, and the angles indicating the position of each spread audio object are the elevation angle θi and the azimuth angle φi. The value obtained by substituting the elevation angle θi and the azimuth angle φi of a spread audio object into the spherical harmonic function is Sn,m(θi, φi), and the sum of the values Sn,m(θi, φi) obtained for the 19 spread audio objects is defined as the front position ambisonic gain Gn,m.
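  • As a hedged illustration of the structure of equation (6), the sketch below writes out only the order-0 and order-1 real spherical harmonics and sums them over a stand-in set of two directions. The actual standard uses 19 spread directions and higher orders, and the normalization convention used here is an assumption, not the patent's.

```python
import math

def real_sph_harm_first_order(n, m, elevation, azimuth):
    """Real spherical harmonics S_{n,m} up to order 1.

    Only orders 0 and 1 are written out, for illustration; the patent's
    gains also cover higher orders. Angles are in radians, with elevation
    measured from the horizontal plane and azimuth from the front.
    """
    x = math.cos(elevation) * math.cos(azimuth)
    y = math.cos(elevation) * math.sin(azimuth)
    z = math.sin(elevation)
    if (n, m) == (0, 0):
        return 0.5 / math.sqrt(math.pi)                  # constant term
    if (n, m) == (1, -1):
        return math.sqrt(3.0 / (4.0 * math.pi)) * y      # left/right
    if (n, m) == (1, 0):
        return math.sqrt(3.0 / (4.0 * math.pi)) * z      # up/down
    if (n, m) == (1, 1):
        return math.sqrt(3.0 / (4.0 * math.pi)) * x      # front/back
    raise NotImplementedError("only orders 0 and 1 in this sketch")

def front_position_gain(n, m, directions):
    """Equation (6): sum S_{n,m}(theta_i, phi_i) over spread directions."""
    return sum(real_sph_harm_first_order(n, m, el, az) for el, az in directions)

# Illustrative stand-in for the 19 spread directions: two points placed
# symmetrically about the front direction, as (elevation, azimuth) pairs.
dirs = [(0.0, math.radians(15.0)), (0.0, math.radians(-15.0))]
g_0_0 = front_position_gain(0, 0, dirs)    # grows with the point count
g_1_1 = front_position_gain(1, 1, dirs)    # nonzero: both points lean forward
g_1_m1 = front_position_gain(1, -1, dirs)  # cancels by left/right symmetry
```

The left/right cancellation mirrors the observation above that several channels have a front position ambisonic gain of zero for a point set symmetric about the front.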
  • Note that the ambisonic channels C0,0, C1,1, C2,0, C2,2, C3,1, and C3,3 have substantially nonzero front position ambisonic gains Gn,m, while the front position ambisonic gains Gn,m of the other ambisonic channels Cn,m are zero.
  • The ambisonic gain calculation unit 21 may calculate the front position ambisonic gain Gn,m of each ambisonic channel Cn,m by performing the calculation of equation (6) based on the spread information, but here the front position ambisonic gain Gn,m is acquired using a gain table.
  • a gain table in which each spread angle is associated with the front position ambisonic gain Gn, m is generated and held in advance for each ambisonic channel Cn , m .
  • In the gain table, for example, each spread angle value may be associated with the value of the front position ambisonic gain Gn,m corresponding to that spread angle. Alternatively, for example, one front position ambisonic gain value Gn,m may be associated with a range of spread angle values.
  • the resolution of the spread angle in the gain table may be determined according to the resource scale of the device that reproduces the sound of the content based on the input audio object signal or the like and the reproduction quality required at the time of content reproduction.
  • When the spread angle is small, the amount of change in the front position ambisonic gain Gn,m with respect to a change in the spread angle is small. Therefore, in the gain table, for small spread angles, the range of spread angles associated with one front position ambisonic gain Gn,m, that is, the step width of the spread angle, may be made large, and the step width may be made smaller as the spread angle increases.
  • The front position ambisonic gain Gn,m may also be obtained by performing an interpolation process such as linear interpolation.
  • In that case, the ambisonic gain calculation unit 21 performs interpolation based on the front position ambisonic gains Gn,m associated with the spread angles in the gain table to find the front position ambisonic gain Gn,m corresponding to the spread angle indicated by the spread information.
  • For example, suppose that the spread angle indicated by the spread information is 65 degrees, and that in the gain table the spread angle "60 degrees" is associated with the front position ambisonic gain Gn,m "0.2" and the spread angle "70 degrees" is associated with the front position ambisonic gain Gn,m "0.3".
  • In this case, the ambisonic gain calculation unit 21 calculates the front position ambisonic gain Gn,m "0.25" corresponding to the spread angle "65 degrees" by linear interpolation based on the spread information and the gain table.
  • As described above, the ambisonic gain calculation unit 21 holds in advance a gain table obtained by tabulating the front position ambisonic gain Gn,m of each ambisonic channel Cn,m, which changes according to the spread angle.
  • the front position ambisonic gain G n, m can be obtained directly from the gain table without separately generating 19 spread audio objects from the spread information. If the gain table is used, the calculation load can be further reduced as compared with the case where the front position ambisonic gain G n, m is directly calculated.
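  • A minimal sketch of the gain-table lookup with linear interpolation might look as follows; the table values and the function name are invented for illustration and do not come from the patent.

```python
import numpy as np

# A toy gain table for one ambisonic channel: spread angles (degrees) and
# the corresponding front position ambisonic gains. The values are made up
# for illustration; real tables would be precomputed per channel.
table_angles = np.array([0.0, 30.0, 60.0, 70.0, 90.0])
table_gains = np.array([0.0, 0.1, 0.2, 0.3, 0.5])

def front_gain_from_table(spread_angle):
    """Linear interpolation between neighboring table entries,
    matching the 65-degree example described in the text."""
    return float(np.interp(spread_angle, table_angles, table_gains))

gain_65 = front_gain_from_table(65.0)  # halfway between 0.2 and 0.3
```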
  • the ambisonic gain calculation unit 21 calculates the ambisonic gain when the audio object is in the front position.
  • the ambisonic gain calculation unit 21 may obtain the ambisonic gain when the audio object is at another reference position.
  • When the ambisonic gain calculation unit 21 obtains the front position ambisonic gain Gn,m of each ambisonic channel Cn,m based on the supplied spread information and the held gain table, it supplies the obtained front position ambisonic gain Gn,m to the ambisonic rotation unit 22.
  • The ambisonic rotation unit 22 performs a rotation process on the front position ambisonic gain Gn,m supplied from the ambisonic gain calculation unit 21 based on the supplied object position information.
  • The ambisonic rotation unit 22 supplies the object position ambisonic gain G'n,m of each ambisonic channel Cn,m obtained by the rotation process to the ambisonic matrix application unit 23.
  • Here, the object position ambisonic gain G'n,m is the ambisonic gain when the audio object is at the position indicated by the object position information, that is, at the actual position of the audio object.
  • In the rotation process, the position of the audio object is rotationally moved from the front position to the original audio object position, and the ambisonic gain after the rotational movement is calculated as the object position ambisonic gain G'n,m.
  • That is, the front position ambisonic gain Gn,m corresponding to the front position is rotated, and the object position ambisonic gain G'n,m corresponding to the actual audio object position indicated by the object position information is calculated.
  • Specifically, the product of the rotation matrix M corresponding to the rotation angle of the audio object, that is, the rotation angle of the ambisonic gain, and the matrix G composed of the front position ambisonic gains Gn,m of the ambisonic channels Cn,m is obtained, and the elements of the resulting matrix G' are set as the object position ambisonic gains G'n,m of the ambisonic channels Cn,m.
  • the rotation angle here is a rotation angle when the audio object is rotated from the front position to the position indicated by the object position information.
  • The rotation matrix M is described, for example, as the Wigner D-function in J. J. Sakurai and J. Napolitano, "Modern Quantum Mechanics", Addison-Wesley, 2010.
  • the rotation matrix M is a block diagonal matrix represented by the following equation (8).
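  • The full rotation matrix M of equation (8) is not reproduced here. As a hedged illustration only, the sketch below applies a yaw-only rotation to first-order ambisonic gains in ACN channel order [W, Y, Z, X]; this corresponds to one diagonal block of such a block-diagonal matrix under the stated conventions, which are assumptions rather than the patent's exact definitions.

```python
import numpy as np

def yaw_rotation_matrix_first_order(yaw):
    """Block-diagonal rotation for first-order ambisonic gains.

    Channel order is ACN: [W, Y, Z, X]. This is an illustrative yaw-only
    special case; the general rotation in the patent covers all orders
    and arbitrary rotation angles via the Wigner D-function.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([
        [1.0, 0.0, 0.0, 0.0],   # order 0 (W) is rotation invariant
        [0.0,   c, 0.0,   s],   # Y' =  cos*Y + sin*X
        [0.0, 0.0, 1.0, 0.0],   # Z is unchanged by a yaw rotation
        [0.0,  -s, 0.0,   c],   # X' = -sin*Y + cos*X
    ])

# Gains of a source at the front position (x axis): energy in W and X only.
g_front = np.array([0.2820948, 0.0, 0.0, 0.4886025])

# Rotating by 90 degrees of yaw moves the energy from X to Y,
# i.e. the source is moved from the front to the side.
g_rotated = yaw_rotation_matrix_first_order(np.pi / 2) @ g_front
```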
  • the ambisonic gain calculation unit 21 and the ambisonic rotation unit 22 calculate the object position ambisonic gain G ′ n, m for the audio object based on the spread information and the object position information.
  • the ambisonic matrix application unit 23 converts the supplied input audio object signal into an ambisonic signal based on the object position ambisonic gain G ′ n, m supplied from the ambisonic rotation unit 22.
  • Specifically, the ambisonic matrix application unit 23 obtains the output ambisonic signal Cn,m(t) of each ambisonic channel Cn,m.
  • That is, the input audio object signal Obj(t) is multiplied by the object position ambisonic gain G'n,m of a given ambisonic channel Cn,m to obtain the output ambisonic signal Cn,m(t) of that ambisonic channel Cn,m.
  • The output ambisonic signal Cn,m(t) obtained in this way reproduces the same sound as when 19 spread audio objects are generated using the spread information and the sound based on the input audio object signal is reproduced.
  • In other words, the output ambisonic signal Cn,m(t) is an ambisonic format signal that reproduces the sound of an audio object whose sound image is localized at the position indicated by the object position information and that expresses the sound spread indicated by the spread information.
  • the ambisonic matrix application unit 23 supplies the output ambisonic signal C n, m (t) of each ambisonic channel C n, m thus obtained to the addition unit 24.
  • In this way, the ambisonic matrix application unit 23 generates the output ambisonic signal Cn,m(t) based on the input audio object signal Obj(t) of the audio object and the object position ambisonic gain G'n,m.
  • The addition unit 24 adds the output ambisonic signal Cn,m(t) supplied from the ambisonic matrix application unit 23 and the supplied input ambisonic signal for each ambisonic channel Cn,m, and supplies the resulting ambisonic signal C'n,m(t) to the ambisonic rendering unit 25. That is, the addition unit 24 mixes the output ambisonic signal Cn,m(t) and the input ambisonic signal.
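  • The per-channel multiplication of equation (9) and the mixing step can be sketched as follows; the gain values, signal samples, and variable names are illustrative only.

```python
import numpy as np

# Hypothetical object position ambisonic gains G'_{n,m} for four ambisonic
# channels, and a short mono input audio object signal Obj(t).
object_position_gains = np.array([0.28, 0.0, 0.1, 0.45])
obj_signal = np.array([1.0, 2.0, 4.0])

# Equation (9): C_{n,m}(t) = G'_{n,m} * Obj(t) for every channel at once,
# computed as an outer product (channels x time samples).
output_ambisonic = np.outer(object_position_gains, obj_signal)

# Mixing as in the addition unit 24: add the input ambisonic signal
# channel by channel (all zeros here for simplicity).
input_ambisonic = np.zeros_like(output_ambisonic)
mixed = output_ambisonic + input_ambisonic
```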
  • The ambisonic rendering unit 25 obtains the output audio signal Ok(t) to be supplied to each output speaker based on the ambisonic signal C'n,m(t) of each ambisonic channel Cn,m supplied from the addition unit 24 and a matrix called a decoding matrix, which corresponds to the three-dimensional spatial positions of the output speakers (not shown).
  • Here, a column vector (matrix) composed of the ambisonic signals C'n,m(t) of the ambisonic channels Cn,m is denoted as the vector C, and a column vector (matrix) composed of the output audio signals Ok(t) is denoted as the vector O. Further, the decoding matrix is denoted as D.
  • the ambisonic rendering unit 25 calculates the vector O by obtaining the product of the decoding matrix D and the vector C, for example, as shown in the following equation (10).
  • The decoding matrix D is a matrix whose rows correspond to the audio channels k and whose columns correspond to the ambisonic channels Cn,m, so that the product in equation (10) yields one output audio signal per audio channel.
  • The decoding matrix D may be obtained by any appropriate method.
  • the ambisonic rendering unit 25 outputs the output audio signal O k (t) of each audio channel k obtained as described above, for example, to an output speaker corresponding to the audio channel k.
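  • As an illustration of equation (10), the sketch below multiplies a toy decoding matrix D by a block of ambisonic signals; the matrix values are invented and do not correspond to any real speaker layout.

```python
import numpy as np

# Toy decoding matrix D for two output speakers and four ambisonic
# channels: rows are audio channels k, columns are ambisonic channels.
D = np.array([
    [1.0, 0.5, 0.0, 0.5],   # left speaker weights per ambisonic channel
    [1.0, -0.5, 0.0, 0.5],  # right speaker weights per ambisonic channel
])

# Ambisonic signals C'_{n,m}(t): four channels, three time samples.
C = np.array([
    [1.0, 1.0, 1.0],
    [0.5, 0.0, -0.5],
    [0.0, 0.0, 0.0],
    [0.2, 0.2, 0.2],
])

# Equation (10): O = D C gives one output audio signal per speaker.
O = D @ C
```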
  • In step S11, the ambisonic gain calculation unit 21 obtains the front position ambisonic gain Gn,m for each ambisonic channel Cn,m based on the supplied spread information, and supplies it to the ambisonic rotation unit 22.
  • That is, the ambisonic gain calculation unit 21 reads out the front position ambisonic gain Gn,m associated with the spread angle indicated by the supplied spread information from the held gain table, thereby obtaining the front position ambisonic gain Gn,m of each ambisonic channel Cn,m.
  • At this time, the ambisonic gain calculation unit 21 performs an interpolation process as necessary to obtain the front position ambisonic gain Gn,m.
  • In step S12, the ambisonic rotation unit 22 performs a rotation process on the front position ambisonic gain Gn,m supplied from the ambisonic gain calculation unit 21 based on the supplied object position information.
  • That is, the ambisonic rotation unit 22 performs the calculation of the above-described equation (7) based on the rotation matrix M determined by the object position information, and calculates the object position ambisonic gain G'n,m of each ambisonic channel Cn,m.
  • The ambisonic rotation unit 22 supplies the obtained object position ambisonic gain G'n,m to the ambisonic matrix application unit 23.
  • In step S13, the ambisonic matrix application unit 23 generates the output ambisonic signal Cn,m(t) based on the object position ambisonic gain G'n,m supplied from the ambisonic rotation unit 22 and the supplied input audio object signal.
  • That is, the ambisonic matrix application unit 23 calculates the output ambisonic signal Cn,m(t) for each ambisonic channel Cn,m by performing the calculation of the above-described equation (9).
  • The ambisonic matrix application unit 23 supplies the obtained output ambisonic signal Cn,m(t) to the addition unit 24.
  • In step S14, the addition unit 24 mixes the output ambisonic signal Cn,m(t) supplied from the ambisonic matrix application unit 23 with the supplied input ambisonic signal.
  • That is, the addition unit 24 adds the output ambisonic signal Cn,m(t) and the input ambisonic signal for each ambisonic channel Cn,m, and supplies the resulting ambisonic signal C'n,m(t) to the ambisonic rendering unit 25.
  • In step S15, the ambisonic rendering unit 25 generates the output audio signal Ok(t) of each audio channel k based on the ambisonic signal C'n,m(t) supplied from the addition unit 24.
  • That is, the ambisonic rendering unit 25 obtains the output audio signal Ok(t) of each audio channel k by performing the calculation of the above-described equation (10).
  • When the output audio signal Ok(t) is obtained, the ambisonic rendering unit 25 outputs the obtained output audio signal Ok(t) to the subsequent stage, and the rendering process ends.
  • As described above, the signal processing device 11 calculates the object position ambisonic gain based on the spread information and the object position information, and converts the input audio object signal into an ambisonic signal based on the object position ambisonic gain.
  • the calculation load of the rendering process can be reduced.
  • Also, in the signal processing device 11, the front position ambisonic gain can be obtained from the spread information even in a case where the spread information specifies two spread angles.
  • In such a case, the spread information includes a spread angle swidth in the horizontal direction, that is, the azimuth angle direction, and a spread angle sheight in the vertical direction, that is, the elevation angle direction.
  • FIG. 9 shows an example of the format of the metadata of the audio object in a case where the spread angle swidth and the spread angle sheight are included as the spread information. In FIG. 9, the description of the portions corresponding to those in FIG. 1 is omitted.
  • In the example shown in FIG. 9, spread_width[i] and spread_height[i] are stored as the spread information.
  • Here, spread_width[i] indicates the spread angle swidth of the i-th audio object, and spread_height[i] indicates the spread angle sheight of the i-th audio object.
  • In this case, a spread angle ratio r, which is the ratio of the two spread angles swidth and sheight, is obtained by the following equation (11).
  • Then, the basic vector v shown in equation (1) described above is corrected by multiplying it by the spread angle ratio r.
  • Here, v′ represents the corrected basic vector obtained by multiplying by the spread angle ratio r.
  • After that, equations (2) and (3) described above are calculated as they are, and as the angle in equation (4), an angle obtained by limiting the spread angle swidth to not less than 0.001 degrees and not more than 90 degrees is used. Further, the calculation of equation (5) is performed using the spread angle swidth as the angle in that equation.
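The two-angle preprocessing above can be sketched as follows. The direction of the ratio in equation (11) (height over width) and all names are our assumptions; this is not the patent's exact formulation:

```python
import numpy as np

def correct_basic_vectors(s_width, s_height, basic_vectors):
    # Equation (11) style ratio of the two spread angles
    # (assumed here to be s_height / s_width).
    r = s_height / s_width
    # The basic vector v of equation (1) is corrected by multiplying
    # it by the spread angle ratio: v' = r * v.
    v_prime = r * np.asarray(basic_vectors, dtype=float)
    # The angle used in equation (4): s_width limited to [0.001, 90] degrees.
    alpha = min(max(s_width, 0.001), 90.0)
    return r, v_prime, alpha

r, v_prime, alpha = correct_basic_vectors(30.0, 15.0, [[0.0, 1.0, 0.0]])
```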
  • When the spread information includes the spread angle swidth and the spread angle sheight, a method based on MPEG-H 3D Audio Phase 2 still generates 19 spread audio objects, so the computational load of the rendering process remains large.
  • In contrast, in the signal processing device 11, the front position ambisonic gain Gn,m can be obtained by using a gain table, as in the first embodiment described above.
  • That is, in the first embodiment, the ambisonic gain calculation unit 21 held a gain table in which, for example, one spread angle indicated by the spread information was associated with the front position ambisonic gain Gn,m.
  • In contrast, in a case where the spread information includes the spread angle swidth and the spread angle sheight, a gain table in which, for example, one front position ambisonic gain Gn,m is associated with each combination of the spread angle swidth and the spread angle sheight is held in the ambisonic gain calculation unit 21.
  • For the ambisonic channel C0,0, the j-axis indicates the spread angle swidth, the k-axis indicates the spread angle sheight, and the l-axis indicates the front position ambisonic gain G0,0; the curved surface SF11 indicates the front position ambisonic gain G0,0 determined for each combination of the spread angle swidth and the spread angle sheight.
  • The ambisonic gain calculation unit 21 holds a table obtained from the relationship indicated by the curved surface SF11 as the gain table of the ambisonic channel C0,0.
  • Similarly, for the ambisonic channel C3,1, the j-axis indicates the spread angle swidth, the k-axis indicates the spread angle sheight, and the l-axis indicates the front position ambisonic gain G3,1; the curved surface SF21 indicates the front position ambisonic gain G3,1 determined for each combination of the spread angle swidth and the spread angle sheight.
  • In this way, the ambisonic gain calculation unit 21 holds, for each ambisonic channel Cn,m, a gain table in which the spread angle swidth and the spread angle sheight are associated with the front position ambisonic gain Gn,m.
  • Then, in step S11 of FIG. 8, the ambisonic gain calculation unit 21 obtains the front position ambisonic gain Gn,m of each ambisonic channel Cn,m by using the gain table. That is, the ambisonic gain calculation unit 21 acquires the front position ambisonic gain Gn,m of each ambisonic channel Cn,m by reading it out from the gain table on the basis of the spread angle swidth and the spread angle sheight included in the supplied spread information. In this case as well, an interpolation process is performed as appropriate.
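The two-angle table lookup can be sketched with bilinear interpolation between the stored grid points. The patent only says interpolation is performed as appropriate, so the bilinear scheme and all names here are our assumptions:

```python
import numpy as np

def lookup_gain(table, widths, heights, s_width, s_height):
    """Read a front position ambisonic gain from a 2-D gain table indexed
    by (s_width, s_height), bilinearly interpolating between entries."""
    widths = np.asarray(widths, dtype=float)
    heights = np.asarray(heights, dtype=float)
    table = np.asarray(table, dtype=float)
    i = int(np.clip(np.searchsorted(widths, s_width) - 1, 0, len(widths) - 2))
    j = int(np.clip(np.searchsorted(heights, s_height) - 1, 0, len(heights) - 2))
    tw = (s_width - widths[i]) / (widths[i + 1] - widths[i])
    th = (s_height - heights[j]) / (heights[j + 1] - heights[j])
    return (table[i, j] * (1 - tw) * (1 - th)
            + table[i + 1, j] * tw * (1 - th)
            + table[i, j + 1] * (1 - tw) * th
            + table[i + 1, j + 1] * tw * th)

# Toy table in which the stored gain happens to equal s_width + s_height:
widths = [0.0, 10.0]
heights = [0.0, 10.0]
table = [[0.0, 10.0], [10.0, 20.0]]
g = lookup_gain(table, widths, heights, 5.0, 5.0)
```

With this toy table the interpolation is exact, which makes the behavior easy to check; a real gain table would store the precomputed front position ambisonic gains on a grid of angle combinations.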
  • With this configuration, the signal processing apparatus 11 can obtain the front position ambisonic gain Gn,m directly from the gain table without generating 19 spread audio objects. Further, by using the front position ambisonic gain Gn,m, the input audio object signal can be converted into an ambisonic signal. Thereby, the calculation load of the rendering process can be reduced.
  • the present technology can also be applied to an elliptical spread as handled in MPEG-H 3D Audio Phase 2. Furthermore, the present technology can be applied to a spread having a complicated shape such as a quadrangle or a star shape, which is not described in MPEG-H 3D Audio Phase 2.
  • For example, in a general decoder conforming to the MPEG-H standard, 19 spread audio objects are generated in accordance with the standard as described above.
  • Such a general decoder is configured as shown in FIG. 14.
  • The decoder 51 shown in FIG. 14 includes a core decoder 61, an object rendering unit 62, an ambisonic rendering unit 63, and a mixer 64.
  • When an input bit stream is supplied to the decoder 51, the core decoder 61 performs a decoding process on the input bit stream to obtain a channel signal, an audio object signal, metadata of the audio object, and an ambisonic signal.
  • the channel signal is an audio signal of each audio channel.
  • the audio object metadata includes object position information and spread information.
  • The object rendering unit 62 performs a rendering process on the audio object signal supplied from the core decoder 61, based on the three-dimensional spatial position of output speakers (not shown).
  • The metadata input to the object rendering unit 62 includes spread information in addition to the object position information indicating the three-dimensional spatial position of the audio object.
  • When the spread angle indicated by the spread information is not 0 degrees, virtual objects corresponding to the spread angle, that is, 19 spread audio objects, are generated as described above. A rendering process is then performed for each of the 19 spread audio objects, and the audio signals of the respective audio channels obtained as a result are supplied to the mixer 64 as an object output signal.
  • The ambisonic rendering unit 63 generates a decoding matrix based on the three-dimensional spatial position of the output speakers and the number of ambisonic channels. The ambisonic rendering unit 63 then performs the same calculation as equation (10) described above, based on the decoding matrix and the ambisonic signal supplied from the core decoder 61, and supplies the resulting ambisonic output signal to the mixer 64.
  • The mixer 64 performs a mixing process on the channel signal from the core decoder 61, the object output signal from the object rendering unit 62, and the ambisonic output signal from the ambisonic rendering unit 63 to generate the final output audio signal. That is, for each audio channel, the channel signal, the object output signal, and the ambisonic output signal are added together to obtain the output audio signal.
  • In contrast, when the present technology is applied, the decoder is configured as shown in FIG. 15, for example.
  • The decoder shown in FIG. 15 includes a core decoder 101, an object/ambisonic signal conversion unit 102, an addition unit 103, an ambisonic rendering unit 104, and a mixer 105.
  • In this decoder, the core decoder 101 decodes the input bit stream, and a channel signal, an audio object signal, metadata of the audio object, and an ambisonic signal are obtained.
  • The core decoder 101 supplies the channel signal obtained by the decoding process to the mixer 105, supplies the audio object signal and the metadata to the object/ambisonic signal conversion unit 102, and supplies the ambisonic signal to the addition unit 103.
  • The object/ambisonic signal conversion unit 102 includes the ambisonic gain calculation unit 21, the ambisonic rotation unit 22, and the ambisonic matrix application unit 23 described above.
  • The object/ambisonic signal conversion unit 102 calculates the object position ambisonic gain of each ambisonic channel on the basis of the object position information and the spread information included in the metadata supplied from the core decoder 101.
  • Further, the object/ambisonic signal conversion unit 102 obtains an ambisonic signal of each ambisonic channel on the basis of the calculated object position ambisonic gain and the supplied audio object signal, and supplies the ambisonic signal to the addition unit 103.
  • In other words, the object/ambisonic signal conversion unit 102 converts the audio object signal into an ambisonic-format signal on the basis of the metadata.
  • In this way, when the audio object signal is converted into an ambisonic signal, it can be converted directly, without generating 19 spread audio objects. Thereby, compared with the case where the rendering process is performed in the object rendering unit 62 shown in FIG. 14, the amount of calculation can be greatly reduced.
  • The addition unit 103 mixes the ambisonic signal supplied from the object/ambisonic signal conversion unit 102 and the ambisonic signal supplied from the core decoder 101. That is, the addition unit 103 adds, for each ambisonic channel, the ambisonic signal supplied from the object/ambisonic signal conversion unit 102 and the ambisonic signal supplied from the core decoder 101, and supplies the resulting ambisonic signal to the ambisonic rendering unit 104.
  • The ambisonic rendering unit 104 generates an ambisonic output signal on the basis of the ambisonic signal supplied from the addition unit 103 and a decoding matrix based on the three-dimensional spatial position of the output speakers and the number of ambisonic channels. That is, the ambisonic rendering unit 104 performs the same calculation as equation (10) described above to generate an ambisonic output signal of each audio channel, and supplies it to the mixer 105.
  • the mixer 105 mixes the channel signal supplied from the core decoder 101 and the ambisonic output signal supplied from the ambisonic rendering unit 104, and outputs the resulting output audio signal to the subsequent stage. That is, the channel signal and the ambisonic output signal are added for each audio channel to obtain an output audio signal.
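Putting the FIG. 15 stages together, a toy end-to-end sketch of the data flow (one audio object, two ambisonic channels; all numbers and names are illustrative only):

```python
import numpy as np

def decode_pipeline(object_gains, object_signal, ambisonic_signal,
                    decode_matrix, channel_signal):
    """Toy sketch of the FIG. 15 decoder path: object/ambisonic conversion,
    then addition, ambisonic rendering, and final mixing."""
    gains = np.asarray(object_gains, dtype=float)
    # Object/ambisonic signal conversion unit 102: direct conversion.
    obj_ambi = gains[:, None] * np.asarray(object_signal, dtype=float)[None, :]
    # Addition unit 103: mix with the decoded ambisonic signal.
    mixed_ambi = obj_ambi + np.asarray(ambisonic_signal, dtype=float)
    # Ambisonic rendering unit 104: decode to audio channels.
    ambi_out = np.asarray(decode_matrix, dtype=float) @ mixed_ambi
    # Mixer 105: add the channel signal to obtain the output audio signal.
    return np.asarray(channel_signal, dtype=float) + ambi_out

out = decode_pipeline(
    object_gains=[1.0, 0.0],
    object_signal=[0.5, 0.5],
    ambisonic_signal=[[0.1, 0.1], [0.0, 0.0]],
    decode_matrix=[[1.0, 0.0], [1.0, 0.0]],
    channel_signal=[[0.0, 0.0], [0.0, 0.0]],
)
```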
  • <Application example 2 of the present technology> The present technology can be applied not only to a decoder but also to an encoder that performs a pre-rendering process.
  • For example, signals of different formats, such as an input channel signal, an input audio object signal, and an input ambisonic signal, are input to the encoder and are converted into signals of a single format, such as the ambisonic format, before being encoded. Such processing is generally called pre-rendering processing.
  • As described above, when spread information is included in the metadata of the audio object, 19 spread audio objects are generated in accordance with the spread angle, and each of the 19 spread audio objects is converted into an ambisonic signal, so that the amount of processing increases.
  • In such a case, the amount of processing in the encoder, that is, the amount of calculation, can be reduced by converting the input audio object signal into an ambisonic-format signal using the present technology.
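The saving can be made concrete with a back-of-the-envelope multiply count per sample; the channel count is illustrative, not from the patent:

```python
# Per-sample multiply count for converting one audio object into
# N ambisonic channels (illustrative figures only).
num_ambisonic_channels = 16   # e.g. up to 3rd order: (3 + 1) ** 2

# Conventional pre-rendering: 19 spread audio objects are generated and
# each one is converted into an ambisonic signal.
conventional = 19 * num_ambisonic_channels

# Described technique: the object signal is converted directly, using one
# object position ambisonic gain per ambisonic channel.
direct = num_ambisonic_channels

print(conventional, direct)   # prints: 304 16
```

Under these assumptions the direct conversion needs 19 times fewer multiplications per sample, which matches the factor by which the spread audio objects are avoided.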
  • An encoder to which the present technology is applied is configured as shown in FIG. 16, for example.
  • The encoder shown in FIG. 16 includes a channel/ambisonic signal conversion unit 141, an object/ambisonic signal conversion unit 142, a mixer 143, and a core encoder 144.
  • The channel/ambisonic signal conversion unit 141 converts the supplied input channel signal of each audio channel into an ambisonic output signal and supplies it to the mixer 143.
  • The channel/ambisonic signal conversion unit 141 has the same configuration as the ambisonic gain calculation unit 21 through the ambisonic matrix application unit 23 described above, and performs the same processing as in the signal processing apparatus 11 to convert the input channel signal into an ambisonic output signal.
  • The object/ambisonic signal conversion unit 142 includes the ambisonic gain calculation unit 21, the ambisonic rotation unit 22, and the ambisonic matrix application unit 23 described above.
  • The object/ambisonic signal conversion unit 142 obtains an ambisonic output signal of each ambisonic channel on the basis of the supplied metadata of the audio object and the input audio object signal, and supplies the ambisonic output signal to the mixer 143.
  • In other words, the object/ambisonic signal conversion unit 142 converts the input audio object signal into an ambisonic output signal on the basis of the metadata.
  • In this way, the input audio object signal can be converted directly into an ambisonic output signal without generating 19 spread audio objects, so that the amount of calculation can be greatly reduced.
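For intuition about such gain-based conversion, the first-order case of encoding one source direction into ambisonic gains with real spherical harmonics looks as follows. An ACN/SN3D-style convention is assumed here; the patent's equations cover general spherical harmonics, and this is not its exact formulation:

```python
import math

def first_order_gains(azimuth_deg, elevation_deg):
    """Illustrative first-order ambisonic encoding gains (W, Y, Z, X in an
    ACN/SN3D-style convention) for a source at the given direction."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = 1.0                              # omnidirectional component
    y = math.sin(az) * math.cos(el)      # left/right
    z = math.sin(el)                     # up/down
    x = math.cos(az) * math.cos(el)      # front/back
    return [w, y, z, x]

gains = first_order_gains(0.0, 0.0)   # source straight ahead
```

A channel signal at a fixed speaker position can be treated the same way: its direction is constant, so its encoding gains are computed once and applied per sample.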
  • The mixer 143 mixes the supplied input ambisonic signal, the ambisonic output signal supplied from the channel/ambisonic signal conversion unit 141, and the ambisonic output signal supplied from the object/ambisonic signal conversion unit 142, and supplies the ambisonic signal obtained by the mixing to the core encoder 144.
  • the core encoder 144 encodes the ambisonic signal supplied from the mixer 143 and outputs the obtained output bit stream.
  • In this way, also in the encoder, the amount of calculation can be reduced by converting the input channel signal or the input audio object signal into an ambisonic-format signal using the present technology.
  • As described above, according to the present technology, an ambisonic gain is obtained directly, without generating spread audio objects according to the spread information included in the metadata of the audio object, and the signal is converted into an ambisonic signal.
  • In particular, the present technology is highly effective in converting an audio object signal into an ambisonic signal during decoding of a bit stream including an audio object signal and an ambisonic signal, and in pre-rendering processing in an encoder.
  • the above-described series of processing can be executed by hardware or can be executed by software.
  • When the series of processing is executed by software, a program constituting the software is installed in a computer.
  • Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
  • FIG. 17 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
  • In the computer, a CPU (Central Processing Unit) 501, a ROM (Read Only Memory) 502, and a RAM (Random Access Memory) 503 are connected to one another by a bus 504.
  • An input / output interface 505 is further connected to the bus 504.
  • An input unit 506, an output unit 507, a recording unit 508, a communication unit 509, and a drive 510 are connected to the input / output interface 505.
  • the input unit 506 includes a keyboard, a mouse, a microphone, an image sensor, and the like.
  • the output unit 507 includes a display, a speaker, and the like.
  • the recording unit 508 includes a hard disk, a nonvolatile memory, and the like.
  • the communication unit 509 includes a network interface or the like.
  • the drive 510 drives a removable recording medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • In the computer configured as described above, the CPU 501 loads the program recorded in the recording unit 508 into the RAM 503 via the input/output interface 505 and the bus 504 and executes it, whereby the above-described series of processing is performed.
  • the program executed by the computer (CPU 501) can be provided by being recorded in a removable recording medium 511 as a package medium, for example.
  • the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be installed in the recording unit 508 via the input / output interface 505 by attaching the removable recording medium 511 to the drive 510. Further, the program can be received by the communication unit 509 via a wired or wireless transmission medium and installed in the recording unit 508. In addition, the program can be installed in the ROM 502 or the recording unit 508 in advance.
  • The program executed by the computer may be a program in which the processing is performed in time series in the order described in this specification, or may be a program in which the processing is performed in parallel or at a necessary timing, such as when a call is made.
  • Furthermore, the present technology can take a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
  • Each step described in the above flowchart can be executed by one device or shared and executed by a plurality of devices.
  • Furthermore, in a case where one step includes a plurality of processes, the plurality of processes included in that one step can be executed by one device or shared and executed by a plurality of devices.
  • Furthermore, the present technology can also be configured as follows.
  • (1) A signal processing apparatus including: an ambisonic gain calculation unit that obtains, on the basis of object position information and spread information of an object, an ambisonic gain for a case where the object is at a position indicated by the object position information.
  • (2) The signal processing apparatus according to (1), further including: an ambisonic signal generation unit that generates an ambisonic signal of the object on the basis of an audio object signal of the object and the ambisonic gain.
  • (3) The signal processing apparatus according to (1) or (2), in which the ambisonic gain calculation unit obtains, on the basis of the spread information, a reference position ambisonic gain for a case where the object is at a reference position, and performs a rotation process on the reference position ambisonic gain on the basis of the object position information to obtain the ambisonic gain.
  • (4) The signal processing apparatus according to (3), in which the ambisonic gain calculation unit obtains the reference position ambisonic gain on the basis of the spread information and a gain table.
  • (5) The signal processing apparatus according to (4), in which the gain table is a table in which a spread angle is associated with the reference position ambisonic gain.
  • (6) The signal processing apparatus according to (5), in which the ambisonic gain calculation unit obtains the reference position ambisonic gain corresponding to a spread angle indicated by the spread information by performing an interpolation process based on each of the reference position ambisonic gains associated with each of a plurality of spread angles in the gain table.
  • (7) The signal processing apparatus according to any one of (3) to (6), in which the reference position ambisonic gain is a sum of values obtained by substituting, into a spherical harmonic function, each of angles indicating each of a plurality of positions in a space determined with respect to a spread angle indicated by the spread information.
  • (8) A signal processing method including a step of obtaining, on the basis of object position information and spread information of an object, an ambisonic gain for a case where the object is at a position indicated by the object position information.
  • (9) A program for causing a computer to execute processing including a step of obtaining, on the basis of object position information and spread information of an object, an ambisonic gain for a case where the object is at a position indicated by the object position information.

Abstract

The invention relates to a signal processing device, method, and program configured so as to be able to reduce computational load. The signal processing device includes an ambisonic gain calculation unit that derives an ambisonic gain for when an object is at a prescribed position, on the basis of the spread information of the object. The invention can be applied to an encoder and to a decoder.
PCT/JP2018/013630 2017-04-13 2018-03-30 Dispositif, procédé et programme de traitement de signal WO2018190151A1 (fr)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US16/500,591 US10972859B2 (en) 2017-04-13 2018-03-30 Signal processing apparatus and method as well as program
JP2019512429A JP7143843B2 (ja) 2017-04-13 2018-03-30 信号処理装置および方法、並びにプログラム
BR112019020887A BR112019020887A2 (pt) 2017-04-13 2018-03-30 aparelho e método de processamento de sinal, e, programa.
KR1020197026586A KR102490786B1 (ko) 2017-04-13 2018-03-30 신호 처리 장치 및 방법, 그리고 프로그램
RU2019131411A RU2763391C2 (ru) 2017-04-13 2018-03-30 Устройство, способ и постоянный считываемый компьютером носитель для обработки сигналов
EP18784930.2A EP3624116B1 (fr) 2017-04-13 2018-03-30 Dispositif, procédé et programme de traitement de signal
US17/200,532 US20210204086A1 (en) 2017-04-13 2021-03-12 Signal processing apparatus and method as well as program
JP2022145788A JP2022172391A (ja) 2017-04-13 2022-09-14 信号処理装置および方法、並びにプログラム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017079446 2017-04-13
JP2017-079446 2017-04-13

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US16/500,591 A-371-Of-International US10972859B2 (en) 2017-04-13 2018-03-30 Signal processing apparatus and method as well as program
US17/200,532 Continuation US20210204086A1 (en) 2017-04-13 2021-03-12 Signal processing apparatus and method as well as program

Publications (1)

Publication Number Publication Date
WO2018190151A1 true WO2018190151A1 (fr) 2018-10-18

Family

ID=63792594

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/013630 WO2018190151A1 (fr) 2017-04-13 2018-03-30 Dispositif, procédé et programme de traitement de signal

Country Status (7)

Country Link
US (2) US10972859B2 (fr)
EP (1) EP3624116B1 (fr)
JP (2) JP7143843B2 (fr)
KR (1) KR102490786B1 (fr)
BR (1) BR112019020887A2 (fr)
RU (1) RU2763391C2 (fr)
WO (1) WO2018190151A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020153092A1 (fr) * 2019-01-25 2020-07-30 ソニー株式会社 Dispositif et procédé de traitement d'informations

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2763391C2 (ru) * 2017-04-13 2021-12-28 Сони Корпорейшн Устройство, способ и постоянный считываемый компьютером носитель для обработки сигналов

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5757927A (en) * 1992-03-02 1998-05-26 Trifield Productions Ltd. Surround sound apparatus
FR2836571B1 (fr) * 2002-02-28 2004-07-09 Remy Henri Denis Bruno Procede et dispositif de pilotage d'un ensemble de restitution d'un champ acoustique
US20070160216A1 (en) * 2003-12-15 2007-07-12 France Telecom Acoustic synthesis and spatialization method
KR102018824B1 (ko) * 2010-03-26 2019-09-05 돌비 인터네셔널 에이비 오디오 재생을 위한 오디오 사운드필드 표현을 디코딩하는 방법 및 장치
TWI792203B (zh) * 2011-07-01 2023-02-11 美商杜比實驗室特許公司 用於適應性音頻信號的產生、譯碼與呈現之系統與方法
CA2849889C (fr) * 2011-09-23 2020-01-07 Novozymes Biologicals, Inc. Combinaisons de lipo-chito-oligosaccharides et leurs methodes d'utilisation pour ameliorer la croissance de plantes
US9761229B2 (en) * 2012-07-20 2017-09-12 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for audio object clustering
US9479886B2 (en) * 2012-07-20 2016-10-25 Qualcomm Incorporated Scalable downmix design with feedback for object-based surround codec
EP2738762A1 (fr) * 2012-11-30 2014-06-04 Aalto-Korkeakoulusäätiö Procédé de filtrage spatial d'au moins un premier signal sonore, support de stockage lisible par ordinateur et système de filtrage spatial basé sur la cohérence de motifs croisés
US9883310B2 (en) * 2013-02-08 2018-01-30 Qualcomm Incorporated Obtaining symmetry information for higher order ambisonic audio renderers
US9609452B2 (en) * 2013-02-08 2017-03-28 Qualcomm Incorporated Obtaining sparseness information for higher order ambisonic audio renderers
US9338420B2 (en) * 2013-02-15 2016-05-10 Qualcomm Incorporated Video analysis assisted generation of multi-channel audio data
EP2806658B1 (fr) * 2013-05-24 2017-09-27 Barco N.V. Agencement et procédé de reproduction de données audio d'une scène acoustique
EP3934283B1 (fr) * 2013-12-23 2023-08-23 Wilus Institute of Standards and Technology Inc. Procédé de traitement de signal audio et dispositif de paramétérisation associé
CN109087653B (zh) * 2014-03-24 2023-09-15 杜比国际公司 对高阶高保真立体声信号应用动态范围压缩的方法和设备
EP2928216A1 (fr) * 2014-03-26 2015-10-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Appareil et procédé de remappage d'objet audio apparenté à un écran
CN103888889B (zh) * 2014-04-07 2016-01-13 北京工业大学 一种基于球谐展开的多声道转换方法
CN111556426B (zh) * 2015-02-06 2022-03-25 杜比实验室特许公司 用于自适应音频的混合型基于优先度的渲染系统和方法
US10136240B2 (en) * 2015-04-20 2018-11-20 Dolby Laboratories Licensing Corporation Processing audio data to compensate for partial hearing loss or an adverse hearing environment
US10419869B2 (en) * 2015-04-24 2019-09-17 Dolby Laboratories Licensing Corporation Augmented hearing system
US10277997B2 (en) * 2015-08-07 2019-04-30 Dolby Laboratories Licensing Corporation Processing object-based audio signals
WO2017037032A1 (fr) * 2015-09-04 2017-03-09 Koninklijke Philips N.V. Procédé et appareil pour traiter un signal audio associé à une image vidéo
US9961475B2 (en) * 2015-10-08 2018-05-01 Qualcomm Incorporated Conversion from object-based audio to HOA
US11128978B2 (en) * 2015-11-20 2021-09-21 Dolby Laboratories Licensing Corporation Rendering of immersive audio content
KR102465227B1 (ko) * 2016-05-30 2022-11-10 소니그룹주식회사 영상 음향 처리 장치 및 방법, 및 프로그램이 저장된 컴퓨터 판독 가능한 기록 매체
RU2763391C2 (ru) * 2017-04-13 2021-12-28 Сони Корпорейшн Устройство, способ и постоянный считываемый компьютером носитель для обработки сигналов

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"INTERNATIONAL STANDARD ISO/IEC 23008-3", 15 October 2015
BLEIDT, R. L. ET AL.: "Development of the MPEG-H TV Audio System for ATSC 3.0", IEEE TRANSACTIONS ON BROADCASTING, vol. 63, no. 1, 8 March 2017 (2017-03-08), pages 202 - 236, XP055545453 *
HERRE, J. ET AL.: "MPEG-H 3D Audio--The new Standard for Coding of Immersive Spatial Audio", IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, vol. 9, no. 5, 9 March 2015 (2015-03-09), pages 770 - 779, XP055545455 *
WIGNER-D FUNCTIONS: J. SAKURAI; J. NAPOLITANO: "Modern Quantum Mechanics", 2010, ADDISON-WESLEY

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020153092A1 (fr) * 2019-01-25 2020-07-30 ソニー株式会社 Dispositif et procédé de traitement d'informations

Also Published As

Publication number Publication date
KR102490786B1 (ko) 2023-01-20
KR20190139206A (ko) 2019-12-17
BR112019020887A2 (pt) 2020-04-28
US20210204086A1 (en) 2021-07-01
RU2019131411A (ru) 2021-04-05
JPWO2018190151A1 (ja) 2020-02-20
RU2763391C2 (ru) 2021-12-28
US20200068336A1 (en) 2020-02-27
EP3624116A1 (fr) 2020-03-18
EP3624116A4 (fr) 2020-03-18
RU2019131411A3 (fr) 2021-07-05
JP7143843B2 (ja) 2022-09-29
JP2022172391A (ja) 2022-11-15
EP3624116B1 (fr) 2022-05-04
US10972859B2 (en) 2021-04-06

Similar Documents

Publication Publication Date Title
KR102483042B1 (ko) 근거리/원거리 렌더링을 사용한 거리 패닝
RU2759160C2 (ru) УСТРОЙСТВО, СПОСОБ И КОМПЬЮТЕРНАЯ ПРОГРАММА ДЛЯ КОДИРОВАНИЯ, ДЕКОДИРОВАНИЯ, ОБРАБОТКИ СЦЕНЫ И ДРУГИХ ПРОЦЕДУР, ОТНОСЯЩИХСЯ К ОСНОВАННОМУ НА DirAC ПРОСТРАНСТВЕННОМУ АУДИОКОДИРОВАНИЮ
US8290167B2 (en) Method and apparatus for conversion between multi-channel audio formats
KR102615550B1 (ko) 신호 처리 장치 및 방법, 그리고 프로그램
WO2016208406A1 (fr) Dispositif, procédé et programme de traitement du son
JP7283392B2 (ja) 信号処理装置および方法、並びにプログラム
JP2022172391A (ja) 信号処理装置および方法、並びにプログラム
WO2013181272A2 (fr) Système audio orienté objet utilisant un panoramique d'amplitude sur une base de vecteurs
JP2023164970A (ja) 情報処理装置および方法、並びにプログラム
US11122386B2 (en) Audio rendering for low frequency effects
WO2020080099A1 (fr) Dispositif et procédé de traitement de signaux et programme
EP3777242B1 (fr) Restitution spatiale de sons
JP2022536676A (ja) DirACベースの空間オーディオ符号化のためのパケット損失隠蔽
CN112133316A (zh) 空间音频表示和渲染
JP2011048279A (ja) 3次元音響符号化装置、3次元音響復号装置、符号化プログラム及び復号プログラム
WO2023074039A1 (fr) Dispositif, procédé et programme de traitement d'informations
WO2023074009A1 (fr) Dispositif, procédé et programme de traitement d'informations
WO2022262758A1 (fr) Système et procédé de rendu audio et dispositif électronique

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18784930

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20197026586

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019512429

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112019020887

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 2018784930

Country of ref document: EP

Effective date: 20191113

ENP Entry into the national phase

Ref document number: 112019020887

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20191004