WO2015102452A1 - Method and apparatus for improved ambisonic decoding - Google Patents

Method and apparatus for improved ambisonic decoding

Info

Publication number
WO2015102452A1
Authority
WO
WIPO (PCT)
Prior art keywords
ambisonic
signals
audio signal
independent subcomponents
speaker
Prior art date
Application number
PCT/KR2015/000072
Other languages
French (fr)
Inventor
Hossein Najaf-Zadeh
Yeshwant MUTHUSAMY
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Priority to EP15733320.4A priority Critical patent/EP3090574B1/en
Priority to KR1020167021086A priority patent/KR101862356B1/en
Publication of WO2015102452A1 publication Critical patent/WO2015102452A1/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0316 Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
    • G10L21/0364 Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/0212 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using orthogonal transformation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0316 Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
    • G10L21/0324 Details of processing therefor
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/06 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being correlation coefficients
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band

Definitions

  • This disclosure relates generally to ambisonic decoding. More specifically, this disclosure relates to improved ambisonic decoding by masking inaudible sounds and using independent component analysis.
  • Ambisonics is an effective technique for encoding and reconstructing sound fields. The technique is based on the orthogonal decomposition of a sound field in spherical coordinates in 3D space, or a cylindrical decomposition in 2D space. In the decoding process, the ambisonic signals are decoded to produce speaker signals. The higher the ambisonics order, the finer the reconstruction of the sound field.
  • Moreover, the complexity of the sound field plays a key role in the quality of the reconstructed sound field for a given ambisonics order.
  • A less complex sound field might be well described by low-order ambisonics, whereas a more complex sound field requires Higher-Order Ambisonics (HOA) to be reconstructed with high quality.
  • A complex sound field contains many simultaneously active sources (either localized or distributed). If at any time instance (or in a frequency band) there are few active sources, lower-order ambisonics is able to describe and encode the sound field.
  • This disclosure provides a method and apparatus for improved ambisonic decoding.
  • According to an embodiment of the present disclosure, improved ambisonic decoding by masking inaudible sounds and using independent component analysis may be provided.
  • FIGURE 1 illustrates an example computing system 100 according to this disclosure.
  • FIGURES 2 and 3 illustrate example devices in a computing system according to this disclosure.
  • FIGURE 4 illustrates a block diagram of a sub-band based ambisonic decoder according to some embodiments of the current disclosure.
  • FIGURE 5 illustrates a block diagram of an ambisonic decoder using a front-end auditory masking processor according to some embodiments of the current disclosure.
  • FIGURE 6 illustrates a block diagram of an ambisonic decoder using a front-end auditory masking processor and ICA according to some embodiments of the current disclosure.
  • FIGURE 7 illustrates a block diagram of an ambisonic decoder using speaker-specific smoothing factors according to some embodiments of the current disclosure.
  • FIGURE 8 illustrates a process for managing an audio signal in accordance with an embodiment of this disclosure.
  • In a first embodiment, this disclosure provides an audio receiver.
  • The audio receiver includes a memory configured to store an audio signal and processing circuitry coupled to the memory.
  • The processing circuitry is configured to receive the audio signal.
  • The audio signal comprises a plurality of ambisonic components.
  • The processing circuitry is also configured to separate the audio signal into a plurality of independent subcomponents. Each of the independent subcomponents is from a different source. Each of the plurality of ambisonic components is split into the independent subcomponents.
  • The processing circuitry is also configured to decode each of the independent subcomponents.
  • The processing circuitry is also configured to combine each of the decoded independent subcomponents into a speaker signal.
  • In a second embodiment, this disclosure provides a method for managing an audio signal.
  • The method includes receiving the audio signal.
  • The audio signal comprises a plurality of ambisonic components.
  • The method also includes separating the audio signal into a plurality of independent subcomponents. Each of the independent subcomponents is from a different source. Each of the plurality of ambisonic components is split into the independent subcomponents.
  • The method also includes decoding each of the independent subcomponents.
  • The method also includes combining each of the decoded independent subcomponents into a speaker signal.
  • In a third embodiment, this disclosure provides a non-transitory computer readable medium embodying a computer program.
  • The computer program includes computer readable program code that, when executed, causes at least one processing device to receive the audio signal.
  • The audio signal comprises a plurality of ambisonic components.
  • The computer program includes computer readable program code that, when executed, also causes at least one processing device to separate the audio signal into a plurality of independent subcomponents. Each of the independent subcomponents is from a different source. Each of the plurality of ambisonic components is split into the independent subcomponents.
  • The computer program includes computer readable program code that, when executed, also causes at least one processing device to decode each of the independent subcomponents.
  • The computer program includes computer readable program code that, when executed, also causes at least one processing device to combine each of the decoded independent subcomponents into a speaker signal.
  • The term “couple” and its derivatives refer to any direct or indirect communication between two or more elements, whether or not those elements are in physical contact with one another.
  • The terms “transmit,” “receive,” and “communicate,” as well as derivatives thereof, encompass both direct and indirect communication.
  • The term “or” is inclusive, meaning and/or.
  • The term “controller” means any device, system, or part thereof that controls at least one operation. Such a controller may be implemented in hardware or a combination of hardware and software and/or firmware. The functionality associated with any particular controller may be centralized or distributed, whether locally or remotely.
  • The phrase “at least one of,” when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed.
  • For example, “at least one of: A, B, and C” includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.
  • Various functions described below can be implemented or supported by one or more computer programs, each of which is formed from computer readable program code and embodied in a computer readable medium.
  • The terms “application” and “program” refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in suitable computer readable program code.
  • The phrase “computer readable program code” includes any type of computer code, including source code, object code, and executable code.
  • The phrase “computer readable medium” includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory.
  • A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals.
  • A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.
  • FIGURES 1 through 8, discussed below, and the various embodiments used to describe the principles of the present invention in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of this disclosure may be implemented in any suitably arranged device or system.
  • FIGURE 1 illustrates an example computing system 100 according to this disclosure.
  • The embodiment of the computing system 100 shown in FIGURE 1 is for illustration only. Other embodiments of the computing system 100 could be used without departing from the scope of this disclosure.
  • The system 100 includes a network 102, which facilitates communication between various components in the system 100.
  • The network 102 may communicate Internet Protocol (IP) packets, frame relay frames, Asynchronous Transfer Mode (ATM) cells, or other information between network addresses.
  • The network 102 may include one or more local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs), all or a portion of a global network such as the Internet, or any other communication system or systems at one or more locations.
  • The network 102 facilitates communications between at least one server 104 and various client devices 106-114.
  • Each server 104 includes any suitable computing or processing device that can provide computing services for one or more client devices.
  • Each server 104 could, for example, include one or more processing devices, one or more memories storing instructions and data, and one or more network interfaces facilitating communication over the network 102.
  • Each client device 106-114 represents any suitable computing or processing device that interacts with at least one server or other computing device(s) over the network 102.
  • The client devices 106-114 include a desktop computer 106, an audio receiver 107, a mobile telephone or smartphone 108, a personal digital assistant (PDA) 110, a laptop computer 112, and a tablet computer 114.
  • However, any other or additional client devices could be used in the computing system 100.
  • In this example, client devices 108-114 communicate indirectly with the network 102.
  • The client devices 108-110 communicate via one or more base stations 116, such as cellular base stations or eNodeBs.
  • The client devices 112-114 communicate via one or more wireless access points 118, such as IEEE 802.11 wireless access points. Note that these are for illustration only and that each client device could communicate directly with the network 102 or indirectly with the network 102 via any suitable intermediate device(s) or network(s).
  • The audio receiver 107 can receive an audio signal from any one of the client devices 106-114.
  • The audio receiver 107 can also receive an audio signal from the Internet through the network 102.
  • The audio receiver 107 can send speaker signals directly to speakers 120 and 122 through a wired network.
  • Alternatively, the audio receiver 107 can send speaker signals indirectly to speakers 120 and 122 through a wireless network.
  • Although FIGURE 1 illustrates one example of a computing system 100, the system 100 could include any number of each component in any suitable arrangement.
  • In general, computing and communication systems come in a wide variety of configurations, and FIGURE 1 does not limit the scope of this disclosure to any particular configuration.
  • While FIGURE 1 illustrates one operational environment in which the various features disclosed in this patent document can be used, these features could be used in any other suitable system.
  • FIGURES 2 and 3 illustrate example devices in a computing system according to this disclosure.
  • In particular, FIGURE 2 illustrates an example receiver 200, and FIGURE 3 illustrates an example client device 300.
  • The receiver 200 could represent the audio receiver 107 in FIGURE 1, and the client device 300 could represent one or more of the client devices 106-114 in FIGURE 1.
  • The receiver 200 includes a bus system 205, which supports communication between at least one processing device 210, at least one storage device 215, at least one communications unit 220, and at least one input/output (I/O) unit 225.
  • The processing device 210 executes instructions that may be loaded into a memory 230.
  • The processing device 210 may include any suitable number(s) and type(s) of processors or other devices in any suitable arrangement.
  • Example types of processing devices 210 include microprocessors, microcontrollers, digital signal processors, field programmable gate arrays, application specific integrated circuits, and discrete circuitry.
  • The memory 230 and a persistent storage 235 are examples of storage devices 215, which represent any structure(s) capable of storing and facilitating retrieval of information (such as data, program code, and/or other suitable information on a temporary or permanent basis).
  • The memory 230 may represent a random access memory or any other suitable volatile or non-volatile storage device(s).
  • The persistent storage 235 may contain one or more components or devices supporting longer-term storage of data, such as a read-only memory, hard drive, Flash memory, or optical disc.
  • The communications unit 220 supports communications with other systems or devices.
  • For example, the communications unit 220 could include a network interface card or a wireless transceiver facilitating communications over the network 102.
  • The communications unit 220 may support communications through any suitable physical or wireless communication link(s).
  • The I/O unit 225 allows for input and output of data.
  • For example, the I/O unit 225 may provide a connection for user input through a keyboard, mouse, keypad, touchscreen, or other suitable input device.
  • The I/O unit 225 may also send output to a display, printer, or other suitable output device.
  • Although FIGURE 2 is described as representing the receiver 107 of FIGURE 1, the same or similar structure could be used in one or more of the speakers 120 and 122.
  • The client device 300 includes an antenna 305, a radio frequency (RF) transceiver 310, transmit (TX) processing circuitry 315, a microphone 320, and receive (RX) processing circuitry 325.
  • The client device 300 also includes a speaker 330, a main processor 340, an input/output (I/O) interface (IF) 345, a keypad 350, a display 355, and a memory 360.
  • The memory 360 includes a basic operating system (OS) program 361 and one or more applications 362.
  • The RF transceiver 310 receives, from the antenna 305, an incoming RF signal transmitted by another component in a system.
  • The RF transceiver 310 down-converts the incoming RF signal to generate an intermediate frequency (IF) or baseband signal.
  • The IF or baseband signal is sent to the RX processing circuitry 325, which generates a processed baseband signal by filtering, decoding, and/or digitizing the baseband or IF signal.
  • The RX processing circuitry 325 transmits the processed baseband signal to the speaker 330 (such as for voice data) or to the main processor 340 for further processing (such as for web browsing data).
  • The TX processing circuitry 315 receives analog or digital voice data from the microphone 320 or other outgoing baseband data (such as web data, e-mail, or interactive video game data) from the main processor 340.
  • The TX processing circuitry 315 encodes, multiplexes, and/or digitizes the outgoing baseband data to generate a processed baseband or IF signal.
  • The RF transceiver 310 receives the outgoing processed baseband or IF signal from the TX processing circuitry 315 and up-converts the baseband or IF signal to an RF signal that is transmitted via the antenna 305.
  • The main processor 340 can include one or more processors or other processing devices and execute the basic OS program 361 stored in the memory 360 in order to control the overall operation of the client device 300.
  • For example, the main processor 340 could control the reception of forward channel signals and the transmission of reverse channel signals by the RF transceiver 310, the RX processing circuitry 325, and the TX processing circuitry 315 in accordance with well-known principles.
  • The main processor 340 includes at least one microprocessor or microcontroller.
  • The main processor 340 is also capable of executing other processes and programs resident in the memory 360.
  • The main processor 340 can move data into or out of the memory 360 as required by an executing process.
  • The main processor 340 is configured to execute the applications 362 based on the OS program 361 or in response to signals received from external devices or an operator.
  • The main processor 340 is also coupled to the I/O interface 345, which provides the client device 300 with the ability to connect to other devices such as laptop computers and handheld computers.
  • The I/O interface 345 is the communication path between these accessories and the main processor 340.
  • The main processor 340 is also coupled to the keypad 350 and the display unit 355.
  • The operator of the client device 300 can use the keypad 350 to enter data into the client device 300.
  • The display 355 may be a liquid crystal display or other display capable of rendering text and/or at least limited graphics, such as from web sites.
  • The memory 360 is coupled to the main processor 340.
  • Part of the memory 360 could include a random access memory (RAM), and another part of the memory 360 could include a Flash memory or other read-only memory (ROM).
  • Although FIGURES 2 and 3 illustrate examples of devices in a computing system, various changes may be made to FIGURES 2 and 3.
  • For example, the main processor 340 could be divided into multiple processors, such as one or more central processing units (CPUs) and one or more graphics processing units (GPUs).
  • In addition, while FIGURE 3 illustrates the client device 300 configured as a mobile telephone or smartphone, client devices could be configured to operate as other types of mobile or stationary devices.
  • Client devices and receivers can come in a wide variety of configurations, and FIGURES 2 and 3 do not limit this disclosure to any particular client device or receiver.
  • The embodiments disclosed herein can be implemented with any off-the-shelf speaker.
  • The speakers can be wired or wireless.
  • The speakers can also be speakers included in other devices, such as a television, mobile phone, mobile device, and the like.
  • The speakers can be complex, with their own processors or controllers, or simple, without any complex receivers, processors, or controllers.
  • Through spherical harmonic decomposition, a sound field can be fully described by a set of spherical harmonic components (i.e., basis functions in spherical coordinates).
  • The sound pressure can then be expressed as follows:
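  • The original equation is not reproduced in this extraction. As a hedged reconstruction from the standard ambisonics literature (the patent's exact notation may differ), the expansion takes the form below, where j_n is the spherical Bessel function, Y_n^m are the spherical harmonic basis functions, and B_n^m are the ambisonic coefficients; truncating at order N yields (N+1)^2 ambisonic signals in 3D.

```latex
% Hedged reconstruction of the sound-pressure expansion (standard form,
% not copied from the patent image):
p(kr, \theta, \phi) = \sum_{n=0}^{\infty} i^{n} j_{n}(kr)
    \sum_{m=-n}^{n} B_{n}^{m}\, Y_{n}^{m}(\theta, \phi)
```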
  • The main advantage of HOA over other reproduction techniques is its flexibility in using arbitrary speaker configurations to recreate sound fields.
  • The number of speakers required to reproduce the sound field with minimal error can be greater than the number of ambisonic signals.
  • If the number of playback speakers is much larger than the ambisonics order, the quality of the reconstructed sound field deteriorates.
  • The reason is that the driving signals for the speakers are found from the ambisonic signals by solving an under-determined linear equation. As such, the error in the reconstruction of the sound field is proportional to the difference between the number of speakers and the number of ambisonic signals.
  • One or more embodiments of this disclosure provide techniques to upscale lower-order ambisonics using compressed sensing (CS) and reduce the gap between the number of speakers and the number of ambisonic signals.
  • CS is a method that allows accurate recovery of signals from sub-Nyquist-rate sampling.
  • CS relies on representing a physical phenomenon (i.e., a signal) with a sparse set of basis functions, i.e., mapping the signal onto an over-complete set of basis functions to generate a sparse signal representation.
  • If the representation of a signal is sparse in a certain domain, then it is optimal to measure the signal in a domain whose basis functions are incoherent with the basis functions describing the sparse signal domain. The original signal can then be recovered through an optimization process based on the L1 norm, which finds the signal with the smallest L1 norm.
  • The Nyquist rate is a sufficient condition for the perfect reconstruction of a signal, but it is not always a necessary one.
  • Under the sparsity conditions above, the observations (i.e., measurements) provided by a sub-Nyquist sampling process contain enough information to perfectly describe the observed phenomenon (e.g., a signal, a sound field, and the like).
  • The signal can be described as a sum of a small number of elementary functions (i.e., basis functions in a certain domain). If x denotes the vector of observations (i.e., measurements) of a sparse vector s, it can then be expressed as follows:
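  • The equation appeared as an image in the original; a hedged reconstruction in generic CS notation (x: observations, A: measurement matrix, s: sparse vector) is:

```latex
% Hedged reconstruction: linear measurement model,
x = A\,s, \qquad x \in \mathbb{R}^{M},\; s \in \mathbb{R}^{N},\; M < N
% and the L1-based recovery mentioned above:
\hat{s} = \arg\min_{s} \|s\|_{1} \quad \text{subject to} \quad x = A\,s
```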
  • Here, s is a sparse vector with many zero coefficients. If K coefficients are non-zero, the signal s is called K-sparse.
  • An example of a sparsity domain is the Fourier domain for sinusoidal time signals. Although a pure tone occupies the whole time domain, it is perfectly described by a vector with only one non-zero coefficient in the frequency domain.
  • In practice, the L1-norm is used to measure the sparsity of the vector s instead of the L0-norm (i.e., the norm that counts non-zero coefficients), since L1 minimization is computationally tractable.
  • One or more embodiments of this disclosure provide an application of compressed sensing to ambisonic decoding.
  • In ambisonics, a sound field is mapped onto orthogonal spherical harmonics. This orthogonal projection produces the ambisonic signals that are used to reconstruct the sound field.
  • The decoding of the ambisonic signals is performed through a re-encoding process wherein the unknown speaker signals are encoded to produce the same ambisonic signals, as follows:
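  • Equation (4) appeared as an image in the original; a hedged reconstruction in generic notation (B: ambisonic signals, g: unknown speaker signals, C: re-encoding matrix holding the spherical harmonics evaluated at the speaker directions) is:

```latex
% Hedged reconstruction of equation (4): the speaker signals must
% re-encode to the given ambisonic signals.
B = C\,g \qquad \text{(4)}
```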
  • Since the number of speaker signals is larger than the number of ambisonic signals, equation (4) has an infinite number of solutions.
  • The decoding process therefore involves an optimization procedure to minimize a certain norm of the speaker signals.
  • The classical optimization is based on the L2-norm.
  • A more recent approach performs the optimization by applying CS to solve equation (4) using the L1-norm (or the L12-norm, a combination of the L2 and L1 norms that sparsifies the speaker signals only in space).
  • The application of compressed sensing requires that the observed phenomenon/signal be sparse in a certain domain.
  • The spherical harmonic domain is not a good candidate in terms of sparsity: there is no reason for the spherical harmonic expansion of a sound field to be sparse unless all of the sound sources are located in very particular directions. On the other hand, if the sound field results from a few sound sources, it is likely to be described by a small number of coefficients in some sparsity domain.
  • The speaker signals can then be found by solving equation (4).
  • The classical manner in which this linear problem is solved is to calculate the inverse or pseudo-inverse of the re-encoding matrix.
  • This method provides a unique and exact solution even when the system is under-determined, in other words, when the number of speaker signals is greater than the number of ambisonic signals (spherical harmonic components).
  • This solution is referred to as the least-squares solution, as it provides the lowest energy among all of the solutions.
  • The least-squares solution tends to distribute the energy evenly among the speakers. This becomes a problem when the number of speakers is greater than the number of spherical harmonic components being used for the expansion of the sound field.
  • Many speakers turn on, driving similar signals, which leads to spectral distortion that reduces the size of the sweet spot.
  • In contrast, one or more embodiments of this disclosure provide a larger sweet spot when a minimal number of speakers is used to recreate a given source. This implies the benefit of using a CS approach to make the speaker signals sparse.
  • An application of compressive sampling methods is to solve the decoding problem by finding a solution to the following optimization problem:
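  • The optimization problem appeared as an image in the original; a hedged reconstruction in the notation of equation (4) (the patent's exact norm may be the L1- or the mixed L12-norm) is:

```latex
% Find the sparsest speaker signals consistent with the ambisonic signals.
\hat{g} = \arg\min_{g} \|g\|_{1} \quad \text{subject to} \quad C\,g = B
```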
  • The accuracy of a CS-based approach to the upscaling of low-order ambisonics depends on the sparsity level of the sound field, meaning that at any time instance there should be only a few active sound sources in the field.
  • Methods to increase sparsity in sound fields are disclosed in the following embodiments.
  • One or more embodiments of this disclosure provide upscaling ambisonic signals to higher orders.
  • First, the ambisonic signals are decoded through a re-encoding process using compressed sensing techniques (i.e., L12 optimization). Once the optimal speaker signals are found, the higher-order ambisonic signals can be obtained as follows:
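  • The equation appeared as an image in the original; a hedged reconstruction in the notation above, with C' a taller re-encoding matrix containing the higher-order spherical harmonics, is:

```latex
% Project the optimal speaker signals onto the higher-order harmonics.
B' = C'\,\hat{g}
```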
  • Each column of the higher-order projection matrix contains nine spherical harmonics (for 3D mapping).
  • This approach can be used in frame-based ambisonic decoding to upscale the lower-order ambisonic signals locally.
  • The optimality of the decoded speaker signals contributes to the upscaling process. That is the reason ambisonic decoding is performed using compressed sensing, which provides the sparsest optimal speaker signals. If the original sound field is sparse, the optimal decoded speaker signals accurately represent the original sound field, and as such higher-order ambisonics can be generated from the optimal speaker signals. This again confirms that the upscaling process based on compressed sensing is effective on sparse sound fields.
  • One way to increase the sparsity of sound fields is to decompose the ambisonic signals into many sub-bands to increase the sparsity of the inputs to the CS-based optimization procedure.
  • Another way to increase sparsity of a given sound field is to apply an auditory masking pattern to the ambisonic signals and remove the inaudible parts of the signals. This way, the overlap between the sources in the sound field would be reduced and as such the sparsity of the sound field would be increased.
  • One or more embodiments of this disclosure provide techniques for increasing the sparsity of sound fields (i.e., sub-band decomposition of ambisonic signals and the perceptual sparsity approach). Moreover, the issue of discontinuity of speaker signals is discussed, and one or more embodiments of this disclosure provide a new speaker-specific smoothing technique. One or more embodiments of this disclosure also provide perception-based techniques to save on the driving power for speaker signals.
  • FIGURE 4 illustrates a block diagram of a sub-band based ambisonic decoder according to some embodiments of the current disclosure.
  • The embodiment of the sub-band based ambisonic decoder illustrated in FIGURE 4 is for illustration only. However, sub-band based ambisonic decoders come in a wide variety of configurations, and FIGURE 4 does not limit the scope of this disclosure to any particular implementation of a sub-band based ambisonic decoder.
  • FIGURE 4 shows a block diagram of ambisonic decoding and upscaling based on the subband decomposition of ambisonic signals.
  • The ambisonic signals are passed through an analysis filterbank 402, followed by the decoding of the subband signals 404.
  • The decoded speaker signals are smoothed using speaker-specific smoothing factors 406 and then passed through a synthesis filterbank 408 to generate the full-band speaker signals.
  • The speaker signals are projected onto spherical harmonics corresponding to higher-order ambisonic signals 410.
  • The analysis and synthesis filterbanks 402 and 408 form a perfect-reconstruction system, with no loss in quality due to the subband processing of the ambisonic signals; a sketch of such a round trip follows.
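  • The patent does not name a specific filterbank. As one illustrative stand-in, an STFT analysis/synthesis pair with a COLA-compliant window forms a perfect-reconstruction system; all names and parameters in the sketch below are assumptions.

```python
# Minimal sketch of a perfect-reconstruction analysis/synthesis pair for
# sub-band processing of ambisonic signals (illustrative only).
import numpy as np
from scipy.signal import stft, istft

fs = 48_000
ambisonic = np.random.randn(4, fs)   # placeholder first-order signals (W, X, Y, Z)

# Analysis filterbank: split each ambisonic channel into sub-bands.
f, t, subbands = stft(ambisonic, fs=fs, nperseg=1024, noverlap=512)

# ... per-sub-band decoding of speaker signals would happen here ...

# Synthesis filterbank: the 50%-overlap Hann window satisfies COLA,
# so the round trip reconstructs the input to near machine precision.
_, reconstructed = istft(subbands, fs=fs, nperseg=1024, noverlap=512)
print(np.max(np.abs(reconstructed[:, :fs] - ambisonic)))
```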
  • FIGURE 5 illustrates a block diagram of an ambisonic decoder using a front-end auditory masking processor according to some embodiments of the current disclosure.
  • The embodiment of the ambisonic decoder illustrated in FIGURE 5 is for illustration only. However, ambisonic decoders come in a wide variety of configurations, and FIGURE 5 does not limit the scope of this disclosure to any particular implementation of an ambisonic decoder.
  • In FIGURE 5, a controller can control a signal acquisition and ambisonic encoding unit 502, an auditory masking model 504, a process 506 for removal of masked signals, and an ambisonic decoder 508.
  • The encoding unit 502 provides ambisonic signals for masking.
  • The auditory masking model 504 can be one of many different models designed to mask different levels of inaudible parts of the ambisonic components.
  • The process 506 applies the model 504 to the signals from the encoding unit 502 to mask the inaudible parts.
  • The ambisonic decoder 508 decodes the masked ambisonic signals to produce the speaker signals.
  • Perceptual sparsity can be defined as exploiting human auditory masking effects to remove any inaudible parts of the ambisonic components. After removal of the inaudible parts, a sparser representation of the sound field is produced, which leads to more accurate decoding of the ambisonic signals. Moreover, the perceptually processed ambisonic signals require a lower bitrate (or less memory) for transmission (or storage).
  • Each ambisonic signal is split into frames of 30 milliseconds (the frame length can also be adapted to the short-term characteristics of the sound field).
  • An auditory masking pattern for each frame is calculated (any auditory model, such as MPEG psychoacoustic models 1 and 2, can be used in this operation).
  • A global masking pattern can be taken as the maximum masking threshold in each frequency bin. Other methods, such as linear or nonlinear summation of the masking power in the same frequency bin, can be used to find the global masking pattern.
  • An embodiment of this disclosure provides an approach to make sure that no audible part is removed: the ambisonic signals are compared against the global masking pattern in the frequency domain to remove the inaudible parts of the signals (i.e., spectral components below the masking threshold).
  • A sparsity measure in any domain can be the ratio of the number of non-zero samples to the total number of samples. In an example embodiment, up to 60% of the spectral components can be removed (i.e., set to zero), which increases the sparsity of the ambisonic signals and also saves considerable bandwidth in the transmission of the ambisonic signals. A sketch of this procedure follows.
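  • A minimal sketch of the perceptual-sparsity step, assuming a placeholder masking threshold (a real system would derive it from an auditory model, e.g., the MPEG psychoacoustic models, taking the per-bin maximum across ambisonic components); all names are illustrative.

```python
import numpy as np

def sparsify_frame(frame: np.ndarray, global_mask: np.ndarray) -> np.ndarray:
    """Zero spectral components that fall below the masking threshold."""
    spectrum = np.fft.rfft(frame)
    audible = np.abs(spectrum) ** 2 >= global_mask   # power vs. threshold
    return np.fft.irfft(spectrum * audible, n=len(frame))

fs = 48_000
frame_len = int(0.030 * fs)                 # 30 ms frames, per the text
ambisonic_frame = np.random.randn(frame_len)

# Placeholder global masking pattern (one threshold per frequency bin).
global_mask = np.full(frame_len // 2 + 1, 1e-3)

sparser_frame = sparsify_frame(ambisonic_frame, global_mask)
```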
  • FIGURE 5 depicts a block diagram of an ambisonic decoder using an auditory masking model 504 as the front-end processor to increase the sparsity of the ambisonic signals in the perceptual domain (i.e. perceptual sparsity).
  • The following simple example demonstrates the impact of the proposed technique on the accuracy of ambisonic decoding in a sparse sound field where only three active sources are present.
  • Three tonal sources at 1000 Hz, 1020 Hz, and 1040 Hz, with amplitudes of 1, 0.1, and 0.1, respectively, are placed at -150, -60, and 60 degrees, respectively, with reference to the positive direction of the vertical axis.
  • The weaker sources are masked by the strong source and cannot be heard.
  • The sources are mapped onto the spherical harmonic basis functions to produce the first-order ambisonic signals.
  • The ambisonic signals are decoded using the L12-norm to generate the speaker signals. In a first situation, the original ambisonic signals are decoded to produce the speaker signals.
  • In a second situation, an auditory masking pattern is produced from the ambisonic signals and then used to remove the inaudible parts of the ambisonic signals.
  • The processed ambisonic signals are then decoded to produce the speaker signals.
  • Twelve speakers are located at equal angles around a circle. In the first situation, the presence of the masked parts makes the decoding process less accurate, whereas in the second situation the removal of the masked parts highlights the presence of the dominant source and leads to accurate power estimation and perfect localization of that source.
  • In the first situation, the power of the dominant source spreads over a few speakers.
  • FIGURE 6 illustrates a block diagram of an ambisonic decoder using a front-end auditory masking processor and ICA according to some embodiments of the current disclosure.
  • The embodiment of the ambisonic decoder illustrated in FIGURE 6 is for illustration only. However, ambisonic decoders come in a wide variety of configurations, and FIGURE 6 does not limit the scope of this disclosure to any particular implementation of an ambisonic decoder.
  • In FIGURE 6, a controller can control an auditory masking model 602, a process 604 for removal of masked signals, an Independent Component Analysis (ICA) processor 606, and ambisonic decoders 608.
  • The auditory masking model 602 can be one of many different models designed to mask different levels of inaudible parts of the ambisonic components.
  • The process 604 applies the model 602 to the signals from an encoding unit to mask the inaudible parts.
  • The ICA processor 606 separates a complex dataset into independent subparts.
  • The ambisonic decoders 608 decode the separated signal sets to produce the speaker signals.
  • ICA is a statistical technique for decomposing a complex dataset into independent subparts. That technique can provide blind source separation to extract sources from linear mixtures of the sources.
  • ICA is used herein to decompose ambisonic signals into sparser signals. Increasing the sparsity of ambisonic signals improves the accuracy of compressed-sensing-based ambisonic decoding.
  • Ambisonic signals are linear mixtures of the independent sound sources in a given sound field.
  • An embodiment of this disclosure recognizes and takes into account that it can be difficult to estimate the impulse response filters linking each source to each microphone.
  • ICA can be under-, critically-, or over-determined. For example, if there are four sound sources in a given sound field, first-order ambisonics would be critically determined, as the number of ambisonic signals equals the number of sound sources. In this example, ICA would extract all the sound sources from the ambisonic signals. For M ambisonic signals and N sources, ICA can output N×M signals (N sets of M signals).
  • Each signal set contains the projection of one sound source onto the spherical harmonics (i.e., its contribution to each microphone recording). As such, one dominant source exists in each signal set, which is an ideal condition for compressed-sensing-based ambisonic decoding.
  • Each signal set is decoded and mapped onto the speakers. The total speaker signals are the superposition of the decoded speaker signals from each signal set, as sketched below.
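  • A hedged sketch of this front end, using FastICA from scikit-learn as a stand-in for whatever ICA variant an implementation might use; shapes and names are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

M, N, T = 4, 3, 48_000                # ambisonic channels, sources, samples
ambisonic = np.random.randn(T, M)     # placeholder B-format recording

ica = FastICA(n_components=N, whiten="unit-variance", random_state=0)
sources = ica.fit_transform(ambisonic)   # (T, N) estimated source signals
mixing = ica.mixing_                      # (M, N) estimated mixing matrix

# N signal sets, each (T, M): the projection of one source onto the
# spherical harmonics, i.e., its contribution to each ambisonic channel.
signal_sets = [np.outer(sources[:, i], mixing[:, i]) for i in range(N)]

# Each signal set would be decoded separately; the speaker signals are
# the superposition of the per-set decoder outputs.
```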
  • FIGURE 6 shows a block diagram of an ambisonic decoder using ICA as the front-end processor.
  • FIGURE 7 illustrates a block diagram of an ambisonic decoder using speaker-specific smoothing factors according to some embodiments of the current disclosure.
  • The embodiment of the ambisonic decoder illustrated in FIGURE 7 is for illustration only.
  • However, ambisonic decoders come in a wide variety of configurations, and FIGURE 7 does not limit the scope of this disclosure to any particular implementation of an ambisonic decoder.
  • In FIGURE 7, a controller can control the reception of overlapping blocks 702 of ambisonic signals, a frame-based L12-norm decoder 704, and smoothing units 706.
  • Ambisonic decoding based on the L2-norm produces the speaker signals with minimum energy.
  • The energy is distributed evenly over the speakers, which can deteriorate the localization of sources.
  • Compared to the L2-norm, the L1-norm reconstructs a sound field with higher quality.
  • The decoding process is performed locally: the ambisonic signals are split into frames, and the speaker signals are found through the decoding of the frames of the ambisonic signals, adapting the decoding matrix to the local characteristics of the sound field. Local decoding of the ambisonic signals results in some discontinuity in the speaker signals.
  • One or more embodiments of this disclosure provide two measures to avoid the audible discontinuity in the speaker signals once the frames of the speaker signals are concatenated.
  • In the first technique, to mitigate the so-called block-edge effects in frame-based processing, the frames of the ambisonic signals and the speaker signals are windowed and overlapped by 50%.
  • The speaker signals are then found through the overlap-add method. Although the overlap-add method reduces discontinuity in the speaker signals, other measures are warranted to avoid any audible distortion in the decoded signals.
  • In addition, one or more embodiments of this disclosure provide a rule-based method to further prevent any discontinuity at the frame edges and perform a smooth decoding process. Recently, a smoothing procedure has been reported wherein a forgetting factor is applied to the decoding matrix to smooth out the decoding process as follows:
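  • The recursion appeared as an image in the original; a hedged reconstruction with our symbol names (α: forgetting factor, D_T: decoding matrix for time window T, D̃_T: smoothed decoding matrix) is:

```latex
% Hedged reconstruction of the reported forgetting-factor smoothing.
\tilde{D}_{T} = \alpha\,\tilde{D}_{T-1} + (1-\alpha)\,D_{T}
```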
  • Here, α is the forgetting factor, which acts to smooth out any sharp changes in the decoding matrix between time windows; D_{T-1} and D_T are the decoding matrices in the previous and current time windows, respectively; and D̃_T is the smoothed decoding matrix for the current time window (symbol names as in the reconstruction above).
  • The speaker signals for the Tth time window are then obtained from:
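  • In the same hedged notation, with B_T denoting the Tth frame of ambisonic signals:

```latex
s_{T} = \tilde{D}_{T}\, B_{T}
```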
  • In contrast, this embodiment smooths out the decoding process for each speaker signal separately, based on pre-defined rules.
  • To this end, a speaker-specific smoothing factor is defined for the ith row of the decoding matrix (i.e., the row used to find the ith speaker signal).
  • Rule-based techniques ensure that the smoothing is applied only to the necessary extent.
  • The decoding vectors (rows of the decoding matrix) in successive frames that correspond to high-power speaker signals change smoothly, and as such there is no need for further smoothing of those decoding vectors (further smoothing would result in a less optimal decoding matrix).
  • The slow evolution of the decoding vectors manifests itself in a high correlation between the decoding vectors in successive frames and also in a larger magnitude of the decoding vector.
  • Conversely, more smoothing is needed when the variation in the decoding vectors (in terms of the magnitude and the correlation of the decoding vectors in successive frames) is large.
  • In another embodiment, a single smoothing factor α is determined based on the correlation between the current and previous frames of the speaker signals. Due to the 50% overlap between the current and previous frames, the correlation of their overlapping segments is calculated as follows:
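  • The correlation formula appeared as an image in the original; a hedged reconstruction (notation ours; L is the frame length, and the normalization is one common choice) is:

```latex
\rho_{m} = \frac{\langle \mathrm{Sp}_{m}[1{:}L/2],\ \mathrm{SpOld}_{m}[L/2{+}1{:}L] \rangle}
                {\|\mathrm{Sp}_{m}[1{:}L/2]\|\ \|\mathrm{SpOld}_{m}[L/2{+}1{:}L]\|}
```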
  • Here, Sp and SpOld are the speaker signals in the current and previous frames, and ρ_m is the correlation between the overlapping parts of speaker signal m in the current and previous frames (i.e., the first half of the current frame and the second half of the previous frame).
  • The smoothing factor is then calculated as follows:
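  • The patent's exact mapping from the correlations to the smoothing factor is not reproduced here. One plausible illustrative choice, consistent with the rule that low correlation forces more smoothing, is shown below; this is an assumption, not the patent's stated formula.

```latex
% Illustrative assumption: more weight on the previous decoding matrix
% when the frames correlate poorly.
\alpha = 1 - \min_{m} \rho_{m}
```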
  • In yet another embodiment, the smoothing factor α is used differently than in the above embodiments. If the overlapping parts of the speaker signals are highly correlated, the previous decoding matrix can be reused; otherwise, the decoding matrix is a linear combination of the decoding matrices for the current and previous frames. This embodiment produces no discontinuity in the speaker signals.
  • An embodiment provides power savings in ambisonics decoding.
  • In ambisonics, a sound field is mapped onto orthogonal spherical harmonics. This orthogonal projection produces the ambisonic signals that are used to reconstruct the sound field.
  • The decoding of the ambisonic signals is performed through a re-encoding process wherein the unknown speaker signals are encoded to produce the same ambisonic signals.
  • The decoding process involves an optimization procedure to minimize a certain norm of the speaker signals.
  • The classical optimization is based on the L2-norm, which produces the speaker signals with minimum energy.
  • The drawback of this norm is that it distributes the energy somewhat evenly over the speakers and, as such, deteriorates the localization of sources.
  • Another approach to performing the optimization is based on the L1-norm (or the L12-norm, a combination of the L2 and L1 norms that sparsifies the speaker signals only in space). Compared to the L2-norm, the L1-norm reconstructs a sound field with higher quality, at the cost of larger power in the speaker signals.
  • An embodiment of this disclosure provides a method based on human auditory masking effects to reduce the power of the speaker signals while using the L12-norm to reconstruct sound fields with high quality.
  • An embodiment of this disclosure provides three approaches to modify the speaker signals in order to reduce the driving power.
  • In the first approach, the argument of the optimization (i.e., the candidate speaker signals) is compared against an auditory masking pattern produced by the speaker signals; only the audible parts are kept, and the inaudible parts are discarded.
  • This approach produces only audible signals, and all of the resulting speaker signals contribute to the reconstruction of the sound field.
  • That is, the argument of the above optimization is replaced by a version that contains only its audible parts. Since a masking pattern for each speaker signal would need to be calculated at each step of the optimization, one or more embodiments of this disclosure provide an approach wherein the equality constraint in the optimization procedure is replaced by an inequality constraint.
  • Normally, the speaker signals must produce exactly the same ambisonic signals once mapped onto the spherical harmonic basis functions.
  • Instead, an embodiment of this disclosure modifies the optimization process to find speaker signals whose projection onto the spherical harmonic basis functions only approximates the original ambisonic signals.
  • The difference between the projection of the speaker signals and the original ambisonic signals may not be audible, as long as that difference stays below the auditory masking pattern induced by the original ambisonic signals.
  • The optimization procedure is modified as follows:
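  • The modified problem appeared as an image in the original; a hedged reconstruction in the earlier notation, with M_B denoting the masking pattern induced by the original ambisonic signals B (element-wise inequality in the frequency domain), is:

```latex
\hat{g} = \arg\min_{g} \|g\|_{1,2} \quad \text{subject to} \quad
\left| C\,g - B \right| \le M_{B}
```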
  • Here, M_B (in the notation of the reconstruction above) is the masking pattern generated by the ambisonic signals. If the masking pattern is calculated in the frequency domain (i.e., simultaneous masking effects), the optimization is performed in the frequency domain by transforming the ambisonic signals to the frequency domain and then solving the above optimization to find the Fourier transform of the speaker signals.
  • The speaker signals in the time domain are then the inverse Fourier transform of the speaker signal spectra.
  • An embodiment of this disclosure provides a second approach that does not modify the optimization procedure.
  • In this approach, once the resulting speaker signals are found, an auditory masking pattern induced by the speaker signals is determined. All the speaker signals are then compared against the masking pattern, and only the audible parts are kept.
  • Each speaker signal is split into frames of 30 msec (the frame length can be adapted to the short-term characteristics of the sound field).
  • An auditory masking pattern for each frame is calculated (any auditory model, such as MPEG psychoacoustic models 1 and 2, can be used in this step).
  • A global masking pattern is found as the maximum masking threshold in each frequency bin.
  • Other methods, such as linear or nonlinear summation of the masking power in the same frequency bin, can be used to find the global masking pattern.
  • The approach can be made less aggressive to reduce the chances that an audible part is removed.
  • The speaker signals are compared against the global masking pattern in the frequency domain to remove the inaudible parts of the signals (i.e., spectral components below the masking threshold).
  • The perceptually processed frames are windowed and overlapped by 50% to avoid any discontinuity in the speaker signals due to the independent processing of the frames.
  • This proposed method can reduce the driving power by up to 10% (depending on the sound field).
  • In summary, one or more embodiments provide compressed sensing techniques for upscaling HOA sound fields to higher orders. Upscaled sound fields have a greater spatial resolution, which allows more speakers to be used during playback, resulting in a larger sweet spot and improved sound quality.
  • A number of embodiments, including perceptual sparsity, speaker-specific smoothing of speaker signals, and perception-based power saving, are introduced to improve ambisonic decoding.
  • HOA upscaling based on compressed sensing techniques allows reproducing sound fields similar to those reconstructed from the original HOA.
  • Because the effectiveness of ambisonic decoding based on compressed sensing depends on the instantaneous sparsity of the sound field (i.e., a few active sound sources in the sound field), one can exploit the short-term sparsity of the sound field in the CS-based approach to enhance ambisonic decoding, as opposed to classical ambisonic decoding based on the least-norm solution.
  • The ambisonic components can thereby be upscaled to higher-order ambisonics.
  • The success of the upscaling technique is affected by the complexity of a given sound field. If a few sound sources are located in a small subspace (e.g., a 2D surface), the upscaled ambisonics is a good replica of the original ambisonics. Otherwise, the quality of sound fields reconstructed from upscaled ambisonics would likely degrade. For instance, upscaling first-order ambisonics to second order in 2D space requires estimating two missing ambisonic signals, whereas in 3D space five signals must be estimated. Moreover, the CS-based HOA upscaling is effective if the original sound field is sparse (i.e., a small number of sources are active at any time instance).
  • Ideally, the CS measurement matrix (here, the spherical harmonics) should be incoherent with the sparsity basis to reduce the error in recovering a signal of higher dimension (e.g., the speaker signals).
  • A matrix with random components would be a good choice for sensing a higher-dimensional signal. That constraint is not satisfied here, since a given sound field is not sampled randomly (i.e., at random intervals and in random directions), and as such the spherical harmonics would limit the effectiveness of the CS-based HOA method.
  • FIGURE 8 illustrates a process for managing an audio signal in accordance with an embodiment of this disclosure.
  • A controller here may represent the processing device 210, and a memory element may be the memory 230, in FIGURE 2.
  • The embodiment of the process shown in FIGURE 8 is for illustration only. Other embodiments of the process could be used without departing from the scope of this disclosure.
  • The controller receives the audio signal.
  • The audio signal includes a plurality of ambisonic components.
  • The audio signal can be received from a number of sources, such as, but not limited to, a mobile device, the Internet, a compact disc, and the like.
  • The controller separates the audio signal into a plurality of independent subcomponents.
  • Each of the independent subcomponents is from a different source.
  • Each of the plurality of ambisonic components is split into the independent subcomponents.
  • The controller decodes each of the independent subcomponents.
  • The controller combines each of the decoded independent subcomponents into a speaker signal, as sketched below.
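  • A compact sketch of this process under stated assumptions: separate and decode_subcomponent are placeholders for any separation technique (e.g., the ICA front end above) and any ambisonic decoder (e.g., the CS-based decoder described earlier).

```python
import numpy as np

def manage_audio(ambisonic, separate, decode_subcomponent):
    """Receive -> separate -> decode -> combine, per FIGURE 8.

    ambisonic: (samples, channels) array of ambisonic components.
    separate: returns a list of independent subcomponent arrays,
              each (samples, channels), one per source.
    decode_subcomponent: maps one subcomponent to speaker signals.
    """
    subcomponents = separate(ambisonic)
    decoded = [decode_subcomponent(s) for s in subcomponents]
    return np.sum(decoded, axis=0)   # superpose into the speaker signals
```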

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Mathematical Physics (AREA)
  • Stereophonic System (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

An embodiment of this disclosure provides an audio receiver. The audio receiver includes a memory configured to store an audio signal and processing circuitry coupled to the memory. The processing circuitry is configured to receive the audio signal. The audio signal comprises a plurality of ambisonic components. The processing circuitry is also configured to separate the audio signal into a plurality of independent subcomponents. Each of the independent subcomponents is from a different source. Each of the plurality of ambisonic components is split into the independent subcomponents. The processing circuitry is also configured to decode each of the independent subcomponents. The processing circuitry is also configured to combine each of the decoded independent subcomponents into a speaker signal.

Description

METHOD AND APPARATUS FOR IMPROVED AMBISONIC DECODING
This disclosure relates generally to ambisonic decoding. More specifically, this disclosure relates to improved ambisonic decoding by masking inaudible sounds and using independent component analysis.
Ambisonics is an effective technique for encoding and reconstructing sound fields. The technique is based on the orthogonal decomposition of a sound field in spherical coordinates in 3D space, or a cylindrical decomposition in 2D space. In the decoding process, the ambisonic signals are decoded to produce speaker signals. The higher the ambisonics order, the finer the reconstruction of the sound field.
Moreover, the complexity of the sound field plays a key role in the quality of the reconstructed sound field for a given ambisonics order. A less complex sound field might be well described by low-order ambisonics, whereas a more complex sound field requires Higher-Order Ambisonics (HOA) to be reconstructed with high quality. A complex sound field contains many simultaneously active sources (either localized or distributed). If at any time instance (or in a frequency band) there are few active sources, lower-order ambisonics is able to describe and encode the sound field.
This disclosure provides a method and apparatus for improved ambisonic decoding.
According to an embodiment of the present disclosure, improved ambisonic decoding by masking inaudible sounds and using independent component analysis may be provided.
FIGURE 1 illustrates an example computing system 100 according to this disclosure;
FIGURES 2 and 3 illustrate example devices in a computing system according to this disclosure;
FIGURE 4 illustrates a block diagram of a sub-band based ambisonic decoder according to some embodiments of the current disclosure;
FIGURE 5 illustrates a block diagram of an ambisonic decoder using a front-end auditory masking processor according to some embodiments of the current disclosure;
FIGURE 6 illustrates a block diagram of an ambisonic decoder using a front-end auditory masking processor and ICA according to some embodiments of the current disclosure;
FIGURE 7 illustrates a block diagram of an ambisonic decoder using speaker-specific smoothing factors according to some embodiments of the current disclosure; and
FIGURE 8 illustrates a process for managing an audio signal in accordance with an embodiment of this disclosure.
In a first embodiment, this disclosure provides an audio receiver. The audio receiver includes a memory configured to store an audio signal and processing circuitry coupled to the memory. The processing circuitry is configured to receive the audio signal. The audio signal comprises a plurality of ambisonic components. The processing circuitry is also configured to separate the audio signal into a plurality of independent subcomponents. Each of the independent subcomponents is from a different source. Each of the plurality of ambisonic components is split into the independent subcomponents. The processing circuitry is also configured to decode each of the independent subcomponents. The processing circuitry is also configured to combine each of the decoded independent subcomponents into a speaker signal.
In a second embodiment, this disclosure provides a method for managing an audio signal. The method includes receiving the audio signal. The audio signal comprises a plurality of ambisonic components. The method also includes separating the audio signal into a plurality of independent subcomponents. Each of the independent subcomponents is from a different source. Each of the plurality of ambisonic components is split into the independent subcomponents. The method also includes decoding each of the independent subcomponents. The method also includes combining each of the decoded independent subcomponents into a speaker signal.
In a third embodiment, this disclosure provides a non-transitory computer readable medium embodying a computer program. The computer program includes computer readable program code that, when executed, causes at least one processing device to receive the audio signal. The audio signal comprises a plurality of ambisonic components. The computer program includes computer readable program code that, when executed, also causes at least one processing device to separate the audio signal into a plurality of independent subcomponents. Each of the independent subcomponents is from a different source. Each of the plurality of ambisonic components is split into the independent subcomponents. The computer program includes computer readable program code that, when executed, also causes at least one processing device to decode each of the independent subcomponents. The computer program includes computer readable program code that, when executed, also causes at least one processing device to combine each of the decoded independent subcomponents into a speaker signal.
Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.
Before undertaking the DETAILED DESCRIPTION below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The term “couple” and its derivatives refer to any direct or indirect communication between two or more elements, whether or not those elements are in physical contact with one another. The terms “transmit,” “receive,” and “communicate,” as well as derivatives thereof, encompass both direct and indirect communication. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrase “associated with,” as well as derivatives thereof, means to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like. The term “controller” means any device, system or part thereof that controls at least one operation. Such a controller may be implemented in hardware or a combination of hardware and software and/or firmware. The functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. The phrase “at least one of,” when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, “at least one of: A, B, and C” includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.
Moreover, various functions described below can be implemented or supported by one or more computer programs, each of which is formed from computer readable program code and embodied in a computer readable medium. The terms “application” and “program” refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer readable program code. The phrase “computer readable program code” includes any type of computer code, including source code, object code, and executable code. The phrase “computer readable medium” includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.
Definitions for other certain words and phrases are provided throughout this patent document. Those of ordinary skill in the art should understand that in many if not most instances, such definitions apply to prior as well as future uses of such defined words and phrases.
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 61/923,518 filed on January 3, 2014, U.S. Provisional Patent Application No. 61/923,508 filed on January 3, 2014, U.S. Provisional Patent Application No. 61/923,498 filed on January 3, 2014, and U.S. Provisional Patent Application No. 61/923,493 filed on January 3, 2014. The above-identified provisional patent applications are hereby incorporated by reference in their entirety.
FIGURES 1 through 8, discussed below, and the various embodiments used to describe the principles of the present invention in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of this disclosure may be implemented in any suitably arranged device or system.
FIGURE 1 illustrates an example computing system 100 according to this disclosure. The embodiment of the computing system 100 shown in FIGURE 1 is for illustration only. Other embodiments of the computing system 100 could be used without departing from the scope of this disclosure.
As shown in FIGURE 1, the system 100 includes a network 102, which facilitates communication between various components in the system 100. For example, the network 102 may communicate Internet Protocol (IP) packets, frame relay frames, Asynchronous Transfer Mode (ATM) cells, or other information between network addresses. The network 102 may include one or more local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs), all or a portion of a global network such as the Internet, or any other communication system or systems at one or more locations.
The network 102 facilitates communications between at least one server 104 and various client devices 106-114. Each server 104 includes any suitable computing or processing device that can provide computing services for one or more client devices. Each server 104 could, for example, include one or more processing devices, one or more memories storing instructions and data, and one or more network interfaces facilitating communication over the network 102.
Each client device 106-114 represents any suitable computing or processing device that interacts with at least one server or other computing device(s) over the network 102. In this example, the client devices 106-114 include a desktop computer 106, an audio receiver 107, a mobile telephone or smartphone 108, a personal digital assistant (PDA) 110, a laptop computer 112, and a tablet computer 114. However, any other or additional client devices could be used in the computing system 100.
In this example, some client devices 108-114 communicate indirectly with the network 102. For example, the client devices 108-110 communicate via one or more base stations 116, such as cellular base stations or eNodeBs. Also, the client devices 112-114 communicate via one or more wireless access points 118, such as IEEE 802.11 wireless access points. Note that these are for illustration only and that each client device could communicate directly with the network 102 or indirectly with the network 102 via any suitable intermediate device(s) or network(s).
In this example, audio receiver 107 can receive an audio signal from any one of client devices 106-114. Alternatively, audio receiver 107 can receive an audio signal from the Internet through network 102. Audio receiver 107 can send speaker signals directly to speakers 120 and 122 through a wired network. In another embodiment, audio receiver 107 can send speaker signals indirectly to speakers 120 and 122 through a wireless network.
Although FIGURE 1 illustrates one example of a computing system 100, various changes may be made to FIGURE 1. For example, the system 100 could include any number of each component in any suitable arrangement. In general, computing and communication systems come in a wide variety of configurations, and FIGURE 1 does not limit the scope of this disclosure to any particular configuration. While FIGURE 1 illustrates one operational environment in which various features disclosed in this patent document can be used, these features could be used in any other suitable system.
FIGURES 2 and 3 illustrate example devices in a computing system according to this disclosure. In particular, FIGURE 2 illustrates an example receiver 200, and FIGURE 3 illustrates an example client device 300. The receiver 200 could represent the receiver 107 in FIGURE 1, and the client device 300 could represent one or more of the client devices 106-114 in FIGURE 1.
As shown in FIGURE 2, the receiver 200 includes a bus system 205, which supports communication between at least one processing device 210, at least one storage device 215, at least one communications unit 220, and at least one input/output (I/O) unit 225.
The processing device 210 executes instructions that may be loaded into a memory 230. The processing device 210 may include any suitable number(s) and type(s) of processors or other devices in any suitable arrangement. Example types of processing devices 210 include microprocessors, microcontrollers, digital signal processors, field programmable gate arrays, application specific integrated circuits, and discrete circuitry.
The memory 230 and a persistent storage 235 are examples of storage devices 215, which represent any structure(s) capable of storing and facilitating retrieval of information (such as data, program code, and/or other suitable information on a temporary or permanent basis). The memory 230 may represent a random access memory or any other suitable volatile or non-volatile storage device(s). The persistent storage 235 may contain one or more components or devices supporting longer-term storage of data, such as a read-only memory, hard drive, Flash memory, or optical disc.
The communications unit 220 supports communications with other systems or devices. For example, the communications unit 220 could include a network interface card or a wireless transceiver facilitating communications over the network 102. The communications unit 220 may support communications through any suitable physical or wireless communication link(s).
The I/O unit 225 allows for input and output of data. For example, the I/O unit 225 may provide a connection for user input through a keyboard, mouse, keypad, touchscreen, or other suitable input device. The I/O unit 225 may also send output to a display, printer, or other suitable output device.
Note that while FIGURE 2 is described as representing the receiver 107 of FIGURE 1, the same or similar structure could be used in one or more of the speakers 120 and 122.
As shown in FIGURE 3, the client device 300 includes an antenna 305, a radio frequency (RF) transceiver 310, transmit (TX) processing circuitry 315, a microphone 320, and receive (RX) processing circuitry 325. The client device 300 also includes a speaker 330, a main processor 340, an input/output (I/O) interface (IF) 345, a keypad 350, a display 355, and a memory 360. The memory 360 includes a basic operating system (OS) program 361 and one or more applications 362.
The RF transceiver 310 receives, from the antenna 305, an incoming RF signal transmitted by another component in a system. The RF transceiver 310 down-converts the incoming RF signal to generate an intermediate frequency (IF) or baseband signal. The IF or baseband signal is sent to the RX processing circuitry 325, which generates a processed baseband signal by filtering, decoding, and/or digitizing the baseband or IF signal. The RX processing circuitry 325 transmits the processed baseband signal to the speaker 330 (such as for voice data) or to the main processor 340 for further processing (such as for web browsing data).
The TX processing circuitry 315 receives analog or digital voice data from the microphone 320 or other outgoing baseband data (such as web data, e-mail, or interactive video game data) from the main processor 340. The TX processing circuitry 315 encodes, multiplexes, and/or digitizes the outgoing baseband data to generate a processed baseband or IF signal. The RF transceiver 310 receives the outgoing processed baseband or IF signal from the TX processing circuitry 315 and up-converts the baseband or IF signal to an RF signal that is transmitted via the antenna 305.
The main processor 340 can include one or more processors or other processing devices and execute the basic OS program 361 stored in the memory 360 in order to control the overall operation of the client device 300. For example, the main processor 340 could control the reception of forward channel signals and the transmission of reverse channel signals by the RF transceiver 310, the RX processing circuitry 325, and the TX processing circuitry 315 in accordance with well-known principles. In some embodiments, the main processor 340 includes at least one microprocessor or microcontroller.
The main processor 340 is also capable of executing other processes and programs resident in the memory 360. The main processor 340 can move data into or out of the memory 360 as required by an executing process. In some embodiments, the main processor 340 is configured to execute the applications 362 based on the OS program 361 or in response to signals received from external devices or an operator. The main processor 340 is also coupled to the I/O interface 345, which provides the client device 300 with the ability to connect to other devices such as laptop computers and handheld computers. The I/O interface 345 is the communication path between these accessories and the main processor 340.
The main processor 340 is also coupled to the keypad 350 and the display unit 355. The operator of the client device 300 can use the keypad 350 to enter data into the client device 300. The display 355 may be a liquid crystal display or other display capable of rendering text and/or at least limited graphics, such as from web sites.
The memory 360 is coupled to the main processor 340. Part of the memory 360 could include a random access memory (RAM), and another part of the memory 360 could include a Flash memory or other read-only memory (ROM).
Although FIGURES 2 and 3 illustrate examples of devices in a computing system, various changes may be made to FIGURES 2 and 3. For example, various components in FIGURES 2 and 3 could be combined, further subdivided, or omitted and additional components could be added according to particular needs. As a particular example, the main processor 340 could be divided into multiple processors, such as one or more central processing units (CPUs) and one or more graphics processing units (GPUs). Also, while FIGURE 3 illustrates the client device 300 configured as a mobile telephone or smartphone, client devices could be configured to operate as other types of mobile or stationary devices. In addition, as with computing and communication networks, client devices and receivers can come in a wide variety of configurations, and FIGURES 2 and 3 do not limit this disclosure to any particular client device or receiver.
For example, the embodiments disclosed herein can be implemented with any off-the-shelf speaker. In different embodiments, the speakers can include wired or wireless speakers. The speakers can also include speakers that are included in other devices, such as a television, mobile phone, mobile device, and the like. In yet further embodiments, the speakers can be complex, with their own processors or controllers, or simple, without any complex receivers, processors, or controllers.
Various embodiments of this disclosure recognize and take into account that the spherical harmonic decomposition of a sound field can be fully described by a set of spherical harmonic components (i.e. basis functions in the spherical coordinates). The sound pressure can be expressed as follows:
$$p(kr,\theta,\delta)=\sum_{m=0}^{\infty}W_m(kr)\sum_{0\le n\le m,\ \sigma=\pm 1}B_{mn}^{\sigma}\,Y_{mn}^{\sigma}(\theta,\delta) \qquad (1)$$

with $k$ being the wave number, $W_m(kr)$ being the weighting factor for the rigid sphere, $B_{mn}^{\sigma}$ being the ambisonic components of the sound field, and $Y_{mn}^{\sigma}$ being the real-valued spherical harmonic functions in the direction of $\theta$ (azimuth) and $\delta$ (elevation). For order-M ambisonics, the infinity in the above equation is replaced by M, and the number of ambisonic components is $(M+1)^2$ (for 3-D mapping) or $(2M+1)$ (for 2-D mapping).
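As a quick sanity check of these component counts, here is a minimal Python sketch (the function name `num_components` is illustrative, not from the source):

```python
def num_components(order, dimensions=3):
    """Number of ambisonic components for a given order M."""
    if dimensions == 3:
        return (order + 1) ** 2   # 3-D mapping: (M+1)^2
    return 2 * order + 1          # 2-D mapping: 2M+1

for m in range(1, 4):
    print(m, num_components(m, 3), num_components(m, 2))
# 1 -> 4 / 3, 2 -> 9 / 5, 3 -> 16 / 7
```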
The main advantage of HOA over other reproduction techniques is its flexibility in using arbitrary speaker configurations to recreate sound fields. The number of speakers required to reproduce the sound field with minimal error can be greater than the number of ambisonic signals. On the other hand, if the number of playback speakers is much larger than the ambisonics order, the quality of the reconstructed sound field deteriorates. The reason is that the driving signals for the speakers are found from the ambisonic signals by solving an under-determined linear equation. As such, the error in the reconstruction of sound fields is proportional to the difference between the number of speakers and the number of ambisonic signals. One or more embodiments of this disclosure upscale lower-order ambisonics using compressed sensing (CS) techniques and thereby reduce the gap between the number of speakers and the number of ambisonic signals.
CS is a method that allows accurate recovery of signals from sub-Nyquist-rate sampling. If a physical phenomenon (i.e. signal) can be described by a sparse set of basis functions (i.e., mapping onto an over-complete set of basis functions to generate a sparse signal representation), then the number of sensors (i.e. measurements) required to perfectly reconstruct the signal can be much smaller than indicated by the Nyquist theorem. For instance, a time-domain signal resulting from the sum of a few pure tones could be reconstructed perfectly even if the sampling rate is much less than the Nyquist rate. If the representation of a signal is sparse in a certain domain, then it is optimal to measure the signal in a domain whose basis functions are incoherent with the basis functions describing the sparse signal domain. Then the original signal can be recovered through an optimization process based on the L1 norm to find a signal with the smallest L1 norm.
When sampling a signal in any domain, perfect reconstruction of the signal can occur when the sample rate satisfies the Nyquist sampling theorem. According to CS theory, the Nyquist rate is a sufficient, but not necessary, condition for the perfect reconstruction of a signal. Under certain assumptions, the observations (i.e., measurements) provided by a sub-Nyquist sampling process contain enough information to perfectly describe the observed phenomenon (e.g., signal, sound field, and the like).
The signal can be described as a sum of a small number of elementary functions (i.e., basis functions in a certain domain). If x denotes the vector of the observations (i.e. measurements) of a sparse vector s, it can then be expressed as follows:
$$x=\Phi s \qquad (2)$$

where $\Phi$ is the measurement matrix and $s$ is a sparse vector with many zero coefficients. If K coefficients are non-zero, the signal $s$ is called K-sparse. An example of a sparsity domain is the Fourier domain for sinusoidal time signals. Although a pure tone appears dense in the time domain, it is perfectly described by a vector with only one non-zero coefficient in the frequency domain.
According to CS, if a signal is K-sparse in a particular domain, then the number of observations required to describe it perfectly is proportional to K, which can be far fewer than the Nyquist rate requires.
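The pure-tone example above can be checked directly: a sinusoid that is dense in the time domain collapses to a single significant DFT bin. A minimal sketch (the 1 kHz tone and the sizes are arbitrary choices, not from the source):

```python
import numpy as np

fs, n = 8000, 1024
x = np.sin(2 * np.pi * 1000 * np.arange(n) / fs)  # 1 kHz tone, exactly bin 128

mag = np.abs(np.fft.rfft(x))
print(np.sum(mag / mag.max() > 0.01))  # -> 1: a single significant bin
```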
Perfect recovery of a signal from a small number of measurements uses equation (2) for $s$. Since the number of observations (i.e., the dimension of $x$) is small compared to the dimension of $s$, the system is under-determined and there are an infinite number of solutions. The CS approach to solving this system and resolving the ambiguity is to look for the sparsest solution, which translates into the following optimization process:

$$\hat{s}=\arg\min_{s}\,\|s\|_{1}\quad\text{subject to}\quad x=\Phi s \qquad (3)$$

The L1-norm is used to measure the sparsity of vector $s$ instead of the L0-norm (i.e., the non-zero counting norm). An embodiment of this disclosure recognizes and takes into account that, unlike the L1-norm, L0-norm minimization is computationally intractable (NP-hard in general). The L1-norm minimization leads to a roughly equivalent optimization problem that can be solved by linear programming methods.
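To make the linear-programming route concrete, here is a toy basis-pursuit recovery using the standard split $s = u - v$ with $u, v \ge 0$; the problem sizes and the use of `scipy.optimize.linprog` are illustrative assumptions, not from the source:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, m = 64, 3, 24                    # signal length, sparsity, measurements
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n))      # random (incoherent) measurement matrix
x = Phi @ s_true

# min ||s||_1  s.t.  x = Phi @ s, cast as an LP over [u; v] with s = u - v.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=x, bounds=(0, None))
s_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(s_hat - s_true)))  # typically ~1e-10 when m is ample
```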
One or more embodiments of this disclosure provide an application of compressed sensing to ambisonic decoding. In the ambisonics technique a sound field is mapped onto orthogonal spherical harmonics. This orthogonal projection produces ambisonics signals that will be used to reconstruct the sound field. The decoding of the ambisonics signals is performed in a re-encoding process wherein the unknown speaker signals are encoded to produce the same ambisonic signals as follows:
$$A(t)=Y\,Sp(t) \qquad (4)$$

where $Sp(t)$ is the vector of speaker signals at time instance $t$, $Y$ is the matrix containing the spherical harmonics in the direction of each speaker, and $A(t)$ is the vector of ambisonic signals.
Since the number of speaker signals is larger than the number of ambisonic signals, equation (4) has an infinite number of solutions. The decoding process involves an optimization procedure to minimize a certain norm of the speaker signals. The classical optimization is based on the L2-norm. A more recent approach to performing the optimization is based on the application of CS to solve equation (4) using the L1-norm (or the L12-norm, a combination of the L2 and L1 norms that sparsifies the speaker signals only in space).
As stated earlier, the application of compressed sensing requires that the observed phenomenon/signal be sparse in a certain domain. The spherical harmonic domain is not a good candidate in terms of sparsity. There is no reason for the spherical harmonic expansion of a sound field to be sparse unless all of the sound sources are located in very particular directions. On the other hand, if the sound field results from a few sound sources, it is likely to be described by a small number of coefficients in some sparsity domain. When the spherical harmonic expansion is specified up to an order M, the speaker signals can be found by solving equation (4). The classical manner in which this linear problem is solved is to calculate the inverse or pseudo-inverse of $Y$. This method provides an exact solution, unique in the minimum-energy sense, when the system is under-determined; in other words, when the number of speaker signals is greater than the number of ambisonic signals (spherical harmonic components). This solution is referred to as the least-square solution, as it provides the lowest energy among all of the solutions. The least-square solution tends to distribute the energy evenly among the speakers. This becomes a problem when the number of speakers is greater than the number of spherical harmonic components being used for the expansion of the sound field. In this case, many speakers turn on, driving similar signals, which leads to spectral distortion that reduces the size of the sweet spot. In this example, one or more embodiments of this disclosure provide a larger sweet spot when a minimal number of speakers are used to recreate a given source. This implies the benefit of using a CS approach to make the speaker signals sparse.
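The energy-spreading behavior of the least-square (pseudo-inverse) decoder is easy to see numerically. A minimal 2-D sketch, using circular harmonics as a stand-in for the rows of $Y$ (the 12-speaker ring is an assumed layout):

```python
import numpy as np

n_speakers = 12
phi = 2 * np.pi * np.arange(n_speakers) / n_speakers  # speaker angles

# First-order 2-D re-encoding matrix: rows 1, cos(phi), sin(phi).
Y = np.vstack([np.ones(n_speakers), np.cos(phi), np.sin(phi)])

a = Y[:, 2]                   # ambisonic signals of a source at speaker 2
sp = np.linalg.pinv(Y) @ a    # minimum-energy (least-square) speaker gains
print(np.round(sp, 3))        # energy is spread over many speakers
```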
An application of compressive sampling methods is to solve the decoding problem by finding a solution to the following optimization problem:
$$\min_{Sp(t)}\,\|Sp(t)\|_{1}\quad\text{subject to}\quad A(t)=Y\,Sp(t) \qquad (5)$$

which means searching for the sparsest speaker signals.
The accuracy of a CS-based approach to the upscaling of low-order ambisonics depends on the sparsity level of the sound field, meaning that at any time instance, there should be only a few active sound sources in the field. Methods to increase sparsity in sound fields are disclosed in the following embodiments.
One or more embodiments of this disclosure provide upscaling ambisonic signals to higher orders. As mentioned above, ambisonic signals are decoded through a re-encoding process using compressed sensing techniques (i.e., L12 optimization). Once the optimal speaker signals are found, the higher-order ambisonic signals can be obtained as follows:
$$A_{H}(t)=Y_{H}\,Sp(t) \qquad (6)$$

where $A_H(t)$ is the vector of upscaled HOA signals and $Y_H$ is the matrix containing the spherical harmonics in the direction of each speaker, including basis functions up to the desired higher order. For example, in upscaling from first-order to second-order ambisonics, each column of $Y_H$ contains nine spherical harmonics (for 3-D mapping).
Another way to upscale ambisonic signals is first to find the de-mixing matrix to decode the lower-order ambisonic signals into the speaker signals as follows:
$$Sp(t)=D\,A_{L}(t) \qquad (7)$$

where $D$ is the smooth decoding matrix. Note that $Y_{L}\,D=I$, where $I$ is the identity matrix and $Y_L$ is the matrix containing the spherical harmonics up to the lower order. The upscaling matrix is given by:

$$U=Y_{H}\,D \qquad (8)$$

and the upscaled ambisonic signals are found as follows:

$$A_{H}(t)=U\,A_{L}(t) \qquad (9)$$
This approach can be used in frame-based ambisonic decoding to upscale the lower-order ambisonic signals locally.
The optimality of the decoded speaker signals contributes to the upscaling process. That is the reason ambisonic decoding is performed using compressed sensing, which provides the sparsest optimal speaker signals. If the original sound field is sparse, the optimal decoded speaker signals accurately represent the original sound field, and as such, higher-order ambisonics can be generated from the optimal speaker signals. This again confirms that the upscaling process based on compressed sensing is effective on sparse sound fields.
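The upscaling path of equations (7) through (9) can be sketched end to end in 2-D. Here circular harmonics stand in for the spherical harmonics, and a pseudo-inverse stands in for the decoding matrix $D$ obtained from the CS-based optimization (both are simplifying assumptions):

```python
import numpy as np

def circ_harmonics(phi, order):
    """2-D (circular-harmonic) matrix, one column per speaker direction:
    rows are [1, cos(phi), sin(phi), cos(2*phi), sin(2*phi), ...]."""
    rows = [np.ones_like(phi)]
    for m in range(1, order + 1):
        rows += [np.cos(m * phi), np.sin(m * phi)]
    return np.vstack(rows)

phi = 2 * np.pi * np.arange(12) / 12   # 12 speakers on a circle
Y_lo = circ_harmonics(phi, 1)          # lower-order harmonics, 3 rows
Y_hi = circ_harmonics(phi, 2)          # higher-order harmonics, 5 rows

D = np.linalg.pinv(Y_lo)               # a decoding matrix with Y_lo @ D = I
U = Y_hi @ D                           # upscaling matrix, eq. (8)
a_lo = np.array([1.0, 0.3, -0.2])      # toy first-order ambisonic frame
a_hi = U @ a_lo                        # upscaled signals, eq. (9)
print(a_hi.shape)                      # (5,): 2-D second order has 2M+1 = 5
```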
One way to increase the sparsity of sound fields is to decompose the ambisonic signals into many sub-bands to increase the sparsity of the inputs to the CS-based optimization procedure. Another way to increase sparsity of a given sound field is to apply an auditory masking pattern to the ambisonic signals and remove the inaudible parts of the signals. This way, the overlap between the sources in the sound field would be reduced and as such the sparsity of the sound field would be increased. The above-mentioned techniques are discussed in the following sections.
As mentioned above, the sparsity of a sound field has a significant impact on the quality of the reconstructed sound field. One or more embodiments of this disclosure provide techniques for increasing the sparsity of sound fields (i.e., sub-band decomposition of ambisonic signals and the perceptual sparsity approach). Moreover, the issue of discontinuity of speaker signals is discussed, and one or more embodiments of this disclosure provide a new speaker-specific smoothing technique. Also, one or more embodiments of this disclosure provide perception-based techniques to save on the driving power for speaker signals.
FIGURE 4 illustrates a block diagram of a sub-band based ambisonic decoder according to some embodiments of the current disclosure. The embodiment of the sub-band based ambisonic decoder illustrated in FIGURE 4 is for illustration only. However, sub-band based ambisonic decoders come in a wide variety of configurations, and FIGURE 4 does not limit the scope of this disclosure to any particular implementation of a sub-band based ambisonic decoder.
One approach to increasing the sparsity of sound fields is to decompose ambisonic signals into subbands. If the sound sources do not fully overlap in the frequency domain, the subband signals are sparser than the full-band signals, which leads to better ambisonic decoding. FIGURE 4 shows a block diagram of ambisonic decoding and upscaling based on the subband decomposition of ambisonic signals. In this approach, the ambisonic signals are passed through an analysis filterbank 402, followed by the decoding of subband signals 404. The decoded speaker signals are smoothed using speaker-specific smoothing factors 406 and then passed through a synthesis filterbank 408 to generate the full-band speaker signals. The speaker signals are projected onto spherical harmonics corresponding to higher-order ambisonic signals 410. The analysis and synthesis filterbanks 402 and 408 form a perfect reconstruction system, so the subband processing of the ambisonic signals introduces no loss in quality.
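The perfect-reconstruction property of the analysis/synthesis pair can be sketched with an STFT filterbank, which is one common realization (the use of `scipy.signal.stft`/`istft` and the block sizes are illustrative assumptions, not the patent's specific filterbank):

```python
import numpy as np
from scipy.signal import stft, istft

fs = 48000
a = np.random.default_rng(1).standard_normal(4 * fs)  # one ambisonic channel

f, t, A_sub = stft(a, fs=fs, nperseg=1024)    # analysis filterbank 402
# ... per-subband decoding (404) and smoothing (406) would happen here ...
_, a_rec = istft(A_sub, fs=fs, nperseg=1024)  # synthesis filterbank 408

print(np.max(np.abs(a - a_rec[:len(a)])))     # ~1e-15: perfect reconstruction
```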
FIGURE 5 illustrates a block diagram of an ambisonic decoder using a front-end auditory masking processor according to some embodiments of the current disclosure. The embodiment of the ambisonic decoder illustrated in FIGURE 5 is for illustration only. However, ambisonic decoders come in a wide variety of configurations, and FIGURE 5 does not limit the scope of this disclosure to any particular implementation of an ambisonic decoder.
In FIGURE 5, a controller can control a signal acquisition and ambisonic encoding unit 502, an auditory masking model 504, a process 506 for removal of masked signals, and an ambisonic decoder 508. The encoding unit 502 provides ambisonic signals for masking. The auditory masking model 504 can be one of many different models designed to mask different levels of inaudible parts of the ambisonic components. The process 506 applies the model 504 to the signals from the encoding unit 502 to mask the inaudible parts. The ambisonic decoder 508 then decodes the perceptually processed ambisonic signals to produce the speaker signals.
As discussed above, the sparsity of sound fields affects the performance of CS-based ambisonic decoding. An embodiment of this disclosure provides for perceptual sparsity, in which human auditory masking effects are exploited to remove any inaudible parts of the ambisonic components. After removal of the inaudible parts, a sparser representation of sound fields is produced, which leads to more accurate decoding of ambisonic signals. Moreover, the perceptually processed ambisonic signals require a lower bitrate (or less memory) for transmission (or storage).
The inaudible parts of the ambisonic signals can be removed as described below:
Each ambisonic signal is split into frames of 30 milliseconds (the frame length can also be adapted to the short-term characteristics of the sound field):

$$A_{i}^{j}=\big[a_{i}(jL+1),\,\ldots,\,a_{i}\big((j+1)L\big)\big] \qquad (10)$$

where $A_i^j$ is the jth frame of the ith ambisonic signal $a_i$, and $L$ is the frame length.
An auditory masking pattern $M_i^j(f)$ for each frame is calculated (any auditory model, such as MPEG psychoacoustic models 1 and 2, can be used in this operation).

A global masking pattern $M_g^j(f)$ can be the maximum masking threshold in each frequency bin. Other methods, such as linear or nonlinear summation of the masking power in the same frequency bin, can be used to find the global masking pattern. An embodiment of this disclosure takes the maximum to make sure that no audible part is removed:

$$M_{g}^{j}(f)=\max_{i}\,M_{i}^{j}(f) \qquad (11)$$
The ambisonic signals are compared against the global masking pattern in the frequency domain to remove the inaudible parts of the signals (i.e., spectral components below the masking threshold):

$$\tilde{A}_{i}^{j}(f)=\begin{cases}A_{i}^{j}(f), & \big|A_{i}^{j}(f)\big|^{2}\ge M_{g}^{j}(f)\\ 0, & \text{otherwise}\end{cases} \qquad (12)$$

where $A_i^j(f)$ is the Fourier transform of frame $A_i^j$.
A sparsity measure in any domain can be the ratio of the number of non-zero samples to the total number of samples. In an example embodiment, up to 60 percent of the spectral components can be removed (i.e., set to zero), which consequently increases the sparsity of the ambisonic signals and also saves considerable bandwidth in the transmission of ambisonic signals. FIGURE 5 depicts a block diagram of an ambisonic decoder using an auditory masking model 504 as the front-end processor to increase the sparsity of the ambisonic signals in the perceptual domain (i.e., perceptual sparsity).
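A minimal sketch of the removal step in equation (12); the flat threshold stands in for a real masking pattern, whose computation from an auditory model is omitted here (the function name and the dB convention are assumptions):

```python
import numpy as np

def remove_masked(frame, global_mask_db):
    """Zero out spectral components below a per-bin masking threshold
    (here a scalar stand-in for a pattern from an auditory model)."""
    spec = np.fft.rfft(frame)
    level_db = 20 * np.log10(np.abs(spec) + 1e-12)
    spec[level_db < global_mask_db] = 0.0      # eq. (12): drop inaudible bins
    return np.fft.irfft(spec, n=len(frame))

# Toy use: a flat -40 dB threshold in place of the real masking pattern.
frame = np.sin(2 * np.pi * 1000 * np.arange(1440) / 48000)
processed = remove_masked(frame, global_mask_db=-40.0)
```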
The following simple example demonstrates the impact of the proposed technique on the accuracy of ambisonic decoding in a sparse sound field where only three active sources are present. In this example, three tonal sources at 1000 Hz, 1020 Hz and 1040 Hz, with amplitudes of 1, 0.1 and 0.1, respectively, are placed at -150, -60 and 60 degrees, respectively, with reference to the positive direction of the vertical axis. The weaker sources are masked by the strong source and cannot be heard. The sources are mapped onto the spherical harmonic basis functions to produce the first-order ambisonic signals. The ambisonic signals are decoded using the L12-norm to generate the speaker signals. In a first situation, the original ambisonic signals are decoded to produce the speaker signals. In a second situation, an auditory masking pattern is produced from the ambisonic signals and then used to remove inaudible parts of the ambisonic signals; the processed ambisonic signals are decoded to produce the speaker signals. In this example, twelve speakers are spaced at equal angles around a circle. In the first situation, the presence of the masked parts makes the decoding process less accurate, and the power of the dominant source spreads over a few speakers. In the second situation, the removal of the masked parts highlights the presence of the dominant source and leads to accurate power estimation and perfect localization of that source.
FIGURE 6 illustrates a block diagram of an ambisonic decoder using a front-end auditory masking processor and ICA according to some embodiments of the current disclosure. The embodiment of the ambisonic decoder illustrated in FIGURE 6 is for illustration only. However, ambisonic decoders come in a wide variety of configurations, and FIGURE 6 does not limit the scope of this disclosure to any particular implementation of an ambisonic decoder.
In FIGURE 6, a controller can control an auditory masking model 602, a process 604 for removal of masked signals, an Independent Component Analysis (ICA) processor 606, and ambisonic decoders 608. The auditory masking model 602 can be one of many different models designed to mask different levels of inaudible parts of the ambisonic components. The process 604 applies the model 602 to the signals from an encoding unit to mask the inaudible parts. The ICA processor 606 separates a complex dataset into independent subparts. The ambisonic decoders 608 then decode each separated signal set to produce the speaker signals.
Independent Component Analysis (ICA) is a statistical technique for decomposing a complex dataset into independent subparts. That technique can provide blind source separation to extract sources from linear mixtures of the sources. ICA is used herein to decompose ambisonic signals into sparser signals. Increasing the sparsity of ambisonic signals improves the accuracy of compressed-sensing-based ambisonic decoding.
In this embodiment, ambisonic signals are linear mixtures of independent sound sources in a given sound field. An embodiment of this disclosure recognizes and takes into account that it can be difficult to estimate the impulse response filters linking each source to each microphone. Depending on the number of sound sources in the sound field and the number of ambisonic signals (i.e., the ambisonics order), ICA can be under-, critically-, or over-determined. For example, if there are four sound sources in a given sound field, first-order ambisonics would be critically determined, as the number of ambisonic signals equals the number of sound sources. In this example, ICA would extract all the sound sources from the ambisonic signals. For M ambisonic signals and N sources, ICA can output NM signals (N sets of M signals). Each signal set contains the projection of each sound source on the spherical harmonics (i.e., its contribution to each microphone recording). As such, one dominant source could exist in each signal set, which is an ideal condition for compressed-sensing-based ambisonic decoding. Each signal set is decoded and mapped on the speakers. The total speaker signals could be the superposition of the decoded speaker signals from each signal set. FIGURE 6 shows a block diagram of an ambisonic decoder using ICA as the front-end processor.
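A minimal sketch of the ICA front end using `sklearn.decomposition.FastICA` (the toy sources, the 4×2 mixing matrix standing in for a first-order encoding, and all names here are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n = 48000
sources = np.vstack([
    np.sin(2 * np.pi * 440 * np.arange(n) / 48000),         # tone
    np.sign(np.sin(2 * np.pi * 3 * np.arange(n) / 48000)),  # square wave
])
mix = rng.standard_normal((4, 2))     # stand-in first-order encoding matrix
ambisonic = mix @ sources             # 4 ambisonic channels, 2 sources

ica = FastICA(n_components=2, random_state=0)
est_sources = ica.fit_transform(ambisonic.T).T   # separated sources
A_est = ica.mixing_                              # estimated mixing matrix

# One "signal set" per source: its contribution to every ambisonic channel,
# ready to be decoded separately and superposed on the speakers.
signal_sets = [np.outer(A_est[:, i], est_sources[i]) for i in range(2)]
```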
FIGURE 7 illustrates a block diagram of an ambisonic decoder using speaker-specific smoothing factors according to some embodiments of the current disclosure. The embodiment of the ambisonic decoder illustrated in FIGURE 7 is for illustration only. However, ambisonic decoders come in a wide variety of configurations, and FIGURE 7 does not limit the scope of this disclosure to any particular implementation of an ambisonic decoder.
In FIGURE 7, a controller can control the receiving of overlapping blocks 702 of ambisonic signals, a frame-based L12-norm decoder 704, and smoothing units 706. As described above, ambisonic decoding based on the L2-norm produces the speaker signals with minimum energy. With the L2-norm, energy is distributed over the speakers evenly, which can deteriorate the localization of sources. Compared to the L2-norm, the L1-norm reconstructs a sound field with higher quality. With the L12-norm, the decoding process is performed locally: the ambisonic signals are split into frames, and the speaker signals are found through the decoding of the frames of the ambisonic signals, adapting the decoding matrix to the local characteristics of the sound field. Local decoding of the ambisonic signals results in some discontinuity in the speaker signals.
One or more embodiments of this disclosure provide two measures to avoid audible discontinuity in the speaker signals once the frames of the speaker signals are concatenated. In the first technique, in order to mitigate the so-called block edge effects in frame-based processing, the frames of ambisonic signals and the speaker signals are windowed and overlapped by 50%. The speaker signals are found through the overlap-add method. Although the overlap-add method reduces discontinuity in the speaker signals, other measures are warranted to avoid any audible distortion in the decoded signals. As such, one or more embodiments of this disclosure provide a rule-based method to further prevent any discontinuity at the frame edges and perform a smooth decoding process. Recently, a smoothing procedure has been reported wherein a forgetting factor is applied to the decoding matrix to smooth out the decoding process as follows:
$$\bar{D}_{T}=\alpha\,D_{T-1}+(1-\alpha)\,D_{T} \qquad (13)$$

where $\alpha$ is the forgetting factor, which acts to smooth out any sharp changes in the decoding matrix between time windows, $D_{T-1}$ and $D_{T}$ are the decoding matrices in the previous and current time windows, respectively, and $\bar{D}_{T}$ is the smooth decoding matrix for the current time window.
In one embodiment of this disclosure, once the decoding matrix is determined, the speaker signals for the Tth time window are obtained from:

$$Sp_{T}(t)=\bar{D}_{T}\,A_{T}(t) \qquad (14)$$
The problem with the above-mentioned approach is that the smoothing factor is applied equally to all of the speaker signals, which results in a suboptimal decoding matrix. An embodiment of this disclosure instead smooths out the decoding process for each speaker signal separately, based on pre-defined rules as follows:

[Equations (15) through (19) are rendered as images in the source. Equation (15) gives the per-row smoothing rule, parameterized by a constant β between 1.5 and 2, with d_i denoting the ith row of the decoding matrix; equation (16) defines the correlation ρ_i between the ith rows of the decoding matrix in the current and previous frames; equations (17) and (18) define the smoothing factor for the ith row of the decoding matrix (i.e., used to find the ith speaker signal); and equation (19) limits the change in the magnitude of each row of the decoding matrix between frames.]
Rule-based techniques ensure that the smoothing is applied only to the necessary extent. In some examples, the decoding vectors (rows in the decoding matrix) in successive frames corresponding to high-power speaker signals change smoothly, so there is no need for further smoothing of those decoding vectors (which would result in a less optimal decoding matrix). The slow evolution of the decoding vectors manifests itself in high correlation of the decoding vectors in successive frames and also in a larger magnitude of the decoding vector. On the other hand, for the low-energy speaker signals, the variation in the decoding vectors (in terms of the magnitude and the correlation of the decoding vectors in successive frames) is large. That observation motivates treating each row in the decoding matrix separately and finding and applying different smoothing factors to the decoding vectors (i.e., rows in the decoding matrix). Compared to the reported method in the audio community, where a single smoothing factor is used to smooth out the decoding matrix, the embodiments disclosed herein are more accurate and perform more optimal ambisonic decoding.
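Since equations (15) through (19) survive only as images, the following is a hedged sketch of the per-row idea alone: each row gets its own factor from its frame-to-frame correlation, and, per the discussion above, highly correlated rows receive little smoothing. The specific mapping from correlation to factor is an assumption, not the patent's rule:

```python
import numpy as np

def smooth_rows(D_prev, D_curr, beta=1.75):
    """Per-row smoothing of the decoding matrix (illustrative rule only)."""
    D_smooth = np.empty_like(D_curr)
    for i in range(D_curr.shape[0]):
        r_prev, r_curr = D_prev[i], D_curr[i]
        denom = np.linalg.norm(r_prev) * np.linalg.norm(r_curr) + 1e-12
        rho = float(r_prev @ r_curr) / denom   # row correlation, cf. eq. (16)
        # Assumed rule: highly correlated rows need little extra smoothing.
        alpha_i = np.clip(beta * (1.0 - max(rho, 0.0)), 0.0, 0.95)
        D_smooth[i] = alpha_i * r_prev + (1.0 - alpha_i) * r_curr
    return D_smooth
```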
In another embodiment of this disclosure, a single smoothing factor $\alpha$ is determined based on the correlation between the current and previous frames of speaker signals. Due to the 50% overlap between the current and previous frames, the correlation of the overlapping segments of the current and previous frames is calculated as follows:

$$\rho_{m}=\frac{\sum_{n=1}^{L/2}Sp_{m}(n)\,SpOld_{m}(n+L/2)}{\sqrt{\sum_{n=1}^{L/2}Sp_{m}^{2}(n)\,\sum_{n=1}^{L/2}SpOld_{m}^{2}(n+L/2)}} \qquad (20)$$

where $Sp_m$ and $SpOld_m$ are the speaker signals in the current and previous frames, and $\rho_m$ is the correlation between the overlapping parts of speaker signal m in the current and previous frames (i.e., the first half of the current frame and the second half of the previous frame). The smoothing factor is then calculated from these correlations. [Equation (21) is rendered as an image in the source; it maps the correlations ρ_m to the smoothing factor α.]
In this example embodiment, the smoothing factor $\alpha$ is used differently than in the above embodiments. If the overlapping parts of the speaker signals are highly correlated, the previous decoding matrix can be used; otherwise, the decoding matrix will be a linear combination of the decoding matrices for the current and previous frames. This embodiment avoids discontinuity in the speaker signals.
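A minimal sketch of this rule (the threshold, the averaging over speakers, and the blend form are assumptions consistent with the description, not the patent's exact equation (21)):

```python
import numpy as np

def overlap_correlation(sp, sp_old):
    """Correlation of the overlapping halves, one value per speaker m:
    first half of the current frame vs. second half of the previous frame."""
    half = sp.shape[1] // 2
    cur, prev = sp[:, :half], sp_old[:, half:]        # cf. eq. (20)
    num = np.sum(cur * prev, axis=1)
    den = np.sqrt(np.sum(cur**2, axis=1) * np.sum(prev**2, axis=1)) + 1e-12
    return num / den

def pick_decoding_matrix(D_prev, D_curr, rho, thresh=0.9):
    """Reuse the previous matrix when overlaps are highly correlated,
    otherwise blend the two matrices (assumed rule)."""
    alpha = float(np.clip(np.mean(rho), 0.0, 1.0))
    return D_prev if alpha > thresh else alpha * D_prev + (1 - alpha) * D_curr
```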
An embodiment provides power savings in ambisonics decoding. In the ambisonics technique a sound field is mapped onto orthogonal spherical harmonics. This orthogonal projection produces ambisonics signals that will be used to reconstruct the sound field. The decoding of the ambisonics signals is performed in a re-encoding process wherein the unknown speaker signals are encoded to produce the same ambisonics signals.
The decoding process involves an optimization procedure to minimize a certain norm of the speaker signals. The classical optimization is based on the L2-norm, which produces the speaker signals with minimum energy. However, the drawback of this norm is that it distributes the energy over the speakers somewhat evenly and as such deteriorates the localization of sources. Another approach to performing the optimization is based on the L1-norm (or the L12-norm, a combination of the L2 and L1 norms that sparsifies the speaker signals only in space). Compared to the L2-norm, the L1-norm reconstructs a sound field with higher quality at the cost of larger power in the speaker signals. An embodiment of this disclosure provides a method based on human auditory masking effects to reduce the power of the speaker signals while using the L12-norm to reconstruct sound fields with high quality.
The optimization procedure based on the L12-norm is performed as follows:
$$\min_{Sp(t)}\,\|Sp(t)\|_{12} \qquad (22)$$

subject to:

$$A(t)=Y\,Sp(t) \qquad (23)$$

where $Sp(t)$ is the vector of speaker signals at time instance $t$, $Y$ is the matrix containing the spherical harmonics in the direction of each speaker, and $A(t)$ is the vector of ambisonic signals.
An embodiment of this disclosure provides three approaches to modify the speaker signals in order to reduce the driving power.
In the first approach, the argument in the optimization (i.e., $Sp(t)$) is compared against an auditory masking pattern produced by the speaker signals; only the audible parts are kept, and the inaudible parts are discarded. This approach produces only audible signals, and all of the resulting speaker signals contribute to the reconstruction of the sound field. In other words, the argument of the above optimization is replaced by $\widetilde{Sp}(t)$, which contains only the audible parts of $Sp(t)$. Since masking patterns for each speaker signal would need to be calculated at each step in the optimization, one or more embodiments of this disclosure provide an approach wherein the equality constraint in the optimization procedure is replaced by an inequality constraint.
In the above optimization, the speaker signals produce exactly the same ambisonic signals once mapped onto the spherical harmonic basis functions. From a perceptual point of view, however, that requirement is too strict and need not be satisfied. As such, an embodiment of this disclosure modifies the optimization process to find speaker signals whose projection onto the spherical harmonic basis functions approximates the original ambisonic signals. The difference between the projection of the speaker signals and the original ambisonic signals may not be audible; as such, the difference between $Y\,Sp(t)$ and $A(t)$ can be kept below the auditory masking pattern induced by the original ambisonic signals. The optimization procedure is modified as follows:

$$\min_{Sp(t)}\,\|Sp(t)\|_{12} \qquad (24)$$

subject to

$$\big|Y\,Sp(t)-A(t)\big|\le M_{A} \qquad (25)$$

where $M_A$ is the masking pattern generated by the ambisonic signals. If the masking pattern is calculated in the frequency domain (i.e., simultaneous masking effects), the optimization is performed in the frequency domain by transforming the ambisonic signals to the frequency domain and then solving the above optimization to find the Fourier transform of the speaker signals. The speaker signals in the time domain are then the inverse Fourier transform of the speaker-signal spectra.
An embodiment of this disclosure provides a second approach without modifying the optimization procedure. In this approach, once the resulting speaker signals are found, an auditory masking pattern induced by the speaker signals is determined. Then all the speaker signals are compared against the masking pattern and only the audible parts are kept.
The procedure of removing the inaudible parts of the speaker signals is described below:
Each speaker signal is split into frames of 30 msec (the frame length can be adapted to the short-term characteristics of the sound field):

$$Sp_{i}^{j}=\big[sp_{i}(jL+1),\,\ldots,\,sp_{i}\big((j+1)L\big)\big] \qquad (26)$$

where $Sp_i^j$ is the jth frame of the ith speaker signal $sp_i$, and $L$ is the frame length.
An auditory masking pattern $M_i^j(f)$ for each frame is calculated (any auditory model, such as MPEG psychoacoustic models 1 and 2, can be used in this step).

A global masking pattern $M_g^j(f)$ is found as the maximum masking threshold in each frequency bin. Other methods, such as linear or nonlinear summation of the masking power in the same frequency bin, can be used to find the global masking pattern. In an example embodiment, the approach can be made less aggressive to reduce the chance that an audible part is removed:

$$M_{g}^{j}(f)=\max_{i}\,M_{i}^{j}(f) \qquad (27)$$
The speaker signals are compared against the global masking pattern in the frequency domain to remove the inaudible parts of the signals (i.e., spectral components below the masking threshold):

$$\tilde{Sp}_{i}^{j}(f)=\begin{cases}Sp_{i}^{j}(f), & \big|Sp_{i}^{j}(f)\big|^{2}\ge M_{g}^{j}(f)\\ 0, & \text{otherwise}\end{cases} \qquad (28)$$

where $Sp_i^j(f)$ is the Fourier transform of frame $Sp_i^j$.
The perceptually processed frames are windowed and overlapped by 50% to avoid any discontinuity in the speaker signals due to the independent processing of the frames. This proposed method can reduce the driving power by up to 10% (depending on the sound field).
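The windowing and 50% overlap-add step can be sketched as follows; a periodic Hann window at 50% overlap sums to one, so stitching the independently processed frames leaves the signal level unchanged (the window choice is an assumption, not specified in the source):

```python
import numpy as np

def overlap_add(frames):
    """Window processed frames and overlap-add them at a 50% hop."""
    n_frames, L = frames.shape
    hop = L // 2
    win = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(L) / L)  # periodic Hann
    out = np.zeros(hop * (n_frames - 1) + L)
    for j, frame in enumerate(frames):
        out[j * hop:j * hop + L] += win * frame
    return out
```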
In this disclosure, one or more embodiments provide compressed sensing techniques for upscaling HOA sound fields to higher orders. Upscaled sound fields have a greater spatial resolution, which allows more speakers to be used during playback, resulting in a larger sweet spot and improved sound quality. A number of embodiments, including perceptual sparsity, speaker-specific smoothing of speaker signals, and perception-based power saving, are introduced to improve ambisonic decoding.

HOA upscaling based on compressed sensing techniques allows reproducing sound fields similar to those reconstructed from original HOA. Because the effectiveness of ambisonic decoding based on compressed sensing depends on the instantaneous sparsity of the sound field (i.e., a few active sound sources in the sound field), one can exploit the short-term sparsity of the sound field in the CS-based approach to enhance ambisonic decoding, as opposed to classical ambisonic decoding based on the least norm.

Once a sound field is accurately reconstructed using the CS approach, the ambisonic components can be upscaled to higher-order ambisonics. However, the success of the upscaling technique is affected by the complexity of a given sound field. If a few sound sources are located in a small subspace (e.g., a 2D surface), the upscaled ambisonics is a good replica of the original ambisonics. Otherwise, the quality of sound fields reconstructed from upscaled ambisonics would likely degrade. For instance, upscaling first-order ambisonics to second order in 2D space requires estimating two missing ambisonic signals, whereas in 3D space five signals would have to be estimated. Moreover, CS-based HOA upscaling is effective if the original sound field is sparse (i.e., a small number of sources are active at any time instance).

Another issue is that the CS measurement matrix (the spherical harmonics) should be incoherent to reduce the error in recovering a signal of higher dimension (e.g., the speaker signals). In other words, a matrix with random components would be a good choice for sensing a higher-dimensional signal. That constraint is not satisfied here, since a given sound field is not sampled randomly (i.e., at random intervals and in random directions), and as such the spherical harmonics would limit the effectiveness of the CS-based HOA method.
FIGURE 8 illustrates a process for managing an audio signal in accordance with an embodiment of this disclosure. A controller here may represent the processing device 210, and a memory element may be the memory 230, in FIGURE 2. The embodiment of the process shown in FIGURE 8 is for illustration only. Other embodiments of the process could be used without departing from the scope of this disclosure.
At operation 802, the controller receives the audio signal. The audio signal includes a plurality of ambisonic components. The audio signal can be received from a number of devices, such as, but not limited to, a mobile device, the Internet, a compact disc, and the like.
At operation 804, the controller separates the audio signal into a plurality of independent subcomponents. Each of the independent subcomponents is from a different source. Each of the plurality of ambisonic components is split into the independent subcomponents.
At operation 806, the controller decodes each of the independent subcomponents. At operation 808, the controller combines each of the decoded independent subcomponents into a speaker signal.
None of the description in this application should be read as implying that any particular element, step, or function is an essential element that must be included in the claim scope. The scope of patented subject matter is defined only by the claims. Moreover, none of the claims is intended to invoke 35 U.S.C. § 112(f) unless the exact words “means for” are followed by a participle.

Claims (15)

  1. An audio receiver, the audio receiver comprising:
    a memory configured to store an audio signal; and
    processing circuitry coupled to the memory, the processing circuitry configured to:
    receive the audio signal, the audio signal comprising a plurality of ambisonic components;
    separate the audio signal into a plurality of independent subcomponents, each of the independent subcomponents is from a different source, wherein each of the plurality of ambisonic components is split into the independent subcomponents;
    decode each of the independent subcomponents; and
    combine each of the decoded independent subcomponents into a speaker signal.
  2. The audio receiver of Claim 1, wherein decoding each of the independent subcomponents comprises the processing circuitry configured to:
    separate and decode each of the plurality of ambisonic components for each of the independent subcomponents into a plurality of frames;
    overlap each frame with at least one adjacent frame; and
    perform smoothing on the overlapping frames.
  3. The audio receiver of Claim 2, wherein a smoothing factor of the smoothing is based on a correlation between overlapping parts of the plurality of frames of the plurality of decoded ambisonic components.
  4. The audio receiver of Claim 3, wherein the correlation is derived by:
    $$\rho_{m}=\frac{\sum_{n=1}^{L/2}Sp_{m}(n)\,SpOld_{m}(n+L/2)}{\sqrt{\sum_{n=1}^{L/2}Sp_{m}^{2}(n)\,\sum_{n=1}^{L/2}SpOld_{m}^{2}(n+L/2)}},$$
    where m is the audio signal, L is a frame length, $Sp_m$ is a current frame, $SpOld_m$ is a previous frame, and $\rho_m$ is the correlation between overlapping parts of the plurality of frames of the plurality of decoded ambisonic components in the current frame and previous frame.
  5. The audio receiver of Claim 1, wherein the processing circuitry is configured to:
    mask a number of signals of the audio signal within a threshold.
  6. The audio receiver of Claim 5, wherein the threshold is set to comprise inaudible parts of the audio signal.
  7. The audio receiver of Claim 1, further comprising:
    a transceiver configured to transmit the speaker signal to a plurality of speakers.
  8. A method for managing an audio signal, the method comprising:
    receiving the audio signal, the audio signal comprising a plurality of ambisonic components;
    separating the audio signal into a plurality of independent subcomponents, each of the independent subcomponents being from a different source, wherein each of the plurality of ambisonic components is split into the independent subcomponents;
    decoding each of the independent subcomponents; and
    combining each of the decoded independent subcomponents into a speaker signal.
  9. The method of Claim 8, wherein decoding each of the independent subcomponents comprises:
    separating and decoding each of the plurality of ambisonic components for each of the independent subcomponents into a plurality of frames;
    overlapping each frame with at least one adjacent frame; and
    performing smoothing on the overlapping frames.
  10. The method of Claim 9, wherein a smoothing factor of the smoothing is based on a correlation between overlapping parts of the plurality of frames of the plurality of decoded ambisonic components.
  11. The method of Claim 10, wherein the correlation is derived by:
    \rho = \frac{\sum_{n} m_i(n)\, m_{i-1}(n)}{\sqrt{\sum_{n} m_i^2(n)}\, \sqrt{\sum_{n} m_{i-1}^2(n)}}
    where the sums run over the overlapping samples of the two frames, m is the audio signal, L is a frame length, m_i is a current frame, m_{i-1} is a previous frame, and \rho is the correlation between overlapping parts of the plurality of frames of the plurality of decoded ambisonic components in the current frame and the previous frame.
  12. The method of Claim 8, further comprising:
    masking a number of signals of the audio signal within a threshold.
  13. The method of Claim 12, wherein the threshold is set to comprise inaudible parts of the audio signal.
  14. The method of Claim 8, further comprising:
    transmitting the speaker signal to a plurality of speakers.
  15. A non-transitory computer readable medium embodying a computer program, the computer program comprising computer readable program code that when executed causes at least one processing device to:
    receive an audio signal, the audio signal comprising a plurality of ambisonic components;
    separate the audio signal into a plurality of independent subcomponents, each of the independent subcomponents being from a different source, wherein each of the plurality of ambisonic components is split into the independent subcomponents;
    decode each of the independent subcomponents; and
    combine each of the decoded independent subcomponents into a speaker signal.
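Outside the claim language itself, the overlap-and-smooth decoding recited in Claims 2 through 4 (and 9 through 11) can be sketched as follows; the frame layout and the linear mapping from the correlation \rho to the smoothing factor are hypothetical choices made only for this example. Each decoded channel would be passed through such a routine frame by frame.

    import numpy as np

    def smooth_overlapping_frames(decoded_frames, overlap):
        # decoded_frames: list of 1-D arrays, each of equal length L, where
        # consecutive frames overlap by `overlap` samples. The normalized
        # correlation between the overlapping parts drives the smoothing factor.
        L = len(decoded_frames[0])
        hop = L - overlap
        out = np.zeros(hop * (len(decoded_frames) - 1) + L)
        out[:L] = decoded_frames[0]
        for i in range(1, len(decoded_frames)):
            start = i * hop
            prev_tail = out[start:start + overlap]      # end of the previous frame
            cur_head = decoded_frames[i][:overlap]      # start of the current frame
            # Normalized correlation rho between the overlapping parts.
            denom = np.linalg.norm(prev_tail) * np.linalg.norm(cur_head)
            rho = float(prev_tail @ cur_head / denom) if denom > 0 else 1.0
            # Hypothetical map: high correlation -> plain crossfade (alpha = 1);
            # low correlation -> heavier averaging across the joint (alpha = 0.5).
            alpha = 0.5 * (1.0 + max(rho, 0.0))
            ramp = np.linspace(1.0 - alpha, alpha, overlap)
            out[start:start + overlap] = (1.0 - ramp) * prev_tail + ramp * cur_head
            out[start + overlap:start + L] = decoded_frames[i][overlap:]
        return out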
PCT/KR2015/000072 2014-01-03 2015-01-05 Method and apparatus for improved ambisonic decoding WO2015102452A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP15733320.4A EP3090574B1 (en) 2014-01-03 2015-01-05 Method and apparatus for improved ambisonic decoding
KR1020167021086A KR101862356B1 (en) 2014-01-03 2015-01-05 Method and apparatus for improved ambisonic decoding

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US201461923498P 2014-01-03 2014-01-03
US201461923518P 2014-01-03 2014-01-03
US201461923493P 2014-01-03 2014-01-03
US201461923508P 2014-01-03 2014-01-03
US61/923,493 2014-01-03
US61/923,508 2014-01-03
US61/923,518 2014-01-03
US61/923,498 2014-01-03
US14/589,710 2015-01-05
US14/589,710 US10020000B2 (en) 2014-01-03 2015-01-05 Method and apparatus for improved ambisonic decoding

Publications (1)

Publication Number Publication Date
WO2015102452A1 true WO2015102452A1 (en) 2015-07-09

Family

ID=53493716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2015/000072 WO2015102452A1 (en) 2014-01-03 2015-01-05 Method and apparatus for improved ambisonic decoding

Country Status (3)

Country Link
US (1) US10020000B2 (en)
KR (1) KR101862356B1 (en)
WO (1) WO2015102452A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9940937B2 (en) * 2014-10-10 2018-04-10 Qualcomm Incorporated Screen related adaptation of HOA content
CN107305770B (en) * 2016-04-21 2021-02-09 华为技术有限公司 Method, device and system for sampling and reconstructing audio signal
FR3060830A1 * 2016-12-21 2018-06-22 Orange SUB-BAND PROCESSING OF REAL AMBISONIC CONTENT FOR IMPROVED DECODING
US10657974B2 (en) * 2017-12-21 2020-05-19 Qualcomm Incorporated Priority information for higher order ambisonic audio data
US11270711B2 * 2017-12-21 2022-03-08 Qualcomm Incorporated Higher order ambisonic audio data
CN113345448B (en) * 2021-05-12 2022-08-05 北京大学 HOA signal compression method based on independent component analysis

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080040165A1 (en) * 2006-08-08 2008-02-14 Monsanto Technology Llc Transgenic crop financial systems and methods
US9135920B2 (en) * 2012-11-26 2015-09-15 Harman International Industries, Incorporated System for perceived enhancement and restoration of compressed audio signals

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080154584A1 (en) 2005-01-31 2008-06-26 Soren Andersen Method for Concatenating Frames in Communication System
US20100198601A1 (en) * 2007-05-10 2010-08-05 France Telecom Audio encoding and decoding method and associated audio encoder, audio decoder and computer programs
US20100305952A1 (en) 2007-05-10 2010-12-02 France Telecom Audio encoding and decoding method and associated audio encoder, audio decoder and computer programs
WO2009067741A1 (en) * 2007-11-27 2009-06-04 Acouity Pty Ltd Bandwidth compression of parametric soundfield representations for transmission and storage
US20120259442A1 (en) 2009-10-07 2012-10-11 The University Of Sydney Reconstruction of a recorded sound field
US20130216070A1 (en) * 2010-11-05 2013-08-22 Florian Keiler Data structure for higher order ambisonics audio data
US20120155653A1 (en) * 2010-12-21 2012-06-21 Thomson Licensing Method and apparatus for encoding and decoding successive frames of an ambisonics representation of a 2- or 3-dimensional sound field

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANDREW WABNITZ ET AL.: "A frequency-domain algorithm to upscale ambisonic sound scenes", 2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, March 2012 (2012-03-01)
ANDREW WABNITZ; NICOLAS EPAIN; ALISTAIR MCEWAN; CRAIG JIN: "Upscaling Ambisonic sound scenes using compressed sensing techniques", APPLICATIONS OF SIGNAL PROCESSING TO AUDIO AND ACOUSTICS (WASPAA), 2011 IEEE WORKSHOP ON,, 16 October 2011 (2011-10-16), pages 1 - 4, XP032011510
ANDREW WABNITZ; NICOLAS EPAIN; CRAIG T. JIN: "A frequency-domain algorithm to upscale ambisonic sound scenes", 2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2012), Kyoto, Japan, 25-30 March 2012, pages 385-388, XP032227141
See also references of EP3090574A4

Also Published As

Publication number Publication date
KR101862356B1 (en) 2018-06-29
US20150194161A1 (en) 2015-07-09
US10020000B2 (en) 2018-07-10
KR20160105509A (en) 2016-09-06

Similar Documents

Publication Publication Date Title
WO2015102452A1 (en) Method and apparatus for improved ambisonic decoding
JP4689625B2 (en) Adaptive mixed transform for signal analysis and synthesis
Adler et al. Audio inpainting
JP2022120119A (en) Method or apparatus for compressing or decompressing higher-order ambisonics signal representation
Miyabe et al. Blind compensation of interchannel sampling frequency mismatch for ad hoc microphone array based on maximum likelihood estimation
CA2213348C (en) Method and apparatus for efficient implementation of single-sideband filter banks providing accurate measures of spectral magnitude and phase
WO2020231230A1 (en) Method and apparatus for performing speech recognition with wake on voice
WO2011049416A2 (en) Apparatus and method encoding/decoding with phase information and residual information
WO2020111676A1 (en) Voice recognition device and method
JP2015520411A5 (en)
GB2514441A (en) Motion estimation using hierarchical phase plane correlation and block matching
WO2020045794A1 (en) Electronic device and control method thereof
CN109478407B (en) Encoding device for processing an input signal and decoding device for processing an encoded signal
WO2016204581A1 (en) Method and device for processing internal channels for low complexity format conversion
CN115335900A (en) Transforming panoramical acoustic coefficients using an adaptive network
WO2014092460A1 (en) Method of encoding and decoding audio signal and apparatus for encoding and decoding audio signal
EP3090574A1 (en) Method and apparatus for improved ambisonic decoding
WO2021167318A1 (en) Position detection method, apparatus, electronic device and computer readable storage medium
US11902757B2 (en) Techniques for unified acoustic echo suppression using a recurrent neural network
WO2015122752A1 (en) Signal encoding method and apparatus, and signal decoding method and apparatus
WO2022010189A1 (en) Apparatus and method for audio encoding/decoding robust to transition segment encoding distortion
WO2022031061A1 (en) Wpe-based reverberation removal apparatus using deep neural network-based virtual channel extension
Ullah et al. Semi-supervised transient noise suppression using OMLSA and SNMF algorithms
US6487526B1 (en) Vector correlator for speech VOCODER using optical processor
RU2648632C2 (en) Multi-channel audio signal classifier

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15733320; Country of ref document: EP; Kind code of ref document: A1)
REEP Request for entry into the european phase (Ref document number: 2015733320; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2015733320; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20167021086; Country of ref document: KR; Kind code of ref document: A)