CN106664490B - Monophonic or multichannel audio control interface - Google Patents
- Publication number
- CN106664490B CN106664490B CN201580035622.0A CN201580035622A CN106664490B CN 106664490 B CN106664490 B CN 106664490B CN 201580035622 A CN201580035622 A CN 201580035622A CN 106664490 B CN106664490 B CN 106664490B
- Authority
- CN
- China
- Prior art keywords
- audio
- gui
- received
- user
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 230000005236 sound signal Effects 0.000 claims abstract description 261
- 238000000034 method Methods 0.000 claims abstract description 47
- 230000015654 memory Effects 0.000 claims description 47
- 238000012545 processing Methods 0.000 claims description 23
- 230000005764 inhibitory process Effects 0.000 claims description 8
- 238000009877 rendering Methods 0.000 claims description 5
- 238000004891 communication Methods 0.000 claims description 3
- 238000002156 mixing Methods 0.000 claims description 3
- 230000002401 inhibitory effect Effects 0.000 claims description 2
- 230000003068 static effect Effects 0.000 description 43
- 238000005516 engineering process Methods 0.000 description 23
- 230000008859 change Effects 0.000 description 18
- 230000004044 response Effects 0.000 description 12
- 230000008569 process Effects 0.000 description 10
- 238000001514 detection method Methods 0.000 description 9
- 238000013500 data storage Methods 0.000 description 7
- 238000004458 analytical method Methods 0.000 description 5
- 230000006870 function Effects 0.000 description 5
- 238000013507 mapping Methods 0.000 description 5
- 230000004048 modification Effects 0.000 description 5
- 238000012986 modification Methods 0.000 description 5
- 238000010276 construction Methods 0.000 description 4
- 238000013461 design Methods 0.000 description 3
- 230000009467 reduction Effects 0.000 description 3
- 238000004378 air conditioning Methods 0.000 description 2
- 238000003491 array Methods 0.000 description 2
- 230000002708 enhancing effect Effects 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 238000005259 measurement Methods 0.000 description 2
- 230000002093 peripheral effect Effects 0.000 description 2
- 238000004088 simulation Methods 0.000 description 2
- 238000001228 spectrum Methods 0.000 description 2
- 230000001629 suppression Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 230000003213 activating effect Effects 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 230000000977 initiatory effect Effects 0.000 description 1
- 238000003780 insertion Methods 0.000 description 1
- 230000037431 insertion Effects 0.000 description 1
- 238000007689 inspection Methods 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000004321 preservation Methods 0.000 description 1
- 230000002265 prevention Effects 0.000 description 1
- 238000012958 reprocessing Methods 0.000 description 1
- 238000005096 rolling process Methods 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
- 238000012800 visualization Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/04—Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R29/00—Monitoring arrangements; Testing arrangements
- H04R29/008—Visual indication of individual signal levels
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/40—Visual indication of stereophonic sound image
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/155—Musical effects
- G10H2210/265—Acoustic effect simulation, i.e. volume, spatial, resonance or reverberation effects added to a musical sound, usually by appropriate filtering or delays
- G10H2210/295—Spatial effects, musical uses of multiple audio channels, e.g. stereo
- G10H2210/305—Source positioning in a soundscape, e.g. instrument positioning on a virtual soundstage, stereo panning or related delay or reverberation changes; Changing the stereo width of a musical source
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/091—Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith
- G10H2220/096—Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith using a touch screen
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/091—Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith
- G10H2220/101—Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith for graphical creation, edition or control of musical data or parameters
- G10H2220/106—Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith for graphical creation, edition or control of musical data or parameters using icons, e.g. selecting, moving or linking icons, on-screen symbols, screen regions or segments representing musical elements or parameters
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/155—User input interfaces for electrophonic musical instruments
- G10H2220/211—User input interfaces for electrophonic musical instruments for microphones, i.e. control of musical parameters either directly from microphone signals or by physically associated peripherals, e.g. karaoke control switches or rhythm sensing accelerometer within the microphone casing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/155—User input interfaces for electrophonic musical instruments
- G10H2220/351—Environmental parameters, e.g. temperature, ambient light, atmospheric pressure, humidity, used as input for musical purposes
- G10H2220/355—Geolocation input, i.e. control of musical parameters based on location or geographic position, e.g. provided by GPS, WiFi network location databases or mobile phone base station position databases
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2230/00—General physical, ergonomic or hardware implementation of electrophonic musical tools or instruments, e.g. shape or architecture
- G10H2230/005—Device type or category
- G10H2230/015—PDA [personal digital assistant] or palmtop computing devices used for musical purposes, e.g. portable music players, tablet computers, e-readers or smart phones in which mobile telephony functions need not be used
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2430/00—Signal processing covered by H04R, not provided for in its groups
- H04R2430/01—Aspects of volume control, not necessarily automatic, in sound systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Acoustics & Sound (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Multimedia (AREA)
- Otolaryngology (AREA)
- Circuit For Audible Band Transducer (AREA)
- Stereophonic System (AREA)
Abstract
A method of processing audio may include receiving, by a computing device, multiple real-time audio signals output by multiple microphones communicatively coupled to the computing device. The computing device may output, to a display, a graphical user interface (GUI) that presents audio information associated with the received audio signals. One or more of the received audio signals may be processed based on user input associated with the audio information presented via the GUI, to generate one or more processed audio signals. The one or more processed audio signals may be output, for example, to one or more output devices, such as loudspeakers or headphones.
Description
CROSS REFERENCE
This application claims the benefit of U.S. Provisional Application No. 62/020,928, filed July 3, 2014, which is incorporated herein by reference in its entirety.
Technical field
The present disclosure relates generally to monophonic or multichannel audio generation and, more particularly, to techniques for recording audio with a computing device.
Background
Advances in technology have produced smaller and more powerful computing devices. For example, a variety of portable personal computing devices currently exist, including wireless telephones such as mobile and smart phones, tablet computers, and laptop computers, which are small, lightweight, and easily carried by users. These devices can transmit voice and data packets over wireless networks. In addition, many such devices incorporate additional functionality, such as a digital still camera, a digital video camera, a digital recorder, and an audio file player. Such devices can also process executable instructions, including software applications such as a web browser application that can be used to access the Internet. As such, these devices can include significant computing capabilities. For example, a computing device such as a wireless telephone may include one or more microphones to capture audio signals for storage and playback. As another example, a computing device may record multiple channels of audio simultaneously in real time. A user of the computing device may select when to start capturing the audio signals and when to stop capturing them.
Summary of the invention
Mobile computing devices such as smart phones, tablet computers, laptop computers, "phablets", convertibles, and wearable computing devices increasingly incorporate the ability to record multiple channels of audio in real time. These mobile computing devices may include microphone arrays that enable the capture of multiple different audio channels. The present disclosure relates generally to techniques for recording monophonic or multichannel audio in real time using a mobile computing device. The present disclosure also relates generally to providing feedback about audio to a user during playback, or providing feedback in real time while the audio is being recorded. By providing feedback in real time or during playback, the experience of the user may be enhanced, the quality of the playback may be enhanced, or the quality of the captured audio may be enhanced. For example, the present disclosure describes techniques for enabling a user of a mobile computing device to adjust, in real time, parameters associated with audio channels.
In one example, a method may include receiving, by a computing device, multiple real-time audio signals output by multiple microphones communicatively coupled to the computing device. The method may include outputting, to a display, a graphical user interface (GUI) for presenting audio information associated with the received audio signals; processing one or more of the received audio signals based on user input associated with the audio information presented via the GUI, to generate one or more processed audio signals; and outputting the one or more processed audio signals.
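The claims do not prescribe a particular implementation of the processing step. As a minimal sketch, assuming per-channel gain is the user-adjustable parameter exposed by the GUI (the names `process_audio`, `mic_signals`, and `gain_per_channel` are illustrative, not from the patent), the step of processing received signals based on user input might look like:

```python
import numpy as np

def process_audio(mic_signals, gain_per_channel):
    """Apply per-channel gains chosen by the user via a GUI.

    mic_signals: dict of channel name -> 1-D array of samples
    gain_per_channel: dict of channel name -> linear gain (defaults to 1.0)
    Returns a dict of processed signals, one per input channel.
    """
    processed = {}
    for channel, samples in mic_signals.items():
        gain = gain_per_channel.get(channel, 1.0)
        processed[channel] = samples * gain
    return processed

# Example: attenuate the left channel by half, as if a GUI slider were moved.
mics = {
    "left":   np.array([0.5, -0.5, 0.25]),
    "center": np.array([0.1, 0.2, 0.3]),
}
out = process_audio(mics, {"left": 0.5})
```

Channels without a user-supplied gain pass through unchanged, which matches the claim language in which only "one or more of the received audio signals" are processed.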
In another example, a method may include receiving, by a computing device, multiple real-time audio signals output by multiple microphones communicatively coupled to the computing device. The method may include outputting, to a display, a graphical user interface (GUI) for presenting noise information associated with one or more of the received audio signals. The method may include: processing one or more of the received audio signals based on user input associated with the noise information presented via the GUI, to generate one or more processed audio signals; and outputting the one or more processed audio signals.
In another example, an apparatus may include: a memory; and one or more processors configured to receive multiple real-time audio signals output by multiple microphones, and to generate audio information associated with the received audio signals for storage in the memory. The one or more processors may be configured to: output, for display, graphical content of a graphical user interface (GUI) for presenting audio information associated with the received audio signals; process one or more of the received audio signals based on user input associated with the audio information presented via the GUI, to generate one or more processed audio signals; and output the one or more processed audio signals.
In another example, an apparatus may include: a memory; and one or more processors configured to receive multiple real-time audio signals output by multiple microphones, and to generate noise information associated with the received audio signals for storage in the memory. The one or more processors may be configured to: output, for display, graphical content of a graphical user interface (GUI) for presenting noise information associated with one or more of the received audio signals; process one or more of the received audio signals based on user input associated with the noise information presented via the GUI, to generate one or more processed audio signals; and output the one or more processed audio signals.
In another example, a device may include: means for receiving multiple real-time audio signals output by multiple microphones communicatively coupled to a computing device; means for outputting a graphical user interface (GUI) that presents audio information associated with the received audio signals; means for processing one or more of the received audio signals based on user input associated with the audio information presented via the GUI, to generate one or more processed audio signals; and means for outputting the one or more processed audio signals.
In another example, a device may include: means for receiving multiple real-time audio signals output by multiple microphones communicatively coupled to the computing device; means for outputting a graphical user interface (GUI) that presents noise information associated with one or more of the received audio signals; means for processing one or more of the received audio signals based on user input associated with the noise information presented via the GUI, to generate one or more processed audio signals; and means for outputting the one or more processed audio signals.
In another example, a non-transitory computer-readable storage medium has instructions stored thereon that, when executed, may cause one or more processors of a computing device to: receive multiple real-time audio signals output by multiple microphones; output, to a display, graphical content of a graphical user interface (GUI) to present noise information associated with one or more of the received audio signals; process one or more of the received audio signals based on user input associated with the noise information presented via the GUI, to generate one or more processed audio signals; and output the one or more processed audio signals.
In another example, a non-transitory computer-readable storage medium has instructions stored thereon that, when executed, may cause one or more processors of a computing device to: receive multiple real-time audio signals output by multiple microphones; output, to a display, graphical content of a graphical user interface (GUI) to present audio information associated with the received audio signals; process one or more of the received audio signals based on user input associated with the audio information presented via the GUI, to generate one or more processed audio signals; and output the one or more processed audio signals.
The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description, the drawings, and the appended claims.
Brief description of the drawings
Fig. 1 is a diagram of a computing environment, in accordance with one or more techniques of this disclosure.
Figs. 2A to 2C are diagrams, intended to be viewed together, of multiple views of an example device that performs multichannel audio generation.
Figs. 3A to 3G are various examples of graphical user interfaces, in accordance with one or more techniques of this disclosure.
Fig. 4 is a flowchart illustrating example operations in accordance with one or more techniques of this disclosure.
Fig. 5 is a flowchart illustrating example operations in accordance with one or more techniques of this disclosure.
Fig. 6 is a flowchart illustrating example operations in accordance with one or more techniques of this disclosure.
Detailed description
The present disclosure describes various examples of computing devices (such as communication devices and other devices) configured to record monophonic or multichannel audio in real time, and to adjust parameters associated with the multichannel audio in real time or during playback. Currently, many computing devices, such as laptop computers, smart phones, phablets, wearable computing devices, and tablet computers, are able to record monophonic or multichannel audio. Recording multichannel audio, also referred to as surround recording, may be realized using, for example, Advanced Audio Coding (AAC) or other codecs. Surround recordings may have several different channel configurations and formats, such as 5.1, 7.1, and 9.1 channel audio formats, or other surround-sound audio recording formats. These computing devices may also be able to perform surround-sound audio playback (e.g., real-time playback or non-real-time playback) of the recorded surround-sound audio. The playback may involve transmitting audio information to an output device, such as a loudspeaker, using an output interface (e.g., Bluetooth, HDMI (High-Definition Multimedia Interface), or another output interface).
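The 5.1, 7.1, and 9.1 formats named above differ only in the number and placement of channels. As an illustrative sketch (the channel labels are conventional abbreviations such as FL = front left and LFE = low-frequency effects, and are not taken from the patent), the layouts can be tabulated as:

```python
# Hypothetical channel layouts for the surround formats named above.
SURROUND_LAYOUTS = {
    "5.1": ["FL", "FR", "C", "LFE", "SL", "SR"],
    "7.1": ["FL", "FR", "C", "LFE", "SL", "SR", "BL", "BR"],
    "9.1": ["FL", "FR", "C", "LFE", "SL", "SR", "BL", "BR", "TFL", "TFR"],
}

def channel_count(layout_name):
    """Total number of channels (full-range plus LFE) in a named layout."""
    return len(SURROUND_LAYOUTS[layout_name])
```

In the "N.1" naming convention, N counts the full-range channels and ".1" the LFE channel, so a 5.1 recording carries six discrete channels in total.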
To perform surround-sound recording (SSR, or multichannel recording), a computing device may use multiple physical microphones. The multiple microphones may be referred to as a "microphone array". Each microphone may record an audio signal for one or more channels of audio. For example, one microphone may record the sound of a center audio channel, and another microphone may record the sound of a left audio channel.
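The microphone-to-channel example above (one microphone feeding the center channel, another the left) amounts to a routing table. A minimal sketch, with assumed microphone names and an assumed one-mic-per-channel mapping (the patent does not specify either):

```python
# Illustrative routing from physical microphones in an array to audio
# channels; the names and the one-to-one mapping are assumptions.
MIC_TO_CHANNEL = {
    "mic_0": "center",
    "mic_1": "left",
    "mic_2": "right",
}

def route(mic_frames):
    """Route each microphone's latest frame of samples to its channel."""
    return {MIC_TO_CHANNEL[m]: frame for m, frame in mic_frames.items()}
```

In practice a microphone may contribute to more than one channel (as the text notes, "one or more channels"), in which case the mapping would be many-to-many and the routing step would mix contributions rather than copy frames.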
However, conventional SSR systems and SSR-capable devices do not provide real-time feedback to the user of the device during recording or during playback. SSR-capable devices also do not allow for active user input during recording to effect changes to the recording in real time. In some examples, one or more techniques of this disclosure enable a device (such as an SSR-capable device) to receive user input in real time while recording audio with one or more microphones (e.g., while performing SSR). In other examples, one or more techniques of this disclosure enable a device (such as an SSR-capable device) to receive user input during the playback of previously recorded audio. In still other examples, one or more techniques of this disclosure enable a device (such as an SSR-capable device) to receive user input in real time while recording audio with one or more microphones (e.g., while performing SSR), and then to store the resulting real-time audio, modified or unmodified, for later playback, in addition to or instead of presenting the resulting real-time audio as it is obtained.
In some examples, one or more techniques of this disclosure enable a computing device (such as an SSR-capable device) to output information to a user via a graphical user interface (GUI) presented on a display of the device while recording audio with one or more microphones, or during the playback of previously recorded audio. For example, the device may display the GUI in response to receiving user input requesting activation of a multimedia application. The information presented to the user via the GUI (e.g., by or through the GUI) may relate to any aspect of audio recording or playback. For example, the information may be audio-related feedback. The GUI may include information about, or otherwise related to, any microphone, any output device, any channel, any audio signal output by a microphone, and any processing of the recorded audio. The GUI may include one or more graphical representations, so that the user can visualize, on the display, audio information related to the recorded audio. The audio-related feedback may inform the user of various aspects related to the recording, the real-time playback, or the playback of previously recorded content. This may enable the user, or the device when so configured, to make decisions based on the audio information to change, modify, or otherwise alter the audio (in real time or non-real time) during recording or playback.
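One concrete form of the audio-related feedback described above is a per-channel level meter. A minimal sketch of the level computation such a GUI might display (the function name and the dBFS convention are assumptions, not disclosed in the patent):

```python
import math

def rms_level_db(samples):
    """RMS level of one channel's sample block, in dB relative to full
    scale (samples assumed in [-1.0, 1.0]); -inf for an empty or silent
    block."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms) if rms > 0.0 else float("-inf")
```

A GUI could call this on each channel's most recent block of samples and draw the result as a bar per channel, giving the user the visualized, per-channel feedback the text describes.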
Depending on the particular context, the term "channel" is sometimes used to indicate a signal path, and at other times to indicate a signal carried by such a path.
Depending on context, references to an "audio signal" may indicate different things. For example, the audio signal that a microphone receives, converts, or otherwise captures may be considered an audio signal, or more specifically one or more sound waves. As another example, the output of a microphone may be an audio signal representing sound, such as a received sound wave or a combination of sound waves. Depending on the complexity of the microphone, the signal output by the microphone may be an analog or digital representation of the received sound wave or combination of sound waves. The analog or digital representation may be an analog or digital signal, so that the audio signal output by the microphone may be in analog or digital signal form. For example, a microphone may be configured to receive an audio signal in the form of one or more sound waves, and to output an audio signal in the analog or digital domain.
Throughout this disclosure, real-time audio is distinguished from playback of previously recorded audio. Depending on context, real-time audio may refer to the recording of audio or to the real-time presentation of audio as it is recorded. Depending on context, playback may refer to audio that was previously recorded but saved in real time, or otherwise stored in memory, for later playback. It should be understood that recording audio using one or more microphones may result in the use of temporary storage space (such as buffer space), permanent storage space (such as hard-drive space), or a combination thereof, accessible by one or more processors of the device, to provide real-time presentation of the recorded audio. In some examples, while recording audio, the device may process the audio for output to one or more loudspeakers immediately or relatively immediately. Although the device's storage space may be used for various processing of the recorded audio, the processing delay is not intended to mean that, relative to playback, there is no real-time presentation of the recorded audio. In some examples, the term "record" and variations thereof may mean "convert" or otherwise "capture," together with their corresponding variations. In other examples, the term "record" and variations thereof may mean "convert" or otherwise "capture" and their variations, with the "recorded" audio being stored in storage space for later playback, although it may also be processed for real-time presentation. In other words, real-time presentation of recorded audio is intended to refer to the technique applied while the audio is being recorded. Depending on context, playback refers to the case in which the audio was recorded, and usually stored, before the playback occurs.
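As a non-normative illustration of the temporary-buffer versus permanent-storage distinction above, the following minimal sketch shows audio frames written to a small temporary buffer for relatively immediate presentation while optionally also being appended to permanent storage for later playback (all class and method names here are illustrative, not from the disclosure):

```python
from collections import deque

class RecordAndPresent:
    """Sketch: frames captured from a microphone go into a temporary
    buffer (for real-time presentation) and, optionally, into permanent
    storage (for later playback)."""

    def __init__(self, buffer_frames=8):
        self.buffer = deque(maxlen=buffer_frames)  # temporary (buffer) space
        self.storage = []                          # permanent space for playback

    def capture(self, frame, store=True):
        self.buffer.append(frame)                  # real-time path
        if store:
            self.storage.append(frame)             # later-playback path
        return self.present_realtime()

    def present_realtime(self):
        # present the most recent frame "relatively immediately"
        return self.buffer[-1]

    def playback(self):
        # playback reads previously stored audio, not the live buffer
        return list(self.storage)
```

Note that the buffer is bounded while the storage grows, mirroring the idea that real-time presentation only ever needs the most recent material.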
Unless the context indicates otherwise, a reference to a "position" of a microphone of a multi-microphone audio sensing device indicates the position of the center of the acoustically sensitive face of that microphone. Unless indicated otherwise, the term "series" is used to indicate a sequence of two or more items. The term "logarithm" is used to indicate the base-ten logarithm, although extensions of such an operation to other bases are within the scope of the invention. The term "frequency component" is used to indicate one of a set of frequencies or frequency bands of a signal, such as a sample of a frequency-domain representation of the signal (for example, as produced by a fast Fourier transform) or a subband of the signal (for example, a Bark-scale or mel-scale subband).
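To make the "frequency component" usage above concrete, the following sketch computes samples of a frequency-domain representation of a signal with a direct discrete Fourier transform and returns (frequency, magnitude) pairs. It is an illustrative helper only (the function name is ours), not a method of the disclosure:

```python
import cmath
import math

def frequency_components(samples, sample_rate):
    """Sketch: return (frequency_hz, magnitude) pairs for the
    non-negative frequency bins of a block of samples, i.e. samples of
    a frequency-domain representation of the signal."""
    n = len(samples)
    comps = []
    for k in range(n // 2 + 1):  # non-negative frequencies only
        acc = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                  for t in range(n))
        comps.append((k * sample_rate / n, abs(acc) / n))
    return comps
```

A fast Fourier transform computes the same bins more efficiently; the direct sum is used here only for clarity.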
In some examples, one or more techniques of the invention are equally applicable to monophonic audio. For example, depending on context, examples involving multichannel audio may apply equally to monophonic audio. Therefore, although the term "monophonic" may not appear throughout this disclosure, one or more of the techniques described herein may be implemented in examples involving monophonic audio, such as when a device has one microphone, or when a multi-channel signal is downmixed to a single channel.
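One common way a multi-channel signal may be mixed down to a single channel, as mentioned above, is by averaging the channels sample by sample. The following is a minimal sketch under that assumption (the helper name is ours; real downmixers typically apply per-channel gains):

```python
def downmix_to_mono(channels):
    """Sketch: downmix a multi-channel signal (a list of equal-length
    per-channel sample lists) to one channel by averaging each frame."""
    n_channels = len(channels)
    return [sum(frame) / n_channels for frame in zip(*channels)]
```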
Unless indicated otherwise, any disclosure of an operation of an apparatus having a particular feature is also expressly intended to disclose a method having an analogous feature (and vice versa), and any disclosure of an operation of an apparatus according to a particular configuration is also expressly intended to disclose a method according to an analogous configuration (and vice versa). The term "configuration" may be used in reference to a method, apparatus, and/or system as indicated by its particular context. Unless the particular context indicates otherwise, the terms "method," "process," "procedure," and "technique" are used generically and interchangeably. Unless the particular context indicates otherwise, the terms "apparatus" and "device" are also used generically and interchangeably. The terms "element" and "module" may be used to indicate a portion of a larger configuration. Unless expressly limited by its context, the term "system" is used here to indicate any of its ordinary meanings, including "a group of elements that interact to serve a common purpose." Any incorporation by reference of a portion of a document shall also be understood to incorporate definitions of terms or variables that are referenced within the portion, where such definitions appear elsewhere in the document, as well as any figures referenced in the incorporated portion.
Referring to Fig. 1, an example of a device operable to perform monophonic or multichannel audio generation is disclosed and generally designated 102. In other examples, device 102 may have more or fewer components than those illustrated in Fig. 1.
Device 102 includes one or more processors 103, and data storage media 109 (such as temporary or permanent storage space) accessible by the one or more processors 103. The one or more processors 103 of device 102 are configured to execute instructions to implement corresponding processes. Accordingly, as used herein, executing or otherwise implementing a process refers to the one or more processors 103 of device 102 (or, in other examples, other processors of other devices) executing one or more instructions or operations corresponding to that process. For example, device 102 may include an operating system. In some examples, the operating system may be a typical operating system found on a personal computing device (such as a laptop computer, desktop computer, tablet computer, smartphone, etc.), such as a graphical operating system. The operating system may be stored on data storage media 109.
Examples of the one or more processors 103 may include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a general-purpose microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other equivalent integrated or discrete logic circuitry. The one or more processors 103 may include other types of processors, and any combination of one or more of these examples. The one or more processors 103 may be single-core or multi-core.
Examples of data storage media 109 may include, but are not limited to, one or more computer-readable storage media, such as, but not limited to, random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, or any other media that can be used to carry or store desired program code in the form of instructions and/or data structures and that can be accessed by a computer or processor. In some examples, data storage media 109 may be considered non-transitory storage media. The term "non-transitory" may indicate that the storage media are not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that data storage media 109 are non-movable. Data storage media 109 may include other types of data storage media, and any combination of one or more of these examples.
Device 102 may include or be coupled to one or more input devices 105. Input devices 105 may include a keyboard, a mouse, a touch-screen display, or other input devices. Although described separately from the one or more input devices 105, it should be appreciated that in examples where display 106 is a touch-screen display, display 106 constitutes an input device. Similarly, although described separately from the one or more input devices 105, it should be appreciated that the one or more microphones 104 constitute input devices.
Device 102 may include or be coupled to one or more audio output devices 107. The one or more audio output devices 107 may include one or more loudspeakers. Although described separately from the one or more output devices 107, it should be appreciated that headphones 112 constitute an audio output device.
Device 102 may include or be coupled to multiple microphones (such as a multi-microphone array). For example, the multi-microphone array may include a first microphone 104a, a second microphone 104b, and a third microphone 104c. Although Fig. 1 illustrates three microphones, in other examples device 102 may be coupled to more or fewer than three microphones. The multiple microphones may be used to support spatial audio coding in two or three dimensions. Examples of spatial audio coding methods that may be supported with a multi-microphone array include 5.1 surround, 7.1 surround, Dolby Surround, Dolby Pro Logic, or any other phase-amplitude matrix stereo format; Dolby Digital, DTS, or any discrete multichannel format; and wavefield synthesis. One example of five-channel coding includes front-left, front-right, center, back-left, and back-right channels.
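The five-channel example above can be represented in code as a simple layout table. The following sketch is purely illustrative: the azimuth angles shown follow the common ITU-style 5.1 placement and are our own assumption, not part of this disclosure.

```python
# Illustrative mapping of the five-channel coding example to speaker
# azimuths in degrees, with 0 degrees directly in front of the listener
# (angles assumed, per common surround practice; not from the source).
FIVE_CHANNEL_LAYOUT = {
    "front-left":  -30,
    "front-right":  30,
    "center":        0,
    "back-left":  -110,
    "back-right":  110,
}

def channel_count(layout):
    """Number of full-bandwidth channels in a layout (e.g. 5 for 5.1,
    which also carries a low-frequency-effects channel not shown)."""
    return len(layout)
```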
Device 102 may include or be coupled to display 106, headphones 112, or both. Device 102 may include an audio analyzer 114 and a GUI 120. Audio analyzer 114 may include software, hardware, firmware, or a combination thereof. Audio analyzer 114 may be stored in the data storage media 109 accessible by the one or more processors 103 of device 102. In these examples, any process associated with audio analyzer 114 may be carried out by the one or more processors 103 executing, from memory 109, one or more instructions associated with audio analyzer 114. As shown in Fig. 1, audio analyzer 114 is surrounded by a dashed line to illustrate that the one or more processors 103 can execute instructions corresponding to audio analyzer 114 stored in memory 109. In some examples, audio analyzer 114 may be an application that, when executed by the one or more processors 103 of device 102, can generate GUI 120, GUI data 150, or both.
The one or more processors 103 of device 102 generate GUI 120 for display. GUI 120 is transmitted to display 106 for presentation thereon. GUI data 150 stored in memory 109 may include executable instructions that, when executed, can generate GUI 120 for presentation by display 106. GUI data 150 may be a part of audio analyzer 114. In examples in which audio analyzer 114 is an application, GUI data 150 may be a part of that application and, therefore, graphics data corresponding to audio analyzer 114. In some examples, audio analyzer 114 may be an application that, when executed by the one or more processors 103, can cause GUI data 150 to be generated, accessed, or executed. For example, according to some examples, audio analyzer 114, when executed, may generate graphical user interface (GUI) 120 using graphics data 150. As another example, audio analyzer 114 may cause device 102 to render a user interface, such as GUI 120. Audio analyzer 114 may provide GUI 120 to display 106.
GUI data 150 may include data related to one or more input signals 108, one or more audio signals 110, or a combination thereof. As identified above, GUI data 150 may be stored by audio analyzer 114 in a memory coupled to or contained in device 102. In a particular example, audio signals 110 may be compressed, and may occupy less memory than input signals 108.
GUI 120 may include one or more graphical representations, so that the user can visualize, on the display, audio information related to the recorded audio. The audio-related feedback may inform the user of various aspects related to the recording, to real-time playback, or to playback of previously recorded content. The user, or the device when so configured, may make determinations based on the audio information to change, modify, or otherwise alter the audio (in real time or not) during playback. For example, a user or the device may adjust audio parameters, apply a filter, or make other adjustments in real time while recording or during playback, which can improve the quality of the recorded audio (such as surround-sound audio). As another example, the audio-related feedback presented to the user via the device may enable the user to select appropriate options to change or otherwise adjust the quality of the recorded audio, whether in real time or during playback. For example, based on the audio feedback information presented to the user, the user can interact with GUI 120 to adjust, in real time, the volume level of an audio channel or other characteristics of the audio, either while recording the audio or during playback.
In some examples, GUI 120 may include one or more graphical representations (such as microphone icons) corresponding to the microphones 104 with which device 102 records audio. GUI 120 may include one or more graphical representations (such as speaker icons) corresponding to the audio output devices used to output the recorded audio. In some examples, GUI 120 may include three graphical audio-channel representations (such as three speaker icons), one for each of microphones 104a, 104b, and 104c, because audio analyzer 114 can automatically configure the number of surround-sound channels based on the number of microphones. In other examples, three speaker icons may be displayed because the user selected a three-channel surround setup option from multiple options using GUI 120. Other examples of the audio information that GUI 120 may include are provided throughout this disclosure, as GUI 120 may include any audio information disclosed herein.
During operation of device 102, audio analyzer 114 may receive multiple input signals (such as input signals 108a, 108b, and 108c) from multiple microphones (such as microphones 104a, 104b, and 104c). For example, audio analyzer 114 may receive a first input signal 108a from microphone 104a, a second input signal 108b from microphone 104b, and a third input signal 108c from microphone 104c. Input signals 108 may correspond to one or more sound sources. Each of microphones 104a, 104b, and 104c may convert received sound waves to an analog or digital audio signal. In these examples, each of the first, second, and third input signals 108a, 108b, and 108c may be considered an audio signal, whether analog or digital.
User 118 may interact with device 102 via the presented GUI 120 and a user input device 105 (such as display 106, in examples where the display is a touch screen). For example, GUI 120 may include one or more selectable options depicted as 140. User 118 may select at least one of the selectable options 140, and audio analyzer 114 may generate audio signals 110 from input signals 108 based on the selection. For example, the selectable options 140 may include any graphical representation associated with any feature or process that is associated with audio analyzer 114, microphones 104, output devices 107, input signals 108, audio signals 110, other related audio information, and so on.
In some examples, audio analyzer 114 may be referred to as an audio generation application, because audio analyzer 114 can output processed signals (that is, signals on which the audio analyzer has performed processing). In other examples, as described herein, audio analyzer 114 may not only generate audio but may also be used by device 102 to control when, if at all, audio is stored in memory 109. In these examples, audio analyzer 114 may also be referred to as an audio storage application. For example, audio analyzer 114 may store the input signals 108a, 108b, and 108c received from microphones 104a, 104b, and 104c, respectively. As another example, audio analyzer 114 may not store the input signals 108a, 108b, and 108c received from microphones 104a, 104b, and 104c. Rather, audio analyzer 114 may store audio signals 110 (that is, the signals output by audio analyzer 114, whether modified or unmodified). In yet another example, audio analyzer 114 may store the input signals 108a, 108b, and 108c received from microphones 104a, 104b, and 104c, and audio analyzer 114 may also store audio signals 110. The stored signals, whether input signals 108 or audio signals 110, can be used for playback. In these examples, during playback, the audio analyzer may or may not receive the stored signals. In examples involving audio analyzer 114 receiving stored signals, audio analyzer 114 can process the stored signals in the same manner as live signals (such as input signals 108).
User 118 may select a selectable option 140 using any input device of device 102 (including, for example, display 106). For example, audio analyzer 114 may receive a selection 130 (otherwise referred to as input data 130 indicating the selection) from the input device. In one example, audio analyzer 114 may output audio signals 110 to an audio output device 107, such as headphones 112 or one or more loudspeakers. The number of channels corresponding to the output device (such as the two channels of stereo headphones: left and right) may be the same as, fewer than, or greater than the number of microphones 104 from which device 102 receives input to generate audio signals 110. User 118 may monitor or listen to audio signal 110 using any output device capable of playing audio signal 110 (or, when audio signal 110 includes signals for multiple channels, a subset of the included signals), such as headphones 112 or a loudspeaker. For example, user 118 may detect the static-noise level of audio signal 110, and may use GUI 120 to select a noise-suppression (attenuation) option (such as a selectable option 140) to reduce the static-noise level of subsequently generated audio signals 110. In this example and others, real-time or dynamic corrections or other changes may be made to the audio signals subsequently received by audio analyzer 114, based on real-time input received from user 118 in response to past audio signals 110 output by audio analyzer 114. It will be appreciated that, at the time user 118 provides any input to affect any processing performed by audio analyzer 114, the past audio signal 110 may then have been the current (or real-time) audio signal 110. In this manner, audio analyzer 114 can enable the user to make real-time adjustments while the audio is being received and presented using one or more output devices, so as to change (such as by enhancing) its quality based on the preferences of user 118.
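One simple form of the noise suppression (attenuation) described in the static-noise example above is to attenuate samples whose magnitude falls below a threshold. The following sketch assumes that approach and invented parameter values; the disclosure does not specify a particular suppression algorithm:

```python
def suppress_static_noise(samples, threshold=0.05, attenuation=0.1):
    """Sketch: samples whose magnitude is below `threshold` are treated
    as static noise and scaled down by `attenuation`; louder samples
    pass through unchanged (threshold and attenuation are assumed)."""
    return [s * attenuation if abs(s) < threshold else s for s in samples]
```

In the workflow described in the text, selecting the noise-suppression option via GUI 120 would cause such a function to be applied to subsequently received input signals.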
In other examples, GUI 120 may enable the user to modify a set of rules stored in memory 109, wherein device 102 automatically effects changes to the recorded audio, according to the rules, based on the occurrence of trigger events defined by the rules (for example, if EVENT, then ACTION). The event in a rule may be a true/false determination of the presence of defined audio information. The action in a rule may be responsive to a determination that the event has occurred (or has not occurred). For example, user 118 may define a rule such that the device automatically downmixes or upmixes based on the number of microphones in use and the number of audio output devices in use. If the numbers are equal, no change is needed. However, if during recording, such as when a five-microphone array is used in conjunction with a five-loudspeaker surround setup, the rule may be processed such that, in the event that one or more loudspeakers become inoperable or are otherwise powered off, the device automatically downmixes the multichannel audio. Similarly, if during recording, such as when a five-microphone array is used in conjunction with a five-loudspeaker surround setup, the rule may be processed such that, in the event that one or more loudspeakers become operable or are otherwise powered on, the device automatically upmixes the multichannel audio.
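The if-EVENT-then-ACTION rule just described can be sketched as a single comparison of microphone and loudspeaker counts. This is a minimal illustration (the function name and return values are ours):

```python
def evaluate_mix_rule(mic_count, active_speaker_count):
    """Sketch of the example rule: compare the number of microphones in
    use with the number of operable loudspeakers and choose an action."""
    if active_speaker_count == mic_count:
        return "no-change"   # equal counts: nothing to do
    if active_speaker_count < mic_count:
        return "downmix"     # e.g. a loudspeaker became inoperable
    return "upmix"           # e.g. a loudspeaker was powered back on
```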
Audio analyzer 114 may generate multiple audio signals (such as audio signals 110) from input signals 108 based on receiving a selection 130 of a selectable option 140, as described with reference to Figs. 6 to 13. In other words, audio analyzer 114 can produce modified or unmodified input signals (referred to as audio signals 110). By default, audio analyzer 114 may output the unmodified input signals 108 rather than audio signals 110. The audio analyzer may generate the audio signals according to a process corresponding to the selected option represented by input data 130. A modified input signal (that is, an audio signal 110) refers to one or more input signals 108 subsequently received after input data 130 is received, as modified by audio analyzer 114 according to the process corresponding to input data 130. A modified input signal may refer to the audio data itself being modified (such as by applying a filter, or by mixing together two or more signals 110 associated with two different channels of a multi-channel signal), or to data corresponding to or otherwise associated with audio signal 110 being modified, such as changed channel information so that any signal can be rerouted to a different output device, and so on. For example, by using GUI 120 and selecting an appropriate option 140, a user may move the sound emitted from a center loudspeaker to another loudspeaker, forming an empty space around the center loudspeaker. As another example, GUI 120 may enable the user to adjust channel volume levels (such as by adjusting channel gain up or down), audio position, loudspeaker position, and other recording parameters. After a first modification (such as one based on receiving one or more user instructions indicated by input data 130), one or more further modifications may occur. Whenever a selectable option 140 affecting audio processing is selected, audio analyzer 114 can adjust the processing of one or more input signals 108 accordingly, so that subsequent audio signals 110 are output according to the user's preferences. Although Fig. 1 depicts audio signals 110 being output by audio analyzer 114, it should be understood that the audio analyzer may be configured to output unmodified input signals 108 for one or more channels, and modified input signals (that is, audio signals 110) for one or more other channels.
Audio analyzer 114 can process input signals 108 to generate audio signals 110. Audio analyzer 114 can generate several differently directed channels (such as audio signals 110) from input signals 108, thereby upmixing input signals 108. For example, input signals 108 may correspond to a first number of channels associated with a first number (such as three) of microphones (such as microphones 104a to c). Audio signals 110 may correspond to a second number of channels, and the second number may be higher or lower than the first number, the latter case involving downmixing input signals 108, in contrast to upmixing input signals 108. For example, for a 5.1 surround-sound scheme, audio signals 110 may correspond to five channels. Audio analyzer 114 may upmix input signals 108 to generate audio signals 110, so that each signal (or channel) of audio signals 110 can be played back (that is, output) by a different loudspeaker in a loudspeaker array having the second number of loudspeakers.
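The channel-count change described above (first number of input channels, second number of output channels) can be sketched crudely as follows. The duplication/averaging scheme here is our own simplification; real spatial upmixers and downmixers are far more sophisticated, and this only illustrates the count change:

```python
def remap_channels(input_frames, out_channels):
    """Sketch: map frames with some number of input channels onto
    `out_channels` output channels. Upmix by duplicating inputs
    round-robin; downmix by averaging neighbouring inputs."""
    out = []
    for frame in input_frames:
        n_in = len(frame)
        if out_channels >= n_in:                      # upmix (or same count)
            out.append([frame[i % n_in] for i in range(out_channels)])
        else:                                         # downmix
            per_out = n_in / out_channels
            mixed = []
            for o in range(out_channels):
                lo, hi = int(o * per_out), int((o + 1) * per_out)
                group = frame[lo:hi] or [frame[-1]]
                mixed.append(sum(group) / len(group))
            out.append(mixed)
    return out
```

For instance, three microphone channels can be spread over five output channels for a 5.1-style loudspeaker array, or four channels folded down to stereo.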
In some examples, audio analyzer 114 may generate filtered (such as modified) signals by filtering input signals 108 based on input data 130, received using GUI 120, that represents a user selection, as described herein. For example, the analyzer may process input signals 108 as described with reference to Figs. 6 to 13.
Referring to Figs. 2A to C, examples of multiple views of a device are shown. The views may correspond to device 102 shown in Fig. 1.

The views include a front view 220 depicted in Fig. 2A, a rear view 230 depicted in Fig. 2B, and a side view 240 depicted in Fig. 2C. Front view 220 may correspond to a first side of device 102 that includes display 106. The first side may include first microphone 104a, second microphone 104b, third microphone 104c, an earpiece 208, a first loudspeaker 210a, and a second loudspeaker 210b.

Rear view 230 in Fig. 2B may correspond to a second side of device 102 opposite the first side. The second side may include a camera 206, a fourth microphone 204d, and a fifth microphone 204e. Side view 240 in Fig. 2C may correspond to a third side of device 102 connecting the first side and the second side.
Figs. 3A to G are each examples of GUI 120 of Fig. 1. Referring to Fig. 3A, an example of GUI 120 is shown. In the example shown, GUI 120 may include a coordinate map 301 and multiple selectable options, such as one or more sectors 302 (such as 302a to e) and a sector reshaper/resizer 305. GUI 120 may also include one or more channel icons (such as 304a to e). The channel icons may graphically represent each audio output device 107 configured to receive an audio signal from audio analyzer 114. A user may select a sector to be presented with one or more options. In other examples, a user may select one or more options, and then select one or more sectors to which the selected options are to be applied. The options may include any processing that audio analyzer 114 can be configured to perform.
Whether in this example or in others, each sector 302 of coordinate map 301 may correspond to a particular region in a particular direction relative to device 102, with the center of coordinate map 301 representing the position of device 102 (or a listener position, whether virtual or real). Each sector 302 may correspond, mutually or exclusively, to a particular audio output device 107 in a particular direction relative to device 102, as indicated by the relationship of each sector to a channel icon. For example, sectors 302a to 302e may correspond respectively to, or otherwise relate to, channels 304a to e. Channels 304a to e may relate respectively to the back-right, back-left, front-left, center, and front-right channels. Sectors 302a to e may relate respectively to the input signals 108 associated with microphones 104a to e.
In some examples, audio analyzer 114 can determine direction-of-arrival information corresponding to input signals 108, and can generate the coordinate map such that each sector 302 showing the presence of sound is related to a microphone in the particular direction. For example, audio analyzer 114 can determine that at least a portion of input signals 108 is received from a particular direction. In the example shown, coordinate map 301 includes five sectors. Coordinate map 301 may correspond to the physical coordinates of one or more positions of one or more sources of input signals 108. Coordinate map 301 may indicate the position, relative to device 102, of a source of input signals 108. For example, audio analyzer 114 may determine that input signals 108 are not received from a particular direction. A particular sector of coordinate map 301 corresponding to the particular direction may indicate that no source of input signals 108 is present (such as because there is no sound corresponding to the particular direction). For example, a particular sector may be shown in GUI 120 with a particular color, particular shading, particular text, a particular image, or the like, which may indicate the absence or presence of a source of input signals 108 in the particular direction, whether an input signal is received for the particular sector, the volume level corresponding to any loudspeaker associated with the particular sector, the saturation of any microphone associated with the particular sector, and any other audio information. As another example, audio analyzer 114 may determine the intensity (such as volume) of an audio signal. Audio analyzer 114 may indicate the intensity of the audio signal by a particular shade of a graphical representation (such as a sector or a channel/speaker icon) in GUI 120. For example, a darker shade may indicate a higher intensity.
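The intensity-to-shade indication above amounts to quantizing a signal level into a small number of shades, with higher indices meaning darker shades. The mapping below is our own minimal sketch (levels, scaling, and name are assumed):

```python
def intensity_to_shade(rms, max_rms=1.0, levels=8):
    """Sketch: quantize a signal's intensity (e.g. its RMS level) into
    one of `levels` shade indices; a higher index means a darker shade,
    matching "darker shade indicates higher intensity"."""
    frac = min(max(rms / max_rms, 0.0), 1.0)  # clamp to [0, 1]
    return round(frac * (levels - 1))
```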
In some examples, the count of audio signals 110 may correspond to the count of the multiple channel icons. The count of audio signals 110 may correspond to the count of the multiple sectors of coordinate map 301. Each of the multiple channel icons may be associated with a particular audio signal of audio signals 110. For example, audio analyzer 114 can generate a particular audio signal corresponding to each of the multiple channel icons.
In some examples, each channel 304 is not exclusively related to a sector 302. For example, three microphones may be used to record surround sound, which may mean that coordinate map 301 has three sectors, with five channel icons distributed around those three sectors. In this example, graphical representations may be used to inform the user how audio analyzer 114 may upmix to a five-channel output. For example, selecting a particular channel icon may cause GUI 120 to highlight a sector, and therefore the microphone and input signal associated with that loudspeaker.
During operation, user 118 may use input device 105 to select a particular sector of sectors 302a to e. In some examples, user 118 may modify the size or shape of the selected sector by moving one or more of the sector resizers/reshapers 305.
User 118 may select one or more sectors 302 to deactivate the capture or recording of sound from any microphone associated with the selected sector, while the other microphones, unrelated to the selected sector, continue to capture or record sound. In examples where the sectors of coordinate map 301 have a one-to-one correspondence with audio channels (for example, as represented by the channel icons), deactivating a sector may cause the corresponding channel to be deactivated. In examples where two or more sectors of coordinate map 301 share a correspondence with an audio channel (such as represented by a channel icon), deactivating a sector may affect the corresponding audio channel without deactivating the whole channel, such that noise associated with the deactivated sector is no longer processed, and audio analyzer 114 therefore will not mix it with the sound associated with the still-enabled sectors associated with the same channel.
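The shared-channel case above, in which a deactivated sector's sound is excluded from the mix without deactivating the whole channel, can be sketched as follows. The data structure and mixing-by-average choice are our own illustration, not the disclosure's method:

```python
def mix_channel(sector_signals, enabled):
    """Sketch: a channel fed by several sectors is mixed only from the
    sectors that remain enabled, so a deactivated sector's sound is
    simply excluded. `sector_signals` maps sector id -> sample list."""
    active = [sig for sector, sig in sector_signals.items() if sector in enabled]
    if not active:
        # every contributing sector deactivated: channel goes silent
        return [0.0] * len(next(iter(sector_signals.values())))
    return [sum(frame) / len(active) for frame in zip(*active)]
```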
The audio analyzer 114 may, in response to receiving the selection of a sector, filter one or more input signals 108 based on the sector direction of the selected sector to generate the audio signals 110, as described herein. In one example, the audio analyzer 114 may, in response to the selection of a sector, filter one or more input signals 108 according to a user-selected processing option (such as moving or repositioning a signal, deleting or removing a signal, or filtering a signal). Any filtering, processing, or operation performed on the input signals 108 may be considered a manipulation of the input signals 108 or any corresponding audio channel. For example, the user may manipulate each audio channel by interacting with the GUI 120, by selecting any graphical representation associated with each channel.
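The sector selection and deactivation behavior described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `Sector`, `route_sample`, and the five 72-degree sectors are assumed names and layout.

```python
# Illustrative sketch: sectors of a coordinate map, each covering an angular
# range, gate which input samples contribute to an output channel. A
# deactivated sector stops contributing while other sectors keep routing.

class Sector:
    def __init__(self, start_deg, end_deg, channel, active=True):
        self.start_deg = start_deg
        self.end_deg = end_deg
        self.channel = channel          # output channel this sector feeds
        self.active = active            # user may deactivate via the GUI

    def contains(self, bearing_deg):
        return self.start_deg <= bearing_deg % 360 < self.end_deg

def route_sample(sectors, bearing_deg, sample):
    """Return {channel: sample} contributions from active sectors only."""
    out = {}
    for s in sectors:
        if s.active and s.contains(bearing_deg):
            # sectors sharing a channel mix into the same entry
            out[s.channel] = out.get(s.channel, 0.0) + sample
    return out

sectors = [Sector(0, 72, "front-left"), Sector(72, 144, "front-right"),
           Sector(144, 216, "rear-right"), Sector(216, 288, "rear-left"),
           Sector(288, 360, "center")]
sectors[2].active = False               # user deselects one sector
print(route_sample(sectors, 180, 0.5))  # deactivated sector: no output
print(route_sample(sectors, 30, 0.5))
```

A sound arriving from a deactivated sector's direction simply produces no channel contribution, while other directions are unaffected.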
Referring to FIG. 3B, an example of the GUI 120 is shown. In this example, an example of a channel configuration menu 320 is depicted. The channel configuration menu enables the user 118 to configure the resulting audio output channels of the GUI 120 depicted in FIG. 3A. The channel configuration menu 320 may include multiple numbered channel options 322, so that the user 118 can specify the number of audio signals 110 to be played back (e.g., as generated by the audio analyzer 114).
Each option among the numbered channel options 322 can indicate the number of audio signals that will be generated for the multi-channel signal. For example, a first numbered channel option (e.g., 5.1) can indicate that a first number (e.g., 5, plus 1 subwoofer) of audio signals will be generated, a second numbered channel option (e.g., 7.1) can indicate that a second number (e.g., 7, plus 1 subwoofer) of audio signals will be generated, and so on. When the numbered channel option 5.1 is selected, for example, graphical representations of 5 output channels (e.g., loudspeakers) can appear around the coordinate map 301 in the GUI 120. In other examples, any corresponding subwoofer channel can also appear on the coordinate map 301 in the GUI 120. If the number of selected channels is higher or lower than the number of physical microphones, the audio analyzer 114 can upmix or downmix the input signals, respectively. For example, if the number of selected channels exceeds the number of physical microphones, the audio analyzer 114 can interpolate or generate additional audio channels. In response to the user's selection, the audio analyzer 114 can determine whether the number of audio output devices 107 matches the number of microphones 104; and if they do not match, the user can be alerted via the GUI 120.
In some examples, the GUI data 150 of FIG. 1 can store a mapping between each of the numbered channel options 322 (e.g., 2.1, 5.1, 7.1, 22.2, or any other channel option) and a corresponding count (e.g., 2, 5, 7, and 22 in the absence of a corresponding subwoofer). Including the subwoofer, the corresponding counts of such an example may be 3, 6, 8, and 24, respectively. The mapping may include default values. In this example, the audio analyzer 114 can use the mapping to determine the count (e.g., 7) corresponding to a given numbered channel option (e.g., 7.1). In particular examples, the mapping may further indicate one or more directions for each of the counted channels (e.g., 7) of the numbered channel options 322 (e.g., left, right, center, left-surround, right-surround, rear-left, and rear-right). The mapping may further indicate an angle corresponding to each of the one or more directions (e.g., 45 degrees, 135 degrees, 90 degrees, 225 degrees, 315 degrees, 180 degrees, and 0 degrees).
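The mapping described above can be sketched as a small table. The 7.1 directions and angles use the example values from the text (their exact pairing is assumed); the 2.1 and 5.1 entries, and the structure itself, are illustrative.

```python
# Illustrative sketch of the mapping the GUI data 150 may store between
# numbered channel options and channel counts, directions, and angles.

CHANNEL_OPTIONS = {
    "2.1": {"count": 2, "count_with_sub": 3,
            "directions": ["left", "right"],
            "angles_deg": [45, 135]},
    "5.1": {"count": 5, "count_with_sub": 6,
            "directions": ["left", "right", "center",
                           "left-surround", "right-surround"],
            "angles_deg": [45, 135, 90, 225, 315]},
    "7.1": {"count": 7, "count_with_sub": 8,
            "directions": ["left", "right", "center", "left-surround",
                           "right-surround", "rear-left", "rear-right"],
            "angles_deg": [45, 135, 90, 225, 315, 180, 0]},
}

def channel_count(option, include_subwoofer=False):
    """Resolve a numbered channel option (e.g. "7.1") to a channel count."""
    entry = CHANNEL_OPTIONS[option]
    return entry["count_with_sub"] if include_subwoofer else entry["count"]

print(channel_count("7.1"))        # number of audio signals to generate
print(channel_count("5.1", True))  # count including the subwoofer channel
```

With such a table, resolving a user's menu selection to the number of output signals (and their placement around the coordinate map) is a single lookup.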
Referring to FIG. 3C, an example of the GUI 120 is shown. In this example, an example of a noise suppression (attenuation) option 330 is shown. The noise suppression (attenuation) option 330 can be specific to a sector, a channel, or a microphone. The noise suppression option 330 may appear in the GUI 120 in response to the user 118 selecting one of the sectors 302 or one of the channel/loudspeaker representations 304. The noise suppression (attenuation) option 330 can enable one or more levels of noise suppression (e.g., 0% to 100%). For example, the user 118 can use the input unit 105 (e.g., including the display 106) to select the amount of noise suppression. In response to receiving an invocation of the noise suppression option 330, the audio analyzer 114 can generate the audio signals 110 by suppressing static noise present in the input signals 108 based on the selected level of noise suppression. For example, the audio analyzer 114 can select a particular noise filter (e.g., a static noise filter) based on the level of noise suppression, and the audio analyzer 114 can generate the audio signals 110 by applying the particular noise filter to the input signals 108. As used herein, the term suppress may refer to attenuate or equivalents thereof.
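The idea that the selected percentage controls the strength of the noise filter can be sketched as follows. This is only an illustration of level-controlled attenuation; real static-noise suppression (e.g., spectral subtraction) is more involved, and `suppress` and the values below are assumptions.

```python
# Minimal sketch: the user-selected suppression level (0-100%) scales how
# strongly an estimated static-noise floor is subtracted from each band.

def suppress(band_magnitudes, noise_floor, level_pct):
    """Attenuate each band by level_pct of the estimated noise floor."""
    g = level_pct / 100.0
    return [max(m - g * n, 0.0) for m, n in zip(band_magnitudes, noise_floor)]

bands = [1.0, 0.4, 0.2]       # magnitude spectrum of one input frame
floor = [0.2, 0.2, 0.2]       # estimated static-noise magnitudes
print(suppress(bands, floor, 0))     # 0% suppression: input unchanged
print(suppress(bands, floor, 100))   # full suppression removes the floor
```

At 0% the frame passes through untouched; at 100% the whole estimated floor is removed, with intermediate levels scaling smoothly between the two.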
The noise suppression option 330 can enable the user to cause the audio analyzer 114 to generate the audio signals 110 corresponding to the selected noise suppression level. With a user-selectable noise suppression level, the user can select whether static noise is captured (e.g., a microphone essentially recording noise may be deactivated depending on the noise suppression selected by the user 118) or output by the audio analyzer 114, or how the static noise is filtered. For example, the user can capture the sound of waves on a beach, and can reduce the sound of wind captured during speech.
Noise can be any unwanted sound, such as one or more unwanted sound waves/audio signals at any combination of frequencies. For example, noise may include noise pollution caused by transportation systems and vehicles, the harshness of city noise, or any unwanted noise in an audio system involving an unwanted signal (e.g., a signal to be rejected, suppressed, or otherwise filtered) relative to a useful signal (e.g., a signal to be processed and output). In one example, the sound of waves on a beach can be considered unwanted noise, and filtered out of a recording. In another example, the sound of waves on a beach may not be considered unwanted noise, and is therefore not filtered out of the recording.
Whether a sound constitutes noise may depend on the desired sound as compared with the undesired sound and their relationship in amplitude and frequency. In some examples, noise can be any sound or audio signal, or a user-defined analog thereof. For example, a GUI as described herein can enable the user to select one or more sounds (e.g., city sounds, a barking dog, etc.), causing the audio analyzer 114 to output the audio signals 110 such that the audio signals 110 are filtered to remove or suppress the selected sound. In another example, a GUI as described herein can enable the user to record one or more sounds (e.g., a barking dog, a meowing cat, waves, etc.) to define a corresponding filter, so that the audio analyzer 114 outputs the audio signals 110 such that the audio signals 110 are filtered to remove or suppress the recorded sound.
In some examples, the noise suppression option 330 may constitute a "null out" option. In response to selection of the null-out option, the audio analyzer 114 can suppress audio associated with one or more selected sectors. For example, the user may select a sector to null out. The nulled-out area corresponds to an area in the audio channel, and the audio analyzer suppresses the audio corresponding to that area at the audio channel. In some examples, the user can push and pull to resize or reshape one or more sectors to input the null-out instruction (that is, a noise suppression/cancellation instruction). In other examples, the user may select a sector and, among other options, be presented with a null-out option that, when chosen, causes the audio analyzer 114 to suppress the audio according to the selected suppression level (or filter type) for the selected sector to be suppressed, which affects the audio signals 110 and therefore the sound presented to the user via any associated loudspeaker 107.
In some examples, the coordinate map 301 can indicate where the source of static noise in the input signals 108 is located relative to the device 102. The audio analyzer 114 can determine a static noise level associated with the input signals 108. For example, the audio analyzer 114 can determine the static noise level based on a noisiness measure (e.g., a linear predictive coding (LPC) prediction gain) of the input signals 108. In particular examples, a lower LPC prediction gain can indicate a higher static noise level of the input signals 108. The noisiness measure can be defined in terms of the variation of the input signals 108, or in terms of the power or energy of the input signals 108. In a particular example, the audio analyzer 114 can determine a specific static noise level associated with each of the input signals 108, and the GUI 120 can indicate the specific static noise level in the direction associated with the corresponding microphone. For example, the audio analyzer 114 can determine a first static noise level of the input signal 108a. The GUI 120 can then indicate the static noise level associated with the first microphone 104a. For example, the GUI 120 can indicate the static noise level in a first direction on the coordinate map 301 corresponding to the microphone 104a. The GUI 120 can therefore indicate to the user 118 where the source of static noise is located relative to the device 102, thereby enabling the user 118 to take action based on this audio information (that is, noise information). For example, the user 118 can move away from the source of static noise, or invoke certain processing options provided by the audio analyzer 114.
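The LPC-prediction-gain noisiness measure mentioned above can be illustrated with a minimal first-order predictor: a predictable signal (e.g., a slow tone) yields a high prediction gain, while a noise-like signal yields a gain near one. A practical codec uses a much higher-order predictor; this sketch, including the name `lpc1_prediction_gain`, is illustrative only.

```python
import math

# Sketch: LPC prediction gain = signal power / prediction-residual power.
# A lower gain indicates a noisier (less predictable) input signal.

def lpc1_prediction_gain(x):
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    r0 = sum(v * v for v in x) / n                       # signal power
    r1 = sum(x[i] * x[i - 1] for i in range(1, n)) / (n - 1)
    a = r1 / r0 if r0 else 0.0                           # optimal 1-tap predictor
    resid = [x[i] - a * x[i - 1] for i in range(1, n)]
    e = sum(v * v for v in resid) / len(resid)           # residual power
    return r0 / e if e else float("inf")

tone = [math.sin(2 * math.pi * 0.01 * i) for i in range(1000)]   # predictable
noise = [math.sin(12345.6789 * i * i) for i in range(1000)]      # noise-like
print(lpc1_prediction_gain(tone) > lpc1_prediction_gain(noise))
```

The tone's gain is large because each sample is almost perfectly predicted from the previous one; the noise-like signal's gain stays near one, which the analyzer would read as a higher static noise level.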
The audio analyzer 114 can modify (e.g., increase or decrease) a noise reference level based on the level of noise suppression. The audio analyzer 114 can generate the audio signals 110 by applying a noise filter to the input signals 108 to filter one or more frequency bands of the input signals 108 having amplitudes that satisfy (e.g., are above or below) the noise reference level. The noise reference level can be based on the particular noise filter selected by the user. Although applying one or more filters is described with reference to "the input signals," it is to be understood that the audio analyzer 114 may alternatively apply the noise filter (or any other filter) to the one or more input signals that include noise. In other examples, the audio analyzer 114 can apply the particular noise filter based on the relationship between each input signal and a sector, or regardless of that relationship.
In some examples, before a noise filter (e.g., a static noise filter) is applied to the input signals 108, the audio analyzer 114 can apply a frequency-domain modification to the input signals 108. To illustrate, the audio analyzer 114 can generate an intermediate signal by applying a particular low-pass filter, a particular high-pass filter, or a particular band-pass filter to the input signals 108. The audio analyzer 114 can generate the audio signals 110 by applying a particular static noise filter to the intermediate signal to filter one or more frequency bands of the intermediate signal having amplitudes that satisfy (e.g., are above or below) a particular noise reference level.
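The two-stage path just described, a pre-filter producing an intermediate signal followed by a static noise filter keyed to a noise reference level, might be sketched as follows. All function names, thresholds, and the crude band-zeroing low-pass are illustrative assumptions.

```python
# Sketch: stage 1 shapes the spectrum into an intermediate signal; stage 2
# attenuates bands whose magnitude falls at or below the noise reference.

def low_pass(bands, cutoff_bin):
    """Crude frequency-domain low-pass: zero bands above the cutoff."""
    return [m if i < cutoff_bin else 0.0 for i, m in enumerate(bands)]

def static_noise_filter(bands, noise_ref_level, attenuation=0.1):
    """Attenuate bands at or below the noise reference level."""
    return [m if m > noise_ref_level else m * attenuation for m in bands]

frame = [0.9, 0.5, 0.05, 0.8]           # magnitude spectrum of one frame
intermediate = low_pass(frame, 3)       # pre-filter -> intermediate signal
out = static_noise_filter(intermediate, 0.1)
print(intermediate)
print(out)
```

Bands well above the reference level pass through; the quiet band is attenuated rather than removed, which is one common compromise between noise reduction and artifacts.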
The audio analyzer 114 can provide the generated audio signals 110 to the headphones 112 or to other output devices 107, such as loudspeakers. The user 118 can use the headphones 112 to monitor or listen to the generated audio signals 110, and can adjust the level of noise suppression by selecting (e.g., moving) the noise suppression option 330. For example, the user 118 may be on a beach and may want to capture the sound of the waves. In this example, the user 118 can move the noise suppression option 330 in a first direction (e.g., left) to reduce the level of noise suppression. In another example, the user 118 may be at an outdoor meeting and may want to capture the speech of a particular speaker. The user 118 can listen to the audio signals 110 via the headphones 112 and may notice that the audio signals 110 have a high noise level corresponding to wind hitting the microphones 104a-c. In this example, the user 118 can increase the level of noise suppression by moving the noise suppression option 330 in a second direction (e.g., right). Alternatively or in addition, the user 118 can move the device 102 to a location with less wind, based on the graphical feedback received about the audio being recorded.
The audio analyzer 114 can, based on user input data representing user selections in the GUI made, for example, in response to past audio signals 110 output by the audio analyzer 114, effect real-time or otherwise dynamic calibration of, or changes to, audio signals subsequently received by the audio analyzer 114. It will be appreciated that by the time the user 118 provides any input causing the audio analyzer 114 to process subsequently received input signals 108, the then-past audio signals 110 may have become the current (or real-time) audio signals 110. In this manner, the audio analyzer 114 can enable the user to make real-time adjustments to audio as the audio is received. The audio analyzer 114 makes the adjustments to the subsequently received input signals (or a single input signal), and the output is presented using one or more output devices 107.
Referring to FIG. 3D, an example of the GUI 120 is shown. In this example, another example of the noise suppression option 330 is shown. In this example, the noise suppression option 330 is supplemented by a noise indicator 331, which indicates the amount of static noise (e.g., background noise) detected by the audio analyzer 114 based on processing the input signals 108 corresponding to the microphones 104. As noted above, the user can interact with the noise suppression option 330 to indicate the amount of background noise (e.g., static noise) that the audio analyzer 114 is to suppress in one or more input signals 108. In some examples, the GUI 120 includes a noise suppression option 330 and a noise indicator for each microphone 104.
In some examples, to estimate the noise level to be indicated with the noise indicator 331, the audio analyzer 114 can compute a quantity of the form ratio · Σ_i Nref(i)², where SNR = the signal-to-noise ratio, Nref = the magnitude spectrum of the static noise reference, i = the frequency bin (1 to 512, if a 512-point FFT is used), and ratio = the scale factor to be used for the GUI representation. The audio analyzer 114 can scale this final noise reference total energy and use it as the noise level depicted in the GUI, such as in the noise indicator 331.
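The original equation did not come through cleanly, so the following is a hedged reconstruction from the surviving variable definitions: the scaled total energy of the static-noise-reference magnitude spectrum, with `gui_noise_level` an illustrative name.

```python
# Hedged sketch of the noise-level estimate shown in the noise indicator:
# total energy of the static noise reference across the FFT bins, scaled
# by a GUI scale factor (ratio). The exact original formula is assumed.

def gui_noise_level(nref, ratio):
    """nref: magnitude spectrum of the static noise reference (one value
    per frequency bin, e.g. 512 bins for a 512-point FFT)."""
    return ratio * sum(m * m for m in nref)

nref = [0.1] * 512                    # flat static-noise reference
print(gui_noise_level(nref, 0.5))     # scaled total energy for the bar
```

The resulting scalar is what the GUI would map to the height of the indicator bar.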
In some examples, the noise level can be depicted for the noise indicator 331 using a single bar displaying a single color (e.g., green). In these examples, the higher the green bar relative to its base, the more static noise is present; and the lower the green bar relative to its base, the less static noise is present. In response to applying noise suppression, the single bar of the noise indicator 331 can include a second color (e.g., blue) in the same bar, to show the amount of noise suppressed. For example, assume the measured static noise level (or reference) is a certain amount. The noise indicator 331 can climb to a first height corresponding to the measured amount of noise. After noise suppression is applied, the height of the noise indicator 331 will stay the same, but the top of the green bar will drop, showing that the amount of noise after suppression is less than the amount of noise before suppression. The bar can be filled with the other color (e.g., blue) from the top of the green bar up to the top of the noise indicator 331. This blue bar enables the user to quickly understand how much noise has been removed.
For example, as shown in FIG. 3B, the depicted white bar can correspond to the "green" bar, and the hatched bar can correspond to the "blue" bar. By viewing the delta (that is, the change) between the green bar and the blue bar, the user can be informed of how much static noise is being suppressed. In the green/blue example of the noise indicator 331, the green bar before suppression can be based on the amount of noise calculated using the equation above.
The green bar after suppression can be based on the amount of noise calculated using an expression of the form ratio · Σ_i NSgain(i) · Nref(i)², where Nref = the magnitude spectrum of the static noise reference, i = the frequency bin (1 to 512, if a 512-point FFT is used), NSgain = the static noise gain, and ratio = the scale factor to be used for the GUI representation. In this manner, if 25% noise suppression is used, the height of the green bar can decrease by 25% after suppression. For example, 50% suppression is shown in FIG. 3C, whereas 35% suppression is shown in FIG. 3D.
In some examples, the camera 111 of the device 102 can be used to perform scene or object detection, for example based on a captured photo whose image the audio analyzer 114 then analyzes. Based on the detected scene or object, the device 102 can recommend, or not recommend, noise suppression to the user via the GUI 120. FIG. 3D shows an example of a detected scene or object indication 333 and a noise suppression recommendation 335. In the example shown in FIG. 3D, the audio analyzer 114 detects a seashore, where the corresponding audio has the sound of rolling waves as static noise. The audio analyzer 114 can increase the accuracy of scene or object detection by using currently or previously recorded sound to assist the audio analyzer in determining and identifying the scene or object of a particular image. In the example shown in FIG. 3D, the audio analyzer may determine, based on the captured image (e.g., a beach), the sound of the current recording (e.g., waves), or both, that the scene (or the current location of the device 102, if processing is occurring in real time) is a seashore. Based on the scene, the audio analyzer 114 can decline to recommend static noise suppression, as shown in the figure. Suppression may not be recommended because the waves, unlike other noise, may not be considered noise (e.g., such sounds may be expected to add to the ambience of the recorded audio). In another example, such as an indoor environment with a noisy air conditioner or fan, the scene detection algorithm can recommend static noise suppression.
In addition, as illustrated in FIG. 1, a computing device can use the camera of the computing device to perform scene or object detection. Based on the detected scene or object, the computing device can recommend, or not recommend, noise suppression to the user. In the example of FIG. 1, the computing device detects a seashore, where the corresponding audio has the sound of rolling waves as static noise. Based on the detected seashore scene, the computing device can decline to recommend static noise suppression. In another example, such as an indoor environment with a noisy air conditioner or fan, the scene detection algorithm can recommend static noise suppression.
In some examples, position location can be used to perform scene detection, either alone or in combination with other examples of scene detection herein (e.g., analyzing an image). For example, position location can refer to the coordinates of the device 102, or the coordinates of one or more microphones 104. The device 102 can be a GPS-enabled device, for example having a GPS receiver configured to calculate or determine a 2D position (e.g., longitude and latitude) or a 3D position (e.g., latitude, longitude, and elevation) upon receiving the required signals (e.g., one or more satellite signals). One or more microphones 104 can likewise be GPS-enabled, for example having a GPS receiver configured to calculate or determine a 2D position (e.g., longitude and latitude) or a 3D position (e.g., latitude, longitude, and elevation) upon receiving the required signals (e.g., one or more satellite signals). The audio analyzer 114 can be configured to receive GPS data (e.g., GPS coordinates) from the device 102 or from one or more microphones 104.
The audio analyzer 114 can be configured to perform detection based on, for example, one or more GPS coordinates of the device 102 or of one or more microphones 104. Based on the detected scene, for example where the device 102 determines, from one or more GPS coordinates calculated or determined before, during, or after recording the audio, that its location is on a beach, the audio analyzer 114 can recommend, or not recommend, static noise suppression. As another example, the audio analyzer 114 can determine, based on the GPS coordinates of the device 102 and a travel rate calculated using those GPS coordinates, that the device is in a car, on a train, or on an aircraft. In this example, the audio analyzer 114 can, for example, automatically apply a road noise filter, a track filter, or an air travel filter. Such filters can respectively filter out common unwanted noise associated with such travel modes, such as road noise, track noise, and loud train horns and engine noise, respectively. In yet other examples, the GUI 120 enables the user to input a location (e.g., an address, city, city and state, country, or any other identifying information) to enable the audio analyzer 114 to perform scene selection, or to otherwise enhance any scene detection performed by the audio analyzer 114 (e.g., increasing its accuracy).
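The travel-rate heuristic described above can be sketched as follows, assuming two successive GPS fixes. The speed thresholds, filter names, and the simple equirectangular distance approximation are illustrative assumptions, not values from the patent.

```python
import math

# Sketch: estimate travel rate from two GPS fixes and pick a travel filter.

def speed_kmh(lat1, lon1, lat2, lon2, seconds):
    """Equirectangular approximation, adequate for short hops."""
    r = 6371.0                                   # Earth radius, km
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    km = r * math.hypot(x, y)
    return km / (seconds / 3600.0)

def recommend_filter(kmh):
    if kmh > 400:
        return "air-travel filter"
    if kmh > 160:
        return "track (rail) filter"
    if kmh > 20:
        return "road-noise filter"
    return None                                  # stationary / walking

v = speed_kmh(37.0, -122.0, 37.0, -121.9, 360)   # ~0.1 deg longitude in 6 min
print(round(v), recommend_filter(v))
```

A fix pair implying roughly highway speed selects the road-noise filter, while aircraft-scale speeds would select the air-travel filter.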
Referring to FIG. 3E, an example of the GUI 120 is shown. In this example, an example of a repositioning option 340 is shown. During operation, the user 118 may select one of the sectors 302. After selecting the sector 302, the GUI 120 can present the repositioning option 340 in a list of other selectable options 140. For example, the repositioning option 340 can be expressed in the GUI 120 as a virtual button or as part of a menu matrix. As one example use of the repositioning option 340, the user 118 may wish to change the audio output device 107 from which an announcer's speech is output. The announcer may be speaking from a particular direction relative to the device 102 (e.g., behind the user 118). The user 118 may wish to generate the audio signals 110 such that the recorded speech of the announcer corresponds to a particular signal or channel (e.g., a center channel). The user 118 may select the one or more sectors 302 with which the announcer's speech is associated, and in turn select the repositioning option 340. The user 118 may then select the sector or channel to which the audio signal 110 corresponding to the announcer's speech is to be subsequently transferred or repositioned. Other examples can involve different orders of operations for informing the audio analyzer 114 that the selection corresponds to repositioning a signal, or can otherwise relate to repositioning a signal.
The GUI 120 can therefore enable the user to generate a multi-channel audio signal such that the audio signal corresponding to a particular channel corresponds to the input signals received from a particular direction corresponding to a particular sector of the coordinate map. For example, using the GUI 120 and the repositioning option 340, the user can move or reposition audio being output to a first audio output device 107 associated with a first audio channel to a second, different location, such that the audio from the first audio channel is moved to a second audio output device 107 associated with a second audio channel. As an example, if the announcer's speech originates from a rear channel, the user can use the GUI 120 to push, pull, or otherwise move the announcer's speech from the rear channel to a center channel. In some examples, the GUI 120 enables the user to move/reposition audio by selecting the sector with which the announcer's speech is associated; selecting a next sector will then cause the audio analyzer 114 to transfer the audio from the first sector to the second sector, effectively moving the audio to the output device 107 associated with the second sector. In other examples, the GUI 120 enables the user to move/reposition audio by selecting a graphical representation of an audio channel (e.g., depicted as a channel icon); selecting a next graphical representation of another audio channel will then cause the audio analyzer 114 to transfer the audio from the first audio channel to the second audio channel. As a result, the audio analyzer 114 can move or reposition audio from a first area (e.g., a sector or channel) to a second area (e.g., a sector or channel). In other examples, the movement of audio may include moving the audio to a sector or channel while also keeping the audio at the originating sector or channel. For example, the announcer's speech may be associated only with a rear channel. Using the repositioning option 340, the announcer's speech can be moved so as to also be associated with one or more other channels.
The user can determine that directional noise should be repositioned from an area associated with a user-selected point "C" in one of the sectors 302 to another area (e.g., one or more other sectors 302). For example, as shown in FIG. 3E, the user can use an upward dragging gesture, shown as an arrow from the selected point "C", to indicate repositioning from the first area to the area associated with the center channel. In this manner, the GUI 120 can enable the user to selectively mix two or more sectors and any corresponding audio channels.
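The repositioning behavior described above, moving audio from one channel to another and optionally keeping it at the originating channel, can be sketched as a routing-table update. `reposition` and the channel names are illustrative.

```python
# Sketch: a routing table maps each output channel to the audio sources it
# carries; repositioning moves (or copies) sources between channels.

def reposition(routing, src, dst, keep_original=False):
    """Move the sources on channel src to channel dst; if keep_original,
    the sources remain associated with src as well."""
    moved = routing.get(src, [])
    routing.setdefault(dst, []).extend(moved)
    if not keep_original:
        routing[src] = []
    return routing

r = {"rear": ["announcer"], "center": ["crowd"]}
reposition(r, "rear", "center")
print(r)    # announcer's speech now mixed into the center channel
```

With `keep_original=True`, the announcer's speech stays on the rear channel while also being associated with the center channel, matching the last example above.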
Referring to FIGS. 3F and 3G, two examples of the GUI 120 are shown. In these examples, examples of audio level (e.g., volume/amplitude level) indicators 350 are shown. FIGS. 3F and 3G are similar, but show different levels of detail.
During operation, the audio analyzer 114 can determine an output audio level associated with each of the audio signals 110, each of which is associated with a respective channel (e.g., in a 5-channel surround setup, the audio signals 110 may include five signals, one per channel). For example, the audio analyzer 114 can measure a first output audio level of a first audio signal corresponding to the channel/speaker icon 304a, a second output audio level of a second audio signal corresponding to the channel/speaker icon 304b, and so on. The audio analyzer 114 can measure a particular output audio level by applying a specific measurement (e.g., root mean square) to the amplitude of the sound wave associated with the audio signal 110 corresponding to each of the channel/speaker icons 304a-c.
The GUI 120 can indicate the output audio level associated with each of the audio signals 110. In a particular example, the color of each channel icon (e.g., channel icons 304a-e) or other graphical representation can indicate the corresponding output audio level (e.g., volume/amplitude level). For example, a channel icon 304d of a first color can indicate a first output audio level (e.g., volume/amplitude level), a channel icon 304e of a second color can indicate a second output audio level (e.g., volume level), and so on. In one example, a darker or more intense color (e.g., bright red) can indicate a higher output audio level (e.g., volume/amplitude level), as compared with a lighter or less intense color (e.g., pale yellow) that can indicate a lower output audio level (e.g., volume/amplitude level). In some examples, the GUI 120 may include a three-dimensional (3D) graph (e.g., a 3D mesh graph) that indicates the output audio level associated with each of the audio signals 110. In another example, a graphical volume bar can be located above each channel/speaker icon 304 to indicate the output level associated with each audio signal 110.
The audio analyzer 114 can determine an input audio level (e.g., volume/amplitude level) associated with each of the input signals 108. For example, the audio analyzer 114 can determine a first input audio level associated with the first input signal 108a, a second input audio level associated with the second input signal 108b, a third input audio level associated with the third input signal 108c, and so on. The input audio levels are depicted as the audio level indicators 350. The audio analyzer 114 can measure a particular input audio level by applying a specific measurement (e.g., root mean square) to the amplitude of the input signal associated with a microphone (that is, for example, the input signal converted from one or more sound waves being received by the microphone). The audio analyzer 114 can, in response to determining that a corresponding input signal is associated with a particular microphone, determine that the particular input audio level (e.g., volume/amplitude) is associated with the particular microphone. For example, the first input audio level can be associated with the first microphone 104a, the second input audio level can be associated with the second microphone 104b, the third input audio level can be associated with the third microphone 104c, and so on.
The GUI 120 can indicate a noise level associated with each audio channel. In a particular example, the color of each channel icon (e.g., channel icons 304a-e) or other graphical representation can indicate the corresponding noise level. For example, a channel icon 304d of a first color can indicate a first noise level, a channel icon 304e of a second color can indicate a second noise level, and so on. In one example, a darker or more intense color (e.g., bright red) can indicate a higher noise level, as compared with a lighter or less intense color (e.g., pale yellow) that can indicate a lower noise level. In some examples, noise information (e.g., noise levels) is presented spatially by the GUI via a dynamic graphical representation of one or more audio channels. For example, a graphical representation can change based on the amount of noise corresponding to the audio channel with which the graphical representation is associated.
The GUI 120 can display the input audio level corresponding to each microphone. For example, the input audio indicators 350 may include a first graphical representation corresponding to a first input audio level, a second graphical representation corresponding to a second input audio level, a third graphical representation corresponding to a third input audio level, and so on. In particular examples, the size, color, or both of a particular input audio level indicator or graphical representation can indicate the corresponding input audio level (e.g., volume/amplitude). For example, an input audio level icon of a first color (e.g., white) can indicate that the corresponding input audio level fails to satisfy (e.g., is less than) a first audio level threshold. An input audio level icon of a second color (e.g., green) can indicate that the corresponding input audio level satisfies (e.g., is greater than) the first audio level threshold, and satisfies (e.g., is less than) a second audio level threshold. An input audio level icon of a third color (e.g., yellow) can indicate that the corresponding input audio level fails to satisfy (e.g., is greater than) the second audio level threshold, and satisfies (e.g., is less than) a third audio level threshold. An input audio level icon of a fourth color (e.g., red) can indicate that the corresponding input audio level fails to satisfy (e.g., is greater than) the third audio level threshold. Three audio level thresholds are described for illustrative purposes. In particular examples, the input audio level indicators 350 can correspond to fewer than three or more than three audio level thresholds. The input audio level indicators 350 can indicate a microphone saturation alert. For example, a specific color (e.g., red) can correspond to a microphone saturation alert (that is, the volume/amplitude of a particular input signal is close to, or has exceeded, the saturation level of the microphone, meaning that the input signal will be, or is being, clipped).
In some examples, the GUI 120 includes sliders or other selectable options for the user so that microphone saturation (e.g., microphone clipping) can be avoided. For example, the input audio level indicators 350 may each be associated with a microphone level adjustment slider. By adjusting a slider down or up, the user can decrease or increase the microphone gain of a particular microphone or the gain of an audio channel. For example, as shown in FIG. 3G, the GUI 120 may include gain sliders 352. By enabling the user to adjust gain, the user can avoid microphone saturation, or can increase the volume of a low-volume audio channel, which can improve the quality of the audio being recorded by the user.
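The "adjust gain to avoid saturation" behavior can be sketched as deriving a recommended gain change from a channel's peak level. A minimal illustration under assumed units (linear peak amplitude in (0, 1], gain in dB); the headroom value is an assumption:

```python
import math

def suggested_gain_db(peak_level, headroom_db=3.0):
    """Suggest a gain change (in dB) that keeps the peak below full scale
    with some headroom. peak_level is the linear peak amplitude in (0, 1].
    A negative result means 'reduce gain' (avoid clipping); a positive
    result means a quiet channel could be boosted."""
    if peak_level <= 0.0:
        return 0.0
    current_db = 20.0 * math.log10(peak_level)  # dBFS of the current peak
    target_db = -headroom_db                    # leave headroom below 0 dBFS
    return target_db - current_db
```

A clipping channel (peak at full scale, 0 dBFS) yields a negative suggestion, which the GUI could reflect by nudging the slider 352 downward.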
The GUI 120 can therefore provide the user with feedback about the audio levels of the input signals 108 received from the microphones and of the audio signals 110 corresponding to the channels of the generated multi-channel signal. The user can take action based on the feedback. For example, the user may determine, based on the audio levels of the input signals, that one or more of the microphones are deactivated, and may accordingly enable those microphones. User experience may therefore be improved. As another example, the user may determine, based on the audio levels of the input signals, that one or more of the microphones are clipping or otherwise saturating, and may deactivate any offending microphone, or adjust the gain of any offending microphone. User experience may therefore be improved. In other examples, the audio analyzer 114 may automatically recognize that a microphone is deactivated under an error condition, or otherwise idle, and output an audio signal indicating a notification. This notification audio signal would enable the device 102 to inform the user of the device, during recording, that one or more microphones are deactivated under an error condition or otherwise not working. The notification signal may be output to one or more audio channels designated as notification channels, which may or may not be local to the recording device (i.e., a loudspeaker of the recording device or a loudspeaker external to the device). In other examples, the notification may additionally or alternatively be another output that the device is capable of providing to the user, such as haptic feedback or selected graphical information. In other examples, the notification may be included in any of the audio signals 110.
Referring to FIG. 3G, during operation, the user may use an input device (e.g., a mouse, a touch screen, etc.) to select a headphone icon 354, so as to use the headphones 112 as one of the output devices 107. The audio analyzer 114 may, in response to receiving the selection of the headphone icon 354, provide the audio signals 110 to the headphones 112. Because headphones may be stereo, a multi-channel signal with more than 2 channels may be downmixed into a multi-channel signal with 2 channels. The audio analyzer 114 may, in response to receiving another (i.e., a second or subsequent) selection of the headphone icon 354, refrain from providing the audio signals 110 to the headphones 112. In a particular example, a headphone icon 354 of a first color (e.g., green) may indicate that the audio analyzer 114 is providing the audio signals 110 to the headphones 112, and a headphone icon 354 of a second color (e.g., white) may indicate that the audio signals 110 are not being provided (or are being withheld) by the audio analyzer 114 to the headphones 112.
A particular image corresponding to each of the channel icons 304a-c may indicate a corresponding output audio level, as described herein. For example, a first image corresponding to one of the channel icons 304 having a first portion of a particular color (e.g., blue), such as a majority of the first image, may indicate a first output audio level (e.g., high); a second image corresponding to one of the channel icons 304 having a second portion of the particular color (e.g., blue), such as about half of the second image, may indicate a second output audio level (e.g., medium); and a third image corresponding to one of the channel icons 304 having a third portion of the particular color (e.g., blue), such as none of the third image, may indicate a third output audio level (e.g., none or low).
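The fill-proportion encoding of the channel icons can be modeled as a mapping from output level to the fraction of the icon drawn in the level color; the bucket boundaries below are illustrative assumptions, not values specified by the patent:

```python
def icon_fill_fraction(output_level):
    """Map a normalized output audio level (in [0.0, 1.0]) to the fraction
    of a channel icon 304 filled with the level color (e.g., blue).
    Bucket boundaries are illustrative assumptions."""
    if output_level >= 0.8:
        return 1.0   # high level: most/all of the icon colored
    if output_level >= 0.4:
        return 0.5   # medium level: about half colored
    return 0.0       # low or no level: none of the icon colored
```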
The audio analyzer 114 may determine a static noise level of the audio signals 110, as shown in FIG. 3G. In some examples, upon selecting one of the sectors 302 or channels 304, the static noise level shown in FIG. 3G may be populated in the GUI 120. In these examples, the static noise level corresponds to the particular sector or channel. In other examples, the static noise level shown in FIG. 3G may correspond to the noise level across all of the audio signals 110 (or input signals 108). For example, the audio analyzer 114 may determine the static noise level based on a noisiness measure of the audio signals 110 (or input signals 108), such as a linear predictive coding (LPC) prediction gain. In a particular example, a lower LPC prediction gain may indicate a higher static noise level of the audio signals 110. The noisiness measure may be defined in terms of the variation of the audio signals 110, or in terms of the power or energy of the audio signals 110. An output noise level indicator 356 may indicate the static noise level of one or more of the audio signals 110 (or one or more of the input signals 108). As an example, the height of a particular color (e.g., red) of the output noise level indicator 356 may indicate the static noise level of the audio signals 110.
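The LPC-prediction-gain noisiness measure mentioned above can be approximated as the ratio of signal power to residual power after linear prediction: unpredictable (noisy) signals yield a low prediction gain, while predictable signals yield a high one. A sketch using a first-order predictor follows (the patent does not fix a predictor order; first order is an assumption chosen for brevity):

```python
def lpc1_prediction_gain(samples):
    """Estimate a first-order LPC prediction gain: signal power divided by
    the power of the residual after predicting each sample from the
    previous one. White noise gives a gain near 1 (high static noise);
    a smooth, predictable signal gives a large gain (low static noise)."""
    n = len(samples)
    energy = sum(x * x for x in samples)
    if n < 2 or energy == 0.0:
        return 1.0
    # Optimal first-order predictor coefficient a = r1 / r0 (autocorrelation).
    r0 = energy
    r1 = sum(samples[i] * samples[i - 1] for i in range(1, n))
    a = r1 / r0
    residual = [samples[i] - a * samples[i - 1] for i in range(1, n)]
    res_energy = sum(e * e for e in residual)
    if res_energy == 0.0:
        return float("inf")  # perfectly predictable signal
    return energy / res_energy
```

An analyzer could map low gain values to a taller (e.g., red) output noise level indicator 356.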
The user 118 may move a noise suppression option 330 in a first direction (e.g., down) to decrease a noise suppression level, or may move the noise suppression option 330 in a second direction (e.g., up) to increase the noise suppression level. The user 118 may thus move the noise suppression option 330 to adjust the noise suppression level. The audio analyzer 114 may generate the audio signals 110 based on the noise suppression level. The output noise level indicator 356 may indicate the static noise level of the audio signals 110 or the input signals 108. The output noise level indicator 356 may thus provide the user 118 with feedback about the effect of the selected noise suppression level on the static noise level of the audio signals 110 or the input signals 108. The noise level indicator 356 may be presented in real time, such that it indicates the amount of ambient noise (also referred to as static noise) present in the audio currently being recorded. In some examples, the noise level indicator 356 may be presented identically, or otherwise similarly, to the noise indicator 331. For example, the noise level indicator 356 may be similarly arranged to include green/blue bars, to enhance the visualization of measured noise relative to the amount of noise remaining after noise suppression.
Each of the one or more gain sliders (or gain options) 352 may be associated with a particular microphone. For example, a first gain option of one or more gain options 1308 may correspond to the first microphone 104a of FIG. 1, a second gain option of the one or more gain options 1308 may correspond to the second microphone 104b, and so on. The user 118 may select a particular gain option to adjust a level of the gain associated with the corresponding microphone. For example, the user 118 may move the first gain option in a first direction (e.g., up) to increase a first gain level associated with the first microphone 104a. In a particular example, a particular gain option may correspond to the selectable options 140. For example, the audio analyzer 114 may receive a selection 130 indicating that the user 118 has selected the particular gain option. The selection 130 may further indicate a gain level corresponding to the particular gain option. For example, the selection 130 may indicate that the user 118 moved the particular gain option a first distance in a first direction. The first distance may correspond to a first change amount, and the first direction may indicate that the corresponding gain level is to be increased (or decreased). The audio analyzer 114 may determine, based on the selection 130, that the first gain level corresponding to the particular gain option is to be increased (or decreased) by the first change amount. The audio analyzer 114 may increase (or decrease) the gain level of the corresponding microphone by the first change amount. The input audio level indicators 350 may then be updated to indicate the input audio level corresponding to the microphone whose gain has changed. The input audio level indicators 350 may thus provide the user 118 with feedback about the effect of the selected gain level on the first input audio level corresponding to the microphone.
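The handling of a gain selection 130 (direction plus distance mapped to a change amount) can be sketched as a small handler. The pixels-per-dB scale and the names used here are assumptions for illustration, not part of the patent:

```python
PIXELS_PER_DB = 10.0  # assumed UI scale: 10 px of slider travel per 1 dB

class MicChannel:
    """Minimal stand-in for a microphone channel with an adjustable gain."""
    def __init__(self, gain_db=0.0):
        self.gain_db = gain_db

def apply_gain_selection(channel, distance_px, direction):
    """Apply a slider selection: the distance moved (in pixels) determines
    the change amount, and the direction ('up' increases the gain,
    'down' decreases it). Returns the new gain in dB."""
    change_db = distance_px / PIXELS_PER_DB
    if direction == "down":
        change_db = -change_db
    channel.gain_db += change_db
    return channel.gain_db
```

After such an update, the metering path would recompute the input audio level and refresh the corresponding indicator 350.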
The GUI 120 can therefore provide feedback to the user during multi-channel audio generation. The user can make selections based on the feedback to modify the multi-channel audio generation, thereby improving the user experience and the quality of the generated multi-channel audio.
Each GUI 120 illustrated in the figures of the present invention may include fewer or more components than illustrated (e.g., graphical representations, selectable graphical representations, etc.).
Referring to FIG. 4, a flowchart of a particular illustrative example of a method 400 of multi-channel audio generation is shown. In the example shown, one or more steps may be performed by the audio analyzer 114.
The method 400 includes receiving (402), at a first device, a first plurality of input signals from a plurality of microphones. For example, the audio analyzer 114 of the device 102 may receive the input signals 108 from the microphones 104a-c.
The method 400 also includes displaying (404) a graphical user interface at the first device. The graphical user interface may include selectable options that enable the user to interact with the audio analyzer 114. For example, the user may interact with representations of graphical audio channels presented on, e.g., the display 106, in particular to adjust audio recording parameters or audio processing parameters. The audio analyzer 114 of the device 102 may display the GUI 120, as described herein.
The method 400 further includes receiving (406) a selection of a selectable option. For example, the audio analyzer 114 of the device 102 may receive the selection 130, as described herein.
The method 400 also includes generating (408), based on receiving the selection, a second plurality of audio signals from the first plurality of input signals. For example, the audio analyzer 114 may generate the audio signals 110 from the input signals 108 based on receiving the selection 130, as described herein. Each of the second plurality of audio signals may be associated with a particular direction. Each of the audio signals 110 may be associated with a particular direction (e.g., left, right, center, left-surround, or right-surround), as described herein.
The method 400 further includes sending (410) the second plurality of audio signals to headphones (or other output devices 107). For example, the audio analyzer 114 may send the audio signals 110 to the headphones 112 (or other output devices 107), as described herein.
The method 400 also includes storing (412) the second plurality of audio signals in a memory. For example, the audio analyzer 114 may store the audio signals 110 in the GUI data 150, or may otherwise store information associated with, or corresponding to, the audio signals 110 in the GUI data 150. The GUI data 150 may be stored in a memory coupled to, or included in, the device 102.
The method 400 may enable generation of a multi-channel audio signal (e.g., the second plurality of audio signals) from the first plurality of input signals based on receiving a selection of a selectable option of the GUI. The method 400 may therefore enable interactive generation of a multi-channel audio signal, thereby improving the user experience and the quality of the generated multi-channel audio signal.
FIG. 5 is a flowchart illustrating example operation of one or more techniques in accordance with the present invention. In the example shown in FIG. 5, a computing device may receive (500) a plurality of real-time audio signals output by a plurality of microphones communicably coupled to the computing device. For example, one or more of the plurality of microphones may be communicably coupled to the computing device such that they are built into the device. As another example, one or more of the plurality of microphones may be communicably coupled to the computing device such that they are not built into the device (e.g., peripheral microphones).
The computing device may output (502), to a display, a graphical user interface (GUI) that presents audio information associated with the received audio signals. For example, the audio information may be real-time audio information. As some additional examples, which may be used together in any combination or used separately from each other, the audio information may include information related to each of the real-time audio signals, each of the plurality of microphones, one or more output devices, volume levels related to the one or more output devices, or a saturation level or a noise level of one or more of the microphones. Other examples are identified in the present invention.
One or more of the received audio signals may be processed (504) based on user input associated with the audio information presented via the GUI, to generate one or more processed audio signals. For example, one or more processors of the computing device may process the received audio signals. As an example, one or more processors of the computing device may process the received audio signals by upmixing or downmixing the received audio signals. The upmixing or downmixing may be based on a channel configuration selection from a plurality of channel configuration options presented via the GUI. As another example, if there are two microphones, and the channel configuration selection indicates three output devices (e.g., three loudspeakers), the one or more processors may upmix the two audio signals from the two microphones into a three-channel multi-channel signal configured for use with the three output devices. As another example, if there are three microphones, and the channel configuration selection indicates two output devices (e.g., two loudspeakers), the one or more processors may be configured to downmix the three audio signals from the three microphones into a two-channel multi-channel signal for use with the two output devices.
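The 2-to-3 upmix and 3-to-2 downmix examples can be sketched with simple mixing conventions. Deriving the center channel as the mean of left and right, and folding the center down at roughly -3 dB (a factor of 0.707), are common conventions assumed here for illustration, not coefficients specified by the patent:

```python
def upmix_2_to_3(left, right):
    """Upmix stereo sample lists to L/C/R: derive a center channel as the
    mean of left and right (an illustrative convention)."""
    center = [(l + r) / 2.0 for l, r in zip(left, right)]
    return left, center, right

def downmix_3_to_2(left, center, right):
    """Downmix L/C/R sample lists to stereo: fold the center into both
    sides at reduced level (0.707, roughly -3 dB; an illustrative
    convention)."""
    k = 0.707
    out_l = [l + k * c for l, c in zip(left, center)]
    out_r = [r + k * c for r, c in zip(right, center)]
    return out_l, out_r
```

Production systems typically follow standardized fold-down coefficient sets rather than the ad hoc values above.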
As another example, one or more processors of the computing device may process the received audio signals to filter the received audio signals. The filtering may be based on a noise suppression selection from one or more noise suppression options presented via the GUI.
As another example, one or more processors of the computing device may process the received audio signals to process a first audio signal of the plurality of audio signals such that, before processing, the first audio signal is associated with a first audio channel, and, after processing, the first audio signal is associated with a second audio channel. As yet another example, one or more processors of the computing device may process the received audio signals to process the first audio signal of the plurality of audio signals such that, before processing, the first audio signal is associated only with the first audio channel, and, after processing, the first audio signal is associated only with the second audio channel.
The one or more processed audio signals may be output (506). For example, the one or more processed audio signals may be output to an output device, such as a loudspeaker or headphones.
FIG. 6 is a flowchart illustrating example operation of one or more techniques in accordance with the present invention. In the example shown in FIG. 6, a computing device may receive (600) a plurality of real-time audio signals output by a plurality of microphones communicably coupled to the computing device. For example, one or more of the plurality of microphones may be communicably coupled to the computing device such that they are built into the device. As another example, one or more of the plurality of microphones may be communicably coupled to the computing device such that they are not built into the device (e.g., peripheral microphones). In some examples, the computing device may generate audio information associated with the received audio signals for storage in a memory. For example, the memory may be any memory disclosed herein, such as a memory associated with one or more of the plurality of microphones, a memory associated with an interface associated with one or more of the plurality of microphones, a memory associated with a CPU, GPU, or other processor, a system memory, and so on. The memory may be a combination of one or more of the memories described in the present invention. The memory may be internal or external. For example, the memory may be internal to a CPU, GPU, or other processor, or the memory may be external to a CPU, GPU, or other processor. The memory may constitute temporary storage space, permanent storage space, or a combination thereof.
The computing device may output (602), to a display, a graphical user interface (GUI) that presents noise information associated with one or more of the received audio signals. For example, the noise information may be real-time audio information associated with one or more of the received audio signals. As another example, the noise information presented via the GUI includes information related to an amount of noise corresponding to one or more of the received audio signals, and the GUI includes one or more noise suppression options.
One or more of the received audio signals may be processed (604) based on user input associated with the noise information presented via the GUI, to generate one or more processed audio signals. For example, one or more processors of the computing device may process the received audio signals. As an example, one or more processors of the computing device may process the received audio signals to calculate the amount of noise corresponding to one or more of the received audio signals. As another example, one or more processors of the computing device may process the received audio signals based on a noise suppression selection from one or more noise suppression options presented via the GUI, to filter the received audio signals. In some examples, the filtering may include attenuating noise in one or more of the received audio signals.
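The "attenuate noise based on a selected suppression level" step can be sketched as a simple level gate scaled by the chosen suppression amount. Real noise suppressors work per frequency band rather than per sample, so this time-domain version is a deliberately simplified assumption:

```python
def suppress_noise(samples, noise_floor, suppression_level):
    """Attenuate samples whose magnitude falls below an estimated noise
    floor. suppression_level in [0, 1]: 0 leaves the signal unchanged,
    1 fully mutes sub-floor samples. A simplified time-domain stand-in
    for the band-wise filtering a real implementation would use."""
    gain = 1.0 - suppression_level
    return [x * gain if abs(x) < noise_floor else x for x in samples]
```

The GUI's noise suppression option 330 would map directly onto `suppression_level`, and the output noise level indicator would then reflect the reduced residual noise.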
In some examples, one or more processors of the computing device may detect a scene corresponding to a location type at which the computing device is located, determine whether to recommend noise suppression based on the detected scene corresponding to the location type, present the determined noise suppression recommendation via the GUI, or any combination thereof. In one example, detecting the scene may be based on one or more of the following: an image captured by the computing device using a camera, or one or more of the received audio signals.
The one or more processed audio signals may be output (606). For example, the one or more processed audio signals may be output to an output device, such as a loudspeaker or headphones.
In accordance with the present invention, unless context dictates otherwise, the term "or" may be interpreted as "and/or". In addition, although phrases such as "one or more" or "at least one" may have been used for some features disclosed herein but not for other features, the features for which such language was not used may, unless context dictates otherwise, be interpreted to have such a meaning implied.
The techniques described in the present invention may be implemented, at least in part, in hardware, software, firmware, or any combination thereof. For example, various aspects of the described techniques may be implemented within one or more processors, including one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The term "processor" or "processing circuitry" may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit comprising hardware may also perform one or more of the techniques of the present invention.
Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various techniques described in the present invention. In addition, any of the described units, modules, or components may be implemented together, or separately as discrete but interoperable logic devices. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware, firmware, or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware, firmware, and/or software components, or integrated within common or separate hardware, firmware, or software components.
The techniques described in the present invention may also be embodied or encoded in an article of manufacture including a computer-readable storage medium encoded with instructions. Instructions embedded or encoded in an article of manufacture including an encoded computer-readable storage medium may cause one or more programmable processors, or other processors, to implement one or more of the techniques described herein, such as when the instructions included or encoded in the computer-readable storage medium are executed by the one or more processors. Computer-readable storage media may include random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, a hard disk, a compact disc ROM (CD-ROM), a floppy disk, a cassette, magnetic media, optical media, or other computer-readable media. In some examples, an article of manufacture may include one or more computer-readable storage media.
In some examples, a computer-readable storage medium may include a non-transitory medium. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. In some examples, a non-transitory storage medium may store data that can change over time (e.g., in RAM or cache memory).
Those skilled in the art will understand that one or more circuits, processors, and/or software may be used to implement the methods and processes described herein. A circuit refers to any circuit, whether integrated or external to a processing unit. Software refers to code or instructions executable by a processing unit to achieve a desired result. This software may be stored locally on a storage medium of the device, such as a memory of the processing unit, a system memory, or another memory.
The previous description of the disclosed examples is provided to enable a person skilled in the art to make or use the disclosed examples. Various modifications to these examples will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other examples without departing from the scope of the present invention. Thus, the present invention is not intended to be limited to the examples shown herein, but is to be accorded the widest possible scope consistent with the principles and novel features as defined by the appended claims.
Claims (30)
1. A method of processing audio data, the method comprising:
receiving, by a computing device, a plurality of real-time audio signals from a plurality of microphones communicably coupled to the computing device, wherein the plurality of real-time audio signals collectively form a surround sound recording (SSR) of the audio data;
outputting, to a display, a graphical user interface (GUI) for presenting audio information associated with the SSR of the audio data;
receiving, via the GUI, user input specifying one or more parameters associated with the SSR of the audio data;
processing one or more audio signals of the received plurality of audio signals based on the one or more parameters specified via the user input received via the GUI;
adjusting, based on the one or more processed audio signals, the SSR of the audio data to generate an adjusted SSR based on the one or more parameters specified via the user input received via the GUI; and
outputting the adjusted SSR of the audio data.
2. The method of claim 1, wherein the user input received via the GUI comprises an audio channel selection specifying a number of audio channels to be output as part of the adjusted SSR of the audio data.
3. The method of claim 2, further comprising generating the GUI to include a resizable graphical representation associated with each audio channel of the number of audio channels.
4. The method of claim 3, wherein each graphical representation associated with each audio channel is selectable via the user input, to enable manipulation of the audio channel associated with that graphical representation.
5. The method of claim 2, wherein processing the one or more audio signals of the received plurality of audio signals comprises at least one of: deactivating one of the audio channels, suppressing noise associated with one or more of the audio channels, or moving one or more of the received plurality of audio signals from one of the audio channels to a different one of the audio channels.
6. The method of claim 1, wherein the audio information presented via the GUI includes information related to each of the real-time audio signals, and wherein the GUI includes a respective graphical representation of each of the real-time audio signals.
7. The method of claim 1, wherein the GUI includes a plurality of channel configuration options, and wherein processing the one or more audio signals of the received plurality of audio signals comprises upmixing or downmixing the one or more audio signals of the received plurality of audio signals based on the user input, the user input providing a channel configuration selection from the plurality of channel configuration options presented via the GUI.
8. The method of claim 1, wherein the GUI includes one or more noise suppression options, and wherein processing the one or more audio signals of the received plurality of audio signals comprises filtering the received plurality of audio signals based on the user input, the user input providing a noise suppression selection from the one or more noise suppression options presented via the GUI.
9. The method of claim 1, wherein processing the one or more audio signals of the received plurality of audio signals comprises:
before processing, a first audio signal of the received plurality of audio signals is associated with a first audio channel, and
after processing, the first audio signal is associated with a second audio channel.
10. The method of claim 9, wherein the first audio channel corresponds to a first output device coupled to the computing device, and the second audio channel corresponds to a second output device coupled to the computing device.
11. An apparatus for processing audio data, comprising:
a communication unit configured to receive a plurality of real-time audio signals from a plurality of microphones;
a memory, coupled to the communication unit, the memory configured to store the received plurality of audio signals; and
one or more processors, coupled to the memory, the one or more processors configured to:
generate a surround sound recording (SSR) of audio data using the plurality of audio signals stored in the memory;
output, for display, graphical content of a graphical user interface (GUI) for presenting audio information associated with the SSR of the audio data;
receive, via the GUI, user input specifying one or more parameters associated with the SSR formed by the received plurality of audio signals;
process one or more audio signals of the received plurality of audio signals based on the one or more parameters specified via the user input received via the GUI;
adjust, based on the one or more processed audio signals, the SSR of the audio data to generate an adjusted SSR based on the one or more parameters specified via the user input received via the GUI; and
output the adjusted SSR of the audio data.
12. The apparatus of claim 11, wherein the user input received via the GUI comprises an audio channel selection specifying a number of audio channels to be output as part of the adjusted SSR of the audio data.
13. The apparatus of claim 12, wherein the one or more processors are further configured to generate the GUI to include a resizable graphical representation associated with each audio channel of the number of audio channels.
14. The apparatus of claim 13, wherein each graphical representation associated with each audio channel is selectable via the user input, to enable manipulation of the audio channel associated with that graphical representation.
15. The apparatus of claim 12, wherein, to process the received plurality of audio signals, the one or more processors are configured to deactivate one of the audio channels, suppress noise associated with one or more of the audio channels, or move one or more of the received plurality of audio signals from one of the audio channels to a different one of the audio channels.
16. equipment according to claim 11, wherein the audio-frequency information for presenting via the GUI includes and institute
The related information of each of real-time audio signal is stated, and wherein the GUI is configured to comprising the real-time audio signal
Each of respective graphical indicate.
17. equipment according to claim 11, wherein the GUI includes multiple channel configuration options, and wherein in order to locate
The received audio signal is managed, one or more described processors are configured to input based on the user mixed or lower mixed to go up
One or more described audio signals in described received multiple audio signals, the user, which inputs, to be provided selected from via described
The channel configuration selection for the multiple channel configuration option that GUI is presented.
18. The device according to claim 11, wherein the GUI includes one or more noise suppression options, and wherein, to process the received audio signals, the one or more processors are configured to filter the received plurality of audio signals based on the user input, the user input providing a noise suppression selection selected from the one or more noise suppression options presented via the GUI.
19. The device according to claim 11, wherein, to process the received plurality of audio signals, the one or more processors are configured to:
associate, prior to the processing, a first audio signal of the received plurality of audio signals with a first audio track; and
associate, after the processing, the first audio signal with a second audio track.
20. The device according to claim 19, wherein the first audio track corresponds to a first output device in communication with the device, and the second audio track corresponds to a second output device in communication with the device.
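The re-association of a signal with a different audio track (and hence a different output device), as in claims 19 and 20, can be sketched as a routing-table update. All identifiers below are hypothetical.

```python
# Hypothetical routing table: audio track id -> ids of signals assigned to it.
# The two tracks stand in for the first and second output devices of claim 20.
tracks = {"track_1": ["mic_a", "mic_b"], "track_2": ["mic_c"]}

def move_signal(tracks, signal_id, src, dst):
    """Re-associate a signal with a different audio track, as a user's
    GUI input might request."""
    tracks[src].remove(signal_id)
    tracks[dst].append(signal_id)

move_signal(tracks, "mic_a", "track_1", "track_2")
print(tracks)  # {'track_1': ['mic_b'], 'track_2': ['mic_c', 'mic_a']}
```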
21. An apparatus for processing audio data, comprising:
means for receiving a plurality of real-time audio signals from a plurality of microphones communicatively coupled to the apparatus, wherein the plurality of real-time audio signals collectively form a surround sound recording (SSR) of audio data;
means for outputting a graphical user interface (GUI) that presents audio information associated with the SSR of the audio data;
means for receiving, via the GUI, user input specifying one or more parameters associated with the SSR of the audio data;
means for processing one or more audio signals of the received plurality of audio signals based on the one or more parameters specified in the user input received via the GUI;
means for adjusting the SSR of the audio data based on the one or more processed audio signals to generate an adjusted SSR based on the one or more parameters specified in the user input received via the GUI; and
means for outputting the adjusted SSR of the audio data.
22. The apparatus according to claim 21, wherein the user input received via the GUI includes an audio track selection specifying a number of audio tracks to be output as part of the adjusted SSR of the audio data.
23. The apparatus according to claim 22, further comprising means for generating the GUI to include a resizable graphical representation associated with each audio track of the number of audio tracks.
24. The apparatus according to claim 23, wherein each graphical representation associated with each audio track is selectable via the user input to enable manipulation of the audio track associated with that graphical representation.
25. The apparatus according to claim 22, wherein the means for processing the one or more audio signals of the received plurality of audio signals includes at least one of:
means for deactivating an audio track of the audio tracks;
means for suppressing noise associated with one or more of the audio tracks; or
means for moving one or more of the received plurality of audio signals from one audio track of the audio tracks to a different audio track of the audio tracks.
26. The apparatus according to claim 21, wherein the GUI includes a plurality of channel configuration options, and wherein the means for processing the one or more audio signals of the received audio signals includes means for up-mixing or down-mixing one or more audio signals of the received plurality of audio signals based on the user input, the user input providing a channel configuration selection selected from the plurality of channel configuration options presented via the GUI.
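The up-mix/down-mix processing named in claim 26 can be illustrated with the simplest possible channel conversions. The equal-weight down-mix coefficients and channel-duplication up-mix below are common conventions, not values specified by the patent.

```python
def downmix_stereo_to_mono(left, right):
    """Equal-weight down-mix of two channels into one."""
    return [0.5 * (l + r) for l, r in zip(left, right)]

def upmix_mono_to_stereo(mono):
    """Trivial up-mix: duplicate the mono channel to left and right."""
    return list(mono), list(mono)

# A hard-left then hard-right pair of samples averages to a centered signal.
left, right = [1.0, 0.0], [0.0, 1.0]
mono = downmix_stereo_to_mono(left, right)
print(mono)  # [0.5, 0.5]
```

A production down-mix would typically weight center and surround channels differently (e.g. per ITU-R BS.775) rather than averaging; the GUI's channel configuration selection would pick among such coefficient sets.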
27. The apparatus according to claim 21, wherein the GUI includes one or more noise suppression options, and wherein processing the one or more audio signals of the received audio signals includes filtering the received audio signals based on a noise suppression selection from the one or more noise suppression options presented via the GUI.
28. The apparatus according to claim 21, wherein the means for processing the one or more audio signals of the received plurality of audio signals includes:
means for associating, prior to the processing, a first audio signal of the received plurality of audio signals with a first audio track; and
means for associating, after the processing, the first audio signal with a second audio track.
29. The apparatus according to claim 28, wherein the first audio track corresponds to a first output device coupled to the apparatus, and the second audio track corresponds to a second output device coupled to the apparatus.
30. A non-transitory computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors of a computing device to:
receive a plurality of real-time audio signals output by a plurality of microphones communicatively coupled to the computing device, wherein the plurality of real-time audio signals collectively form a surround sound recording (SSR) of audio data;
output, for display, graphical content of a graphical user interface (GUI) that presents audio information associated with the SSR of the audio data;
receive, via the GUI, user input specifying one or more parameters associated with the SSR of the audio data;
process one or more audio signals of the received plurality of audio signals based on the one or more parameters specified in the user input received via the GUI;
adjust the SSR of the audio data based on the one or more processed audio signals to generate an adjusted SSR based on the one or more parameters specified in the user input received via the GUI; and
output the adjusted SSR of the audio data.
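The instruction sequence of claim 30 (receive signals, take GUI parameters, process, adjust, output) can be condensed into one function. This sketch models the surround sound recording as a dict of named sample lists and the GUI-specified parameters as per-signal gains; both representations are assumptions for illustration, not the patent's data model.

```python
def adjust_ssr(signals, params):
    """Apply GUI-specified per-signal gains to the signals that make up
    the recording, returning the adjusted recording."""
    gains = params.get("gains", {})
    return {name: [gains.get(name, 1.0) * s for s in sig]
            for name, sig in signals.items()}

# Two microphone signals form the recording; the user mutes the rear mic.
ssr = {"front": [0.2, 0.4], "rear": [0.1, 0.1]}
adjusted = adjust_ssr(ssr, {"gains": {"rear": 0.0}})
print(adjusted["rear"])   # [0.0, 0.0]
print(adjusted["front"])  # [0.2, 0.4]
```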
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462020928P | 2014-07-03 | 2014-07-03 | |
US62/020,928 | 2014-07-03 | ||
US14/789,736 US10073607B2 (en) | 2014-07-03 | 2015-07-01 | Single-channel or multi-channel audio control interface |
US14/789,736 | 2015-07-01 | ||
PCT/US2015/039051 WO2016004345A1 (en) | 2014-07-03 | 2015-07-02 | Single-channel or multi-channel audio control interface |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106664490A CN106664490A (en) | 2017-05-10 |
CN106664490B true CN106664490B (en) | 2019-05-14 |
Family
ID=53783304
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201580035448.XA Active CN106664484B (en) | 2014-07-03 | 2015-07-02 | Monophonic or multichannel audio control interface |
CN201580035622.0A Active CN106664490B (en) | 2014-07-03 | 2015-07-02 | Monophonic or multichannel audio control interface |
CN201910690711.9A Pending CN110569016A (en) | 2014-07-03 | 2015-07-02 | Single or multi-channel audio control interface |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201580035448.XA Active CN106664484B (en) | 2014-07-03 | 2015-07-02 | Monophonic or multichannel audio control interface |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910690711.9A Pending CN110569016A (en) | 2014-07-03 | 2015-07-02 | Single or multi-channel audio control interface |
Country Status (4)
Country | Link |
---|---|
US (2) | US10073607B2 (en) |
EP (2) | EP3165003B1 (en) |
CN (3) | CN106664484B (en) |
WO (2) | WO2016004345A1 (en) |
Families Citing this family (76)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
DE212014000045U1 (en) | 2013-02-07 | 2015-09-24 | Apple Inc. | Voice trigger for a digital assistant |
JP6553052B2 (en) * | 2014-01-03 | 2019-07-31 | ハーマン インターナショナル インダストリーズ インコーポレイテッド | Gesture-interactive wearable spatial audio system |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10073607B2 (en) | 2014-07-03 | 2018-09-11 | Qualcomm Incorporated | Single-channel or multi-channel audio control interface |
USD766267S1 (en) * | 2014-09-02 | 2016-09-13 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with graphical user interface |
USD762663S1 (en) * | 2014-09-02 | 2016-08-02 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with graphical user interface |
WO2016036436A1 (en) | 2014-09-02 | 2016-03-10 | Apple Inc. | Stopwatch and timer user interfaces |
WO2016052876A1 (en) * | 2014-09-30 | 2016-04-07 | Samsung Electronics Co., Ltd. | Display apparatus and controlling method thereof |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
JP6236748B2 (en) * | 2015-03-25 | 2017-11-29 | ヤマハ株式会社 | Sound processor |
KR102386309B1 (en) * | 2015-06-04 | 2022-04-14 | 삼성전자주식회사 | Electronic device and method of controlling input or output in the electronic device |
US20160378747A1 (en) | 2015-06-29 | 2016-12-29 | Apple Inc. | Virtual assistant for media playback |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10013996B2 (en) | 2015-09-18 | 2018-07-03 | Qualcomm Incorporated | Collaborative audio processing |
US9706300B2 (en) * | 2015-09-18 | 2017-07-11 | Qualcomm Incorporated | Collaborative audio processing |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11929088B2 (en) * | 2015-11-20 | 2024-03-12 | Synaptics Incorporated | Input/output mode control for audio processing |
US20180367836A1 (en) * | 2015-12-09 | 2018-12-20 | Smartron India Private Limited | A system and method for controlling miracast content with hand gestures and audio commands |
CN105741861B (en) * | 2016-02-05 | 2017-12-15 | 京东方科技集团股份有限公司 | Intelligent playing system, method, wearable device, main unit and broadcast unit |
EP3440527A4 (en) * | 2016-04-05 | 2019-11-27 | Hewlett-Packard Development Company, L.P. | Audio interface for multiple microphones and speaker systems to interface with a host |
US10419455B2 (en) * | 2016-05-10 | 2019-09-17 | Allstate Insurance Company | Cyber-security presence monitoring and assessment |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
US10409552B1 (en) * | 2016-09-19 | 2019-09-10 | Amazon Technologies, Inc. | Speech-based audio indicators |
US11431836B2 (en) | 2017-05-02 | 2022-08-30 | Apple Inc. | Methods and interfaces for initiating media playback |
US10992795B2 (en) | 2017-05-16 | 2021-04-27 | Apple Inc. | Methods and interfaces for home media control |
DK180048B1 (en) | 2017-05-11 | 2020-02-04 | Apple Inc. | MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION |
DK201770427A1 (en) | 2017-05-12 | 2018-12-20 | Apple Inc. | Low-latency intelligent automated assistant |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
DK201770411A1 (en) * | 2017-05-15 | 2018-12-20 | Apple Inc. | Multi-modal interfaces |
US20220279063A1 (en) | 2017-05-16 | 2022-09-01 | Apple Inc. | Methods and interfaces for home media control |
CN111343060B | 2017-05-16 | 2022-02-11 | Apple Inc. | Method and interface for home media control |
EP3634009A4 (en) * | 2017-05-29 | 2021-02-24 | Audio-Technica Corporation | Signal processing device |
US10012691B1 (en) * | 2017-11-07 | 2018-07-03 | Qualcomm Incorporated | Audio output diagnostic circuit |
CN108200515B (en) * | 2017-12-29 | 2021-01-22 | 苏州科达科技股份有限公司 | Multi-beam conference pickup system and method |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
WO2019217341A1 (en) | 2018-05-07 | 2019-11-14 | Apple Inc. | User interfaces for viewing live video feeds and recorded video |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT |
JP7314130B2 (en) * | 2018-06-06 | 2023-07-25 | AlphaTheta株式会社 | volume control device |
US20210368267A1 (en) * | 2018-07-20 | 2021-11-25 | Hewlett-Packard Development Company, L.P. | Stereophonic balance of displays |
US11240623B2 (en) | 2018-08-08 | 2022-02-01 | Qualcomm Incorporated | Rendering audio data from independently controlled audio zones |
US11432071B2 (en) | 2018-08-08 | 2022-08-30 | Qualcomm Incorporated | User interface for controlling audio zones |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US10887467B2 (en) | 2018-11-20 | 2021-01-05 | Shure Acquisition Holdings, Inc. | System and method for distributed call processing and audio reinforcement in conferencing environments |
US11264029B2 (en) | 2019-01-05 | 2022-03-01 | Starkey Laboratories, Inc. | Local artificial intelligence assistant system with ear-wearable device |
US11264035B2 (en) | 2019-01-05 | 2022-03-01 | Starkey Laboratories, Inc. | Audio signal processing for automatic transcription using ear-wearable device |
US11463615B2 (en) * | 2019-03-13 | 2022-10-04 | Panasonic Intellectual Property Management Co., Ltd. | Imaging apparatus |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11363071B2 (en) | 2019-05-31 | 2022-06-14 | Apple Inc. | User interfaces for managing a local network |
US10904029B2 (en) * | 2019-05-31 | 2021-01-26 | Apple Inc. | User interfaces for managing controllable external devices |
CN117170620A (en) | 2019-05-31 | 2023-12-05 | 苹果公司 | User interface for audio media controls |
US11010121B2 (en) | 2019-05-31 | 2021-05-18 | Apple Inc. | User interfaces for audio media control |
US11468890B2 (en) | 2019-06-01 | 2022-10-11 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11513667B2 (en) | 2020-05-11 | 2022-11-29 | Apple Inc. | User interface for audio message |
US11061543B1 (en) | 2020-05-11 | 2021-07-13 | Apple Inc. | Providing relevant data items based on context |
US11038934B1 (en) | 2020-05-11 | 2021-06-15 | Apple Inc. | Digital assistant hardware abstraction |
US11490204B2 (en) | 2020-07-20 | 2022-11-01 | Apple Inc. | Multi-device audio adjustment coordination |
US11438683B2 (en) | 2020-07-21 | 2022-09-06 | Apple Inc. | User identification using headphones |
US11922949B1 (en) * | 2020-08-17 | 2024-03-05 | Amazon Technologies, Inc. | Sound detection-based power control of a device |
CN114747196A (en) * | 2020-08-21 | 2022-07-12 | Lg电子株式会社 | Terminal and method for outputting multi-channel audio using a plurality of audio devices |
US11392291B2 (en) | 2020-09-25 | 2022-07-19 | Apple Inc. | Methods and interfaces for media control with dynamic feedback |
KR20220065370A (en) * | 2020-11-13 | 2022-05-20 | 삼성전자주식회사 | Electronice device and control method thereof |
CN114567840B (en) * | 2020-11-27 | 2024-02-06 | 北京小米移动软件有限公司 | Audio output method and device, mobile terminal and storage medium |
US11741983B2 (en) * | 2021-01-13 | 2023-08-29 | Qualcomm Incorporated | Selective suppression of noises in a sound signal |
JP2022134182A (en) * | 2021-03-03 | 2022-09-15 | ヤマハ株式会社 | Video output method, video output device, and video output system |
WO2023192046A1 (en) * | 2022-03-29 | 2023-10-05 | Dolby Laboratories Licensing Corporation | Context aware audio capture and rendering |
US20240070110A1 (en) * | 2022-08-24 | 2024-02-29 | Dell Products, L.P. | Contextual noise suppression and acoustic context awareness (aca) during a collaboration session in a heterogenous computing platform |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102007532A (en) * | 2008-04-16 | 2011-04-06 | Lg电子株式会社 | A method and an apparatus for processing an audio signal |
Family Cites Families (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR930007376B1 | 1991-07-19 | 1993-08-09 | Samsung Electronics Co., Ltd. | Automatic sound level control circuit |
US7499969B1 (en) * | 2004-06-25 | 2009-03-03 | Apple Inc. | User interface for multiway audio conferencing |
US8020102B2 (en) | 2005-08-11 | 2011-09-13 | Enhanced Personal Audiovisual Technology, Llc | System and method of adjusting audiovisual content to improve hearing |
WO2007072467A1 (en) | 2005-12-19 | 2007-06-28 | Thurdis Developments Limited | An interactive multimedia apparatus |
JP5109978B2 | 2006-11-07 | 2012-12-26 | Sony Corporation | Electronic device, control information transmitting method and control information receiving method |
US20080253592A1 (en) | 2007-04-13 | 2008-10-16 | Christopher Sanders | User interface for multi-channel sound panner |
US20080259731A1 (en) | 2007-04-17 | 2008-10-23 | Happonen Aki P | Methods and apparatuses for user controlled beamforming |
JP5555987B2 | 2008-07-11 | 2014-07-23 | Fujitsu Limited | Noise suppression device, mobile phone, noise suppression method, and computer program |
US20100040217A1 (en) | 2008-08-18 | 2010-02-18 | Sony Ericsson Mobile Communications Ab | System and method for identifying an active participant in a multiple user communication session |
US20120096353A1 (en) | 2009-06-19 | 2012-04-19 | Dolby Laboratories Licensing Corporation | User-specific features for an upgradeable media kernel and engine |
TW201116041A (en) | 2009-06-29 | 2011-05-01 | Sony Corp | Three-dimensional image data transmission device, three-dimensional image data transmission method, three-dimensional image data reception device, three-dimensional image data reception method, image data transmission device, and image data reception |
JP2011030180A (en) | 2009-06-29 | 2011-02-10 | Sony Corp | Three-dimensional image data transmission device, three-dimensional image data transmission method, three-dimensional image data reception device, and three-dimensional image data reception method |
US8265928B2 (en) | 2010-04-14 | 2012-09-11 | Google Inc. | Geotagged environmental audio for enhanced speech recognition accuracy |
US9564148B2 (en) | 2010-05-18 | 2017-02-07 | Sprint Communications Company L.P. | Isolation and modification of audio streams of a mixed signal in a wireless communication device |
US9661428B2 (en) * | 2010-08-17 | 2017-05-23 | Harman International Industries, Inc. | System for configuration and management of live sound system |
EP2460464A1 (en) * | 2010-12-03 | 2012-06-06 | Koninklijke Philips Electronics N.V. | Sleep disturbance monitoring apparatus |
US8903722B2 (en) | 2011-08-29 | 2014-12-02 | Intel Mobile Communications GmbH | Noise reduction for dual-microphone communication devices |
US8712076B2 (en) | 2012-02-08 | 2014-04-29 | Dolby Laboratories Licensing Corporation | Post-processing including median filtering of noise suppression gains |
US20150296247A1 (en) | 2012-02-29 | 2015-10-15 | ExXothermic, Inc. | Interaction of user devices and video devices |
EP2642407A1 (en) * | 2012-03-22 | 2013-09-25 | Harman Becker Automotive Systems GmbH | Method for retrieving and a system for reproducing an audio signal |
US10107887B2 (en) | 2012-04-13 | 2018-10-23 | Qualcomm Incorporated | Systems and methods for displaying a user interface |
US20130315402A1 (en) | 2012-05-24 | 2013-11-28 | Qualcomm Incorporated | Three-dimensional sound compression and over-the-air transmission during a call |
US9966067B2 (en) | 2012-06-08 | 2018-05-08 | Apple Inc. | Audio noise estimation and audio noise reduction using multiple microphones |
EP2680616A1 (en) | 2012-06-25 | 2014-01-01 | LG Electronics Inc. | Mobile terminal and audio zooming method thereof |
US8989552B2 (en) | 2012-08-17 | 2015-03-24 | Nokia Corporation | Multi device audio capture |
WO2014032709A1 (en) | 2012-08-29 | 2014-03-06 | Huawei Technologies Co., Ltd. | Audio rendering system |
US20140105411A1 (en) | 2012-10-16 | 2014-04-17 | Peter Santos | Methods and systems for karaoke on a mobile device |
US20140115470A1 (en) | 2012-10-22 | 2014-04-24 | Apple Inc. | User interface for audio editing |
US9368117B2 (en) | 2012-11-14 | 2016-06-14 | Qualcomm Incorporated | Device and system having smart directional conferencing |
US20140191759A1 (en) | 2012-11-14 | 2014-07-10 | Mark S. Olsson | Multi-frequency locating systems and methods |
US9679564B2 (en) | 2012-12-12 | 2017-06-13 | Nuance Communications, Inc. | Human transcriptionist directed posterior audio source separation |
US10073607B2 (en) | 2014-07-03 | 2018-09-11 | Qualcomm Incorporated | Single-channel or multi-channel audio control interface |
US9778899B2 (en) * | 2015-02-25 | 2017-10-03 | Intel Corporation | Techniques for setting volume level within a tree of cascaded volume controls with variating operating delays |
2015
- 2015-07-01 US US14/789,736 patent/US10073607B2/en active Active
- 2015-07-01 US US14/789,766 patent/US10051364B2/en active Active
- 2015-07-02 WO PCT/US2015/039051 patent/WO2016004345A1/en active Application Filing
- 2015-07-02 EP EP15745019.8A patent/EP3165003B1/en active Active
- 2015-07-02 WO PCT/US2015/039065 patent/WO2016004356A1/en active Application Filing
- 2015-07-02 CN CN201580035448.XA patent/CN106664484B/en active Active
- 2015-07-02 EP EP15747266.3A patent/EP3165004B1/en active Active
- 2015-07-02 CN CN201580035622.0A patent/CN106664490B/en active Active
- 2015-07-02 CN CN201910690711.9A patent/CN110569016A/en active Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102007532A (en) * | 2008-04-16 | 2011-04-06 | Lg电子株式会社 | A method and an apparatus for processing an audio signal |
Also Published As
Publication number | Publication date |
---|---|
US10051364B2 (en) | 2018-08-14 |
EP3165003A1 (en) | 2017-05-10 |
US20160004405A1 (en) | 2016-01-07 |
US10073607B2 (en) | 2018-09-11 |
EP3165004B1 (en) | 2021-08-18 |
CN106664484B (en) | 2019-08-23 |
CN110569016A (en) | 2019-12-13 |
WO2016004356A1 (en) | 2016-01-07 |
CN106664490A (en) | 2017-05-10 |
EP3165004A1 (en) | 2017-05-10 |
EP3165003B1 (en) | 2018-08-29 |
US20160004499A1 (en) | 2016-01-07 |
WO2016004345A1 (en) | 2016-01-07 |
CN106664484A (en) | 2017-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106664490B (en) | Monophonic or multichannel audio control interface | |
US10123140B2 (en) | Dynamic calibration of an audio system | |
US10136240B2 (en) | Processing audio data to compensate for partial hearing loss or an adverse hearing environment | |
KR101984356B1 (en) | An audio scene apparatus | |
US9918174B2 (en) | Wireless exchange of data between devices in live events | |
US20170309289A1 (en) | Methods, apparatuses and computer programs relating to modification of a characteristic associated with a separated audio signal | |
US9886166B2 (en) | Method and apparatus for generating audio information | |
CN101800919A (en) | Sound signal processing device and playback device | |
EP2812785B1 (en) | Visual spatial audio | |
US20220246161A1 (en) | Sound modification based on frequency composition | |
EP4005234A1 (en) | Rendering audio over multiple speakers with multiple activation criteria | |
US20200058317A1 (en) | Playback enhancement in audio systems | |
EP4074077A2 (en) | Content and environmentally aware environmental noise compensation | |
US12003933B2 (en) | Rendering audio over multiple speakers with multiple activation criteria | |
KR102638121B1 (en) | Dynamics processing across devices with differing playback capabilities | |
CN114128312B (en) | Audio rendering for low frequency effects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||