CN107749299A - A multi-audio output method and device - Google Patents
A multi-audio output method and device Download PDF Info
- Publication number
- CN107749299A CN201710900894.3A CN201710900894A CN107749299A
- Authority
- CN
- China
- Prior art keywords
- audio
- data
- output
- mixing
- equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/162—Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
Abstract
The invention provides a multi-audio output method and device. The method includes the steps of: a processor receives multiple groups of audio stream data, classifies the audio stream data according to preset configuration information, resamples and mixes the audio streams belonging to the same output type to obtain mixed data, synthesizes the mixed data and/or the unmixed audio stream data according to a preset synthesis rule to obtain synthesized audio data, and transmits the synthesized audio data to an audio codec; the audio codec receives the synthesized audio data, separates it according to the separation rule corresponding to the preset synthesis rule, and transmits the separated mixed data or unmixed audio stream data to the corresponding output devices for output. The invention enables a device with a single audio output interface to output multiple audio streams separately and simultaneously, effectively reducing hardware cost and improving user experience.
Description
Technical field
The present invention relates to the field of audio output processing, and in particular to a multi-audio output method and device.
Background technology
As intelligent systems reach into every field, the hardware configuration of devices keeps rising, and a single device is increasingly required to cover different usage scenarios at the same time. Among these, the scenario of separately outputting multiple audio streams is particularly critical.
As shown in Fig. 1, for a single existing device to support multi-audio output, it normally has to provide multiple audio output interfaces, for example two groups of I2S interfaces, or one group of I2S and one group of PCM interfaces, which is costly. When the audio output interfaces are pin-multiplexed, or the device simply does not have enough audio output interfaces, the function of simultaneously outputting multiple audio streams to different hardware cannot be realized, which degrades the user experience.
Summary of the invention
Therefore, it is necessary to provide a technical scheme for multi-audio output, to solve the problem that devices without multiple audio output interfaces cannot support simultaneous, separate multi-audio output, resulting in a poor user experience.
To achieve the above object, the inventors provide a multi-audio output device. The device includes a processor, an audio codec, a storage medium and at least one output device; the processor is connected with the audio codec, the processor is connected with the storage medium, and the audio codec is connected with the output devices. The storage medium stores a computer program which, when executed by the processor, implements the following steps:
receiving multiple groups of audio stream data and classifying the audio stream data according to preset configuration information, the preset configuration information including the output parameters used by each output device when outputting data;
resampling and mixing the audio streams belonging to the same output type to obtain mixed data;
synthesizing the mixed data and/or the unmixed audio stream data according to a preset synthesis rule to obtain synthesized audio data, and transmitting the synthesized audio data to the audio codec.
The audio codec is configured to receive the synthesized audio data, separate it according to the separation rule corresponding to the preset synthesis rule, and transmit the separated mixed data or unmixed audio stream data to the corresponding output devices for output.
Further, the computer program, when executed by the processor, also implements the following steps:
judging whether the device is in multi-audio output mode; if so, synthesizing the mixed data and the unmixed audio stream data according to the preset synthesis rule to obtain synthesized audio data; otherwise, directly outputting the mixed data or the unmixed audio stream data.
Further, the audio codec is configured to determine the preset synthesis rule according to the output parameters of the output devices and the synthesized audio data, and to determine the corresponding separation rule from the determined preset synthesis rule.
Further, the audio codec is configured to judge whether the device is in multi-device output mode; if so, it separates the synthesized audio data according to the separation rule corresponding to the preset synthesis rule and transmits the separated mixed data or unmixed audio stream data to multiple different output devices for output; otherwise, it directly transmits the synthesized audio data to the corresponding output device for output.
The inventors also provide a multi-audio output method applied to a multi-audio output device. The device includes a processor, an audio codec and at least one output device; the processor is connected with the audio codec, and the audio codec is connected with the output devices. The method includes the following steps:
the processor receives multiple groups of audio stream data and classifies the audio stream data according to preset configuration information, the preset configuration information including the output parameters used by each output device when outputting data;
the processor resamples and mixes the audio streams belonging to the same output type to obtain mixed data;
the processor synthesizes the mixed data and/or the unmixed audio stream data according to a preset synthesis rule to obtain synthesized audio data, and transmits the synthesized audio data to the audio codec;
the audio codec receives the synthesized audio data, separates it according to the separation rule corresponding to the preset synthesis rule, and transmits the separated mixed data or unmixed audio stream data to the corresponding output devices for output.
Further, the method also includes the following steps:
the processor judges whether the device is in multi-audio output mode; if so, it synthesizes the mixed data and the unmixed audio stream data according to the preset synthesis rule to obtain synthesized audio data; otherwise, it directly outputs the mixed data or the unmixed audio stream data.
Further, the method also includes the following steps:
the audio codec determines the preset synthesis rule according to the output parameters of the output devices and the synthesized audio data, and determines the corresponding separation rule from the determined preset synthesis rule.
Further, the method includes:
the audio codec judges whether the device is in multi-device output mode; if so, it separates the synthesized audio data according to the separation rule corresponding to the preset synthesis rule and transmits the separated mixed data or unmixed audio stream data to multiple different output devices for output; otherwise, it directly transmits the synthesized audio data to the corresponding output device for output.
Unlike the prior art, the multi-audio output method and device of the above technical scheme include the steps of: the processor receives multiple groups of audio stream data, classifies the audio stream data according to preset configuration information, resamples and mixes the audio streams belonging to the same output type to obtain mixed data, synthesizes the mixed data and/or the unmixed audio stream data according to a preset synthesis rule to obtain synthesized audio data, and transmits the synthesized audio data to the audio codec; the audio codec receives the synthesized audio data, separates it according to the separation rule corresponding to the preset synthesis rule, and transmits the separated mixed data or unmixed audio stream data to the corresponding output devices for output. The invention enables a device with a single audio output interface to output multiple audio streams separately and simultaneously, effectively reducing hardware cost and improving user experience.
Brief description of the drawings
Fig. 1 is a structural diagram of the existing multi-audio output device described in the background;
Fig. 2 is a structural diagram of the multi-audio output device according to an embodiment of the invention;
Fig. 3 is a structural diagram of the multi-audio output device according to another embodiment of the invention;
Fig. 4 is a schematic diagram of the preset synthesis rule according to an embodiment of the invention;
Fig. 5 is a schematic diagram of the preset synthesis rule according to another embodiment of the invention;
Fig. 6 is a flowchart of the multi-audio output method according to an embodiment of the invention.
Description of reference numerals:
101, processor;
102, audio codec;
103, storage medium;
104, output device.
Embodiment
To explain in detail the technical content, structural features, objects and effects of the technical scheme, specific embodiments are described below in conjunction with the accompanying drawings.
Referring to Fig. 3, which is a structural diagram of the multi-audio output device according to another embodiment of the invention. The device is an electronic device with audio processing and audio output functions, such as a tablet computer, a personal computer or a mobile phone. The device includes a processor 101, an audio codec 102, a storage medium 103 and at least one output device 104; the processor 101 is connected with the audio codec 102, and the audio codec 102 is connected with the output device 104. As shown in Fig. 2, the processor may be a CPU, and the CPU and the Audio Codec (i.e. the audio codec) are connected through an I2S audio bus. The output devices include a Bluetooth device (the "bluetooth" in Fig. 2), a headset device (the "earphone" in Fig. 2) and a speaker device (the "loudspeaker" in Fig. 2).
The storage medium 103 is connected with the processor 101 and stores a computer program which, when executed by the processor, implements the following steps:
receiving multiple groups of audio stream data and classifying the audio stream data according to preset configuration information. The preset configuration information includes the output parameters used by each output device when outputting data. The output parameters include one or more of the sample rate, channel count and output precision of the output device. By classifying the audio stream data, the device can determine which output device each group of audio stream data will finally be output by, preparing for the subsequent separation of the multi-audio data.
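As a minimal sketch of this classification step (the stream names, routing table and parameter values are illustrative assumptions, not taken from the patent), grouping incoming streams by their target output device could look like:

```python
from collections import defaultdict

# Hypothetical preset configuration: stream type -> (output device, output parameters)
PRESET_CONFIG = {
    "multimedia":   ("loudspeaker", {"sample_rate": 48000, "channels": 2, "precision": 16}),
    "notification": ("loudspeaker", {"sample_rate": 44100, "channels": 1, "precision": 16}),
    "voip":         ("bluetooth",   {"sample_rate": 16000, "channels": 1, "precision": 16}),
}

def classify(streams):
    """Group incoming (stream_type, samples) pairs by the device that will play them."""
    groups = defaultdict(list)
    for stream_type, samples in streams:
        device, _params = PRESET_CONFIG[stream_type]
        groups[device].append((stream_type, samples))
    return dict(groups)

groups = classify([
    ("multimedia", [0.1, 0.2]),
    ("notification", [0.3]),
    ("voip", [0.4, 0.5]),
])
# multimedia and notification share the loudspeaker; voip is routed to bluetooth
```

Streams that land in the same group here are exactly those that the next step resamples and mixes together.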
Then the audio streams belonging to the same output type are resampled and mixed to obtain mixed data. For example, when three groups of audio stream data — multimedia sound, a notification tone and VoIP voice — occur at the same time and need to be output on one device, the multimedia sound and the notification tone can be set to be output by the speaker device (e.g. the loudspeaker), and the VoIP voice can be set to be output over Bluetooth. Since the multimedia sound and the notification tone belong to the same output type (they are output by the same output device), they can first be resampled so that their sample rate matches the sample rate supported by the speaker device, and then mixed together, ready for the subsequent synthesis and output.
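A minimal illustration of this resample-then-mix step (linear interpolation and averaged additive mixing are common choices; the patent does not prescribe a particular algorithm, so this is only a sketch):

```python
def resample(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler from src_rate to dst_rate."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate        # fractional position in the source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

def mix(*streams):
    """Additive mix of equal-rate streams, averaged to avoid clipping."""
    length = max(map(len, streams))
    return [sum(s[i] if i < len(s) else 0.0 for s in streams) / len(streams)
            for i in range(length)]

# Multimedia at 44.1 kHz and a notification tone at 22.05 kHz,
# both destined for a 44.1 kHz loudspeaker:
multimedia = [0.5, 0.5, 0.5, 0.5]
tone = resample([0.2, 0.2], 22050, 44100)   # upsample to the device rate
mixed = mix(multimedia, tone)
```

A production resampler would use a proper low-pass filter; the point here is only that both streams reach the device's sample rate before being combined into one group of mixed data.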
The mixed data and/or the unmixed audio stream data are then synthesized according to a preset synthesis rule to obtain synthesized audio data, which is transmitted to the audio codec. Among all the audio stream data to be output simultaneously, some output types may have only one audio stream while others have several. After classifying the multiple groups of audio stream data, the following situations may therefore arise: (1) every output type has two or more groups of audio stream data; (2) at least one output type has two or more groups, while at least one output type has only one. Accordingly, the synthesized audio data may be produced from the following combinations: (1) multiple groups of mixed data; (2) one group of mixed data and multiple groups of unmixed audio stream data; (3) multiple groups of mixed data and one group of unmixed audio stream data; (4) multiple groups of unmixed audio stream data.
After the synthesized audio data is transmitted to the audio codec, the audio codec receives it, performs lossless separation according to the separation rule corresponding to the preset synthesis rule, and transmits the separated mixed data or unmixed audio stream data to the corresponding output devices for output. Further, the audio codec is configured to determine the preset synthesis rule according to the output parameters of the output devices and the synthesized audio data, and to determine the corresponding separation rule from the determined preset synthesis rule. The audio codec can learn the output parameters of each output device by reading a hardware output list.
Fig. 4 is a schematic diagram of the preset synthesis rule according to an embodiment of the invention. This preset synthesis rule performs synthesis by extending the output precision: when the output precision of the audio stream data of each output type is the same (e.g. all 16-bit or all 8-bit), the mixed data and/or the unmixed audio stream data are synthesized by extending the output precision according to the integer-multiple relation between their sample rates, obtaining the synthesized audio data.
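One way to read this rule (an illustrative interpretation, not the patent's exact bit layout): two 16-bit streams at the same sample rate can be packed sample-by-sample into 32-bit words, doubling the output precision on the single codec link; the codec later splits each word back into its halves, which makes the separation lossless.

```python
def pack_16bit_pair(stream_a, stream_b):
    """Pack two 16-bit sample streams into one 32-bit stream
    (stream_a in the high half-word, stream_b in the low half-word)."""
    return [((a & 0xFFFF) << 16) | (b & 0xFFFF)
            for a, b in zip(stream_a, stream_b)]

def unpack_16bit_pair(packed):
    """Lossless separation: recover both 16-bit streams from the 32-bit words."""
    a = [(w >> 16) & 0xFFFF for w in packed]
    b = [w & 0xFFFF for w in packed]
    return a, b

packed = pack_16bit_pair([0x1234, 0x5678], [0x9ABC, 0xDEF0])
a, b = unpack_16bit_pair(packed)
# a == [0x1234, 0x5678], b == [0x9ABC, 0xDEF0]: nothing is lost in the round trip
```

Because the packing is a pure bit rearrangement, the separation rule is simply its inverse, matching the "lossless separation" the description requires of the codec.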
Fig. 5 is a schematic diagram of the preset synthesis rule according to another embodiment of the invention. This preset synthesis rule synthesizes the audio so that the synthesized audio data is output at the higher sample rate. For example, the output precision of the synthesized data is the highest output precision, i.e. the maximum output precision among the mixed data and unmixed audio stream data that make up the synthesized audio data; and the channel count of the synthesized audio data is the highest channel count, i.e. the maximum channel count among the mixed data and unmixed audio stream data that make up the synthesized audio data. In other embodiments, the preset synthesis rule can also be customized as actually needed, as long as the sample rate of the synthesized audio data can accommodate the combination of the sample rates of the multiple groups of audio output.
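A sketch of this second rule under a simplifying assumption (one stream at rate R, one at R/2; the 2:1 frame layout below is illustrative only): interleaving frames at the higher rate lets the codec separate the streams back purely by position.

```python
def interleave_2to1(fast, slow):
    """Synthesize one stream at the higher sample rate: for every two
    samples of the fast (rate-R) stream, append one sample of the
    slow (rate-R/2) stream. Frame layout: [f0, f1, s0, f2, f3, s1, ...]."""
    out = []
    for i in range(len(slow)):
        out += [fast[2 * i], fast[2 * i + 1], slow[i]]
    return out

def deinterleave_2to1(combined):
    """Separation rule matching interleave_2to1: undo the frame layout."""
    fast, slow = [], []
    for i in range(0, len(combined), 3):
        fast += combined[i:i + 2]
        slow.append(combined[i + 2])
    return fast, slow

combined = interleave_2to1([1, 2, 3, 4], [10, 20])
fast, slow = deinterleave_2to1(combined)
# fast == [1, 2, 3, 4] and slow == [10, 20] are recovered exactly
```

Other sample-rate ratios would need a different frame layout, which is why the description lets the synthesis rule be customized as long as the combined sample rate can carry all the constituent streams.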
In order to better control the multi-audio output of a single device, in certain embodiments the computer program, when executed by the processor, also implements the following steps: judging whether the device is in multi-audio output mode; if so, synthesizing the mixed data and the unmixed audio stream data according to the preset synthesis rule to obtain synthesized audio data; otherwise, directly outputting the mixed data or the unmixed audio stream data. Whether the device is in multi-audio output mode can be judged by whether the processor has received a corresponding touch command. In short, when the device is in the state of outputting to multiple audio devices simultaneously (i.e. multiple groups of audio stream data need to be output at the same time through the same or different output devices), the audio stream data are synthesized; otherwise no synthesis is performed, and the audio streams are output one by one to the audio codec, which, after receiving each audio stream, transmits it to the corresponding output device for output.
In certain embodiments, the audio codec is configured to judge whether the device is in multi-device output mode; if so, it separates the synthesized audio data according to the separation rule corresponding to the preset synthesis rule and transmits the separated mixed data or unmixed audio stream data to multiple different output devices for output; otherwise, it directly transmits the synthesized audio data to the corresponding output device for output. For example, when the two groups of audio stream data of multimedia sound and notification tone occur at the same time and both can be output by the speaker device (e.g. the loudspeaker), they can first be resampled and mixed, and the resulting mixed data transferred to the audio codec. The audio codec judges that the device is not currently in multi-device output mode (i.e. there is only one output type of audio data), so it does not perform lossless separation on the mixed data but sends it directly to the speaker device for output. In this way the audio codec performs lossless separation of the synthesized audio data only in multi-device output mode, saving device power.
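The codec-side decision just described could be sketched as follows (the device names and the `separate` hook are assumptions for illustration; a real codec would apply the separation rule matching the preset synthesis rule):

```python
def codec_dispatch(payload, multi_device_mode, separate, outputs):
    """Audio-codec side: separate only when in multi-device output mode.

    payload           : synthesized audio data (or plain mixed data)
    multi_device_mode : True when the streams target more than one device
    separate          : separation rule matching the preset synthesis rule
    outputs           : mapping of device name -> output function
    """
    if multi_device_mode:
        # Lossless separation, then fan out to each output device.
        for device, stream in separate(payload).items():
            outputs[device](stream)
    else:
        # Single output type: skip separation entirely and pass through,
        # which is the power saving the description points out.
        next(iter(outputs.values()))(payload)

sent = {}
codec_dispatch(
    payload={"loudspeaker": [1, 2], "bluetooth": [3]},
    multi_device_mode=True,
    separate=lambda p: p,                    # trivial rule for this demo
    outputs={"loudspeaker": lambda s: sent.setdefault("loudspeaker", s),
             "bluetooth":   lambda s: sent.setdefault("bluetooth", s)},
)
```

The same function, called with `multi_device_mode=False`, would send the payload straight to the single device without invoking the separation rule at all.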
Fig. 6 is a flowchart of the multi-audio output method according to an embodiment of the invention. The method is applied to a multi-audio output device; the device includes a processor, an audio codec and at least one output device; the processor is connected with the audio codec, and the audio codec is connected with the output devices. The method includes the following steps:
First, in step S601, the processor receives multiple groups of audio stream data and classifies the audio stream data according to preset configuration information. The preset configuration information includes the output parameters used by each output device when outputting data.
Then, in step S602, the processor resamples and mixes the audio streams belonging to the same output type to obtain mixed data.
Then, in step S603, the processor synthesizes the mixed data and/or the unmixed audio stream data according to a preset synthesis rule to obtain synthesized audio data, and transmits the synthesized audio data to the audio codec. In some embodiments, the method also includes the following steps: the audio codec determines the preset synthesis rule according to the output parameters of the output devices and the synthesized audio data, and determines the corresponding separation rule from the determined preset synthesis rule.
Then, in step S604, the audio codec receives the synthesized audio data, separates it according to the separation rule corresponding to the preset synthesis rule, and transmits the separated mixed data or unmixed audio stream data to the corresponding output devices for output.
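The four steps S601–S604 can be sketched end-to-end as one compact flow (the stream names, routing table and plain additive mix are illustrative assumptions; a real implementation would synthesize and separate according to the preset synthesis rule negotiated with the codec):

```python
def multi_audio_pipeline(streams, route):
    """streams: dict of stream name -> list of samples;
    route: dict of stream name -> target output device.
    Returns dict of device -> samples, as the output devices would receive them."""
    # S601: classify streams by their target output device
    by_device = {}
    for name, samples in streams.items():
        by_device.setdefault(route[name], []).append(samples)
    # S602: mix streams that share a device (simple additive mix for the sketch)
    mixed = {}
    for device, group in by_device.items():
        length = max(map(len, group))
        mixed[device] = [sum(s[i] for s in group if i < len(s)) for i in range(length)]
    # S603: synthesize into one ordered payload for the single codec link
    synthesized = [(device, mixed[device]) for device in sorted(mixed)]
    # S604: codec side separates the payload and fans out to each device
    return {device: samples for device, samples in synthesized}

out = multi_audio_pipeline(
    {"multimedia": [1, 1], "notification": [2], "voip": [5, 5]},
    {"multimedia": "loudspeaker", "notification": "loudspeaker", "voip": "bluetooth"},
)
# out["loudspeaker"] == [3, 1]; out["bluetooth"] == [5, 5]
```

Here the loudspeaker receives the mix of multimedia and notification while Bluetooth receives the VoIP stream untouched, mirroring the example in the embodiments above.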
In order to better control the multi-audio output of a single device, in certain embodiments the method also includes the following steps: the processor judges whether the device is in multi-audio output mode; if so, it synthesizes the mixed data and the unmixed audio stream data according to the preset synthesis rule to obtain synthesized audio data; otherwise, it directly outputs the mixed data or the unmixed audio stream data.
In order to better control the multi-audio output of a single device, in certain embodiments the method includes: the audio codec judges whether the device is in multi-device output mode; if so, it separates the synthesized audio data according to the separation rule corresponding to the preset synthesis rule and transmits the separated mixed data or unmixed audio stream data to multiple different output devices for output; otherwise, it directly transmits the synthesized audio data to the corresponding output device for output.
The invention provides a multi-audio output method and device. The method includes the steps of: the processor receives multiple groups of audio stream data, classifies the audio stream data according to preset configuration information, resamples and mixes the audio streams belonging to the same output type to obtain mixed data, synthesizes the mixed data and/or the unmixed audio stream data according to a preset synthesis rule to obtain synthesized audio data, and transmits the synthesized audio data to the audio codec; the audio codec receives the synthesized audio data, separates it according to the separation rule corresponding to the preset synthesis rule, and transmits the separated mixed data or unmixed audio stream data to the corresponding output devices for output. The invention enables a device with a single audio output interface to output multiple audio streams separately and simultaneously, effectively reducing hardware cost and improving user experience.
It should be noted that although the embodiments above have been described herein, they do not limit the scope of patent protection of the invention. Therefore, changes and modifications made to the embodiments described herein based on the innovative idea of the invention, or equivalent structures or equivalent process transformations made using the contents of the description and drawings of the invention, whether applied directly or indirectly in other related technical fields, are all included within the scope of patent protection of the invention.
Claims (8)
1. A multi-audio output device, characterized in that the device includes a processor, an audio codec, a storage medium and at least one output device; the processor is connected with the audio codec, the processor is connected with the storage medium, and the audio codec is connected with the output devices; the storage medium stores a computer program which, when executed by the processor, implements the following steps:
receiving multiple groups of audio stream data and classifying the audio stream data according to preset configuration information, the preset configuration information including the output parameters used by each output device when outputting data;
resampling and mixing the audio streams belonging to the same output type to obtain mixed data;
synthesizing the mixed data and/or the unmixed audio stream data according to a preset synthesis rule to obtain synthesized audio data, and transmitting the synthesized audio data to the audio codec;
the audio codec being configured to receive the synthesized audio data, separate it according to the separation rule corresponding to the preset synthesis rule, and transmit the separated mixed data or unmixed audio stream data to the corresponding output devices for output.
2. The multi-audio output device of claim 1, characterized in that the computer program, when executed by the processor, also implements the following steps:
judging whether the device is in multi-audio output mode; if so, synthesizing the mixed data and the unmixed audio stream data according to the preset synthesis rule to obtain synthesized audio data; otherwise, directly outputting the mixed data or the unmixed audio stream data.
3. The multi-audio output device of claim 1, characterized in that the audio codec is configured to determine the preset synthesis rule according to the output parameters of the output devices and the synthesized audio data, and to determine the corresponding separation rule from the determined preset synthesis rule.
4. The multi-audio output device of claim 1 or 3, characterized in that the audio codec is configured to judge whether the device is in multi-device output mode; if so, it separates the synthesized audio data according to the separation rule corresponding to the preset synthesis rule and transmits the separated mixed data or unmixed audio stream data to multiple different output devices for output; otherwise, it directly transmits the synthesized audio data to the corresponding output device for output.
5. A multi-audio output method, characterized in that the method is applied to a multi-audio output device; the device includes a processor, an audio codec and at least one output device; the processor is connected with the audio codec, and the audio codec is connected with the output devices; the method includes the following steps:
the processor receives multiple groups of audio stream data and classifies the audio stream data according to preset configuration information, the preset configuration information including the output parameters used by each output device when outputting data;
the processor resamples and mixes the audio streams belonging to the same output type to obtain mixed data;
the processor synthesizes the mixed data and/or the unmixed audio stream data according to a preset synthesis rule to obtain synthesized audio data, and transmits the synthesized audio data to the audio codec;
the audio codec receives the synthesized audio data, separates it according to the separation rule corresponding to the preset synthesis rule, and transmits the separated mixed data or unmixed audio stream data to the corresponding output devices for output.
6. The multi-audio output method of claim 5, characterized in that the method also includes the following steps:
the processor judges whether the device is in multi-audio output mode; if so, it synthesizes the mixed data and the unmixed audio stream data according to the preset synthesis rule to obtain synthesized audio data; otherwise, it directly outputs the mixed data or the unmixed audio stream data.
7. The multi-audio output method of claim 5, characterized in that the method also includes the following steps:
the audio codec determines the preset synthesis rule according to the output parameters of the output devices and the synthesized audio data, and determines the corresponding separation rule from the determined preset synthesis rule.
8. The multi-audio output method of claim 5 or 7, characterized in that the method includes:
the audio codec judges whether the device is in multi-device output mode; if so, it separates the synthesized audio data according to the separation rule corresponding to the preset synthesis rule and transmits the separated mixed data or unmixed audio stream data to multiple different output devices for output; otherwise, it directly transmits the synthesized audio data to the corresponding output device for output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710900894.3A CN107749299B (en) | 2017-09-28 | 2017-09-28 | Multi-audio output method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710900894.3A CN107749299B (en) | 2017-09-28 | 2017-09-28 | Multi-audio output method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107749299A true CN107749299A (en) | 2018-03-02 |
CN107749299B CN107749299B (en) | 2021-07-09 |
Family
ID=61255111
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710900894.3A Active CN107749299B (en) | 2017-09-28 | 2017-09-28 | Multi-audio output method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107749299B (en) |
2017-09-28: CN application CN201710900894.3A, granted as CN107749299B (status: Active)
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2514682A1 * | 2002-12-28 | 2004-07-15 | Samsung Electronics Co., Ltd. | Method and apparatus for mixing audio stream and information storage medium |
US20070083365A1 * | 2005-10-06 | 2007-04-12 | Dts, Inc. | Neural network classifier for separating audio sources from a monophonic audio signal |
CN2912166Y * | 2006-07-05 | 2007-06-13 | Beijing Hanbang Gaoke Digital Technology Co., Ltd. | Multi-route video input device |
CN101361112A * | 2006-08-15 | 2009-02-04 | Broadcom Corporation | Time-warping of decoded audio signal after packet loss |
CN101814289A * | 2009-02-23 | 2010-08-25 | Shuwei Technology (Beijing) Co., Ltd. | Low-bit-rate DRA multi-channel digital audio coding method and system |
CN102171754A * | 2009-07-31 | 2011-08-31 | Panasonic Corporation | Coding device and decoding device |
CN103262158A * | 2010-09-28 | 2013-08-21 | Huawei Technologies Co., Ltd. | Device and method for post-processing a decoded multi-channel audio signal or a decoded stereo signal |
CN103733256A * | 2011-06-07 | 2014-04-16 | Samsung Electronics Co., Ltd. | Audio signal processing method, audio encoding apparatus, audio decoding apparatus, and terminal adopting the same |
US20140210740A1 * | 2013-01-31 | 2014-07-31 | Samsung Electronics Co., Ltd | Portable apparatus having plurality of touch screens and sound output method thereof |
US20150148928A1 * | 2013-11-22 | 2015-05-28 | Qualcomm Incorporated | Audio output device that utilizes policies to concurrently handle multiple audio streams from different source devices |
CN106293595A * | 2015-05-27 | 2017-01-04 | ZTE Corporation | Audio output processing method for a terminal device, and terminal device |
WO2017143003A1 * | 2016-02-18 | 2017-08-24 | Dolby Laboratories Licensing Corporation | Processing of microphone signals for spatial playback |
CN105959438A * | 2016-07-06 | 2016-09-21 | Huizhou TCL Mobile Communication Co., Ltd. | Processing method and system for multi-channel audio output, and mobile phone |
CN106373582A * | 2016-08-26 | 2017-02-01 | Tencent Technology (Shenzhen) Co., Ltd. | Multi-channel audio processing method and device |
CN106816152A * | 2016-12-05 | 2017-06-09 | LeEco Holdings (Beijing) Co., Ltd. | Audio mixing method, device and electronic device |
Non-Patent Citations (2)
Title |
---|
WANG YH: "Research on input and output signals for identification of multimedia sound transmission system", International Symposium on Test Automation and Instrumentation * |
ZHOU DONGMEI; WANG JIANQIN: "Application of multi-channel audio serial ports in audio output solutions", Computer Engineering * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108446092A * | 2018-03-06 | 2018-08-24 | BOE Technology Group Co., Ltd. | Audio output method, audio output apparatus, device and storage medium |
CN108446092B * | 2018-03-06 | 2021-10-08 | BOE Technology Group Co., Ltd. | Audio output method, audio output device, audio output apparatus, and storage medium |
CN110876098A * | 2018-08-31 | 2020-03-10 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Audio processing method and electronic device |
CN110876098B * | 2018-08-31 | 2022-01-11 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Audio processing method and electronic device |
CN109714316A * | 2018-12-03 | 2019-05-03 | Visionvera Information Technology Co., Ltd. | Mixed-audio processing method for video networking, and video networking system |
CN110349584A * | 2019-07-31 | 2019-10-18 | Beijing SoundAI Technology Co., Ltd. | Audio data transmission method, device and speech recognition system |
CN112346700A * | 2020-11-04 | 2021-02-09 | Zhejiang Huachuang Video Technology Co., Ltd. | Audio transmission method, device and computer-readable storage medium |
CN112346700B * | 2020-11-04 | 2023-06-13 | Zhejiang Huachuang Video Technology Co., Ltd. | Audio transmission method, device and computer-readable storage medium |
CN115794025A * | 2023-02-07 | 2023-03-14 | Nanjing SemiDrive Technology Co., Ltd. | Vehicle-mounted audio partition output system and method |
Also Published As
Publication number | Publication date |
---|---|
CN107749299B (en) | 2021-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107749299A (en) | Multi-audio output method and device | |
US11109138B2 (en) | Data transmission method and system, and bluetooth headphone | |
CN109246671B (en) | Data transmission method, device and system | |
US8527258B2 (en) | Simultaneous interpretation system | |
US20090088208A1 (en) | Apparatus having mobile terminal as input/output device of computer and related system and method | |
CN112513981A (en) | Spatial audio parameter merging | |
CN110010139B (en) | Audio input/output method, system and computer readable storage medium | |
CN102474543A (en) | Integrated circuit for routing of audio signals | |
US8391513B2 (en) | Stream synthesizing device, decoding unit and method | |
CN111078930A (en) | Audio file data processing method and device | |
CN106161724A (en) | Audio output control method and device | |
US20060079271A1 (en) | Stereo terminal and method for voice calling using the stereo terminal | |
US20230156404A1 (en) | Audio processing method and apparatus, wireless earphone, and storage medium | |
CN103677728A (en) | Electronic device and audio information sharing method | |
CN103260124A (en) | Audio testing method and system of mobile terminal | |
CN102063908A (en) | Audio data transmission method between PC and mobile phone | |
CN107957908A (en) | Microphone sharing method, device, computer equipment and storage medium |
JP2008276476A (en) | Information processor | |
KR102423827B1 (en) | Transmission apparatus and method for controlling the transmission apparatus thereof | |
CN103957141A (en) | Multi-service emergency communication system | |
CN102598536A (en) | Apparatus and method for reproducing multi-sound channel contents using dlna in mobile terminal | |
JP5163682B2 (en) | Interpreter call system | |
CN101102334B (en) | Method and device for establishing information transmission between terminal device and mobile terminal | |
CN206517484U (en) | Audio and video broadcast directing equipment |
WO2016082579A1 (en) | Voice output method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
CB02 | Change of applicant information |
Address after: 350003 Building 18, No. 89 Software Avenue, Gulou District, Fuzhou City, Fujian Province
Applicant after: Ruixin Microelectronics Co., Ltd
Address before: 350003 Building 18, No. 89 Software Avenue, Gulou District, Fuzhou City, Fujian Province
Applicant before: Fuzhou Rockchips Electronics Co., Ltd.
|
GR01 | Patent grant | |