CN107423022A - Audio playing method and device - Google Patents

Audio playing method and device

Info

Publication number
CN107423022A
Authority
CN
China
Prior art keywords
audio
voice data
data
audio data
mixed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710661122.9A
Other languages
Chinese (zh)
Inventor
钟波
肖适
刘志明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu XGIMI Technology Co Ltd
Original Assignee
Chengdu XGIMI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu XGIMI Technology Co Ltd filed Critical Chengdu XGIMI Technology Co Ltd
Priority to CN201710661122.9A
Publication of CN107423022A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/162 Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The present invention provides an audio playing method and device, applied to an intelligent terminal provided with a microphone and a loudspeaker. The method includes: before system audio data enter the loudspeaker, mixing the system audio data with first audio data collected by the microphone into second audio data; and sending the second audio data to the loudspeaker for playback. In this way, the system audio and the audio collected by the microphone can be played at the same time.

Description

Audio playing method and device
Technical field
The present invention relates to the technical field of audio information, and in particular to an audio playing method and device.
Background art
Existing intelligent terminals cannot simultaneously play system audio (such as the sound of videos or music stored on the terminal) and audio collected by a microphone. At present, the common practice is to rely on karaoke (K song) software installed on the intelligent terminal to indirectly achieve simultaneous playback of system audio and microphone audio. However, once the user exits or closes the karaoke software, simultaneous playback is no longer possible. This approach is too restrictive and very inconvenient for users.
For example, when a user gives a speech or teaches on site, the user needs to play system audio while talking through the microphone. With the above approach, this can only be achieved through specific karaoke software. On the one hand, the karaoke software may not be installed on the user's intelligent terminal; on the other hand, the system audio must first be imported into the karaoke software, which takes a long time when the system audio is large. The user experience is therefore poor.
Summary of the invention
In view of this, an object of the present invention is to provide an audio playing method and device, so that an intelligent terminal can simultaneously play system audio and audio collected by a microphone.
To achieve the above object, an embodiment of the present invention provides an audio playing method, applied to an intelligent terminal provided with a microphone and a loudspeaker, the method including:
before system audio data enter the loudspeaker, mixing the system audio data with first audio data collected by the microphone into second audio data; and
sending the second audio data to the loudspeaker for playback.
Optionally, in the above method, the loudspeaker is configured to obtain audio data from an audio driver file for playback, and the step of mixing the system audio data with the first audio data collected by the microphone into the second audio data includes:
before the system audio data are written into the audio driver file, mixing the system audio data into third audio data through a mixer; and
mixing the third audio data with the first audio data into the second audio data.
Optionally, in the above method, the step of mixing the third audio data with the first audio data into the second audio data includes:
writing the third audio data and the first audio data into the audio driver file; and
synthesizing the third audio data with the first audio data in the audio driver file to obtain the second audio data.
Optionally, in the above method, the step of mixing the third audio data with the first audio data into the second audio data includes:
before the third audio data are written into the audio driver file, synthesizing the third audio data with the first audio data to obtain the second audio data; and
writing the second audio data into the audio driver file.
Optionally, in the above method, the intelligent terminal is a terminal based on the Android system, and the mixer is AudioFlinger.
An embodiment of the present invention further provides an audio playing device, applied to an intelligent terminal provided with a microphone and a loudspeaker, the device including:
a mixing module, configured to mix the system audio data with first audio data collected by the microphone into second audio data before the system audio data enter the loudspeaker; and
a playing module, configured to send the second audio data to the loudspeaker for playback.
Optionally, in the above device, the loudspeaker is configured to obtain audio data from an audio driver file for playback, and the mixing module includes:
a first mixing submodule, configured to mix the system audio data into third audio data through a mixer before the system audio data are written into the audio driver file; and
a second mixing submodule, configured to mix the third audio data with the first audio data into the second audio data.
Optionally, in the above device, the manner in which the second mixing submodule mixes the third audio data with the first audio data into the second audio data includes:
writing the third audio data and the first audio data into the audio driver file; and
synthesizing the third audio data with the first audio data in the audio driver file to obtain the second audio data.
Optionally, in the above device, the manner in which the second mixing submodule mixes the third audio data with the first audio data into the second audio data includes:
before the third audio data are written into the audio driver file, synthesizing the third audio data with the first audio data to obtain the second audio data; and
writing the second audio data into the audio driver file.
Optionally, the intelligent terminal is a terminal based on the Android system, and the mixer is AudioFlinger.
According to the audio playing method and device provided by the embodiments of the present invention, before the system audio data enter the loudspeaker, the system audio data are mixed with the first audio data collected by the microphone into second audio data, and the second audio data are sent to the loudspeaker for playback. In this way, the intelligent terminal can directly achieve simultaneous playback of the system audio and the audio collected by the microphone.
To make the above objects, features and advantages of the present invention more apparent and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the technical solutions are described below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a block diagram of an intelligent terminal provided by an embodiment of the present invention.
Fig. 2 is a schematic flowchart of an audio playing method provided by an embodiment of the present invention.
Fig. 3 is a schematic diagram of the sub-steps of step S110 shown in Fig. 2.
Fig. 4 is a functional block diagram of an audio playing device provided by an embodiment of the present invention.
Reference numerals: 100 - intelligent terminal; 110 - memory; 120 - processor; 130 - microphone; 140 - loudspeaker; 200 - audio playing device; 210 - mixing module; 211 - first mixing submodule; 212 - second mixing submodule; 220 - playing module.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the accompanying drawings herein, may be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the present invention provided in the accompanying drawings is not intended to limit the claimed scope of the present invention, but merely represents selected embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings. Meanwhile, in the description of the present invention, the terms "first", "second" and the like are only used to distinguish one description from another, and shall not be understood as indicating or implying relative importance.
Fig. 1 is a block diagram of an intelligent terminal 100 provided by an embodiment of the present invention. In this embodiment, the intelligent terminal 100 may be a terminal based on the Android system. The intelligent terminal 100 includes an audio playing device 200, a memory 110, a processor 120, a microphone 130, and a loudspeaker 140.
The memory 110, the processor 120, the microphone 130, and the loudspeaker 140 are directly or indirectly electrically connected to one another to implement data transmission or interaction. For example, these elements may be electrically connected through one or more communication buses or signal lines. The audio playing device 200 includes at least one software functional module that may be stored in the memory 110 in the form of software or firmware, or built into the operating system (OS) of the intelligent terminal 100. The processor 120 is configured to execute the executable modules stored in the memory 110, for example application programs, as well as the software functional modules and computer programs included in the audio playing device 200.
The memory 110 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or the like.
The processor 120 may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, capable of implementing or executing the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor.
The microphone 130 is a transducer that converts sound signals into electrical signals. In this embodiment, it is used to collect sound signals from the environment in which the intelligent terminal 100 is located and convert them into audio data that the intelligent terminal 100 can recognize. The loudspeaker 140 is an electro-acoustic transducer used to convert electrical signals into sound signals.
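As a purely illustrative sketch (the disclosure itself contains no code), the audio data produced by the microphone 130 could be captured on a Linux or Android device through a user-space PCM API such as tinyalsa. The card and device indices, the 48 kHz stereo 16-bit format, and the function below are assumptions made for this sketch, not part of the original disclosure.

```cpp
// Minimal capture sketch using the classic tinyalsa API (assumed available).
// Card/device indices and the audio format are hypothetical.
#include <tinyalsa/asoundlib.h>
#include <cstdint>
#include <vector>

std::vector<int16_t> capture_first_audio_data(unsigned int frames_to_read) {
    pcm_config config{};
    config.channels = 2;                  // stereo capture
    config.rate = 48000;                  // 48 kHz sample rate
    config.period_size = 1024;
    config.period_count = 4;
    config.format = PCM_FORMAT_S16_LE;    // 16-bit little-endian PCM

    // Open capture device 0 on sound card 0 (hypothetical indices).
    pcm* mic = pcm_open(0, 0, PCM_IN, &config);
    if (mic == nullptr || !pcm_is_ready(mic)) {
        return {};                        // capture device not available
    }

    // pcm_read takes a byte count; each frame holds `channels` 16-bit samples.
    std::vector<int16_t> samples(frames_to_read * config.channels);
    if (pcm_read(mic, samples.data(),
                 samples.size() * sizeof(int16_t)) != 0) {
        samples.clear();                  // read failed
    }
    pcm_close(mic);
    return samples;                       // the "first audio data" of step S110
}
```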
It should be understood that the structure shown in Fig. 1 is only illustrative. The intelligent terminal 100 may have more or fewer components than shown in Fig. 1, or a configuration different from that shown in Fig. 1. The components shown in Fig. 1 may be implemented in hardware, software, or a combination thereof.
An existing intelligent terminal 100 can only play system audio and the audio collected by the microphone 130 separately, not simultaneously. The inventors have found through research that some karaoke software can play background music and the audio collected by the microphone 130 at the same time. The background music in the karaoke software is itself a kind of system audio; in essence, the software synthesizes the background music and the human voice collected by the microphone 130 into a single track and plays that track, thereby achieving simultaneous playback of the background music and the voice.
However, the audio data in the intelligent terminal 100 must pass through an underlying audio driver file before entering the loudspeaker 140 for playback, and the audio driver file is exclusive: once the karaoke software has opened it, the system audio can no longer enter it. In other words, the system audio must be imported into the karaoke software before it can be synthesized with the audio collected by the microphone 130. Once the karaoke software is closed, or the user leaves its application interface, the intelligent terminal 100 can no longer achieve the above simultaneous playback, which is a considerable limitation.
The inventors have further found that if the system audio and the audio collected by the microphone 130 are synthesized before the intelligent terminal 100 plays the system audio, the intelligent terminal 100 no longer needs to depend on karaoke software to achieve simultaneous playback. That is, under any circumstances, the intelligent terminal 100 can simultaneously play the system audio and the audio collected by the microphone 130.
To describe the implementation of the embodiments of the present invention in detail, Fig. 2 shows a schematic flowchart of an audio playing method provided by an embodiment of the present invention, applied to the intelligent terminal 100 shown in Fig. 1. The flow and steps shown in Fig. 2 are described in detail below.
Step S110: before system audio data enter the loudspeaker 140, the system audio data and first audio data collected by the microphone 130 are mixed into second audio data.
In this embodiment, the microphone 130 converts the collected sound signal into the first audio data and stores it in the intelligent terminal 100. In general, there may be multiple streams of system audio (for example, sounds generated by the system itself and audio obtained after multimedia decoding). These streams need to be mixed into a single stream of audio data, which is then written into the audio driver file so that the audio driver file delivers the audio data to the loudspeaker 140 for playback. In other words, the loudspeaker 140 actually obtains the audio data from the audio driver file for playback.
Specifically, step S110 may include two sub-steps, step S111 and step S112.
Step S111: before the system audio data are written into the audio driver file, the system audio data are mixed into third audio data through a mixer.
In this embodiment, the intelligent terminal 100 may be a terminal based on the Android system. In that case, the mixer refers to the AudioFlinger service component of the Android audio system. Before being written into the audio driver file, the streams of system audio data in the intelligent terminal 100 are mixed into the third audio data by the AudioFlinger service component.
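For illustration only (AudioFlinger's actual implementation is far more elaborate and is not reproduced here), mixing several system audio streams into a single stream conceptually amounts to summing their PCM samples and clamping the result to the sample range. The function below is a sketch under the assumption that all streams share the same sample rate, channel count, and 16-bit interleaved PCM format.

```cpp
// Conceptual sketch of producing the "third audio data" from several
// system audio streams. This is not AudioFlinger code; equal sample rate,
// channel count, and 16-bit interleaved PCM are assumed.
#include <cstdint>
#include <vector>

std::vector<int16_t> mix_system_streams(
        const std::vector<std::vector<int16_t>>& streams) {
    size_t samples = 0;
    for (const auto& s : streams) {
        if (s.size() > samples) samples = s.size();   // longest stream wins
    }

    std::vector<int16_t> mixed(samples, 0);
    for (size_t i = 0; i < samples; ++i) {
        int32_t acc = 0;                              // widen to avoid overflow
        for (const auto& s : streams) {
            if (i < s.size()) acc += s[i];            // additive mix
        }
        if (acc > INT16_MAX) acc = INT16_MAX;         // hard clipping back into
        if (acc < INT16_MIN) acc = INT16_MIN;         // the 16-bit sample range
        mixed[i] = static_cast<int16_t>(acc);
    }
    return mixed;                                     // the "third audio data"
}
```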
Step S112: the third audio data and the first audio data are mixed into the second audio data.
In this embodiment, the point in time at which the third audio data are mixed with the first audio data may vary. For example, step S112 may be implemented through the following steps:
writing the third audio data and the first audio data into the audio driver file, and synthesizing the third audio data with the first audio data in the audio driver file to obtain the second audio data.
As another example, step S112 may be implemented through the following steps:
synthesizing the third audio data with the first audio data to obtain the second audio data before the third audio data are written into the audio driver file, and then writing the second audio data into the audio driver file.
In other words, the second audio data are ultimately recorded in the audio driver file and delivered by the audio driver file to the loudspeaker 140 for playback, while the synthesis of the second audio data may take place either in the audio driver file or before the first audio data and the third audio data are written into the audio driver file. The embodiments of the present invention do not limit this.
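Whichever of the two variants is used, the synthesis processing of step S112 reduces to combining the two PCM buffers sample by sample. The following is a minimal sketch under the assumptions that both buffers share the same 16-bit PCM format and that a simple additive mix with clipping is acceptable; neither assumption is specified by the disclosure.

```cpp
// Sketch of step S112: combine the mixer output ("third audio data") with
// the microphone capture ("first audio data") into the "second audio data".
// Equal sample rate, channel count, and 16-bit PCM format are assumed.
#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<int16_t> mix_to_second_audio_data(
        const std::vector<int16_t>& third,    // system mix
        const std::vector<int16_t>& first) {  // microphone audio
    const size_t samples = std::max(third.size(), first.size());
    std::vector<int16_t> second(samples);
    for (size_t i = 0; i < samples; ++i) {
        int32_t sys = (i < third.size()) ? third[i] : 0;
        int32_t mic = (i < first.size()) ? first[i] : 0;
        int32_t sum = sys + mic;                     // additive mix
        if (sum > INT16_MAX) sum = INT16_MAX;        // clip instead of wrapping
        if (sum < INT16_MIN) sum = INT16_MIN;
        second[i] = static_cast<int16_t>(sum);
    }
    return second;                                   // the "second audio data"
}
```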
Step S120: the second audio data are sent to the loudspeaker 140 for playback.
In this embodiment, the second audio data are ultimately transferred from the audio driver file to the loudspeaker 140. In implementation, the second audio data may be obtained by synthesis inside the audio driver file, or by synthesis before being written into the audio driver file; this embodiment does not limit this.
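As an illustrative counterpart to step S120 (again, not part of the original disclosure), the mixed buffer could be handed to the playback side of the audio driver through a user-space PCM API such as tinyalsa. The device indices and format below are assumptions; in the method described above, the hand-off happens through the kernel audio driver file from which the loudspeaker 140 obtains its data.

```cpp
// Sketch of feeding the "second audio data" to the playback PCM device so
// the loudspeaker renders it. The classic tinyalsa API is assumed; card and
// device indices and the 48 kHz stereo 16-bit configuration are hypothetical.
#include <tinyalsa/asoundlib.h>
#include <cstdint>
#include <vector>

bool play_second_audio_data(const std::vector<int16_t>& second) {
    pcm_config config{};
    config.channels = 2;
    config.rate = 48000;
    config.period_size = 1024;
    config.period_count = 4;
    config.format = PCM_FORMAT_S16_LE;

    pcm* out = pcm_open(0, 0, PCM_OUT, &config);   // hypothetical card 0, device 0
    if (out == nullptr || !pcm_is_ready(out)) {
        return false;                              // playback device not available
    }

    // pcm_write takes a byte count; 0 indicates success in the classic API.
    int rc = pcm_write(out, second.data(),
                       second.size() * sizeof(int16_t));
    pcm_close(out);
    return rc == 0;
}
```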
As shown in Fig. 4, an embodiment of the present invention further provides an audio playing device 200, applied to the intelligent terminal 100 shown in Fig. 1. The device includes a mixing module 210 and a playing module 220.
The mixing module 210 is configured to mix the system audio data with the first audio data collected by the microphone 130 into second audio data before the system audio data enter the loudspeaker 140.
In this embodiment, for the description of the mixing module 210, reference may be made to the detailed description of step S110 shown in Fig. 2; that is, step S110 may be performed by the mixing module 210.
The playing module 220 is configured to send the second audio data to the loudspeaker 140 for playback.
In this embodiment, for the description of the playing module 220, reference may be made to the detailed description of step S120 shown in Fig. 2; that is, step S120 may be performed by the playing module 220.
Optionally, in this embodiment, the loudspeaker 140 is configured to obtain audio data from the audio driver file for playback, and the mixing module 210 may include a first mixing submodule 211 and a second mixing submodule 212.
The first mixing submodule 211 is configured to mix the system audio data into third audio data through a mixer before the system audio data are written into the audio driver file.
The intelligent terminal 100 may be a terminal based on the Android system, and the mixer may be the AudioFlinger service component of the Android audio system.
In this embodiment, for the description of the first mixing submodule 211, reference may be made to the detailed description of step S111 shown in Fig. 3; that is, step S111 may be performed by the first mixing submodule 211.
The second mixing submodule 212 is configured to mix the third audio data with the first audio data into the second audio data.
In this embodiment, for the description of the second mixing submodule 212, reference may be made to the detailed description of step S112 shown in Fig. 3; that is, step S112 may be performed by the second mixing submodule 212.
Optionally, in one embodiment, the manner in which the second mixing submodule 212 mixes the third audio data with the first audio data into the second audio data may include: writing the third audio data and the first audio data into the audio driver file, and synthesizing the third audio data with the first audio data in the audio driver file to obtain the second audio data.
Optionally, in another embodiment, the manner in which the second mixing submodule 212 mixes the third audio data with the first audio data into the second audio data may include: synthesizing the third audio data with the first audio data to obtain the second audio data before the third audio data are written into the audio driver file, and then writing the second audio data into the audio driver file.
In summary, according to the audio playing method and device provided by the embodiments of the present invention, before the system audio data enter the loudspeaker 140, the system audio data and the first audio data collected by the microphone 130 are mixed into second audio data, and the second audio data are sent to the loudspeaker 140 for playback. In this way, the intelligent terminal 100 can directly achieve simultaneous playback of the system audio and the audio collected by the microphone 130. In any scenario, the intelligent terminal 100 can simultaneously play the system audio and the audio collected by the microphone 130, without being restricted to karaoke software.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may also be implemented in other manners. The device embodiments described above are only illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show possible architectures, functions, and operations of devices, methods, and computer program products according to multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two consecutive blocks may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, relational terms such as "first" and "second" are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be readily conceived by a person familiar with the technical field within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.

Claims (10)

  1. An audio playing method, applied to an intelligent terminal provided with a microphone and a loudspeaker, characterized in that the method comprises:
    before system audio data enter the loudspeaker, mixing the system audio data with first audio data collected by the microphone into second audio data; and
    sending the second audio data to the loudspeaker for playback.
  2. The method according to claim 1, characterized in that the loudspeaker is configured to obtain audio data from an audio driver file for playback, and the step of mixing the system audio data with the first audio data collected by the microphone into the second audio data comprises:
    before the system audio data are written into the audio driver file, mixing the system audio data into third audio data through a mixer; and
    mixing the third audio data with the first audio data into the second audio data.
  3. The method according to claim 2, characterized in that the step of mixing the third audio data with the first audio data into the second audio data comprises:
    writing the third audio data and the first audio data into the audio driver file; and
    synthesizing the third audio data with the first audio data in the audio driver file to obtain the second audio data.
  4. The method according to claim 2, characterized in that the step of mixing the third audio data with the first audio data into the second audio data comprises:
    before the third audio data are written into the audio driver file, synthesizing the third audio data with the first audio data to obtain the second audio data; and
    writing the second audio data into the audio driver file.
  5. The method according to any one of claims 2 to 4, characterized in that the intelligent terminal is a terminal based on the Android system, and the mixer is AudioFlinger.
  6. An audio playing device, applied to an intelligent terminal provided with a microphone and a loudspeaker, characterized in that the device comprises:
    a mixing module, configured to mix the system audio data with first audio data collected by the microphone into second audio data before the system audio data enter the loudspeaker; and
    a playing module, configured to send the second audio data to the loudspeaker for playback.
  7. The device according to claim 6, characterized in that the loudspeaker is configured to obtain audio data from an audio driver file for playback, and the mixing module comprises:
    a first mixing submodule, configured to mix the system audio data into third audio data through a mixer before the system audio data are written into the audio driver file; and
    a second mixing submodule, configured to mix the third audio data with the first audio data into the second audio data.
  8. The device according to claim 7, characterized in that the manner in which the second mixing submodule mixes the third audio data with the first audio data into the second audio data comprises:
    writing the third audio data and the first audio data into the audio driver file; and
    synthesizing the third audio data with the first audio data in the audio driver file to obtain the second audio data.
  9. The device according to claim 7, characterized in that the manner in which the second mixing submodule mixes the third audio data with the first audio data into the second audio data comprises:
    before the third audio data are written into the audio driver file, synthesizing the third audio data with the first audio data to obtain the second audio data; and
    writing the second audio data into the audio driver file.
  10. The device according to any one of claims 7 to 9, characterized in that the intelligent terminal is a terminal based on the Android system, and the mixer is AudioFlinger.
CN201710661122.9A 2017-08-04 2017-08-04 Audio frequency playing method and device Pending CN107423022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710661122.9A CN107423022A (en) 2017-08-04 2017-08-04 Audio frequency playing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710661122.9A CN107423022A (en) 2017-08-04 2017-08-04 Audio frequency playing method and device

Publications (1)

Publication Number Publication Date
CN107423022A true CN107423022A (en) 2017-12-01

Family

ID=60436494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710661122.9A Pending CN107423022A (en) 2017-08-04 2017-08-04 Audio frequency playing method and device

Country Status (1)

Country Link
CN (1) CN107423022A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101834929A (en) * 2009-03-13 2010-09-15 深圳富泰宏精密工业有限公司 Audio playing system and method
CN102932567A (en) * 2012-11-19 2013-02-13 东莞宇龙通信科技有限公司 Terminal and audio processing method
CN106201421A (en) * 2016-06-28 2016-12-07 维沃移动通信有限公司 A kind of terminal and audio-frequency processing method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101834929A (en) * 2009-03-13 2010-09-15 深圳富泰宏精密工业有限公司 Audio playing system and method
US20100235661A1 (en) * 2009-03-13 2010-09-16 Foxconn Communication Technology Corp. Electronic device and power management method for audio control system thereof
CN102932567A (en) * 2012-11-19 2013-02-13 东莞宇龙通信科技有限公司 Terminal and audio processing method
CN106201421A (en) * 2016-06-28 2016-12-07 维沃移动通信有限公司 A kind of terminal and audio-frequency processing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邓凡平 (Deng Fanping): "深入理解Android: 卷I" [Understanding Android, Vol. I], 30 September 2011, 机械工业出版社 (China Machine Press) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108829254A (en) * 2018-06-21 2018-11-16 广东小天才科技有限公司 A kind of implementation method, system and relevant device that microphone is interacted with user terminal

Similar Documents

Publication Publication Date Title
CN105847939A Bullet screen playing method, device and system
CN108462895A Sound effect processing method, device and machine-readable medium
CN103345376B Digital audio signal volume monitoring method
CN108564966A Audio testing method and equipment, and device with storage function
CN104918016B System for synchronized reproduction of multiple kinds of multimedia information
CN106851162A Video recording method and device
CN105096981A Multi-channel sound playing method, device and system
CN106131472A Video recording method and mobile terminal
CN103474082A Multi-microphone vocal accompaniment marking system and method thereof
CN110111613A Audio processing method and system based on interactive teaching scenarios
CN106504759B Mixed audio processing method and terminal device
CN103942247B Information providing method and device for multimedia resources
CN107423022A Audio playing method and device
CN102141957A Auxiliary test method, device and system for a remote real machine
CN111796794A Voice data processing method and system, and virtual machine
CN102693722A Voice recognition method, voice recognition device and digital television
CN108206886A Audio playing method, device and terminal
CN106571108A Advertisement player with voice interaction function
CN110444233A Audio receiving and playback method and system for digital audio-visual entertainment equipment
CN110491355A Interactive electronic organ practice system and electronic organ
CN106911978A Voice signal re-importing for smart device earphones
CN103873935A Data processing method and device
CN103399724B DAB loudness measurement card
CN105278959B Control display method, device and terminal equipment
CN107911740A Sound collection method and device based on video playback

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2017-12-01)