CN105760436A - Audio data processing method and device - Google Patents

Audio data processing method and device

Info

Publication number
CN105760436A
Authority
CN
China
Prior art keywords
data source
voice data
collection
information
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610073552.4A
Other languages
Chinese (zh)
Other versions
CN105760436B (en)
Inventor
任超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201610073552.4A priority Critical patent/CN105760436B/en
Publication of CN105760436A publication Critical patent/CN105760436A/en
Application granted granted Critical
Publication of CN105760436B publication Critical patent/CN105760436B/en
Legal status: Active (granted)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/955: Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F 16/9562: Bookmark management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60: Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/64: Browsing; Visualisation therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)

Abstract

The invention provides an audio data processing method and device. The method comprises: when a collection instruction sent by a terminal is received, obtaining collection information of the audio data corresponding to the collection instruction; when a play instruction sent by the terminal is received, searching for a target data source of the audio data according to the collection information; and playing the audio data according to the target data source. Because the smart speaker obtains the collection information of the audio data when the terminal sends the collection instruction, the corresponding data source can be obtained according to the collection information the next time the audio data is played, which avoids playback failure when no local data source exists and improves functional completeness.

Description

Audio data processing method and device
Technical field
The present invention relates to the field of terminal technology, and in particular to an audio data processing method and device.
Background art
A smart speaker typically lets a user collect (favorite) songs through software on a terminal. Songs can come from many sources, such as music shared by the local device, music on a DLNA (Digital Living Network Alliance) server, music on pluggable devices (for example, a USB flash drive or a Bluetooth device), and network data. Among these sources, network data is the most stable playback source: in general (copyright restrictions aside), a song collected by the user can be found from a corresponding network data source and played again.
However, if the user has previously collected a song whose data source is a USB flash drive or a DLNA server, and that data source is unavailable the next time the song is played (for example, the USB flash drive has been unplugged or the DLNA server is not turned on), playback fails. The functional completeness of existing smart speakers is therefore poor.
Summary of the invention
Embodiments of the present invention provide an audio data processing method and device, to solve the technical problem that existing smart speakers have poor functional completeness.
To solve the above problem, the technical solutions provided by the present invention are as follows.
An embodiment of the present invention provides an audio data processing method, comprising:
when a collection instruction sent by a terminal is received, obtaining collection information of the audio data corresponding to the collection instruction;
when a play instruction sent by the terminal is received, searching for a target data source of the audio data according to the collection information; and
playing the audio data according to the target data source.
An embodiment of the present invention further provides an audio data processing device, comprising:
an acquisition module, configured to obtain collection information of the audio data corresponding to a collection instruction when the collection instruction sent by a terminal is received;
a search module, configured to search for a target data source of the audio data according to the collection information when a play instruction sent by the terminal is received; and
a playback module, configured to play the audio data according to the target data source.
Compared with the prior art, in the audio data processing method and device of these embodiments, when a collection instruction for audio data sent by a terminal is received, the collection information of the audio data corresponding to the collection instruction is obtained; when a play instruction sent by the terminal is received, a target data source of the audio data is searched for according to the collection information, and the audio data is played according to the target data source. Because the smart speaker obtains the collection information of the audio data when the terminal sends the collection instruction, the corresponding data source can be obtained according to the collection information the next time the audio data is played, which avoids playback failure when the local data source no longer exists and improves functional completeness.
Brief description of the drawings
Fig. 1 is a flowchart of the audio data processing method provided by Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the audio data processing method provided by Embodiment 2 of the present invention;
Fig. 3 is a flowchart of the audio data processing method provided by Embodiment 3 of the present invention;
Fig. 4 is a schematic structural diagram of the audio data processing device provided by Embodiment 4 of the present invention;
Fig. 5 is a schematic diagram of a preferred structure of the audio data processing device provided by Embodiment 4 of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Refer to Fig. 1, which is a flowchart of the audio data processing method provided by Embodiment 1 of the present invention.
The audio data processing method of this preferred embodiment includes the following steps.
Step S101: when a collection instruction sent by a terminal is received, obtain the collection information of the audio data corresponding to the collection instruction.
For example, when the user taps the collect (favorite) button for a piece of audio data in software installed on the terminal, the collection instruction is generated, and the terminal then sends the collection instruction to the smart speaker. The collection instruction is used to collect the corresponding audio data, which may be, for example, a song, a recording file, or a broadcast. When the smart speaker receives the collection instruction, it may parse the audio data to obtain the collection information; alternatively, the terminal parses the audio data to obtain the collection information, and the smart speaker then obtains the collection information from the terminal. The terminal may be a device such as a mobile phone or a tablet computer. The collection information may include playback information, identification information, the uniform resource locator (URL) of the audio data, and the like.
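To make the structure of the collection information concrete, the sketch below models it as a small record type. The field names (title, singer, original_url, and so on) are illustrative assumptions rather than names defined by the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlaybackInfo:
    """Playback information parsed when the audio data is collected (illustrative fields)."""
    title: str
    singer: Optional[str] = None
    music_style: Optional[str] = None
    duration_seconds: Optional[int] = None
    audio_format: Optional[str] = None  # e.g. "mp3"

@dataclass
class CollectionInfo:
    """Collection information kept by the smart speaker for one piece of audio data."""
    playback_info: PlaybackInfo
    original_url: Optional[str] = None    # URL parsed from the data source at collection time
    identification: Optional[str] = None  # identification information for matching on the network
```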
Step S102: when a play instruction sent by the terminal is received, search for the target data source of the audio data according to the collection information.
For example, when the user later taps the play button in the software on the terminal, the smart speaker obtains the target data source of the audio data, that is, the file to be played, according to the previously obtained collection information. The target data source can be obtained in several ways: by searching the local database, by following the stored URL link to search the network, or by matching the corresponding audio data on the network according to the identification information. Taking a song as an example, the smart speaker obtains the song file according to the collection information obtained when the song was collected.
Step S103: play the audio data according to the target data source.
For example, the smart speaker plays the audio data using the playback file obtained in step S102.
In the audio data processing method of this preferred embodiment, because the smart speaker obtains the collection information of the audio data when the terminal sends the collection instruction, the corresponding data source can be obtained according to the collection information the next time the audio data is played, which avoids playback failure when the local data source no longer exists and improves functional completeness.
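As a minimal sketch of the flow in steps S101 to S103, the class below stores the collection information when a collection instruction arrives and resolves a target data source when a play instruction arrives. It assumes the CollectionInfo record sketched above; find_target_source and play are hypothetical placeholders for the lookup and playback behaviour described in this embodiment.

```python
class SmartSpeaker:
    """Minimal sketch of Embodiment 1 (steps S101 to S103); not the patent's own implementation."""

    def __init__(self):
        self.collections = {}  # audio id -> CollectionInfo

    def on_collect_instruction(self, audio_id: str, info: "CollectionInfo") -> None:
        # Step S101: obtain and keep the collection information for later playback.
        self.collections[audio_id] = info

    def on_play_instruction(self, audio_id: str) -> None:
        # Step S102: look up the target data source from the stored collection information.
        info = self.collections.get(audio_id)
        if info is None:
            raise KeyError(f"audio {audio_id} has not been collected")
        source = self.find_target_source(info)
        # Step S103: play the audio data from the target data source.
        self.play(source)

    def find_target_source(self, info: "CollectionInfo"):
        raise NotImplementedError  # one possible lookup cascade is sketched after Embodiment 2

    def play(self, source) -> None:
        raise NotImplementedError  # placeholder for the actual audio output
```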
Refer to Fig. 2, which is a flowchart of the audio data processing method provided by Embodiment 2 of the present invention.
The audio data processing method of this preferred embodiment includes the following steps.
Step S201: when a collection instruction sent by a terminal is received, obtain the collection information of the audio data corresponding to the collection instruction.
For example, when the user taps the collect (favorite) button for a piece of audio data in software installed on the terminal, the collection instruction is generated, and the terminal then sends the collection instruction to the smart speaker. The collection instruction is used to collect the corresponding audio data, which may be, for example, a song, a recording file, or a broadcast. When the smart speaker receives the collection instruction, it may parse the audio data to obtain the collection information; alternatively, the terminal parses the audio data to obtain the collection information, and the smart speaker then obtains the collection information from the terminal. After obtaining the collection information, the smart speaker may also store the collection information of the audio data for later use. The terminal may be a device such as a mobile phone or a tablet computer. The collection information may include playback information, the original uniform resource locator (URL) of the audio data, and the like.
Step S202: when a play instruction sent by the terminal is received, determine whether a local data source of the audio data can be found according to the playback information.
For example, the collection information includes the playback information, which may include, but is not limited to, the title, music style, singer, duration, start time, end time, and format of the audio data. When the user taps the play button in the software on the terminal, the smart speaker searches the local database for the corresponding local data source according to the playback information. The local database may include a database storing audio data, a USB flash drive, or the like. If the corresponding playback file is found, step S203 is performed; otherwise, step S204 is performed.
Step S203: if the local data source of the audio data is found, use the local data source as the target data source.
For example, if the smart speaker finds the playback file of the audio data in the local database, it uses the playback file as the target data source.
Step S204: if the local data source of the audio data is not found, determine whether a first data source of the audio data can be found according to the original URL.
For example, the collection information also includes the original URL, that is, the URL parsed from the audio data when the smart speaker collected it. The first data source is the data source determined according to the original URL. When the smart speaker does not find the playback file of the audio data in the local database, it searches for the corresponding data source, that is, the first data source, according to the original URL.
Specifically, before searching, the smart speaker may first determine whether the original URL is empty. If the original URL is not empty, it connects to the original URL over the network to determine whether the corresponding playback file can be obtained. If the playback file is obtained, step S205 is performed; otherwise, step S206 is performed.
Step S205: if the first data source of the audio data is found, use the first data source as the target data source.
For example, if the smart speaker finds the playback file of the audio data via the original URL, it uses the playback file as the target data source.
Step S206: if the first data source of the audio data is not found, determine whether a second data source corresponding to the playback information can be found according to the playback information.
For example, the smart speaker searches network resources for the playback file that best matches the playback information and uses that playback file as the second data source.
Step S207: if the second data source of the audio data is found, use the second data source as the target data source.
For example, if the smart speaker finds a playback file on the network that matches the playback information, it uses that playback file as the target data source.
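The patent does not say how the degree of match in step S206 is computed; the sketch below shows one simple possibility, scoring candidate network entries by how many playback-information fields they share with the stored record and keeping the best one. The candidate dictionary keys are assumptions, and PlaybackInfo is the record sketched in Embodiment 1.

```python
from typing import Optional

def match_second_source(playback_info: "PlaybackInfo",
                        candidates: list[dict]) -> Optional[dict]:
    """Pick the candidate network entry that best matches the playback information, if any."""
    def score(candidate: dict) -> int:
        s = 0
        if candidate.get("title") == playback_info.title:
            s += 2  # the title is weighted highest
        if playback_info.singer and candidate.get("singer") == playback_info.singer:
            s += 1
        if (playback_info.duration_seconds
                and candidate.get("duration") == playback_info.duration_seconds):
            s += 1
        return s

    best = max(candidates, key=score, default=None)
    return best if best is not None and score(best) > 0 else None
```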
If the second data source of the audio data is found, the method further includes:
(1) obtaining a first URL of the second data source; and
(2) replacing the original URL with the first URL and storing it.
For example, when the smart speaker obtains the playback file that matches the playback information, it obtains the URL of that playback file, replaces the stored original URL with this first URL, and stores it.
Step S208: play the audio data according to the target data source.
For example, the smart speaker plays the audio data using the playback file obtained in the above steps.
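Putting steps S202 to S208 together, the function below is a sketch, under stated assumptions, of the three-tier lookup: the local database first, then the original URL recorded at collection time, then a second data source matched on the network, whose URL replaces the stored original URL. The local database is modelled as a dictionary, search_network is a hypothetical helper, and match_second_source is the scoring sketch shown above.

```python
import os
from typing import Optional
from urllib.request import urlopen
from urllib.error import URLError

def find_target_source(info: "CollectionInfo", local_db: dict) -> Optional[str]:
    """Three-tier lookup sketched from steps S202 to S207."""
    # Steps S202/S203: try the local data source first (a dictionary keyed by title here).
    local_path = local_db.get(info.playback_info.title)
    if local_path and os.path.exists(local_path):
        return local_path

    # Steps S204/S205: fall back to the original URL stored at collection time, if it is not empty.
    if info.original_url:
        try:
            with urlopen(info.original_url, timeout=5):
                return info.original_url
        except (URLError, OSError):
            pass  # source unplugged, server not turned on, or dead link

    # Steps S206/S207: match a second data source on the network by playback information.
    candidates = search_network(info.playback_info)  # hypothetical network search helper
    best = match_second_source(info.playback_info, candidates)
    if best:
        # Replacement steps (1) and (2): store the new source's URL in place of the original URL.
        info.original_url = best["url"]
        return best["url"]

    return None
```

A caller would hand the returned path or URL to the playback step; if None comes back, the audio data has no reachable source at all.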
In the audio data processing method of this preferred embodiment, the corresponding collection information is obtained when the audio data is collected. If the local playback file is not found at playback time, the corresponding playback file is searched for in network resources according to the collection information. This prevents playback from failing when the local data source has been removed (for example, unplugged), improves the functional completeness of the smart speaker, and improves the user experience.
Refer to Fig. 3, which is a flowchart of the audio data processing method provided by Embodiment 3 of the present invention.
This embodiment is described in detail by taking a song as the audio data. The audio data processing method of this preferred embodiment includes the following steps.
Step S301: when a collection instruction sent by a terminal is received, obtain the collection information of the song corresponding to the collection instruction.
For example, when the user taps the collect (favorite) button for a song in software installed on the terminal, the collection instruction is generated, and the terminal then sends the collection instruction to the smart speaker. The collection instruction is used to collect the song. When the smart speaker receives the collection instruction, it may parse the song to obtain the collection information and store it; alternatively, the terminal parses the song to obtain and store the collection information, and the smart speaker then obtains the collection information from the terminal. The terminal may be a device such as a mobile phone or a tablet computer. The collection information may include playback information, the uniform resource locator (URL) of the song, and the like.
Step S302: when a play instruction sent by the terminal is received, determine whether a local data source of the song can be found according to the playback information.
For example, the collection information includes the playback information, which may include, but is not limited to, the title, music style, singer, duration, start time, end time, and format of the song. When the user taps the play button in the software on the terminal, the smart speaker searches the local database for the local data source corresponding to the playback information. The local database may include a database storing songs, a USB flash drive, or the like. If the corresponding playback file is found, step S303 is performed; otherwise, step S304 is performed.
Step S303: if the local data source of the song is found, use the local data source as the target data source.
For example, if the smart speaker finds the playback file of the song in the local database, it uses the playback file as the target data source.
Step S304: if the local data source of the song is not found, determine whether a first data source of the song can be found according to the original URL.
For example, the collection information also includes the original URL, that is, the URL parsed from the song when the smart speaker collected it. The first data source is the data source determined according to the original URL. When the smart speaker does not find the playback file of the song in the local database, it searches for the corresponding data source, that is, the first data source, according to the original URL.
Specifically, before searching, the smart speaker may first determine whether the original URL is empty. If the original URL is not empty, it connects to the original URL over the network to determine whether the corresponding playback file can be obtained. If the playback file is obtained, step S305 is performed; otherwise, step S306 is performed.
Step S305: if the first data source of the song is found, use the first data source as the target data source.
For example, if the smart speaker finds the playback file of the song via the original URL, it uses the playback file as the target data source.
Step S306: if the first data source of the song is not found, determine whether a second data source corresponding to the playback information can be found according to the playback information.
For example, the smart speaker searches network resources for the song playback file that best matches the playback information and uses that playback file as the second data source. If the second data source of the song is found, the smart speaker also obtains the URL of that playback file, replaces the original URL of the song with this first URL, and stores it.
Step S307: if the second data source of the song is found, use the second data source as the target data source.
For example, if the smart speaker finds a playback file on the network that matches the playback information, it uses that playback file as the target data source.
Step S308: play the song according to the target data source.
For example, the smart speaker plays the song using the playback file obtained in the above steps.
In the audio data processing method of this preferred embodiment, the corresponding collection information is obtained when the song is collected. If the local playback file is not found at playback time, the corresponding playback file is searched for in network resources according to the collection information. This prevents playback from failing when the local data source has been removed, improves the functional completeness of the smart speaker, and improves the user experience.
Refer to Fig. 4, which is a schematic structural diagram of the audio data processing device provided by Embodiment 4 of the present invention. The audio data processing device 40 of this preferred embodiment includes an acquisition module 41, a search module 42, and a playback module 43.
The acquisition module 41 is configured to obtain the collection information of the audio data corresponding to a collection instruction when the collection instruction sent by a terminal is received.
For example, when the user taps the collect (favorite) button for a piece of audio data in software installed on the terminal, the collection instruction is generated, and the terminal then sends the collection instruction to the smart speaker. The collection instruction is used to collect the corresponding audio data, which may be, for example, a song, a recording file, or a broadcast. When the acquisition module 41 receives the collection instruction, it may parse the audio data to obtain the collection information; alternatively, the terminal parses the audio data to obtain the collection information, and the smart speaker then obtains the collection information from the terminal. After obtaining the collection information, the smart speaker may also store the collection information of the audio data for later use. The terminal may be a device such as a mobile phone or a tablet computer. The collection information may include playback information, the original uniform resource locator (URL) of the audio data, and the like.
The search module 42 is configured to search for the target data source of the audio data according to the collection information when a play instruction sent by the terminal is received.
For example, when the user later taps the play button in the software on the terminal, the search module 42 obtains the target data source of the audio data, that is, the file to be played, according to the previously obtained collection information. The search module 42 can obtain it in several ways: by searching the local database, by following the stored URL link to search the network, or by matching the corresponding audio data on the network according to the identification information. Taking a song as an example, the smart speaker obtains the song file according to the collection information obtained when the song was collected.
The playback module 43 is configured to play the audio data according to the target data source.
For example, the smart speaker plays the audio data using the playback file obtained by the search module 42.
As shown in Fig. 5, the search module includes a first search submodule 421, a second search submodule 422, a third search submodule 423, and a replacement submodule 424.
The first search submodule 421 is configured to determine whether a local data source of the audio data can be found according to the playback information, wherein the collection information includes the playback information;
and, if the local data source of the audio data is found, to use the local data source as the target data source.
For example, the collection information includes the playback information, which may include, but is not limited to, the title, music style, singer, duration, start time, end time, and format of the audio data. When the user taps the play button in the software on the terminal, the first search submodule 421 searches the local database for the corresponding local data source according to the playback information. The local database may include a database storing audio data, a USB flash drive, or the like.
For example, if the first search submodule 421 finds the playback file of the audio data in the local database, it uses the playback file as the target data source.
The second search submodule 422 is configured to, after it is determined according to the playback information whether the local data source of the audio data can be found, and if the local data source of the audio data is not found, determine whether a first data source of the audio data can be found according to the original URL, the first data source being a data source determined according to the original URL, wherein the collection information further includes the original URL;
and, if the first data source of the audio data is found, to use the first data source as the target data source.
For example, the collection information also includes the original URL, that is, the URL parsed from the audio data when the smart speaker collected it. The first data source is the data source determined according to the original URL. When the first search submodule 421 does not find the playback file of the audio data in the local database, the second search submodule 422 searches for the corresponding data source, that is, the first data source, according to the original URL.
Specifically, before the second search submodule 422 searches, it may first determine whether the original URL is empty. If the original URL is not empty, the second search submodule 422 connects to the original URL over the network to determine whether the corresponding playback file can be obtained.
For example, if the second search submodule 422 finds the playback file of the audio data via the original URL, it uses the playback file as the target data source.
The third search submodule 423 is configured to, after it is determined according to the original URL whether the first data source of the audio data can be found, and if the first data source of the audio data is not found, determine whether a second data source corresponding to the playback information can be found according to the playback information;
and, if the second data source of the audio data is found, to use the second data source as the target data source.
For example, the third search submodule 423 searches network resources for the playback file that best matches the playback information and uses that playback file as the second data source.
For example, if the third search submodule 423 finds a playback file on the network that matches the playback information, it uses that playback file as the target data source.
The replacement submodule 424 is configured to obtain a first URL of the second data source when the second data source of the audio data is found,
and to replace the original URL with the first URL and store it.
For example, when the third search submodule 423 obtains the playback file that matches the playback information, the replacement submodule 424 obtains the URL of that playback file, replaces the original URL with the first URL, and stores it.
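To summarise the device of Embodiment 4, the sketch below arranges the acquisition, search, and playback modules as plain classes, with the search module delegating to the lookup cascade sketched earlier; the class and method names are illustrative assumptions, not names used by the patent.

```python
from typing import Optional

class AcquisitionModule:
    """Module 41: obtains and stores the collection information on a collection instruction."""
    def __init__(self, store: dict):
        self.store = store  # audio id -> CollectionInfo

    def on_collect(self, audio_id: str, info: "CollectionInfo") -> None:
        self.store[audio_id] = info


class SearchModule:
    """Module 42: finds the target data source (submodules 421 to 424 collapse into one cascade here)."""
    def __init__(self, local_db: dict):
        self.local_db = local_db

    def find(self, info: "CollectionInfo") -> Optional[str]:
        return find_target_source(info, self.local_db)  # cascade sketched after Embodiment 2


class PlaybackModule:
    """Module 43: plays the audio data from the target data source."""
    def play(self, source: str) -> None:
        print(f"playing {source}")  # placeholder for the real audio pipeline
```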
In a specific implementation, each of the above modules may be implemented as an independent entity, or the modules may be combined in any manner and implemented as one or several entities. For the specific implementation of each of the above modules, refer to the method embodiments above; details are not repeated here.
In the audio data processing device of this preferred embodiment, the corresponding collection information is obtained when the audio data is collected. If the local playback file is not found at playback time, the corresponding playback file is searched for in network resources according to the collection information. This prevents playback from failing when the local data source has been removed, improves the functional completeness of the smart speaker, and improves the user experience.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The audio data processing method and device provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, there will be changes in the specific implementations and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. An audio data processing method, characterized by comprising:
when a collection instruction sent by a terminal is received, obtaining collection information of the audio data corresponding to the collection instruction;
when a play instruction sent by the terminal is received, searching for a target data source of the audio data according to the collection information; and
playing the audio data according to the target data source.
2. The audio data processing method according to claim 1, characterized in that the step of searching for the target data source of the audio data according to the collection information comprises:
determining, according to playback information, whether a local data source of the audio data can be found, wherein the collection information includes the playback information; and
if the local data source of the audio data is found, using the local data source as the target data source.
3. The audio data processing method according to claim 2, characterized in that, after the step of determining, according to the playback information, whether the local data source of the audio data can be found, the method further comprises:
if the local data source of the audio data is not found, determining, according to an original URL, whether a first data source of the audio data can be found, the first data source being a data source determined according to the original URL, wherein the collection information further includes the original URL; and
if the first data source of the audio data is found, using the first data source as the target data source.
4. The audio data processing method according to claim 3, characterized in that, after the step of determining, according to the original URL, whether the first data source of the audio data can be found, the method further comprises:
if the first data source of the audio data is not found, determining, according to the playback information, whether a second data source corresponding to the playback information can be found; and
if the second data source of the audio data is found, using the second data source as the target data source.
5. The audio data processing method according to claim 4, characterized in that, if the second data source of the audio data is found, the method further comprises:
obtaining a first URL of the second data source; and
replacing the original URL with the first URL and storing it.
6. An audio data processing device, characterized by comprising:
an acquisition module, configured to obtain collection information of the audio data corresponding to a collection instruction when the collection instruction sent by a terminal is received;
a search module, configured to search for a target data source of the audio data according to the collection information when a play instruction sent by the terminal is received; and
a playback module, configured to play the audio data according to the target data source.
7. The audio data processing device according to claim 6, characterized in that the search module comprises a first search submodule;
the first search submodule is configured to determine, according to playback information, whether a local data source of the audio data can be found, wherein the collection information includes the playback information; and
if the local data source of the audio data is found, use the local data source as the target data source.
8. The audio data processing device according to claim 7, characterized in that the search module further comprises a second search submodule;
the second search submodule is configured to, after it is determined according to the playback information whether the local data source of the audio data can be found, and if the local data source of the audio data is not found, determine, according to an original URL, whether a first data source of the audio data can be found, the first data source being a data source determined according to the original URL, wherein the collection information further includes the original URL; and
if the first data source of the audio data is found, use the first data source as the target data source.
9. The audio data processing device according to claim 8, characterized in that the search module further comprises a third search submodule;
the third search submodule is configured to, after it is determined according to the original URL whether the first data source of the audio data can be found, and if the first data source of the audio data is not found, determine, according to the playback information, whether a second data source corresponding to the playback information can be found; and
if the second data source of the audio data is found, use the second data source as the target data source.
10. The audio data processing device according to claim 9, characterized in that the search module further comprises a replacement submodule;
the replacement submodule is configured to, when the second data source of the audio data is found, obtain a first URL of the second data source; and
replace the original URL with the first URL and store it.
CN201610073552.4A 2016-02-02 2016-02-02 The processing method and processing device of audio data Active CN105760436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610073552.4A CN105760436B (en) 2016-02-02 2016-02-02 The processing method and processing device of audio data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610073552.4A CN105760436B (en) 2016-02-02 2016-02-02 The processing method and processing device of audio data

Publications (2)

Publication Number Publication Date
CN105760436A 2016-07-13
CN105760436B CN105760436B (en) 2019-07-16

Family

ID=56329633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610073552.4A Active CN105760436B (en) 2016-02-02 2016-02-02 The processing method and processing device of audio data

Country Status (1)

Country Link
CN (1) CN105760436B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021860A (en) * 2006-02-13 2007-08-22 国际商业机器公司 Method and system for invoking an audio hyperlink embedded in a markup document
CN101635826A (en) * 2008-07-21 2010-01-27 中国科学院计算技术研究所 Method for acquiring addresses of network video programs
US9213705B1 (en) * 2011-12-19 2015-12-15 Audible, Inc. Presenting content related to primary audio content
CN102999583A (en) * 2012-11-14 2013-03-27 上海量明科技发展有限公司 Method and system for loading data by using stream media interaction frame
US20150120786A1 (en) * 2013-10-28 2015-04-30 Zoom International S.R.O. Multidimensional data representation
CN104915382A (en) * 2015-05-18 2015-09-16 广东欧珀移动通信有限公司 Music playing method and terminal

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203624A (en) * 2017-05-26 2017-09-26 维沃移动通信有限公司 Song list generation method and mobile terminal
CN108259941A (en) * 2018-03-01 2018-07-06 北京达佳互联信息技术有限公司 Video broadcasting method and device
CN111191071A (en) * 2019-11-12 2020-05-22 上海博泰悦臻电子设备制造有限公司 Vehicle-mounted music collection synchronization system and method
CN111193940A (en) * 2019-12-09 2020-05-22 腾讯科技(深圳)有限公司 Audio playing method and device, computer equipment and computer readable storage medium
CN111193940B (en) * 2019-12-09 2021-07-06 腾讯科技(深圳)有限公司 Audio playing method and device, computer equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN105760436B (en) 2019-07-16

Similar Documents

Publication Publication Date Title
WO2017028624A1 (en) Method and device for processing resources
JP5313931B2 (en) A framework for correlating content on the local network with information on the external network
US7546288B2 (en) Matching media file metadata to standardized metadata
US20040019658A1 (en) Metadata retrieval protocols and namespace identifiers
KR101346731B1 (en) Method and apparatus for synchronizing feed information
US10885107B2 (en) Music recommendation method and apparatus
CN102761623B (en) Resource self-adaptive joins method for down loading, system, data storage server and communication system
CN105760436A (en) Audio data processing method and device
CN103873928A (en) Method, device and application server for playing video
CN101179474A (en) Download method, system and device
CN104572952A (en) Identification method and device for live multi-media files
CN102999359A (en) Mount response method of external storage equipment and electronic equipment
CN111680489B (en) Target text matching method and device, storage medium and electronic equipment
CN103945259A (en) Online video playing method and device
CN101833573A Information processing apparatus and information processing method
CN104750839A (en) Data recommendation method, terminal and server
CN104853251A (en) Online collection method and device for multimedia data
CN105354318A (en) File searching method and device
CN100504852C (en) Method and system for connecting system DVD disc to relative web site
CN101226534B (en) Method, terminal and system for searching relevant document
CN102811167A (en) Methods and apparatuses for a network based on hierarchical name structure
CN102970578A (en) Multimedia information identifying and training method and device
CN102469361B (en) Method for automatically downloading interlude of television program and television
CN101155280A (en) Device, method, and computer program product for structuring digital-content program
CN105578297A (en) Audio and radio file fragment type repeat play method and system at WEB end

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant