CN104615689A - Searching method and device - Google Patents

Searching method and device

Info

Publication number
CN104615689A
Authority
CN
China
Prior art keywords
key information
audio data
keyword
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510033132.9A
Other languages
Chinese (zh)
Inventor
姜岩 (Jiang Yan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201510033132.9A priority Critical patent/CN104615689A/en
Publication of CN104615689A publication Critical patent/CN104615689A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/951: Indexing; Web crawling techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the invention provide a search method and device. Key information input by a user is obtained, audio data corresponding to the key information is obtained, and search results that match the key information are output together with the audio data corresponding to the key information. With this technical scheme, the diversity of the content output to the user can be improved when search results are presented.

Description

Searching method and device
[Technical Field]
The present invention relates to the field of Internet applications, and in particular to a search method and device.
[Background]
At present, a search engine performs a search as follows: first, a keyword input by a user or an image uploaded by a user is received; then the keyword, or the recognition result of the image, is used to perform the search and obtain search results that match the keyword or the image; finally, the search results are presented to the user.
It can be seen that in the prior art the obtained search results are always delivered to the user by visual presentation only. When search results are pushed, the type of information output to the user is therefore single and lacks variety, and such monotonous output clearly cannot satisfy the demands of today's users.
[Summary of the Invention]
In view of this, embodiments of the present invention provide a search method and device, which can improve the diversity of the content output to a user when search results are presented.
In one aspect, an embodiment of the present invention provides a search method, comprising:
obtaining key information input by a user;
obtaining audio data corresponding to the key information;
outputting search results that match the key information and outputting the audio data corresponding to the key information.
In the aspect above and any possible implementation thereof, an implementation is further provided in which the key information is an image, and obtaining the audio data corresponding to the key information comprises:
obtaining an index relation between keywords and audio data;
performing image recognition on the image to obtain a keyword corresponding to the image;
searching the index relation according to the keyword corresponding to the image, to obtain the audio data corresponding to the keyword.
In the aspect above and any possible implementation thereof, an implementation is further provided in which the key information is a keyword, and obtaining the audio data corresponding to the key information comprises:
obtaining an index relation between keywords and audio data;
searching the index relation according to the keyword, to obtain the audio data corresponding to the keyword.
In the aspect above and any possible implementation thereof, an implementation is further provided in which outputting the search results that match the key information and outputting the audio data corresponding to the key information comprises:
obtaining the search results that match the key information;
presenting the search results that match the key information;
directly playing the audio data corresponding to the key information, or playing the audio data corresponding to the key information in response to a trigger operation.
In the aspect above and any possible implementation thereof, an implementation is further provided in which playing the audio data corresponding to the key information in response to a trigger operation comprises:
displaying a trigger element on the interface that presents the search results;
receiving a play instruction, the play instruction being triggered when the trigger element is clicked;
playing, according to the play instruction, the audio data corresponding to the key information bound to the trigger element.
In another aspect, an embodiment of the present invention provides a search device, comprising:
an acquiring unit, configured to obtain key information input by a user;
a processing unit, configured to obtain audio data corresponding to the key information;
an output unit, configured to output search results that match the key information and to output the audio data corresponding to the key information.
In the aspect above and any possible implementation thereof, an implementation is further provided in which the key information is an image, and the processing unit is specifically configured to:
obtain an index relation between keywords and audio data;
perform image recognition on the image to obtain a keyword corresponding to the image;
search the index relation according to the keyword corresponding to the image, to obtain the audio data corresponding to the keyword.
In the aspect above and any possible implementation thereof, an implementation is further provided in which the key information is a keyword, and the processing unit is specifically configured to:
obtain an index relation between keywords and audio data;
search the index relation according to the keyword, to obtain the audio data corresponding to the keyword.
In the aspect above and any possible implementation thereof, an implementation is further provided in which the device further comprises a search unit, configured to obtain the search results that match the key information;
and the output unit is specifically configured to:
present the search results that match the key information;
directly play the audio data corresponding to the key information, or play the audio data corresponding to the key information in response to a trigger operation.
In the aspect above and any possible implementation thereof, an implementation is further provided in which, when playing the audio data corresponding to the key information in response to a trigger operation, the output unit is specifically configured to:
display a trigger element on the interface that presents the search results;
receive a play instruction, the play instruction being triggered when the trigger element is clicked;
play, according to the play instruction, the audio data corresponding to the key information bound to the trigger element.
As can be seen from the above technical solutions, the embodiments of the present invention have the following beneficial effects:
In the embodiments of the present invention, while the search results that match the key information are output, the audio data corresponding to the key information can also be output. Information is thus output to the user not only at the visual level but also at the auditory level, so that content can be output to the user in multiple dimensions and the gap in auditory output is filled. This solves the prior-art problem that the information output to the user is of a single type and lacks variety. The technical scheme provided by the embodiments of the present invention can therefore improve the diversity of the content output to the user when search results are presented.
[Description of the Drawings]
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings used in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the search method provided by an embodiment of the present invention;
Fig. 2 is an example of the key information provided by an embodiment of the present invention;
Fig. 3 is a first example of displaying a trigger element on the interface that presents the search results, provided by an embodiment of the present invention;
Fig. 4 is a second example of displaying a trigger element on the interface that presents the search results, provided by an embodiment of the present invention;
Fig. 5 is a functional block diagram of the search device provided by an embodiment of the present invention.
[Detailed Description]
For a better understanding of the technical solutions of the present invention, the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
It should be clear that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The terms used in the embodiments of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention. The singular forms "a", "said" and "the" used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" used herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
Embodiment One
An embodiment of the present invention provides a search method. Please refer to Fig. 1, which is a schematic flowchart of the search method provided by the embodiment of the present invention. As shown in the figure, the method comprises the following steps:
S101: obtain key information input by a user.
S102: obtain audio data corresponding to the key information.
S103: output search results that match the key information and output the audio data corresponding to the key information.
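The following is a minimal, hedged sketch in Python of how steps S101 to S103 could fit together; the helper names, the sample index and the result format are illustrative assumptions rather than the claimed implementation.

```python
# Minimal sketch of S101-S103. The helper functions, the sample index relation
# and the response format are assumptions for illustration only.
from typing import List, Optional

def obtain_key_information(request: dict) -> str:
    # S101: the key information may be a keyword typed into an input box
    # or an uploaded image; only the keyword case is handled here.
    return request["keyword"]

def obtain_audio_data(key_info: str, index_relation: dict) -> Optional[str]:
    # S102: look up the audio data corresponding to the key information
    # in a keyword-to-audio index relation (see Embodiment Three).
    return index_relation.get(key_info)

def output_results(results: List[str], audio_url: Optional[str]) -> dict:
    # S103: output the matching search results together with the audio data;
    # the audio may be played directly or bound to a trigger element.
    return {"results": results, "audio": audio_url}

index_relation = {"giant panda": "audio/panda_cry.mp3"}   # assumed sample index
key_info = obtain_key_information({"keyword": "giant panda"})
print(output_results(["result 1", "result 2"], obtain_audio_data(key_info, index_relation)))
```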
It should be noted that S101 to S103 may be executed by a search device. The device may be located in an application on a local terminal, in a plug-in or software development kit (SDK) or other functional unit of an application on a local terminal, or on a server; the embodiment of the present invention does not specifically limit this.
It can be understood that the application may be a native application (native app) installed on the terminal, or a web application (web app) of a browser on the terminal; the embodiment of the present invention does not limit this.
It should be noted that the terminals involved in the embodiments of the present invention may include, but are not limited to, a personal computer (PC), a personal digital assistant (PDA), a wireless handheld device, a tablet computer, a mobile phone, an MP3 player, an MP4 player, and the like.
Embodiment Two
Based on the search method provided in Embodiment One above, the embodiment of the present invention describes S101 in detail. This step may specifically comprise:
Preferably, in the embodiment of the present invention, the key information input by the user may include, but is not limited to, a keyword or an image.
For example, the key information input by the user may be obtained as follows: the client receives a keyword entered by the user in an input box and sends the received keyword to a search engine; or the client receives an image uploaded by the user and sends the received image to the search engine. The image may be captured on the spot by a camera of the terminal on which the client runs, or it may be an image already stored on the terminal.
In the embodiment of the present invention, the search engine may be located locally at the client or on a server.
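As an illustration of the client-to-search-engine interaction described above, the sketch below assumes a hypothetical HTTP endpoint and field names; the embodiment does not prescribe any particular transport.

```python
# Sketch of a client sending the key information to a server-side search
# engine. The endpoint URL and field names are assumptions.
import requests

SEARCH_URL = "https://search.example.com/s"   # hypothetical endpoint

def search_by_keyword(keyword: str) -> dict:
    # Keyword typed by the user in the client's input box.
    return requests.get(SEARCH_URL, params={"q": keyword}).json()

def search_by_image(image_path: str) -> dict:
    # Image captured by the terminal's camera or already stored on the terminal.
    with open(image_path, "rb") as f:
        return requests.post(SEARCH_URL, files={"image": f}).json()
```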
Embodiment Three
Based on the search method provided in Embodiment One and Embodiment Two above, the embodiment of the present invention describes S102 in detail. This step may specifically comprise:
For example, in the embodiment of the present invention, the audio data corresponding to the key information may be obtained in the following two ways:
First way: when the key information is an image, first obtain the index relation between keywords and audio data; then perform image recognition on the image to obtain the keyword corresponding to the image; finally, search the index relation according to the keyword corresponding to the image, to obtain the audio data corresponding to the keyword.
Image recognition technology may be used to recognize the image and obtain the keyword corresponding to it. For example, please refer to Fig. 2, which is an example of the key information provided by the embodiment of the present invention. As shown in the figure, the image uploaded by the user is a picture of a giant panda; image recognition is performed on the picture and the keyword corresponding to it, namely "giant panda", is obtained.
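As a hedged illustration of this image-recognition step, the sketch below maps an uploaded picture to a keyword with an off-the-shelf ImageNet classifier; the choice of torchvision's ResNet-50 is an assumption made only for the example, since the embodiment merely requires that image recognition yield a keyword such as "giant panda".

```python
# Hedged sketch of the image-recognition step: a generic ImageNet classifier
# maps the uploaded picture to a keyword. Using ResNet-50 is an assumption
# for illustration; any recognizer that returns a keyword would do.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT        # pretrained weights (torchvision >= 0.13)
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                # matching preprocessing pipeline

def image_to_keyword(path: str) -> str:
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    idx = int(logits.argmax(dim=1))
    return weights.meta["categories"][idx]       # e.g. "giant panda"
```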
Second way: when the key information is a keyword, first obtain the index relation between keywords and audio data; then search the index relation according to the keyword, to obtain the audio data corresponding to the keyword.
Preferably, the keywords most frequently input by users over a recent period may be counted, and corresponding audio data may be configured for these keywords to generate the index relation between keywords and audio data. Alternatively, corresponding audio data may be configured for keywords related to recent promotion demands, to generate the index relation between keywords and audio data.
In addition, the generated index relation between keywords and audio data may be stored in an audio database local to the client, or in an audio database on a server; when the index relation is needed, it can be obtained from the audio database. Moreover, the index relation may be generated periodically and added to the audio database, so that the index relation in the audio database is continuously supplemented and updated and the data becomes more complete.
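A possible sketch of this periodic index generation is shown below; the query-log format, the top-N cut-off and the sample audio catalogue are assumptions introduced for illustration.

```python
# Sketch of periodically generating the keyword-to-audio index relation from
# recent query logs and promotion keywords, then merging it into the audio
# database. The log format, cut-off and catalogue are assumptions.
from collections import Counter
from typing import Dict, Iterable

def build_index_relation(recent_queries: Iterable[str],
                         promotion_keywords: Iterable[str],
                         audio_catalogue: Dict[str, str],
                         top_n: int = 100) -> Dict[str, str]:
    frequent = [kw for kw, _ in Counter(recent_queries).most_common(top_n)]
    index = {}
    for kw in list(frequent) + list(promotion_keywords):
        if kw in audio_catalogue:              # only keywords with configured audio
            index[kw] = audio_catalogue[kw]
    return index

def update_audio_database(db: Dict[str, str], new_index: Dict[str, str]) -> None:
    # Supplement and refresh the stored index relation with this cycle's entries.
    db.update(new_index)

audio_catalogue = {"giant panda": "audio/panda_cry.mp3", "sea": "audio/waves.mp3"}
db: Dict[str, str] = {}
update_audio_database(db, build_index_relation(
    ["giant panda", "sea", "giant panda"], ["sea"], audio_catalogue))
print(db)
```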
It can be understood that the audio data corresponding to a keyword refers to audio data related to the attributes of that keyword. For example, for the keyword "giant panda", the corresponding audio data may be a recording of a giant panda's cry; for the keyword "sea", the sound of waves; for the keyword "forest", birdsong; for the keyword "train", the sound of a train whistle.
Preferably, in the index relation between keywords and audio data, each piece of audio data may correspond to at least one keyword, or each piece of audio data may correspond to only one keyword.
If each piece of audio data corresponds to at least one keyword, the index relation can be searched directly according to the keyword to obtain the corresponding audio data. If each piece of audio data corresponds to only one keyword and no corresponding audio data is found in the index relation according to the keyword, a near-synonym dictionary may be searched according to the keyword to obtain at least one near-synonym of the keyword; the near-synonyms are then used to search the index relation to obtain the audio data corresponding to a near-synonym, and that audio data is used as the audio data of the keyword.
For example, if an index relation exists between "wave" and the sound of waves, no corresponding audio data can be obtained when the keyword is "sea". The near-synonym dictionary can therefore be searched according to "sea" to obtain its near-synonyms, such as "wave" and "seashore"; these two near-synonyms are used to search the index relation, the audio data corresponding to "wave", i.e. the sound of waves, is obtained, and it is used as the audio data corresponding to "sea".
As another example, if index relations exist both between "wave" and the sound of waves and between "sea" and the sound of waves, i.e. one piece of audio data has index relations with two keywords, the corresponding audio data can be obtained directly from the index relation according to the keyword "sea".
In addition, if no audio data corresponding to the key information is found in the index relation, only the search results that match the key information are output, and no audio data is played.
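The lookup logic described above, including the near-synonym fallback and the no-audio case, can be sketched as follows; the sample index relation and near-synonym dictionary are illustrative data only.

```python
# Sketch of the S102 lookup: search the index relation by keyword, fall back
# to a near-synonym dictionary on a miss, and return None (no audio played)
# when nothing matches. The dictionaries below are sample data.
from typing import Dict, List, Optional

def find_audio(keyword: str,
               index_relation: Dict[str, str],
               near_synonyms: Dict[str, List[str]]) -> Optional[str]:
    if keyword in index_relation:                      # direct hit
        return index_relation[keyword]
    for synonym in near_synonyms.get(keyword, []):     # near-synonym fallback
        if synonym in index_relation:
            return index_relation[synonym]
    return None                                        # only search results are output

index_relation = {"wave": "audio/waves.mp3"}
near_synonyms = {"sea": ["wave", "seashore"]}
print(find_audio("sea", index_relation, near_synonyms))     # audio/waves.mp3
print(find_audio("train", index_relation, near_synonyms))   # None, so no audio is played
```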
Embodiment Four
Based on the search method provided in Embodiment One and on Embodiment Two and Embodiment Three, the embodiment of the present invention describes S103 in detail. This step may specifically comprise:
Preferably, in the embodiment of the present invention, outputting the search results that match the key information and outputting the audio data corresponding to the key information may include, but is not limited to, the following:
First, obtain the search results that match the key information. Then present the search results that match the key information and, when the presentation of the search results begins or after it has been completed, play the audio data corresponding to the key information obtained in S102.
It can be understood that with the above technical solution, while the search results that match the key information are presented to the user, the audio data related to the key information can also be provided, so that additional information is output to the user at the auditory level. This solves the prior-art problem that the information output to the user is of a single type and lacks variety.
For example, the search results that match the key information may be obtained in, but not limited to, the following two ways:
First way: when the key information is an image, first perform feature extraction on the image using an image description algorithm to obtain image feature information; then encode the image feature information to obtain an encoded value of the image; finally, use the encoded value of the image to perform global or local similarity computation in an image database, and take the images with the highest similarity as the search results that match the image.
For example, the image description algorithm may include, but is not limited to, the scale-invariant feature transform (SIFT) algorithm, a fingerprint algorithm (bundling features), a hash function algorithm, and the like.
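As one hedged illustration of the hash-algorithm option, the sketch below encodes each image with a simple 64-bit average hash and ranks database images by Hamming distance; this is a stand-in for the SIFT, bundling-features or hash variants named above, not the embodiment's prescribed algorithm.

```python
# Hedged sketch of the hash variant: encode each image with a 64-bit average
# hash and rank database images by Hamming distance (smaller = more similar).
from typing import Dict, List, Tuple
from PIL import Image

def average_hash(path: str) -> int:
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:                      # one bit per pixel: above / below the mean
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def most_similar(query_path: str, database: Dict[str, int], top_k: int = 5) -> List[Tuple[str, int]]:
    q = average_hash(query_path)
    distances = [(name, bin(q ^ h).count("1")) for name, h in database.items()]
    return sorted(distances, key=lambda x: x[1])[:top_k]
```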
Second way: when the key information is a keyword, a web page database may be searched according to the keyword to find all web pages containing the keyword; the found web pages are sorted according to a ranking algorithm, and the sorted web pages are used as the search results that match the key information.
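A toy sketch of this keyword branch is given below: an inverted index finds the pages containing the keyword and a trivial term-count score stands in for the ranking algorithm; both the corpus and the scoring rule are assumptions.

```python
# Sketch of the keyword branch: inverted-index lookup plus a stand-in ranking
# function (term frequency). The toy corpus is assumed sample data.
from collections import defaultdict
from typing import Dict, List

def build_inverted_index(pages: Dict[str, str]) -> Dict[str, List[str]]:
    index = defaultdict(list)
    for url, text in pages.items():
        for term in set(text.lower().split()):
            index[term].append(url)
    return index

def search(keyword: str, pages: Dict[str, str], index: Dict[str, List[str]]) -> List[str]:
    hits = index.get(keyword.lower(), [])
    # Rank by how often the keyword occurs on the page (a stand-in rank algorithm).
    return sorted(hits, key=lambda url: pages[url].lower().split().count(keyword.lower()),
                  reverse=True)

pages = {"p1": "giant panda habitat", "p2": "panda panda bamboo"}
print(search("panda", pages, build_inverted_index(pages)))   # ['p2', 'p1']
```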
Illustrate, in the embodiment of the present invention, the method for the voice data that the key message obtained in broadcasting S102 is corresponding can include but not limited to following two kinds:
The first: play-over the voice data that described key message is corresponding.
The second: play voice data corresponding to described key message according to trigger action.
Illustrate, obtain HTML (Hypertext Markup Language) (the HyperText Mark-up Language comprising Search Results from server in client, HTML) file, in order to displaying searching result, and multimedia groupware can be carried by this html file, this multimedia groupware comprises voice data corresponding to key message, and be configured with for representing the attribute that voice data is play automatically, thus realize client and utilize Page Template to play up html file, during to realize the representing of Search Results, just can realize play-overing voice data corresponding to key message.
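A server-side sketch of this "play directly" option follows; representing the multimedia component as an HTML5 audio element with the autoplay attribute is an assumption, since the embodiment only requires a component configured to play automatically.

```python
# Sketch of the server side of the "play directly" option: the HTML file that
# carries the search results also carries an auto-playing multimedia component,
# assumed here to be an HTML5 <audio autoplay> element.
from typing import List

def render_results_page(results: List[str], audio_url: str) -> str:
    items = "".join(f"<li>{r}</li>" for r in results)
    return (
        "<html><body>"
        f"<ul>{items}</ul>"
        f'<audio src="{audio_url}" autoplay></audio>'   # plays as soon as the page renders
        "</body></html>"
    )

print(render_results_page(["Giant panda - encyclopedia", "Panda pictures"], "audio/panda_cry.mp3"))
```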
In addition, the server may send a further HTML file to the client that carries a trigger element. The client can then present this trigger element on an inactive web page, and clicking the trigger element stops the playback of the audio data. The page the client is presenting, for example the page presenting the search results that match the key information, is the active page; in this way, the client can simultaneously present an inactive page on which the trigger element is displayed.
For example, playing the audio data corresponding to the key information in response to a trigger operation may include, but is not limited to, the following:
First, display a trigger element on the interface that presents the search results. Then receive a play instruction, the play instruction being triggered when the trigger element is clicked. Finally, play, according to the play instruction, the audio data corresponding to the key information bound to the trigger element.
Preferably, the trigger element may be displayed in the middle, at the top, or at the bottom of the interface of the search results page.
For example, please refer to Fig. 3, which is a first example of displaying a trigger element on the interface that presents the search results, provided by the embodiment of the present invention. As shown in the figure, the trigger element, such as the speaker button in Fig. 3, is displayed at the top of the interface of the search results page. Related information, such as "click to hear the sound of the waves", may also be displayed at the same time.
As another example, please refer to Fig. 4, which is a second example of displaying a trigger element on the interface that presents the search results, provided by the embodiment of the present invention. As shown in the figure, the search results presented by this search results page may be those that match the image shown in Fig. 2, i.e. images similar to Fig. 2. A floating frame is inserted at the bottom of the interface of the search results page, and the trigger element, i.e. a speaker button, is presented in the floating frame; if the speaker button is triggered, the audio data corresponding to the image shown in Fig. 2, such as the cry of a giant panda, is played.
For example, the two ways of displaying the trigger element shown in Fig. 3 and Fig. 4 may be implemented as follows: the client obtains from the server the HTML file containing the search results in order to display them, and the trigger element may be carried in that HTML file; for example, the server may add a multimedia component containing the trigger element to the HTML file.
For example, the display mode of the trigger element in Fig. 3 may be achieved by configuring the style of the multimedia component as embedded.
For example, the display mode of the trigger element in Fig. 4 may be achieved by configuring the style of the multimedia component as floating.
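The two display styles can be sketched as follows; the CSS used for the floating frame is an assumption, as the embodiment only distinguishes an embedded style (Fig. 3) from a floating style (Fig. 4).

```python
# Sketch of the two styles for the trigger element: "embedded" places the
# speaker button inline at the top of the results page (Fig. 3), "floating"
# puts it in a frame fixed to the bottom (Fig. 4). The CSS is an assumption.
def trigger_element_html(audio_url: str, style: str = "embedded") -> str:
    css = "" if style == "embedded" else \
        ' style="position:fixed;bottom:0;left:0;width:100%;"'   # floating frame
    return (
        f'<div class="audio-trigger"{css}>'
        '<button onclick="document.getElementById(\'kw-audio\').play()">&#128266;</button>'
        " Click to hear the sound"
        f'<audio id="kw-audio" src="{audio_url}"></audio>'
        "</div>"
    )
```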
In the embodiment of the present invention, the trigger element may include, but is not limited to, at least one of an icon, a button, and a mark. For example, the trigger element presented in Fig. 3 and Fig. 4 is a button.
In the embodiment of the present invention, clicking the trigger element may include, but is not limited to, clicking the trigger element with a mouse or tapping the trigger element with a finger.
Embodiment Five
The embodiment of the present invention further provides a device embodiment that implements the steps and methods of the above method embodiments.
Please refer to Fig. 5, which is a functional block diagram of the search device provided by the embodiment of the present invention. As shown in the figure, the device comprises:
an acquiring unit 501, configured to obtain key information input by a user;
a processing unit 502, configured to obtain audio data corresponding to the key information;
an output unit 503, configured to output search results that match the key information and to output the audio data corresponding to the key information.
Preferably, the key information is an image, and the processing unit 502 is specifically configured to:
obtain an index relation between keywords and audio data;
perform image recognition on the image to obtain a keyword corresponding to the image;
search the index relation according to the keyword corresponding to the image, to obtain the audio data corresponding to the keyword.
Preferably, the key information is a keyword, and the processing unit 502 is specifically configured to:
obtain an index relation between keywords and audio data;
search the index relation according to the keyword, to obtain the audio data corresponding to the keyword.
Preferably, the device further comprises a search unit 504, configured to obtain the search results that match the key information;
and the output unit 503 is specifically configured to:
present the search results that match the key information;
directly play the audio data corresponding to the key information, or play the audio data corresponding to the key information in response to a trigger operation.
Preferably, when playing the audio data corresponding to the key information in response to a trigger operation, the output unit 503 is specifically configured to:
display a trigger element on the interface that presents the search results;
receive a play instruction, the play instruction being triggered when the trigger element is clicked;
play, according to the play instruction, the audio data corresponding to the key information bound to the trigger element.
Since the units in this embodiment can perform the method shown in Fig. 1, for the parts not described in detail in this embodiment, reference may be made to the related description of Fig. 1.
The technical solutions of the embodiments of the present invention have the following beneficial effects:
In the embodiments of the present invention, while the search results that match the key information are output, the audio data corresponding to the key information can also be output. Information is thus output to the user not only at the visual level but also at the auditory level, so that content can be output to the user in multiple dimensions and the gap in auditory output is filled. This solves the prior-art problem that the information output to the user is of a single type and lacks variety. The technical scheme provided by the embodiments of the present invention can therefore improve the diversity of the content output to the user when search results are presented.
In addition, the technical scheme provided by the embodiments of the present invention can make the search application more interesting and improve the user's search experience.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, device and method may be implemented in other ways. For example, the device embodiments described above are only schematic; the division of the units is only a logical functional division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections between devices or units through some interfaces, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods described in the embodiments of the present invention. The storage medium includes various media that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A search method, characterized in that the method comprises:
obtaining key information input by a user;
obtaining audio data corresponding to the key information;
outputting search results that match the key information and outputting the audio data corresponding to the key information.
2. The method according to claim 1, characterized in that the key information is an image, and obtaining the audio data corresponding to the key information comprises:
obtaining an index relation between keywords and audio data;
performing image recognition on the image to obtain a keyword corresponding to the image;
searching the index relation according to the keyword corresponding to the image, to obtain the audio data corresponding to the keyword.
3. The method according to claim 1, characterized in that the key information is a keyword, and obtaining the audio data corresponding to the key information comprises:
obtaining an index relation between keywords and audio data;
searching the index relation according to the keyword, to obtain the audio data corresponding to the keyword.
4. The method according to any one of claims 1 to 3, characterized in that outputting the search results that match the key information and outputting the audio data corresponding to the key information comprises:
obtaining the search results that match the key information;
presenting the search results that match the key information;
directly playing the audio data corresponding to the key information, or playing the audio data corresponding to the key information in response to a trigger operation.
5. The method according to claim 4, characterized in that playing the audio data corresponding to the key information in response to a trigger operation comprises:
displaying a trigger element on the interface that presents the search results;
receiving a play instruction, the play instruction being triggered when the trigger element is clicked;
playing, according to the play instruction, the audio data corresponding to the key information bound to the trigger element.
6. A search device, characterized in that the device comprises:
an acquiring unit, configured to obtain key information input by a user;
a processing unit, configured to obtain audio data corresponding to the key information;
an output unit, configured to output search results that match the key information and to output the audio data corresponding to the key information.
7. The device according to claim 6, characterized in that the key information is an image, and the processing unit is specifically configured to:
obtain an index relation between keywords and audio data;
perform image recognition on the image to obtain a keyword corresponding to the image;
search the index relation according to the keyword corresponding to the image, to obtain the audio data corresponding to the keyword.
8. The device according to claim 6, characterized in that the key information is a keyword, and the processing unit is specifically configured to:
obtain an index relation between keywords and audio data;
search the index relation according to the keyword, to obtain the audio data corresponding to the keyword.
9. The device according to any one of claims 6 to 8, characterized in that:
the device further comprises a search unit, configured to obtain the search results that match the key information;
and the output unit is specifically configured to:
present the search results that match the key information;
directly play the audio data corresponding to the key information, or play the audio data corresponding to the key information in response to a trigger operation.
10. The device according to claim 9, characterized in that, when playing the audio data corresponding to the key information in response to a trigger operation, the output unit is specifically configured to:
display a trigger element on the interface that presents the search results;
receive a play instruction, the play instruction being triggered when the trigger element is clicked;
play, according to the play instruction, the audio data corresponding to the key information bound to the trigger element.
CN201510033132.9A 2015-01-22 2015-01-22 Searching method and device Pending CN104615689A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510033132.9A CN104615689A (en) 2015-01-22 2015-01-22 Searching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510033132.9A CN104615689A (en) 2015-01-22 2015-01-22 Searching method and device

Publications (1)

Publication Number Publication Date
CN104615689A true CN104615689A (en) 2015-05-13

Family

ID=53150131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510033132.9A Pending CN104615689A (en) 2015-01-22 2015-01-22 Searching method and device

Country Status (1)

Country Link
CN (1) CN104615689A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101313364A (en) * 2005-11-21 2008-11-26 Koninklijke Philips Electronics N.V. System and method for using content features and metadata of digital images to find related audio accompaniment
US20080069404A1 (en) * 2006-09-15 2008-03-20 Samsung Electronics Co., Ltd. Method, system, and medium for indexing image object
CN101950302A (en) * 2010-09-29 2011-01-19 Li Xiaogeng Method for managing massive music libraries based on a mobile device
CN104050188A (en) * 2013-03-15 2014-09-17 Shanghai Feixun Data Communication Technology Co., Ltd. Music search method and system
CN104281705A (en) * 2014-10-23 2015-01-14 Baidu Online Network Technology (Beijing) Co., Ltd. Searching method and device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107436871A (en) * 2016-05-25 2017-12-05 Beijing Sogou Technology Development Co., Ltd. Data search method, device and electronic equipment
CN106599059A (en) * 2016-11-15 2017-04-26 Guangzhou Kugou Computer Technology Co., Ltd. Method and device for adding songs
CN108334540A (en) * 2017-12-15 2018-07-27 Shenzhen Tencent Computer Systems Co., Ltd. Media information display method and device, storage medium, and electronic device
WO2019114516A1 * 2017-12-15 2019-06-20 Tencent Technology (Shenzhen) Co., Ltd. Media information display method and apparatus, storage medium, and electronic apparatus
CN108334540B (en) * 2017-12-15 2020-11-10 Shenzhen Tencent Computer Systems Co., Ltd. Media information display method and device, storage medium and electronic device
US10998005B2 2017-12-15 2021-05-04 Tencent Technology (Shenzhen) Company Limited Method and apparatus for presenting media information, storage medium, and electronic apparatus
CN109933576A (en) * 2019-03-04 2019-06-25 Baidu Online Network Technology (Beijing) Co., Ltd. Sound effect SDK library establishing method and device, electronic device and computer-readable medium
CN109933576B (en) * 2019-03-04 2021-06-11 Baidu Online Network Technology (Beijing) Co., Ltd. Sound effect SDK library establishing method and device, electronic equipment and computer-readable medium
CN112307249A (en) * 2020-03-05 2021-02-02 Beijing ByteDance Network Technology Co., Ltd. Audio information playing method and device
CN113536026A (en) * 2020-04-13 2021-10-22 Alibaba Group Holding Limited Audio search method, device and equipment
CN113536026B (en) * 2020-04-13 2024-01-23 Alibaba Group Holding Limited Audio search method, device and equipment

Similar Documents

Publication Publication Date Title
WO2022078102A1 (en) Entity identification method and apparatus, device and storage medium
US20210225380A1 (en) Voiceprint recognition method and apparatus
CN104615689A (en) Searching method and device
CN104598502A (en) Method, device and system for obtaining background music information in played video
CN107507615A (en) Interface intelligent interaction control method, device, system and storage medium
CN104750789A (en) Label recommendation method and device
CN105009113A (en) Queryless search based on context
CN109271542A (en) Cover determines method, apparatus, equipment and readable storage medium storing program for executing
US9639633B2 (en) Providing information services related to multimodal inputs
CN104866308A (en) Scenario image generation method and apparatus
CN103714104A (en) Answering questions using environmental context
CN108920649B (en) Information recommendation method, device, equipment and medium
CN103092981B (en) A kind of method and electronic equipment setting up phonetic symbol
CN112395420A (en) Video content retrieval method and device, computer equipment and storage medium
CN111383631A (en) Voice interaction method, device and system
CN103440243A (en) Teaching resource recommendation method and device thereof
CN107862058B (en) Method and apparatus for generating information
CN103823849A (en) Method and device for acquiring entries
US20130212105A1 (en) Information processing apparatus, information processing method, and program
CN104571813A (en) Information displaying method and device
CN103942328A (en) Video retrieval method and video device
CN105335383A (en) Input information processing method and device
CN104598571A (en) Method and device for playing multimedia resource
CN103235773A (en) Method and device for extracting text labels based on keywords
CN103399737B (en) Multi-media processing method based on speech data and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150513