CN104239442A - Method and device for representing search results - Google Patents

Method and device for representing search results

Info

Publication number
CN104239442A
CN104239442A, CN104239442B, CN201410439870.9A
Authority
CN
China
Prior art keywords
search
result
content
results
sound result
Prior art date
Legal status
Granted
Application number
CN201410439870.9A
Other languages
Chinese (zh)
Other versions
CN104239442B (en)
Inventor
谷铁锋
郭金勇
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201410439870.9A priority Critical patent/CN104239442B/en
Publication of CN104239442A publication Critical patent/CN104239442A/en
Application granted granted Critical
Publication of CN104239442B publication Critical patent/CN104239442B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63 Querying
    • G06F16/638 Presentation of query results

Abstract

The invention provides a method and a device for representing search results. The method for representing the search results comprises the following steps: receiving search content input by a user; obtaining search results according to the search content, wherein the search results contain voice results; and representing the search results, wherein the voice results are played as speech. The method can provide richer search results.

Description

Method and device for presenting search results
Technical field
The present invention relates to the field of communication technology, and in particular to a method and device for presenting search results.
Background art
People obtain needed information through search engines. The search results obtained are usually in textual form; even when the user searches by voice input, the speech is first converted to text, and the text is then used for retrieval to obtain the search results. The search results provided by this way of searching are too limited in form.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, one object of the present invention is to propose a method for presenting search results that can provide search results in richer forms.
Another object of the present invention is to propose a device for presenting search results.
To achieve the above object, an embodiment according to the first aspect of the present invention proposes a method for presenting search results, comprising: receiving search content input by a user; obtaining search results according to the search content, wherein the search results comprise a voice result; and presenting the search results, wherein the voice result is played as speech.
By including a voice result in the search results, the method for presenting search results proposed by the embodiment of the first aspect of the present invention can provide search results in richer forms.
To achieve the above object, an embodiment according to the second aspect of the present invention proposes a device for presenting search results, comprising: a receiving module configured to receive search content input by a user; an acquisition module configured to obtain search results according to the search content, wherein the search results comprise a voice result; and a presentation module configured to present the search results, wherein the voice result is played as speech.
By including a voice result in the search results, the device for presenting search results proposed by the embodiment of the second aspect of the present invention can provide search results in richer forms.
Additional aspects and advantages of the present invention will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a method for presenting search results proposed by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a method for presenting search results proposed by another embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a device for presenting search results proposed by another embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary, are intended only to explain the present invention, and should not be construed as limiting the present invention. On the contrary, the embodiments of the present invention cover all changes, modifications, and equivalents falling within the spirit and scope of the appended claims.
Fig. 1 is a schematic flowchart of a method for presenting search results proposed by an embodiment of the present invention. The method comprises:
S11: receiving search content input by a user.
Optionally, the search content may be in textual form, in speech form, or a picture.
S12: obtaining search results according to the search content, wherein the search results comprise a voice result.
This may specifically comprise:
sending the search content to a server side, so that the server side searches in a voice search server and a text search server to obtain search results; and
receiving the search results sent by the server side.
In the related art, the server side searches only in a text search server; even when the user searches by voice input, the speech is converted to text and the search is then performed in the text search server.
In this embodiment, however, the search is performed not only in the text search server but also in a voice search server, so as to obtain a voice result.
When the user inputs speech, the client may upload the speech information to the server, the server continuously recognizes the user's input and returns the recognized text to the client, the client may present the recognition result and, upon detecting that the user has finished the input, initiate a search to the search server, and the search server obtains search results from the voice search server and the text search server respectively.
Specifically, when the search content is voice-input content, sending the search content to the server side so that the server side searches in the voice search server and the text search server to obtain search results comprises:
sending the voice-input content to the server side, and receiving from the server side the result of text recognition performed on the voice-input content; and
sending the text recognition result to the server side, so that the server side obtains, from the voice search server, a voice result matching the text recognition result, obtains, from the text search server, a text result matching the text recognition result, and ranks the voice result before the text result to obtain the search results. A server-side sketch of this flow is given below.
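As a rough illustration only, the server-side flow just described might look like the following sketch; the function names, the two backend stubs, and the simple concatenation-based ranking are assumptions made here, not the patent's actual implementation.

```python
# Minimal sketch of the server-side fan-out described above; all names and
# return shapes are illustrative assumptions.

def search_voice_server(query_text):
    # Placeholder for querying the voice search server (audio/voice results).
    return [{"type": "voice", "title": query_text, "media_url": "placeholder"}]

def search_text_server(query_text):
    # Placeholder for querying the text search server (ordinary text results).
    return [{"type": "text", "title": query_text, "snippet": "placeholder"}]

def handle_search(recognized_text):
    """Fan the recognized query out to both backends and rank voice results
    before text results, as in the embodiment above."""
    voice_results = search_voice_server(recognized_text)
    text_results = search_text_server(recognized_text)
    return voice_results + text_results  # voice results come first
```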
Optionally, the voice result comprises at least one of the following:
a voice response result corresponding to the search content;
an audio result corresponding to the search content;
a video result corresponding to the search content;
an e-book voice result corresponding to the search content.
For example, when the user inputs "Beijing weather" by voice, the search results may include today's weather conditions in Beijing in speech form, or the search results may include the music corresponding to a song title input by the user. A possible data model for these categories is sketched below.
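Purely as an illustration, the voice-result categories listed above could be modeled as a small enumeration and record; the type and field names below are assumptions, not terms defined by the patent.

```python
from dataclasses import dataclass
from enum import Enum

class VoiceResultKind(Enum):
    VOICE_RESPONSE = "voice_response"  # spoken answer, e.g. a weather report
    AUDIO = "audio"                    # e.g. a song matching the query
    VIDEO = "video"                    # video content matching the query
    EBOOK_VOICE = "ebook_voice"        # spoken e-book content

@dataclass
class VoiceResult:
    kind: VoiceResultKind
    title: str
    media_url: str                     # location of the playable audio/video
```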
S13: presenting the search results, wherein the voice result is played as speech.
Playing the voice result as speech comprises:
when the search content is voice-input content, automatically playing the voice result after the voice result is obtained; or
when the search content is not voice-input content, receiving a trigger instruction from the user after the voice result is obtained, and playing the voice result according to the trigger instruction.
Optionally, the search results are presented on a search result page, and an icon for triggering the voice result is provided on the search result page in correspondence with the voice result. Receiving the trigger instruction from the user and playing the voice result according to the trigger instruction comprises:
playing the voice result when the user clicks the icon.
For example, when the user inputs "Beijing weather" by voice, the weather conditions may be played automatically once they are obtained. Alternatively, when the user inputs "Beijing weather" as text, an icon, for example a horn-shaped icon, may be displayed next to the weather conditions once they are obtained, and the weather conditions are played after the user clicks the icon. A client-side sketch of this branching is given below.
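A minimal client-side sketch of this auto-play versus click-to-play branching follows; the play_audio, render_icon, and show_text callbacks are hypothetical helpers supplied by the caller, not APIs described in the patent.

```python
def present_results(results, input_was_voice, play_audio, render_icon, show_text):
    """Present search results: auto-play the voice result for voice queries,
    otherwise attach a click-to-play icon (e.g. a horn-shaped icon)."""
    for result in results:
        if result.get("type") == "voice":
            if input_was_voice:
                play_audio(result["media_url"])  # play immediately
            else:
                # Render an icon whose click handler plays the voice result.
                render_icon(result, on_click=lambda r=result: play_audio(r["media_url"]))
        else:
            show_text(result)  # ordinary text result displayed as usual
```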
Optionally, when the user inputs in a dialect, the voice result may be played in the corresponding dialect; for example, when the user inputs "Beijing weather" in Cantonese, the obtained weather conditions are played back in Cantonese.
By including a voice result in the search results, this embodiment can provide search results in richer forms. In addition, since the search also covers the voice search server, more accurate and targeted search results can be provided, improving the user experience.
Fig. 2 is a schematic flowchart of a method for presenting search results proposed by another embodiment of the present invention. The method comprises:
S21: a search engine receives a search term input by the user by voice.
For example, a voice input button, such as a microphone-shaped button, is provided in the search box of the search engine. After the user clicks the voice input button, a voice input interface is entered, and after the user speaks into the voice input interface, the search term is captured as voice input.
For example, the user says: "Beijing weather".
S22: the search engine sends the search term to the server side.
Specifically, the search engine may upload the speech information to the server side, and the server side continuously recognizes the speech input by the user and returns the recognized text to the search engine; the search engine may continuously detect whether the user has finished the input and, once the input is complete, send the recognized text corresponding to the whole search term to the server side. A sketch of this streaming hand-off is given below.
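One way to picture this streaming hand-off is the sketch below; the chunked audio, the injected recognizer, and the end-of-speech test are assumptions made for illustration, not details specified by the patent.

```python
def stream_voice_query(audio_chunks, recognize_chunk, is_end_of_speech):
    """Feed audio to the recognizer chunk by chunk, showing partial transcripts,
    and return the final transcript once the end of speech is detected."""
    transcript = ""
    for chunk in audio_chunks:
        transcript = recognize_chunk(chunk)        # recognizer returns the text so far
        print("partial transcript:", transcript)   # live feedback shown to the user
        if is_end_of_speech(chunk):                # e.g. trailing silence detected
            break
    return transcript                              # then submitted to the search backend
```

The returned transcript is what step S23 would then search for in both backends.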
S23: the server side obtains a matching voice result and a matching text result from the voice search server and the text search server respectively, and ranks the voice result before the text result to obtain the search results.
For example, when searching for "Beijing weather", a voice result such as today's weather conditions in Beijing may be obtained from the voice search server, and text results such as the weather conditions in Beijing over recent days, expressed in words and/or pictures, may be obtained from the text search server.
After the voice result and the text result are obtained, they may be sorted, for example with the voice result ranked first.
S24: the server side sends the search results, which include the voice result and the text result, to the search engine.
S25: the search engine automatically plays the voice result and displays the text result below the voice result.
Since the voice result is ranked first, it may be displayed above the text result when the results are presented; for example, the first entry is the voice result, and the text result follows the voice result.
For example, after today's weather conditions in Beijing are obtained, they may be played automatically; that is, they are played without the user clicking the trigger button corresponding to the voice result, such as a horn-shaped button. When the user wants to listen again, the trigger button can be clicked to play the voice result once more.
Optionally, when the user inputs in a dialect, the voice result may be played in the corresponding dialect; for example, when the user inputs "Beijing weather" in Cantonese, the obtained weather conditions are played back in Cantonese.
By including a voice result in the search results, this embodiment can provide search results in richer forms. In addition, since the search also covers the voice search server, more accurate and targeted search results can be provided, improving the user experience. Furthermore, when the input is in a dialect, the result can be played back in that dialect, meeting the user's personalized needs and providing a differentiated service.
Fig. 3 is a schematic structural diagram of a device for presenting search results proposed by another embodiment of the present invention. The device 30 comprises a receiving module 31, an acquisition module 32, and a presentation module 33.
The receiving module 31 is configured to receive search content input by a user.
Optionally, the search content may be in textual form, in speech form, or a picture.
The acquisition module 32 is configured to obtain search results according to the search content, wherein the search results comprise a voice result.
Optionally, the acquisition module 32 is specifically configured to:
send the search content to a server side, so that the server side searches in a voice search server and a text search server to obtain search results; and
receive the search results sent by the server side.
In the related art, the server side searches only in a text search server; even when the user searches by voice input, the speech is converted to text and the search is then performed in the text search server.
In this embodiment, however, the search is performed not only in the text search server but also in the voice search server, so as to obtain a voice result.
When the user inputs speech, the client may upload the speech information to the server, the server continuously recognizes the user's input and returns the recognized text to the client, the client may present the recognition result and, upon detecting that the user has finished the input, initiate a search to the search server, and the search server obtains search results from the voice search server and the text search server respectively.
Optionally, when the search content is voice-input content, the acquisition module is further specifically configured to:
send the voice-input content to the server side, and receive from the server side the result of text recognition performed on the voice-input content; and
send the text recognition result to the server side, so that the server side obtains, from the voice search server, a voice result matching the text recognition result, obtains, from the text search server, a text result matching the text recognition result, and ranks the voice result before the text result to obtain the search results.
Optionally, the voice result comprises at least one of the following:
a voice response result corresponding to the search content;
an audio result corresponding to the search content;
a video result corresponding to the search content;
an e-book voice result corresponding to the search content.
For example, when the user inputs "Beijing weather" by voice, the search results may include today's weather conditions in Beijing in speech form, or the search results may include the music corresponding to a song title input by the user.
The presentation module 33 is configured to present the search results, wherein the voice result is played as speech.
Optionally, the presentation module 33 is specifically configured to:
when the search content is voice-input content, automatically play the voice result after the voice result is obtained; or
when the search content is not voice-input content, receive a trigger instruction from the user after the voice result is obtained, and play the voice result according to the trigger instruction.
Optionally, the search results are presented on a search result page, and an icon for triggering the voice result is provided on the search result page in correspondence with the voice result. The presentation module 33 is further specifically configured to:
play the voice result when the user clicks the icon.
For example, when the user inputs "Beijing weather" by voice, the weather conditions may be played automatically once they are obtained. Alternatively, when the user inputs "Beijing weather" as text, an icon, for example a horn-shaped icon, may be displayed next to the weather conditions once they are obtained, and the weather conditions are played after the user clicks the icon.
Optionally, when the search content is voice content in a dialect, the presentation module 33 is specifically configured to:
play the voice result in the dialect.
For example, when the user inputs "Beijing weather" in Cantonese, the obtained weather conditions are played back in Cantonese.
Optionally, the presentation module 33 is specifically configured to:
display the voice result above the text result.
Since the voice result is ranked first, it may be displayed above the text result when the results are presented; for example, the first entry is the voice result, and the text result follows the voice result.
By including a voice result in the search results, this embodiment can provide search results in richer forms. In addition, since the search also covers the voice search server, more accurate and targeted search results can be provided, improving the user experience. Furthermore, when the input is in a dialect, the result can be played back in that dialect, meeting the user's personalized needs and providing a differentiated service. A sketch of the three-module structure is given below.
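To make the three-module structure concrete, the following is a minimal sketch of device 30 collapsed into a single class; the constructor arguments and method names are illustrative assumptions, not the patent's implementation.

```python
class SearchResultPresenter:
    """Sketch of device 30: receiving module 31, acquisition module 32,
    and presentation module 33 shown as three methods for illustration."""

    def __init__(self, server_search, play_audio, show_text):
        self.server_search = server_search  # callable: query -> ranked results
        self.play_audio = play_audio        # callable: media_url -> None
        self.show_text = show_text          # callable: text result -> None

    def receive(self, search_content, is_voice):
        # Receiving module 31: accept text, speech, or picture input.
        return {"content": search_content, "is_voice": is_voice}

    def acquire(self, request):
        # Acquisition module 32: delegate to the server side, which queries
        # both the voice search server and the text search server.
        return self.server_search(request["content"])

    def present(self, results, is_voice):
        # Presentation module 33: voice result comes first; auto-play it for
        # voice input, otherwise a click-triggered icon would be used
        # (icon rendering omitted here for brevity).
        for result in results:
            if result.get("type") == "voice" and is_voice:
                self.play_audio(result["media_url"])
            else:
                self.show_text(result)
```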
It should be noted that, in the description of the present invention, the terms "first", "second", and the like are used for descriptive purposes only and should not be construed as indicating or implying relative importance. In addition, in the description of the present invention, unless otherwise specified, "a plurality of" means two or more.
Any process or method described in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes alternative implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention pertain.
It should be understood that each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, a plurality of steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the implementation may use any one or a combination of the following technologies known in the art: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those skilled in the art will appreciate that all or part of the steps carried by the method of the above embodiments may be completed by instructing related hardware through a program, the program may be stored in a computer-readable storage medium, and when executed, the program performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist physically on its own, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
In the description of this specification, reference to the terms "an embodiment", "some embodiments", "an example", "a specific example", "some examples", or the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (16)

1. A method for presenting search results, characterized by comprising:
receiving search content input by a user;
obtaining search results according to the search content, wherein the search results comprise a voice result; and
presenting the search results, wherein the voice result is played as speech.
2. The method according to claim 1, characterized in that obtaining the search results according to the search content comprises:
sending the search content to a server side, so that the server side searches in a voice search server and a text search server to obtain search results; and
receiving the search results sent by the server side.
3. The method according to claim 2, characterized in that, when the search content is voice-input content, sending the search content to the server side so that the server side searches in the voice search server and the text search server to obtain search results comprises:
sending the voice-input content to the server side, and receiving from the server side the result of text recognition performed on the voice-input content; and
sending the text recognition result to the server side, so that the server side obtains, from the voice search server, a voice result matching the text recognition result, obtains, from the text search server, a text result matching the text recognition result, and ranks the voice result before the text result to obtain the search results.
4. The method according to claim 3, characterized in that presenting the search results comprises:
displaying the voice result above the text result.
5. The method according to claim 1, characterized in that playing the voice result as speech comprises:
when the search content is voice-input content, automatically playing the voice result after the voice result is obtained; or
when the search content is not voice-input content, receiving a trigger instruction from the user after the voice result is obtained, and playing the voice result according to the trigger instruction.
6. The method according to claim 5, characterized in that the search results are presented on a search result page, an icon for triggering the voice result is provided on the search result page in correspondence with the voice result, and receiving the trigger instruction from the user and playing the voice result according to the trigger instruction comprises:
playing the voice result when the user clicks the icon.
7. The method according to claim 1, characterized in that, when the search content is voice content in a dialect, playing the voice result as speech comprises:
playing the voice result in the dialect.
8. The method according to any one of claims 1-7, characterized in that the voice result comprises at least one of the following:
a voice response result corresponding to the search content;
an audio result corresponding to the search content;
a video result corresponding to the search content;
an e-book voice result corresponding to the search content.
9. A device for presenting search results, characterized by comprising:
a receiving module configured to receive search content input by a user;
an acquisition module configured to obtain search results according to the search content, wherein the search results comprise a voice result; and
a presentation module configured to present the search results, wherein the voice result is played as speech.
10. The device according to claim 9, characterized in that the acquisition module is specifically configured to:
send the search content to a server side, so that the server side searches in a voice search server and a text search server to obtain search results; and
receive the search results sent by the server side.
11. The device according to claim 10, characterized in that, when the search content is voice-input content, the acquisition module is further specifically configured to:
send the voice-input content to the server side, and receive from the server side the result of text recognition performed on the voice-input content; and
send the text recognition result to the server side, so that the server side obtains, from the voice search server, a voice result matching the text recognition result, obtains, from the text search server, a text result matching the text recognition result, and ranks the voice result before the text result to obtain the search results.
12. The device according to claim 11, characterized in that the presentation module is specifically configured to:
display the voice result above the text result.
13. The device according to claim 9, characterized in that the presentation module is specifically configured to:
when the search content is voice-input content, automatically play the voice result after the voice result is obtained; or
when the search content is not voice-input content, receive a trigger instruction from the user after the voice result is obtained, and play the voice result according to the trigger instruction.
14. The device according to claim 13, characterized in that the search results are presented on a search result page, an icon for triggering the voice result is provided on the search result page in correspondence with the voice result, and the presentation module is further specifically configured to:
play the voice result when the user clicks the icon.
15. The device according to claim 9, characterized in that, when the search content is voice content in a dialect, the presentation module is specifically configured to:
play the voice result in the dialect.
16. The device according to any one of claims 9-15, characterized in that the voice result comprises at least one of the following:
a voice response result corresponding to the search content;
an audio result corresponding to the search content;
a video result corresponding to the search content;
an e-book voice result corresponding to the search content.
CN201410439870.9A 2014-09-01 2014-09-01 Method and device for presenting search results Active CN104239442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410439870.9A CN104239442B (en) 2014-09-01 2014-09-01 Method and device for presenting search results

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410439870.9A CN104239442B (en) 2014-09-01 2014-09-01 Method and device for presenting search results

Publications (2)

Publication Number Publication Date
CN104239442A true CN104239442A (en) 2014-12-24
CN104239442B CN104239442B (en) 2018-03-06

Family

ID=52227501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410439870.9A Active CN104239442B (en) 2014-09-01 2014-09-01 Method and device for presenting search results

Country Status (1)

Country Link
CN (1) CN104239442B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101611403A (en) * 2006-12-28 2009-12-23 摩托罗拉公司 The method and apparatus that is used for the phonetic search of mobile communication equipment
US20110314003A1 (en) * 2010-06-17 2011-12-22 Microsoft Corporation Template concatenation for capturing multiple concepts in a voice query
CN102497391A (en) * 2011-11-21 2012-06-13 宇龙计算机通信科技(深圳)有限公司 Server, mobile terminal and prompt method
CN103425668A (en) * 2012-05-16 2013-12-04 联想(北京)有限公司 Information search method and electronic equipment
CN103914306A (en) * 2014-04-15 2014-07-09 安一恒通(北京)科技有限公司 Method and device for providing executing result of software program

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598585A (en) * 2015-01-15 2015-05-06 百度在线网络技术(北京)有限公司 Information search method and information search device
WO2016192369A1 (en) * 2015-06-03 2016-12-08 深圳市轻生活科技有限公司 Voice interaction method and system, and intelligent voice broadcast terminal
CN105159981A (en) * 2015-08-28 2015-12-16 百度在线网络技术(北京)有限公司 Voice search result operation method and apparatus
CN105574138A (en) * 2015-12-10 2016-05-11 商丘师范学院 Information retrieval system
CN105683963A (en) * 2016-01-07 2016-06-15 马岩 Network link searching method and system
WO2017117785A1 (en) * 2016-01-07 2017-07-13 马岩 Method and system for web searching
CN105955753A (en) * 2016-05-17 2016-09-21 精效新软新技术(北京)有限公司 Creation method of integrative full-flow elaborate intelligent EPR (Electronic Public Relation system) working platform
CN105955753B (en) * 2016-05-17 2019-06-18 精效新软新技术(北京)有限公司 A kind of creation method of integration whole process fining intelligence ERP workbench
CN107515870A (en) * 2016-06-15 2017-12-26 北京搜狗科技发展有限公司 A kind of searching method and device, a kind of device for being used to search for
CN108009177A (en) * 2016-10-28 2018-05-08 百度在线网络技术(北京)有限公司 A kind of information interacting method, server and client side
US11042543B2 (en) 2016-10-28 2021-06-22 Baidu Online Network Technology (Beijing) Co., Ltd. Information interaction method, server, client, and storage medium and device

Also Published As

Publication number Publication date
CN104239442B (en) 2018-03-06

Similar Documents

Publication Publication Date Title
CN104239442A (en) Method and device for representing search results
US11398236B2 (en) Intent-specific automatic speech recognition result generation
CN107507612B (en) Voiceprint recognition method and device
CN108509619B (en) Voice interaction method and device
CN110140168B (en) Contextual hotwords
US10977299B2 (en) Systems and methods for consolidating recorded content
US10068573B1 (en) Approaches for voice-activated audio commands
CN107766482B (en) Information pushing and sending method, device, electronic equipment and storage medium
CN104992704B (en) Phoneme synthesizing method and device
US10431214B2 (en) System and method of determining a domain and/or an action related to a natural language input
US10917758B1 (en) Voice-based messaging
US11189277B2 (en) Dynamic gazetteers for personalized entity recognition
CN110136749A (en) The relevant end-to-end speech end-point detecting method of speaker and device
US20120271631A1 (en) Speech recognition using multiple language models
JP2019503526A (en) Parameter collection and automatic dialog generation in dialog systems
US20130158992A1 (en) Speech processing system and method
CN111081280B (en) Text-independent speech emotion recognition method and device and emotion recognition algorithm model generation method
CN104239458A (en) Method and device for representing search results
CN104867492A (en) Intelligent interaction system and method
CN105138515A (en) Named entity recognition method and device
CN103956169A (en) Speech input method, device and system
CN106575502A (en) Systems and methods for providing non-lexical cues in synthesized speech
US9922650B1 (en) Intent-specific automatic speech recognition result generation
CN109976702A (en) A kind of audio recognition method, device and terminal
CN109473104A (en) Speech recognition network delay optimization method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant