CN113488048A - Information interaction method and device - Google Patents


Info

Publication number
CN113488048A
CN113488048A
Authority
CN
China
Prior art keywords
generate
determining
reply sentence
length
character length
Prior art date
Legal status
Pending
Application number
CN202110766168.3A
Other languages
Chinese (zh)
Inventor
向伟
陈建哲
钟思思
Current Assignee
Baidu International Technology Shenzhen Co ltd
Original Assignee
Baidu International Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Baidu International Technology Shenzhen Co ltd filed Critical Baidu International Technology Shenzhen Co ltd
Priority to CN202110766168.3A priority Critical patent/CN113488048A/en
Publication of CN113488048A publication Critical patent/CN113488048A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26 - Speech to text systems
    • G10L2015/225 - Feedback of the input speech

Abstract

The embodiment of the application discloses an information interaction method and device. One embodiment of the method includes: in response to receiving a user voice, performing voice recognition on the user voice to obtain the text corresponding to it; and in response to determining that the rejection state is the on state, determining whether to generate a reply sentence based on the character length of the text. According to the embodiment of the application, invalid content spoken by the user can be quickly identified, so that the user can be replied to selectively, invalid feedback to the user is reduced, and the degree of intelligence of the information interaction is improved.

Description

Information interaction method and device
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to the technical field of internet, and particularly relates to an information interaction method and device.
Background
With the development of internet technology, voice processing technology has also advanced. Interaction between a user and a terminal device can be realized through voice processing technology.
When a user speaks, he may utter some meaningless filler words, such as "oh" and the like. Such words do not express an explicit instruction. If the speech carrying these words is processed, useless interactions may result, and the device's misjudgment of the speech may increase.
Disclosure of Invention
The embodiment of the application provides an information interaction method and device.
In a first aspect, an embodiment of the present application provides an information interaction method, including: in response to receiving a user voice, performing voice recognition on the user voice to obtain the text corresponding to the user voice; and in response to determining that the rejection state is the on state, determining whether to generate a reply sentence based on the character length of the text.
In some embodiments, determining whether to generate a reply sentence based on the character length of the text includes: determining a comparison result of the character length and a preset character length, and determining, based on the comparison result, whether to generate a reply sentence.
In some embodiments, determining whether to generate a reply sentence based on the comparison result includes: if the character length is less than or equal to the preset character length, determining not to generate a reply sentence; and if the character length is greater than the preset character length, determining to generate a reply sentence.
In some embodiments, the method is applied to a terminal device, and determining whether to generate a reply sentence based on the character length of the text includes: calling a preset software development kit, and determining, based on the character length of the text, whether to generate a reply sentence.
In a second aspect, an embodiment of the present application provides an information interaction apparatus, including: a receiving unit configured to perform voice recognition on a user voice in response to receiving the user voice, so as to obtain the text corresponding to the user voice; and a determining unit configured to determine whether to generate a reply sentence based on the character length of the text in response to determining that the rejection state is the on state.
In some embodiments, the determining unit is further configured to determine whether to generate a reply sentence based on the character length of the text as follows: determining a comparison result of the character length and a preset character length, and determining, based on the comparison result, whether to generate a reply sentence.
In some embodiments, the determining unit is further configured to determine whether to generate a reply sentence based on the comparison result as follows: if the character length is less than or equal to the preset character length, determining not to generate a reply sentence; and if the character length is greater than the preset character length, determining to generate a reply sentence.
In some embodiments, the apparatus is applied to a terminal device, and the determining unit is further configured to determine whether to generate a reply sentence based on the character length of the text as follows: calling a preset software development kit, and determining, based on the character length of the text, whether to generate a reply sentence.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; storage means for storing one or more programs which, when executed by one or more processors, cause the one or more processors to carry out a method according to any one of the embodiments of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method according to any one of the embodiments of the first aspect.
According to the information interaction scheme provided by the embodiment of the application, first, in response to receiving a user voice, voice recognition is performed on the user voice to obtain the corresponding text. Then, in response to determining that the rejection state is the on state, whether to generate a reply sentence is determined based on the character length of the text. In this way, invalid content spoken by the user can be quickly identified, so that the user can be replied to selectively, invalid feedback to the user is reduced, and the degree of intelligence of the information interaction is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of an information interaction method according to the present application;
FIG. 3 is a schematic diagram of an application scenario of an information interaction method according to the present application;
FIG. 4 is a schematic structural diagram of an embodiment of an information interaction device according to the present application;
FIG. 5 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the information interaction method or information interaction apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as an information interaction application, a video application, a live application, an instant messaging tool, a mailbox client, social platform software, and the like, may be installed on the terminal devices 101, 102, and 103.
Here, the terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 105 may be a server providing various services, such as a background server providing support for the terminal devices 101, 102, 103. The background server may analyze and otherwise process data such as user voice, and feed back a processing result (e.g., a result of determining whether to generate a reply sentence) to the terminal device.
It should be noted that the information interaction method provided in the embodiment of the present application may be executed by the terminal device 101, 102, 103 or the server 105, and accordingly, the information interaction apparatus may be disposed in the terminal device 101, 102, 103 or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of an information interaction method according to the present application is shown. The information interaction method comprises the following steps:
step 201, in response to receiving the user voice, performing voice recognition on the user voice to obtain characters corresponding to the user voice.
In this embodiment, a user may interact with the execution body on which the information interaction method runs (for example, the terminal device shown in fig. 1), and the execution body may receive the user voice through an installed or connected microphone. The execution body can then perform voice recognition on the user voice to convert it into text.
Step 202, in response to determining that the rejection state is the on state, determining whether to generate a reply sentence based on the character length of the text.
In this embodiment, if the execution body determines that the rejection state of the terminal device is the on state, the execution body may determine the character length of the text and decide, based on that length, whether to generate a reply sentence. The character length refers to the number of characters in the text. The reply sentence may be voice or text, and is a sentence that replies to the user voice for interaction. Specifically, the rejection state characterizes whether the process of determining, based on the character length, whether to generate a reply sentence is executed: the on state indicates execution and, correspondingly, the off state indicates no execution. In practice, the rejection state may be represented by the value of a rejection flag stored by the execution body. For example, the rejection flag may take the values "0" and "1", indicating the off state and the on state respectively.
The execution body may determine whether to generate the reply sentence in a variety of ways. For example, it may perform a mathematical operation on the character length and other parameters, such as the duration of the user voice: the character length and the duration are weighted with respective preset weights, and if the resulting weighted value is greater than a preset value, it is determined to generate a reply sentence.
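As a concrete illustration of the weighted decision described above, the following sketch combines the character length of the recognized text with the duration of the user voice. The weights and threshold are invented for illustration; the application does not specify their values.

```python
def should_reply(text: str, duration_s: float,
                 w_len: float = 0.7, w_dur: float = 0.3,
                 threshold: float = 2.0) -> bool:
    """Weight the character length and the speech duration, and generate a
    reply sentence only if the weighted value exceeds the preset value.
    The weights and threshold here are illustrative assumptions."""
    score = w_len * len(text) + w_dur * duration_s
    return score > threshold

# A substantive utterance scores above the threshold; a filler word does not.
print(should_reply("我想听个故事", 2.5))  # True
print(should_reply("哦", 0.4))           # False
```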
In some optional implementations of this embodiment, step 202 may include:
and determining a comparison result of the character length and the preset character length, and determining whether to generate a reply sentence or not based on the comparison result.
In these implementations, the execution body may compare the character length with a preset character length and determine, based on the comparison result, whether to generate a reply sentence. The preset character length is a threshold on the character length that conditions whether a reply sentence is generated. For example, if the character length is greater than or equal to the preset character length, the execution body can generate a reply sentence.
These implementations use a preset character length for comparison, so that whether to generate a reply sentence can be determined quickly and accurately.
In some optional application scenarios of this embodiment, determining whether to generate a reply sentence based on the comparison result in the above step may include:
if the character length is less than or equal to the preset character length, determining not to generate a reply sentence; and if the character length is greater than the preset character length, determining to generate a reply sentence.
In these optional application scenarios, the execution body determines whether to generate a reply sentence from the relationship between the character length and the preset character length: the condition for generating a reply sentence is that the character length is greater than the preset character length. For example, suppose the text corresponding to the user voice is "i want", so that the character length is 2, and the preset character length is also 2. The character length is then equal to the preset character length, and the execution body may determine not to generate a reply sentence.
In these application scenarios, no reply sentence is generated when the character length is determined to be small, which reduces invalid feedback to the user.
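The rule in these application scenarios can be sketched as follows. The preset character length of 2 mirrors the "i want" example in the text, but is otherwise an assumed value.

```python
PRESET_LENGTH = 2  # assumed threshold; matches the "i want" example in the text

def decide_reply(text: str, preset: int = PRESET_LENGTH) -> bool:
    """No reply when the character length is less than or equal to the
    preset character length; reply when it is greater."""
    return len(text) > preset

print(decide_reply("我想"))      # length 2 <= preset, no reply: False
print(decide_reply("我想听歌"))  # length 4 > preset, reply: True
```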
In some optional implementations of this embodiment, step 202 may include:
and calling a preset software development kit, and determining whether to generate a reply sentence or not based on the character length of the characters.
These alternative implementations call a preset software development kit (SDK) to determine whether to generate a reply sentence. The determination is performed inside the software development kit, and the determination result is then transmitted to the service layer of the terminal device. In this way, these implementations can realize an efficient and accurate determination of whether to generate a reply sentence on the terminal device.
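A hypothetical sketch of this delegation is shown below. The SDK class name and its `judge` method are invented for illustration; the application does not disclose the actual SDK interface or threshold value.

```python
class PresetSDK:
    """Stand-in for the preset software development kit that performs the
    length-based determination on the terminal device (hypothetical)."""

    def __init__(self, preset_length: int = 2):  # assumed threshold
        self.preset_length = preset_length

    def judge(self, text: str) -> bool:
        # The determination runs inside the SDK, not in the service layer.
        return len(text) > self.preset_length

def service_layer_decide(text: str, sdk: PresetSDK) -> bool:
    # The terminal's service layer calls the SDK and receives the result.
    return sdk.judge(text)

sdk = PresetSDK()
print(service_layer_decide("哦", sdk))             # False: rejected as invalid
print(service_layer_decide("今天天气怎么样", sdk))  # True: substantive query
```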
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the information interaction method according to this embodiment. In the application scenario of fig. 3, the execution body 301, in response to receiving the user voice 302, performs voice recognition on it and obtains the corresponding text "i want". In response to determining that the rejection state is the on state, the execution body 301 determines whether to generate a reply sentence 304 based on the character length 303 of the text.
The method provided by the embodiment of the application can quickly identify invalid content spoken by the user, so that the user can be replied to selectively, invalid feedback to the user is reduced, and the degree of intelligence of the information interaction is improved. Moreover, the rejection state in this embodiment is determined by the terminal device; such a determination is more flexible and can even be adjusted by the user. In addition, the execution body may decide whether to apply the rejection policy according to the specific situation.
With further reference to fig. 4, as an implementation of the method shown in the above-mentioned figures, the present application provides an embodiment of an information interaction apparatus, which corresponds to the method embodiment shown in fig. 2, and which can be applied to various electronic devices.
As shown in fig. 4, the information interaction apparatus 400 of this embodiment includes: a receiving unit 401 configured to perform voice recognition on a user voice in response to receiving the user voice, so as to obtain the text corresponding to the user voice; and a determining unit 402 configured to determine whether to generate a reply sentence based on the character length of the text in response to determining that the rejection state is the on state.
In some embodiments, the receiving unit 401 of the information interaction apparatus 400 may receive the user voice through an installed or connected microphone, and may perform voice recognition on the user voice to convert it into text.
In some embodiments, the determining unit 402 may determine the character length of the text and decide, based on that length, whether to generate a reply sentence. The character length refers to the number of characters in the text. The reply sentence may be voice or text, and is a sentence that replies to the user voice for interaction.
In some optional implementations of this embodiment, the determining unit is further configured to determine whether to generate a reply sentence based on the character length of the text as follows: determining a comparison result of the character length and a preset character length, and determining, based on the comparison result, whether to generate a reply sentence.
In some optional implementations of this embodiment, the determining unit is further configured to determine whether to generate a reply sentence based on the comparison result as follows: if the character length is less than or equal to the preset character length, determining not to generate a reply sentence; and if the character length is greater than the preset character length, determining to generate a reply sentence.
In some optional implementations of this embodiment, the apparatus is applied to a terminal device, and the determining unit is further configured to determine whether to generate a reply sentence based on the character length of the text as follows: calling a preset software development kit, and determining, based on the character length of the text, whether to generate a reply sentence.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 502 or a program loaded from a storage means 508 into a random access memory (RAM) 503. Various programs and data necessary for the operation of the electronic device 500 are also stored in the RAM 503. The processing means 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium of the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a receiving unit and a determining unit. Where the names of these units do not constitute a limitation on the unit itself in some cases, for example, the determination unit may also be described as "a unit that determines whether to generate a reply sentence based on the word length of the word in response to determining that the rejection state is the on state".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: performing voice recognition on the user voice in response to the received user voice to obtain characters corresponding to the user voice; in response to determining that the rejection state is an on state, it is determined whether to generate a reply sentence based on a word length of the word.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (8)

1. An information interaction method comprises the following steps:
responding to the received user voice, and performing voice recognition on the user voice to obtain characters corresponding to the user voice;
in response to determining that the rejection state is the on state, determining whether to generate a reply sentence based on the word length of the word;
the determining whether to generate a reply sentence based on the word length of the word includes:
determining a comparison result of the character length and a preset character length, and determining whether to generate a reply sentence or not based on the comparison result;
the determining whether to generate a reply statement based on the comparison result comprises:
and if the character length is not less than the preset character length, determining to generate a reply sentence.
2. The method of claim 1, wherein the determining whether to generate a reply statement based on the comparison result further comprises:
and if the character length is smaller than the preset character length, determining that no reply sentence is generated.
3. The method of claim 1, wherein the method is applied to a terminal device; and
the determining whether to generate a reply sentence based on the word length of the word includes:
and calling a preset software development kit, and determining whether to generate a reply sentence or not based on the character length of the characters.
4. An information interaction device, comprising:
the receiving unit is configured to respond to the received user voice, perform voice recognition on the user voice, and obtain characters corresponding to the user voice;
a determination unit configured to determine whether to generate a reply sentence based on a word length of the word in response to determining that the rejection state is the on state;
the determining unit is further configured to perform determining whether to generate a reply sentence based on the word length of the word as follows:
determining a comparison result of the character length and a preset character length, and determining whether to generate a reply sentence or not based on the comparison result;
the determining unit is further configured to perform determining whether to generate a reply statement based on the comparison result as follows:
and if the character length is not less than the preset character length, determining to generate a reply sentence.
5. The apparatus of claim 4, wherein the determining unit is further configured to perform determining whether to generate a reply statement based on the comparison result as follows:
and if the character length is smaller than the preset character length, determining that no reply sentence is generated.
6. The apparatus according to claim 4, wherein the apparatus is applied to a terminal device, and the determining unit is further configured to determine whether to generate a reply sentence based on the character length of the text as follows:
calling a preset software development kit, and determining whether to generate a reply sentence based on the character length of the text.
7. An electronic device, comprising:
one or more processors;
a storage device storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-3.
8. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of any one of claims 1-3.
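The length-based rejection logic recited in the claims above can be summarized as: when the rejection state is on, compare the character length of the recognized text against a preset length, and generate a reply sentence only if the length is not less than the threshold. A minimal Python sketch of that decision follows; this is an illustrative reconstruction, not the patent's implementation, and the names `MIN_CHARS` and `should_generate_reply` as well as the threshold value are assumptions not taken from the patent.

```python
# Illustrative sketch of the claimed length-based rejection decision.
# MIN_CHARS is a hypothetical value for the "preset character length".
MIN_CHARS = 4


def should_generate_reply(text: str, rejection_on: bool,
                          min_chars: int = MIN_CHARS) -> bool:
    """Decide whether to generate a reply sentence for recognized text.

    With the rejection state on, compare the text's character length
    against the preset character length: generate a reply only if the
    length is not less than the threshold; otherwise do not generate one.
    """
    if not rejection_on:
        # The claims only cover the on state; with rejection off,
        # this sketch simply always replies.
        return True
    return len(text) >= min_chars
```

For example, with the hypothetical threshold of 4 characters, a short fragment such as "hi" would be rejected (no reply generated), while a longer recognized utterance would pass the length check and proceed to reply generation.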
CN202110766168.3A 2019-03-12 2019-03-12 Information interaction method and device Pending CN113488048A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110766168.3A CN113488048A (en) 2019-03-12 2019-03-12 Information interaction method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110766168.3A CN113488048A (en) 2019-03-12 2019-03-12 Information interaction method and device
CN201910184565.2A CN109949806B (en) 2019-03-12 2019-03-12 Information interaction method and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910184565.2A Division CN109949806B (en) 2019-03-12 2019-03-12 Information interaction method and device

Publications (1)

Publication Number Publication Date
CN113488048A true CN113488048A (en) 2021-10-08

Family

ID=67009596

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910184565.2A Active CN109949806B (en) 2019-03-12 2019-03-12 Information interaction method and device
CN202110766168.3A Pending CN113488048A (en) 2019-03-12 2019-03-12 Information interaction method and device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910184565.2A Active CN109949806B (en) 2019-03-12 2019-03-12 Information interaction method and device

Country Status (1)

Country Link
CN (2) CN109949806B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109949806B (en) * 2019-03-12 2021-07-27 百度国际科技(深圳)有限公司 Information interaction method and device
CN111524515A (en) * 2020-04-30 2020-08-11 海信电子科技(武汉)有限公司 Voice interaction method and device, electronic equipment and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1670821A (en) * 2004-03-19 2005-09-21 乐金电子(中国)研究开发中心有限公司 Text representing method of voice/text conversion technology
JP2013050605A (en) * 2011-08-31 2013-03-14 Nippon Hoso Kyokai <Nhk> Language model switching device and program for the same
US20150206530A1 (en) * 2014-01-22 2015-07-23 Samsung Electronics Co., Ltd. Interactive system, display apparatus, and controlling method thereof
CN106384591A (en) * 2016-10-27 2017-02-08 乐视控股(北京)有限公司 Method and device for interacting with voice assistant application
CN106550082A (en) * 2016-10-25 2017-03-29 乐视控股(北京)有限公司 The method and apparatus that a kind of use voice assistant application is dialled
CN107481737A (en) * 2017-08-28 2017-12-15 广东小天才科技有限公司 The method, apparatus and terminal device of a kind of voice monitoring
CN109949806B (en) * 2019-03-12 2021-07-27 百度国际科技(深圳)有限公司 Information interaction method and device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6111802B2 (en) * 2013-03-29 2017-04-12 富士通株式会社 Spoken dialogue apparatus and dialogue control method
CN104239539B (en) * 2013-09-22 2017-11-07 中科嘉速(北京)并行软件有限公司 A kind of micro-blog information filter method merged based on much information
CN103631963B (en) * 2013-12-18 2017-10-17 北京博雅立方科技有限公司 A kind of keyword optimized treatment method and device based on big data
CN106302933B (en) * 2016-08-31 2019-06-11 宇龙计算机通信科技(深圳)有限公司 Voice information processing method and terminal
CN106328166B (en) * 2016-08-31 2019-11-08 上海交通大学 Human-computer dialogue abnormality detection system and method
US10062385B2 (en) * 2016-09-30 2018-08-28 International Business Machines Corporation Automatic speech-to-text engine selection
CN106847270B (en) * 2016-12-09 2020-08-18 华南理工大学 Double-threshold place name voice endpoint detection method
JP6825485B2 (en) * 2017-05-23 2021-02-03 富士通株式会社 Explanation support program, explanation support method and information processing terminal
CN109215640B (en) * 2017-06-30 2021-06-01 深圳大森智能科技有限公司 Speech recognition method, intelligent terminal and computer readable storage medium
CN108399241B (en) * 2018-02-28 2021-08-31 福州大学 Emerging hot topic detection system based on multi-class feature fusion
CN109451158B (en) * 2018-11-09 2021-07-27 维沃移动通信有限公司 Reminding method and device

Also Published As

Publication number Publication date
CN109949806A (en) 2019-06-28
CN109949806B (en) 2021-07-27

Similar Documents

Publication Publication Date Title
CN107731229B (en) Method and apparatus for recognizing speech
CN112311841B (en) Information pushing method and device, electronic equipment and computer readable medium
CN109829164B (en) Method and device for generating text
CN109981787B (en) Method and device for displaying information
CN110007936B (en) Data processing method and device
CN111881271A (en) Method and device for realizing automatic conversation
CN110445632B (en) Method and device for preventing client from crashing
CN109949806B (en) Information interaction method and device
CN110232920B (en) Voice processing method and device
CN110727775A (en) Method and apparatus for processing information
CN110223694B (en) Voice processing method, system and device
CN111581664B (en) Information protection method and device
CN110519373B (en) Method and device for pushing information
CN111277488A (en) Session processing method and device
CN107608718B (en) Information processing method and device
CN110634478A (en) Method and apparatus for processing speech signal
CN114707951A (en) Alarm situation big data management method, device, equipment and storage medium
CN111832279B (en) Text partitioning method, apparatus, device and computer readable medium
CN112017685B (en) Speech generation method, device, equipment and computer readable medium
CN111460020B (en) Method, device, electronic equipment and medium for resolving message
CN112395194A (en) Method and device for accessing test platform
CN110765764B (en) Text error correction method, electronic device, and computer-readable medium
CN110830652B (en) Method, apparatus, terminal and computer readable medium for displaying information
CN114513548B (en) Directional call information processing method and device
CN112346728B (en) Device adaptation method, apparatus, device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination