CN110706704A - Method, device and computer equipment for generating voice interaction prototype - Google Patents
- Publication number
- CN110706704A (application number CN201910988067.3A)
- Authority
- CN
- China
- Prior art keywords
- instruction
- voice
- text information
- information
- reply
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS > G10—MUSICAL INSTRUMENTS; ACOUSTICS > G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING > G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/08—Speech classification or search
- G10L15/1822—Parsing for meaning understanding (under G10L15/18—Speech classification or search using natural language modelling)
- G10L15/26—Speech to text systems
- G10L2015/088—Word spotting
- G10L2015/221—Announcement of recognition results
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention provides a method, an apparatus, and computer equipment for generating a voice interaction prototype, belonging to the field of human-computer interaction. To solve the problem that a voice interaction designer cannot quickly and effectively verify the feasibility of a design scheme in an interactive way, the method comprises the following steps: acquiring text information input by a user, where the text information comprises instruction text information, instruction keywords, and reply text information; performing data processing on the text information to obtain a semantic table; preprocessing the reply text information in the semantic table and pre-synthesizing the corresponding reply voice information; acquiring voice instruction information input by the user; recognizing the voice instruction information, capturing its keywords, and matching them against the instruction keywords in the semantic table, and if the matching degree of the instruction keywords is greater than or equal to a set threshold value, judging that the voice instruction information corresponds to instruction text information in the semantic table; and finding the reply text information corresponding to that instruction text information in the semantic table and broadcasting the pre-synthesized reply voice information.
Description
Technical Field
The present invention relates to the field of human-computer interaction technologies, and in particular, to a method, an apparatus, and a computer device for generating a voice interaction prototype.
Background
Human-computer interaction is the discipline that studies the interactive relationship between systems and their users, where the system may be a machine or any of various software systems. Through human-computer interaction, a dialogue between a person and a system can be realized, as in voice control systems and voice response systems. Speech synthesis and speech recognition technologies are the basis of human-computer voice interaction: by converting text information into voice and playing it, a machine can carry on a voice-based conversation with a person.
When a voice interaction designer designs a dialogue for a voice interaction system, the dialogue content often has to be broadcast with a simulated human voice to check its usability. Plain voice broadcasting, however, offers poor interactivity and cannot reflect the real user experience well, while building a fully interactive voice program requires substantial development cost and is inefficient.
Therefore, an interactive prototype that responds to voice instructions preset by the designer and broadcasts preset answer content by voice is of great value for improving designers' working efficiency and approximating the real user experience.
Disclosure of Invention
The invention aims to provide a method, a device and computer equipment for generating a voice interaction prototype, which solve the problem that a voice interaction designer cannot quickly and effectively check the feasibility of a design scheme in an interactive mode.
To solve this technical problem, the invention adopts the following technical scheme: a method for generating a voice interaction prototype, comprising the following steps:
step 1, acquiring text information input by a user, wherein the text information comprises instruction text information, instruction keywords and reply text information;
step 2, carrying out data processing on the text information to obtain a semantic table;
step 3, preprocessing the reply text information in the semantic table, and pre-synthesizing corresponding reply voice information;
step 4, acquiring voice instruction information input by a user;
step 5, recognizing the voice instruction information, capturing keywords in the voice instruction information, and matching them with the instruction keywords in the semantic table; if the matching degree of the instruction keywords is greater than or equal to a set threshold value, judging that the voice instruction information corresponds to instruction text information in the semantic table;
and 6, finding the reply text information corresponding to the instruction text information in the semantic table, and broadcasting the pre-synthesized reply voice information.
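As a concrete illustration of steps 1 and 2, the designer-supplied text can be normalised into a simple in-memory semantic table. The sketch below is hypothetical; the patent does not specify a schema, so all field names here are assumptions:

```python
# Hypothetical sketch of the semantic table built in steps 1-2.
# All field names are illustrative assumptions, not from the patent.

def build_semantic_table(entries):
    """Normalise designer input (step 1) into a semantic table (step 2)."""
    table = []
    for e in entries:
        if not e["instruction_keywords"]:
            raise ValueError("at least one instruction keyword must be marked")
        table.append({
            "instruction_text": e["instruction_text"],
            # keywords are marked by the designer inside the instruction text
            "instruction_keywords": [k.lower() for k in e["instruction_keywords"]],
            # one instruction text may carry several reply texts
            "reply_texts": list(e["reply_texts"]),
        })
    return table

table = build_semantic_table([{
    "instruction_text": "What is the weather like today?",
    "instruction_keywords": ["Weather", "today"],
    "reply_texts": ["It is sunny today.", "Expect light rain later."],
}])
```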
Specifically, in step 1, the instruction text information is the data used to trigger a man-machine conversation in the voice interaction prototype; the instruction keywords are a subset of the content of the instruction text information, marked by the user after the instruction text is entered; the user may mark one or more instruction keywords in a single instruction text message.
Further, in step 1, one instruction text message can correspond to multiple reply text messages.
Specifically, for the case where one instruction text message corresponds to multiple reply text messages, the response mode for selection includes:
a sequential response mode, in which the user repeatedly inputs the instruction by voice, and each time the voice interaction prototype recognizes the instruction it broadcasts the reply voice information corresponding to the next reply text message, following the order in which the user wrote them;
and a random response mode, in which the user repeatedly inputs the instruction by voice, and each time the voice interaction prototype recognizes the instruction it randomly selects one of the reply text messages written by the user and broadcasts the corresponding reply voice information; in the random mode, each reply text has the same broadcast probability.
Further, in step 3, the information in the semantic table is uploaded to a speech synthesis device, and the reply text information is preprocessed through a preset acoustic model to synthesize the corresponding reply speech information in advance.
The device for generating the voice interaction prototype is applied to the method and comprises the following steps:
the first data acquisition module is used for acquiring text information input by a user;
the data processing module is used for processing the text information to obtain a semantic table, and the semantic table comprises instruction text information, instruction keywords and reply text information;
the voice synthesis module is used for pre-synthesizing reply voice information corresponding to the reply text information in the semantic table;
the second data acquisition module is used for acquiring voice instruction information input by a user;
the semantic recognition module is used for recognizing voice instruction information input by a user, capturing keywords in the voice instruction information and matching the keywords with instruction keywords in a semantic table, and if the matching degree of the instruction keywords is greater than or equal to a set threshold value, judging that the voice instruction information is instruction text information in the semantic table;
and the voice broadcasting module is used for broadcasting the pre-synthesized reply voice information.
Computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method when executing the computer program.
The method, apparatus, and computer equipment for generating a voice interaction prototype have the advantage that an interactive voice prototype can be generated quickly, meeting the verification needs of voice interaction designers in their work while improving the efficiency of generating voice interaction prototypes.
Drawings
FIG. 1 is a system architecture diagram of a method for generating a prototype of a voice interaction provided by an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for generating a speech interaction prototype according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an apparatus for generating a speech interaction prototype according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings.
The method for generating the voice interaction prototype comprises the following steps:
step 1, acquiring text information input by a user, wherein the text information comprises instruction text information, instruction keywords and reply text information.
The instruction text information is data used for triggering man-machine conversation in the voice interaction prototype; the instruction keywords are part of contents contained in the instruction text information, and the keywords in the instruction text information are marked after the user inputs the instruction text information; the user can mark one or more instruction keywords in one instruction text message.
And, one instruction text message can correspond to a plurality of reply text messages.
For the case that one instruction text message corresponds to multiple reply text messages, the selectable reply modes include:
a sequential response mode, in which the user repeatedly inputs the instruction by voice, and each time the voice interaction prototype recognizes the instruction it broadcasts the reply voice information corresponding to the next reply text message, following the order in which the user wrote them;
and a random response mode, in which the user repeatedly inputs the instruction by voice, and each time the voice interaction prototype recognizes the instruction it randomly selects one of the reply text messages written by the user and broadcasts the corresponding reply voice information; in the random mode, each reply text has the same broadcast probability.
And 2, carrying out data processing on the text information to obtain a semantic table.
And 3, preprocessing the reply text information in the semantic table, and pre-synthesizing the corresponding reply voice information.
Specifically, the information in the semantic table is uploaded to a speech synthesis device, and the reply text information is preprocessed through a preset acoustic model to pre-synthesize the corresponding reply voice information.
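Pre-synthesis in step 3 amounts to running each reply text through the synthesis device once, before any interaction takes place, and caching the resulting audio so that step 6 can broadcast without synthesis latency. A minimal sketch follows, where `synthesize` is a stand-in for the speech synthesis device and preset acoustic model (an assumption, since the patent does not name a concrete API):

```python
def presynthesize_replies(semantic_table, synthesize):
    """Step 3: synthesize every reply text up front and cache the audio.

    `synthesize` is any callable mapping text to audio bytes; it stands in
    for the speech synthesis device described in the patent.
    """
    cache = {}
    for entry in semantic_table:
        for reply in entry["reply_texts"]:
            if reply not in cache:  # skip duplicates shared across entries
                cache[reply] = synthesize(reply)
    return cache

# Usage with a stub synthesizer that just tags the text:
table = [{"reply_texts": ["It is sunny today.", "Expect light rain later."]}]
audio_cache = presynthesize_replies(table, lambda text: b"PCM:" + text.encode())
```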
And 4, acquiring voice instruction information input by a user.
And 5, recognizing the voice instruction information, capturing keywords in the voice instruction information, matching the keywords with instruction keywords in a semantic table, and judging that the voice instruction information is instruction text information in the semantic table if the matching degree of the instruction keywords is greater than or equal to a set threshold value.
And 6, finding the reply text information corresponding to the instruction text information in the semantic table, and broadcasting the pre-synthesized reply voice information.
The device for generating the voice interaction prototype is applied to the method, and comprises the following steps: the first data acquisition module is used for acquiring text information input by a user; the data processing module is used for processing the text information to obtain a semantic table, and the semantic table comprises instruction text information, instruction keywords and reply text information; the voice synthesis module is used for pre-synthesizing reply voice information corresponding to the reply text information in the semantic table; the second data acquisition module is used for acquiring voice instruction information input by a user; the semantic recognition module is used for recognizing voice instruction information input by a user, capturing keywords in the voice instruction information and matching the keywords with instruction keywords in a semantic table, and if the matching degree of the instruction keywords is greater than or equal to a set threshold value, judging that the voice instruction information is instruction text information in the semantic table; and the voice broadcasting module is used for broadcasting the pre-synthesized reply voice information.
Computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method when executing the computer program.
Examples
The voice interaction prototype mainly serves design verification in the voice design stage: after inputting the instructions to be verified and the reply texts, the user interacts with the generated prototype to simulate the real use effect and verify the feasibility of the design scheme.
Fig. 1 is a system architecture diagram of a method for generating a voice interaction prototype according to an embodiment of the present invention. As shown in fig. 1, the system architecture of the present embodiment includes terminal devices 110 and 120, a wireless network 130, and a server 140. The terminal devices 110, 120 are connected to a server 140 via a wireless network 130. The terminal devices 110 and 120 send text information and voice instruction information to the server 140 through the wireless network 130, the server 140 synthesizes corresponding reply voice information according to the text information sent by the terminal devices 110 and 120, and the server 140 identifies the voice instruction information sent by the terminal devices 110 and 120 and sends the corresponding reply voice information back to the terminal devices 110 and 120.
The terminal device of this embodiment may be an electronic device having a text input function, an audio input function, and an audio playing function, including but not limited to a notebook computer, a tablet computer, a smart phone, and the like. The server of the embodiment is a server with voice synthesis and voice recognition functions, and is used for generating corresponding voice information according to text information sent by the equipment and recognizing the voice information sent by the terminal equipment.
Fig. 2 is a flowchart illustrating a method for generating a speech interaction prototype according to an embodiment of the present invention. As shown in fig. 2, the method for generating a speech interaction prototype provided by the present embodiment includes the following steps:
210. acquiring text information input by a user, wherein the text information input by the user comprises instruction text information, instruction keywords and reply text information;
the instruction text information corresponds to the voice content the user will input when using the voice interaction prototype; it is not limited to control commands in the narrow sense, but covers input in the broad sense, including but not limited to content types such as queries and chat;
the instruction keywords are part of the content contained in the instruction text information, and the user marks the keywords in the instruction text information after inputting the instruction text information.
The reply text information is the reply content for the instruction text information input by the user, i.e., the answer the user expects to receive after speaking a voice instruction to the voice interaction prototype;
220. after the text information is obtained, a semantic table is obtained through data processing, and the semantic table comprises instruction text information, keywords and reply text information;
230. uploading the information in the semantic table to a voice synthesis device, preprocessing the reply text information through a preset acoustic model, and synthesizing corresponding reply voice information;
240. acquiring voice instruction information of a user;
250. uploading the user's voice instruction information to a semantic recognition device, which converts the voice instruction into text information and matches it against the keywords in the semantic table; if the keyword matching degree reaches or exceeds the threshold set by the system, the voice instruction information is judged to be instruction text information in the semantic table;
Suppose the semantic table entry contains n keywords, the recognized voice instruction information contains m of them (m ≤ n), and the system threshold is a. The matching rule is:
m / n ≥ a (Equation 1)
If Equation 1 holds, the voice instruction input by the user is judged to match the instruction text information in the semantic table.
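Equation 1's rule translates directly into code. The sketch below assumes simple substring containment as the notion of a keyword appearing in the recognised text; the patent leaves the exact matching primitive open:

```python
def instruction_matches(instruction_keywords, recognized_text, threshold):
    """Equation 1: match when m / n >= a, where n is the number of marked
    keywords, m is how many of them appear in the recognised text, and a
    is the system-set threshold."""
    n = len(instruction_keywords)
    m = sum(1 for kw in instruction_keywords if kw in recognized_text)
    return m / n >= threshold

# With keywords ["weather", "today"] and threshold a = 0.5, recognising
# only "weather" gives m/n = 1/2, which still counts as a match.
```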
260. And finding the reply text information corresponding to the instruction text information in the semantic table, and broadcasting the pre-synthesized reply voice information.
Alternatively, the user may mark one or more keywords in a piece of instruction text information.
Alternatively, for one instruction text message, the user may input a plurality of reply text messages.
For the case that one instruction text message corresponds to multiple reply text messages, the selectable response modes include:
and in the sequential response mode, repeatedly inputting the command in a voice form by the user, and broadcasting the response voice information corresponding to the response text information one by one according to the sequence of the response text information written by the user after recognizing the command through the voice interaction prototype.
And in the random response mode, the user repeatedly inputs the command in a voice form for multiple times, after the command is recognized through voice interaction prototype, one piece of reply text information is randomly selected from reply text information written by the user to broadcast the corresponding reply voice information, and in the random mode, the broadcast probability of each reply text is the same.
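The two response modes can be sketched as a small selector object. This is an illustrative assumption about how the per-instruction state (the position in the sequential list) might be kept; the patent does not prescribe an implementation:

```python
import random

class ReplySelector:
    """Sequential or random selection among multiple reply texts."""

    def __init__(self, reply_texts, mode="sequential"):
        self.reply_texts = list(reply_texts)
        self.mode = mode
        self._next = 0  # position cursor for sequential mode

    def select(self):
        if self.mode == "sequential":
            # broadcast replies in the order the designer wrote them
            reply = self.reply_texts[self._next % len(self.reply_texts)]
            self._next += 1
            return reply
        # random mode: each reply text is equally likely
        return random.choice(self.reply_texts)
```

Wrapping around to the first reply after the last one is an assumption; the patent only specifies that replies are broadcast in the order the user wrote them.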
Fig. 3 is a schematic structural diagram of an apparatus for generating a voice interaction prototype according to an embodiment of the present invention. As shown in fig. 3, the apparatus provided by this embodiment includes: a first data acquisition module 310, a data processing module 320, a speech synthesis module 330, a second data acquisition module 340, a semantic recognition module 350, and a voice broadcast module 360.
a first data obtaining module 310, configured to obtain text information input by a user;
and the data processing module 320 is configured to process the text information to obtain a semantic table. The semantic table comprises instruction text information, keywords and reply text information;
the voice synthesis module 330 is configured to pre-synthesize reply voice information corresponding to the reply text information in the semantic table;
the second data obtaining module 340 is configured to obtain voice instruction information input by a user;
the semantic recognition module 350 is configured to recognize voice instruction information input by a user, capture keywords in it, and match them against the instruction keywords in the semantic table; if the keyword matching degree reaches or exceeds the threshold set by the system, the voice instruction information is judged to be instruction text information in the semantic table;
and the voice broadcasting module 360 is used for broadcasting the pre-synthesized reply voice information.
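Tying the modules of Fig. 3 together, one interaction turn might flow as follows. Here `recognize` and `play` are stand-ins for the semantic recognition and voice broadcast modules, and the threshold rule is the one from Equation 1; everything in this sketch is illustrative rather than the patent's actual implementation:

```python
def handle_voice_instruction(audio, recognize, semantic_table, audio_cache,
                             play, threshold=0.5):
    """One turn: recognise the audio, match it against the semantic table
    using the Equation 1 rule, and broadcast the cached reply audio."""
    text = recognize(audio)  # semantic recognition module: audio -> text
    for entry in semantic_table:
        keywords = entry["instruction_keywords"]
        m = sum(1 for kw in keywords if kw in text)
        if m / len(keywords) >= threshold:
            reply = entry["reply_texts"][0]  # reply-mode selection elided
            play(audio_cache[reply])         # voice broadcast module
            return reply
    return None  # no instruction matched the recognised text
```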
An embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps in the method when executing the computer program.
The invention also provides a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
According to the scheme of the invention, an interactive voice prototype can be generated quickly, meeting the verification needs of voice interaction designers in their work while improving the efficiency of generating voice interaction prototypes.
Claims (7)
1. Method for generating a prototype of a speech interaction, characterized in that it comprises the following steps:
step 1, acquiring text information input by a user, wherein the text information comprises instruction text information, instruction keywords and reply text information;
step 2, carrying out data processing on the text information to obtain a semantic table;
step 3, preprocessing the reply text information in the semantic table, and pre-synthesizing corresponding reply voice information;
step 4, acquiring voice instruction information input by a user;
step 5, recognizing the voice instruction information, capturing keywords in the voice instruction information, and matching them with the instruction keywords in the semantic table; if the matching degree of the instruction keywords is greater than or equal to a set threshold value, judging that the voice instruction information corresponds to instruction text information in the semantic table;
and 6, finding the reply text information corresponding to the instruction text information in the semantic table, and broadcasting the pre-synthesized reply voice information.
2. The method for generating a prototype of voice interaction according to claim 1, wherein in step 1, the instruction text information is data in the prototype of voice interaction for triggering man-machine conversation; the instruction keywords are part of contents contained in the instruction text information, and the keywords in the instruction text information are marked after the user inputs the instruction text information; the user can mark one or more instruction keywords in one instruction text message.
3. The method for generating a prototype of speech interaction according to claim 1, wherein in step 1, a command text message can correspond to a plurality of reply text messages.
4. The method for generating a prototype of speech interaction according to claim 3, wherein, for a case where one instruction text message corresponds to a plurality of reply text messages, the response mode for selection comprises:
a sequential response mode, in which the user repeatedly inputs the instruction by voice, and each time the voice interaction prototype recognizes the instruction it broadcasts the reply voice information corresponding to the next reply text message, following the order in which the user wrote them;
and a random response mode, in which the user repeatedly inputs the instruction by voice, and each time the voice interaction prototype recognizes the instruction it randomly selects one of the reply text messages written by the user and broadcasts the corresponding reply voice information; in the random mode, each reply text has the same broadcast probability.
5. The method according to claim 1, wherein in step 3, the information in the semantic table is uploaded to a speech synthesis device, and the reply text information is preprocessed by a preset acoustic model to synthesize the corresponding reply speech information.
6. Apparatus for generating a speech interaction prototype, for use in the method of any of claims 1-5, comprising:
the first data acquisition module is used for acquiring text information input by a user;
the data processing module is used for processing the text information to obtain a semantic table, and the semantic table comprises instruction text information, instruction keywords and reply text information;
the voice synthesis module is used for pre-synthesizing reply voice information corresponding to the reply text information in the semantic table;
the second data acquisition module is used for acquiring voice instruction information input by a user;
the semantic recognition module is used for recognizing voice instruction information input by a user, capturing keywords in the voice instruction information and matching the keywords with instruction keywords in a semantic table, and if the matching degree of the instruction keywords is greater than or equal to a set threshold value, judging that the voice instruction information is instruction text information in the semantic table;
and the voice broadcasting module is used for broadcasting the pre-synthesized reply voice information.
7. Computer device, characterized in that it comprises a memory, a processor and a computer program stored on the memory and executable on the processor, which when executing the computer program implements the method of any of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910988067.3A CN110706704A (en) | 2019-10-17 | 2019-10-17 | Method, device and computer equipment for generating voice interaction prototype |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910988067.3A CN110706704A (en) | 2019-10-17 | 2019-10-17 | Method, device and computer equipment for generating voice interaction prototype |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110706704A (en) | 2020-01-17
Family
ID=69200417
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910988067.3A Pending CN110706704A (en) | 2019-10-17 | 2019-10-17 | Method, device and computer equipment for generating voice interaction prototype |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110706704A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113030990A (en) * | 2021-03-01 | 2021-06-25 | 爱驰汽车有限公司 | Fusion ranging method and device for vehicle, ranging equipment and medium |
CN113325752A (en) * | 2021-05-12 | 2021-08-31 | 北京戴纳实验科技有限公司 | Equipment management system |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110010177A1 (en) * | 2009-07-08 | 2011-01-13 | Honda Motor Co., Ltd. | Question and answer database expansion apparatus and question and answer database expansion method |
CN103187051A (en) * | 2011-12-28 | 2013-07-03 | 上海博泰悦臻电子设备制造有限公司 | Vehicle-mounted interaction device |
CN106294582A (en) * | 2016-07-28 | 2017-01-04 | 上海未来伙伴机器人有限公司 | Man-machine interaction method based on natural language and system |
CN106709072A (en) * | 2017-02-13 | 2017-05-24 | 长沙军鸽软件有限公司 | Method of obtaining intelligent conversation reply content based on shared corpora |
CN106847278A (en) * | 2012-12-31 | 2017-06-13 | 威盛电子股份有限公司 | Speech-recognition-based selection method, mobile terminal apparatus and information system |
CN107315766A (en) * | 2017-05-16 | 2017-11-03 | 广东电网有限责任公司江门供电局 | Voice response method and device integrating intelligent and manual question answering |
CN109272129A (en) * | 2018-09-20 | 2019-01-25 | 重庆先特服务外包产业有限公司 | Call center business management system |
CN109947911A (en) * | 2019-01-14 | 2019-06-28 | 深圳前海达闼云端智能科技有限公司 | Human-machine interaction method, apparatus, computing device and computer storage medium |
CN110019683A (en) * | 2017-12-29 | 2019-07-16 | 同方威视技术股份有限公司 | Intelligent voice interaction robot and voice interaction method thereof |
- 2019-10-17: Application filed in China as CN201910988067.3A (patent CN110706704A); status: Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113030990A (en) * | 2021-03-01 | 2021-06-25 | 爱驰汽车有限公司 | Vehicle fusion ranging method and device, ranging equipment and medium |
CN113030990B (en) * | 2021-03-01 | 2024-04-05 | 爱驰汽车有限公司 | Vehicle fusion ranging method and device, ranging equipment and medium |
CN113325752A (en) * | 2021-05-12 | 2021-08-31 | 北京戴纳实验科技有限公司 | Equipment management system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101213835B1 (en) | Verb error recovery in speech recognition | |
JP2019102063A (en) | Method and apparatus for controlling page | |
CN110046227B (en) | Configuration method, interaction method, device, equipment and storage medium of dialogue system | |
CN103456299A (en) | Method and device for controlling speech recognition | |
US11404052B2 (en) | Service data processing method and apparatus and related device | |
CN102842306A (en) | Voice control method and device as well as voice response method and device | |
CN105391730A (en) | Information feedback method, device and system | |
CN111326154B (en) | Voice interaction method and device, storage medium and electronic equipment | |
CN110992955A (en) | Voice operation method, device, equipment and storage medium of intelligent equipment | |
CN106713111B (en) | Processing method for adding friends, terminal and server | |
CN110047484A (en) | Speech recognition interaction method, system, device and storage medium |
CN111063355A (en) | Conference record generation method and recording terminal | |
Billing et al. | Language models for human-robot interaction | |
CN110706704A (en) | Method, device and computer equipment for generating voice interaction prototype | |
CN106205622A (en) | Information processing method and electronic equipment | |
CN114401431A (en) | Virtual human explanation video generation method and related device | |
CN105427856B (en) | Appointment data processing method and system for intelligent robot | |
US20190026266A1 (en) | Translation device and translation system | |
CN114064943A (en) | Conference management method, conference management device, storage medium and electronic equipment | |
CN116825105A (en) | Speech recognition method based on artificial intelligence | |
CN113362806A (en) | Intelligent sound evaluation method, system, storage medium and computer equipment thereof | |
CN112447179A (en) | Voice interaction method, device, equipment and computer readable storage medium | |
CN113643706B (en) | Speech recognition method, device, electronic equipment and storage medium | |
CN112820265B (en) | Speech synthesis model training method and related device | |
CN111176430B (en) | Interaction method of intelligent terminal, intelligent terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200117 | |