CN110570867A - Voice processing method and system for locally added corpus - Google Patents
- Publication number
- CN110570867A CN110570867A CN201910861559.6A CN201910861559A CN110570867A CN 110570867 A CN110570867 A CN 110570867A CN 201910861559 A CN201910861559 A CN 201910861559A CN 110570867 A CN110570867 A CN 110570867A
- Authority
- CN
- China
- Prior art keywords
- corpus
- voice
- information
- local
- text
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/08—Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
The invention discloses a voice processing method and system for a locally added corpus.
Description
Technical Field
The application relates to the technical field of voice processing, and in particular to a voice processing method and system for a locally added corpus.
Background
With the development of technology, various advanced capabilities, such as voice control and intelligent driving, have been added to vehicles.
Currently, vehicles are equipped with voice control functions, and a user can control corresponding vehicle functions through voice input: for example, adjusting the air conditioner, making phone calls, and even adjusting the vehicle's seats by voice.
However, current voice control requires the user to input standard voice information. If the user inputs non-standard voice information, the system cannot recognize it, which makes voice control limited and impractical.
Disclosure of Invention
The invention provides a voice processing method and system for a locally added corpus, to solve the prior-art problem that the system cannot recognize non-standard voice information input by a user, which makes voice control limited and impractical.
The specific technical solution is as follows:
A voice processing method for a locally added corpus, the method comprising:
Acquiring voice information input by a user, and converting the voice information into first text information;
Judging whether the first text information has corresponding corpora in a local corpus or not;
if no corresponding corpus exists, outputting prompt information for prompting the user to input the voice information again;
when the user re-inputs the voice information according to the prompt information, determining second text information corresponding to the re-input voice information;
and storing the second text information as a new corpus in a dynamic corpus.
Optionally, storing the second text information as a new corpus in the dynamic corpus includes:
outputting, by voice, the second text information and the association information of the determined corpus;
and, when a confirmation instruction for the association information input by the user is received, associating the second text information with the determined corpus as a new corpus and adding it to the dynamic corpus.
Optionally, the method further includes:
when a corpus corresponding to the first text information exists in the local corpus, outputting an answer text corresponding to the corpus;
and converting the answer text into voice data for output through text-to-speech (TTS) technology.
Optionally, the method further includes:
when a newly added corpus exists in the local corpus, matching the dynamic corpus with the local corpus;
and when repeated corpora exist in both the dynamic corpus and the local corpus, deleting the repeated corpora from the dynamic corpus or the local corpus.
Optionally, the method further includes:
detecting whether the local corpus is in an idle state;
actively outputting a voice signal when the local corpus is in an idle state, and receiving an audio signal returned by the user in response to the voice signal;
and replying to the audio signal according to the local corpus.
A voice processing system for a locally added corpus, the system comprising:
an acquisition module, configured to acquire voice information input by a user and convert the voice information into first text information;
a processing module, configured to judge whether the first text information has a corresponding corpus in the local corpus; if no corresponding corpus exists, output prompt information for prompting the user to input the voice information again; when the user re-inputs the voice information according to the prompt information, determine second text information corresponding to the re-input voice information; and store the second text information as a new corpus in a dynamic corpus.
Optionally, the processing module is specifically configured to output, by voice, the second text information and the association information of the determined corpus; and, when a confirmation instruction for the association information input by the user is received, associate the second text information with the determined corpus as a new corpus and add it to the dynamic corpus.
Optionally, the processing module is further configured to output an answer text corresponding to the corpus when a corpus corresponding to the first text information exists in the local corpus, and to convert the answer text into voice data for output through text-to-speech (TTS) technology.
Optionally, the processing module is further configured to match the dynamic corpus against the local corpus when a newly added corpus exists in the local corpus, and to delete the repeated corpora from the dynamic corpus or the local corpus when repeated corpora exist in both.
Optionally, the processing module is further configured to detect whether the local corpus is in an idle state; to actively output a voice signal when the local corpus is in an idle state and receive an audio signal returned by the user in response to the voice signal; and to reply to the audio signal according to the local corpus.
By this method, when the system cannot recognize a corpus corresponding to the user's voice through the local corpus and the dynamic corpus, the system confirms with the user; after the user confirms, the system takes the corpus corresponding to the voice as a new corpus and adds the new corpus to the dynamic corpus.
drawings
FIG. 1 is a flowchart of a voice processing method for a locally added corpus according to an embodiment of the present invention;
FIG. 2 is a flow chart of a speech processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a process of performing voice reply on a local corpus according to an embodiment of the present invention;
FIG. 4 is a second exemplary flowchart of speech processing according to the present invention;
FIG. 5 is a schematic structural diagram of a speech processing system with locally added corpora according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments. It should be understood that the embodiments of the present invention and the specific technical features therein are merely illustrative of the technical solutions and are not restrictive; where no conflict arises, the embodiments and the specific technical features in them may be combined with each other.
Fig. 1 is a flowchart of a voice processing method for a locally added corpus according to an embodiment of the present invention, where the method includes:
S1, acquiring voice information input by a user, and converting the voice information into first text information;
S2, judging whether the first text information has a corresponding corpus in the local corpus;
if the corpus does not exist in the local corpus, steps S3-S5 are executed; if the corpus exists in the local corpus, steps S6-S7 are executed.
S3, outputting prompt information for prompting the user to input the voice information again;
S4, when the user re-inputs the voice information according to the prompt information, determining second text information corresponding to the re-input voice information;
S5, storing the second text information as a new corpus in a dynamic corpus;
S6, outputting an answer text corresponding to the corpus;
and S7, converting the answer text into voice data through text-to-speech (TTS) technology and outputting the voice data.
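Steps S1-S7 above can be sketched roughly as follows. This is a minimal illustration only, assuming ASR has already produced the text: the corpus contents, the dictionary representation, and the learning policy are assumptions of the sketch, not part of the disclosure.

```python
# Illustrative corpora; real systems would hold many more entries.
LOCAL_CORPUS = {
    "turn on the air conditioner": "The air conditioner has been turned on for you",
}
DYNAMIC_CORPUS = {}  # maps user phrasings to known local-corpus phrases

def handle_utterance(text, retry_text=None):
    """S2: look up the text; S3-S5: learn from a re-input; S6-S7: answer."""
    if text in LOCAL_CORPUS:
        return LOCAL_CORPUS[text]                  # S6: answer text found locally
    if text in DYNAMIC_CORPUS:
        return LOCAL_CORPUS[DYNAMIC_CORPUS[text]]  # previously learned phrasing
    if retry_text in LOCAL_CORPUS:
        DYNAMIC_CORPUS[text] = retry_text          # S5: store the new corpus
        return LOCAL_CORPUS[retry_text]
    return "Sorry, I did not understand; please say it another way"  # S3: prompt
```

A second utterance of a phrase learned this way is answered directly from the dynamic corpus without another prompt.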
By this method, when the system cannot recognize a corpus corresponding to the user's voice through the local corpus and the dynamic corpus, the system confirms with the user; after the user confirms, the system takes the corpus corresponding to the voice as a new corpus and adds the new corpus to the dynamic corpus.
The technical solution of the present invention is explained more fully below through specific scenarios.
In this method, the system maintains two corpus comparison libraries: a dynamic corpus and a local corpus. In the embodiment of the present invention, the dynamic corpus may be a natural language processing (NLP) domain NLP1, and the local corpus may be an NLP domain NLP2.
For example, fig. 2 shows a schematic flow of performing a voice reply based on the local corpus. In fig. 2, the user's voice information is first acquired and converted into text information through ASR. An arbiter then judges the text information: if the goodness of fit is greater than 0.95, corpus matching is performed through the local NLP. If matching succeeds, the answer text is output as a TTS answer; if local NLP matching fails, a failure reply is given and TTS outputs that the input was not understood. Corpus learning is then performed through the incremental learning engine, which judges whether to add a corpus: if yes, a new corpus is added to the dynamic corpus; if not, the flow ends.
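The arbiter's "goodness of fit" is not specified in the description beyond the 0.95 threshold; one plausible sketch uses a generic string-similarity ratio. The metric and the example phrases below are assumptions, not the patented method.

```python
from difflib import SequenceMatcher

THRESHOLD = 0.95  # the goodness-of-fit threshold named in the description

def arbitrate(text, corpus):
    """Return the best-matching corpus phrase if its fit exceeds the threshold, else None."""
    best_phrase, best_score = None, 0.0
    for phrase in corpus:
        score = SequenceMatcher(None, text, phrase).ratio()  # assumed similarity metric
        if score > best_score:
            best_phrase, best_score = phrase, score
    return best_phrase if best_score > THRESHOLD else None
```

With such a high threshold, only near-exact phrasings match; anything else falls through to the incremental learning path.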
For example, as shown in fig. 3, after the user's voice information is acquired, it is converted into the text information "turn on the cold air". Since this corpus exists in the local corpus, after the text information is obtained through ASR, the matching degree between the text information and the local corpus reaches 0.95; the corresponding TTS answer is then obtained from the local corpus and output.
Further, a schematic flow of performing a voice reply based on the dynamic corpus is also shown in fig. 2. The user's voice information is first acquired and converted into the text information "turn on the air conditioner". Since this corpus exists in the dynamic corpus, after the text information is obtained through ASR, the matching degree between the text information and the dynamic corpus reaches 0.95; the corresponding TTS answer is then obtained from the dynamic corpus and output.
In both of the above methods, after the user's voice information is obtained, the system can decide which corpus to use for the reply according to the matching degrees with the dynamic corpus and the local corpus, thereby accurately recognizing different voice information and improving the practicability of the voice system.
Further, in the embodiment of the present invention, the corpus table and the corpus need to be updated in real time before matching is performed. Specifically, after the first text information corresponding to the user's voice information is obtained, it is judged whether the first text information has a corresponding corpus in the dynamic corpus and the local corpus; if not, prompt information prompting the user to re-input the voice information is output. When the user re-inputs the voice information according to the prompt information, second text information corresponding to the re-input voice information is determined, and it is judged whether the second text information has a corresponding corpus in the dynamic corpus and the local corpus. If so, the second text information is associated with the determined corpus, and the association is added to the dynamic corpus as a new corpus.
For example, as shown in fig. 4, the user's voice is first obtained and the corresponding text information is determined through ASR. The system then judges by comparison whether a corresponding corpus exists. If not, the user is asked to say it again; after the user rephrases and speaks again, the system takes the ASR recognition result and performs the comparison again. If a corresponding corpus is found this time, the instruction is executed and associated with the phrase that could not be understood. The new corpus is then confirmed, and after the user confirms it by voice, the new corpus is added to the corresponding corpus or corpus table.
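The confirmation step of the fig. 4 flow might be sketched as follows, where confirm() stands in for the user's spoken yes/no answer. All names and the reply wording here are illustrative assumptions.

```python
def confirm_new_corpus(unknown_text, known_text, dynamic_corpus, confirm):
    """Associate the not-understood phrase with the recognized one after the user confirms."""
    question = f"Does '{unknown_text}' mean '{known_text}'?"
    if confirm(question):                 # e.g. the user answers "Yes" / "Right"
        dynamic_corpus[unknown_text] = known_text
        return f"Good, I have learned that '{unknown_text}' means '{known_text}'"
    return "Okay, I will not add that corpus"
```

Injecting confirm() as a callback keeps the learning logic separate from the speech input/output layer.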
The technical solution of the present invention is further explained below through a specific application scenario.
1. The car owner sends a voice instruction to the vehicle-mounted robot: "Turn on the cold air";
2. The vehicle-mounted robot cannot find the keyword "turn on the cold air" in the dynamic corpus and replies: "I don't quite understand; you can try saying it another way";
3. The car owner then sends the voice instruction: "Turn on the air conditioner";
4. The vehicle-mounted robot executes the instruction, associates it with the corpus it could not understand, and confirms the new corpus: "Does 'turn on the cold air' mean 'turn on the air conditioner'?";
5. The car owner replies to the confirmation question, for example: "Yes" / "Right";
6. The vehicle-mounted robot replies: "Good, I have learned that 'turn on the cold air' means 'turn on the air conditioner'";
7. The car owner sends the voice instruction again: "Turn on the cold air". The corpus has now been added to the local NLP dynamic corpus, so the robot executes the task according to the voice instruction and responds: "The air conditioner has been turned on for you";
8. The vehicle end provides a list of the newly added corpora, and the user can delete, view, and modify the corresponding functions of the newly added corpora.
By this method, the corpus table and the corpus can be updated in real time, so that they remain accurate in real time.
Further, in the embodiment of the present invention, when a newly added corpus exists in the local corpus, the dynamic corpus is matched against the local corpus; when repeated corpora exist in both the dynamic corpus and the local corpus, the repeated corpora are deleted from the dynamic corpus or the local corpus. Through this matching process, duplication between the corpus table and the corpus can be avoided in time, and the accuracy of corpus determination is improved.
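One possible realization of this de-duplication pass is shown below. Representing both corpora as dictionaries and keeping the local copy are assumptions of this sketch; the patent leaves the choice of which copy to delete open.

```python
def deduplicate(dynamic_corpus, local_corpus):
    """Delete from the dynamic corpus any corpus entry that already exists locally."""
    for phrase in set(dynamic_corpus) & set(local_corpus):
        del dynamic_corpus[phrase]   # the local corpus keeps the only copy
    return dynamic_corpus
```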
Further, in the embodiment of the present invention, in order to improve the user experience, the system detects in real time whether the local corpus is in an idle state; when the local corpus is in an idle state, the system actively outputs a voice signal, receives the audio signal returned by the user in response to the voice signal, and replies to the audio signal according to the local corpus.
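The idle-state behaviour could be sketched as follows; is_idle(), the prompt wording, and the callback shapes are all assumptions of this sketch rather than details given in the description.

```python
def proactive_turn(is_idle, speak, listen, reply_from_corpus):
    """When idle, actively start a dialogue, then answer the user's reply from the local corpus."""
    if not is_idle():
        return None                                  # busy: do nothing
    speak("Is there anything I can help you with?")  # actively output a voice signal
    user_audio = listen()                            # receive the user's audio signal
    return reply_from_corpus(user_audio)             # reply according to the local corpus
```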
Corresponding to the method provided by the embodiment of the present invention, an embodiment of the present invention further provides a system. Fig. 5 is a schematic structural diagram of a voice processing system for a locally added corpus in an embodiment of the present invention, and the system includes:
An obtaining module 501, configured to obtain voice information input by a user, and convert the voice information into first text information;
A processing module 502, configured to judge whether the first text information has a corresponding corpus in the local corpus; if no corresponding corpus exists, output prompt information for prompting the user to input the voice information again; when the user re-inputs the voice information according to the prompt information, determine second text information corresponding to the re-input voice information; and store the second text information as a new corpus in a dynamic corpus.
Further, in the embodiment of the present invention, the processing module 502 is specifically configured to output, by voice, the second text information and the association information of the determined corpus; and, when a confirmation instruction for the association information input by the user is received, associate the second text information with the determined corpus as a new corpus and add it to the dynamic corpus.
Further, in the embodiment of the present invention, the processing module 502 is further configured to output an answer text corresponding to the corpus when a corpus corresponding to the first text information exists in the local corpus, and to convert the answer text into voice data for output through text-to-speech (TTS) technology.
Further, in the embodiment of the present invention, the processing module 502 is further configured to match the dynamic corpus against the local corpus when a newly added corpus exists in the local corpus, and to delete the repeated corpora from the dynamic corpus or the local corpus when repeated corpora exist in both.
Further, in the embodiment of the present invention, the processing module 502 is further configured to detect whether the local corpus is in an idle state; to actively output a voice signal when the local corpus is in an idle state and receive an audio signal returned by the user in response to the voice signal; and to reply to the audio signal according to the local corpus.
While the preferred embodiments of the present application have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. A voice processing method for a locally added corpus, characterized by comprising the following steps:
acquiring voice information input by a user, and converting the voice information into first text information;
judging whether the first text information has corresponding corpora in a local corpus or not;
if no corresponding corpus exists, outputting prompt information for prompting the user to input the voice information again;
when the user re-inputs the voice information according to the prompt information, determining second text information corresponding to the re-input voice information;
and storing the second text information as a new corpus in a dynamic corpus.
2. The method of claim 1, wherein storing the second text information as a new corpus in the dynamic corpus comprises:
outputting, by voice, the second text information and the association information of the determined corpus;
and, when a confirmation instruction for the association information input by the user is received, associating the second text information with the determined corpus as a new corpus and adding it to the dynamic corpus.
3. The method of claim 1, wherein the method further comprises:
when a corpus corresponding to the first text information exists in the local corpus, outputting an answer text corresponding to the corpus;
and converting the answer text into voice data for output through text-to-speech (TTS) technology.
4. The method of claim 1, wherein the method further comprises:
when a newly added corpus exists in the local corpus, matching the dynamic corpus with the local corpus;
and when the repeated corpora exist in the dynamic corpus and the local corpus, deleting the repeated corpora in the dynamic corpus or the local corpus.
5. The method of claim 1, wherein the method further comprises:
Detecting whether the local corpus is in an idle state;
actively outputting a voice signal when the local corpus is in an idle state, and receiving an audio signal returned by the user in response to the voice signal;
and replying to the audio signal according to the local corpus.
6. A voice processing system for a locally added corpus, characterized in that the system comprises:
an acquisition module, configured to acquire voice information input by a user and convert the voice information into first text information;
a processing module, configured to judge whether the first text information has a corresponding corpus in the local corpus; if no corresponding corpus exists, output prompt information for prompting the user to input the voice information again; when the user re-inputs the voice information according to the prompt information, determine second text information corresponding to the re-input voice information; and store the second text information as a new corpus in a dynamic corpus.
7. The system according to claim 6, wherein the processing module is specifically configured to output, by voice, the second text information and the association information of the determined corpus; and, when a confirmation instruction for the association information input by the user is received, associate the second text information with the determined corpus as a new corpus and add it to the dynamic corpus.
8. The system according to claim 6, wherein the processing module is further configured to output an answer text corresponding to the corpus when a corpus corresponding to the first text information exists in the local corpus; and to convert the answer text into voice data for output through text-to-speech (TTS) technology.
9. The system of claim 6, wherein the processing module is further configured to match the dynamic corpus with the local corpus when a newly added corpus exists in the local corpus; and to delete the repeated corpora from the dynamic corpus or the local corpus when repeated corpora exist in both.
10. The system of claim 6, wherein the processing module is further configured to detect whether the local corpus is in an idle state; to actively output a voice signal when the local corpus is in an idle state and receive an audio signal returned by the user in response to the voice signal; and to reply to the audio signal according to the local corpus.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910861559.6A CN110570867A (en) | 2019-09-12 | 2019-09-12 | Voice processing method and system for locally added corpus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110570867A true CN110570867A (en) | 2019-12-13 |
Family
ID=68779414
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910861559.6A Pending CN110570867A (en) | 2019-09-12 | 2019-09-12 | Voice processing method and system for locally added corpus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110570867A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111063353A (en) * | 2019-12-31 | 2020-04-24 | 苏州思必驰信息科技有限公司 | Client processing method allowing user-defined voice interactive content and user terminal |
CN111933120A (en) * | 2020-08-19 | 2020-11-13 | 潍坊医学院 | Voice data automatic labeling method and system for voice recognition |
CN113160807A (en) * | 2020-01-22 | 2021-07-23 | 广州汽车集团股份有限公司 | Corpus updating method and system and voice control equipment |
CN113723113A (en) * | 2021-07-23 | 2021-11-30 | 上海原圈网络科技有限公司 | Conversation-based client identity recognition processing method and system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103456296A (en) * | 2012-05-31 | 2013-12-18 | 三星电子株式会社 | Method for providing voice recognition function and electronic device thereof |
CN105744326A (en) * | 2016-02-03 | 2016-07-06 | 广东长虹电子有限公司 | Editable voice intelligent control method and system for television |
CN105825848A (en) * | 2015-01-08 | 2016-08-03 | 宇龙计算机通信科技(深圳)有限公司 | Method, device and terminal for voice recognition |
CN106156022A (en) * | 2015-03-23 | 2016-11-23 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN106328143A (en) * | 2015-06-23 | 2017-01-11 | 中兴通讯股份有限公司 | Voice control method and device and mobile terminal |
CN107195300A (en) * | 2017-05-15 | 2017-09-22 | 珠海格力电器股份有限公司 | Sound control method and system |
CN108831469A (en) * | 2018-08-06 | 2018-11-16 | 珠海格力电器股份有限公司 | Voice command method for customizing, device and equipment and computer storage medium |
CN108962233A (en) * | 2018-07-26 | 2018-12-07 | 苏州思必驰信息科技有限公司 | Voice dialogue processing method and system for voice dialogue platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110570867A (en) | Voice processing method and system for locally added corpus | |
CN109785828B (en) | Natural language generation based on user speech styles | |
CN109841212B (en) | Speech recognition system and speech recognition method for analyzing commands with multiple intents | |
US20140324429A1 (en) | Computer-implemented method for automatic training of a dialogue system, and dialogue system for generating semantic annotations | |
DE102013223036B4 (en) | Adaptation methods for language systems | |
CN111191450B (en) | Corpus cleaning method, corpus input device and computer readable storage medium | |
CN103928027B (en) | Adaptive approach and system for voice system | |
CN108447488B (en) | Enhanced speech recognition task completion | |
CN109920410B (en) | Apparatus and method for determining reliability of recommendation based on environment of vehicle | |
US9202459B2 (en) | Methods and systems for managing dialog of speech systems | |
CN105469797A (en) | Method and system for controlling switching-over from intelligent voice identification to manual services | |
CN112000787B (en) | Voice interaction method, server and voice interaction system | |
CN109785831A (en) | Check method, control device and the motor vehicle of the vehicle-mounted voice identifier of motor vehicle | |
US20230205998A1 (en) | Named entity recognition system and named entity recognition method | |
CN111833870A (en) | Awakening method and device of vehicle-mounted voice system, vehicle and medium | |
CN105869631B (en) | The method and apparatus of voice prediction | |
US20190189113A1 (en) | System and method for understanding standard language and dialects | |
CN111797208A (en) | Dialog system, electronic device and method for controlling a dialog system | |
US20140136204A1 (en) | Methods and systems for speech systems | |
CN113535308A (en) | Language adjusting method, language adjusting device, electronic equipment and medium | |
KR102174148B1 (en) | Speech Recognition Method Determining the Subject of Response in Natural Language Sentences | |
US11646031B2 (en) | Method, device and computer-readable storage medium having instructions for processing a speech input, transportation vehicle, and user terminal with speech processing | |
CN111089603B (en) | Navigation information prompting method based on social application communication content and vehicle | |
US20150039312A1 (en) | Controlling speech dialog using an additional sensor | |
CN107195298B (en) | Root cause analysis and correction system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191213 |