CN106297801A - Speech processing method and device - Google Patents
Speech processing method and device
- Publication number
- CN106297801A CN106297801A CN201610677323.3A CN201610677323A CN106297801A CN 106297801 A CN106297801 A CN 106297801A CN 201610677323 A CN201610677323 A CN 201610677323A CN 106297801 A CN106297801 A CN 106297801A
- Authority
- CN
- China
- Prior art keywords
- content information
- text content
- command
- enter
- voice information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
Abstract
The present invention relates to a speech processing method and device. The method includes: obtaining voice information input by a user; recognizing the voice information to obtain the text content information corresponding to the voice information; performing semantic analysis on the text content information and determining, according to the analysis result, the command type to which the text content information belongs, where the command types include control commands and session commands; and entering a control mode when the text content information is determined to be a control command, or entering a chat conversation mode when it is determined to be a session command. With this technical scheme, different operating modes can be entered according to the command type: the control mode when the command type is a control command, and the chat conversation mode when the command type is a session command, thereby meeting the different needs of different users and improving the user experience.
Description
Technical field
The present invention relates to the field of voice processing technology, and in particular to a speech processing method and device.
Background
Speech recognition is an interdisciplinary field. Over the past two decades, speech recognition technology has made marked progress and has begun to move from the laboratory to the market. It is expected that within the next ten years, speech recognition technology will enter fields such as industry, household appliances, communications, automotive electronics, medical care, home services, and consumer electronics. The application of speech recognition dictation machines in certain fields was ranked by the US news media as one of the ten major events in computer development in 1997. Many experts consider speech recognition one of the ten most important technologies in the information technology field between 2000 and 2010. The fields involved in speech recognition technology include signal processing, pattern recognition, probability and information theory, speech production and auditory mechanisms, artificial intelligence, and so on.
Summary of the invention
Embodiments of the present invention provide a speech processing method and device, so as to enter different modes and perform different operations according to different commands.
According to a first aspect of the embodiments of the present invention, a speech processing method is provided, including:
obtaining voice information input by a user;
recognizing the voice information to obtain the text content information corresponding to the voice information;
performing semantic analysis on the text content information, and determining according to the analysis result the command type to which the text content information belongs, where the command types include control commands and session commands;
entering a control mode when the text content information is determined to be a control command, and entering a chat conversation mode when the text content information is determined to be a session command.
In this embodiment, after a terminal device or the like recognizes the text content information corresponding to the voice information, it performs semantic analysis on the text content information and determines the command type to which the text content information belongs. According to the command type, it enters a different operating mode: the control mode when the command type is a control command, and the chat conversation mode when the command type is a session command, thereby meeting the different needs of different users and improving the user experience.
In one embodiment, the method further includes:
after entering the control mode, executing the control command;
after the control command has been executed, entering a standby state.
In this embodiment, after the terminal device enters the control mode, it executes the received control command and, once the control command has finished executing, enters a standby state. In this way, the user can control the operation of the terminal device simply by inputting voice information, and the terminal can automatically enter the standby state after executing the command, thereby avoiding unnecessary power consumption and improving the user experience.
In one embodiment, the method further includes:
after entering the chat conversation mode, searching for answer content matching the text content information, and outputting the answer content.
In this embodiment, when the terminal enters the chat conversation mode, it means the terminal needs to reply to the voice information input by the user. The terminal then searches for answer content matching the text content information and outputs the answer content, thereby realizing a dialogue with the user.
In one embodiment, the method further includes:
entering a standby state when it is detected that no voice information input by the user has been received again within a preset time period.
In this embodiment, after the terminal outputs the answer content, if it detects that no voice information input by the user has been received again within the preset time period, it enters the standby state, thereby avoiding unnecessary power consumption and improving the user experience.
In one embodiment, the method further includes:
after entering the standby state, detecting whether a preset wake-up instruction is received;
entering a wake-up state when the preset wake-up instruction is received.
In this embodiment, after the terminal enters the standby state, it enters the wake-up state if it receives a preset wake-up instruction. The preset wake-up instruction may be a preset wake-up keyword input by the user's voice, or a wake-up instruction input by the user through a certain touch operation.
According to a second aspect of the embodiments of the present invention, a voice processing device is provided, including:
an acquisition module, configured to obtain voice information input by a user;
a recognition module, configured to recognize the voice information to obtain the text content information corresponding to the voice information;
a determination module, configured to perform semantic analysis on the text content information and determine according to the analysis result the command type to which the text content information belongs, where the command types include control commands and session commands;
a processing module, configured to enter a control mode when the text content information is determined to be a control command, and to enter a chat conversation mode when the text content information is determined to be a session command.
In one embodiment, the device further includes:
an execution module, configured to execute the control command after the control mode is entered;
a first standby module, configured to enter a standby state after the control command has been executed.
In one embodiment, the device further includes:
a search module, configured to, after the chat conversation mode is entered, search for answer content matching the text content information and output the answer content.
In one embodiment, the device further includes:
a second standby module, configured to enter a standby state when it is detected that no voice information input by the user has been received again within a preset time period.
In one embodiment, the device further includes:
a detection module, configured to detect, after the standby state is entered, whether a preset wake-up instruction is received;
a wake-up module, configured to enter a wake-up state when the preset wake-up instruction is received.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present invention.
Other features and advantages of the present invention will be set forth in the following description, and in part will become apparent from the description or be understood by practicing the present invention. The objects and other advantages of the present invention can be realized and obtained by the structures particularly pointed out in the written description, claims, and accompanying drawings.
The technical scheme of the present invention is described in further detail below with reference to the drawings and embodiments.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the present invention.
Fig. 1 is a flowchart of a speech processing method according to an exemplary embodiment.
Fig. 2 is a flowchart of another speech processing method according to an exemplary embodiment.
Fig. 3 is a flowchart of yet another speech processing method according to an exemplary embodiment.
Fig. 4 is a flowchart of yet another speech processing method according to an exemplary embodiment.
Fig. 5 is a flowchart of yet another speech processing method according to an exemplary embodiment.
Fig. 6 is a block diagram of a voice processing device according to an exemplary embodiment.
Fig. 7 is a block diagram of another voice processing device according to an exemplary embodiment.
Fig. 8 is a block diagram of yet another voice processing device according to an exemplary embodiment.
Fig. 9 is a block diagram of yet another voice processing device according to an exemplary embodiment.
Fig. 10 is a block diagram of yet another voice processing device according to an exemplary embodiment.
Detailed description of the invention
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. In the following description, when the accompanying drawings are referred to, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present invention as detailed in the appended claims.
Fig. 1 is a flowchart of a speech processing method according to an exemplary embodiment. This speech processing method is applied to a terminal device, which may be any device with a voice control function, such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
As shown in Fig. 1, the method includes steps S101-S104:
In step S101, voice information input by a user is obtained;
In step S102, the voice information is recognized to obtain the text content information corresponding to the voice information;
In step S103, semantic analysis is performed on the text content information, and the command type to which the text content information belongs is determined according to the analysis result, where the command types include control commands and session commands;
In step S104, when the text content information is determined to be a control command, a control mode is entered; when the text content information is determined to be a session command, a chat conversation mode is entered.
In this embodiment, after a terminal device or the like recognizes the text content information corresponding to the voice information, it performs semantic analysis on the text content information and determines the command type to which the text content information belongs. According to the command type, it enters a different operating mode: the control mode when the command type is a control command, and the chat conversation mode when the command type is a session command, thereby meeting the different needs of different users and improving the user experience.
For example, if the text content information corresponding to the voice information input by the user is "play a song", the terminal determines through semantic recognition that this is a control command, enters the control mode, plays a song for the user, and enters the standby state after the command has been executed.
If the text content information corresponding to the voice information input by the user is "what should I do if I have a stomachache", the terminal determines through semantic recognition that this is a session command, enters the chat conversation mode, and chats with the user.
Fig. 2 is a flowchart of another speech processing method according to an exemplary embodiment.
As shown in Fig. 2, in one embodiment, the above method further includes steps S201-S202:
In step S201, after the control mode is entered, the control command is executed;
A control command may trigger the execution of a single command or of an interactive sequence of subcommands. If a single command is triggered, the standby state is entered after it is executed. If interactive subcommands are triggered, then after one subcommand has finished executing, the next subcommand is executed, and the standby state is entered only after all commands have been executed.
In step S202, after the control command has been executed, the standby state is entered.
In this embodiment, after the terminal device enters the control mode, it executes the received control command and, once the control command has finished executing, enters the standby state. In this way, the user can control the operation of the terminal device simply by inputting voice information, and the terminal can automatically enter the standby state after executing the command, thereby avoiding unnecessary power consumption and improving the user experience.
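The single-command versus multi-subcommand behavior of S201-S202 can be sketched as below. The command representation (plain callables in a list) is an assumption for illustration; the patent leaves the terminal's internal command format unspecified.

```python
# Hedged sketch of S201-S202: run each subcommand in order (a single command
# is just a sequence of length one), then enter the standby state.
from typing import Callable, Sequence

def run_control_command(subcommands: Sequence[Callable[[], None]]) -> str:
    """Execute subcommands one by one; enter standby only after all finish (S202)."""
    for command in subcommands:
        command()          # one subcommand finishes before the next starts
    return "standby"       # automatic standby avoids unnecessary power consumption

executed = []
state = run_control_command([
    lambda: executed.append("open player"),
    lambda: executed.append("play song"),
])
print(state, executed)
```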
Fig. 3 is a flowchart of yet another speech processing method according to an exemplary embodiment.
As shown in Fig. 3, in one embodiment, the above method further includes step S301:
In step S301, after the chat conversation mode is entered, answer content matching the text content information is searched for, and the answer content is output.
In this embodiment, when the terminal enters the chat conversation mode, it means the terminal needs to reply to the voice information input by the user. The terminal then searches for answer content matching the text content information and outputs the answer content, thereby realizing a dialogue with the user.
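The answer search of step S301 can be sketched as a lookup over a small answer base. The matching strategy (naive keyword containment), the stored answers, and the fallback reply are all illustrative assumptions; the patent does not specify the search method.

```python
# Illustrative sketch of S301: find answer content matching the recognized
# text and output it. A real terminal would use a proper retrieval backend.
ANSWER_BASE = {
    "stomachache": "Drink warm water and rest; see a doctor if the pain persists.",
    "weather": "I cannot sense the weather, but I can look it up for you.",
}

def search_answer(text: str) -> str:
    """Return the first stored answer whose keyword appears in the text."""
    lowered = text.lower()
    for keyword, answer in ANSWER_BASE.items():
        if keyword in lowered:
            return answer
    return "Sorry, I do not have an answer for that yet."  # fallback reply

print(search_answer("what should I do if I have a stomachache"))
```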
Fig. 4 is a flowchart of yet another speech processing method according to an exemplary embodiment.
As shown in Fig. 4, in one embodiment, after step S301, the above method further includes step S401:
In step S401, when it is detected that no voice information input by the user has been received again within a preset time period, the standby state is entered.
In this embodiment, after the terminal outputs the answer content, if it detects that no voice information input by the user has been received again within the preset time period, it enters the standby state, thereby avoiding unnecessary power consumption and improving the user experience.
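The inactivity check of step S401 reduces to comparing the time since the last voice input against the preset period. In this sketch the timestamps are injected as parameters so the logic is testable without real audio input, and the 10-second timeout is a hypothetical value, not one given in the patent.

```python
# Sketch of S401: enter standby when no new voice input arrives within a
# preset time period after the last reply.
PRESET_TIMEOUT_S = 10.0  # hypothetical preset time period

def next_state(last_input_time: float, now: float) -> str:
    """Stay awake while input is recent; otherwise enter the standby state."""
    if now - last_input_time > PRESET_TIMEOUT_S:
        return "standby"
    return "awake"

print(next_state(last_input_time=0.0, now=3.0))   # recent input, stay awake
print(next_state(last_input_time=0.0, now=15.0))  # timed out, enter standby
```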
Fig. 5 is a flowchart of yet another speech processing method according to an exemplary embodiment.
As shown in Fig. 5, in one embodiment, the above method further includes steps S501-S502:
In step S501, after the standby state is entered, whether a preset wake-up instruction is received is detected;
In step S502, when the preset wake-up instruction is received, the wake-up state is entered.
In this embodiment, after the terminal enters the standby state, it enters the wake-up state if it receives a preset wake-up instruction. The preset wake-up instruction may be a preset wake-up keyword input by the user's voice, or a wake-up instruction input by the user through a certain touch operation.
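The two wake-up paths of S501-S502 (spoken keyword or touch operation) can be sketched as an event handler over the standby state. The wake-up keyword and the event type names are illustrative assumptions.

```python
# Sketch of S501-S502: in standby, wake on a matching voice keyword or on a
# touch event; ignore everything else.
WAKE_KEYWORD = "hello terminal"  # hypothetical preset wake-up keyword

def handle_event(state: str, event_type: str, payload: str = "") -> str:
    """Leave standby only for the preset voice keyword or a touch wake-up."""
    if state != "standby":
        return state  # only the standby state reacts to wake-up instructions
    if event_type == "voice" and payload.lower().strip() == WAKE_KEYWORD:
        return "awake"
    if event_type == "touch":
        return "awake"
    return "standby"

print(handle_event("standby", "voice", "Hello Terminal"))  # keyword matches
print(handle_event("standby", "voice", "play a song"))     # no match, stay in standby
```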
The following are device embodiments of the present invention, which may be used to perform the method embodiments of the present invention.
Fig. 6 is a block diagram of a voice processing device according to an exemplary embodiment. The device may be implemented by software, hardware, or a combination of both as part or all of a terminal device. As shown in Fig. 6, the voice processing device includes:
an acquisition module 61, configured to obtain voice information input by a user;
a recognition module 62, configured to recognize the voice information to obtain the text content information corresponding to the voice information;
a determination module 63, configured to perform semantic analysis on the text content information and determine according to the analysis result the command type to which the text content information belongs, where the command types include control commands and session commands;
a processing module 64, configured to enter a control mode when the text content information is determined to be a control command, and to enter a chat conversation mode when the text content information is determined to be a session command.
In this embodiment, after a terminal device or the like recognizes the text content information corresponding to the voice information, it performs semantic analysis on the text content information and determines the command type to which the text content information belongs. According to the command type, it enters a different operating mode: the control mode when the command type is a control command, and the chat conversation mode when the command type is a session command, thereby meeting the different needs of different users and improving the user experience.
For example, if the text content information corresponding to the voice information input by the user is "play a song", the terminal determines through semantic recognition that this is a control command, enters the control mode, plays a song for the user, and enters the standby state after the command has been executed.
If the text content information corresponding to the voice information input by the user is "what should I do if I have a stomachache", the terminal determines through semantic recognition that this is a session command, enters the chat conversation mode, and chats with the user.
Fig. 7 is a block diagram of another voice processing device according to an exemplary embodiment.
As shown in Fig. 7, in one embodiment, the above device further includes:
an execution module 71, configured to execute the control command after the control mode is entered;
A control command may trigger the execution of a single command or of an interactive sequence of subcommands. If a single command is triggered, the standby state is entered after it is executed. If interactive subcommands are triggered, then after one subcommand has finished executing, the next subcommand is executed, and the standby state is entered only after all commands have been executed.
a first standby module 72, configured to enter a standby state after the control command has been executed.
In this embodiment, after the terminal device enters the control mode, it executes the received control command and, once the control command has finished executing, enters the standby state. In this way, the user can control the operation of the terminal device simply by inputting voice information, and the terminal can automatically enter the standby state after executing the command, thereby avoiding unnecessary power consumption and improving the user experience.
Fig. 8 is a block diagram of yet another voice processing device according to an exemplary embodiment.
As shown in Fig. 8, in one embodiment, the above device further includes:
a search module 81, configured to, after the chat conversation mode is entered, search for answer content matching the text content information and output the answer content.
In this embodiment, when the terminal enters the chat conversation mode, it means the terminal needs to reply to the voice information input by the user. The terminal then searches for answer content matching the text content information and outputs the answer content, thereby realizing a dialogue with the user.
Fig. 9 is a block diagram of yet another voice processing device according to an exemplary embodiment.
As shown in Fig. 9, in one embodiment, the above device further includes:
a second standby module 91, configured to enter a standby state when it is detected that no voice information input by the user has been received again within a preset time period.
In this embodiment, after the terminal outputs the answer content, if it detects that no voice information input by the user has been received again within the preset time period, it enters the standby state, thereby avoiding unnecessary power consumption and improving the user experience.
Fig. 10 is a block diagram of yet another voice processing device according to an exemplary embodiment.
As shown in Fig. 10, in one embodiment, the above device further includes:
a detection module 1001, configured to detect, after the standby state is entered, whether a preset wake-up instruction is received;
a wake-up module 1002, configured to enter a wake-up state when the preset wake-up instruction is received.
In this embodiment, after the terminal enters the standby state, it enters the wake-up state if it receives a preset wake-up instruction. The preset wake-up instruction may be a preset wake-up keyword input by the user's voice, or a wake-up instruction input by the user through a certain touch operation.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and each combination of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device realizes the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a sequence of operational steps is performed on the computer or other programmable device to produce a computer-implemented process, and the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these changes and modifications of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to cover them.
Claims (10)
1. A speech processing method for a terminal device, characterized by comprising:
obtaining voice information input by a user;
recognizing the voice information to obtain the text content information corresponding to the voice information;
performing semantic analysis on the text content information, and determining according to the analysis result the command type to which the text content information belongs, wherein the command types comprise control commands and session commands;
entering a control mode when the text content information is determined to be a control command, and entering a chat conversation mode when the text content information is determined to be a session command.
2. The method according to claim 1, characterized in that the method further comprises:
after entering the control mode, executing the control command;
after the control command has been executed, entering a standby state.
3. The method according to claim 1, characterized in that the method further comprises:
after entering the chat conversation mode, searching for answer content matching the text content information, and outputting the answer content.
4. The method according to claim 3, characterized in that the method further comprises:
entering a standby state when it is detected that no voice information input by the user has been received again within a preset time period.
5. The method according to claim 2 or 4, characterized in that the method further comprises:
after entering the standby state, detecting whether a preset wake-up instruction is received;
entering a wake-up state when the preset wake-up instruction is received.
6. A voice processing device for a terminal device, characterized by comprising:
an acquisition module, configured to obtain voice information input by a user;
a recognition module, configured to recognize the voice information to obtain the text content information corresponding to the voice information;
a determination module, configured to perform semantic analysis on the text content information and determine according to the analysis result the command type to which the text content information belongs, wherein the command types comprise control commands and session commands;
a processing module, configured to enter a control mode when the text content information is determined to be a control command, and to enter a chat conversation mode when the text content information is determined to be a session command.
7. The device according to claim 6, characterized in that the device further comprises:
an execution module, configured to execute the control command after the control mode is entered;
a first standby module, configured to enter a standby state after the control command has been executed.
8. The device according to claim 6, characterized in that the device further comprises:
a search module, configured to, after the chat conversation mode is entered, search for answer content matching the text content information and output the answer content.
9. The device according to claim 8, characterized in that the device further comprises:
a second standby module, configured to enter a standby state when it is detected that no voice information input by the user has been received again within a preset time period.
10. The device according to claim 7 or 9, characterized in that the device further comprises:
a detection module, configured to detect, after the standby state is entered, whether a preset wake-up instruction is received;
a wake-up module, configured to enter a wake-up state when the preset wake-up instruction is received.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610677323.3A CN106297801A (en) | 2016-08-16 | 2016-08-16 | Speech processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610677323.3A CN106297801A (en) | 2016-08-16 | 2016-08-16 | Speech processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106297801A true CN106297801A (en) | 2017-01-04 |
Family
ID=57679437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610677323.3A Pending CN106297801A (en) | 2016-08-16 | 2016-08-16 | Method of speech processing and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106297801A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107086038A (en) * | 2017-05-04 | 2017-08-22 | 珠海格力电器股份有限公司 | Voice control method and voice controller |
CN107170446A (en) * | 2017-05-19 | 2017-09-15 | 深圳市优必选科技有限公司 | Semantic processing server and semantic processing method |
CN107388487A (en) * | 2017-07-03 | 2017-11-24 | 珠海格力电器股份有限公司 | Method and apparatus for controlling an air conditioner |
CN108021633A (en) * | 2017-11-24 | 2018-05-11 | 交通宝互联网技术有限公司 | Intelligent vehicle-mounted robot system |
CN108091329A (en) * | 2017-12-20 | 2018-05-29 | 江西爱驰亿维实业有限公司 | Method, apparatus and computing device for controlling an automobile based on speech recognition |
CN109018778A (en) * | 2018-08-31 | 2018-12-18 | 深圳市研本品牌设计有限公司 | Garbage disposal method and system based on speech recognition |
CN109346077A (en) * | 2018-11-01 | 2019-02-15 | 汤强 | Voice system suitable for portable intelligent devices and method of using the same |
CN109491637A (en) * | 2018-11-06 | 2019-03-19 | 珠海格力电器股份有限公司 | Voice-synchronized display control system and method, and intelligent terminal |
CN109523998A (en) * | 2018-11-06 | 2019-03-26 | 珠海格力电器股份有限公司 | Simplified voice-command display system and method, and intelligent terminal |
CN109584875A (en) * | 2018-12-24 | 2019-04-05 | 珠海格力电器股份有限公司 | Voice device control method, device, storage medium and voice device |
CN109857930A (en) * | 2019-01-07 | 2019-06-07 | 珠海格力电器股份有限公司 | Information pushing method, device, storage medium and household appliance |
CN109903769A (en) * | 2017-12-08 | 2019-06-18 | Tcl集团股份有限公司 | Terminal device interaction method, apparatus and terminal device |
CN110085226A (en) * | 2019-04-25 | 2019-08-02 | 广州智伴人工智能科技有限公司 | Robot-based voice interaction method |
CN110287297A (en) * | 2019-05-22 | 2019-09-27 | 深圳壹账通智能科技有限公司 | Dialogue reply method, apparatus, computer device and computer-readable storage medium |
CN110444207A (en) * | 2019-08-06 | 2019-11-12 | 广州豫本草电子科技有限公司 | Intelligent response control method, device, medium and terminal device based on the logical instrument that weighs |
CN112581959A (en) * | 2020-12-15 | 2021-03-30 | 四川虹美智能科技有限公司 | Intelligent device control method and system and voice server |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103632664A (en) * | 2012-08-20 | 2014-03-12 | 联想(北京)有限公司 | A method for speech recognition and an electronic device |
CN104318924A (en) * | 2014-11-12 | 2015-01-28 | 沈阳美行科技有限公司 | Method for realizing voice recognition function |
CN105491126A (en) * | 2015-12-07 | 2016-04-13 | 百度在线网络技术(北京)有限公司 | Service providing method and service providing device based on artificial intelligence |
2016-08-16: Application CN201610677323.3A filed (CN); published as CN106297801A; legal status: active, Pending
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107086038A (en) * | 2017-05-04 | 2017-08-22 | 珠海格力电器股份有限公司 | Voice control method and voice controller |
CN107170446A (en) * | 2017-05-19 | 2017-09-15 | 深圳市优必选科技有限公司 | Semantic processing server and semantic processing method |
CN107388487A (en) * | 2017-07-03 | 2017-11-24 | 珠海格力电器股份有限公司 | Method and apparatus for controlling an air conditioner |
CN107388487B (en) * | 2017-07-03 | 2019-07-09 | 珠海格力电器股份有限公司 | Method and apparatus for controlling an air conditioner |
CN108021633A (en) * | 2017-11-24 | 2018-05-11 | 交通宝互联网技术有限公司 | Intelligent vehicle-mounted robot system |
CN109903769A (en) * | 2017-12-08 | 2019-06-18 | Tcl集团股份有限公司 | Terminal device interaction method, apparatus and terminal device |
CN108091329A (en) * | 2017-12-20 | 2018-05-29 | 江西爱驰亿维实业有限公司 | Method, apparatus and computing device for controlling an automobile based on speech recognition |
CN109018778A (en) * | 2018-08-31 | 2018-12-18 | 深圳市研本品牌设计有限公司 | Garbage disposal method and system based on speech recognition |
CN109346077A (en) * | 2018-11-01 | 2019-02-15 | 汤强 | Voice system suitable for portable intelligent devices and method of using the same |
CN109346077B (en) * | 2018-11-01 | 2022-03-25 | 汤强 | Voice system suitable for portable intelligent devices and method of using the same |
CN109491637A (en) * | 2018-11-06 | 2019-03-19 | 珠海格力电器股份有限公司 | Voice-synchronized display control system and method, and intelligent terminal |
CN109523998A (en) * | 2018-11-06 | 2019-03-26 | 珠海格力电器股份有限公司 | Simplified voice-command display system and method, and intelligent terminal |
CN109584875A (en) * | 2018-12-24 | 2019-04-05 | 珠海格力电器股份有限公司 | Voice device control method, device, storage medium and voice device |
CN109857930A (en) * | 2019-01-07 | 2019-06-07 | 珠海格力电器股份有限公司 | Information pushing method, device, storage medium and household appliance |
CN110085226A (en) * | 2019-04-25 | 2019-08-02 | 广州智伴人工智能科技有限公司 | Robot-based voice interaction method |
CN110085226B (en) * | 2019-04-25 | 2021-05-11 | 广州智伴人工智能科技有限公司 | Robot-based voice interaction method |
CN110287297A (en) * | 2019-05-22 | 2019-09-27 | 深圳壹账通智能科技有限公司 | Dialogue reply method, apparatus, computer device and computer-readable storage medium |
CN110444207A (en) * | 2019-08-06 | 2019-11-12 | 广州豫本草电子科技有限公司 | Intelligent response control method, device, medium and terminal device based on the logical instrument that weighs |
CN112581959A (en) * | 2020-12-15 | 2021-03-30 | 四川虹美智能科技有限公司 | Intelligent device control method and system and voice server |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106297801A (en) | Method of speech processing and device | |
US11568876B2 (en) | Method and device for user registration, and electronic device | |
CN106782536B (en) | Voice wake-up method and device | |
CN106658129B (en) | Emotion-based terminal control method and device, and terminal | |
CN110069608B (en) | Voice interaction method, device, equipment and computer storage medium | |
US20180336889A1 (en) | Method and Apparatus of Building Acoustic Feature Extracting Model, and Acoustic Feature Extracting Method and Apparatus | |
CN106201424B (en) | Information interaction method, device and electronic device | |
CN108958810A (en) | Voiceprint-based user identification method, device and equipment | |
CN104951335B (en) | Processing method and device for application installation packages | |
CN108021572B (en) | Reply information recommendation method and device | |
CN111081280B (en) | Text-independent speech emotion recognition method and device and emotion recognition algorithm model generation method | |
CN108108142A (en) | Voice information processing method, device, terminal device and storage medium | |
CN108874895B (en) | Interactive information pushing method and device, computer equipment and storage medium | |
CN103841268A (en) | Information processing method and information processing device | |
CN104267922B (en) | Information processing method and electronic device | |
CN106789543A (en) | Method and apparatus for sending facial expression images in a session | |
CN110602516A (en) | Information interaction method and device based on live video, and electronic device | |
CN103853703A (en) | Information processing method and electronic equipment | |
CN110544468B (en) | Application awakening method and device, storage medium and electronic equipment | |
JP7063937B2 (en) | Methods, devices, electronic devices, computer-readable storage media, and computer programs for voice interaction | |
CN104035995A (en) | Method and device for generating group tags | |
CN109360551B (en) | Voice recognition method and device | |
CN111354357A (en) | Audio resource playing method and device, electronic equipment and storage medium | |
CN103943111A (en) | Method and device for identity recognition | |
US10950221B2 (en) | Keyword confirmation method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 2017-01-04 |