CN106792047B - Voice control method and system of smart television - Google Patents
- Publication number
- CN106792047B (application CN201611182737.5A)
- Authority
- CN
- China
- Prior art keywords
- instruction
- voice
- television
- context
- page
- Prior art date
- Legal status (assumed, not a legal conclusion): Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42203—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42204—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
Abstract
The invention provides a voice control method and system for a smart television. A voice instruction input by a user is received and it is judged whether the instruction is a context instruction; if so, the context instruction is output to the television client. Otherwise it is judged whether the instruction is a local page instruction; if so, the local page instruction is output to the television client. Otherwise the voice instruction is parsed, a global control instruction is returned according to the parsing result, the global control instruction is encapsulated, cached as a dynamic instruction, and output to the television client. Voice control over the display-page context and over local page content of the television is thereby realized, making it convenient for the user to control the smart television by voice.
Description
Technical Field
The invention relates to the technical field of intelligent television control, in particular to a voice control method and system of an intelligent television.
Background
At present, smart televisions are becoming widespread, and most existing models combine the traditional remote control with additional input modes such as voice, gesture, and touch. Each of these modes has strengths and weaknesses; voice control, as a maturing basic technology that can satisfy complex input and interaction needs, is favored by most smart-television manufacturers.
Although the voice control modules on the market are adequate for traditional remote-control functions (power, channel, volume, menu, etc.) and on-demand functions (input, selection, etc.), they can only handle simple, context-free commands, or commands aimed at global interfaces and modules unrelated to the current page. Their performance on context control and local-page instruction control is poor: they can neither understand an instruction in context nor recognize instructions that are only effective on a specific interface.
Therefore, the prior art is subject to further improvement.
Disclosure of Invention
In view of the above defects in the prior art, the invention aims to provide a voice control method and system for a smart television that overcome the inability of existing voice control to act on the display-page context and on local page content of the television.
The technical scheme adopted by the invention for solving the technical problem is as follows:
a voice control method of an intelligent television comprises the following steps:
step A, receiving a voice instruction input by a user, judging whether the voice instruction is a context instruction, if so, outputting the context instruction to a television client, otherwise, executing step B;
step B, judging whether the voice instruction is a local page instruction, if so, outputting the local page instruction to the television client, otherwise, executing step C;
step C, analyzing the voice command, and returning a global control command according to an analysis result;
and step D, packaging the global control instruction, generating a dynamic instruction cache for the global control instruction, and outputting the global control instruction to the television client.
Wherein the context instruction is a control instruction that is associated with the previously input voice instruction and supports context operations;
the local page instruction is a control instruction that is associated with the previously input voice instruction and does not support context operations;
the global control instruction is a control instruction that can be executed on any television display page.
The voice control method of the smart television, wherein before step A, the method further comprises:
step A01, classifying the control instructions of the smart television according to functions; according to the service information contained in the television display page, the classification of the television display page is defined, and the unique ID of each television display page is set.
The voice control method of the smart television, wherein before the step A of judging whether the voice command is a context command, the method further comprises the following steps:
step A1, judging whether a cached control instruction exists under the television ID and the television display page ID that the voice instruction requests to control; if so, extracting the cached control instruction and judging according to it whether the voice instruction is a context instruction; otherwise, executing step C.
The voice control method of the smart television, wherein the information cached in the dynamic instruction cache comprises: the television ID requested to be controlled by the voice instruction and the current television display page ID; the dynamic instructions are represented by regular expressions, each dynamic instruction corresponds to one regular expression, and the dynamic instructions are cached in the cloud.
The voice control method of the smart television, wherein the method for judging whether the voice instruction is a context instruction in step A comprises the following steps:
step A2, judging whether the context monitoring flag is on; if yes, executing step A3, otherwise executing step B;
step A3, judging whether the cached context instruction matches the voice instruction input this time; if so, returning the previously input voice instruction and determining that the current instruction is a context instruction; otherwise, closing the context monitoring flag and executing step B.
The voice control method of the smart television, wherein the method for judging whether the voice command is a local page command in the step B comprises the following steps:
step B1, judging whether cached dynamic instructions exist under the current television display page ID; if so, extracting the cached dynamic instructions under that page ID and executing step B2, otherwise executing step C;
step B2, judging whether the voice instruction input this time matches one of the regular expressions; if so, directly returning the dynamic instruction corresponding to that regular expression, otherwise executing step C.
A voice control system of a smart television comprises the following components:
the context instruction judging module is used for receiving a voice instruction input by a user, judging whether the voice instruction is the context instruction or not, and if so, outputting the context instruction to the television client;
the local page instruction judging module is used for judging whether the voice instruction is a local page instruction or not, and if so, outputting the local page instruction to the television client;
the global instruction analysis module is used for analyzing the voice instruction and returning a global control instruction according to an analysis result;
the global instruction processing module is used for packaging the global control instruction, generating a dynamic instruction cache for the global control instruction and outputting the global control instruction to the television client;
the context instruction is a control instruction that is associated with the previously input voice instruction and supports context operations;
the local page instruction is a control instruction that is associated with the previously input voice instruction and does not support context operations;
the global control instruction is a control instruction that can be executed on any television display page.
The voice control system of the intelligent television, wherein the system further comprises:
the instruction and page classification module is used for classifying the control instructions of the smart television according to functions; according to the service information contained in the television display page, the classification of the television display page is defined, and the unique ID of each television display page is set.
The voice control system of the smart television, wherein the context judgment module comprises:
a monitoring judgment unit, used for judging whether the context monitoring flag is on;
and a matching judgment unit, used for judging whether the cached context instruction matches the voice instruction input this time; if so, returning the previously input voice instruction and determining that the current instruction is a context instruction; otherwise, closing the context monitoring flag.
The voice control system of the smart television, wherein the information cached in the dynamic instruction cache comprises: the television ID requested to be controlled by the voice instruction and the current television display page ID; the dynamic instructions are represented by regular expressions, each dynamic instruction corresponds to one regular expression, and the dynamic instructions are cached in a cloud;
the local page instruction judging module comprises:
the page cache judging unit is used for judging whether the current television display page ID contains a cached dynamic instruction, and if so, extracting the cached dynamic instruction under the television display page ID;
and the expression matching unit is used for judging whether the voice command input at this time contains a matched regular expression or not, and if so, directly returning a dynamic command corresponding to the regular expression.
The beneficial effects of the voice control method and system of the smart television are as follows: it is judged whether a received voice instruction input by the user is a context instruction; if so, the context instruction is output to the television client. Otherwise it is judged whether the instruction is a local page instruction; if so, the local page instruction is output to the television client. Otherwise the voice instruction is parsed, a global control instruction is returned according to the parsing result, encapsulated, cached as a dynamic instruction, and output to the television client. Voice control over the display-page context and over local page content of the television is thereby realized, making it convenient for the user to control the smart television by voice.
Drawings
Fig. 1 is a flowchart illustrating steps of a voice control method for a smart television according to the present invention.
Fig. 2 is a schematic step diagram of a voice control method of a smart television according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a speech control system of a smart television according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
The invention provides a voice control method of a smart television, which comprises the following steps as shown in figure 1:
and step S1, receiving a voice instruction input by a user, judging whether the voice instruction is a context instruction, if so, outputting the context instruction to the television client, otherwise, executing step S2.
The smart television receives a voice instruction input by the user through a voice input device and judges whether the instruction is a context instruction; if so, the instruction is output to the television client, otherwise the judgment proceeds to the next step.
Specifically, the context instruction is a control instruction that is associated with the previously input voice instruction and supports context operations, for example "previous page", "next page" or "volume up". By this definition, if the received voice instruction is a context instruction, the previously input voice instruction must have been stored at the time it was input, so that it can be matched against the current voice instruction to judge whether the two are related. The method therefore further comprises the following steps:
s11, judging whether a cache control instruction is contained under the television ID and the television display page ID requested to be controlled by the voice instruction, if so, extracting the cache control instruction, judging whether the voice instruction is a context instruction according to the cache control instruction, and otherwise, directly judging the voice instruction received this time as a global control instruction to be processed.
To cache the previously input voice instruction effectively, the information stored with a cached dynamic instruction comprises: the television ID that the voice instruction requests to control and the current television display page ID. The dynamic instructions are represented by regular expressions, each dynamic instruction corresponding to one regular expression, and are cached in the cloud. Storing the television ID and the television display page ID ensures accurate matching against the cached information, and the one-to-one correspondence between a regular expression and the dynamic instruction's string allows dynamic instructions to be looked up and matched quickly.
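A cached dynamic-instruction record therefore carries the television ID, the page ID, one regular expression and the instruction it returns. A minimal sketch with assumed field and method names (the patent specifies the cached fields, not an API):

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class DynamicInstruction:
    """One cached dynamic instruction; field names are assumptions."""
    tv_id: str     # television ID the instruction requests to control
    page_id: str   # current television display page ID
    pattern: str   # one regular expression per dynamic instruction
    code: str      # instruction code returned on a match

    def matches(self, tv_id, page_id, text):
        # A cache hit requires the same TV, the same display page, and a
        # regular-expression match, which keeps matching accurate.
        return (self.tv_id == tv_id
                and self.page_id == page_id
                and re.fullmatch(self.pattern, text) is not None)
```

Keying the lookup on both the television ID and the page ID is what lets the same spoken phrase resolve to different instructions on different pages.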
In order to better determine whether the received voice command is a context command, the method further comprises the following steps:
s12, judging whether the context monitoring mark is opened, if yes, executing step A3, otherwise executing step S2.
S13, judging whether the cached context instruction matches the voice instruction input this time; if so, returning the previously input voice instruction and determining that the current instruction is a context instruction; otherwise, closing the context monitoring flag and executing step S2.
Because a context instruction is associated with the previously input instruction, when the previously input voice instruction is a context instruction, the context monitoring flag is turned on at the same time that the instruction is cached; the flag indicates that the next received voice instruction should be matched as a possible context instruction. If the received instruction matches, the same control instruction as the previous context instruction is returned; otherwise it is not a context instruction associated with the previous one, and the voice instruction proceeds to the judgment of whether it is a dynamic instruction.
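The flag handling in steps S12 and S13 can be sketched as one function over a per-television state record; the dictionary keys used here are assumptions, not part of the patent:

```python
import re

def check_context(state, text):
    """Sketch of the S12/S13 checks on one television's cached state.

    `state` holds the context monitoring flag, the cached context
    pattern, and the previously returned instruction (assumed names).
    """
    # S12: if the monitoring flag is off, fall through to step S2.
    if not state.get("monitoring", False):
        return None
    # S13: match the new input against the cached context pattern.
    if re.fullmatch(state["pattern"], text):
        return state["last_instruction"]   # reuse the cached instruction
    # No match: close the monitoring flag and fall through to step S2.
    state["monitoring"] = False
    return None
```

Note that a miss permanently closes the flag, so a later instruction is not accidentally matched against a stale context.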
And step S2, judging whether the voice command is a local page command, if so, outputting the local page command to the television client, otherwise, executing step S3.
Since step S1 has determined that the voice instruction is not a context instruction, this step judges whether it is a local page instruction among the dynamic instructions; if so, the local page instruction is output to the television client, otherwise the instruction is treated as a global control instruction.
The local page instruction is a control instruction that is associated with the previously input voice instruction and does not support context operations. Selection and page-turning operations on a list page, for example, depend on the result content of the current list. A selection can be made by position, such as "the first one" or "second in the first row", where the valid position numbers are limited by the number and arrangement of results on the current page; a selection can also name a specific resource (a movie title) directly when it appears on the current page. Likewise, page turning on a list page depends on the current page number and the total page count: "previous page" is invalid on the first page, and "next page" is invalid on the last page.
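The page-turn validity rule just stated reduces to a simple bounds check; a sketch in which the function name and the English command strings are illustrative:

```python
def page_turn_valid(command, current_page, total_pages):
    """Return True if a page-turn command is effective on a list page."""
    if command == "previous page":
        return current_page > 1            # invalid on the first page
    if command == "next page":
        return current_page < total_pages  # invalid on the last page
    return False                           # not a page-turn command
```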
In this step, in order to more accurately determine whether the voice command is a local page command, the method further includes the following steps:
and S21, judging whether the current television display page ID contains a cached dynamic instruction, if so, extracting the cached dynamic instruction under the television display page ID, and executing the step S22, otherwise, executing the step S3.
And S22, judging whether the voice command input this time contains a matched regular expression, if so, directly returning a dynamic command corresponding to the regular expression, otherwise, executing the step S3.
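Steps S21 and S22 amount to a lookup over the per-page cache. A minimal illustration, assuming the cache is a mapping from page ID to {regular expression: instruction code}:

```python
import re

def match_local_page(page_cache, page_id, text):
    """Sketch of steps S21-S22: find a cached dynamic instruction."""
    # S21: are any dynamic instructions cached under the current page ID?
    patterns = page_cache.get(page_id)
    if not patterns:
        return None                      # fall through to step S3
    # S22: return the dynamic instruction whose regex matches the input.
    for pattern, code in patterns.items():
        if re.fullmatch(pattern, text):
            return code
    return None                          # no match: fall through to S3
```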
And step S3, analyzing the voice command and returning a global control command according to the analysis result.
Since steps S1 and S2 have determined that the voice instruction input this time is not a dynamic instruction, it is treated as a global control instruction. Specifically, semantic analysis is performed on the voice instruction to obtain the instruction information it contains, and this parsed instruction information serves as the global control instruction. A global control instruction is a control instruction that can be executed on any television display page; for example, "return", "power off" and "power on" can be executed regardless of the page the television is currently on.
And step S4, packaging the global control instruction, generating a dynamic instruction cache for the global control instruction, and outputting the global control instruction to the television client.
The global control instruction parsed in step S3 is encapsulated, a dynamic instruction is generated and cached to the cloud, and the generated dynamic instruction is output to the television client. Specifically, the information stored when the dynamic instruction generated this time is cached to the cloud comprises: the instruction code corresponding to the instruction, the television ID corresponding to the instruction, and the page ID corresponding to the instruction.
Before step S1, the method further comprises: step S01, classifying the control instructions of the smart television according to function. Common functional modules of current smart televisions include traditional basic control, the on-demand function, weather, and so on; other modules may integrate specific vertical services such as stocks, music, shopping or household-appliance control. For example, the instruction sets of the three functional modules (basic control, on-demand and weather) are classified as follows:
as shown in the above table, each specific control corresponds to an instruction, which is represented by an instruction code, and the instructions are classified according to different function fields, and the instruction code has an identifier of the function.
The step S01 further includes: according to the service information contained in the television display page, the classification of the television display page is defined, and the unique ID of each television display page is set.
Specifically, the level or classification of a page is determined by its specific vertical service. An instruction that can be executed on any page is a global instruction, and its page ID is set to null. The instructions of the on-demand function depend strongly on the page, and the pages can be divided into a list page (page ID VOD_SHOW_LIST_PAGE), a detail page (page ID VOD_SHOW_SINGLE_PAGE), and a play page (page ID VOD_PLAY_PAGE). Once the page level is flagged, page-level instructions (instructions that are only effective on a particular page) can be checked for validity and accordingly executed or ignored. Page-level instructions of the on-demand function include page turning (e.g. "previous page", "next page"), viewing details (e.g. "the first one"), playing from the detail page, episode selection ("first episode", "next episode"), and play control. The generation and processing logic for the three pages is as follows:
when a user voice input search (e.g., "movie by artist director") returns multiple results (which may be in the form of a LIST), a VOD _ SHOW _ LIST instruction is generated and the current PAGE is marked as a LIST PAGE VOD _ SHOW _ LIST _ PAGE. The effective instruction of the page comprises selection and page turning operations. The page flip operation still returns to the list page.
If content in the list-page results is selected (e.g. "the first one" or a specific resource name), the detail page of the selected object is returned: a VOD_SHOW_SINGLE instruction is generated and the current page is marked as the detail page VOD_SHOW_SINGLE_PAGE. The valid instruction on this page is the play operation.
If a play operation is performed from the detail page, the play result is returned as a VOD_USE_PLAYER instruction, and the current page is marked as the play page VOD_PLAY_PAGE. The valid instructions on this page are viewing details and play control. Viewing details returns to the detail page; play-control operations only call the player interface and perform no page jump.
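The list, detail and play pages described above behave like a small state machine over the three page IDs. The sketch below reads the transitions off the text; the action names and the tuple layout are assumptions:

```python
# Page-level state machine over the three VOD page IDs. Each valid
# (page, action) pair yields the next page and the generated instruction.
TRANSITIONS = {
    ("VOD_SHOW_LIST_PAGE", "select"):      ("VOD_SHOW_SINGLE_PAGE", "VOD_SHOW_SINGLE"),
    ("VOD_SHOW_LIST_PAGE", "page_turn"):   ("VOD_SHOW_LIST_PAGE", "VOD_SHOW_LIST"),
    ("VOD_SHOW_SINGLE_PAGE", "play"):      ("VOD_PLAY_PAGE", "VOD_USE_PLAYER"),
    ("VOD_PLAY_PAGE", "details"):          ("VOD_SHOW_SINGLE_PAGE", "VOD_SHOW_SINGLE"),
    # Play control stays on the play page: only the player API is called.
    ("VOD_PLAY_PAGE", "play_control"):     ("VOD_PLAY_PAGE", "VOD_USE_PLAYER"),
}

def step(page_id, action):
    """Apply one page-level action; invalid actions are ignored."""
    return TRANSITIONS.get((page_id, action), (page_id, None))
```

An action missing from the table (e.g. "play" on the list page) leaves the page unchanged and generates no instruction, which matches the validity checks described above.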
Therefore, in a specific implementation, whether the received voice instruction matches a regular expression can be recognized quickly from the one-to-one correspondence between instruction codes and regular expressions and from the function of each instruction.
In order to explain the above method in more detail, a specific application example is described below.
Referring to fig. 2, the method of the present invention can be mainly divided into three parts, namely, dynamic instruction processing, semantic parsing, and global instruction processing, and includes the following steps:
h1, receiving a voice command input by a user.
H2, according to the last input voice command cached in the TV, judging whether the currently received voice command is a context command, if so, executing step H3, otherwise, executing step H4.
H3, the voice command determined as the context command is subjected to dynamic command processing, and then step H10 is executed.
And step H4, judging whether the received voice command is a local page command, if so, executing step H3 to be processed as a dynamic command, otherwise, executing step H5.
And step H5, combining the semantic server interface to carry out semantic analysis on the received voice command, and then executing step H6 according to the analyzed result.
And step H6, obtaining instruction information according to the analyzed result, and processing the instruction as a global control instruction.
And step H7, generating a dynamic instruction according to the instruction information.
And step H8, caching the dynamic instruction to the cloud.
And step H9, outputting the dynamic instruction obtained by the determined context instruction, local page instruction or global instruction to the television client, and executing corresponding control operation.
The dynamic-instruction processing part handles context operations of instructions that support the context function, page turning and selection on the list page, play and favorite operations on the detail page, play control on the play page, and so on. The semantic-analysis part parses the input voice instruction through the semantic server interface and returns a page-independent global control instruction. The global-instruction processing part encapsulates the instruction, generates a dynamic-instruction cache entry, and finally outputs the instruction to the client.
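The three parts can be sketched together as one router. A minimal illustration in which all class, method and field names are assumptions and semantic parsing (steps H5 to H6) is stubbed out with a placeholder:

```python
import re

class VoiceInstructionRouter:
    """Sketch of the flow in steps H1-H9; not the patent's actual API."""

    def __init__(self):
        # Dynamic-instruction cache keyed by (tv_id, page_id); each entry
        # maps one regular expression to one instruction code.
        self.dynamic = {}
        # Cached context instruction per TV: pattern + instruction code.
        self.context = {}

    def store_dynamic(self, tv_id, page_id, pattern, code):
        self.dynamic.setdefault((tv_id, page_id), {})[pattern] = code

    def route(self, tv_id, page_id, text):
        # H2/H3: a cached context instruction that matches the new input.
        ctx = self.context.get(tv_id)
        if ctx and re.fullmatch(ctx["pattern"], text):
            return ctx["code"]
        # H4/H3: a dynamic instruction cached for the current page.
        for pattern, code in self.dynamic.get((tv_id, page_id), {}).items():
            if re.fullmatch(pattern, text):
                return code
        # H5-H8: semantic parsing is stubbed here; the parsed global
        # instruction is cached as a dynamic instruction, then output (H9).
        code = "GLOBAL:" + text
        self.store_dynamic(tv_id, page_id, re.escape(text), code)
        return code
```

Note how a phrase that falls through to the global branch is immediately cached, so repeating it on the same page is resolved from the dynamic cache without another semantic-parsing round trip.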
In addition to the above method, the invention also provides a voice control system of the smart television; as shown in fig. 3, the system comprises:
a context instruction determining module 110, configured to receive a voice instruction input by a user, determine whether the voice instruction is a context instruction, and if so, output the context instruction to the television client;
a local page instruction determining module 120, configured to determine whether the voice instruction is a local page instruction, and if so, output the local page instruction to the television client;
the global instruction analysis module 130 is configured to analyze the voice instruction and return a global control instruction according to an analysis result;
the global instruction processing module 140 is configured to encapsulate the global control instruction, generate a dynamic instruction cache for the global control instruction, and output the global control instruction to the television client;
the context instruction is a control instruction that is associated with the previously input voice instruction and supports context operations;
the local page instruction is a control instruction that is associated with the previously input voice instruction and does not support context operations;
the global control instruction is a control instruction that can be executed on any television display page.
The system further comprises:
the instruction and page classification module is used for classifying the control instructions of the smart television according to functions; according to the service information contained in the television display page, the classification of the television display page is defined, and the unique ID of each television display page is set.
The context judgment module comprises:
a monitoring judgment unit, used for judging whether the context monitoring flag is on;
and a matching judgment unit, used for judging whether the cached context instruction matches the voice instruction input this time; if so, returning the previously input voice instruction and determining that the current instruction is a context instruction; otherwise, closing the context monitoring flag.
The information cached in the dynamic instruction cache includes: the television ID requested to be controlled by the voice instruction and the current television display page ID; the dynamic instructions are represented by regular expressions, each dynamic instruction corresponds to one regular expression, and the dynamic instructions are cached in a cloud;
the local page instruction judging module comprises:
the page cache judging unit is used for judging whether the current television display page ID contains a cached dynamic instruction, and if so, extracting the cached dynamic instruction under the television display page ID;
and the expression matching unit is used for judging whether the voice command input at this time contains a matched regular expression or not, and if so, directly returning a dynamic command corresponding to the regular expression.
With the method and system provided by the invention, page ID identification, dynamic-instruction cache generation and parsing are added to the operation of the smart television, realizing context interaction, overcoming the limitation of traditional smart televisions that support only global instruction operations, enhancing the user experience, and making interaction between the user and the television more natural, convenient and efficient.
In summary, the invention provides a voice control method and system for a smart television. It is judged whether a received voice instruction input by the user is a context instruction; if so, the context instruction is output to the television client. Otherwise it is judged whether the instruction is a local page instruction; if so, the local page instruction is output to the television client. Otherwise the voice instruction is parsed, a global control instruction is returned according to the parsing result, encapsulated, cached as a dynamic instruction, and output to the television client. Voice control over the display-page context and over local page content of the television is thereby realized, making it convenient for the user to control the smart television by voice.
It should be understood that equivalents and modifications of the technical solution and inventive concept thereof may occur to those skilled in the art, and all such modifications and alterations should fall within the scope of the appended claims.
Claims (10)
1. A voice control method of a smart television, characterized by comprising the following steps:
Step A, receiving a voice instruction input by a user and judging whether the voice instruction is a context instruction; if so, outputting the context instruction to a television client, otherwise executing step B;
Step B, judging whether the voice instruction is a local page instruction; if so, outputting the local page instruction to the television client, otherwise executing step C;
Step C, analyzing the voice instruction and returning a global control instruction according to the analysis result;
Step D, packaging the global control instruction, generating a dynamic instruction cache from the global control instruction, and outputting the global control instruction to the television client;
wherein the information stored in the dynamic instruction cache includes: the television ID that the voice instruction requests to control and the current television display page ID;
and wherein, when a dynamic instruction is generated and cached to the cloud, the information to be stored further includes: the instruction code corresponding to the instruction, the television ID corresponding to the instruction, and the page ID corresponding to the instruction.
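The cached record enumerated at the end of claim 1 (instruction code, television ID, page ID, plus the regular-expression form introduced in claim 4) might be modeled as follows. This is a hypothetical sketch; `DynamicInstruction`, `cloud_cache`, and `cache_instruction` are invented names, not terms from the patent:

```python
from dataclasses import dataclass

@dataclass
class DynamicInstruction:
    """One cached record, holding the fields enumerated in claim 1."""
    code: str     # instruction code corresponding to the instruction
    tv_id: str    # television ID corresponding to the instruction
    page_id: str  # display page ID corresponding to the instruction
    pattern: str  # regular-expression form of the instruction (see claim 4)

# Hypothetical cloud-side cache, keyed by the television ID requesting
# control and the current television display page ID
cloud_cache = {}

def cache_instruction(inst):
    """Store a dynamic instruction under its (tv_id, page_id) key."""
    cloud_cache.setdefault((inst.tv_id, inst.page_id), []).append(inst)
```

Keying by the pair rather than by television ID alone is what lets the same spoken phrase resolve to different instructions on different display pages.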
2. The voice control method of the smart television as claimed in claim 1, wherein the step A is preceded by:
Step A01, classifying the control instructions of the smart television according to function; defining classifications of television display pages according to the service information each page contains, and assigning each television display page a unique ID.
3. The voice control method of the smart television as claimed in claim 2, wherein before judging in step A whether the voice instruction is a context instruction, the method further comprises:
Step A1, judging whether a cached control instruction exists under the television ID and television display page ID that the voice instruction requests to control; if so, extracting the cached control instruction and judging according to it whether the voice instruction is a context instruction; otherwise, executing step C.
4. The voice control method of the smart television as claimed in claim 3, wherein the dynamic instructions are represented by regular expressions, each dynamic instruction corresponds to one regular expression, and the dynamic instructions are cached in the cloud.
5. The voice control method of the smart television as claimed in claim 4, wherein judging in step A whether the voice instruction is a context instruction comprises:
Step A2, judging whether the context monitoring flag is turned on; if so, executing step A3, otherwise executing step B;
Step A3, judging whether a context instruction matches the voice instruction input this time; if so, returning the previously input voice instruction and judging the current instruction to be a context instruction; otherwise, closing the context monitoring flag and executing step B.
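Steps A2 and A3 can be sketched as one small function. The names (`check_context`, the `session` dict and its keys) are hypothetical, and matching context phrases with regular expressions is an assumption borrowed from claim 4's treatment of dynamic instructions; the claim itself does not specify the matching mechanism:

```python
import re

def check_context(text, session):
    """Sketch of steps A2/A3: gate on the context monitoring flag, then match.

    Returns the previously input instruction when `text` is a context
    follow-up; otherwise returns None, closing the flag on a failed match."""
    if not session.get("context_flag"):                   # step A2: flag closed
        return None                                       # -> proceed to step B
    for pattern in session.get("context_patterns", ()):   # step A3: any match?
        if re.fullmatch(pattern, text):
            return session.get("last_instruction")        # return previous input
    session["context_flag"] = False                       # no match: close flag
    return None                                           # -> proceed to step B
```

Closing the flag on a failed match means one unrelated utterance ends the context session, so stale context cannot capture later instructions.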
6. The voice control method of the smart television as claimed in claim 4, wherein judging in step B whether the voice instruction is a local page instruction comprises:
Step B1, judging whether cached dynamic instructions exist under the current television display page ID; if so, extracting the cached dynamic instructions under that television display page ID and executing step B2, otherwise executing step C;
Step B2, judging whether the voice instruction input this time matches a regular expression; if so, directly returning the dynamic instruction corresponding to that regular expression, otherwise executing step C.
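Steps B1 and B2 map naturally onto a dictionary lookup followed by regular-expression matching, since claim 4 states that each dynamic instruction corresponds to one regular expression. A minimal sketch, with `page_cache`, the page IDs, and the sample patterns all invented for illustration:

```python
import re

# Hypothetical per-page cache: page ID -> [(regular expression, instruction)]
page_cache = {
    "vod_detail": [
        (r"play (episode )?\d+", "PLAY_EPISODE"),
        (r"add to favou?rites", "ADD_FAVORITE"),
    ],
}

def match_local_page(text, page_id):
    """Sketch of steps B1/B2: check the page cache, then try each regex."""
    entries = page_cache.get(page_id)     # step B1: any cached instructions?
    if not entries:
        return None                       # none cached -> fall through (step C)
    for pattern, instruction in entries:  # step B2: regular-expression match
        if re.fullmatch(pattern, text):
            return instruction            # return the matched dynamic instruction
    return None                           # no match -> fall through (step C)
```

Because patterns are scoped to a page ID, a phrase like "play episode 3" only resolves locally on pages that cached it; everywhere else it falls through to global analysis.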
7. A voice control system of a smart television, characterized by comprising:
a context instruction judging module, used for receiving a voice instruction input by a user and judging whether the voice instruction is a context instruction; if so, outputting the context instruction to a television client;
a local page instruction judging module, used for judging whether the voice instruction is a local page instruction; if so, outputting the local page instruction to the television client;
a global instruction analysis module, used for analyzing the voice instruction and returning a global control instruction according to the analysis result;
a global instruction processing module, used for packaging the global control instruction, generating a dynamic instruction cache from it, and outputting the global control instruction to the television client;
wherein the information stored in the dynamic instruction cache includes: the television ID that the voice instruction requests to control and the current television display page ID;
and wherein, when a dynamic instruction is generated and cached to the cloud, the information to be stored further includes: the instruction code corresponding to the instruction, the television ID corresponding to the instruction, and the page ID corresponding to the instruction.
8. The voice control system of the smart television as claimed in claim 7, wherein the system further comprises:
an instruction and page classification module, used for classifying the control instructions of the smart television according to function, defining classifications of television display pages according to the service information each page contains, and assigning each television display page a unique ID.
9. The voice control system of the smart television as claimed in claim 7, wherein the context instruction judging module comprises:
a monitoring judging unit, used for judging whether the context monitoring flag is turned on;
a matching judging unit, used for judging whether a context instruction matches the voice instruction input this time; if so, returning the previously input voice instruction and judging the current instruction to be a context instruction; otherwise, closing the context monitoring flag.
10. The voice control system of the smart television as claimed in claim 8, wherein the dynamic instructions are represented by regular expressions, each dynamic instruction corresponds to one regular expression, and the dynamic instructions are cached in the cloud;
the local page instruction judging module comprises:
a page cache judging unit, used for judging whether cached dynamic instructions exist under the current television display page ID, and if so, extracting the cached dynamic instructions under that television display page ID;
an expression matching unit, used for judging whether the voice instruction input this time matches a regular expression, and if so, directly returning the dynamic instruction corresponding to that regular expression.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611182737.5A CN106792047B (en) | 2016-12-20 | 2016-12-20 | Voice control method and system of smart television |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106792047A CN106792047A (en) | 2017-05-31 |
CN106792047B (en) | 2020-05-05
Family
ID=58891066
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611182737.5A Active CN106792047B (en) | 2016-12-20 | 2016-12-20 | Voice control method and system of smart television |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106792047B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107608799B (en) * | 2017-08-15 | 2019-03-22 | 北京小蓦机器人技术有限公司 | Method, device and storage medium for executing interactive instructions |
CN110634477B (en) * | 2018-06-21 | 2022-01-25 | 海信集团有限公司 | Context judgment method, device and system based on scene perception |
CN108920640B (en) * | 2018-07-02 | 2020-12-22 | 北京百度网讯科技有限公司 | Context obtaining method and device based on voice interaction |
JP2021096380A (en) * | 2019-12-18 | 2021-06-24 | 本田技研工業株式会社 | Agent system, agent system control method, and program |
CN111263236B (en) * | 2020-02-21 | 2022-04-12 | 广州欢网科技有限责任公司 | Voice adaptation method and device for television application and voice control method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103093755A (en) * | 2012-09-07 | 2013-05-08 | 深圳市信利康电子有限公司 | Method and system for controlling networked household appliances based on terminal and Internet voice interaction |
CN104811777A (en) * | 2014-01-23 | 2015-07-29 | 阿里巴巴集团控股有限公司 | Smart television voice processing method, smart television voice processing system and smart television |
CN105430464A (en) * | 2014-09-15 | 2016-03-23 | 上海天脉聚源文化传媒有限公司 | Method, system and device for controlling intelligent television |
CN106101789A (en) * | 2016-07-06 | 2016-11-09 | 深圳Tcl数字技术有限公司 | Voice interaction method and device for a terminal |
Also Published As
Publication number | Publication date |
---|---|
CN106792047A (en) | 2017-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106792047B (en) | Voice control method and system of smart television | |
CN109522083B (en) | Page intelligent response interaction system and method | |
CN107844586B (en) | News recommendation method and device | |
CN106250474B (en) | Voice control processing method and system | |
CN103631887B (en) | Browser side carries out the method and browser of web search | |
US9953645B2 (en) | Voice recognition device and method of controlling same | |
US20140350933A1 (en) | Voice recognition apparatus and control method thereof | |
KR102072826B1 (en) | Speech recognition apparatus and method for providing response information | |
CN111724785B (en) | Method, device and storage medium for controlling small program voice | |
CN112764620B (en) | Interactive request processing method and device, electronic equipment and readable storage medium | |
US20160364373A1 (en) | Method and apparatus for extracting webpage information | |
US20200012675A1 (en) | Method and apparatus for processing voice request | |
CN104025077A (en) | Real-Time Natural Language Processing Of Datastreams | |
CN109036397B (en) | Method and apparatus for presenting content | |
CN105391730A (en) | Information feedback method, device and system | |
CN103984745A (en) | Distributed video vertical searching method and system | |
CN102968987A (en) | Speech recognition method and system | |
CN104090887A (en) | Music search method and device | |
US11749255B2 (en) | Voice question and answer method and device, computer readable storage medium and electronic device | |
CN109271533A (en) | Multimedia document retrieval method | |
CN105632487A (en) | Voice recognition method and device | |
CN106486118B (en) | Voice control method and device for application | |
WO2020124966A1 (en) | Program search method, apparatus and device, and medium | |
CN103646119A (en) | Method and device for generating user behavior record | |
CN106021319A (en) | Voice interaction method, device and system |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 516006 TCL Technology Building, No. 17, Huifeng Third Road, Zhongkai High-tech Zone, Huizhou City, Guangdong Province; Applicant after: TCL Technology Group Co., Ltd. Address before: 516006 District No. 19, Zhongkai Hi-tech Development Zone, Huizhou, Guangdong Province; Applicant before: TCL RESEARCH AMERICA Inc. |
| GR01 | Patent grant | |