CN110956960A - Intelligent voice system and method for controlling projector by using intelligent voice system - Google Patents


Info

Publication number
CN110956960A
CN110956960A (application CN201811196308.2A)
Authority
CN
China
Prior art keywords
projector
control command
cloud service
voice
alias
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811196308.2A
Other languages
Chinese (zh)
Inventor
林明政
陈玉孟
甘伟欣
戴基城
Current Assignee
Coretronic Corp
Original Assignee
Coretronic Corp
Priority date
Filing date
Publication date
Application filed by Coretronic Corp filed Critical Coretronic Corp
Priority to US16/228,793 (issued as US11100926B2)
Priority to EP19161914.7A (published as EP3629324A1)
Priority to JP2019164079A (issued as JP7359603B2)
Publication of CN110956960A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 — Speech recognition
    • G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/28 — Constructional details of speech recognition systems
    • G10L15/30 — Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G10L2015/223 — Execution procedure of a spoken command

Abstract

The invention provides an intelligent voice system and a method for controlling a projector. The system comprises a voice assistant, a cloud service platform, a projector and a management server. When the voice assistant receives a voice signal for controlling the projector, the voice assistant extracts keywords from the voice signal and transmits them to the cloud service platform, wherein the keywords comprise an alias corresponding to the projector and a first control command, and the cloud service platform stores a plurality of second control commands. The cloud service platform analyzes the first control command with a semantic analysis program, retrieves the corresponding second control command according to the first control command, and transmits the alias of the projector and the corresponding second control command to the management server. The management server accesses/controls the projector according to the alias, and adjusts the projector to a first operating state according to the corresponding second control command. The invention thereby provides a novel, intuitive and convenient projector control system.

Description

Intelligent voice system and method for controlling projector by using intelligent voice system
Technical Field
The present invention relates to an intelligent voice system and a control method thereof, and more particularly, to an intelligent voice system and a method for controlling a projector using the same.
Background
With the development of science and technology, smart devices such as intelligent voice assistants and smart home appliances have become increasingly popular with consumers. An intelligent voice assistant lets the user complete tasks purely through voice conversation, effectively improving working efficiency.
In some applications, a user can control part of the smart home appliances in a space, such as an air conditioner or a lamp, simply by sending voice signals to the intelligent voice assistant instead of operating the appliances directly, which makes controlling them more convenient and intuitive.
However, the prior art provides no means for a user to control a projector through an intelligent voice assistant, so how to design a system that lets an intelligent voice assistant control a projector according to the user's voice signal is an important issue for those skilled in the art.
Disclosure of Invention
The present invention provides an intelligent voice system and a method for controlling a projector using the same, which can solve the above technical problems.
The invention provides an intelligent voice system which comprises a first voice assistant, a first cloud service platform, a first projector and a management server. The first cloud service platform is connected with and manages the first voice assistant. The management server is connected with the first cloud service platform and the first projector and is used for managing and controlling the first projector. When the first voice assistant receives a first voice signal for controlling the first projector, the first voice assistant extracts a plurality of first keywords from the first voice signal and transmits the first keywords to the first cloud service platform, wherein the first keywords comprise a first alias corresponding to the first projector and a first control command, and the first cloud service platform comprises a first semantic analysis program and a plurality of second control commands. The first cloud service platform analyzes the first control command according to the first semantic analysis program, obtains, retrieves, or generates a corresponding second control command according to the first control command, and transmits the first alias of the first projector and the corresponding second control command to the management server. The management server accesses the first projector in response to the first alias and adjusts the first projector to a first operating state according to the corresponding second control command.
The invention further provides an intelligent voice system which comprises a first voice assistant, a first projector and a first cloud service server. The first cloud service server is connected with the first voice assistant and the first projector and is used for managing and controlling them. When the first voice assistant receives a first voice signal for controlling the first projector, the first voice assistant extracts a plurality of first keywords from the first voice signal and transmits the first keywords to the first cloud service server, wherein the first keywords comprise a first alias corresponding to the first projector and a first control command, and the first cloud service server comprises a first semantic analysis program and a plurality of second control commands. The first cloud service server analyzes the first control command according to the first semantic analysis program, obtains, retrieves, or generates a corresponding second control command according to the first control command, accesses the first projector in response to the first alias, and adjusts the first projector to a first operating state according to the corresponding second control command.
The invention further provides a method for controlling a projector, which is suitable for the above intelligent voice system and comprises the following steps: when the first voice assistant receives a first voice signal for controlling the first projector, extracting, by the first voice assistant, a plurality of first keywords from the first voice signal and transmitting the first keywords to the first cloud service platform, wherein the first keywords comprise a first alias corresponding to the first projector and a first control command, and the first cloud service platform comprises a first semantic analysis program and a plurality of second control commands; analyzing the first control command by the first cloud service platform according to the first semantic analysis program, obtaining, retrieving, or generating a corresponding second control command according to the first control command, and transmitting the first alias of the first projector and the corresponding second control command to the management server; and accessing, by the management server, the first projector in response to the first alias, and adjusting the first projector to a first operating state according to the corresponding second control command.
The invention further provides an intelligent voice system which comprises a first projector and a first cloud service server. The first projector comprises a first voice assistant, and the first cloud service server is connected with the first voice assistant and the first projector and is used for managing and controlling them. When the first voice assistant receives a first voice signal for controlling the first projector, the first voice assistant extracts a plurality of first keywords from the first voice signal and transmits the first keywords to the first cloud service server, wherein the first keywords comprise a first alias corresponding to the first projector and a first control command, and the first cloud service server comprises a first semantic analysis program and a plurality of second control commands. The first cloud service server analyzes the first control command according to the first semantic analysis program, obtains, retrieves, or generates a corresponding second control command according to the first control command, accesses the first projector in response to the first alias, and adjusts the first projector to a first operating state according to the corresponding second control command.
Based on the above, the method for controlling the projector provided by the invention enables the user to control the operation state of the projector by speaking the voice signal to the voice assistant, thereby providing a novel, intuitive and convenient projector control system.
In order to make the aforementioned and other features and advantages of the invention more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a schematic diagram of an intelligent speech system according to an embodiment of the invention.
Fig. 2 is a flowchart illustrating a method for controlling a projector according to an embodiment of the invention.
FIG. 3 is a schematic diagram of the intelligent speech system shown in FIG. 1.
FIG. 4 is a diagram of the intelligent speech system according to FIG. 3.
FIG. 5 is a diagram of the intelligent speech system shown in FIG. 1.
Fig. 6 is a schematic diagram of the intelligent speech system according to fig. 1, fig. 3, fig. 4 and fig. 5.
FIG. 7A is a schematic diagram of the intelligent speech system shown in FIG. 1.
FIG. 7B is a schematic diagram of another intelligent speech system according to FIG. 1.
Fig. 8 is a schematic diagram of the intelligent audio system controlling the delivery of the audio/video platform interface according to fig. 1.
FIG. 9 is a flowchart illustrating a method for controlling the projector according to FIG. 8.
FIG. 10 is a schematic diagram of performing a search operation in the video platform interface according to FIGS. 8 and 9.
FIG. 11 is a diagram illustrating interface scrolling performed in the video platform interface according to FIG. 10.
FIG. 12 is a schematic diagram of the playback operation performed in the video platform interface according to FIG. 11.
FIG. 13 is a diagram illustrating bookmark adding operation performed in the video platform interface according to FIG. 12.
FIG. 14 is a diagram illustrating bookmark adding operation performed in the video platform interface according to FIG. 13.
Detailed Description
The foregoing and other features, aspects and utilities of the present general inventive concept will be apparent from the following detailed description of a preferred embodiment thereof, which is to be read in connection with the accompanying drawings. Directional terms as referred to in the following examples, for example: up, down, left, right, front or rear, etc., are simply directions with reference to the drawings. Accordingly, the directional terminology is used for purposes of illustration and is in no way limiting.
Fig. 1 is a schematic diagram of an intelligent speech system according to an embodiment of the invention. As shown in fig. 1, the intelligent voice system 100 includes a voice assistant 102a, a cloud service platform 104a, a projector 106a and a management server 108, all of which are connected to each other via the Internet 110. Here, "connected" means capable of mutually transmitting signals.
In the present embodiment, the projector 106a includes a micro control unit 106a1 and a light source device 106a2. The voice assistant 102a may be an external device signal-connected to the projector 106a; for example, the voice assistant 102a has a microphone, a speaker, and a wireless and/or wired network medium (for example, a network card or a related dongle using Bluetooth, Wi-Fi, Zigbee or another wireless transmission medium, or optical fiber or another wired transmission interface, but not limited thereto). The voice assistant 102a is, for example, a smart speaker (such as the Amazon Echo produced by Amazon or the Google Assistant provided by Google, but not limited thereto) or another intelligent device that allows a user to input commands represented by a voice signal AS1 and performs the corresponding operations. Alternatively, the voice assistant 102a may be a voice circuit device built into the projector 106a, the voice circuit device including, for example, a voice recognition processor, an AI processing unit, or a neural-network processing unit. In various embodiments, before instructing the voice assistant 102a to perform a specific operation through a voice signal, the user may speak a specific wake-up word to wake up (i.e., turn on or start) the voice assistant 102a. For example, if the voice assistant 102a is an Amazon Echo, the user may speak a wake-up word such as "Alexa" and/or "Echo" to wake it up; if the voice assistant 102a is a Google Assistant, the user may speak a wake-up word such as "OK Google" and/or "Hey Google", but the invention is not limited thereto.
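The wake-up behavior described above can be sketched as a simple gate: the assistant ignores everything until an utterance begins with one of its wake-up words. This is an illustrative sketch only, not the patent's implementation; the wake words and function name are examples.

```python
# Hypothetical wake-word gate: the assistant stays asleep until an
# utterance begins with one of the configured wake-up words.
WAKE_WORDS = {"alexa", "echo", "ok google", "hey google"}

def strip_wake_word(utterance):
    """Return the command text after the wake word, or None if not woken."""
    text = utterance.lower().strip()
    # Try longer wake words first so "ok google" wins over a shorter prefix.
    for wake in sorted(WAKE_WORDS, key=len, reverse=True):
        if text.startswith(wake):
            return text[len(wake):].lstrip(" ,")
    return None  # no wake word: the assistant remains idle

print(strip_wake_word("Alexa, turn on the Company A projector"))
print(strip_wake_word("turn on the projector"))
```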
The cloud service platform 104a is connected to the voice assistant 102a through the network 110, and may be a network platform configured by the manufacturer of the voice assistant 102a to manage the voice assistant 102a. For example, if the voice assistant 102a is an Amazon Echo, the cloud service platform 104a may be Amazon Web Services (AWS). For another example, if the voice assistant 102a is a Google Assistant, the corresponding cloud service platform 104a may be a Google cloud server or a Google service platform.
In various embodiments, skill applications corresponding to the voice assistant 102a may be stored/recorded on the cloud service platform 104a, and these skill applications may be installed on the cloud service platform 104a by their manufacturers (e.g., projector manufacturers).
In one embodiment, the user can speak the voice signal AS1 to the voice assistant 102a, where the voice signal AS1 includes the invocation name corresponding to the skill application to be used and an intent, in no particular order. When the voice assistant 102a receives a voice signal including the invocation name and the intent, the voice assistant 102a can extract/obtain keywords such as the invocation name and the intent from the voice signal and transmit them to the cloud service platform 104a. The cloud service platform 104a then finds the corresponding skill application according to the invocation name, and accordingly controls the voice assistant 102a to perform the corresponding operation according to the intent the user spoke/input (for example, but not limited to, answering the user's question). For example, the user speaks a voice signal to start the projector of company A, in which "company A projector" represents the invocation name corresponding to the skill application to be used and "start" represents the intent. When the voice assistant 102a receives the voice signal for turning on the projector of company A, the voice assistant 102a transmits the voice signal to the cloud service platform 104a. The cloud service platform 104a then finds the skill application provided by company A according to the invocation name, provides a first reply signal R1 to the voice assistant 102a, and controls the voice assistant 102a to answer with the reply voice signal RS1 "the company A projector is being turned on".
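The keyword extraction described above, splitting an utterance into an invocation name that selects a skill application and an intent that drives the operation, can be sketched as follows. This is a minimal illustration under assumed names; the patent does not specify an implementation, and the invocation set is hypothetical.

```python
# Hypothetical keyword extraction: find a known invocation name inside the
# recognized utterance; the remaining words are treated as the intent.
KNOWN_INVOCATIONS = {"company a projector"}  # example skill invocation names

def extract_keywords(utterance):
    """Return (invocation_name, intent), or (None, None) if no skill matches."""
    text = utterance.lower().strip()
    for name in KNOWN_INVOCATIONS:
        if name in text:
            # Everything other than the invocation name becomes the intent.
            intent = text.replace(name, " ").strip(" ,.")
            return name, " ".join(intent.split())
    return None, None

name, intent = extract_keywords("turn on the Company A projector")
print(name, "/", intent)
```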
The management server 108 is connected to the cloud service platform 104a and the projector 106a, and is configured to manage and control the projector 106a. In one embodiment, the management server 108 may be configured and maintained by the manufacturer of the projector 106a, and may control the projector 106a according to control commands from the cloud service platform 104a. In addition, the invention does not require the management server 108 and the cloud service platform 104a to be disposed on different servers; in other embodiments, the cloud service platform 104a and the management server 108 may be disposed on the same server.
In one embodiment, a specific skill application for controlling the projector 106a may be installed on the cloud service platform 104a, enabling a user to control the operating state of the projector 106a through the voice assistant 102a. In this case, after the user speaks the invocation name and intent of the specific skill application to the voice assistant 102a, the cloud service platform 104a can forward the user's intent to the management server 108 managing the projector 106a based on the invocation name, so that the management server 108 adjusts the operating state of the projector 106a accordingly. Details of the related operations are further described with reference to fig. 2.
Fig. 2 is a flowchart illustrating a method for controlling a projector according to an embodiment of the invention. The method shown in fig. 2 can be executed cooperatively by the intelligent speech system 100 shown in fig. 1, and details of the steps in fig. 2 will be described below in conjunction with the apparatus shown in fig. 1.
First, in step S210, when the voice assistant 102a receives the voice signal AS1 for controlling the projector 106a, the voice assistant 102a can extract/obtain a plurality of first keywords from the voice signal AS1 and transmit the first keywords to the cloud service platform 104a.
In one embodiment, the first keywords may include a first alias AL1 corresponding to the projector 106a and a first control command CMD1. In various embodiments, the user registers the projector on a web page corresponding to the management server 108 of the projector manufacturer; that is, the projector 106a has been registered in advance on the management server 108 based on the first alias AL1 and unique identification information (e.g., a serial number) of the projector 106a. The first alias AL1 of the projector 106a may be selected from a plurality of preset aliases (e.g., living room, bedroom, etc.) provided by the management server 108 during the registration process. In addition, the first control command CMD1 can correspond to the user's intent, i.e., the control operation to be performed on the projector 106a (e.g., turning on, turning off, increasing/decreasing brightness, increasing/decreasing contrast, turning on the on-screen display (OSD), etc.).
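The registration step above, binding a projector's unique serial number to an alias chosen from the management server's presets, can be sketched as a small record store. The preset aliases, serial number, and function are illustrative assumptions, not taken from the patent.

```python
# Hypothetical registration on the management server: an alias chosen from
# a preset list is bound to the projector's unique serial number.
PRESET_ALIASES = {"living room", "bedroom", "meeting room"}
registry = {}  # alias -> projector serial number

def register_projector(alias, serial_number):
    """Register a projector under a preset alias; reject unknown aliases."""
    if alias not in PRESET_ALIASES:
        raise ValueError(f"alias {alias!r} is not a preset alias")
    registry[alias] = serial_number

register_projector("living room", "SN-0001")
print(registry["living room"])
```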
In this embodiment, the cloud service platform 104a may include a first semantic analysis program and a plurality of second control commands CMD2, wherein each of the second control commands CMD2 is, for example, a control command pre-established by a manufacturer of the projector 106a (e.g., power on, power off, brightness increase/decrease, contrast increase/decrease, OSD on, etc.).
In one embodiment, the first semantic analysis program may include a first check table, and the second control commands CMD2 may be recorded in the first check table for lookup, but the invention is not limited thereto. In another embodiment, the first semantic analysis program may include the information of the second control commands CMD2. In yet another embodiment, the first check table recording the second control commands CMD2 can also be stored in a database (not shown) in the cloud service platform 104a. In other embodiments, the second control commands CMD2 can also be generated by artificial intelligence or machine learning. For example, the cloud service platform 104a may perform deep learning based on the user's voice input history to learn the user's idioms and grammar and thereby establish the plurality of second control commands CMD2, but the invention is not limited thereto.
Therefore, in step S220, the cloud service platform 104a may analyze the first control command CMD1 according to the first semantic analysis program, obtain/retrieve or generate the corresponding second control command CMD2 according to the first control command CMD1, and transmit the first alias AL1 of the projector 106a and the corresponding second control command CMD2 to the management server 108. It should be noted that the cloud service platform 104a converts the first control command CMD1, originally a voice signal, into a text file for comparison with the second control commands CMD2 on the cloud service platform 104a. In one embodiment, each second control command CMD2 can be, but is not limited to, a text file.
For example, if the first control command CMD1 input by the user is "power on" and a second control command CMD2 corresponding to "power on" is stored on the cloud service platform 104a, the cloud service platform 104a retrieves that second control command CMD2 and then sends the first alias AL1 of the projector 106a and the second control command CMD2 corresponding to "power on" to the management server 108. For another example, if the first control command CMD1 input by the user is "brightness up" and a second control command CMD2 corresponding to "brightness up" is stored on the cloud service platform 104a, the cloud service platform 104a retrieves that second control command CMD2 and then sends the first alias AL1 of the projector 106a and the second control command CMD2 corresponding to "brightness up" to the management server 108.
In other embodiments, if no second control command CMD2 corresponding to the first control command CMD1 input by the user (e.g., "turn on light") exists on the cloud service platform 104a, the cloud service platform 104a can control the voice assistant 102a to output/reply an "unrecognizable" response sentence (e.g., "Sorry, I don't know what you mean"), but the invention is not limited thereto.
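The check-table lookup and fallback described in the preceding paragraphs can be sketched as follows: the first control command (already converted from speech to text) is matched against pre-established second control commands, and an unmatched command yields an apologetic reply. The command strings and token names are illustrative assumptions.

```python
# Hypothetical first check table: spoken command text -> second control
# command CMD2 pre-established by the projector manufacturer.
SECOND_COMMANDS = {
    "power on": "CMD2_POWER_ON",
    "power off": "CMD2_POWER_OFF",
    "brightness up": "CMD2_BRIGHTNESS_UP",
    "brightness down": "CMD2_BRIGHTNESS_DOWN",
}

def resolve(first_command):
    """Return (cmd2, reply); cmd2 is None when the command is unrecognized."""
    cmd2 = SECOND_COMMANDS.get(first_command.lower().strip())
    if cmd2 is None:
        # No matching CMD2: the voice assistant speaks an apology instead.
        return None, "Sorry, I don't know what you mean."
    return cmd2, None

print(resolve("power on"))       # matched: forwarded to the management server
print(resolve("turn on light"))  # unmatched: spoken fallback reply
```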
Thereafter, in step S230, the management server 108 accesses/controls the projector 106a in response to the first alias AL1, and adjusts the projector 106a to the first operating state (e.g., power on, power off, brightness increase/decrease, contrast increase/decrease, OSD on, etc.) according to the corresponding second control command CMD2.
In one embodiment, the management server 108 may generate a corresponding third control command CMD3 according to the second control command CMD2 and transmit the third control command CMD3 to the micro control unit 106a1 of the projector 106a.
Accordingly, the micro control unit 106a1 of the projector 106a can receive the third control command CMD3 and adjust the first operating state of the projector 106a according to the third control command CMD3. The micro control unit 106a1 includes, for example, a central processing unit (CPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), programmable logic device (PLD), or other similar device or a combination thereof. In one embodiment, the micro control unit 106a1 can include a first program (not shown); after receiving the third control command CMD3, the micro control unit 106a1 can obtain a fourth control command CMD4 according to the first program to adjust the first operating state of the projector 106a accordingly. In this embodiment, the fourth control command CMD4 may include a digital signal, which is specifically used to adjust the hardware operating state of the projector 106a in response to the third control command CMD3 to embody the first operating state. For example, when the third control command CMD3 is "turn up brightness", the micro control unit 106a1 can execute the first program to generate a digital signal for turning up the driving current of the light source and/or increasing the rotation speed of the fan, and use that digital signal as the fourth control command CMD4, which the micro control unit 106a1 provides to the light source device 106a2, but the invention is not limited thereto.
In another embodiment, the first program of the projector 106a may include a second check table, and the second check table may include pre-established fifth control commands and corresponding fourth control commands CMD4, where each fifth control command is identical to a possible third control command CMD3. That is, the first program of the projector 106a records the correspondence between each fifth control command and its fourth control command CMD4 in the second check table. In this case, when the micro control unit 106a1 receives the third control command CMD3, the fifth control command identical to the third control command CMD3 and the corresponding fourth control command CMD4 can be directly found in the second check table, so as to adjust the hardware operating state of the projector 106a to embody the first operating state. For example, the second check table may record a fifth control command "turn up brightness" and the corresponding fourth control command CMD4 "generate a digital signal for turning up the driving current of the light source and/or increasing the rotation speed of the fan". Accordingly, when the third control command CMD3 is "turn up brightness", the micro control unit 106a1 can search the second check table for the fifth control command identical to the third control command CMD3 and then obtain the corresponding fourth control command CMD4 (i.e., "generate a digital signal for turning up the driving current of the light source and/or increasing the rotation speed of the fan"). The projector 106a can then adjust its hardware operating state according to the fourth control command CMD4 to embody the first operating state, but the invention is not limited thereto.
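The projector-side mapping above, where a fifth control command identical to the incoming CMD3 selects a fourth control command CMD4, can be sketched as a second check table. The driving-current and fan-speed values are invented for illustration; the patent specifies only that CMD4 is a digital signal adjusting the hardware state.

```python
# Hypothetical second check table on the projector's micro control unit:
# fifth control command (identical to CMD3) -> CMD4 hardware settings,
# here modeled as target light-source current and fan speed.
SECOND_CHECK_TABLE = {
    "turn up brightness": {"led_current_ma": 900, "fan_rpm": 3200},
    "turn down brightness": {"led_current_ma": 600, "fan_rpm": 2400},
}

def handle_cmd3(cmd3):
    """MCU behavior: look up CMD3 in the table and return the CMD4 settings."""
    cmd4 = SECOND_CHECK_TABLE.get(cmd3)
    if cmd4 is None:
        raise KeyError(f"no fifth control command matches {cmd3!r}")
    return cmd4

print(handle_cmd3("turn up brightness"))
```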
In addition, in one embodiment, the projector 106a may include electronic components such as a fan, a light source driver, a speaker, etc., and the third control command CMD3 may include information about and signals for controlling the electronic components. The information of the electronic device includes, for example, an identification code, and the signal for controlling the electronic device includes a voltage or a current. Accordingly, the projector 106a can specifically adjust the operation states of the related electronic components, such as increasing the rotation speed of a fan and/or increasing the voltage or current of a light source driver, in response to the third control command CMD3, but the invention is not limited thereto.
In view of the above, the method for controlling the projector according to the present invention enables the user to control the operating state of the projector 106a by speaking the voice signal AS1 to the voice assistant 102a, thereby providing a novel, intuitive and convenient control system for the projector.
In one embodiment, when the management server 108 accesses/controls the projector 106a, the projector 106a provides a second reply signal to the management server 108. That is, the projector 106a can feed back its operating state and the current information of each component (e.g., the usage time of the light source, or the projector parameters set by the user) to the management server 108.
In one embodiment, after the projector 106a is adjusted to the first operating state, the projector 106a may report the first operating state back to the management server 108, and the management server 108 may forward the first operating state to the cloud service platform 104a. Accordingly, the cloud service platform 104a generates a first response sentence according to the first operating state and sends it to the voice assistant 102a, so that the voice assistant 102a outputs the first response sentence. For example, if the projector 106a has been turned on, the first response sentence output by the voice assistant 102a can be "the projector is turned on". For another example, if the projector 106a has been adjusted to the power-off state, the first response sentence may be "the projector is powered off", but the invention is not limited thereto.
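The feedback path just described, where the reported operating state is turned into a spoken response sentence, can be sketched as a template lookup. The state names and sentence templates are hypothetical examples consistent with the ones quoted above.

```python
# Hypothetical response generation on the cloud service platform: the
# reported operating state selects the sentence the voice assistant speaks.
RESPONSES = {
    "power_on": "The projector is turned on.",
    "power_off": "The projector is powered off.",
}

def response_for_state(state):
    """Map a reported operating state to a response sentence."""
    # Fall back to a generic sentence for states without a template.
    return RESPONSES.get(state, f"The projector is now in state {state}.")

print(response_for_state("power_on"))
```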
In other embodiments, the method for controlling the projector according to the present invention can also allow the user to control the same projector through different voice assistants, which is described as follows.
Please refer to fig. 3, which is a schematic diagram of the intelligent speech system shown in fig. 1. As shown in fig. 3, the intelligent voice system 300 of the present embodiment includes voice assistants 102a and 102b, cloud service platforms 104a and 104b, a projector 106a and a management server 108.
In the embodiment, the voice assistants 102a and 102b may be manufactured by different manufacturers and managed by different cloud service platforms. For example, if voice assistant 102a is Amazon Echo, voice assistant 102b may be a Google assistant. Accordingly, the voice assistants 102a and 102b may be managed by the cloud service platforms 104a and 104b, respectively, wherein the cloud service platform 104a is, for example, an AWS, and the cloud service platform 104b may be a Google cloud server, but the invention is not limited thereto.
In this embodiment, the user can control the projector 106a by speaking the voice signal to the voice assistant 102a, and can also control the projector 106a by speaking the voice signal to the voice assistant 102 b.
In detail, when the voice assistant 102b receives the second voice signal AS2 for controlling the projector 106a, the voice assistant 102b can extract a plurality of second keywords from the second voice signal AS2, and transmit the second keywords to the cloud service platform 104b, wherein the second keywords include the first alias AL1 and the sixth control command CMD6 corresponding to the projector 106 a.
Similar to the cloud service platform 104a, the cloud service platform 104b includes a second semantic analysis program and a plurality of seventh control commands CMD7, wherein each of the seventh control commands CMD7 is, for example, a control command pre-established by the manufacturer of the projector 106a (e.g., power on, power off, brightness increase/decrease, contrast increase/decrease, OSD on, etc.).
In addition, in the embodiment, the cloud service platform 104b may analyze the sixth control command CMD6 according to the second semantic analysis program, obtain a corresponding seventh control command CMD7 according to the sixth control command CMD6, and transmit the first alias AL1 of the projector 106a and the seventh control command CMD7 corresponding to the sixth control command CMD6 to the management server 108. Thereafter, the management server 108 accesses the projector 106a in response to the first alias AL1, and adjusts the projector 106a to the second operating state according to the seventh control command CMD7 corresponding to the sixth control command CMD 6.
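The keyword-extraction and semantic-analysis flow described above (an utterance split into a projector alias plus a spoken command, which is then mapped to a pre-established control command) can be sketched roughly as follows. The command names and the natural-language mapping are assumptions for illustration only.

```python
# Hypothetical sketch of the keyword-extraction and semantic-analysis flow:
# the voice assistant splits an utterance into a projector alias and a
# spoken command, and the cloud service platform maps the spoken command
# to a pre-established control command. All names and mappings are
# illustrative, not the patent's actual implementation.

SEMANTIC_MAP = {
    "turn on": "POWER_ON",
    "turn off": "POWER_OFF",
    "increase brightness": "BRIGHTNESS_UP",
}

def extract_keywords(utterance, known_aliases):
    """Voice assistant side: split an utterance into (alias, spoken command)."""
    for alias in known_aliases:
        if utterance.lower().startswith(alias):
            return alias, utterance[len(alias):].strip()
    raise ValueError("no registered projector alias found in utterance")

def resolve_command(spoken_command):
    """Cloud platform side: map the spoken command to a pre-established one."""
    return SEMANTIC_MAP[spoken_command.lower()]

alias, spoken = extract_keywords("living room turn on", {"living room", "bedroom"})
print(alias, resolve_command(spoken))  # living room POWER_ON
```

The management server would then access the projector registered under the returned alias and apply the resolved command.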
The details of the operations cooperatively executed by the voice assistant 102b, the cloud service platform 104b, and the management server 108 to control the projector 106a are substantially the same as those cooperatively executed by the voice assistant 102a, the cloud service platform 104a, and the management server 108 as previously taught. Likewise, the roles of the second semantic analysis program, the sixth control command CMD6, and the seventh control command CMD7 are similar to those of the first semantic analysis program, the first control command CMD1, and the second control command CMD2 as previously taught, so reference may be made to the description of the previous embodiments, which is not repeated herein.
In one embodiment, when the voice assistants 102a and 102b are located close to each other, it is possible that the voice signals AS1 or AS2 spoken by the user are received by both voice assistants 102a and 102 b.
In one embodiment, since the voice assistants 102a and 102b respond to different wake-up words, the operations performed by the voice assistants 102a and 102b also differ according to the wake-up word spoken; that is, only a single voice assistant is woken up at a time.
In various embodiments, the language family processed by the voice assistant 102a may be the same AS or different from the language family processed by the voice assistant 102b (i.e., the language family of the voice signal AS1 may be the same AS or different from the language family of the voice signal AS 2), and the operations performed by the voice assistants 102a and 102b will be different from each other according to the same or different language family.
For example, assuming that the language family (e.g., English) of the voice signal AS1 is different from the language family (e.g., Japanese) of the voice signal AS2, when the voice assistant 102a receives the voice signal AS2 and the first semantic analysis program cannot recognize it, the voice assistant 102a can output a response sentence indicating that the signal was not recognized. Similarly, when the voice assistant 102b receives the voice signal AS1 and the second semantic analysis program cannot recognize it, the voice assistant 102b can also output such a response sentence.
Conversely, if the language family of the voice signal AS1 is the same AS the language family of the voice signal AS2 (e.g., japanese), the intelligent voice system 300 can perform other operations to control the projector 106a, and the details thereof will be described with reference to fig. 4.
Please refer to fig. 4, which is a schematic diagram of the intelligent voice system according to fig. 3. In this embodiment, when the voice assistant 102a also receives the voice signal AS2 for controlling the projector 106a, the voice assistant 102a can extract the second keywords from the voice signal AS2 and transmit the second keywords to the cloud service platform 104a, wherein the second keywords include the first alias AL1 and the sixth control command CMD6 corresponding to the projector 106a.
Thereafter, the cloud service platform 104a may analyze the sixth control command CMD6 according to the first semantic analysis program, and obtain an eighth control command CMD8 according to the sixth control command CMD6, wherein the eighth control command CMD8 is one of the second control commands CMD2. Moreover, the cloud service platform 104a may transmit the first alias AL1 of the projector 106a and the eighth control command CMD8 to the management server 108. Thereafter, the management server 108 accesses the projector 106a in response to the first alias AL1, and adjusts the projector 106a to the second operating state again according to the eighth control command CMD8 corresponding to the sixth control command CMD6.
In short, when the voice assistants 102a and 102b process the same language family and both receive the same voice signal, the voice assistants 102a and 102b respectively implement repeated control of the projector 106a through the corresponding cloud service platforms 104a and 104b, such as performing two consecutive power-on operations or increasing the brightness twice in a row.
In other embodiments, the method for controlling a projector according to the present invention further allows a user to control different projectors through a voice assistant, which is described below.
Please refer to fig. 5, which is a schematic diagram of another intelligent voice system according to fig. 1. As shown in fig. 5, the intelligent voice system 500 of the present embodiment includes the voice assistant 102a, the cloud service platform 104a, the projectors 106a and 106b, and the management server 108.
In this embodiment, the projectors 106a and 106b may be manufactured by the same manufacturer. Also, similar to the projector 106a, the projector 106b may also be registered in the management server 108 in advance based on the second alias AL2 and the unique identification information (e.g., serial number) of the projector 106 b. Also, the second alias AL2 of the projector 106b may be selected from a plurality of preset aliases (e.g., living room, bedroom, etc.) provided by the management server 108 during the registration process of the projector 106 b.
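The registration scheme described above (a preset alias paired with the projector's unique identification information) can be sketched as follows. The class and method names here are assumed for illustration, not taken from the patent.

```python
# Illustrative sketch of registering a projector with the management server
# using a preset alias plus unique identification information (e.g., a
# serial number), as described above. All names are hypothetical.

PRESET_ALIASES = {"living room", "bedroom", "meeting room"}

class ManagementServer:
    def __init__(self):
        self._registry = {}  # alias -> unique identification information

    def register(self, alias, serial_number):
        """Register a projector under a preset alias."""
        if alias not in PRESET_ALIASES:
            raise ValueError(f"{alias!r} is not a preset alias")
        self._registry[alias] = serial_number

    def lookup(self, alias):
        """Resolve an alias to the projector's serial number for access."""
        return self._registry[alias]

server = ManagementServer()
server.register("bedroom", "SN-0002")
print(server.lookup("bedroom"))  # SN-0002
```

Registering each projector under a distinct alias is what lets one voice assistant address several projectors unambiguously.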
In this embodiment, the user can control the projector 106a by speaking the voice signal to the voice assistant 102a, and can also control the projector 106b by speaking the voice signal to the voice assistant 102 a.
In detail, when the voice assistant 102a receives the voice signal AS3 for controlling the projector 106b, the voice assistant 102a may extract a plurality of third keywords from the voice signal AS3, and transmit the third keywords to the cloud service platform 104a, wherein the third keywords include the second alias AL2 and the ninth control command CMD9 corresponding to the projector 106 b.
Then, the cloud service platform 104a may analyze the ninth control command CMD9 according to the first semantic analysis program to obtain a tenth control command CMD10, wherein the tenth control command CMD10 is one of the second control commands CMD2.
Also, the cloud service platform 104a may transmit the second alias AL2 of the projector 106b and a tenth control command CMD10 corresponding to the ninth control command CMD9 to the management server 108. Thereafter, the management server 108 accesses the projector 106b in response to the second alias AL2, and adjusts the projector 106b to the third operating state according to the tenth control command CMD10 corresponding to the ninth control command CMD 9.
The details of the operations executed by the voice assistant 102a, the cloud service platform 104a, and the management server 108 to complete the control of the projector 106b are substantially the same as the operations executed by the voice assistant 102a, the cloud service platform 104a, and the management server 108 to complete the control of the projector 106a, and the roles of the ninth control command CMD9 and the tenth control command CMD10 are also similar to the roles of the first control command CMD1 and the second control command CMD2, so the details thereof can refer to the description in the previous embodiments and are not repeated herein.
Please refer to fig. 6, which is a schematic diagram of the intelligent voice system according to fig. 1, fig. 3, fig. 4 and fig. 5. As shown in fig. 6, the intelligent voice system 600 includes the voice assistants 102a and 102b, the cloud service platforms 104a and 104b, the projectors 106a and 106b, and the management server 108. The intelligent voice system 600 of the present embodiment may be considered as a combination of the intelligent voice systems 100, 300 and 500 previously taught. That is, when the user wants to control the projectors 106a and/or 106b, the user can speak a voice signal to the voice assistant 102a and/or 102b, so that the voice assistant 102a and/or 102b and the corresponding cloud service platform 104a and/or 104b cooperate with the management server 108 to realize the related control of the projectors 106a and/or 106b. For details, reference may be made to the description of the previous embodiments, which is not repeated herein.
Please refer to fig. 7A, which is a schematic diagram of another intelligent voice system according to fig. 1. As shown in fig. 7A, the intelligent voice system 700 includes the voice assistant 102a, the cloud service server 710 and the projector 106a, wherein the cloud service server 710 is connected to the voice assistant 102a and the projector 106a and is used for managing and controlling the voice assistant 102a and the projector 106a. In this embodiment, the cloud service server 710 can be regarded as a combination of the cloud service platform 104a and the management server 108 in fig. 1, and thus can integrally execute the operations originally executed separately by the cloud service platform 104a and the management server 108 in fig. 1.
For example, when the voice assistant 102a receives the voice signal AS1 for controlling the projector 106a, the voice assistant 102a extracts a plurality of first keywords from the voice signal AS1 and transmits the first keywords to the cloud service server 710. As described with respect to FIG. 1, the first keyword may include a first alias AL1 corresponding to projector 106a and a first control command CMD 1.
Moreover, the cloud service server 710 may include the first semantic analysis program and the plurality of second control commands CMD2 included in the cloud service platform 104a of fig. 1. Similar to the cloud service platform 104a, the cloud service server 710 may analyze the first control command CMD1 according to the first semantic analysis program, and obtain or retrieve or generate the corresponding second control command CMD2 according to the first control command CMD 1.
Thereafter, similar to the management server 108, the cloud service server 710 may access the projector 106a in response to the first alias AL1 and adjust the projector 106a to the first operating state according to the second control command CMD2 corresponding to the first control command CMD1. For details of the related operations of the present embodiment, reference may also be made to the description of the previous embodiments, which is not repeated herein.
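The integrated architecture of fig. 7A, in which one server performs both the semantic analysis of the cloud service platform and the alias-based access of the management server, can be sketched as follows. The class, its methods, and the command mapping are hypothetical.

```python
# Hypothetical sketch of the integrated cloud service server 710 of
# fig. 7A, combining the semantic analysis of the cloud service platform
# with the alias-based access of the management server in one component.

SEMANTIC_MAP = {"turn on": "POWER_ON", "turn off": "POWER_OFF"}

class CloudServiceServer:
    def __init__(self):
        self._projector_states = {}  # alias -> last applied control command

    def register(self, alias):
        """Register a projector alias so it can be accessed later."""
        self._projector_states[alias] = None

    def handle_keywords(self, alias, first_command):
        """Analyze CMD1, derive the corresponding CMD2, and apply it to
        the projector registered under the given alias."""
        second_command = SEMANTIC_MAP[first_command.lower()]
        self._projector_states[alias] = second_command
        return second_command

server = CloudServiceServer()
server.register("living room")
print(server.handle_keywords("living room", "turn on"))  # POWER_ON
```

Collapsing the two roles into one server removes one network hop compared with the split arrangement of fig. 1.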
Please refer to fig. 7B, which is a schematic diagram of another intelligent voice system according to fig. 1, generally consistent with the intelligent voice system of fig. 7A. As shown in fig. 7B, the intelligent voice system 700 includes the cloud service server 710 and the projector 106a, wherein the cloud service server 710 is connected to the projector 106a and is used for managing and controlling the voice assistant 102a and the projector 106a. In this embodiment, the cloud service server 710 can likewise be regarded as a combination of the cloud service platform 104a and the management server 108 in fig. 1, and thus can integrally execute the operations originally executed separately by the cloud service platform 104a and the management server 108 in fig. 1. In addition, the voice assistant 102a is integrated into the projector 106a, so that the operations originally performed separately by the voice assistant 102a and the projector 106a of fig. 1 can be integrally performed.
For example, when the voice circuit device 102a of the projector 106a receives the voice signal AS1 for controlling the projector 106a, the voice circuit device 102a extracts a plurality of first keywords from the voice signal AS1 and transmits the first keywords to the cloud service server 710. As described with respect to fig. 1, the first keywords may include the first alias AL1 corresponding to the projector 106a and the first control command CMD1.
Moreover, the cloud service server 710 may include the first semantic analysis program and the plurality of second control commands CMD2 included in the cloud service platform 104a of fig. 1. Similar to the cloud service platform 104a, the cloud service server 710 may analyze the first control command CMD1 according to the first semantic analysis program, and obtain or retrieve or generate the corresponding second control command CMD2 according to the first control command CMD 1.
Thereafter, similar to the management server 108, the cloud service server 710 may access the projector 106a in response to the first alias AL1 and adjust the projector 106a to the first operating state according to the second control command CMD2 corresponding to the first control command CMD1. For details of the related operations of the present embodiment, reference may also be made to the description of the previous embodiments, which is not repeated herein.
Therefore, the intelligent voice system and the projector control method provided by the invention enable a user to control the projector simply by speaking voice signals to a voice assistant. Moreover, the invention also provides a system for controlling one or more projectors manufactured by the same manufacturer through one or more voice assistants. Therefore, the invention can provide a novel, intuitive and convenient projector control system for users.
In other embodiments, the present invention further provides an interface for enabling a user to control a projector to project an audio/video platform through a voice assistant, which will be further described below.
First, a user may control the projector 106a to project an interface of an audio/video platform (e.g., a YouTube™ interface) in the manner taught in fig. 1. Specifically, assuming that the voice signal AS1 in fig. 1 is used to control the projector 106a to launch the av platform interface, the voice assistant 102a may extract a plurality of keywords from the voice signal AS1 and forward the keywords to the cloud service platform 104a. The keywords may include the first alias AL1 of the projector 106a and the first control command CMD1 (e.g., "open the video platform interface").
Then, the cloud service platform 104a can analyze the first control command CMD1 according to the first semantic analysis program, and find out the corresponding second control command CMD2 according to the first control command CMD1. The cloud service platform 104a may then forward the first alias AL1 of the projector 106a and the second control command CMD2 corresponding to the first control command CMD1 to the management server 108. Accordingly, the management server 108 may access/control the projector 106a in response to the first alias AL1 and start the first application program of the projector 106a according to the second control command CMD2.
In the present embodiment, the first application program may be stored in a storage device (not shown) of the projector 106a (e.g., a memory, a flash memory, etc.). When the above-mentioned first application program (e.g., a YouTube™ application) is turned on, the projector 106a may accordingly connect to a first website (e.g., the YouTube™ website) that provides the video platform interface. After the projector 106a receives the video signal from the first website, the video platform interface may be launched on a projection surface, such as a screen or a wall. In various embodiments, the projector 106a may be configured with a wireless network medium and/or a wired network medium for receiving the video and audio signals, such as but not limited to a network card or a related dongle, Bluetooth, wireless fidelity (Wi-Fi), Zigbee or other wireless transmission media, or optical fiber or other wired transmission interfaces.
After the projector 106a is controlled to launch the video platform interface, the present invention further provides a method for enabling the user to control the launch of the projector 106a to the video platform interface through the voice assistant 102a, which is described in detail below.
Referring to fig. 8 and 9, fig. 8 is a schematic diagram illustrating a situation where the intelligent voice system controls the av platform interface according to fig. 1, and fig. 9 is a flowchart illustrating a method for controlling the projector according to fig. 8. The method shown in fig. 9 can be executed by the intelligent voice system 100 of fig. 8, and the details of the steps in fig. 9 will be described below with reference to the apparatus shown in fig. 8.
First, in step S910, when the voice assistant 102a receives the voice signal AS1 'for controlling the av platform interface, the voice assistant 102a extracts/obtains a plurality of keywords from the voice signal AS 1' and forwards the keywords to the cloud service platform 104a, wherein the keywords include the first alias AL1 and the first interface control command ICMD1 of the corresponding projector 106 a. The keywords may also include a name of the wake-up audiovisual platform interface. In this embodiment, the cloud service platform 104a further includes a plurality of second interface control commands ICMD 2.
Thereafter, in step S920, the cloud service platform 104a may analyze the first interface control command ICMD1 according to the first semantic analyzer, obtain a corresponding second interface control command ICMD2 according to the first interface control command ICMD1, and transmit the first alias AL1 of the projector 106a and the second interface control command ICMD2 corresponding to the first interface control command ICMD1 to the management server 108.
Next, in step S930, the management server 108 accesses/controls the projector 106a according to the first alias AL1, and adjusts the launch condition of the projector 106a launching the av platform interface. In this embodiment, the microprocessor 106a1 of the projector 106a can receive the second interface control command ICMD2 and generate a corresponding third interface control command ICMD3 for controlling the operation of the first application program, thereby achieving the effect of adjusting the launch condition of the video platform interface.
In various embodiments, the second interface control command ICMD2 and the corresponding third interface control command ICMD3 may present different configurations, so that the user may control the launch of the video platform interface as desired. In order to make the concept of the present invention more clear, several specific examples are further illustrated below.
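Before the specific examples, the two-stage mapping described above can be sketched roughly: the cloud service platform parses the spoken command into a structured second interface control command (an action plus an optional argument), and the projector's microprocessor turns it into an application-level command. The action list and command formats below are assumptions for illustration only.

```python
# Hypothetical sketch of the two-stage interface-command mapping described
# above: the cloud service platform parses the spoken command into a
# structured ICMD2 (action + optional argument), and the microprocessor
# formats it as an application-level ICMD3. Formats are illustrative only.

ACTIONS = ("search", "play tag", "play bookmark",
           "add bookmark", "scroll down", "stop scrolling")

def parse_icmd1(spoken):
    """Cloud side: split e.g. 'search xxx' into an action and argument."""
    for action in ACTIONS:
        if spoken.lower().startswith(action):
            argument = spoken[len(action):].strip() or None
            return {"action": action, "argument": argument}
    raise ValueError("unrecognized interface control command")

def to_icmd3(icmd2):
    """Projector microprocessor: format ICMD2 for the first application."""
    verb = icmd2["action"].upper().replace(" ", "_")
    return verb if icmd2["argument"] is None else f"{verb}:{icmd2['argument']}"

print(to_icmd3(parse_icmd1("search cooking videos")))  # SEARCH:cooking videos
print(to_icmd3(parse_icmd1("scroll down")))            # SCROLL_DOWN
```

Each of the examples that follow (search, scroll, play, bookmark) is an instance of this action-plus-argument pattern.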
Please refer to fig. 10, which is a schematic diagram illustrating a search operation performed in the av platform interface according to fig. 8 and 9. In the present embodiment, the av platform interface 1010a is, for example, a preset frame displayed by the projector 106a after the first application program is started according to the second control command CMD2, and may include the search frame 1011. Further, the preset frame is a frame of the video platform interface 1010a, such as the home page of the YouTube website.
In this embodiment, when the video/audio platform interface is turned on, the user can speak the voice signal AS1' including the first alias AL1 and keywords such as "search xxx" (where xxx can be the video/audio keyword to be searched) to the voice assistant 102a. Accordingly, the voice assistant 102a can send "search xxx" as the first interface control command ICMD1 along with the first alias AL1 to the cloud service platform 104a. Thereafter, the cloud service platform 104a may analyze the first interface control command ICMD1 to find a corresponding second interface control command ICMD2. In this embodiment, the second interface control command ICMD2 may include an audio/video search command "search" and a search keyword "xxx".
Thereafter, the management server 108 can access the projector 106a according to the first alias AL1 and control the projector 106a to be launched according to the second interface control command ICMD 2. Accordingly, the first application program of the projector 106a can perform a search in the first website according to the search keyword to find one or more video/ audio items 1021, 1022, etc. corresponding to the search keyword. Projector 106a may then launch video platform interface 1010b, which includes searched video items 1021, 1022, accordingly, as shown in fig. 10. Therefore, the user can realize the searching operation on the video platform interface through the voice assistant 102 a.
Please refer to fig. 11, which is a schematic diagram illustrating interface scrolling executed in the video platform interface according to fig. 10. In the present embodiment, it is assumed that the projector 106a is currently launching the av platform interface 1010b, which includes av items 1021, 1022, etc.
In this embodiment, the user can speak the voice signal AS1' including keywords such as the first alias AL1 and "scroll down" to the voice assistant 102a. Accordingly, the voice assistant 102a may send "scroll down" as the first interface control command ICMD1 along with the first alias AL1 to the cloud service platform 104a. Thereafter, the cloud service platform 104a may analyze the first interface control command ICMD1 to find a corresponding second interface control command ICMD2. In this embodiment, the second interface control command ICMD2 may include a "scroll down" interface scroll command signal.
Thereafter, the management server 108 can access the projector 106a according to the first alias AL1 and control the projector 106a to be launched according to the second interface control command ICMD 2. Accordingly, the first application program of the projector 106a may scroll the video platform interface 1010b according to the interface scroll command signal. Projector 106a may then project video platform interface 1010b in the roll accordingly.
In one embodiment, the user may speak the voice signal AS 1' including the keywords such AS the first alias AL1 and "stop scrolling" to the voice assistant 102a while the av platform interface 1010b is being scrolled. Accordingly, the voice assistant 102a may send "stop scrolling" as the first interface control command ICMD1 along with the first alias AL1 to the cloud service platform 104 a. Thereafter, the cloud service platform 104a may analyze the first interface control command ICMD1 to find a corresponding second interface control command ICMD 2. In this embodiment, the second interface control command ICMD2 may include a stop scroll command signal "stop scrolling".
Thereafter, the management server 108 can access the projector 106a according to the first alias AL1 and control the projector 106a to be launched according to the second interface control command ICMD 2. Accordingly, the first application program of the projector 106a may stop scrolling the video platform interface 1010b according to the stop scrolling command signal. Then, the projector 106a can launch the video platform interface 1010c (i.e. the video platform interface 1010b after the scrolling is stopped) correspondingly, which includes the searched video items 1022, 1023, …, 102N, as shown in fig. 11. Therefore, the user can realize the interface scrolling operation on the video platform interface through the voice assistant 102 a.
Please refer to fig. 12, which is a schematic diagram illustrating a playback operation performed in an audio/video platform interface according to fig. 11. In the present embodiment, it is assumed that the projector 106a is currently launching the AV platform interface 1010c, which includes the AV items 1022, 1023, … 102N, and each of the AV items 1022-102N has a corresponding item tag 1022a, 1023a, … 102Na, respectively.
In this embodiment, for example, the user can speak the voice signal AS1' including keywords such as the first alias AL1 and "play tag oo" (where the tag oo can be the item tag corresponding to the audiovisual item to be played) to the voice assistant 102a. For convenience of explanation, it is assumed that the user wants to play the av item 102N, whose item tag is 102Na; that is, the user can say "play tag N", but the invention is not limited thereto.
Accordingly, the voice assistant 102a may send "play tag N" as the first interface control command ICMD1 along with the first alias AL1 to the cloud service platform 104a. Thereafter, the cloud service platform 104a may analyze the first interface control command ICMD1 to find a corresponding second interface control command ICMD2. In this embodiment, the second interface control command ICMD2 may include a play command "play" and a play tag "tag N".
Thereafter, the management server 108 can access the projector 106a according to the first alias AL1 and control the projector 106a to be launched according to the second interface control command ICMD 2. Accordingly, the first application program of the projector 106a can play the av item 102N corresponding to the "tag N" according to the play command and the play tag. Projector 106a may then launch video platform interface 1010d accordingly, which includes video content of video item 102N, for example, as shown in fig. 12. Therefore, the user can realize the playing operation on the video platform interface through the voice assistant 102 a.
Please refer to fig. 13, which is a schematic diagram illustrating bookmark adding operation performed in an interface of an audio/video platform according to fig. 12. In the present embodiment, it is assumed that projector 106a is currently launching an audiovisual platform interface 1010d, which is, for example, the audiovisual content of audiovisual item 102N.
In this embodiment, the user can speak the voice signal AS1' including keywords such as the first alias AL1 and "add bookmark zzz" (where zzz can be a bookmark name set by the user as required) to the voice assistant 102a.
Accordingly, the voice assistant 102a may send "add bookmark zzz" as the first interface control command ICMD1 along with the first alias AL1 to the cloud service platform 104 a. Thereafter, the cloud service platform 104a may analyze the first interface control command ICMD1 to find a corresponding second interface control command ICMD 2. In this embodiment, the second interface control command ICMD2 may include a bookmark add command "add bookmark" and a specified bookmark name "zzz".
Thereafter, the management server 108 can access the projector 106a according to the first alias AL1 and control the launch of the projector 106a according to the second interface control command ICMD2. Accordingly, the first application program of the projector 106a can mark the av content of the av item 102N with the designated bookmark name (i.e., zzz) on the av platform interface 1010d according to the bookmark adding command and the designated bookmark name. The projector 106a may then launch the av platform interface 1010e accordingly, which may display a message 1310 that the bookmark was successfully added, for example, as shown in fig. 13. It is worth mentioning that the information marking the av content of the av item 102N with the designated bookmark name (i.e., zzz) is stored in a server (not shown) of the av platform by the first application program of the projector 106a.
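The bookmark bookkeeping described above, in which the av platform's server records the designated bookmark name so that a later "play bookmark" request can resolve it, can be sketched as follows. The storage scheme, class, and identifiers are assumptions for illustration.

```python
# Illustrative sketch: how the av platform's server might store bookmarks
# added via the first application program, keyed by the user-chosen name,
# so that a later "play bookmark zzz" resolves to the bookmarked av item.
# The storage scheme and identifiers here are assumptions.

class BookmarkStore:
    def __init__(self):
        self._bookmarks = {}  # designated bookmark name -> av item identifier

    def add(self, name, av_item_id):
        """Record a bookmark and return a confirmation message."""
        self._bookmarks[name] = av_item_id
        return f"Bookmark {name!r} added successfully."

    def resolve(self, name):
        """Look up which av item to play for a bookmark name."""
        return self._bookmarks[name]

store = BookmarkStore()
store.add("zzz", "item-102N")
print(store.resolve("zzz"))  # item-102N
```

Keeping the bookmark table on the platform's server rather than on the projector lets the same bookmark be replayed from any registered projector.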
After the bookmark adding operation is completed, the present invention provides a method for enabling the user to intuitively and easily play the added bookmark, which is described in detail below.
Please refer to fig. 14, which is a schematic diagram illustrating a bookmark playing operation performed in the av platform interface according to fig. 13. In this embodiment, assume that the projector 106a is currently launching the av platform interface 1010a.
In this embodiment, the user can speak the speech signal AS 1' including keywords such AS the first alias AL1 and the "play bookmark zzz" (where zzz is a bookmark that the user has added previously, AS shown in fig. 13) to the speech assistant 102 a.
Accordingly, the voice assistant 102a may send "play bookmark zzz" as the first interface control command ICMD1 and along with the first alias AL1 to the cloud service platform 104 a. Thereafter, the cloud service platform 104a may analyze the first interface control command ICMD1 to find a corresponding second interface control command ICMD 2. In this embodiment, the second interface control command ICMD2 may include a bookmark play command "play bookmark" and a specified bookmark name "zzz".
Thereafter, the management server 108 can access the projector 106a according to the first alias AL1 and control the launch of the projector 106a according to the second interface control command ICMD2. Accordingly, according to the bookmark playing command and the designated bookmark name, the first application program of the projector 106a obtains the av content corresponding to the designated bookmark name stored in the server of the av platform, and the projector 106a plays that av content. As shown in fig. 14, the projector 106a can launch the av platform interface 1010d accordingly, which can display a message 1410 that the bookmark has been successfully played in addition to the av content of the av item 102N.
Therefore, the invention enables the user to add a bookmark on the video platform interface through the voice assistant 102a and to play a previously added bookmark simply by speaking a voice signal, thereby providing a novel, intuitive, and convenient mode of operation.
In summary, the intelligent voice system and the method for controlling the projector according to the embodiments of the invention enable the user to control the projector by speaking a voice signal to the voice assistant. Moreover, the invention also allows one or more projectors made by the same manufacturer to be controlled through the corresponding voice assistants.
In addition, the intelligent voice system provided by the embodiments of the invention enables a user to control the projector by speaking voice signals to a voice circuit device integrated in the projector.
After the user controls the projector, through the voice assistant, to project the video platform interface, the invention further enables the user to control how the projected video platform interface behaves by speaking voice signals to the voice assistant, for example searching, scrolling, stopping scrolling, playing a certain video item, adding a bookmark corresponding to a certain video item, or playing a certain bookmark. A novel, intuitive, and convenient way of controlling the projector is thereby provided to users.
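The overall voice-control pipeline summarized above can be illustrated with a minimal sketch. Every identifier below (the keyword-splitting convention, the semantic table, the command codes) is an assumption made for illustration; the patent does not define a concrete API.

```python
# Hypothetical first look-up table on the cloud service platform:
# first control command (spoken phrase) -> second control command.
SEMANTIC_TABLE = {
    "turn on": "CMD_POWER_ON",
    "scroll": "CMD_SCROLL",
    "stop scrolling": "CMD_SCROLL_STOP",
}


def extract_keywords(utterance: str):
    """Voice-assistant side: split an utterance of the assumed form
    '<alias>, <command>' into the projector alias and the control command."""
    alias, command = (part.strip() for part in utterance.split(",", 1))
    return alias, command


def handle_utterance(utterance: str, registry: dict) -> str:
    """End-to-end: the assistant extracts keywords, the cloud platform
    resolves the command via the look-up table, and the management server
    routes it to the projector registered under the alias."""
    alias, first_cmd = extract_keywords(utterance)
    second_cmd = SEMANTIC_TABLE[first_cmd]  # semantic analysis (table look-up)
    registry[alias].append(second_cmd)      # deliver to the aliased projector
    return second_cmd
```

With `registry = {"office": []}`, the utterance `"office, turn on"` would resolve to the power-on command and be queued for the projector registered under the alias "office".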
It should be understood that the above-mentioned embodiments are only preferred embodiments of the invention and do not limit its scope; all simple equivalent changes and modifications made according to the claims and the specification remain within the scope of the invention. No embodiment or claim of the invention needs to achieve all of the objects, advantages, or features disclosed herein. In addition, the abstract and the title are provided to assist patent searching and are not intended to limit the scope of the invention. Furthermore, terms such as "first" and "second" in the description or the claims are used only to name elements or to distinguish different embodiments or ranges, and are not used to limit the number of elements.

Claims (48)

1. An intelligent voice system, comprising a first voice assistant, a first cloud service platform, a first projector and a management server, wherein:
the first cloud service platform is connected with and manages the first voice assistant; and
the management server is connected with the first cloud service platform and the first projector and is used for managing and controlling the first projector,
wherein when the first voice assistant receives a first voice signal for controlling the first projector, the first voice assistant extracts a plurality of first keywords from the first voice signal and transmits the plurality of first keywords to the first cloud service platform, wherein the plurality of first keywords include a first alias and a first control command corresponding to the first projector, and wherein the first cloud service platform includes a first semantic analysis program and a plurality of second control commands,
the first cloud service platform analyzes the first control command according to the first semantic analysis program, obtains, extracts, or generates a corresponding second control command according to the first control command, and transmits the first alias of the first projector and the corresponding second control command to the management server; and the management server accesses the first projector in response to the first alias and adjusts the first projector to a first operating state according to the corresponding second control command.
2. The intelligent voice system according to claim 1, wherein the first projector has been previously registered on the management server based on the first alias and the unique identification information of the first projector.
3. The intelligent voice system of claim 1, wherein the first alias for the first projector is selected from a plurality of preset aliases provided by the management server in a registration process for the first projector.
4. The intelligent voice system according to claim 1, wherein, after the first projector is adjusted to the first operating state, the first projector reports the first operating state back to the management server; the management server transmits the first operating state to the first cloud service platform; and the first cloud service platform generates a first reaction sentence according to the first operating state and sends the first reaction sentence to the first voice assistant, so that the first voice assistant outputs the first reaction sentence.
5. The intelligent voice system of claim 1, wherein the second control command is pre-established at the first cloud service platform or generated via artificial intelligence or machine learning.
6. The intelligent voice system according to claim 1, wherein the first semantic analysis program comprises a first look-up table, and the second control command is established in the first look-up table.
7. The intelligent voice system according to claim 1, wherein the first semantic analysis program includes information corresponding to the plurality of second control commands.
8. The intelligent voice system of claim 1, wherein the first cloud service platform comprises a database and the second control command is stored in the database, wherein the database comprises a first look-up table and the second control command is established in the first look-up table.
9. The intelligent voice system according to claim 1, wherein adjusting the first projector to the first operating state is turning on the first projector.
10. The intelligent voice system according to claim 1, wherein adjusting the first projector to the first operating state is turning on a screen adjustment menu of the first projector.
11. The intelligent voice system according to claim 1, wherein adjusting the first projector to the first operating state is adjusting a projection brightness of the first projector.
12. The intelligent voice system according to claim 1, wherein the management server generates a corresponding third control command according to the second control command corresponding to the first control command, and transmits the third control command to a micro control unit of the first projector, and the micro control unit of the first projector receives the third control command and adjusts the first operating state of the first projector according to the third control command.
13. The intelligent voice system according to claim 12, wherein the micro control unit of the first projector comprises a first program, and the micro control unit of the first projector receives the third control command, obtains a fourth control command according to the first program, and adjusts the first operating state of the first projector according to the fourth control command, wherein the fourth control command comprises a digital signal.
14. The intelligent voice system according to claim 13, wherein the first program of the first projector comprises a second look-up table comprising pre-established fifth control commands and corresponding fourth control commands, wherein the fifth control commands are identical to the third control commands.
15. The intelligent voice system according to claim 1, wherein the first projector comprises at least one electronic component, the third control command comprises information of the at least one electronic component and a signal for controlling the at least one electronic component, the at least one electronic component comprises at least one of a fan, a light source driver, and a speaker, the information of the at least one electronic component comprises an identification code of the at least one electronic component, and the signal for controlling the at least one electronic component comprises a voltage or a current.
16. The intelligent voice system according to claim 1, further comprising:
a second voice assistant;
a second cloud service platform connected to the second voice assistant and the management server and managing the second voice assistant,
wherein when the second voice assistant receives a second voice signal for controlling the first projector, the second voice assistant extracts a plurality of second keywords from the second voice signal, and transmits the plurality of second keywords to the second cloud service platform, wherein the plurality of second keywords comprise the first alias and a sixth control command corresponding to the first projector,
the second cloud service platform comprises a second semantic analysis program and a plurality of seventh control commands; the second cloud service platform analyzes the sixth control command according to the second semantic analysis program, obtains a corresponding seventh control command according to the sixth control command, and transmits the first alias of the first projector and the seventh control command corresponding to the sixth control command to the management server; and the management server accesses the first projector according to the first alias and adjusts the first projector to a second operating state according to the seventh control command corresponding to the sixth control command.
17. The intelligent voice system according to claim 16, wherein the language of the first voice signal is different from the language of the second voice signal,
when the first voice assistant receives the second voice signal and the first semantic analysis program cannot recognize the second voice signal, the first voice assistant outputs an unrecognized reaction sentence; and
when the second voice assistant receives the first voice signal and the second semantic analysis program cannot recognize the first voice signal, the second voice assistant outputs an unrecognized reaction sentence.
18. The intelligent voice system according to claim 16, wherein the language of the first voice signal is the same as the language of the second voice signal,
when the first voice assistant also receives the second voice signal for controlling the first projector, the first voice assistant extracts the plurality of second keywords from the second voice signal and transmits the plurality of second keywords to the first cloud service platform, wherein the plurality of second keywords include the first alias and the sixth control command corresponding to the first projector;
the first cloud service platform analyzes the sixth control command according to the first semantic analysis program, obtains an eighth control command according to the sixth control command, and transmits the first alias of the first projector and the eighth control command to the management server; and
the management server accesses the first projector in response to the first alias, and adjusts the first projector to the second operating state according to the eighth control command corresponding to the sixth control command.
19. The intelligent voice system of claim 18, wherein the eighth control command is one of the plurality of second control commands.
20. The intelligent voice system according to claim 1, further comprising:
a second projector connected to and controlled by the management server, wherein when the first voice assistant receives a third voice signal for controlling the second projector, the first voice assistant extracts a plurality of third keywords from the third voice signal, and transmits the plurality of third keywords to the first cloud service platform, wherein the plurality of third keywords include a second alias and a ninth control command corresponding to the second projector;
the first cloud service platform analyzes the ninth control command according to the first semantic analysis program, obtains a tenth control command according to the ninth control command, and transmits the second alias of the second projector and the tenth control command corresponding to the ninth control command to the management server; and the management server accesses the second projector according to the second alias and adjusts the second projector to a third operating state according to the tenth control command corresponding to the ninth control command.
21. The intelligent voice system of claim 20, wherein the tenth control command is one of the plurality of second control commands.
22. The intelligent voice system of claim 20, wherein the second projector has been previously registered on the management server based on the second alias and the unique identification information of the second projector.
23. The intelligent voice system of claim 20, wherein the second alias for the second projector is selected from a plurality of preset aliases provided by the management server in a registration process for the second projector.
24. An intelligent voice system, comprising a first voice assistant, a first projector, and a first cloud service server, wherein:
the first cloud service server is connected with the first voice assistant and the first projector and is used for managing and controlling the first voice assistant and the first projector;
wherein when the first voice assistant receives a first voice signal for controlling the first projector, the first voice assistant extracts a plurality of first keywords from the first voice signal and transmits the plurality of first keywords to the first cloud service server, wherein the plurality of first keywords comprise a first alias and a first control command corresponding to the first projector, and wherein the first cloud service server comprises a first semantic analysis program and a plurality of second control commands; the first cloud service server analyzes the first control command according to the first semantic analysis program, obtains, extracts, or generates the second control command corresponding to the first control command according to the first control command, accesses the first projector in response to the first alias, and adjusts the first projector to a first operating state according to the second control command corresponding to the first control command.
25. A method of controlling a projector for use in the intelligent voice system of claim 1, the method comprising:
when the first voice assistant receives a first voice signal for controlling the first projector, extracting a plurality of first keywords from the first voice signal by the first voice assistant, and transmitting the plurality of first keywords to the first cloud service platform, wherein the plurality of first keywords comprise a first alias and a first control command corresponding to the first projector, wherein the first cloud service platform comprises a first semantic analysis program, and the first cloud service platform comprises a plurality of second control commands;
analyzing the first control command by the first cloud service platform according to the first semantic analysis program, obtaining, extracting, or generating a corresponding second control command according to the first control command, and transmitting the first alias of the first projector and the corresponding second control command to the management server; and
accessing, by the management server, the first projector in response to the first alias, and adjusting the first projector to a first operating state according to the second control command corresponding to the first control command.
26. The method of claim 25, wherein the first projector has been previously registered on the management server based on the first alias and unique identification information of the first projector.
27. The method of claim 25, wherein the first alias for the first projector is selected from a plurality of preset aliases provided by the management server in a registration process for the first projector.
28. The method of claim 25, after the first projector is adjusted to the first operational state, further comprising:
reporting, by the first projector, the first operating state back to the management server;
transmitting, by the management server, the first operating state to the first cloud service platform; and
generating, by the first cloud service platform, a first reaction sentence according to the first operating state, and sending the first reaction sentence to the first voice assistant so that the first voice assistant outputs the first reaction sentence.
29. The method of claim 25, wherein the second control command is pre-established at the first cloud service platform or generated via artificial intelligence or machine learning.
30. The method of claim 25, wherein the first semantic analysis program includes a first look-up table, and wherein the second control command is created in the first look-up table.
31. The method of claim 25, wherein the first semantic analysis program includes information corresponding to the plurality of second control commands.
32. The method of claim 25, wherein the first cloud service platform comprises a database and the second control command is stored in the database, wherein the database comprises a first look-up table and the second control command is established in the first look-up table.
33. The method as claimed in claim 25, wherein adjusting the first projector to the first operating state is turning on the first projector.
34. The method as claimed in claim 25, wherein adjusting the first projector to the first operating state is turning on a screen adjustment menu of the first projector.
35. The method as claimed in claim 25, wherein adjusting the first projector to the first operating state is adjusting a projection brightness of the first projector.
36. The method of claim 25, further comprising:
generating a corresponding third control command by the management server according to the second control command corresponding to the first control command, and transmitting the third control command to a micro control unit of the first projector; and
receiving the third control command by the micro control unit of the first projector, and adjusting the first operating state of the first projector according to the third control command.
37. The method as claimed in claim 36, wherein the micro control unit of the first projector comprises a first program, and the micro control unit of the first projector receives the third control command, obtains a fourth control command according to the first program, and adjusts the first operating state of the first projector according to the fourth control command, wherein the fourth control command comprises a digital signal.
38. The method of claim 37, wherein the first program of the first projector includes a second look-up table, the second look-up table including pre-established fifth control commands and corresponding fourth control commands, wherein the fifth control commands are identical to the third control commands.
39. The method of claim 25, wherein the first projector comprises at least one electronic component, the third control command comprises information of the at least one electronic component and a signal for controlling the at least one electronic component, the at least one electronic component comprises at least one of a fan, a light source driver and a speaker, the information of the at least one electronic component comprises an identification code of the at least one electronic component, and the signal for controlling the at least one electronic component comprises a voltage or a current.
40. The method of claim 25, wherein the smart voice system further comprises a second voice assistant and a second cloud service platform, the second cloud service platform connects the second voice assistant and the management server and manages the second voice assistant, and the method further comprises:
when the second voice assistant receives a second voice signal for controlling the first projector, extracting, by the second voice assistant, a plurality of second keywords from the second voice signal, and transmitting the plurality of second keywords to the second cloud service platform, wherein the plurality of second keywords include the first alias and a sixth control command corresponding to the first projector, wherein the second cloud service platform includes a second semantic analysis program, and the second cloud service platform includes a plurality of seventh control commands;
analyzing, by the second cloud service platform, the sixth control command according to the second semantic analysis program, obtaining a corresponding seventh control command according to the sixth control command, and transmitting the first alias of the first projector and the seventh control command corresponding to the sixth control command to the management server; and
accessing, by the management server, the first projector in response to the first alias, and adjusting the first projector to a second operating state according to the seventh control command corresponding to the sixth control command.
41. The method of claim 40, wherein the language of the first voice signal is different from the language of the second voice signal, and the method further comprises:
outputting, by the first voice assistant, an unrecognized reaction sentence when the first voice assistant receives the second voice signal and the first semantic analysis program cannot recognize the second voice signal; and
outputting, by the second voice assistant, an unrecognized reaction sentence when the second voice assistant receives the first voice signal and the second semantic analysis program cannot recognize the first voice signal.
42. The method according to claim 40, wherein the language of the first voice signal is the same as the language of the second voice signal, and the method further comprises:
when the first voice assistant also receives the second voice signal for controlling the first projector, extracting, by the first voice assistant, the plurality of second keywords from the second voice signal, and transmitting the plurality of second keywords to the first cloud service platform, wherein the plurality of second keywords include the first alias and the sixth control command corresponding to the first projector;
analyzing, by the first cloud service platform, the sixth control command according to the first semantic analysis program, obtaining an eighth control command according to the sixth control command, and transmitting the first alias of the first projector and the eighth control command to the management server; and
accessing, by the management server, the first projector in response to the first alias, and adjusting the first projector to the second operating state according to the eighth control command corresponding to the sixth control command.
43. The method of claim 42, wherein the eighth control command is one of the plurality of second control commands.
44. The method of claim 25, wherein the smart voice system further comprises a second projector connected to and controlled by the management server, and the method further comprises:
when the first voice assistant receives a third voice signal for controlling the second projector, extracting, by the first voice assistant, a plurality of third keywords from the third voice signal, and transmitting the plurality of third keywords to the first cloud service platform, wherein the plurality of third keywords comprises a second alias and a ninth control command corresponding to the second projector;
analyzing, by the first cloud service platform, the ninth control command according to the first semantic analysis program, obtaining a tenth control command according to the ninth control command, and transmitting the second alias of the second projector and the tenth control command corresponding to the ninth control command to the management server; and
accessing, by the management server, the second projector in response to the second alias, and adjusting the second projector to a third operating state according to the tenth control command corresponding to the ninth control command.
45. The method of claim 44, wherein the tenth control command is one of the plurality of second control commands.
46. The method of claim 44, wherein the second projector has been previously registered on the management server based on the second alias and unique identification information of the second projector.
47. The method of claim 44, wherein the second alias for the second projector is selected from a plurality of preset aliases provided by the management server in a registration process for the second projector.
48. An intelligent voice system, comprising a first projector and a first cloud service server, wherein:
the first projector comprises a first voice assistant; and
the first cloud service server is connected with the first voice assistant and the first projector and is used for managing and controlling the first voice assistant and the first projector,
wherein when the first voice assistant receives a first voice signal for controlling the first projector, the first voice assistant extracts a plurality of first keywords from the first voice signal and transmits the plurality of first keywords to the first cloud service server, wherein the plurality of first keywords include a first alias and a first control command corresponding to the first projector, and wherein the first cloud service server includes a first semantic analysis program and a plurality of second control commands; the first cloud service server analyzes the first control command according to the first semantic analysis program, obtains, extracts, or generates the second control command corresponding to the first control command according to the first control command, accesses the first projector in response to the first alias, and adjusts the first projector to a first operating state according to the second control command corresponding to the first control command.
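Claims 12 to 15 describe a two-stage command chain inside the projector: the management server's third control command is translated by the micro control unit, through a pre-established look-up table, into a fourth control command carrying a digital signal for a specific electronic component (identified by an identification code and driven by a voltage or current). The following is a hedged sketch; all table entries, command names, and component identifiers are invented for illustration and are not taken from the patent.

```python
# Hypothetical second look-up table held by the micro control unit's first
# program: fifth control command (identical to the third control command)
# -> fourth control command, i.e. a component id plus a digital signal.
SECOND_LOOKUP_TABLE = {
    "SET_BRIGHTNESS_HIGH": {"component_id": "light_source_driver",
                            "signal": {"current_mA": 900}},
    "FAN_FULL_SPEED":      {"component_id": "fan",
                            "signal": {"voltage_V": 12.0}},
}


def mcu_handle(third_cmd: str):
    """Micro-control-unit side: resolve the third control command to the
    fourth control command and report which electronic component would be
    driven with which voltage or current (no hardware is touched here)."""
    fourth_cmd = SECOND_LOOKUP_TABLE[third_cmd]
    return fourth_cmd["component_id"], fourth_cmd["signal"]
```

In this sketch, a command such as `"FAN_FULL_SPEED"` resolves to the fan's identification code plus the drive voltage, mirroring how the fourth control command's digital signal targets one electronic component.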
CN201811196308.2A 2018-09-27 2018-10-15 Intelligent voice system and method for controlling projector by using intelligent voice system Pending CN110956960A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16/228,793 US11100926B2 (en) 2018-09-27 2018-12-21 Intelligent voice system and method for controlling projector by using the intelligent voice system
EP19161914.7A EP3629324A1 (en) 2018-09-27 2019-03-11 Intelligent voice system and method for controlling projector by using the intelligent voice system
JP2019164079A JP7359603B2 (en) 2018-09-27 2019-09-10 Intelligent audio system and projector control method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862737126P 2018-09-27 2018-09-27
US62/737,126 2018-09-27

Publications (1)

Publication Number Publication Date
CN110956960A true CN110956960A (en) 2020-04-03

Family

ID=67780739

Family Applications (4)

Application Number Title Priority Date Filing Date
CN201821666297.5U Active CN209357459U (en) 2018-09-27 2018-10-15 Intelligent voice system
CN201821666296.0U Active CN209374052U (en) 2018-09-27 2018-10-15 Intelligent voice system
CN201811196308.2A Pending CN110956960A (en) 2018-09-27 2018-10-15 Intelligent voice system and method for controlling projector by using intelligent voice system
CN201811196721.9A Pending CN110956961A (en) 2018-09-27 2018-10-15 Intelligent voice system and method for controlling projector by using intelligent voice system

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201821666297.5U Active CN209357459U (en) 2018-09-27 2018-10-15 Intelligent voice system
CN201821666296.0U Active CN209374052U (en) 2018-09-27 2018-10-15 Intelligent voice system

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811196721.9A Pending CN110956961A (en) 2018-09-27 2018-10-15 Intelligent voice system and method for controlling projector by using intelligent voice system

Country Status (2)

Country Link
JP (2) JP7359603B2 (en)
CN (4) CN209357459U (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN209357459U (en) * 2018-09-27 2019-09-06 中强光电股份有限公司 Intelligent voice system
EP4130845A4 (en) 2020-03-31 2023-09-20 FUJIFILM Corporation Optical device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201102841A (en) * 2009-07-02 2011-01-16 Cyberon Corp Data search method, data providing method, data search system, portable apparatus and server
CN106847269A (en) * 2017-01-20 2017-06-13 浙江小尤鱼智能技术有限公司 The sound control method and device of a kind of intelligent domestic system
CN106952647A (en) * 2017-03-14 2017-07-14 上海斐讯数据通信技术有限公司 A kind of intelligent sound box and its application method based on cloud management
US20170230709A1 (en) * 2014-06-30 2017-08-10 Apple Inc. Intelligent automated assistant for tv user interactions
CN107205075A (en) * 2016-03-16 2017-09-26 洛阳睿尚京宏智能科技有限公司 A kind of smart projector of achievable handset program speech control
CN207558417U (en) * 2017-08-11 2018-06-29 杭州古北电子科技有限公司 Intelligent home control system
CN209357459U (en) * 2018-09-27 2019-09-06 中强光电股份有限公司 Intelligent voice system

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69808080T2 (en) * 1997-06-02 2003-08-07 Sony Electronics Inc PRESENTATION OF INTERNET INFORMATION AND TELEVISION PROGRAMS
JP2004163590A (en) * 2002-11-12 2004-06-10 Denso Corp Reproducing device and program
JP2005241971A (en) * 2004-02-26 2005-09-08 Seiko Epson Corp Projector system, microphone unit, projector controller, and projector
JP5433935B2 (en) * 2007-07-24 2014-03-05 日本電気株式会社 Screen display control method, screen display control method, electronic device, and program
CN101398596A (en) * 2007-09-25 2009-04-01 海尔集团公司 Projector
JP5535676B2 (en) * 2010-02-15 2014-07-02 アルパイン株式会社 Navigation device
JP6227236B2 (en) * 2012-10-01 2017-11-08 シャープ株式会社 Recording apparatus and reproducing apparatus
KR20140089861A (en) * 2013-01-07 2014-07-16 삼성전자주식회사 display apparatus and method for controlling the display apparatus
US20160203456A1 (en) * 2015-01-09 2016-07-14 Toshiba Global Commerce Solutions Holdings Corporation Point-of-sale apparatus, control method, and system thereof for outputting receipt image for a camera of a personal computing device
KR102429260B1 (en) * 2015-10-12 2022-08-05 삼성전자주식회사 Apparatus and method for processing control command based on voice agent, agent apparatus
US10185840B2 (en) * 2016-08-30 2019-01-22 Google Llc Conditional disclosure of individual-controlled content in group contexts
US10304463B2 (en) * 2016-10-03 2019-05-28 Google Llc Multi-user personalization at a voice interface device
WO2018100743A1 (en) * 2016-12-02 2018-06-07 ヤマハ株式会社 Control device and apparatus control system
JP2018128979A (en) * 2017-02-10 2018-08-16 パナソニックIpマネジメント株式会社 Kitchen supporting system
CN107479854A (en) * 2017-08-30 2017-12-15 谢锋 A kind of projecting apparatus and projecting method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201102841A (en) * 2009-07-02 2011-01-16 Cyberon Corp Data search method, data providing method, data search system, portable apparatus and server
US20170230709A1 (en) * 2014-06-30 2017-08-10 Apple Inc. Intelligent automated assistant for tv user interactions
CN107205075A (en) * 2016-03-16 2017-09-26 洛阳睿尚京宏智能科技有限公司 A kind of smart projector of achievable handset program speech control
CN106847269A (en) * 2017-01-20 2017-06-13 浙江小尤鱼智能技术有限公司 The sound control method and device of a kind of intelligent domestic system
CN106952647A (en) * 2017-03-14 2017-07-14 上海斐讯数据通信技术有限公司 A kind of intelligent sound box and its application method based on cloud management
CN207558417U (en) * 2017-08-11 2018-06-29 杭州古北电子科技有限公司 Intelligent home control system
CN209357459U (en) * 2018-09-27 2019-09-06 中强光电股份有限公司 Intelligent voice system
CN209374052U (en) * 2018-09-27 2019-09-10 中强光电股份有限公司 Intelligent voice system

Also Published As

Publication number Publication date
JP2020064617A (en) 2020-04-23
CN209357459U (en) 2019-09-06
JP7359603B2 (en) 2023-10-11
JP2020053040A (en) 2020-04-02
CN110956961A (en) 2020-04-03
CN209374052U (en) 2019-09-10

Similar Documents

Publication Publication Date Title
EP3629323B1 (en) Intelligent voice system and method for controlling projector by using the intelligent voice system
US9953648B2 (en) Electronic device and method for controlling the same
TWI511125B (en) Voice control method, mobile terminal apparatus and voice control system
KR101330671B1 (en) Electronic device, server and control methods thereof
CN109343819B (en) Display apparatus and method for controlling display apparatus in voice recognition system
KR102411619B1 (en) Electronic apparatus and the controlling method thereof
US11457061B2 (en) Creating a cinematic storytelling experience using network-addressable devices
CN111052079B (en) Systems/methods and apparatus for providing multi-function links for interacting with assistant agents
WO2019228138A1 (en) Music playback method and apparatus, storage medium, and electronic device
JP7359603B2 (en) Intelligent audio system and projector control method
CN106558311B (en) Voice content prompting method and device
US10831442B2 (en) Digital assistant user interface amalgamation
EP3629324A1 (en) Intelligent voice system and method for controlling projector by using the intelligent voice system
CN111539218A (en) Method, equipment and system for disambiguating natural language content title
US11150923B2 (en) Electronic apparatus and method for providing manual thereof
CN111580766B (en) Information display method and device and information display system
CN110738044B (en) Control intention recognition method and device, electronic equipment and storage medium
US20210327437A1 (en) Electronic apparatus and method for recognizing speech thereof
CN115440213A (en) Voice control method, device, equipment, vehicle and medium
KR20180048510A (en) Display apparatus, Method for controlling display apparatus and Method for controlling display apparatus in Voice recognition system thereof
KR20170055466A (en) Display apparatus, Method for controlling display apparatus and Method for controlling display apparatus in Voice recognition system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination