CN110956963A - Interaction method realized based on wearable device and wearable device - Google Patents


Info

Publication number: CN110956963A
Authority: CN (China)
Prior art keywords: instruction, intelligent, smart, wearable device, user
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201911144257.3A (other languages: Chinese (zh))
Inventors: 王立利, 赵吉福
Current assignee: Goertek Inc (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Goertek Inc
Application filed by Goertek Inc
Priority application: CN201911144257.3A
Publication: CN110956963A (legal status: pending)

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26: Speech to text systems
    • G10L15/28: Constructional details of speech recognition systems
    • G10L15/30: Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G10L15/34: Adaptation of a single recogniser for parallel processing, e.g. by use of multiple processors or cloud computing
    • G10L2015/223: Execution procedure of a spoken command
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04B: TRANSMISSION
    • H04B17/00: Monitoring; Testing
    • H04B17/30: Monitoring; Testing of propagation channels
    • H04B17/309: Measuring or estimating channel quality parameters
    • H04B17/318: Received signal strength
    • H04B17/382: Monitoring; Testing of propagation channels for resource allocation, admission control or handover
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W76/00: Connection management
    • H04W76/10: Connection setup


Abstract

The invention discloses an interaction method based on a wearable device, and a wearable device. The wearable device is provided with at least one microphone, and the method comprises: after receiving a preset wake-up voice instruction, searching for wireless signals, taking the smart device with the strongest wireless signal as a first smart device, and establishing a wireless connection with the first smart device; collecting the user's voice and obtaining an instruction from it, wherein the instruction comprises an identifier of an instruction object and instruction content corresponding to the identifier, and, when the distance between the first smart device and the wearable device is greater than a preset distance, waking up the wearable device so that the wearable device receives the instruction; and sending the instruction to the first smart device so that the first smart device executes the corresponding operation according to the identifier.

Description

Interaction method realized based on wearable device and wearable device
Technical Field
The invention relates to Internet of Things technology, and in particular to an interaction method implemented based on a wearable device, and to a wearable device.
Background
More and more smart home products are entering ordinary households, and as usage frequency and usage scenarios grow, users place ever higher demands on smart home control. At present, smart home devices are mainly controlled through their own touch screens or buttons, which wastes the user's time and energy. Some smart home devices also support voice-based human-machine interaction, for example the smart speaker products on the market, but these are limited in their usage scenarios. How to effectively improve the user's ability to interact with and control smart home products has therefore become a topic of great concern to all major device manufacturers.
Disclosure of Invention
The object of the invention is to provide a new technical solution for interaction implemented based on a wearable device.
According to a first aspect of the present invention, there is provided an interaction method implemented based on a wearable device provided with at least one microphone, the method comprising:
after receiving a preset wake-up voice instruction, searching for wireless signals, taking the smart device with the strongest wireless signal as a first smart device, and establishing a wireless connection with the first smart device;
collecting the user's voice and obtaining an instruction from the user's voice, wherein the instruction comprises an identifier of an instruction object and instruction content corresponding to the identifier, and, when the distance between the first smart device and the wearable device is greater than a preset distance, waking up the wearable device so that the wearable device receives the instruction;
and sending the instruction to the first smart device so that the first smart device executes the corresponding operation according to the identifier.
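As a minimal sketch of the first-aspect steps, assume scan results arrive as a mapping from device identifier to RSSI in dBm; all names and the dictionary shape of the instruction are illustrative assumptions, not part of the patent:

```python
def choose_first_device(scan_results):
    """Take the smart device with the strongest wireless signal as the first device.

    scan_results maps device id -> RSSI in dBm (negative; closer to 0 is stronger).
    """
    return max(scan_results, key=scan_results.get)

def make_instruction(target_id, content):
    """An instruction carries the identifier of the instruction object and
    the instruction content corresponding to that identifier."""
    return {"target_id": target_id, "content": content}

# Example scan: the speaker's signal is strongest, so it becomes the first device
first = choose_first_device({"speaker": -42, "tv": -61, "projector": -70})
instruction = make_instruction("tv", "power_on")
```

The instruction would then be sent to the first device, which executes or forwards it according to the identifier.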
Optionally, the method further comprises:
when the distance between the first smart device and the wearable device is not greater than the preset distance, waking up the first smart device so that the first smart device directly receives the instruction and executes the corresponding operation according to the identifier.
Optionally, the first smart device executing the corresponding operation according to the identifier comprises:
if the identifier of the instruction object matches the identifier of the first smart device, executing the instruction;
and if the identifier of the instruction object does not match the identifier of the first smart device, forwarding the instruction to the corresponding smart device according to the identifier.
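This execute-or-forward rule on the first smart device can be sketched as follows; the function and field names are illustrative assumptions:

```python
def handle_instruction(instruction, own_id, forward):
    """Execute locally when the instruction object's identifier matches this
    device's own identifier; otherwise forward to the identified device."""
    if instruction["target_id"] == own_id:
        return ("executed", instruction["content"])
    forward(instruction["target_id"], instruction)
    return ("forwarded", instruction["target_id"])

sent = []  # records what was forwarded, and to whom
local = handle_instruction({"target_id": "speaker", "content": "play"}, "speaker",
                           forward=lambda dev, ins: sent.append((dev, ins)))
remote = handle_instruction({"target_id": "tv", "content": "power_on"}, "speaker",
                            forward=lambda dev, ins: sent.append((dev, ins)))
```

Only the second instruction, addressed to the television, triggers forwarding.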
Optionally, the method further comprises:
after establishing the wireless connection with the first smart device, maintaining the wireless connection with the first smart device for a preset time and monitoring the strength of the wireless signal received by the first smart device, and,
if the strength is lower than a preset threshold, disconnecting the wireless connection with the first smart device and performing the wireless signal search again.
Optionally, the method further comprises:
determining the distance between the wearable device and the first smart device according to the strength of the wireless signal received by the first smart device.
Optionally, obtaining the instruction according to the user's voice comprises:
recognizing the user's voice as text, and parsing the recognized text to generate the instruction.
Optionally, obtaining the instruction according to the user's voice comprises:
uploading the collected user voice to the cloud for analysis, and obtaining the instruction generated by the cloud analysis.
Optionally, obtaining the instruction according to the user's voice comprises:
recognizing the user's voice as text, uploading the recognized text to the cloud for parsing, and obtaining the instruction generated by the cloud analysis.
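The three optional paths above (local recognition plus local parsing, full cloud analysis, or local recognition plus cloud parsing) share one contract: user speech in, an instruction out. A toy stand-in for the parsing step, assuming a registry of known device identifiers and a simple word-match grammar, both of which are assumptions and not the patent's method:

```python
KNOWN_DEVICES = {"tv", "speaker", "projector"}  # assumed identifier registry

def parse_text_to_instruction(text):
    """Parse recognized text into an instruction: the first known device word
    becomes the instruction object, the remaining words the content."""
    words = text.lower().split()
    for word in words:
        if word in KNOWN_DEVICES:
            content = " ".join(w for w in words if w != word)
            return {"target_id": word, "content": content}
    return None  # no instruction object found in the utterance

# "turn on the tv" -> {'target_id': 'tv', 'content': 'turn on the'}
```

A real system would replace this with a speech-recognition model and a cloud or on-device natural-language parser, as the three options describe.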
Optionally, the wearable device is smart glasses or a smart headset.
Optionally, the wireless signal search is a Bluetooth signal search or a Wi-Fi signal search.
According to a second aspect of the present invention, there is provided a wearable device comprising a memory and a processor, the memory storing computer instructions which, when executed by the processor, implement the method of any embodiment of the first aspect of the present invention.
According to one embodiment of the invention, the wearable device takes the smart device with the strongest wireless signal as the first smart device and establishes a wireless connection with it. The user's voice can be collected through the wearable device and a corresponding instruction generated, so that the instruction can be relayed through the first smart device. This avoids the situation in which voice collection by a smart device is limited by the distance between the user and that device, and enables voice control of multiple smart devices in an Internet of Things system.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 shows a hardware configuration diagram of an interactive system implemented based on a wearable device according to an embodiment of the present invention;
fig. 2 shows a flowchart of an interaction method implemented based on a wearable device according to a first embodiment of the present invention;
FIG. 3 shows a flow diagram of an interaction method implemented based on a wearable device of an example of the present invention;
fig. 4 shows a block diagram of a wearable device of a second embodiment of the invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate they should be considered part of the specification.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Embodiments of the present invention relate to a wearable device, or to a wearable device together with other smart devices that establish data communication connections with it.
The wearable device provided by the embodiment of the invention is provided with at least one microphone, and can have the capability of voice interaction with a user. The wearable device may be, for example, smart glasses with a microphone, a smart headset, a smart bracelet, a smart watch, or the like.
The smart device may be, for example, a smart home device, such as a smart speaker, a smart television, a smart washing machine, a projector, or a smart phone.
The wearable device and the other smart devices can form an Internet of Things system. The wearable device can communicate with the other smart devices wirelessly and can send instructions to them so that they execute the corresponding operations. The other smart devices can be connected to one another wirelessly or by wire, so that instructions can be relayed among them.
The wearable device is provided with at least one microphone, which is used to pick up sound signals, collect the user's voice, and obtain instructions from it. The other smart devices can be connected to the wearable device wirelessly and can communicate with an application program (APP) of the wearable device, so that the instructions for controlling each smart device can be configured through the APP; based on the instructions the wearable device obtains from the user's voice, the other smart devices can be controlled to execute the various commands the user expects.
< hardware configuration >
Fig. 1 shows a block diagram of a hardware configuration of an interactive system 100 implemented based on a wearable device.
The interactive system 100 of the present embodiment includes a wearable device 1000, other smart devices, a network 2000, and a server 3000, as shown in fig. 1, the wearable device 1000 may be, for example, smart glasses with a microphone, smart headphones, a smart bracelet, a smart watch, and the like, and the other smart devices may be some smart homes, for example, a smart speaker 4000, a smart television 5000, a smart projector 6000, and the like. In other embodiments, the smart device may also include other electronic devices.
The wearable device 1000 is loaded with APPs corresponding to the plurality of other smart devices. The instructions corresponding to each smart device can be set through the APP, and based on the instruction the wearable device obtains from the user's voice, the other smart devices can be controlled to execute the various commands the user expects.
The wearable device 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, a speaker 1700, a microphone 1800, and so forth. The processor 1100 may be a central processing unit CPU, a microprocessor MCU, or the like. The memory 1200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface device 1300 includes, for example, a USB interface, a headphone interface, and the like. The communication device 1400 may include a short-range communication device, such as any device that performs short-range wireless communication based on short-range wireless communication protocols such as the Hilink protocol, WiFi (IEEE 802.11 protocol), Mesh, bluetooth, ZigBee, Thread, Z-Wave, NFC, UWB, LiFi, etc., and the communication device 1400 may also include a long-range communication device, such as any device that performs WLAN, GPRS, 2G/3G/4G/5G long-range communication. The display device 1500 is, for example, a liquid crystal display panel, a touch panel, or the like. The input device 1600 may include, for example, a touch screen, a keyboard, a somatosensory input, and the like. A user can input/output voice information through the speaker 1700 and the microphone 1800.
In this embodiment, the memory 1200 of the wearable device 1000 is configured to store instructions for controlling the processor 1100 to operate at least to perform the interaction method implemented based on the wearable device according to any embodiment of the present invention. The skilled person can design the instructions according to the disclosed solution. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
Although a plurality of devices of the wearable apparatus 1000 are shown in fig. 1, the present invention may relate to only some of the devices.
The network 2000 may be a wireless or wired communication network, and may be a local area network or a wide area network. In the interactive system 100 shown in fig. 1, the wearable device 1000 can communicate with the smart speaker 4000, the smart television 5000, and the smart projector 6000 through the network 2000. The networks 2000 over which the wearable device 1000 communicates with the smart speaker 4000, the smart television 5000, and the smart projector 6000 may be the same or different.
The server 3000 is a service point that provides processing, database, and communication facilities. The server 3000 may be a unitary server or a distributed server spanning multiple computers or computer data centers. Servers may be of various types, such as, but not limited to, web servers, news servers, mail servers, message servers, advertisement servers, file servers, application servers, interaction servers, database servers, or proxy servers. In some embodiments, each server may include hardware, software, or embedded logic components, or a combination of two or more such components, for performing the appropriate functions it supports or implements. For example, the server may be a blade server or a cloud server, or may be a server group consisting of multiple servers, which may include one or more of the above server types.
In one example, the server 3000 may include a processor 3100, a memory 3200, an interface device 3300, a communication device 3400, a display device 3500, and an input device 3600, as shown in fig. 1. Although the server may also include speakers, microphones, etc., these components are not relevant to the present invention and are omitted here. The processor 3100 may be, for example, a central processing unit CPU, a microprocessor MCU, or the like. The memory 3200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface device 3300 includes, for example, a USB interface, a serial interface, an infrared interface, and the like. The communication device 3400 can perform wired or wireless communication, for example. The display device 3500 is, for example, a liquid crystal display panel, an LED display panel, a touch display panel, or the like. The input device 3600 may include, for example, a touch screen, a keyboard, and the like.
As shown in fig. 1, smart sound box 4000 may include a processor 4100, a memory 4200, an interface device 4300, a communication device 4400, a display device 4500, an input device 4600, a speaker 4700, a microphone 4800, and the like.
Processor 4100 may be a mobile version processor. The memory 4200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface 4300 includes, for example, a USB interface, a headphone interface, and the like. The communication device 4400 can perform wired or wireless communication, for example, the communication device 4400 can include a short-range communication device, such as any device that performs short-range wireless communication based on a short-range wireless communication protocol, such as the Hilink protocol, WiFi (IEEE 802.11 protocol), Mesh, bluetooth, ZigBee, Thread, Z-Wave, NFC, UWB, LiFi, and the like, and the communication device 4400 can also include a long-range communication device, such as any device that performs WLAN, GPRS, 2G/3G/4G/5G long-range communication. The display device 4500 is a liquid crystal display, a touch panel, or the like, for example. The input device 4600 may include, for example, a touch screen, a keyboard, and the like. A user can output/input voice information through the speaker 4700 and the microphone 4800.
Although multiple devices of smart sound box 4000 are shown in fig. 1, the present invention may relate to only some of the devices, for example, smart sound box 4000 relates to only memory 4200 and processor 4100, speaker 4700 and microphone 4800.
In this embodiment, as shown in fig. 1, the smart tv 5000 may include a processor 4100, a memory 4200, an interface device 4300, a communication device 4400, a display device 4500, an input device 4600, a speaker 4700, a microphone 4800, and the like.
The processor 4100 may be a central processing unit CPU, a microprocessor MCU. The memory 4200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface 4300 includes, for example, a USB interface, a headphone interface, and the like. The communication device 4400 can perform wired or wireless communication, for example, the communication device 4400 can include a short-range communication device, such as any device that performs short-range wireless communication based on a short-range wireless communication protocol, such as the Hilink protocol, WiFi (IEEE 802.11 protocol), Mesh, bluetooth, ZigBee, Thread, Z-Wave, NFC, UWB, LiFi, and the like, and the communication device 4400 can also include a long-range communication device, such as any device that performs WLAN, GPRS, 2G/3G/4G/5G long-range communication. The display device 4500 is a liquid crystal display, a touch panel, or the like, for example. The input device 4600 may include, for example, a touch screen, a keyboard, and the like. A user can output/input voice information through the speaker 4700 and the microphone 4800.
Although a plurality of devices of the smart tv 5000 are illustrated in fig. 1, the present invention may relate to only some of the devices.
In the present embodiment, as shown in fig. 1, the smart projector 6000 may include a processor 4100, a memory 4200, an interface device 4300, a communication device 4400, a display device 4500, an input device 4600, a speaker 4700, a microphone 4800, and the like.
The processor 4100 may be a central processing unit CPU, a microprocessor MCU. The memory 4200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface 4300 includes, for example, a USB interface, a headphone interface, and the like. The communication device 4400 can perform wired or wireless communication, for example, the communication device 4400 can include a short-range communication device, such as any device that performs short-range wireless communication based on a short-range wireless communication protocol, such as the Hilink protocol, WiFi (IEEE 802.11 protocol), Mesh, bluetooth, ZigBee, Thread, Z-Wave, NFC, UWB, LiFi, and the like, and the communication device 4400 can also include a long-range communication device, such as any device that performs WLAN, GPRS, 2G/3G/4G/5G long-range communication. The display device 4500 is a liquid crystal display, a touch panel, or the like, for example. The input device 4600 may include, for example, a touch screen, a keyboard, and the like. A user can output/input voice information through the speaker 4700 and the microphone 4800.
Although a plurality of devices of the smart projector 6000 are shown in fig. 1, the present invention may relate to only some of the devices.
In the above description, the skilled person will be able to design instructions in accordance with the disclosed solution. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
< first embodiment >
Fig. 2 is a flowchart illustrating an interaction method implemented based on a wearable device according to an embodiment of the present invention, which is implemented by the wearable device 1000.
The wearable device provided by the embodiment of the invention is provided with at least one microphone, and can have the capability of voice interaction with a user. The wearable device may be, for example, smart glasses with a microphone, a smart headset, a smart bracelet, a smart watch, or the like. The smart device may be, for example, a smart home or the like. The smart home may be, for example, a smart speaker, a smart television, a smart washing machine, a projector, a smart phone, etc.
The wearable device can communicate with the other smart devices wirelessly and can send instructions to them so that they execute the corresponding operations. The other smart devices can be connected to one another wirelessly or by wire, so that instructions can be relayed among them; this solves the problem of reception failure caused by the user being too far from the smart device that is to receive the instruction.
The wearable device is provided with at least one microphone, which is used to pick up sound signals, collect the user's voice, and obtain instructions from it. The other smart devices can be connected to the wearable device wirelessly and can communicate with the wearable device's application program, so that the instruction for controlling each smart device can be configured through the application program; based on the instructions the wearable device obtains from the user's voice, the other smart devices can be controlled to execute the various commands the user expects.
According to fig. 2, the interaction method implemented based on the wearable device of the present embodiment may include the following steps:
step S2100, after receiving a preset wake-up voice instruction, performs wireless signal search, and establishes a wireless connection with a first intelligent device by using the intelligent device with the strongest wireless signal as the first intelligent device.
In this embodiment, the waking voice instruction is used to wake up the wearable device, and the waking voice instruction may be preset before the wearable device leaves the factory, or may be set by a user according to an operation instruction when the wearable device is used.
The wireless signal search may be a Bluetooth signal search or a Wi-Fi signal search.
In an example, when the wearable device performs the wireless signal search, the first smart device may be determined according to the signal strength of each searched wireless signal. Specifically, the corresponding signal strength value RSSI (Received Signal Strength Indication) may be obtained from the signal power of the searched wireless signal. The smart device with the strongest wireless signal is selected as the first smart device, and a wireless connection is established with it.
Since very mature algorithms exist for obtaining the RSSI value, and the embodiment of the present invention works with any algorithm capable of obtaining it, the choice of algorithm is not limited herein.
In an example, when the wearable device performs the wireless signal search, the first smart device may be determined according to the distance between the wearable device and the smart device corresponding to each searched wireless signal. Specifically, the corresponding RSSI value may be obtained from the signal power of each searched wireless signal; this value reflects the distance between the wearable device and the other smart device. The larger the RSSI value of a wireless signal, the closer the corresponding smart device is to the wearable device; the smaller the value, the farther away it is. The smart device closest to the wearable device is selected as the first smart device, and a wireless connection is established with it.
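The monotone RSSI-to-distance relation described above is commonly modeled with a log-distance path-loss formula; a sketch, where the 1 m reference power and the path-loss exponent are assumed calibration constants, not values from the patent:

```python
def rssi_to_distance(rssi_dbm, ref_power_1m_dbm=-59.0, path_loss_exp=2.0):
    """Estimate distance in metres: d = 10 ** ((P_1m - RSSI) / (10 * n)).

    A larger (less negative) RSSI maps to a smaller estimated distance.
    """
    return 10 ** ((ref_power_1m_dbm - rssi_dbm) / (10.0 * path_loss_exp))

# At the 1 m reference power the estimate is 1 m; 20 dB weaker is about 10 m (n = 2)
```

In practice both constants would be calibrated per environment, and RSSI samples would be smoothed before conversion.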
In one example, the wearable device takes the smart device with the strongest wireless signal as the first smart device and establishes a wireless connection with it. When the distance between the wearable device and the first smart device is greater than a preset distance, the wearable device can be woken up to collect the user's voice and generate the corresponding instruction. Combined with the subsequent steps, the instruction can then be transmitted through the first smart device, enabling voice control of multiple smart devices in the Internet of Things system and avoiding the failure that occurs when a smart device tries to collect the user's voice from too far away.
In an example, after establishing a wireless connection with the first smart device, the method for interaction implemented by the wearable device may further include the following step S3100.
Step S3100, when the distance between the wearable device and the first smart device is not greater than the preset distance, the first smart device can be woken up to collect the user's voice and generate the corresponding instruction. This likewise enables voice control of multiple smart devices in the Internet of Things system; when the user is close to the first smart device, the user's instruction can be received directly by the first smart device, which skips the step of the wearable device receiving the instruction and can reduce energy consumption.
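The choice of which device is woken to receive the instruction thus reduces to a comparison against the preset distance; the 3 m default below is a made-up illustration, not a value from the patent:

```python
def wake_target(distance_m, preset_distance_m=3.0):
    """Wake the wearable when the first device is farther than the preset
    distance; otherwise let the first device collect the voice directly,
    skipping the wearable's receive step and its energy cost."""
    return "wearable" if distance_m > preset_distance_m else "first_device"
```

Note the condition is strictly greater than: at exactly the preset distance the first smart device still receives the instruction directly.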
In an example, after establishing a wireless connection with the first smart device, the method for interaction implemented based on the wearable device may further include the following step S3200.
Step S3200, after establishing a wireless connection with the first smart device, maintaining the wireless connection with the first smart device within a preset time and monitoring the strength of the wireless signal received by the first smart device.
The change in the strength of the wireless signal received by the first smart device within the preset time reflects the change in the distance between the first smart device and the wearable device within that time. According to how this distance changes within the preset time, it can be determined whether the first smart device needs to be re-determined.
In a more specific example, the wireless connection with the first smart device is maintained for a preset time, the strength of the wireless signal received by the first smart device is monitored, and the distance between the wearable device and the first smart device is judged according to that strength. If the strength of the wireless signal of the first smart device is lower than a preset threshold, the wireless connection with the first smart device is disconnected, the wireless signal search is performed again, and a connection is established with the smart device that now has the strongest wireless signal.
In this example, comparing the strength with the preset threshold reflects whether the distance between the first smart device and the wearable device has become too large. The preset threshold can be set according to engineering experience or experimental simulation results. When the strength of the wireless signal of the first smart device is lower than the preset threshold, the first smart device is far from the wearable device worn by the user; its received signal is weakened, and it may even fail to receive an instruction sent to it by the wearable device. At this point the wireless connection with the first smart device is disconnected, the signal search is performed again, and the first smart device is re-determined.
In another more specific example, the wireless connection with the first smart device is maintained for a preset time, and the wireless connection with the first smart device is disconnected when the preset time is reached.
In this example, the wireless connection with the first smart device is maintained for the preset time, the user may send an instruction to the first smart device through the wearable device, the wearable device disconnects the wireless connection with the first smart device when the preset time is reached, and the user may re-determine the first smart device as needed.
In this embodiment, the distance between the wearable device and the first smart device may be determined according to the strength of the wireless signal received by the first smart device: the stronger the signal, the closer the distance, and the weaker the signal, the farther the distance. When the strength falls below a certain threshold, that is, when the distance exceeds a preset threshold, the wearable device disconnects from the first smart device and searches for a signal again to select a new first smart device. This ensures the stability of communication between the wearable device and the first smart device, facilitates the first smart device receiving and relaying instructions sent by the wearable device, and avoids the situation in which the wearable device can no longer send instructions to the first smart device as the user's position changes. In addition, when the duration of the wireless connection between the wearable device and the first smart device reaches the preset time, the wearable device disconnects from the first smart device and enters a standby state, which reduces energy consumption and improves the user experience.
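The threshold-and-timeout logic of this embodiment can be sketched as follows, assuming a hypothetical `read_rssi` callable that reports the current signal strength in dBm; the threshold and durations are illustrative values, not taken from the patent:

```python
import time

def monitor_first_device(read_rssi, threshold_dbm=-70.0,
                         hold_seconds=30.0, poll_seconds=1.0):
    """Maintain the connection while the monitored RSSI stays above
    threshold_dbm, for at most hold_seconds.

    Returns the reason the connection should be dropped:
      "weak_signal" -> the first smart device moved too far; rescan and
                       reselect the device with the strongest signal
      "timeout"     -> preset time reached; disconnect and go to standby
    """
    deadline = time.monotonic() + hold_seconds
    while time.monotonic() < deadline:
        if read_rssi() < threshold_dbm:
            return "weak_signal"
        time.sleep(poll_seconds)
    return "timeout"
```

In the "weak_signal" case the wearable would re-run the wireless signal search; in the "timeout" case it would simply release the connection to save energy.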
After establishing a wireless connection with the first smart device, the method enters:
step S2200, collecting the user voice and obtaining the instruction according to the user voice.
In this embodiment, when the distance between the first smart device and the wearable device is greater than the preset distance, the wearable device is awakened, so that the wearable device receives the instruction; when the distance between the first intelligent device and the wearable device is not larger than the preset distance, the first intelligent device is awakened so that the first intelligent device directly receives the instruction.
The instructions may be instruction code or natural language. The instruction comprises an identification of the instruction object and instruction content corresponding to the identification. The identification of the instruction object is used for identifying each intelligent device in the Internet of things system. The identification of the instruction object may be, for example, the name of the instruction object or the code number of the instruction object. The instruction content corresponding to the identification may indicate an operation that the smart device corresponding to the identification needs to perform.
< Example one > Taking an instruction code as an example, assume that the other smart devices include a smart speaker, a smart television and a smart projector. The user may set instructions in advance: instruction one corresponds to turning on the smart speaker, instruction two to turning off the smart speaker, instruction three to turning on the smart television, instruction four to turning off the smart television, instruction five to turning on the smart projector, and instruction six to turning off the smart projector. When the smart speaker is the first smart device, the user voice is "execute instruction one", and the distance between the smart speaker and the wearable device is not greater than the preset distance, the smart speaker is awakened and receives instruction one. When the user voice is "execute instruction one" but the distance between the smart speaker and the wearable device is greater than the preset distance, the wearable device is awakened, receives instruction one, and sends it to the smart speaker. In this way the wearable device or the first smart device can be awakened according to the circumstances, reducing energy consumption and improving the user experience.
< Example two > Taking the identifier of the instruction object as the name of the instruction object as an example, the instruction may be "the smart speaker starts playing a song", "the smart television is turned on", or "the smart projector is turned on".
< Example three > Taking the identifier of the instruction object as the code of the instruction object as an example, and assuming that the other smart devices include a smart speaker, a smart television and a smart projector, the instruction may be "smart device one is turned on", "smart device two is turned on" or "smart device three is turned on", where smart device one, smart device two and smart device three correspond to the smart speaker, the smart television and the smart projector respectively.
In this embodiment, the number of the acquired instructions may be multiple. For example, the instruction is to turn on the smart projector, and the instruction includes turning on a host of the smart projector and turning on a screen of the smart projector.
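The preset instruction-code scheme of example one can be illustrated with a simple lookup table; the codes, device identifiers and operation names below are assumptions for illustration, not part of the patent:

```python
# Maps an instruction code to (instruction-object identifier, instruction
# content), mirroring the "instruction one ... instruction six" preset.
INSTRUCTION_TABLE = {
    1: ("smart_speaker", "turn_on"),
    2: ("smart_speaker", "turn_off"),
    3: ("smart_tv", "turn_on"),
    4: ("smart_tv", "turn_off"),
    5: ("smart_projector", "turn_on"),
    6: ("smart_projector", "turn_off"),
}

def decode_instruction(code):
    """Resolve an instruction code to its object identifier and content,
    or None for an unknown code."""
    return INSTRUCTION_TABLE.get(code)

print(decode_instruction(3))  # ('smart_tv', 'turn_on')
```

A composite instruction such as "turn on the smart projector" could likewise expand to several entries (host and screen), one per operation.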
In this embodiment, after establishing the wireless connection with the first smart device, the wearable device can collect the user voice and obtain an instruction from it, so that, in combination with the subsequent steps, the instruction can be transmitted through the first smart device, thereby realizing voice control of the multiple smart devices in the Internet of Things system.
In a more specific example, the step S2200 of collecting the user voice and obtaining the instruction according to the user voice may further include the following step S2201a.
In step S2201a, the user speech is recognized as text, and the recognized text is parsed to generate an instruction.
In this example, the wearable device may parse the collected user speech. The parsing process includes recognizing the user voice as text, extracting keywords from the text, and generating an instruction from the keywords.
For example, the collected user voice is "please start the smart television". The user voice is recognized as text, and keywords are extracted from "please start the smart television": the keyword "smart television" representing the instruction object and the keyword "start" representing the instruction content. The instruction "smart television start" is then generated.
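The keyword-extraction step can be sketched with simple substring matching over an assumed fixed vocabulary; a real system would use a proper speech-recognition and language-understanding pipeline, so treat the lists and names below as illustrative:

```python
# Assumed fixed vocabularies for the instruction object and content.
DEVICE_KEYWORDS = ["smart television", "smart speaker", "smart projector"]
ACTION_KEYWORDS = ["start", "turn on", "turn off"]

def text_to_instruction(text):
    """Extract the instruction object and instruction content from
    recognized text; return None when parsing fails."""
    device = next((d for d in DEVICE_KEYWORDS if d in text), None)
    action = next((a for a in ACTION_KEYWORDS if a in text), None)
    if device is None or action is None:
        return None  # no instruction generated
    return {"object": device, "content": action}

print(text_to_instruction("please start the smart television"))
# {'object': 'smart television', 'content': 'start'}
```

The None result corresponds to the "parsing failed, end the process" branch in the flow described later.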
In a more specific example, the step S2200 of collecting the user voice and obtaining the instruction according to the user voice may further include the following step S2201b.
Step S2201b, uploading the collected user voice to the cloud for analysis, so as to obtain an instruction generated by cloud analysis.
In this example, the parsing of the user voice may be implemented by a cloud server. The wearable device or the first smart device uploads the collected user voice to the cloud server. The cloud server recognizes the user voice as text, extracts keywords from the text, and generates an instruction from the keywords. The cloud server then sends the generated instruction to the wearable device.
In a more specific example, the step S2200 of collecting the user voice and obtaining the instruction according to the user voice may further include the following step S2201c.
Step S2201c, recognizing the user speech as text and uploading the recognized text to the cloud for parsing, so as to obtain the instruction generated by the cloud.
In this example, the wearable device or the first smart device may collect the user speech and recognize it as text. The wearable device can upload the recognized text to the cloud server. The cloud server extracts keywords from the text, generates an instruction from the keywords, and sends the generated instruction to the wearable device.
In this embodiment, the wearable device or the first smart device can parse the collected user voice itself, or parse it in combination with a cloud server, which can improve the efficiency and accuracy of parsing the user voice and avoid generating erroneous instructions.
After the user voice is collected and an instruction is obtained according to the user voice, the following steps are entered:
and step S2300, sending the instruction to the first intelligent device.
In one example, the step S2300 of causing the first smart device to perform the corresponding operation according to the identifier may further include the following step S2301a.
In step S2301a, if the identification of the instruction object is consistent with the identification of the first smart device, the instruction is executed.
The identification of the instruction object is used for identifying each intelligent device in the Internet of things system. The identification of the instruction object may be, for example, the name of the instruction object or the code number of the instruction object.
For example, assume that the other smart devices include a smart speaker, a smart television and a smart projector, where the smart speaker is the first smart device. The user can set the instructions in advance: instruction one corresponds to turning on the smart speaker, instruction two to turning off the smart speaker, instruction three to turning on the smart television, instruction four to turning off the smart television, instruction five to turning on the smart projector, and instruction six to turning off the smart projector. When the instruction sent to the smart speaker is instruction one, the identifier of the instruction object corresponding to instruction one is consistent with the identifier of the smart speaker, and the smart speaker performs the operation corresponding to the instruction.
For example, assume that the other smart devices include a smart speaker, a smart television and a smart projector, and that the instruction may be "smart device one is turned on", "smart device two is turned on" or "smart device three is turned on", where smart device one, smart device two and smart device three correspond to the smart speaker, the smart television and the smart projector respectively. Assume the smart speaker is the first smart device. When the instruction sent to the smart speaker is "smart device one is turned on", the identifier of the instruction object corresponding to the instruction is "smart device one" and the identifier of the smart speaker is "smart device one"; since the two identifiers are consistent, the smart speaker performs the operation corresponding to the instruction.
In one example, the step S2300 of causing the first smart device to perform the corresponding operation according to the identifier may further include the following step S2301b.
Step S2301b, if the identifier of the instruction object is not consistent with the identifier of the first smart device, the instruction is forwarded to the corresponding smart device according to the identifier.
For example, assume that the other smart devices include a smart speaker, a smart television and a smart projector, and that the instruction may be "smart device one is turned on", "smart device two is turned on" or "smart device three is turned on", where smart device one, smart device two and smart device three correspond to the smart speaker, the smart television and the smart projector respectively. Assume the smart speaker is the first smart device. When the instruction sent to the smart speaker is "smart device two is turned on", the identifier of the instruction object corresponding to the instruction is "smart device two" while the identifier of the smart speaker is "smart device one"; since the two identifiers are inconsistent, the smart speaker forwards the instruction to the smart television corresponding to smart device two.
In this example, if the identifier of the instruction object is not consistent with the identifier of the first smart device, the first smart device sends the instruction to the other smart devices so that the smart device whose identifier is consistent with the identifier of the instruction object responds and executes the instruction.
For example, assume that the other smart devices include a smart speaker, a smart television and a smart projector, and that the instruction may be "smart device one is turned on", "smart device two is turned on" or "smart device three is turned on", where smart device one, smart device two and smart device three correspond to the smart speaker, the smart television and the smart projector respectively. Assume the smart speaker is the first smart device. When the instruction sent to the smart speaker is "smart device two is turned on", the identifier of the instruction object corresponding to the instruction is "smart device two" while the identifier of the smart speaker is "smart device one". Since the two identifiers are inconsistent, the smart speaker sends the instruction to the smart television and the smart projector. The identifier of the smart television, "smart device two", is consistent with the identifier of the instruction object, so the smart television responds to the instruction and performs the corresponding operation.
In this embodiment, when the identifier of the instruction object is consistent with the identifier of the first smart device, the first smart device executes the instruction. When the identifier of the instruction object is inconsistent with the identifier of the first smart device, the instruction is forwarded to the corresponding smart device according to the identifier, or sent to the other smart devices, so that voice control of the multiple smart devices in the Internet of Things system can be realized.
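The execute-or-forward decision of steps S2301a and S2301b can be sketched as follows; the device identifiers, the instruction dictionary shape and the `forward` callable are illustrative assumptions:

```python
def handle_instruction(first_device_id, instruction, forward):
    """Execute the instruction locally when its object identifier matches
    the first smart device; otherwise relay it to the identified device.

    forward: callable(target_id, instruction) that transmits the
    instruction onward to the smart device named by target_id.
    """
    target = instruction["object"]
    if target == first_device_id:
        # Step S2301a: identifiers are consistent, execute here.
        return ("executed", first_device_id, instruction["content"])
    # Step S2301b: identifiers differ, forward by identifier.
    forward(target, instruction)
    return ("forwarded", target, instruction["content"])
```

With the smart speaker as the first smart device, an instruction addressed to the smart television would take the forwarding branch, matching the example above.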
The interaction method implemented based on the wearable device provided in this embodiment has been described above with reference to the accompanying drawings and examples. The wearable device takes the smart device with the strongest wireless signal as the first smart device, establishes a wireless connection with it, and can collect the user voice through the wearable device or the first smart device and generate a corresponding instruction, so that the instruction can be transmitted through the first smart device. This avoids the situation in which collecting the user voice through a smart device is limited by the distance between the user and that device, and realizes voice control of the multiple smart devices in the Internet of Things system.
< example >
The interaction method implemented based on the wearable device provided in this embodiment will be further explained in conjunction with fig. 3.
The interaction method realized based on the wearable device can comprise the following steps:
step S301, the wearable device receives the voice signal, judges whether a preset awakening voice instruction exists in the voice signal, if so, executes step S302, and if not, ends the process.
Step S302, judging whether the wearable device has established a wireless connection with the first smart device; if so, executing step S303, otherwise executing steps S312-S313.

Step S303, determining the distance between the wearable device and the first smart device; if the distance is greater than a preset threshold, executing steps S3041-S3081, and if the distance is not greater than the preset threshold, executing steps S3042-S3072.
Step S3041, wake up the wearable device.
Step S3051, the wearable device collects user voice.
Step S3061, analyzes the user voice, and generates an instruction.
Step S3071, judging whether the parsing of the user voice is successful; if so, executing step S3081, otherwise ending the process.

Step S3081, sending the instruction to the first smart device, and executing step S309.
Step S3042, wake up the first smart device.
Step S3052, the first smart device collects the user voice.
Step S3062, analyze the user voice, generate the instruction.
Step S3072, judging whether the parsing of the user voice is successful; if so, executing step S309, otherwise ending the process.
Step S309, judging whether the identification of the instruction object is consistent with the identification of the first intelligent device, if so, executing step S310, otherwise, executing step S311.
Step S310, the first smart device performs the corresponding operation according to the identifier.

Step S311, the first smart device sends the instruction to the corresponding smart device according to the identifier.
Wherein, after determining that the wearable device has not established a wireless connection with the first smart device, the following steps are performed:
step S312, the wearable device establishes a wireless connection with the first smart device.
Step S313, judging whether the wearable device is successfully connected to the first smart device; if so, executing step S303, otherwise ending the process.
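The flow of steps S301 to S313 can be condensed into the following sketch, in which every helper callable (`wake`, `collect`, `parse`, `send`, and so on) is a placeholder rather than a real device API:

```python
def interaction_flow(heard_wake_word, is_connected, establish_connection,
                     distance, preset_distance, wake, collect, parse, send):
    """Condensed sketch of fig. 3: wake-word check, connection check,
    distance-based choice of capture side, parse, then send/route."""
    if not heard_wake_word:                                   # S301
        return "end"
    if not is_connected() and not establish_connection():     # S302, S312-S313
        return "end"
    # S303: far from the first device -> the wearable captures the voice;
    # close to it -> the first device captures the voice directly.
    side = "wearable" if distance() > preset_distance else "first_device"
    wake(side)                                                # S3041 / S3042
    instruction = parse(collect(side))                        # S305x / S306x
    if instruction is None:                                   # S3071 / S3072
        return "end"
    return send(instruction)                                  # S3081, S309-S311
```

Steps S309 to S311 (matching the identifier and executing or forwarding) would then run inside whatever `send` delivers the instruction to.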
The interaction method implemented based on the wearable device and the first smart device provided in this embodiment is described above with reference to the accompanying drawings and examples. The wearable device takes the smart device with the strongest wireless signal as the first smart device, establishes a wireless connection with it, and can collect the user voice through the wearable device and generate a corresponding instruction, so that the instruction can be transmitted through the first smart device. This avoids the situation in which collecting the user voice through a smart device is limited by the distance between the user and that device, and realizes voice control of the multiple smart devices in the Internet of Things system.
< second embodiment >
In the present embodiment, a wearable device 4000 is further provided, as shown in fig. 4, the wearable device 4000 is provided with at least one microphone. The wearable device may be the wearable device 1000 as shown in fig. 1.
Wearable device 4000 may have the ability to voice interact with a user. The wearable device may be, for example, smart glasses with a microphone, a smart headset, a smart bracelet, a smart watch, or the like.
The wearable device and the other smart devices can form an Internet of Things system. The wearable device can be communicatively connected with the other smart devices wirelessly, and can send instructions to them so that they perform corresponding operations according to the instructions. The other smart devices can be communicatively connected with one another wirelessly or by wire, so that instructions can be transmitted among them. The other smart devices may be, for example, smart home devices, such as a smart speaker, a smart television, a smart washing machine, a projector, or a smart phone.
The wearable device 4000 comprises a processor 4100 and a memory 4200.
The memory 4200 may be used to store executable instructions. The processor 4100 may be configured to execute, under the control of the executable instructions, the interaction method implemented based on the wearable device provided in the first embodiment.
The embodiments in the present disclosure are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments, but it should be clear to those skilled in the art that the embodiments described above can be used alone or in combination with each other as needed. In addition, for the device embodiment, since it corresponds to the method embodiment, the description is relatively simple, and for relevant points, refer to the description of the corresponding parts of the method embodiment. The system embodiments described above are merely illustrative, in that modules illustrated as separate components may or may not be physically separate.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (11)

1. An interaction method implemented by a wearable device provided with at least one microphone, the method comprising:
after receiving a preset wake-up voice command, searching for wireless signals, taking the smart device with the strongest wireless signal as a first smart device, and establishing a wireless connection with the first smart device;
collecting a user's voice and obtaining an instruction from the user's voice, wherein the instruction comprises an identifier of an instruction object and instruction content corresponding to the identifier; when the distance between the first smart device and the wearable device is greater than a preset distance, waking up the wearable device so that the wearable device receives the instruction; and
sending the instruction to the first smart device so that the first smart device performs a corresponding operation according to the identifier.
2. The method of claim 1, further comprising:
when the distance between the first smart device and the wearable device is not greater than the preset distance, waking up the first smart device so that the first smart device receives the instruction directly and performs the corresponding operation according to the identifier.
3. The method of claim 2, wherein the first smart device performing the corresponding operation according to the identifier comprises:
if the identifier of the instruction object matches the identifier of the first smart device, executing the instruction; and
if the identifier of the instruction object does not match the identifier of the first smart device, sending the instruction to the corresponding smart device according to the identifier.
4. The method of claim 1, further comprising:
after establishing the wireless connection with the first smart device, maintaining the wireless connection with the first smart device for a preset time and monitoring the strength of the wireless signal received from the first smart device; and
if the strength is lower than a preset threshold, disconnecting the wireless connection with the first smart device and searching for wireless signals again.
5. The method of claim 4, further comprising:
determining the distance between the wearable device and the first smart device according to the strength of the wireless signal received from the first smart device.
6. The method of claim 1, wherein obtaining the instruction from the user's voice comprises:
recognizing the user's voice as text, and parsing the recognized text to generate the instruction.
7. The method of claim 1, wherein obtaining the instruction from the user's voice comprises:
uploading the collected user voice to a cloud for parsing, and obtaining the instruction generated by the cloud parsing.
8. The method of claim 1, wherein obtaining the instruction from the user's voice comprises:
recognizing the user's voice as text, uploading the recognized text to a cloud for parsing, and obtaining the instruction generated by the cloud parsing.
9. The method of claim 1, wherein the wearable device is smart glasses or a smart headset.
10. The method of claim 1, wherein the wireless signal search is a Bluetooth signal search or a Wi-Fi signal search.
11. A wearable device comprising a memory and a processor, the memory storing computer instructions that, when executed by the processor, implement the method of any one of claims 1-10.
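The interaction flow recited in claims 1-5 (wake-word triggered scan, strongest-signal pairing, instruction parsing, identifier-based routing, and signal-strength monitoring) can be sketched as follows. This is an illustrative sketch only, not part of the patent text: all class and function names (`Wearable`, `SmartDevice`, `parse_instruction`), the RSSI threshold, and the trivial "id: content" parsing stand-in are hypothetical.

```python
# Illustrative sketch (not part of the patent) of the claimed interaction flow.
from dataclasses import dataclass, field

@dataclass
class SmartDevice:
    device_id: str
    rssi: int                      # received signal strength, dBm (hypothetical)
    executed: list = field(default_factory=list)

    def execute(self, instruction: dict) -> None:
        # Claim 3: run the instruction if the identifier matches this device;
        # otherwise it would be forwarded to the device the identifier names.
        if instruction["id"] == self.device_id:
            self.executed.append(instruction["content"])

def parse_instruction(text: str) -> dict:
    # Claims 6-8: turn recognized speech into an instruction holding the target
    # device identifier and the instruction content. Parsing may run on-device
    # or in the cloud; a trivial "id: content" split stands in here.
    target, _, content = text.partition(":")
    return {"id": target.strip(), "content": content.strip()}

class Wearable:
    RSSI_THRESHOLD = -70           # claim 4: below this, disconnect and rescan

    def __init__(self) -> None:
        self.first_device: SmartDevice | None = None

    def on_wake_word(self, nearby: list[SmartDevice]) -> None:
        # Claim 1: after the preset wake-up command, take the smart device
        # with the strongest wireless signal as the "first smart device".
        self.first_device = max(nearby, key=lambda d: d.rssi)

    def handle_speech(self, text: str) -> None:
        # Claim 1: obtain the instruction from the user's voice and send it
        # to the first smart device, which acts on the identifier.
        self.first_device.execute(parse_instruction(text))

    def link_ok(self, current_rssi: int) -> bool:
        # Claim 4: keep the connection only while the signal strength stays
        # at or above the preset threshold; otherwise search again.
        return current_rssi >= self.RSSI_THRESHOLD

# Minimal walk-through: the speaker has the stronger signal, so it becomes
# the first smart device and receives the instruction addressed to it.
speaker = SmartDevice("speaker", rssi=-40)
lamp = SmartDevice("lamp", rssi=-80)
w = Wearable()
w.on_wake_word([speaker, lamp])
w.handle_speech("speaker: play music")
print(w.first_device.device_id)    # speaker
print(speaker.executed)            # ['play music']
```

The strongest-signal heuristic in `on_wake_word` mirrors the claim's assumption that received signal strength is a usable proxy for distance (claim 5); in practice RSSI is noisy and a real implementation would smooth it over several samples.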
CN201911144257.3A 2019-11-20 2019-11-20 Interaction method realized based on wearable device and wearable device Pending CN110956963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911144257.3A CN110956963A (en) 2019-11-20 2019-11-20 Interaction method realized based on wearable device and wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911144257.3A CN110956963A (en) 2019-11-20 2019-11-20 Interaction method realized based on wearable device and wearable device

Publications (1)

Publication Number Publication Date
CN110956963A true CN110956963A (en) 2020-04-03

Family

ID=69978127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911144257.3A Pending CN110956963A (en) 2019-11-20 2019-11-20 Interaction method realized based on wearable device and wearable device

Country Status (1)

Country Link
CN (1) CN110956963A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111624891A (en) * 2020-05-29 2020-09-04 歌尔科技有限公司 Control method and device applied to wearable equipment and wearable equipment
CN112269468A (en) * 2020-10-23 2021-01-26 深圳市恒必达电子科技有限公司 Bluetooth and 2.4G, WIFI connection-based human-computer interaction intelligent glasses, method and platform for acquiring cloud information
CN112543446A (en) * 2020-12-02 2021-03-23 歌尔科技有限公司 Interaction method based on near field communication, wearable device and storage medium
CN113113004A (en) * 2021-02-24 2021-07-13 花豹科技有限公司 Voice recognition starting method, electronic device, computer device and readable storage medium
CN113409788A (en) * 2021-07-15 2021-09-17 深圳市同行者科技有限公司 Voice wake-up method, system, device and storage medium
CN113873392A (en) * 2021-09-06 2021-12-31 深圳市海创嘉科技有限公司 Intelligent sound box array system
CN116033431A (en) * 2022-08-18 2023-04-28 荣耀终端有限公司 Connection method and device of wearable device
WO2024000836A1 (en) * 2022-06-29 2024-01-04 歌尔股份有限公司 Voice control method and apparatus for home device, wearable device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140379336A1 (en) * 2013-06-20 2014-12-25 Atul Bhatnagar Ear-based wearable networking device, system, and method
US20170339663A1 (en) * 2016-05-17 2017-11-23 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Automatically modifying notifications based on distance from a paired wearable smart device
CN109696833A (en) * 2018-12-19 2019-04-30 歌尔股份有限公司 A kind of intelligent home furnishing control method, wearable device and sound-box device


Similar Documents

Publication Publication Date Title
CN110956963A (en) Interaction method realized based on wearable device and wearable device
US11825246B2 (en) Electrical panel for identifying devices using power data and network data
US10735829B2 (en) Identifying device state changes using power data and network data
CN111192591B (en) Awakening method and device of intelligent equipment, intelligent sound box and storage medium
US9800958B1 (en) Training power models using network data
CN102831894B (en) Command processing method, command processing device and command processing system
CN105847099B (en) Internet of things implementation system and method based on artificial intelligence
JP7017598B2 (en) Data processing methods, devices, devices and storage media for smart devices
CN102346643A (en) Realization method and device for learnable type remoter
EP4050903A1 (en) Identifying device state changes using power data and network data
CN112908318A (en) Awakening method and device of intelligent sound box, intelligent sound box and storage medium
CN105677152A (en) Voice touch screen operation processing method and device and terminal
CN104992715A (en) Interface switching method and system of intelligent device
CN112908321A (en) Device control method, device, storage medium, and electronic apparatus
CN103095927A (en) Displaying and voice outputting method and system based on mobile communication terminal and glasses
WO2021180162A1 (en) Power consumption control method and device, mode configuration method and device, vad method and device, and storage medium
CN111862965A (en) Awakening processing method and device, intelligent sound box and electronic equipment
CN105407445A (en) Connection method and first electronic device
CN112700770A (en) Voice control method, sound box device, computing device and storage medium
CN106297783A (en) A kind of interactive voice identification intelligent terminal
CN111986682A (en) Voice interaction method, device, equipment and storage medium
US20220319507A1 (en) Electronic device for identifying electronic device to perform speech recognition and method of operating same
CN115527540A (en) Sound detection method and device and electronic equipment
CN107358956B (en) Voice control method and control module thereof
CN115810356A (en) Voice control method, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200403