CN114124597B - Control method, equipment and system of Internet of things equipment - Google Patents


Info

Publication number
CN114124597B
CN114124597B (application CN202111263599.4A)
Authority
CN
China
Prior art keywords: user, control, scene, target, internet
Prior art date
Legal status
Active
Application number
CN202111263599.4A
Other languages
Chinese (zh)
Other versions
CN114124597A (en)
Inventor
王波 (Wang Bo)
Current Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd, Haier Smart Home Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN202111263599.4A
Publication of CN114124597A
Application granted
Publication of CN114124597B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H04L 12/2816 Controlling appliance services of a home automation network by calling their functionalities
    • H04L 12/282 Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L 67/14 Session management
    • H04L 67/146 Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/26 Speech to text systems
    • G10L 15/28 Constructional details of speech recognition systems
    • G10L 15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G10L 2015/088 Word spotting
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Telephonic Communication Services (AREA)
  • Selective Calling Equipment (AREA)

Abstract

The embodiments of this application disclose a control method, device, and system for internet of things devices. The method comprises: receiving a user's voice; acquiring user information of the user, including the user's location and/or identity; and determining a target scene identifier from a scene identifier set according to the user information. The scene identifier set comprises spatial identifiers, derived from the positions of the internet of things devices in each control scene, and user identifiers, derived from the user corresponding to each control scene. When a control scene corresponding to the target scene identifier exists among the first control scenes obtained from the voice, the target control scene is determined from that corresponding control scene, and the internet of things devices in the target control scene are controlled. The method simplifies the voice commands a user needs to control devices and improves the user experience.

Description

Control method, equipment and system of Internet of things equipment
Technical Field
The application relates to the field of internet of things, in particular to a control method, equipment and a system of internet of things equipment.
Background
With the continuous development of internet of things technology, internet of things devices such as smart home appliances are increasingly popular. Smart home appliances are household appliances augmented with microprocessors, sensor technology, network communication technology, and the like; common examples include lamps, air conditioners, refrigerators, and sound systems.
To make the control of internet of things devices more intelligent, a user can control the state of a device by voice. Typically, various control scenes are configured for the devices; executing different control scenes through user voice can put the same device into different working states, or control different devices altogether.
Currently, to control devices in different scenes, the user's voice is recognized and matched against voice keywords to obtain a target control scene. The voice keywords typically include user actions as well as other keywords that help determine the target control scene. However, this approach requires the user's voice to carry a lot of information; otherwise the wrong device may be controlled. The resulting voice commands are complex, which degrades the user experience.
Disclosure of Invention
In view of this, the present application provides a control method, device and system for an internet of things device, which are used for simplifying the voice of a user for controlling the device and improving the use experience of the user.
In a first aspect, the present application provides a control method of an internet of things device, where the method includes:
receiving voice of a user;
acquiring user information of the user, including user position and/or user identity;
determining a target scene identifier in a scene identifier set according to the user information; the scene identification set comprises a space identification and a user identification, wherein the space identification is obtained according to the position of the Internet of things equipment in each control scene, and the user identification is obtained according to a user corresponding to each control scene;
when a control scene corresponding to a target scene identifier exists in a first control scene obtained according to the voice, determining a target control scene according to the control scene corresponding to the target scene identifier;
and controlling the Internet of things equipment in the target control scene.
Because the user information comprises the user's location and/or identity, it reflects the user's needs more accurately. The spatial identifiers in the scene identifier set are derived from the positions of the internet of things devices in each control scene, and the user identifiers are derived from the user corresponding to each control scene, so together they characterize each control scene precisely. Matching the user information against the identifiers in the scene identifier set therefore yields a target scene identifier that meets the user's needs. When a control scene corresponding to the target scene identifier exists among the first control scenes obtained from the voice, the target control scene is determined from that corresponding control scene. Because the voice is not the only basis for determining the target control scene, it does not need to carry the information already provided by the user information; the voice commands for controlling devices are thereby simplified, and the user experience improved.
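As a rough illustration of this selection logic, the following Python sketch intersects the voice-derived candidate scenes with the scenes whose identifiers match the user information, falling back to the voice-derived scenes alone when nothing matches. All names and the data layout are illustrative assumptions, not taken from the patent.

```python
def choose_target_scene(first_scenes, scene_ids, user_info):
    """first_scenes: candidate scene names derived from the user's voice.
    scene_ids: per-scene identifiers, e.g. {"sleep": {"space": "bedroom",
    "user": "dad"}} (hypothetical layout).
    user_info: the acquired user location and/or identity."""
    matched = [
        name for name in first_scenes
        if scene_ids[name].get("space") == user_info.get("location")
        or scene_ids[name].get("user") == user_info.get("identity")
    ]
    # If no scene corresponds to the target scene identifier, determine the
    # target from the first (voice-derived) control scenes alone.
    return matched if matched else list(first_scenes)
```

Used this way, a bare "turn it on" uttered in the bedroom can still resolve to the bedroom scene without the voice naming it.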
In one possible embodiment, the method further comprises:
and when a control scene corresponding to the target scene identifier does not exist in the first control scene obtained according to the voice, determining the target control scene according to the first control scene.
In a possible implementation, before obtaining the user information of the user, including the user location and/or identity, the method further includes:
judging whether the semantics of the voice include preset scene keywords;
and the obtaining of the user information of the user, including the user location and/or identity, includes:
acquiring the user information when the semantics do not include the preset scene keywords.
In one possible embodiment, the method further comprises:
and when the semantics include the preset scene keywords, determining the target control scene according to the semantics.
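A minimal sketch of this keyword gate, under the assumption that the semantics arrive as recognized text and the scene keywords can be checked as plain substrings (both hypothetical simplifications):

```python
def route_by_keywords(semantics, preset_keywords, get_user_info):
    """If the utterance already contains a preset scene keyword, resolve the
    scene from the semantics; otherwise fall back to acquiring user
    information (location and/or identity). Names are illustrative."""
    hits = [kw for kw in preset_keywords if kw in semantics]
    if hits:
        return ("semantic", hits)        # scene determined from semantics
    return ("user_info", get_user_info())  # acquire user info instead
```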
In a possible implementation manner, the determining the target control scene according to the control scene corresponding to the target scene identifier includes:
determining a control scene corresponding to the target scene identifier as a second control scene;
when the second control scene comprises a plurality of control scenes, determining the target control scene according to the history information of each control scene in the second control scene.
In one possible implementation, the history information of each control scene in the second control scene includes:
the time at which each of the second control scenes was executed,
or
the number of times each of the second control scenes was executed within a preset period.
In one possible embodiment, the method further comprises:
determining controlled equipment in each control scene, wherein the controlled equipment comprises one or more Internet of things equipment;
determining the position distribution of controlled equipment in each control scene in a preset area;
and determining the spatial identifiers corresponding to the control scenes respectively according to the position distribution.
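The three steps above can be sketched as follows; deriving one spatial identifier per scene as the room shared by most of its controlled devices is an assumption for illustration, not a rule stated by the patent:

```python
from collections import Counter

def spatial_identifiers(scene_devices, device_room):
    """scene_devices: {scene: [controlled device names]} (hypothetical).
    device_room: {device name: room it is installed in}.
    Returns one spatial identifier per control scene, taken as the
    dominant room among that scene's controlled devices."""
    ids = {}
    for scene, devices in scene_devices.items():
        rooms = Counter(device_room[d] for d in devices)
        ids[scene] = rooms.most_common(1)[0][0]  # most frequent room
    return ids
```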
In one possible embodiment, the method further comprises:
determining the identity of a target user in each control scene;
judging whether the target user in each control scene has a role identifier created by that user, where the role identifier represents the user's role within a preset user set;
if yes, determining the user identification corresponding to each control scene according to the identity identification of the target user in each control scene and the role identification of the user in each control scene.
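A hedged sketch of this mapping, assuming each scene records its target user's identity identifier and a lookup table of created role identifiers exists (all names hypothetical):

```python
def user_identifiers(scene_owner, role_of):
    """scene_owner: {scene: identity identifier of its target user}.
    role_of: {identity identifier: created role identifier within the
    household's preset user set}. Falls back to the raw identity
    identifier when no role identifier was created."""
    return {scene: role_of.get(uid, uid) for scene, uid in scene_owner.items()}
```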
In a second aspect, the present application provides an internet of things gateway device, where the internet of things gateway device is configured to execute a control method of any one of the internet of things devices, so as to control the internet of things device.
In a third aspect, the application provides an internet of things system, where the internet of things system includes the gateway device of the internet of things, and further includes one or more devices of the internet of things.
Drawings
Fig. 1 is a schematic structural diagram of an internet of things system according to an embodiment of the present application;
fig. 2 is a flowchart of a control method of an internet of things device according to an embodiment of the present application;
fig. 3 is a flowchart of a control method of an internet of things device according to another embodiment of the present application.
Detailed Description
In order to facilitate understanding of the technical solution provided by the embodiments of the present application, the control method, the device and the system of the internet of things device provided by the embodiments of the present application are described below with reference to the accompanying drawings.
While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Based on the embodiments herein, other embodiments that may be obtained by those skilled in the art without making any inventive contribution are within the scope of the application.
In the claims and specification of this application and in the drawings of the specification, the terms "comprise" and "have" and any variations thereof, are intended to cover a non-exclusive inclusion.
Currently, to control devices in different scenes, the user's voice is recognized and matched against voice keywords to obtain a target control scene. The voice keywords typically include user actions as well as other keywords that help determine the target control scene. However, this approach requires the user's voice to carry a lot of information; otherwise the wrong device may be controlled. The resulting voice commands are complex, which degrades the user experience.
Based on this, in the embodiments of the present application, the target scene identifier is obtained from the user information, which includes the user's location and/or identity and thus provides more information related to the user's control needs. Therefore, even when the user's voice provides little information, the target control scene obtained by combining the voice with the user information better meets the user's control needs; the voice commands for controlling devices are simplified, and the user experience improved.
In order to improve the intelligence of controlling each internet of things device, a user can control the state of the device through voice. For example, each internet of things device is typically one or more home devices. In general, one or more home devices are provided with various control scenarios, and when different control scenarios are executed through user voice, the same home device can be controlled to be in different working states, or different devices can be controlled.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an internet of things system provided in an embodiment of the present application, where an internet of things device may include a home device.
As shown in fig. 1, the internet of things system 100 includes an internet of things gateway device 101 and one or more internet of things devices. The control method is executed by the internet of things gateway device to control the one or more internet of things devices.
In fig. 1, an internet of things system 100 includes an internet of things device 102, an internet of things device 103, and an internet of things device 104. In practical application, the internet of things system can further comprise one internet of things device, two internet of things devices, or more than three internet of things devices.
In some possible cases, the internet of things device may be a home device, for example, an air conditioner, a lamp, a fan, a sound box, and the like.
Referring to fig. 2, fig. 2 is a flowchart of a control method of an internet of things device according to an embodiment of the present application.
As shown in fig. 2, the control method of the internet of things device in the embodiment of the present application includes S201-S205.
S201, receiving voice of a user.
In S201, the user is the one who controls the device through the control scene, and does so by voice.
S202, acquiring user information of the user, including user positions and/or user identities.
In S202, the user position of the user refers to the position where the user is located in S201. The user identity of the user is used to identify the identity of the user.
The user information of the user may include a user location and/or a user identity, in other words, the user information of the user may include a user location; the user information of the user may include a user identity; the user information of the user may include a user location and a user identity.
It will be appreciated that the user information may also include other information about the user. Both the user information here and the voice in S201 pertain to the same user.
S203, determining a target scene identifier in a scene identifier set according to the user information; the scene identifiers in the scene identifier set comprise space identifiers and user identifiers, the space identifiers are obtained according to the positions of the Internet of things devices in the control scenes, and the user identifiers are obtained according to users corresponding to the control scenes.
In S203, the scene identifier set is a set of scene identifiers including a plurality of scene identifiers, which may be divided into two types, one being a spatial identifier and the other being a user identifier. The scene identifications in the scene identification set correspond to each control scene.
A target scene identity is determined from the set of scene identities based on the user information. Specifically, the target scene identifier is a scene identifier in the scene identifier set, and may be one or more scene identifiers; the basis for determining the target scene identity from the scene identity set is user information.
The spatial identification in the scene identification set is obtained according to the positions of the Internet of things equipment in each control scene.
The internet of things equipment in each control scene refers to the internet of things equipment with the running state controlled when each control scene is executed. For each control scenario, the internet of things devices in the control scenario may include one or more devices.
The location of the internet of things device in each control scene refers to where that device is situated. For each control scene, it is specifically the spatial location of the controlled device or devices in that scene.
The user identification in the scene identification set is obtained according to the user corresponding to each control scene. For each control scene, the user corresponding to the control scene is the user who controls the internet of things equipment by executing the control scene.
S204, when a control scene corresponding to the target scene identifier exists in the first control scene obtained according to the voice, determining the target control scene according to the control scene corresponding to the target scene identifier.
In S204, the first control scene is derived from the user's voice. In some possible cases, the first control scene may include one or more control scenes.
In some possible cases, the above-described first control scenario may be obtained in the following manner.
Recognize the user's voice to obtain its semantics; obtain all control scenes and their corresponding voice keywords; match the semantics against the preset voice keywords; and derive the first control scene from the matched keywords, forming a list of scenes that conform to the semantics.
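That matching step might look like the following sketch, treating the recognized semantics as text and the preset voice keywords as substrings (a deliberate simplification of real NLP matching; all names are assumptions):

```python
def first_control_scenes(semantics, scene_keywords):
    """scene_keywords: {scene: [preset voice keywords]} (hypothetical).
    Returns the list of scenes whose keywords appear in the recognized
    semantics, i.e. the "first control scene" candidate list."""
    return [scene for scene, kws in scene_keywords.items()
            if any(kw in semantics for kw in kws)]
```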
Specifically, the above speech processing can be implemented with natural language processing (NLP).
The first control scene is obtained from the user's voice after the voice is received in S201. Specifically, this may happen either before or after the user information is acquired.
In order to achieve control of a device by speech, the information contained in the speech of the user is usually related, or partially related, to the control requirements of the user on the device. Thus, one or more control scenarios that conform to the speech semantics can typically be derived from the user's speech, i.e., the first control scenario is derived from the user's speech.
However, when the user's voice is simple and provides little information, the one or more control scenes obtained from it do not necessarily match, or fully match, the user's control needs.
The target scene identification is obtained according to the user information of the user and contains information related to the control requirement of the user. In S204, when a control scene corresponding to the target scene identifier exists in the first control scene obtained according to the voice, determining a target control scene according to the control scene corresponding to the target scene identifier. In other words, among the first control scenes obtained from the speech, a control scene corresponding to the target scene identification is selected, thereby determining the target control scene.
The target scene identification is obtained according to the user information of the user and contains information related to the control requirement of the user. Therefore, the target control scene obtained by the process meets the control requirement of the user.
In S204, in the first control scene obtained from the speech, there is a control scene corresponding to the target scene identifier, which is a condition for determining the target control scene according to the control scene corresponding to the target scene identifier. The condition may be obtained by a form of judgment.
At this time, the steps are as follows:
judging whether a control scene corresponding to a target scene identifier exists in a first control scene obtained according to the voice;
if yes, determining a target control scene according to the control scene corresponding to the target scene identifier.
Since the scene identifiers include spatial identifiers and user identifiers, the target scene identifier may consist of one or more scene identifiers. Accordingly, a control scene in the first control scene corresponds to the target scene identifier in at least the following cases:
in the first case, the first control scene contains a control scene corresponding to the spatial identifier of the target scene identifier but none corresponding to its user identifier;
in the second case, the first control scene contains a control scene corresponding to the user identifier of the target scene identifier but none corresponding to its spatial identifier;
and in the third case, the first control scene contains a control scene corresponding to both the spatial identifier and the user identifier of the target scene identifier.
S205, controlling the Internet of things equipment in the target control scene.
In S205, the internet of things device in the target control scene refers to the internet of things device whose operation state is controlled when the target control scene is executed.
Based on S201-S205, since the target scene identifier is derived from the user information, which includes the user's location and/or identity, more information related to the user's control needs is available. Therefore, even when the user's voice provides little information, the target control scene obtained by combining the voice with the user information better meets the user's control needs; the voice commands for controlling devices are simplified, and the user experience improved.
In one possible implementation, when no control scene in the first control scene corresponds to the target scene identifier, the target control scene may be determined as follows:
when no control scene in the first control scene corresponds to the target scene identifier, determining the target control scene according to the first control scene;
and controlling the internet of things devices in the target control scene.
When the first control scene obtained from the user's voice contains no control scene corresponding to the target scene identifier, the scenes derived from the voice differ to some extent from those matched from the user information.
Since the user's voice at the time of control generally expresses the user's current control need, the target control scene is then determined from the first control scene obtained from the voice, and control of the internet of things devices proceeds on that basis.
In some possible cases, the first control scenario includes a plurality of control scenarios. To improve accuracy for device control, a unique target control scenario needs to be determined. At this time, according to the first control scenario, the target control scenario may be determined by the following implementation manner:
and determining a target control scene according to the history information of each control scene in the first control scene.
Here, the history information of each control scene in the first control scene refers to how each of those control scenes has been executed in the past.
In some possible cases, the history information of each control scenario executed in the first control scenario may include:
the first control scene includes a time when each control scene is executed. Further, in particular the moment of the last execution. For example, for a certain control scenario among the first control scenarios, the time point of the last execution is close to the current time, indicating that the relevance of the control scenario at the current time point may be large. Thus, the control scene may be determined as the target control scene.
In some possible cases, the history information of each control scenario executed in the first control scenario may further include:
the number of times each control scene in the first control scene was executed within a preset period. For example, a control scene executed more often within the preset period is more likely to be executed again, and may therefore be determined as the target control scene.
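Combining the two history signals, a tie-break among candidate scenes could be sketched as below; preferring recency first and then frequency is an illustrative choice, not an ordering mandated by the patent:

```python
def pick_by_history(candidates, last_run, run_count):
    """candidates: list of control scene names; last_run: {scene: timestamp
    of most recent execution}; run_count: {scene: executions within the
    preset period}. Returns the single scene ranked highest by recency,
    then frequency (hypothetical ranking)."""
    return max(candidates,
               key=lambda s: (last_run.get(s, 0), run_count.get(s, 0)))
```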
In one possible implementation manner, determining the target control scene according to the control scene corresponding to the target scene identifier may include the following implementation manners:
determining the control scene corresponding to the target scene identifier as the second control scene; when the second control scene comprises a plurality of control scenes, determining the target control scene according to the history information of each control scene in the second control scene.
Since the target scene identifier may be one or more scene identifiers, when a control scene corresponding to the target scene identifier exists in the first control scene, the control scene corresponding to the target scene identifier is determined to be the second control scene. In this case, the second control scene may include a plurality of control scenes.
In order to improve the accuracy of the control of the device, the resulting target control scenario is unique, and therefore, it is necessary to determine a unique target control scenario from the second control scenario.
Here, the history information of each control scene in the second control scene refers to how each of those control scenes has been executed in the past.
In some possible cases, the history information of each control scene in the second control scene may include: the time at which each of the second control scenes was executed, in particular the time of the most recent execution.
In some possible cases, the history information of each control scenario executed in the second control scenario may further include: the number of times each of the second control scenes is executed within a preset period.
The meaning and role of the history information of each control scene in the second control scene are similar to those of the history information of each control scene in the first control scene above, and are not described in detail here.
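The embodiment leaves the exact selection rule open. A minimal Python sketch of one plausible rule, assuming the execution count within the preset period is compared first and the most recent execution time breaks ties (the names `ControlScene` and `pick_target_scene` are illustrative, not from the embodiment):

```python
from dataclasses import dataclass

@dataclass
class ControlScene:
    name: str
    last_executed: float = 0.0  # timestamp of the most recent execution
    exec_count: int = 0         # executions within the preset period

def pick_target_scene(candidates):
    """Pick a unique target control scene from the second control scene:
    prefer the scene executed most often in the preset period, and break
    ties by the most recent execution time."""
    if len(candidates) == 1:
        return candidates[0]
    return max(candidates, key=lambda s: (s.exec_count, s.last_executed))
```

Either criterion could equally be used alone; the tuple key simply encodes one possible priority between them.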
In some possible cases, when the second control scenario includes multiple control scenarios, the target control scenario may also be determined by other means.
In S202, user information of the user including a user location and/or a user identity is acquired.
The user information is used for determining the target scene identifier and, further, the target control scene, thereby completing the control of the internet of things device. Acquiring more accurate user information therefore improves the accuracy of device control.
Examples of specific implementations for user information acquisition are provided herein. It should be understood that this is merely an example of a specific implementation of the embodiments of the present application and is not intended to limit the embodiments of the present application in any way.
In S202 of the control method of the internet of things device in the embodiment of the present application, the user position may be obtained in the following modes, described here as applied to the control of home devices in a household.
Mode one:
Detect the voice of the user with the voice devices in the user's home; each voice device obtains a sound distance for that voice; according to the sound distances obtained by the voice devices, determine the space in which the device nearest to the user is located; and determine the position of the user according to that space.
A voice device is a device in the internet of things system capable of receiving the user's voice. The sound distance is the distance, estimated by each voice device from the user's voice, between that device and the position from which the voice was uttered, which is generally the position of the user.
For example, suppose the internet of things system contains a first voice device, a second voice device, and a third voice device. The three voice devices simultaneously receive the voice of the user, with corresponding energies of a first energy, a second energy, and a third energy, respectively. The magnitude relation of the three energies is judged; when the first energy is the largest, the first voice device is determined to be the voice device closest to the user, and the position of the user is determined according to the position of the first voice device.
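The nearest-device judgment in this example reduces to picking the device with the highest received energy and reading off the space it is installed in. A sketch, with hypothetical device and room names:

```python
def locate_user(device_energies, device_rooms):
    """Pick the voice device that received the user's voice with the
    highest energy (taken as the device nearest the user) and return
    that device together with the space it is located in."""
    nearest = max(device_energies, key=device_energies.get)
    return nearest, device_rooms[nearest]

# Hypothetical example: three voice devices report received energies.
energies = {"dev1": 0.9, "dev2": 0.4, "dev3": 0.1}
rooms = {"dev1": "master bedroom", "dev2": "kitchen", "dev3": "study"}
print(locate_user(energies, rooms))  # ('dev1', 'master bedroom')
```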
Mode two:
In order to control devices in the internet of things system, a voice assistant can be installed on the user's smart devices. A smart device can receive the user's voice and, according to that voice, complete a dialogue with the user, interact with the cloud, and so on. The internet of things system may include a plurality of such smart devices.
In the internet of things system, let the first smart device be the one that actually receives the user's voice and interacts with the cloud.
In some possible cases, the user position may be determined directly from the location of the first smart device. For a user to control a device by voice, the user generally needs to input voice through such a smart device, and while doing so the user is relatively close to it. Therefore, the position of the smart device that receives the user's voice input may be taken directly as the position of the user.
In some possible cases, since the internet of things system may contain a plurality of such smart devices, the received sound can be judged across them, and the user position determined from the position of the smart device nearest to the user. Using multiple smart devices further improves the accuracy of determining the user position.
Mode three:
The internet of things system may contain home devices with a function of detecting the user's spatial position, such as sensing devices on infrared air conditioners, lamps, and the like. The position of the user is detected with these sensing devices, thereby obtaining the user position.
Mode four:
When determining the user position, the device that receives the user's voice, the device that responds interactively to the user, and the device nearest to the user are not necessarily located where the user is currently speaking. Therefore, the above three modes, or any combination of them with other modes, can be combined to improve the accuracy of determining the user position.
It should be understood that the foregoing is merely an example of the implementation manner of obtaining the user position in the embodiment S202 of the present application, and is not a limitation of the embodiment of the present application. Acquiring the user location may also be accomplished in other ways.
In S202 of the control method of the internet of things device in the embodiment of the present application, the user identity may be obtained in the following manner, described here as applied to the control of home devices in a household.
The collected biometric data of the user is recognized by the interaction end that conducts voice interaction with the user, thereby identifying the interacting user.
The biometric data of the user includes the user's voiceprint, iris, and so on, and is processed using the image and speech processing capabilities of the interaction end. After the interacting user is identified, the user identity may be characterized by a user ID or the like.
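As a sketch of the recognition step, assuming voiceprint features are compared against enrolled reference vectors by distance (the feature representation and matching rule are assumptions, not specified by the embodiment):

```python
def identify_user(voiceprint, enrolled):
    """Return the user ID whose enrolled reference voiceprint is
    closest (squared Euclidean distance) to the collected voiceprint.

    `enrolled` maps user IDs to reference feature vectors."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(enrolled, key=lambda uid: sqdist(voiceprint, enrolled[uid]))
```

A production system would use a trained speaker-embedding model and a rejection threshold for unknown speakers; the lookup structure stays the same.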
It should be understood that the foregoing is merely an example of the implementation manner of obtaining the user identity in the embodiment S202 of the present application, and is not a limitation of the embodiment of the present application. Acquiring the user identity may also be achieved in other ways.
In the embodiment of the present application corresponding to fig. 2, according to the voice of the user and the acquired user information including the user position and/or the user identity, the voice required for the user to control devices can be simplified, and the use experience of the user improved.
When a user controls a device by voice, the voice uttered by the user is typically representative of the user's current control needs. When the voice sent by the user contains more information, the control scene meeting the user requirement can be directly obtained according to the voice.
In order to improve the efficiency of device control, the user's voice is first analyzed. When the voice meets a preset condition, the target control scene is determined directly from the voice.
Referring to fig. 3, fig. 3 is a flowchart of a control method of an internet of things device according to another embodiment of the present application.
As shown in fig. 3, the control method of the internet of things device in the embodiment of the present application includes S301-S307.
S301, recognizing voice of a user, and obtaining the semantic meaning of the voice.
In S301, the user is a user who performs device control by voice.
S302, judging whether the semantic comprises preset scene keywords or not.
In S302, the scene keyword is preset.
In some possible cases, the determining whether the semantic includes a preset scene keyword may be implemented as follows:
after the semantics of the voice are obtained, whether the semantics contain a scene keyword is judged by exhaustive comparison.
For example, the preset scene keywords may be used to describe related information of a spatial position corresponding to the control scene, and may also be used to describe related information of a user corresponding to the control scene.
If not, S303-S305 are executed; otherwise, S306 is executed.
S303, acquiring user information of the user, including the user position and/or the user identity.
When the semantics do not comprise the preset scene keywords, the user information of the user, including the user position and/or the user identity, is acquired.
S304, determining a target scene identifier in a scene identifier set according to the user information; the scene identifiers in the scene identifier set comprise space identifiers and user identifiers, the space identifiers are obtained according to the positions of the Internet of things devices in the control scenes, and the user identifiers are obtained according to users corresponding to the control scenes.
S305, when a control scene corresponding to the target scene identifier exists in the first control scene obtained according to the voice, determining the target control scene according to the control scene corresponding to the target scene identifier.
S306, determining a target control scene according to the semantics.
When the semantics comprise the preset scene keywords, the target control scene is determined according to the semantics.
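Putting S302 and the two branches together, assuming a hypothetical keyword list and scene table (none of these names come from the embodiment):

```python
SCENE_KEYWORDS = {"master bedroom", "secondary bedroom", "children's room",
                  "parents", "child", "visitor"}  # hypothetical keyword list

def handle_semantics(semantics, scenes_by_keyword, user_info_path):
    """S302: exhaustively compare the recognized semantics against the
    preset scene keywords.  If a keyword is present, determine the
    target control scene directly from the semantics (S306); otherwise
    fall back to the user-information path (S303-S305)."""
    for keyword in SCENE_KEYWORDS:
        if keyword in semantics:
            return scenes_by_keyword[keyword]  # S306
    return user_info_path(semantics)           # S303-S305
```

For example, the semantics "sleep in the master bedroom" hit the keyword "master bedroom" and resolve directly, while "sleep" alone falls through to the user-information path.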
In some possible cases, the preset scene keywords may be used to describe related information of the spatial position corresponding to the control scene.
For example, in the control of home devices, the control scenes include a "parents sleeping in the master bedroom" scene, a "child sleeping in the children's room" scene, and a "visitor sleeping in the secondary bedroom" scene. Different devices are controlled under each of the three control scenes.
The preset scene keywords can be set to describe the position of the internet of things equipment in the control scene.
The preset scene keywords are specifically "master bedroom", "secondary bedroom", and "children's room".
The recognized semantics of the user's voice are "sleep in the master bedroom".
In the process of controlling a device through speech, the semantics are typically recognized to obtain an action keyword identifying the user's action. Here, the action keyword "sleep" is obtained.
The detected semantics include the preset scene keyword "master bedroom".
The judgment result obtained in S302 is: the semantics comprise preset scene keywords.
At this time, from the "master bedroom" and "sleep" obtained directly in the above process, the target control scene "the parents sleep in the master bedroom" is obtained in S306, and the internet of things devices in the target control scene are controlled.
Because the user's voice includes not only the user action information "sleep" but also the spatial position information "master bedroom" corresponding to the control scene, determining the target control scene directly from the user's voice at this point achieves more accurate device control.
In some possible cases, the preset scene keywords may also be used to describe related information of the user corresponding to the control scene.
For example, in the control of home devices, the control scenes include a "parents sleeping in the master bedroom" scene, a "child sleeping in the children's room" scene, and a "visitor sleeping in the secondary bedroom" scene. Different devices are controlled under each of the three control scenes.
The preset scene keyword may be set as the name of the user served by the internet of things devices when the control scene is executed.
The preset scene keywords are specifically "parents", "child", and "visitor".
The recognized semantics of the user's voice are "the parents sleep".
In the process of controlling a device through speech, the semantics are typically recognized to obtain an action keyword identifying the user's action. Here, the action keyword "sleep" is obtained.
The detected semantics include the preset scene keyword "parents".
The judgment result obtained in S302 is: the semantics comprise preset scene keywords.
At this time, from the "parents" and "sleep" obtained directly in the above process, the target control scene "the parents sleep in the master bedroom" is obtained in S306, and the internet of things devices in the target control scene are controlled.
Because the user's voice includes not only the user action information "sleep" but also the information "parents" on the user corresponding to the control scene, determining the target control scene directly from the user's voice achieves more accurate device control.
S307, controlling the Internet of things equipment in the target control scene.
In one possible case, the spatial identifiers in the scene identifier set described above may be obtained in the following manner, specifically including S401-S403.
S401, determining controlled equipment in each control scene, wherein the controlled equipment comprises one or more Internet of things equipment.
In S401, a control scene is a scene in which internet of things devices are controlled. When different control scenes are executed through the user's voice, the same device may be controlled into different working states, or different devices may be controlled.
The controlled equipment in the control scene refers to the Internet of things equipment with the working state controlled when the control scene is executed. The controlled devices may include one or more internet of things devices.
In order to meet users' different demands for controlling devices, one or more control scenes are provided; the object determined in S401 is the controlled device in each control scene.
S402, determining the position distribution of the controlled equipment in each control scene in a preset area.
In S402, the location distribution refers to a distribution of locations where the controlled devices are located in space.
In some possible cases, the preset area may be an area corresponding to a home house, and specifically may include an area corresponding to one or more rooms. One or more home devices are distributed within a room in a house.
The controlled devices in each control scene are distributed in the preset area, so each control scene corresponds to a position distribution. S402 determines the position distribution corresponding to each control scene.
S403, determining the space identifiers corresponding to the control scenes respectively according to the position distribution.
The result of the determination in S403 is that each control scene corresponds to its own spatial identifier, which is used to identify that control scene.
For each control scene, the spatial identifier is obtained from the position distribution, so the spatial identifier carries the position information of the devices in the control scene.
Based on S401-S403, the spatial identifier of each control scene is obtained according to the position distribution of the controlled devices in each control scene within the preset area.
For each control scene, there is a correspondence between the spatial identification and the control scene, and the spatial identification contains location information of the controlled device in the control scene.
When the internet of things devices are controlled through voice and the user information includes the user position, the target spatial identifier is determined according to the user information; through the correspondence between spatial identifiers and control scenes, the controlled devices in the control scene corresponding to the target spatial identifier are obtained and controlled.
Because the spatial identifier contains the position information of the devices, the user's voice does not need to contain it, which simplifies the voice the user utters when controlling devices and improves the use experience.
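S401-S403 can be sketched as follows, assuming a spatial identifier is simply the set of rooms occupied by a scene's controlled devices (the identifier format, names, and helper functions are illustrative, not from the embodiment):

```python
def build_spatial_identifiers(scene_devices, device_rooms):
    """S401-S403: for each control scene, collect the rooms (position
    distribution) of its controlled devices and derive a spatial
    identifier from that distribution, here the sorted set of rooms
    joined with '+'."""
    spatial_ids = {}
    for scene, devices in scene_devices.items():
        rooms = sorted({device_rooms[d] for d in devices})
        spatial_ids[scene] = "+".join(rooms)
    return spatial_ids

def scenes_for_location(spatial_ids, user_room):
    """Given the user position, return the control scenes whose spatial
    identifier covers the user's room (target spatial identifier match)."""
    return [s for s, sid in spatial_ids.items() if user_room in sid.split("+")]
```

At control time, the user position selects the target spatial identifier, and the correspondence built above yields the scene whose devices should be controlled.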
In one possible case, the user identifiers in the scene identifier set described above may be obtained in the following manner, specifically including S501-S503.
S501, determining the identity of a target user in each control scene.
In S501, the identity identifier belongs to the target user, i.e. the user served when the control scene is executed. Executing the control scene completes the target user's voice control of the devices.
In some possible cases, determining the identity identifier of the target user in each control scene may be achieved as follows: for each control scene, the biometric data of the target user is collected by a device with a biometric collection function, and that biometric information is taken as the target user's identity identifier; alternatively, the biometric information may be associated with a user name, and the user name used as the identity identifier. For example, since the device receiving the user's voice is a smart device, collection of the target user's biometric data can be completed through that smart device. The biometric information may include an image, iris, voiceprint, and so on of the target user. It should be understood that the above is one implementation of the embodiments of the present application, does not limit them, and other implementations may also be included.
S502, judging whether a role identifier of a user created by the target user exists for the target user in each control scene, where the role identifier of the user is used for representing the role of the user in a preset user set.
In S502, the role identifier of a user is created by the target user and is used to represent the role of that user in a preset user set.
It will be appreciated that the role identification of the user may be created by the target user and used to represent the target user's own role, or may be created by the target user and used to represent other users' roles.
The preset user set is a set including at least the target user, and contains one or more users; that is, the target user is one member of the set. The role of the target user in the preset user set is the target user's role among those users; the roles of the other users have similar meanings.
For example, suppose the target user is an adult man. To distinguish different users, users are associated with identity identifiers; suppose the identity identifier of the adult man is "Zhang San".
The preset user set consists of the members of the family in which the adult man lives. The adult man's role in the family is father, and the roles of the members of the family include father, mother, and child.
The adult man may create a character identification of the user.
In one possible scenario, the adult man may create his own character identity.
For example, the adult man corresponds to the role "father"; that is, "father" can serve as the adult man's role identifier. Here, "father" is the role identifier of a user created by the target user, where the user is the target user himself.
In the above case, the identity identifier "Zhang San" and the role "father" represent the same person, namely the adult man.
Through the role identifier, the different control demands of the adult man under the "Zhang San" identity and under the "father" role can be distinguished. When the adult man controls devices by voice, the device control corresponding to "Zhang San" or to "father" can be realized as required.
In one possible scenario, the adult man creates a character identity for the other user.
For example, the adult man creates the role identifier "child" for his child. Here, "child" is the role identifier of a user created by the target user, where the user is the adult man's child.
Through the role identifier, the different control demands under the "Zhang San" identity and under the "child" role identifier can be distinguished. When devices are controlled by voice, the device control corresponding to "Zhang San" or to "child" can be realized as required.
The foregoing is illustrative of the embodiments of the present application and is not to be construed as limiting thereof.
And S503, if yes, determining the user identification corresponding to each control scene according to the identity identification of the target user in each control scene and the role identification of the user in each control scene.
In S503, for each control scene, when a role identifier of a user exists, the user identifier of the control scene is determined according to the identity identifier of the target user and the role identifier of the user. The result of S503 is the user identifier of each control scene; the user identifier corresponds to the control scene.
For a control scene in which no role identifier of a user exists, the following implementation may be adopted: the user identifier of the control scene is determined according to the identity identifier of the target user alone. In this case, the basis of the user identifier need not include any role identifier.
Based on S501-S503, the identity identifier of the target user is used to distinguish different users, so that when a user controls devices by voice, it can be determined which user is doing so.
By judging whether a role identifier exists, the control demand of the target user is distinguished from the control demand of the role corresponding to the role identifier, so that control for the same user under different conditions can be realized accurately, improving control accuracy.
Because the user identifier contains the information of the user, the user's voice does not need to contain it, which simplifies the voice the user utters when controlling devices and improves the use experience.
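A minimal sketch of S502-S503, assuming a user identifier simply pairs the identity identifier with each role identifier when roles exist (the `identity:role` format is an assumption, not from the embodiment):

```python
def user_identifier(identity, role_ids):
    """S502-S503: if the target user has created role identifiers, the
    user identifier of the scene combines the identity identifier with
    each role; otherwise it is the identity identifier alone."""
    if role_ids:
        return [f"{identity}:{role}" for role in role_ids]
    return [identity]
```

This yields distinct identifiers for "Zhang San" acting as himself, as "father", or on behalf of "child", which is exactly the distinction the judgment in S502 enables.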
Another embodiment of the present application is an internet of things system. As shown in fig. 1, fig. 1 is a schematic structural diagram of an internet of things system provided in an embodiment of the present application, where an internet of things device may include a home device.
As shown in fig. 1, the internet of things system 100 includes an internet of things gateway device 101 and one or more internet of things devices. The internet of things gateway device is configured to execute the control method of the internet of things device in any of the above embodiments, so as to control the internet of things devices.
In fig. 1, an internet of things system 100 includes an internet of things device 102, an internet of things device 103, and an internet of things device 104. In practical application, the internet of things system can further comprise one internet of things device, two internet of things devices, or more than three internet of things devices.
In some possible implementations, the scene identifier is obtained through a control method of any of the internet of things devices.
The internet of things system 100, the devices in the system, the relationships between the devices, and the beneficial effects achieved are the same as described above, and details are not repeated here.
Another embodiment of the present application is an internet of things gateway device. As shown in fig. 1, the gateway device of the internet of things is configured to execute the control method of the device of the internet of things, so as to control the device of the internet of things.
An embodiment of the present application further provides a computer readable storage medium configured to store a computer program, where the computer program is configured to execute the above control method of the internet of things device and achieves the same technical effects; to avoid repetition, no further description is provided here. The computer readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. The control method of the internet of things equipment is characterized by comprising the following steps:
receiving voice of a user;
acquiring user information of the user, including user position and/or user identity;
determining a target scene identifier in a scene identifier set according to the user information; the scene identification set comprises a space identification and a user identification, wherein the space identification is obtained according to the position of the Internet of things equipment in each control scene, and the user identification is obtained according to a user corresponding to each control scene;
determining the identity of a target user in each control scene;
judging whether a target user in each control scene has a role identifier of a user created by the target user, wherein the role identifier of the user is used for representing the role of the user in a preset user set;
if yes, determining the user identification corresponding to each control scene according to the identity identification of the target user in each control scene and the role identification of the user in each control scene;
when a control scene corresponding to a target scene identifier exists in a first control scene obtained according to the voice, determining a target control scene according to the control scene corresponding to the target scene identifier;
And controlling the Internet of things equipment in the target control scene.
2. The method according to claim 1, wherein the method further comprises:
and when a control scene corresponding to the target scene identifier does not exist in the first control scene obtained according to the voice, determining the target control scene according to the first control scene.
3. The method according to claim 1, further comprising, prior to said obtaining user information of said user including user location and/or user identity:
judging whether the semantics of the voice comprise preset scene keywords or not;
the obtaining the user information of the user, including the user position and/or the user identity, includes:
and when the semantic does not contain the preset scene keywords, acquiring the user information.
4. A method according to claim 3, characterized in that the method further comprises:
and when the semantic comprises the preset scene keywords, determining a target control scene according to the semantic.
5. The method according to claim 1, wherein determining the target control scene according to the control scene corresponding to the target scene identifier comprises:
Determining a control scene corresponding to the target scene identifier as a second control scene;
when the second control scenario comprises a plurality of control scenarios,
and determining a target control scene according to the history information of each control scene in the second control scene.
6. The method of claim 5, wherein the history information for each of the second control scenarios to be executed comprises:
the moment at which each of the second control scenarios is executed,
or,
the number of times each of the second control scenes is executed within a preset period.
7. The method according to claim 1, wherein the method further comprises:
determining controlled equipment in each control scene, wherein the controlled equipment comprises one or more Internet of things equipment;
determining the position distribution of controlled equipment in each control scene in a preset area;
and determining the spatial identifiers corresponding to the control scenes respectively according to the position distribution.
8. An internet of things gateway device, wherein the internet of things gateway device is configured to perform the control method of the internet of things device according to any one of claims 1-7, so as to control the internet of things device.
9. An internet of things system, comprising the internet of things gateway device of claim 8, further comprising one or more internet of things devices.
Publications (2)

Publication Number Publication Date
CN114124597A CN114124597A (en) 2022-03-01
CN114124597B true CN114124597B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
US20220317641A1 (en) Device control method, conflict processing method, corresponding apparatus and electronic device
CN109347709B (en) Intelligent equipment control method, device and system
CN109559742B (en) Voice control method, system, storage medium and computer equipment
EP3599605A1 (en) Home appliance and speech recognition server system using artificial intelligence and method for controlling thereof
CN110799978B (en) Face recognition in a residential environment
EP3779306A1 (en) Aroma releasing system
CN109377995B (en) Method and device for controlling equipment
CN108470568A (en) intelligent device control method and device, storage medium and electronic device
WO2021051955A1 (en) Method and apparatus for controlling electrical appliance, and computer-readable storage medium
CN112764352A (en) Household environment adjusting method and device, server and storage medium
CN113205807B (en) Voice equipment control method and device, storage medium and voice equipment
CN108932947B (en) Voice control method and household appliance
CN113091245B (en) Control method and device for air conditioner and air conditioner
CN110147047A (en) Smart home device screening technique, device, computer equipment and storage medium
CN114859749B (en) Intelligent home management method and system based on Internet of things
CN111524514A (en) Voice control method and central control equipment
CN110754948B (en) Intention identification method in cooking process and intelligent cooking equipment
CN114124597B (en) Control method, equipment and system of Internet of things equipment
CN112859634A (en) Intelligent service method based on intelligent home system and intelligent home system
CN106843882B (en) Information processing method and device and information processing system
CN116165931A (en) Control method and system of intelligent equipment, device, storage medium and electronic device
CN111596557B (en) Device control method, device, electronic device and computer-readable storage medium
CN114137841B (en) Control method, equipment and system of Internet of things equipment
KR20220160755A (en) System of multi family housing management and method thereof
KR100529950B1 (en) Air conditioner system and the method of the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant