CN110265004B - Control method and device for target terminal in intelligent home operating system - Google Patents


Info

Publication number
CN110265004B
Authority
CN
China
Prior art keywords
target
receiver
determining
sender
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910568750.1A
Other languages
Chinese (zh)
Other versions
CN110265004A (en)
Inventor
梁海山
赵峰
徐志方
刘超
尹德帅
马成东
李莹莹
Current Assignee
Qingdao Haier Technology Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN201910568750.1A priority Critical patent/CN110265004B/en
Publication of CN110265004A publication Critical patent/CN110265004A/en
Application granted granted Critical
Publication of CN110265004B publication Critical patent/CN110265004B/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26 Speech to text systems
    • G10L2015/088 Word spotting
    • G10L2015/223 Execution procedure of a spoken command
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803 Home automation networks
    • H04L12/2816 Controlling appliance services of a home automation network by calling their functionalities
    • H04L12/282 Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home

Abstract

The invention provides a method and a device for controlling a target terminal in an intelligent home operating system. The method comprises the following steps: determining a sender of a target voice instruction according to pre-stored voice information; determining a receiver of the target voice instruction according to the voice content contained in the target voice instruction; and, when it is determined that the sender and the receiver are located in different areas, controlling a target terminal in the target area where the receiver is located to execute a predetermined operation. The invention solves the problem that users' voice information cannot be effectively transmitted between different areas, and achieves the effect of effectively transmitting users' voice information across different areas.

Description

Control method and device for target terminal in intelligent home operating system
Technical Field
The invention relates to the field of communication, in particular to a method and a device for controlling a target terminal in an intelligent home operating system.
Background
When playing a game or watching a video, a user often chooses to wear headphones or to use an audio device for a better audio-visual experience. In such scenarios, however, the user cannot effectively hear the voices of users in other rooms. For example, when a user plays a game wearing headphones (or with the sound turned up) and another user speaks to him from a different room, he may not hear that user's voice over the game or the loud video. Consider a child wearing headphones and playing a game in the study while the mother, in the living room, calls the child to come and eat: because of the headphones, the child cannot hear her, which greatly affects normal communication among family members.
In the related art, voice information cannot be effectively transmitted between users in different areas.
Disclosure of Invention
The embodiment of the invention provides a control method and a control device for a target terminal in an intelligent home operating system, which are used for at least solving the problem that user voice information in different areas cannot be effectively transmitted in the related technology.
According to an embodiment of the present invention, a method for controlling a target terminal in an intelligent home operating system is provided, including: determining a sender of a target voice instruction according to pre-stored voice information; determining a receiver of the target voice instruction according to the voice content contained in the target voice instruction; and when the sender and the receiver are determined to be positioned in different areas, controlling a target terminal in a target area where the receiver is positioned to execute a preset operation.
Optionally, before determining the sender of the target voice instruction according to the pre-stored voice information, the method further includes: converting the target voice instruction into text information; performing word segmentation processing on the text information; and, when it is determined that the text information contains a specific keyword matching preset text information, triggering the step of determining the sender of the target voice instruction according to the pre-stored voice information.
Optionally, determining the receiver of the target voice instruction according to the voice content included in the target voice instruction includes: extracting a target name contained in the target voice instruction; and determining, in a pre-established relationship table, the target receiver corresponding to the target name, and determining the target receiver as the receiver of the target voice instruction.
Optionally, when it is determined that the sender and the receiver are located in different areas, before controlling the target terminal in the target area where the receiver is located to perform a predetermined operation, the method further includes: determining the position of the sender according to the position of the device that monitors the target voice instruction; determining the position of the receiver according to the position of the device to which the receiver is logged in; and judging whether the sender and the receiver are in the same area according to the position of the sender and the position of the receiver.
Optionally, controlling the target terminal in the target area where the receiver is located to perform a predetermined operation includes at least one of: controlling a light-emitting device in the target area where the receiver is located to emit light in a preset mode; controlling the device to which the receiver is logged in to vibrate at a preset frequency; and controlling the device to which the receiver is logged in to display an image with a preset pattern.
According to another embodiment of the present invention, a control apparatus for a target terminal in an intelligent home operating system is provided, including: the first determining module is used for determining a sender of the target voice instruction according to pre-stored voice information; the second determining module is used for determining a receiver of the target voice instruction according to the voice content contained in the target voice instruction; and the control module is used for controlling a target terminal in a target area where the receiver is located to execute a preset operation when the sender and the receiver are determined to be located in different areas.
Optionally, the apparatus further comprises: a conversion module, used for converting the target voice instruction into text information before the sender of the target voice instruction is determined according to the pre-stored voice information; a processing module, used for performing word segmentation processing on the text information; and a triggering module, used for triggering the step of determining the sender of the target voice instruction according to the pre-stored voice information when it is determined that the text information contains a specific keyword matching the preset text information.
Optionally, the second determining module includes: an extracting unit, used for extracting a target name contained in the target voice instruction; and a determining unit, used for determining, in a pre-established relationship table, the target receiver corresponding to the target name and determining the target receiver as the receiver of the target voice instruction.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, the sender of the target voice instruction is determined according to the pre-stored voice information; determining a receiver of the target voice instruction according to the voice content contained in the target voice instruction; and controlling a target terminal in a target area where the receiver is located to execute a predetermined operation when the sender and the receiver are determined to be located in different areas. Therefore, the problem that the user voice information in different areas cannot be effectively transmitted can be solved, and the effect of effectively transmitting the user voice information in different areas is achieved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of an intelligent home terminal of a method for controlling a target terminal in an intelligent home operating system according to an embodiment of the present invention;
fig. 2 is a flowchart of control of a target terminal in the smart home operating system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an implementation scenario according to an embodiment of the invention;
FIG. 4 is a flowchart of an overall scheme according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an intelligent home control APP according to an alternative embodiment of the present invention;
fig. 6 is a block diagram of a control device of a target terminal in an intelligent home operating system according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
The method provided by the first embodiment of the application can be executed in an intelligent home terminal, a mobile terminal, a computer terminal or a similar operation device. Taking an example of an operation on an intelligent home terminal, fig. 1 is a hardware structure block diagram of an intelligent home terminal of a control method of a target terminal in an intelligent home operating system according to an embodiment of the present invention. As shown in fig. 1, the smart home terminal may include one or more processors 102 (only one is shown in fig. 1) (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, and optionally, the smart home terminal may further include a transmission device 106 for communication function and an input/output device 108. It can be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the smart home terminal. For example, the smart home terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be configured to store a computer program, for example, a software program and a module of an application software, such as a computer program corresponding to a control method of a target terminal in an intelligent home operating system in an embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, that is, implements the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located from the processor 102, and these remote memories may be connected to the smart home terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The specific example of the network may include a wireless network provided by a communication provider of the smart home terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In this embodiment, a method for controlling a target terminal in an intelligent home operating system running on an intelligent home terminal is provided, and fig. 2 is a flowchart for controlling the target terminal in the intelligent home operating system according to an embodiment of the present invention, and as shown in fig. 2, the flowchart includes the following steps:
step S202, determining a sender of a target voice instruction according to pre-stored voice information;
step S204, determining a receiver of the target voice instruction according to the voice content contained in the target voice instruction;
step S206, when the sender and the receiver are determined to be in different areas, the target terminal in the target area where the receiver is located is controlled to execute the predetermined operation.
Through the steps, the sender of the target voice command is determined according to the pre-stored voice information; determining a receiver of the target voice instruction according to the voice content contained in the target voice instruction; and controlling a target terminal in a target area where the receiver is located to execute a predetermined operation when the sender and the receiver are determined to be located in different areas. Therefore, the problem that the user voice information in different areas cannot be effectively transmitted can be solved, and the effect of effectively transmitting the user voice information in different areas is achieved.
Alternatively, the execution subject of the above steps may be a terminal or the like, but is not limited thereto.
In an alternative embodiment, before determining the sender of the target voice instruction based on the pre-stored voice information, the method further comprises: converting the target voice instruction into text information; performing word segmentation processing on the text information; and, when it is determined that the text information contains a specific keyword matching preset text information, triggering the step of determining the sender of the target voice instruction according to the pre-stored voice information. In this embodiment, a smart speaker monitors the user's voice instruction; after the voice instruction is received, it is converted into text information and word segmentation is performed on the text, and when the text contains a preset keyword, the sender of the voice instruction is determined according to the pre-stored voice information. For example, the smart home control APP guides the user to input voice information so that the user's voiceprint information can be extracted and uploaded to the cloud control platform, which establishes a binding relationship between the voiceprint features and the user account. The audio acquisition device monitors the user's voice, performs semantic recognition on the collected voice instruction, and judges, based on the recognition result, whether the voice carries a specific keyword, so as to determine whether the collected voice is of the specified type; the specified type of voice may be a call to eat, to sleep, to come over, to leave, and the like. When a voice of this instruction type is detected, the sender of the voice is determined according to the binding relationship between voiceprints and user accounts established on the cloud control platform.
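As a minimal sketch of the sender-identification step described above, the cloud platform can match an extracted voiceprint feature against its stored voiceprint-to-account bindings. All feature vectors, the distance threshold, and the account names here are illustrative assumptions, not data from the patent:

```python
# Hypothetical voiceprint matching: at enrollment a feature vector is bound
# to a user account; at query time the closest stored voiceprint (within a
# distance threshold) identifies the sender of the voice instruction.
import math

ENROLLED = {}  # account -> voiceprint feature vector (assumed representation)

def enroll(account, features):
    """Store the binding between a voiceprint feature vector and an account."""
    ENROLLED[account] = features

def identify_sender(features, max_distance=1.0):
    """Return the account whose stored voiceprint is closest to `features`,
    or None if no stored voiceprint is close enough."""
    best, best_d = None, float("inf")
    for account, stored in ENROLLED.items():
        d = math.dist(stored, features)  # Euclidean distance as a stand-in
        if d < best_d:
            best, best_d = account, d
    return best if best_d <= max_distance else None

enroll("account_A", [0.1, 0.9, 0.3])
enroll("account_B", [0.8, 0.2, 0.5])
```

A real system would extract feature vectors with a speaker-verification model; the nearest-neighbour comparison above only illustrates the lookup against the stored bindings.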
In an alternative embodiment, determining the receiver of the target voice instruction based on the voice content contained in the target voice instruction comprises: extracting a target name contained in the target voice instruction; determining, in a pre-established relationship table, the target receiver corresponding to the target name; and determining the target receiver as the receiver of the target voice instruction. In this embodiment, the cloud control platform may determine the relationships between the user accounts of family members according to the relationship list stored on the platform. For example, the sender issues the voice instruction "Mom, come over here". First, account A is determined to be the sender according to the voiceprint features, and the target name contained in the voice is "mom". According to the pre-established relationship list, "mom" corresponds to account B, so the receiver of the voice instruction is determined to be account B.
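The name-extraction and table lookup just described can be sketched as follows. The relation table contents and account names are assumptions made for illustration:

```python
# Hypothetical receiver resolution: a target name/title is extracted from the
# transcribed command and mapped to an account via a pre-established table.
RELATION_TABLE = {"mom": "account_B", "dad": "account_A", "son": "account_C"}

def extract_target_name(text, known_names=RELATION_TABLE):
    """Return the first known name/title appearing in the command text."""
    for token in text.lower().replace(",", " ").split():
        if token in known_names:
            return token
    return None

def resolve_receiver(text):
    """Map the extracted target name to the receiver's account, if any."""
    name = extract_target_name(text)
    return RELATION_TABLE.get(name) if name else None
```

For instance, `resolve_receiver("Mom, come over")` would return `"account_B"` under the assumed table.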
In an optional embodiment, before controlling the target terminal in the target area where the receiver is located to perform a predetermined operation when it is determined that the sender and the receiver are located in different areas, the method further includes: determining the position of the sender according to the position of the device that monitors the target voice instruction; determining the position of the receiver according to the position of the device to which the receiver is logged in; and judging whether the sender and the receiver are in the same area according to the position of the sender and the position of the receiver. In this embodiment, the positions of the sender and the receiver may be determined from the positions of the devices they are using: the position of the sender can be determined from the position of the device that picked up the voice instruction, and, since the user logs in when using the smart home system, the position of the receiver can be determined from the position of the device to which the receiver is logged in. Whether the sender and the receiver are in the same area is then judged from these device positions.
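The same-area check above reduces to comparing the room areas of two devices. The device-to-area mapping below is an illustrative assumption:

```python
# Hypothetical same-area check: the sender's area comes from the device that
# captured the voice, the receiver's area from the device they are logged in
# to; the two areas are then compared.
DEVICE_AREA = {
    "speaker_livingroom": "living_room",
    "tv_livingroom": "living_room",
    "pc_study": "study",
}

def locate(device_id):
    """Return the room area a device is placed in, per the uploaded layout."""
    return DEVICE_AREA.get(device_id)

def in_same_area(capture_device, receiver_device):
    """True when the sender's and receiver's devices sit in the same area."""
    a, b = locate(capture_device), locate(receiver_device)
    return a is not None and a == b
```

Only when this check returns false does the platform go on to trigger the predetermined operation in the receiver's area.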
In an optional embodiment, controlling the target terminal in the target area where the receiver is located to perform a predetermined operation includes at least one of the following: controlling a light-emitting device in the target area where the receiver is located to emit light in a preset mode; controlling the device to which the receiver is logged in to vibrate at a preset frequency; and controlling the device to which the receiver is logged in to display an image with a preset pattern. In this embodiment, if the sender and the receiver of the voice instruction are not in the same area, the lamp in the area where the receiver is located is controlled to flash, or the electronic device to which the receiver is logged in, such as a mobile phone or a computer, is made to vibrate or to display a corresponding pattern on its screen, so as to remind the receiver of the voice instruction.
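A sketch of dispatching these predetermined operations follows. The action functions are placeholders standing in for real device commands; their string return values are purely illustrative:

```python
# Hypothetical reminder dispatch: when sender and receiver are in different
# areas, one or more reminder actions are issued in the receiver's area.
def flash_light(area):
    """Placeholder: make a light-emitting device in the area flash."""
    return f"light:{area}:flash"

def vibrate_device(device_id):
    """Placeholder: vibrate the device the receiver is logged in to."""
    return f"vibrate:{device_id}"

def show_pattern(device_id):
    """Placeholder: display a preset image/pattern on the device screen."""
    return f"display:{device_id}:pattern"

def notify_receiver(area, device_id, same_area):
    """Issue the predetermined operations; do nothing if both users share an area."""
    if same_area:
        return []
    return [flash_light(area), vibrate_device(device_id), show_pattern(device_id)]
```

In practice any subset of the three operations could be configured; the list here simply triggers all of them.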
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The application is illustrated below by means of a specific example.
Fig. 3 is a schematic diagram of an implementation scenario according to an embodiment of the present invention. Fig. 4 is a flowchart of the overall scheme according to the embodiment of the present invention, which mainly includes the following steps:
step 1, uploading the layout condition of each intelligent home in a room, and uploading the relationship between user accounts.
In this step, the user starts the smart home control application APP through a mobile terminal and logs in to a personal account; through the APP the user uploads the layout of each smart home device in the house, and may also upload the relationships among the user accounts of the family members. Specifically, when a user uses the smart home control application APP for the first time, the APP guides the user through account registration, and after registration is completed, the cloud control platform of the smart home devices stores the user's account information in its database.
When the user subsequently logs in to the user account through the smart home control application APP to add household smart home devices, the cloud control platform automatically establishes and stores the binding relationship between the user account and the newly added smart home device, so that it can later determine, from this stored association, that the user has control authority over that device.
In this scheme, the cloud control platform can determine the other user accounts that have a binding relationship with the smart television by searching the binding relationship table corresponding to the smart television.
Table 1 binding relation table for smart tv
(Table 1 is provided as an image in the original publication.)
In this scheme, when controlling smart home devices through the smart home control application APP, the user can also upload the layout of the smart home devices in the house.
For example, fig. 5 is a schematic structural diagram of the smart home control APP according to an alternative embodiment of the present invention, consisting of fig. 5a and fig. 5b. The smart home control APP may show the room-area diagram of the home shown in fig. 5a. When the user wants to add a new smart home device in the bedroom area, the user may tap the "master bedroom" button, whereupon the APP switches to displaying the existing smart home devices in the master bedroom area, as shown in fig. 5b. On this interface the user can manage (for example, delete or start) those devices; in addition, an "add new device" button is displayed, and after tapping it the user can add a new smart home device in the master bedroom area. The user can also upload the floor plan of the house through the smart home control APP, and the distribution of the smart home devices in the house is displayed and set on the APP according to the floor plan.
It should be noted here that the cloud control platform may determine which users belong to the same family according to the binding relationship tables of different devices: when the same group of users appears simultaneously in the binding relationship tables of multiple smart home devices, the cloud control platform may determine that these users belong to the same family, store their user accounts in the same family group, and treat them as members of that family group.
In order to ensure accuracy, in one embodiment the cloud control platform may stipulate that only when multiple users appear simultaneously in the binding relationship tables of at least five smart home devices are they determined to be users of the same family. For example, assuming that user A, user B, and user C appear simultaneously in the binding relationship tables of smart television a, air conditioner b, humidifier c, smart speaker d, and sweeping robot e, the cloud control platform may determine that user A, user B, and user C belong to the same family.
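The family-group inference just described, with its five-device threshold, can be sketched as a count over the binding tables. The tables below mirror the example in the text (users A, B, C bound to five shared devices); all identifiers are illustrative:

```python
# Hypothetical family inference: users co-present in the binding relationship
# tables of at least `threshold` devices are treated as one family.
BINDING_TABLES = {
    "tv_a":         {"A", "B", "C"},
    "aircon_b":     {"A", "B", "C"},
    "humidifier_c": {"A", "B", "C"},
    "speaker_d":    {"A", "B", "C"},
    "robot_e":      {"A", "B", "C"},
    "lamp_f":       {"A", "D"},
}

def shared_device_count(u1, u2, tables=BINDING_TABLES):
    """Number of devices whose binding table contains both users."""
    return sum(1 for users in tables.values() if {u1, u2} <= users)

def same_family(u1, u2, threshold=5):
    """Apply the at-least-five-shared-devices rule from the description."""
    return shared_device_count(u1, u2) >= threshold
```

User D, bound only to the lamp alongside user A, would not be grouped with A under this rule.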
Therefore, in the scheme, the user can also upload the relationship between the family members in the same family group through the intelligent household control APP.
For example, three family member accounts, user A, user B, and user C, are stored in the family group on the cloud control platform, and all three family members can log in to the smart home control APP through their personal accounts and upload their relationships with the other two users. If user A and user B are a couple and user C is their child, user A may upload the relationship between user A and user B through the smart home control APP, and the cloud control platform creates and stores a user account relationship table, such as the family member relationship table shown in Table 2 below; the cloud control platform can subsequently determine the relationships between the user accounts of the family members according to this table.
Table 2 family group membership table
(Table 2 is provided as an image in the original publication.)
And 2, extracting the voiceprint information of the user and establishing a binding relationship between the voiceprint characteristics and the user account. The intelligent home control APP guides a user to input voice information so as to extract user voiceprint information, and uploads the user voiceprint information to the cloud control platform, so that the cloud control platform establishes a binding relationship between voiceprint characteristics and a user account.
And 3, performing semantic recognition on the collected user voice instruction by the audio collection equipment, and judging whether the collected voice is the specified type voice.
The audio acquisition equipment monitors the voice of a user, performs semantic recognition on the acquired voice instruction of the user, and judges whether the voice carries a specific keyword or not based on a semantic recognition result so as to determine whether the acquired voice is a specified type of voice or not;
if yes, executing step 4; and if the judgment result is negative, continuing to execute the step 3.
The audio acquisition device may be a smart speaker, or another smart home device with an audio acquisition function, such as a smart television, smart refrigerator, smart air conditioner, or even a smart humidifier equipped with an audio acquisition module. In order to ensure that the audio acquisition device can accurately capture the voice of the sending user, in this scheme the cloud control platform can compare the Global Positioning System (GPS) position information reported by a device carried by the user, such as a smartphone, with the position information of each room, so as to determine the room where the user is currently located, and then monitor the user's voice through the audio acquisition device located in the same area as the user.
For example, if the cloud control platform determines that the user's current location is the living room and, according to the pre-stored distribution of smart home devices in the house, the device in the living room with an audio acquisition function is a smart speaker, then the user's voice can subsequently be monitored through that smart speaker.
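The device-selection step above can be sketched as a two-stage lookup: map the user's position to a room, then pick an audio-capable device registered in that room. The room mapping below is a toy placeholder (a real system would resolve GPS or indoor positioning against the uploaded floor plan), and all device records are assumptions:

```python
# Hypothetical capture-device selection from the stored room layout.
ROOM_DEVICES = {
    "living_room": [{"id": "lamp_1", "audio_capture": False},
                    {"id": "speaker_1", "audio_capture": True}],
    "study":       [{"id": "tv_2", "audio_capture": True}],
}

def room_of(position, rooms=("living_room", "study")):
    """Toy stand-in: a real system maps a GPS/indoor position to a room."""
    return position if position in rooms else None

def pick_capture_device(position):
    """Pick an audio-capable device in the room the user occupies, if any."""
    room = room_of(position)
    if room is None:
        return None
    for dev in ROOM_DEVICES.get(room, []):
        if dev["audio_capture"]:
            return dev["id"]
    return None
```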
For convenience of description, the following takes a smart speaker as the audio acquisition device in the same area as the user. After the smart speaker collects the voice input by the user, Automatic Speech Recognition (ASR) is first performed on the voice; its purpose is mainly to convert the vocabulary content of human speech into computer-readable input, such as keys, binary codes, or character sequences. The ASR process is mainly implemented using a decoder provided in the intelligent device to recognize the user's speech as text.
The smart speaker can then perform word segmentation on the user's voice converted into text form, and judge whether the collected user voice is of the specified type by checking whether the text contains words matching the preset text.
In general, when the collected user speech contains phrases such as "come over", "help", or "come XXX", the user's speech usually has clear directivity, i.e., the user wants the speech to be accurately conveyed to a certain user. Therefore, in this scheme, the preset keywords stored in advance on the cloud control platform may include words such as "come", "come over", "help", and "come XXX". The smart speaker can judge whether the received user voice is of the specified type by checking whether the user voice in text form contains these words.
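The specified-type check of step 3 can be sketched as a keyword match over the ASR transcript. A real deployment would first segment Chinese text with a tokenizer such as jieba; the keyword list and the `is_specified_type` function below are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical keyword list mirroring the examples in the text.
PRESET_KEYWORDS = ["come over", "come here", "help"]

def is_specified_type(transcript: str) -> bool:
    """Step 3: decide whether the ASR transcript is 'specified-type' speech,
    i.e. speech with clear directivity toward another user."""
    text = transcript.lower()
    return any(keyword in text for keyword in PRESET_KEYWORDS)

print(is_specified_type("Son, come over for dinner"))  # True
print(is_specified_type("Turn on the light"))          # False
```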
Step 4, the smart speaker uploads the user voice determined to be of the specified type to the cloud control platform, so that the cloud control platform determines the sender user and the receiver user of the voice information;
specifically, the cloud control platform can extract voiceprint features from the voice and determine the sender user corresponding to the voice by comparing them with pre-stored voiceprint features.
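Voiceprint comparison is commonly implemented by scoring the similarity between a speaker embedding extracted from the incoming voice and embeddings enrolled in advance. The sketch below assumes such embeddings already exist and uses cosine similarity with a hypothetical acceptance threshold; the patent itself does not specify the comparison method.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify_sender(query_embedding, enrolled, threshold=0.7):
    """Return the enrolled user whose stored voiceprint embedding is most
    similar to the query, or None if no score clears the threshold."""
    best_user, best_score = None, threshold
    for user, embedding in enrolled.items():
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_user, best_score = user, score
    return best_user

enrolled = {"user A": [0.9, 0.1, 0.3], "user B": [0.1, 0.8, 0.5]}
print(identify_sender([0.88, 0.12, 0.28], enrolled))  # user A
```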
In addition, it should be noted that in real life, users in different relationships often use different titles to address each other. For example, the titles between a husband and wife are generally "husband" and "wife", while the titles between parents and children are generally "dad", "mom", and "son/daughter". The cloud control platform may therefore preset the titles corresponding to different user relationships, such as in the membership-title correspondence table shown in Table 3 below. The cloud control platform can then determine the receiver user of the voice based on the pre-stored family member relation table, according to the title contained in the voice and the sender user of the voice.
Table 3: Membership-title correspondence table

Membership relationship          Title
Father-son / father-daughter     "Father", "son/daughter"
Mother-son / mother-daughter     "Mother", "son/daughter"
Husband-wife                     "Husband", "wife"
For example, if the cloud control platform determines that the sender user of a voice is user A and the title contained in the voice is "son", the membership relationship corresponding to that title is "father-son" or "mother-son". By looking up Table 2, it can be determined that the relationship between user A and user C is mother-son, and the cloud control platform can then determine that the receiver user of the voice is user C.
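The title-to-receiver lookup can be sketched as two table lookups; the dictionary contents below are hypothetical stand-ins for the pre-stored relation tables (Tables 2 and 3), and `find_receiver` is an illustrative helper.

```python
# Title -> candidate membership relationships (mirrors Table 3).
TITLE_TO_RELATIONS = {
    "son": ["father-son", "mother-son"],
    "husband": ["husband-wife"],
}
# (sender, relationship) -> receiver (stands in for the family relation table).
FAMILY_RELATIONS = {
    ("user A", "mother-son"): "user C",
    ("user B", "father-son"): "user C",
}

def find_receiver(sender, title):
    """Map the title contained in the speech, together with the identified
    sender, to the receiver user via the pre-stored relation tables."""
    for relation in TITLE_TO_RELATIONS.get(title, []):
        receiver = FAMILY_RELATIONS.get((sender, relation))
        if receiver is not None:
            return receiver
    return None

print(find_receiver("user A", "son"))  # user C
```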
Step 5, when the cloud control platform determines that the receiver user and the sender user are in different rooms, it sends a control instruction to the lighting device in the room where the receiver user is currently located.
The cloud control platform determines the rooms where the receiver user and the sender user are respectively located. When it judges that the two are currently in different rooms, it can send a control instruction to the lighting device in the room where the receiver user is currently located, so that the lighting device in that room adjusts itself according to the received control instruction.
Specifically, the cloud control platform may compare the user's position with the position of each room based on the GPS position information of a device, such as a smartphone, carried by the user, so as to determine the room where each user is currently located and further judge whether the receiver user and the sender user are currently in the same room.
It should be noted here that the user may also preset light control modes corresponding to different semantics according to personal usage habits. In actual use, the cloud control platform can then perform semantic recognition on the received voice to determine the semantics it expresses, and send a control instruction corresponding to those semantics to the lighting device.
For example, suppose the light control modes corresponding to different semantics are pre-stored on the cloud control platform as the light control table shown in Table 4 below, and the collected voice is "Son, come and eat". If, by executing steps 3-5, the cloud control platform determines that the receiver user corresponding to the voice is user C and that the room where user C is currently located is the study, the cloud control platform can send a control instruction to the lighting device of the study, so that the lighting device flashes white light 3 times in succession, prompting user C to come and eat.
Table 4: Light control table

Semantics          Lighting device control mode
Eating             White light flashes 3 times in succession
Sleeping           Red light flashes 2 times in succession
Asking for help    Green light flashes 3 times in succession
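Step 5 together with the semantic lookup of Table 4 can be sketched as follows; the command encoding (a color plus a flash count) and the helper names are illustrative assumptions rather than the patent's actual protocol.

```python
# Table 4 as a lookup from recognized semantics to a light-control command.
LIGHT_CONTROL_TABLE = {
    "eat": ("white", 3),
    "sleep": ("red", 2),
    "help": ("green", 3),
}

def build_light_command(semantics, room):
    """Assemble a control instruction for the lighting device in `room`."""
    color, flashes = LIGHT_CONTROL_TABLE[semantics]
    return {"room": room, "color": color, "flashes": flashes}

def notify_receiver(semantics, sender_room, receiver_room):
    """Step 5: only signal through the lights when the sender and receiver
    are currently in different rooms."""
    if sender_room == receiver_room:
        return None
    return build_light_command(semantics, receiver_room)

print(notify_receiver("eat", "living room", "study"))
# {'room': 'study', 'color': 'white', 'flashes': 3}
```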
Example 2
This embodiment also provides a control apparatus for a target terminal in an intelligent home operating system, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 6 is a block diagram of a control apparatus of a target terminal in an intelligent home operating system according to an embodiment of the present invention, and as shown in fig. 6, the apparatus includes: a first determining module 62, configured to determine a sender of the target voice instruction according to pre-stored voice information; a second determining module 64, configured to determine a receiver of the target voice instruction according to the voice content included in the target voice instruction; a control module 66, configured to control a target terminal in a target area where the receiver is located to perform a predetermined operation when it is determined that the sender and the receiver are located in different areas.
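The three-module apparatus of Fig. 6 can be sketched as a small composition class. The injected callables stand in for the first determining module (62), second determining module (64), and control module (66); the `locate` helper is an assumption covering the area comparison, which Fig. 6 does not assign to a named module.

```python
class TargetTerminalController:
    """Sketch of the apparatus in Fig. 6, wiring the three modules together."""

    def __init__(self, determine_sender, determine_receiver, locate, control):
        self.determine_sender = determine_sender      # module 62
        self.determine_receiver = determine_receiver  # module 64
        self.locate = locate                          # assumed area lookup
        self.control = control                        # module 66

    def handle(self, voice_instruction):
        """Run the full pipeline; act only when sender and receiver differ in area."""
        sender = self.determine_sender(voice_instruction)
        receiver = self.determine_receiver(voice_instruction)
        if self.locate(sender) != self.locate(receiver):
            return self.control(receiver)
        return None

controller = TargetTerminalController(
    determine_sender=lambda v: "user A",
    determine_receiver=lambda v: "user C",
    locate=lambda u: {"user A": "living room", "user C": "study"}[u],
    control=lambda receiver: f"notify {receiver}",
)
print(controller.handle("Son, come and eat"))  # notify user C
```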
In an alternative embodiment, the apparatus further comprises: the conversion module is used for converting the target voice instruction into text information before determining a sender of the target voice instruction according to the pre-stored voice information; the processing module is used for carrying out word segmentation processing on the text information; and the triggering module is used for triggering and executing a sender for determining a target voice instruction according to the pre-stored voice information under the condition that the text information is determined to have the specific keyword matched with the preset text information.
In an alternative embodiment, the second determining module 64 includes: an extracting unit, configured to extract a target name contained in the target voice instruction; and a determining unit, configured to determine the target receiver corresponding to the target name in a pre-established relation table and determine the target receiver as the receiver of the target voice instruction.
In an optional embodiment, before controlling the target terminal in the target area where the receiver is located to perform a predetermined operation when it is determined that the sender and the receiver are located in different areas, the apparatus is further configured to: determine the position of the sender according to the position of the device that monitored the target voice instruction; determine the position of the receiver according to the position of the device on which the receiver is logged in; and judge whether the sender and the receiver are in the same area according to the position of the sender and the position of the receiver.
In an alternative embodiment, the control module 66 is further configured to perform at least one of the following: controlling the light-emitting device in the target area where the receiver is located to emit light in a predetermined manner; controlling the device on which the receiver is logged in to vibrate at a predetermined frequency; and controlling the device on which the receiver is logged in to display an image with a predetermined pattern.
It should be noted that the above modules may be implemented by software or hardware; for the latter, this may be achieved in, but is not limited to, the following ways: the modules are all located in the same processor, or the modules are located, in any combination, in different processors.
Example 3
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
S1, determining the sender of the target voice instruction according to the pre-stored voice information;
S2, determining the receiver of the target voice instruction according to the voice content contained in the target voice instruction;
S3, when it is determined that the sender and the receiver are located in different areas, controlling the target terminal in the target area where the receiver is located to perform the predetermined operation.
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing a computer program, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, determining the sender of the target voice instruction according to the pre-stored voice information;
S2, determining the receiver of the target voice instruction according to the voice content contained in the target voice instruction;
S3, when it is determined that the sender and the receiver are located in different areas, controlling the target terminal in the target area where the receiver is located to perform the predetermined operation.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described herein. They may also be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description covers only preferred embodiments of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A control method for a target terminal in an intelligent home operating system is characterized by comprising the following steps:
determining a sender of a target voice instruction according to pre-stored voice information;
determining a receiver of the target voice instruction according to the voice content contained in the target voice instruction;
and when the sender and the receiver are determined to be positioned in different areas, controlling a target terminal in a target area where the receiver is positioned to execute a preset operation.
2. The method of claim 1, wherein prior to determining a sender of the target voice instruction based on pre-stored voice information, the method further comprises:
converting the target voice instruction into text information;
performing word segmentation processing on the text information;
and under the condition that the text information is determined to have the specific keywords matched with the preset text information, triggering and executing a sender for determining the target voice instruction according to the pre-stored voice information.
3. The method of claim 1, wherein determining a recipient of the target voice command based on the voice content contained in the target voice command comprises:
extracting a target name contained in the target voice command;
and determining a target receiver corresponding to the target name in a pre-established relation table, and determining the target receiver as a receiver of the target voice instruction.
4. The method according to claim 1, wherein before controlling the target terminal in the target area where the receiver is located to perform a predetermined operation when it is determined that the sender and the receiver are located in different areas, the method further comprises:
determining the position of the sender according to the position of the device for monitoring the target voice command;
determining the position of the receiver according to the position of the device on which the receiver is logged in;
and judging whether the sender and the receiver are in the same area or not according to the position of the sender and the position of the receiver.
5. The method of claim 1, wherein controlling the target terminal in the target area where the receiving party is located to perform a predetermined operation comprises at least one of:
controlling a light-emitting device in a target area where the receiving party is located to emit light according to a preset mode;
controlling the device on which the receiver is logged in to vibrate at a predetermined frequency;
and controlling the device on which the receiver is logged in to display an image with a predetermined pattern.
6. A control apparatus for a target terminal in an intelligent home operating system, comprising:
the first determining module is used for determining a sender of the target voice instruction according to pre-stored voice information;
the second determining module is used for determining a receiver of the target voice instruction according to the voice content contained in the target voice instruction;
and the control module is used for controlling a target terminal in a target area where the receiver is located to execute a preset operation when the sender and the receiver are determined to be located in different areas.
7. The apparatus of claim 6, further comprising:
the conversion module is used for converting the target voice instruction into text information before determining a sender of the target voice instruction according to the pre-stored voice information;
the processing module is used for carrying out word segmentation processing on the text information;
and the triggering module is used for triggering and executing a sender for determining a target voice instruction according to the pre-stored voice information under the condition that the text information is determined to have the specific keyword matched with the preset text information.
8. The apparatus of claim 6, wherein the second determining module comprises:
the extracting unit is used for extracting a target name contained in the target voice instruction;
and the determining unit is used for determining the target receiver corresponding to the target name in a pre-established relation table and determining the target receiver as the receiver of the target voice instruction.
9. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 5 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 5.
CN201910568750.1A 2019-06-27 2019-06-27 Control method and device for target terminal in intelligent home operating system Active CN110265004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910568750.1A CN110265004B (en) 2019-06-27 2019-06-27 Control method and device for target terminal in intelligent home operating system

Publications (2)

Publication Number Publication Date
CN110265004A CN110265004A (en) 2019-09-20
CN110265004B true CN110265004B (en) 2021-11-02

Family

ID=67922494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910568750.1A Active CN110265004B (en) 2019-06-27 2019-06-27 Control method and device for target terminal in intelligent home operating system

Country Status (1)

Country Link
CN (1) CN110265004B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113448251A (en) * 2020-03-24 2021-09-28 海信集团有限公司 Position prompting method and system
CN111667820A (en) * 2020-06-22 2020-09-15 京东方科技集团股份有限公司 Communication method, communication device, electronic equipment and computer-readable storage medium
CN112270927A (en) * 2020-09-27 2021-01-26 青岛海尔空调器有限总公司 Intelligent interaction method based on environment adjusting equipment and intelligent interaction equipment
CN113515256B (en) * 2021-05-19 2024-02-02 云米互联科技(广东)有限公司 HomeMap-based visual control method and device for equipment sound source state
CN113485335A (en) * 2021-07-02 2021-10-08 追觅创新科技(苏州)有限公司 Voice instruction execution method and device, storage medium and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105206275A (en) * 2015-08-31 2015-12-30 小米科技有限责任公司 Device control method, apparatus and terminal
CN105699936A (en) * 2014-11-27 2016-06-22 青岛海尔智能技术研发有限公司 Smart home indoor positioning method
CN109164715A (en) * 2018-11-20 2019-01-08 深圳创维-Rgb电子有限公司 A kind of smart home system, control method, equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3672800B2 (en) * 2000-06-20 2005-07-20 シャープ株式会社 Voice input communication system
US9620124B2 (en) * 2014-02-28 2017-04-11 Comcast Cable Communications, Llc Voice enabled screen reader

Also Published As

Publication number Publication date
CN110265004A (en) 2019-09-20

Similar Documents

Publication Publication Date Title
CN110265004B (en) Control method and device for target terminal in intelligent home operating system
US10945008B2 (en) Smart terminal, interaction method based thereon, interaction system, and non-transitory computer readable storage medium
CN107426701B (en) Intelligent voice tour guide system and tour guide method thereof
CN109147802B (en) Playing speed adjusting method and device
US20130268956A1 (en) Real-time collection of audience feedback of a television or radio show
CN108922450B (en) Method and device for controlling automatic broadcasting of house speaking content in virtual three-dimensional space of house
US11122636B2 (en) Network-based user identification
CN108712681A (en) Smart television sound control method, smart television and readable storage medium storing program for executing
CN109741747B (en) Voice scene recognition method and device, voice control method and device and air conditioner
US10312874B2 (en) Volume control methods and devices, and multimedia playback control methods and devices
US10104524B2 (en) Communications via a receiving device network
CN107451242A (en) Data playback control method, system and computer-readable recording medium
CN108932947B (en) Voice control method and household appliance
US9264854B2 (en) Content selection system, content selection method and management apparatus
CN106453005A (en) Intelligent air-conditioning system having personalized speech broadcast function
CN110730330A (en) Sound processing method and related product
CN107171760B (en) A kind of radio playback method, cloud server and radio
CN107146609B (en) Switching method and device of playing resources and intelligent equipment
CN110418243B (en) Intelligent sound box control system and control method
CN106664432A (en) Multimedia information play methods and systems, acquisition equipment, standardized server
CN113518297A (en) Sound box interaction method, device and system and sound box
CN109407843A (en) Control method and device, the storage medium, electronic device of multimedia
CN111089396A (en) Method for controlling air conditioner and air conditioner
CN112837694A (en) Equipment awakening method and device, storage medium and electronic device
CN110797048B (en) Method and device for acquiring voice information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant