CN111953572A - Control method and device of intelligent service equipment - Google Patents

Control method and device of intelligent service equipment

Info

Publication number
CN111953572A
CN111953572A
Authority
CN
China
Prior art keywords
task instruction
target object
execute
matched
feature information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010787549.5A
Other languages
Chinese (zh)
Inventor
张瑜明
宋雨濛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Orion Star Technology Co Ltd
Original Assignee
Beijing Orion Star Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Orion Star Technology Co Ltd filed Critical Beijing Orion Star Technology Co Ltd
Priority to CN202010787549.5A priority Critical patent/CN111953572A/en
Publication of CN111953572A publication Critical patent/CN111953572A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803Home automation networks
    • H04L12/2816Controlling appliance services of a home automation network by calling their functionalities
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00Systems controlled by a computer
    • G05B15/02Systems controlled by a computer electric
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803Home automation networks
    • H04L12/2816Controlling appliance services of a home automation network by calling their functionalities
    • H04L12/282Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/20Pc systems
    • G05B2219/26Pc applications
    • G05B2219/2642Domotique, domestic, home control, automation, smart house
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Selective Calling Equipment (AREA)

Abstract

The disclosure relates to the field of the Internet of Things and discloses a control method and device for intelligent service equipment, used to avoid disordered execution of task instructions. The method comprises the following steps: after the target smart home device is controlled to execute a first task instruction issued by a first target object, if a second task instruction issued by a second target object for the target smart home device is received, feature information to be matched is extracted from multimedia data collected from the surrounding environment; if the feature information prestored for the first target object is successfully matched with the feature information to be matched, prompt information asking whether to execute the second task instruction is output to the first target object; and if an instruction permitting execution of the second task instruction is received, the target smart home device is controlled to execute the second task instruction. In this way, when the intelligent service robot receives task instructions issued by different target objects for the same smart home device, it can still execute them correctly, avoiding disordered task execution.

Description

Control method and device of intelligent service equipment
Technical Field
The disclosure relates to the field of internet of things, and in particular relates to a control method and device for intelligent service equipment.
Background
In the prior art, intelligent service devices have become part of everyday life. For example, target objects such as family members (a husband, a wife, a son, a daughter, and so on) can use an intelligent service device to control smart home devices.
As shown in fig. 1, the intelligent service device is connected to each smart home device. When a target object wants a device to perform some task, it issues a task instruction to the intelligent service device, which forwards the instruction to the corresponding smart home device to complete the corresponding operation.
However, the prior art provides no reasonable scheme for the case in which the intelligent service device receives task instructions from a plurality of target objects; as a result, the intelligent service device may execute task instructions in a disordered manner, degrading the user experience.
In view of this, a control method for an intelligent service device needs to be redesigned to overcome the above defect.
Disclosure of Invention
The embodiment of the disclosure provides a control method and a control device for intelligent service equipment, which enable an intelligent service robot to correctly execute a task instruction.
The specific technical scheme provided by the embodiment of the disclosure is as follows:
in a first aspect, a method for controlling an intelligent service device includes:
after controlling target intelligent household equipment to execute a first task instruction of a first target object, if a second task instruction from a second target object is received, acquiring multimedia data of a surrounding environment, wherein the second task instruction is specific to the target intelligent household equipment;
extracting feature information to be matched from the multimedia data, and if the feature information prestored corresponding to the first target object is successfully matched with the feature information to be matched, outputting prompt information for judging whether to execute the second task instruction;
and if an instruction allowing the second task instruction to be executed is received, controlling the target intelligent household equipment to execute the second task instruction.
Optionally, after receiving the second task instruction from the second target object, before acquiring the multimedia data of the surrounding environment, the method further includes:
extracting feature information of the second target object from the second task instruction;
and if the matching of the characteristic information of the second target object and the characteristic information of the first target object fails, determining that the first target object is different from the second target object.
Optionally, the method further includes:
and if the characteristic information of the second target object is successfully matched with the characteristic information of the first target object, controlling the target intelligent household equipment to execute the second task instruction.
Optionally, the obtaining of the multimedia data of the surrounding environment and the extracting of the feature information to be matched from the multimedia data include any one or a combination of the following operations:
acquiring video data or image data of the surrounding environment through a camera, and extracting facial features to be matched or human body features to be matched from the video data or the image data;
and acquiring audio data of the surrounding environment through a microphone, and extracting the voiceprint features to be matched from the audio data.
Optionally, outputting a prompt message indicating whether to execute the second task instruction includes any one of the following operations:
popping up a dialog box on a screen of the intelligent service equipment, and presenting prompt information whether to execute the second task instruction or not through the dialog box;
and broadcasting whether to execute the prompt information of the second task instruction or not in a voice broadcasting mode.
Optionally, the method further includes:
if the feature information of the first target object fails to be matched with the feature information to be matched, controlling the target intelligent household equipment to execute the second task instruction; or
And if the feature information of the first target object fails to be matched with the feature information to be matched, and the second target object passes identity verification, controlling the target intelligent household equipment to execute the second task instruction.
Optionally, after outputting a prompt message indicating whether to execute the second task instruction, the method further includes: and if no reply is received within the set time length, refusing to execute the second task instruction.
In a second aspect, a control apparatus for a smart service device, the apparatus comprising:
the first processing unit is used for controlling the target intelligent household equipment to execute a first task instruction of a first target object, and acquiring multimedia data of the surrounding environment if a second task instruction from a second target object is received, wherein the second task instruction is specific to the target intelligent household equipment;
the second processing unit is used for extracting the feature information to be matched from the multimedia data, and outputting prompt information for judging whether to execute the second task instruction or not if the feature information prestored corresponding to the first target object is successfully matched with the feature information to be matched;
and the control unit is used for controlling the target intelligent household equipment to execute the second task instruction if receiving an instruction of allowing the second task instruction to be executed.
Optionally, after receiving the second task instruction from the second target object, before acquiring the multimedia data of the surrounding environment, the first processing unit is further configured to:
extracting feature information of the second target object from the second task instruction;
and if the matching of the characteristic information of the second target object and the characteristic information of the first target object fails, determining that the first target object is different from the second target object.
Optionally, the control unit is further configured to:
and if the characteristic information of the second target object is successfully matched with the characteristic information of the first target object, controlling the target intelligent household equipment to execute the second task instruction.
Optionally, in acquiring the multimedia data of the surrounding environment and extracting the feature information to be matched from the multimedia data, the second processing unit executes any one or a combination of the following operations:
acquiring video data or image data of the surrounding environment through a camera, and extracting facial features to be matched or human body features to be matched from the video data or the image data;
and acquiring audio data of the surrounding environment through a microphone, and extracting the voiceprint features to be matched from the audio data.
Optionally, in outputting the prompt information indicating whether to execute the second task instruction, the second processing unit executes any one of the following operations:
popping up a dialog box on a screen of the intelligent service equipment, and presenting prompt information whether to execute the second task instruction or not through the dialog box;
and broadcasting whether to execute the prompt information of the second task instruction or not in a voice broadcasting mode.
Optionally, the control unit is further configured to:
if the feature information of the first target object fails to be matched with the feature information to be matched, controlling the target intelligent household equipment to execute the second task instruction; or
And if the feature information of the first target object fails to be matched with the feature information to be matched, and the second target object passes identity verification, controlling the target intelligent household equipment to execute the second task instruction.
Optionally, after outputting a prompt message indicating whether to execute the second task instruction, the control unit is further configured to:
and if no reply is received within the set time length, refusing to execute the second task instruction.
In a third aspect, an intelligent service robot, comprising:
a memory for storing executable instructions;
a processor for reading and executing executable instructions stored in the memory to perform the method according to any of the above first aspects.
In a fourth aspect, a computer storage medium having instructions which, when executed by a processor, enable the processor to perform the method of any of the first aspects as described above.
In the embodiment of the disclosure, after the target smart home device is controlled to execute a first task instruction issued by a first target object, if a second task instruction issued by a second target object for the same device is received, feature information to be matched is extracted from multimedia data collected from the surrounding environment. If the feature information prestored for the first target object is successfully matched with the feature information to be matched, prompt information asking whether to execute the second task instruction is output to the first target object, and if an instruction permitting execution of the second task instruction is received, the target smart home device is controlled to execute the second task instruction. In this way, when the intelligent service robot receives task instructions issued by different target objects for the same smart home device, it can still execute them correctly, effectively avoiding disordered task execution.
Drawings
FIG. 1 is a schematic diagram of an intelligent service robot interaction in the prior art;
FIG. 2 is a schematic diagram illustrating an intelligent service robot entering a user ID and feature information in an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating an intelligent service robot executing a first task instruction according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart illustrating the intelligent service robot executing a second task instruction according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating an embodiment of the disclosure in which a first user agrees to execute a second task instruction;
FIG. 6 is a schematic diagram of a logic architecture of an intelligent service robot in an embodiment of the present disclosure;
fig. 7 is a schematic diagram of an entity architecture of an intelligent service robot in an embodiment of the present disclosure.
Detailed Description
In order to enable the intelligent service equipment to correctly execute task instructions, in the embodiments of the disclosure, when the intelligent service equipment receives task instructions issued by two different target objects for the same smart home device, it first obtains multimedia data of the surrounding environment and matches the feature information extracted from that multimedia data against the feature information prestored for the target object that issued the first task instruction. If the matching succeeds, prompt information asking whether to execute the second task instruction is output to that target object, and if an instruction permitting execution is received, the target smart home device is controlled to execute the second task instruction.
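The dispatch logic just described can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the `ServiceRobot` class and all of its methods (equality-based feature matching, stubbed capture and prompting) are hypothetical stand-ins for the recognition, prompting, and device-control steps the disclosure describes.

```python
class ServiceRobot:
    """Hypothetical stand-in for the intelligent service robot; every
    method is a deliberate simplification of the disclosed steps."""

    def __init__(self, nearby_user, prompt_answer=True):
        self.nearby_user = nearby_user      # who the camera/microphone currently detects
        self.prompt_answer = prompt_answer  # first user's reply to the prompt
        self.log = []                       # record of actions, for inspection

    def matches(self, a, b):                # feature matching, stubbed as equality
        return a == b

    def capture_nearby_features(self):      # camera/microphone capture, stubbed
        return self.nearby_user

    def prompt_first_user(self, message):   # dialog box or voice broadcast, stubbed
        self.log.append(("prompt", message))
        return self.prompt_answer

    def execute(self, instruction):
        self.log.append(("execute", instruction))
        return True

    def refuse(self, instruction):
        self.log.append(("refuse", instruction))
        return False


def handle_second_instruction(robot, first_user, second_user, instruction):
    """Dispatch a second task instruction aimed at a smart home device that
    is already executing the first user's instruction."""
    if robot.matches(second_user, first_user):       # same user: execute directly
        return robot.execute(instruction)
    nearby = robot.capture_nearby_features()         # is the first user still nearby?
    if robot.matches(nearby, first_user):
        # First user is present: ask for permission before switching tasks.
        if robot.prompt_first_user("Execute the second task instruction?"):
            return robot.execute(instruction)
        return robot.refuse(instruction)             # denied, or no reply in time
    return robot.execute(instruction)                # first user has left: execute
```

For example, with the first user still nearby and answering yes, the second instruction is executed after a prompt; with the answer no, it is refused; with the first user absent, it is executed without any prompt.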
Preferred embodiments of the present disclosure will be described in further detail below with reference to the accompanying drawings.
In the embodiment of the present disclosure, to fit the actual application scenario, the intelligent service device is referred to below as an intelligent service robot.
In the embodiment of the present disclosure, after purchasing the intelligent service robot, a user may enter the feature information of each family member into it, where the feature information includes but is not limited to: voiceprint features, facial features, fingerprint features, human body features, and the like. The intelligent service robot stores each family member's feature information together with the corresponding user identification (ID) number, either locally or in a cloud server communicatively connected to the intelligent service robot.
For example, referring to FIG. 2, take a family of three members (a husband, a wife, and a son) as an example, where the husband's user ID is assumed to be ZF, the wife's is QZ, and the son's is EZ; assume further that the collected feature information comprises voiceprint features, facial features and human body features.
When the intelligent service robot is used for the first time, the family members can input respective feature information into the intelligent service robot, or the intelligent service robot actively collects the feature information of the family members. For example, the intelligent service robot collects voice information of each family member through a microphone to acquire voiceprint characteristics of each family member. For another example, the intelligent service robot acquires image information of each family member through the camera to acquire facial features and/or human body features of each family member. The intelligent service robot stores the characteristic information of each family member according to the following modes:
the user ID is ZF and corresponds to a voiceprint feature 1, a facial feature 1 and a human body feature 1 of the husband;
the user ID is QZ and corresponds to the vocal print characteristic 2, the facial characteristic 2 and the human body characteristic 2 of the wife;
the user ID is EZ, corresponding to the voiceprint 3, facial 3 and body 3 features of the son.
Furthermore, the intelligent service robot can be connected with each intelligent household device respectively.
Meanwhile, in order to fit the actual application scenario, in the following embodiments, the first target object is referred to as a first user, and the second target object is referred to as a second user.
After the preprocessing process, the family members can control each smart home device used in the family through the smart service robot, as shown in fig. 3, the specific control process is as follows:
step 300: the intelligent service robot receives a first task instruction sent by a first target object.
Specifically, it is assumed that the target smart home device to which the first task instruction is directed is a smart television, and task content of the first task instruction sent by the first target object indicates that the smart television is tuned to channel 1.
The first target object may be any user in the family members, or may be any non-family member, which is not limited herein.
Step 310: the intelligent service robot carries out identity verification on the first target object, and if the first target object passes the identity verification, the step 320 is executed; if the verification is not passed, step 330 is performed.
In performing the verification process of step 310, the following three verification methods can be adopted, but not limited to:
1. assume that the first task instruction is a voice instruction.
In the verification process, the voiceprint feature of the first user extracted from the first task instruction is obtained, and then the voiceprint feature is matched with feature information pre-recorded by family members, so that identity verification is completed.
Optionally, the voiceprint feature extraction and the matching of the voiceprint feature against the pre-entered feature information may be completed by the intelligent service robot itself; alternatively, the robot may send the relevant data to a cloud server, which completes the extraction and matching and returns the result. Details are not repeated here.
2. Assume that the first task instruction is a voice instruction.
In the verification process, voiceprint features of a first user extracted from a first task instruction are obtained, image information of the first user sending the first task instruction is shot through a camera, facial features or/and human body features of the first user are extracted from the image information, and then the voiceprint features, the facial features and/or the human body features are matched with feature information recorded in advance by family members on the basis of any one or any combination of the voiceprint features, the facial features and the human body features, so that identity verification is completed.
Optionally, the extraction of the voiceprint, facial and human body features, and the matching of these features against the feature information pre-entered by the family members, may be completed by the intelligent service robot itself; alternatively, the robot may send the relevant data to a cloud server, which completes the processing and returns the result. Details are not repeated here.
In a specific implementation process, when the image information of the first user who sends the first task instruction is shot through the camera, the sound source direction of the voice which sends the first task instruction can be determined firstly, and then the camera is controlled to shoot towards the sound source direction so as to shoot the image information of the first user.
3. Assume that the first task instruction is a text instruction or a touch instruction input on the screen of the intelligent service robot.
In the verification process, the image information of a first user inputting a first task instruction is shot through a camera, the facial features or/and the human body features of the first user are extracted from the image information, and the facial features or/and the human body features are matched with the feature information input in advance by family members on the basis of the facial features or/and the human body features, so that identity verification is completed.
In a specific implementation, whether two pieces of feature information match can be judged by computing the similarity between them: if the similarity between the first user's feature information and any pre-entered feature information is higher than a set threshold, the matching is determined to have succeeded; if the similarity with every pre-entered piece of feature information is not higher than the threshold, the matching is determined to have failed.
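The threshold comparison described above can be sketched as follows. The disclosure does not specify a similarity measure or a threshold value, so cosine similarity and the value 0.8 here are illustrative assumptions.

```python
import math

SIMILARITY_THRESHOLD = 0.8  # illustrative; the disclosure only says "a set threshold"

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors
    (an assumed measure; the disclosure leaves the metric unspecified)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_user(candidate, enrolled_features, threshold=SIMILARITY_THRESHOLD):
    """Return the user ID whose pre-entered feature is most similar to the
    candidate feature, or None if no similarity exceeds the threshold
    (i.e. the matching fails, as for a non-enrolled user)."""
    best_id, best_sim = None, threshold
    for user_id, feature in enrolled_features.items():
        sim = cosine_similarity(candidate, feature)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id
```

A candidate vector identical to an enrolled one matches that ID; a vector orthogonal to every enrolled one returns `None`, corresponding to the verification failure handled in step 330.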
Optionally, the process of extracting the facial features or/and the human body features, and the process of matching the facial features or/and the human body features with the feature information pre-entered by the family members may be completed by the intelligent service robot, or the intelligent robot sends the relevant data to the cloud server, and the cloud server completes the process and returns the result, which is not described herein again.
It should be noted that step 310 is optional; the user can choose whether to enable the identity verification function of the intelligent service robot according to the user's own needs. If the function is enabled, the verification process of step 310 is executed after the first task instruction sent by the first target object is received in step 300; if it is not enabled, the verification process is skipped and step 320 is executed directly.
Step 320: the intelligent service robot controls the target intelligent household equipment to execute a first task instruction of the first target object.
Specifically, the intelligent service robot sends the first task instruction to the target intelligent home equipment, and the target intelligent home equipment is made to execute task content in the first task instruction.
For example, assuming that the first user is a "husband," when the intelligent service robot determines that the received voiceprint feature and facial feature match successfully with the stored voiceprint feature and facial feature corresponding to the first user ID "ZF," the intelligent service robot sends the task content in the first task instruction to the intelligent television, and the intelligent television is tuned to channel 1.
Step 330: the intelligent service robot refuses to execute the first task instruction.
For example, assume the first user is a grandmother whose feature information and user ID were never entered into the intelligent service robot by the family members. In that case, matching the received voiceprint and facial features against the pre-entered ones fails, and the robot refuses to execute the first task instruction.
Based on the above embodiment, referring to fig. 4, in the embodiment of the present disclosure, when the intelligent service robot receives the second task instruction for the same target smart home device, the specific control process adopted is as follows:
step 400: and the intelligent service robot receives a second task instruction sent by a second target object.
Specifically, it is assumed that the target smart home device to which the second task instruction is directed is a smart television, and task content of the second task instruction sent by the second target object indicates that the smart television is tuned to channel 2.
The second target object may be any user in the family members, or may be any non-family member, which is not limited herein.
Step 410: the intelligent service robot performs identity verification on the second target object, and if the verification is passed, step 420 is executed; if the verification fails, step 460 is performed.
Specifically, the verification process is the same as the step of identity verification of the first target object by the intelligent service robot.
The step 410 is also an optional step, and the user can select whether to start the authentication function of the intelligent service robot according to the user's own requirements. If the authentication function is enabled, after receiving a second task instruction sent by a second target object in step 400, executing the authentication process in step 410; if the authentication function is not enabled, after receiving the second task instruction sent by the second target object in step 400, the authentication process is not executed, and step 420 is directly executed.
Step 420: the intelligent service robot matches the feature information of the second target object against the feature information pre-stored corresponding to the first target object; if the matching is successful, step 450 is executed; otherwise, step 430 is executed.
Specifically, using the feature information obtained in step 410, the intelligent service robot matches the feature information of the second user against that of the first user. If the matching is determined to be successful, it judges that the first user who sent the first task instruction and the second user who sent the second task instruction are the same user; if the matching is determined to have failed, it judges that they are different users.
In a specific implementation, if the first task instruction and the second task instruction are voice instructions, the voiceprint features of the second user are obtained first and matched against the voiceprint features of the first user. If the similarity between the two sets of voiceprint features is higher than a preset threshold, the matching is determined to be successful; if it is not higher than the preset threshold, the matching is determined to have failed.
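The threshold comparison described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the cosine-similarity measure, the list-based vector representation of voiceprints, and the 0.8 threshold are all assumptions, since the disclosure specifies neither the similarity measure nor the value of the preset threshold.

```python
import math

# Hypothetical preset threshold; the disclosure leaves the actual value unspecified.
SIMILARITY_THRESHOLD = 0.8

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def voiceprints_match(first_user_vp, second_user_vp, threshold=SIMILARITY_THRESHOLD):
    """Match succeeds only if similarity is strictly higher than the threshold,
    mirroring the 'higher than a preset threshold' rule above."""
    return cosine_similarity(first_user_vp, second_user_vp) > threshold
```

A real system would extract the embedding vectors with a speaker-verification model; plain Python lists stand in for them here.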
Step 430: the intelligent service robot obtains multimedia data of the surrounding environment, extracts feature information to be matched from it, and matches the feature information pre-stored corresponding to the first target object against the feature information to be matched; if the matching is successful, step 440 is executed; otherwise, step 450 is executed.
When the intelligent service robot determines that the first task instruction and the second task instruction were not sent by the same user, it needs to further determine whether the first user is still near the smart home device. In a specific implementation, the intelligent service robot may use a camera and/or a microphone to collect multimedia data near the target smart home device in real time, so as to determine whether the first user is nearby, including but not limited to the following two judgment methods:
1. The intelligent service robot collects audio from the surrounding environment through the microphone.
After the audio is collected, voiceprint features can be extracted from it and matched against the feature information pre-entered by the family members, thereby completing the judgment process.
Optionally, the voiceprint feature extraction, and the matching of the voiceprint features against the pre-entered feature information, may be completed by the intelligent service robot itself; alternatively, the robot may send the relevant data to a cloud server, which completes the extraction and matching and returns the result. Details are not repeated here.
2. The intelligent service robot collects image information (such as image files or video files) from the surrounding environment through the camera.
After the image information is collected, facial features and/or human body features can be extracted from it and matched against the feature information pre-entered by the family members, thereby completing the judgment process.
Optionally, the extraction of the facial and/or human body features, and their matching against the feature information pre-entered by the family members, may be completed by the intelligent service robot itself; alternatively, the robot may send the relevant data to a cloud server, which completes the process and returns the result. Details are not repeated here.
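The two judgment methods, together with the optional cloud offload, can be sketched as below. The function names, the equality-based comparison, and the `cloud_matcher` callable are illustrative assumptions standing in for the robot's real sensing, biometric, and RPC interfaces, none of which the disclosure names.

```python
def features_match(extracted, enrolled):
    """Stand-in for a real biometric comparison (voiceprint, facial, or human body
    features); plain equality is used here purely for illustration."""
    return extracted == enrolled

def is_first_user_nearby(extracted_features, first_user_features,
                         use_cloud=False, cloud_matcher=None):
    """Step 430 judgment: is the first user still near the target device?

    extracted_features: features pulled from audio/image data of the surroundings.
    use_cloud / cloud_matcher: model the optional offload to a cloud server,
    where cloud_matcher is a hypothetical callable standing in for the RPC.
    """
    if use_cloud and cloud_matcher is not None:
        return cloud_matcher(extracted_features, first_user_features)
    # Local path: the first user is nearby if any extracted feature matches.
    return any(features_match(f, first_user_features) for f in extracted_features)
```

Either path returns the same yes/no verdict, which is why the robot and the cloud server are interchangeable in the embodiment above.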
Step 440: the intelligent service robot outputs prompt information asking whether to execute the second task instruction to the first target object, and judges whether an instruction allowing execution of the second task instruction is received; if so, step 450 is executed; otherwise, step 460 is executed.
Specifically, the prompt information asking whether to execute the second task instruction may be output to the first target object in, but not limited to, the following manners:
1. A dialog box pops up on the screen of the intelligent service robot, and the prompt information asking whether to execute the second task instruction is output through the dialog box.
Specifically, assume that the task content of the second task instruction is to tune the smart television to channel 2. "Tune the smart TV to channel 2?" may be presented in a pop-up dialog on the screen of the intelligent service robot, thereby asking the first user whether to execute the second task instruction.
In a specific implementation, the sound source direction can be determined from the first task instruction so that the screen faces that direction, making it convenient for the first user to view the prompt information asking whether to execute the second task instruction; alternatively, the first user's position can be determined from collected image information containing the first user, and the screen oriented toward the first user so that the prompt information can be viewed conveniently.
2. The intelligent service robot broadcasts the prompt information asking whether to execute the second task instruction by means of voice broadcast.
Specifically, assume that the task content of the second task instruction is to tune the smart television to channel 2. The intelligent service robot broadcasts "Tune the smart TV to channel 2?" by voice, thereby asking the first user whether to execute the second task instruction.
In a specific implementation, the sound source direction can be determined from the first task instruction so that the intelligent service robot faces the sound source, making it convenient for the first user to listen to the voice broadcast; alternatively, the first user's position can be determined from collected image information containing the first user, so that the robot faces the first user.
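The two prompt-output manners can be sketched as one function with an injectable output channel. `show_dialog` and `speak` are hypothetical stand-ins for the robot's real screen-dialog and voice-broadcast APIs, which the disclosure does not name.

```python
def prompt_first_user(task_content, mode="screen", show_dialog=print, speak=print):
    """Output the prompt asking whether to execute the second task instruction.

    mode="screen": pop up a dialog box on the robot's screen (manner 1).
    mode="voice":  broadcast the prompt by voice (manner 2).
    """
    message = f"Execute the second task instruction: {task_content}?"
    if mode == "screen":
        show_dialog(message)   # dialog box on the robot's screen
    else:
        speak(message)         # voice broadcast
    return message
```

For the running example, `prompt_first_user("tune the smart TV to channel 2")` would present the question to the first user; the same message goes out either way, only the channel differs.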
Step 450: and the intelligent service robot controls the target intelligent household equipment to execute the second task instruction.
Specifically, the intelligent service robot sends the second task instruction to the target intelligent home equipment, and the target intelligent home equipment executes task content in the second task instruction.
In the specific implementation process, if the flow goes from step 420 to step 450, it indicates that the same user sent both the first task instruction and the second task instruction, and the intelligent service robot may directly execute the second task instruction sent by the second user.
For example, assuming that the second user is the "husband", when the intelligent service robot determines that the received feature information of the second user is identical to the feature information of the first user, it concludes that the first task instruction and the second task instruction were both sent by the same user; it therefore sends the task content of the second task instruction to the smart television and controls the smart television to tune to channel 2.
In a specific implementation process, if the flow goes from step 430 to step 450, it indicates that the first user is not near the intelligent service robot; at this time, if the intelligent service robot determines that the second user has passed identity verification, it is proved that the second user is a family member, and the intelligent service robot executes the second task instruction sent by the second user.
In the embodiment of the present disclosure, since the identity verification process has already been executed in step 410, indicating that the second user is a family member, when the flow goes from step 430 to step 450, the intelligent service robot determines that the first user is not near the target smart home device and directly executes the second task instruction issued by the second user.
In practical application, the identity verification may be performed at any point in time, as long as its result has been obtained by the time the prompt information asking whether to execute the second task instruction is presented to the first target object.
On the other hand, in some application scenarios, if the matching between the feature information of the first target object and the feature information to be matched fails, indicating that the first target object is not nearby, the intelligent service robot directly controls the target smart home device to execute the second task instruction without confirming whether the second target object has passed identity verification.
For example, the second user is a "guest" and the first user is the "husband"; when the second task instruction is received, after determining that the "husband" is not near the target smart home device, the intelligent service robot directly sends the task content in the second task instruction to the smart television to control the smart television to tune to channel 2.
In the specific implementation process, if the flow goes from step 440 to step 450, it means that the first target object has agreed to execute the second task instruction.
For example, referring to fig. 5, assuming that the second user is the "wife" and the first user is the "husband", when the intelligent service robot confirms that the "husband" agrees to execute the second task instruction, it likewise sends the task content of the second task instruction to the smart television and controls the smart television to tune to channel 2.
Step 460: and the intelligent service robot refuses to execute the second task instruction.
Specifically, if the flow passes from step 410 to step 460, it indicates that the second user is not a family member.
For example, assuming that the second user is "Grandma", whose feature information was never pre-entered into the intelligent service robot by the family members, then when the intelligent service robot fails to match the received voiceprint and facial features against the pre-entered ones, it refuses to execute the second task instruction, continues to execute the first task instruction, and keeps the smart television on channel 1.
Specifically, if the flow goes from step 440 to step 460, it means that the first user did not agree to execute the second task instruction sent by the second user.
For example, assuming that the first user "husband" does not agree to execute the second task instruction sent by the second user "wife", or the first user "husband" does not reply to the intelligent service robot within a set time period, the intelligent service robot refuses to execute the second task instruction, continues to execute the first task instruction, and keeps the smart television on channel 1.
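Steps 400 to 460 as a whole can be condensed into the following decision sketch. The injected callables (`is_first_user_nearby`, `ask_first_user`) and the string return values are assumptions made for illustration; they stand in for the sensing, prompting, and device-control machinery described above.

```python
def handle_second_instruction(second_user, first_user, enrolled_members,
                              is_first_user_nearby, ask_first_user,
                              verify_identity=True):
    """Return "execute" or "reject" for a second task instruction.

    second_user / first_user: identifiers of the instruction senders.
    enrolled_members: pre-entered family-member identifiers.
    is_first_user_nearby(): step 430 judgment, injected as a callable.
    ask_first_user(): step 440 prompt; returns True if execution is allowed
    (a timeout with no reply would count as False, i.e. refusal).
    """
    # Step 410 (optional): identity verification of the second target object.
    if verify_identity and second_user not in enrolled_members:
        return "reject"                  # step 460: not a family member
    # Step 420: did the same user send both instructions?
    if second_user == first_user:
        return "execute"                 # step 450: same user, execute directly
    # Step 430: is the first user still near the target device?
    if not is_first_user_nearby():
        return "execute"                 # step 450: first user absent
    # Step 440: ask the first user whether to allow execution.
    return "execute" if ask_first_user() else "reject"
```

With the "wife"/"husband" example, the outcome depends on whether the "husband" is nearby and, if so, on his reply; with the "Grandma" example, the instruction is rejected at the verification step.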
Based on the same inventive concept, referring to fig. 6, an embodiment of the present application further provides a control apparatus for an intelligent service device, including:
the first processing unit 610 is configured to, after controlling the target smart home device to execute a first task instruction of a first target object, obtain multimedia data of a surrounding environment if a second task instruction from a second target object is received, where the second task instruction is for the target smart home device;
the second processing unit 620 is configured to extract feature information to be matched from the multimedia data, and to output prompt information asking whether to execute the second task instruction if the feature information pre-stored corresponding to the first target object is successfully matched with the feature information to be matched;
the control unit 630 is configured to control the target smart home device to execute the second task instruction if an instruction allowing execution of the second task instruction is received.
Optionally, after receiving the second task instruction from the second target object, before acquiring the multimedia data of the surrounding environment, the first processing unit 610 is further configured to:
extracting feature information of the second target object from the second task instruction;
and if the matching of the characteristic information of the second target object and the characteristic information of the first target object fails, determining that the first target object is different from the second target object.
Optionally, the control unit 630 is further configured to:
and if the characteristic information of the second target object is successfully matched with the characteristic information of the first target object, controlling the target intelligent household equipment to execute the second task instruction.
Optionally, to obtain multimedia data of the surrounding environment and extract feature information to be matched from it, the second processing unit 620 performs any one or a combination of the following operations:
acquiring video data or image data of the surrounding environment through a camera, and extracting facial features to be matched or human body features to be matched from the video data or the image data;
and acquiring audio data of the surrounding environment through a microphone, and extracting the voiceprint features to be matched from the audio data.
Optionally, to output the prompt information asking whether to execute the second task instruction, the second processing unit 620 performs any one of the following operations:
popping up a dialog box on a screen of the intelligent service equipment, and presenting prompt information whether to execute the second task instruction or not through the dialog box;
and broadcasting whether to execute the prompt information of the second task instruction or not in a voice broadcasting mode.
Optionally, the control unit 630 is further configured to:
if the feature information of the first target object fails to be matched with the feature information to be matched, controlling the target intelligent household equipment to execute the second task instruction; or
and if the feature information of the first target object fails to be matched with the feature information to be matched, and the second target object passes identity verification, controlling the target intelligent household equipment to execute the second task instruction.
Optionally, after outputting a prompt message indicating whether to execute the second task instruction, the control unit 630 is further configured to:
and if no reply is received within the set time length, refusing to execute the second task instruction.
Based on the same inventive concept, referring to fig. 7, an embodiment of the present application further provides an intelligent service robot, including:
a memory 710 for storing executable instructions;
a processor 720 for reading and executing the executable instructions stored in the memory, performing the following method:
after controlling target intelligent household equipment to execute a first task instruction of a first target object, if a second task instruction from a second target object is received, acquiring multimedia data of a surrounding environment, wherein the second task instruction is specific to the target intelligent household equipment;
extracting feature information to be matched from the multimedia data, and if the feature information prestored corresponding to the first target object is successfully matched with the feature information to be matched, outputting prompt information for judging whether to execute the second task instruction;
and if an instruction allowing the second task instruction to be executed is received, controlling the target intelligent household equipment to execute the second task instruction.
Optionally, after receiving the second task instruction from the second target object, before acquiring the multimedia data of the surrounding environment, the processor 720 is further configured to:
extracting feature information of the second target object from the second task instruction;
and if the matching of the characteristic information of the second target object and the characteristic information of the first target object fails, determining that the first target object is different from the second target object.
Optionally, the processor 720 is further configured to:
and if the characteristic information of the second target object is successfully matched with the characteristic information of the first target object, controlling the target intelligent household equipment to execute the second task instruction.
Optionally, to obtain multimedia data of the surrounding environment and extract feature information to be matched from it, the processor 720 executes any one or a combination of the following operations:
acquiring video data or image data of the surrounding environment through a camera, and extracting facial features to be matched or human body features to be matched from the video data or the image data;
and acquiring audio data of the surrounding environment through a microphone, and extracting the voiceprint features to be matched from the audio data.
Optionally, to output the prompt information asking whether to execute the second task instruction, the processor 720 performs any one of the following operations:
popping up a dialog box on a screen of the intelligent service equipment, and presenting prompt information whether to execute the second task instruction or not through the dialog box;
and broadcasting whether to execute the prompt information of the second task instruction or not in a voice broadcasting mode.
Optionally, the processor 720 is further configured to:
if the feature information of the first target object fails to be matched with the feature information to be matched, controlling the target intelligent household equipment to execute the second task instruction; or
and if the feature information of the first target object fails to be matched with the feature information to be matched, and the second target object passes identity verification, controlling the target intelligent household equipment to execute the second task instruction.
Optionally, after outputting a prompt message indicating whether to execute the second task instruction, the processor 720 is further configured to:
and if no reply is received within the set time length, refusing to execute the second task instruction.
Based on the same inventive concept, the embodiment of the present application further provides a computer storage medium, and when instructions in the computer storage medium are executed by a processor, the processor is enabled to execute any one of the methods performed by the intelligent service robot.
In summary, in the embodiment of the present disclosure, after a target smart home device executes a first task instruction issued by a first target object, if a second task instruction issued by a second target object for the same target smart home device is received, feature information to be matched is extracted from multimedia data collected from the surrounding environment. If the feature information pre-stored corresponding to the first target object is successfully matched with the feature information to be matched, prompt information asking whether to execute the second task instruction is output to the first target object, and if an instruction allowing execution of the second task instruction is received, the target smart home device is controlled to execute the second task instruction. In this way, when the intelligent service robot receives task instructions issued by different target objects for the same smart home device, the task instructions can still be executed correctly, effectively avoiding disorder in their execution.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present disclosure have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the disclosure.
It will be apparent to those skilled in the art that various changes and modifications may be made to the disclosed embodiments without departing from the spirit and scope of the disclosed embodiments. Thus, if such modifications and variations of the embodiments of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is also intended to encompass such modifications and variations.

Claims (10)

1. A control method of an intelligent service device is characterized by comprising the following steps:
after controlling target intelligent household equipment to execute a first task instruction of a first target object, if a second task instruction from a second target object is received, acquiring multimedia data of a surrounding environment, wherein the second task instruction is specific to the target intelligent household equipment;
extracting feature information to be matched from the multimedia data, and if the feature information prestored corresponding to the first target object is successfully matched with the feature information to be matched, outputting prompt information for judging whether to execute the second task instruction;
and if an instruction allowing the second task instruction to be executed is received, controlling the target intelligent household equipment to execute the second task instruction.
2. The method of claim 1, wherein after receiving the second task instruction from the second target object, prior to obtaining the multimedia data of the surrounding environment, further comprising:
extracting feature information of the second target object from the second task instruction;
and if the matching of the characteristic information of the second target object and the characteristic information of the first target object fails, determining that the first target object is different from the second target object.
3. The method of claim 2, wherein the method further comprises:
and if the characteristic information of the second target object is successfully matched with the characteristic information of the first target object, controlling the target intelligent household equipment to execute the second task instruction.
4. The method as claimed in claim 1, 2 or 3, wherein the step of obtaining multimedia data of the surrounding environment and extracting feature information to be matched from the multimedia data comprises any one or combination of the following operations:
acquiring video data or image data of the surrounding environment through a camera, and extracting facial features to be matched or human body features to be matched from the video data or the image data;
and acquiring audio data of the surrounding environment through a microphone, and extracting the voiceprint features to be matched from the audio data.
5. The method of claim 1, 2 or 3, wherein outputting the prompt information whether to execute the second task instruction comprises any one of:
popping up a dialog box on a screen of the intelligent service equipment, and presenting prompt information whether to execute the second task instruction or not through the dialog box;
and broadcasting whether to execute the prompt information of the second task instruction or not in a voice broadcasting mode.
6. The method of claim 5, wherein the method further comprises:
if the feature information of the first target object fails to be matched with the feature information to be matched, controlling the target intelligent household equipment to execute the second task instruction; or
and if the feature information of the first target object fails to be matched with the feature information to be matched, and the second target object passes identity verification, controlling the target intelligent household equipment to execute the second task instruction.
7. The method of claim 5, wherein after outputting a prompt to whether to execute the second task instruction, the method further comprises:
and if no reply is received within the set time length, refusing to execute the second task instruction.
8. A control apparatus of an intelligent service device, the apparatus comprising:
the first processing unit is used for controlling the target intelligent household equipment to execute a first task instruction of a first target object, and acquiring multimedia data of the surrounding environment if a second task instruction from a second target object is received, wherein the second task instruction is specific to the target intelligent household equipment;
the second processing unit is used for extracting the feature information to be matched from the multimedia data, and outputting prompt information for judging whether to execute the second task instruction or not if the feature information prestored corresponding to the first target object is successfully matched with the feature information to be matched;
and the control unit is used for controlling the target intelligent household equipment to execute the second task instruction if receiving an instruction of allowing the second task instruction to be executed.
9. An intelligent service robot, comprising:
a memory for storing executable instructions;
a processor for reading and executing executable instructions stored in the memory to perform the method of any one of claims 1 to 7.
10. A computer storage medium, wherein instructions in the computer storage medium, when executed by a processor, enable the processor to perform the method of any one of claims 1-7.
CN202010787549.5A 2020-08-07 2020-08-07 Control method and device of intelligent service equipment Pending CN111953572A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010787549.5A CN111953572A (en) 2020-08-07 2020-08-07 Control method and device of intelligent service equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010787549.5A CN111953572A (en) 2020-08-07 2020-08-07 Control method and device of intelligent service equipment

Publications (1)

Publication Number Publication Date
CN111953572A true CN111953572A (en) 2020-11-17

Family

ID=73332992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010787549.5A Pending CN111953572A (en) 2020-08-07 2020-08-07 Control method and device of intelligent service equipment

Country Status (1)

Country Link
CN (1) CN111953572A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677806A (en) * 2015-12-31 2016-06-15 联想(北京)有限公司 Information processing method and electronic equipment
CN105867141A (en) * 2016-03-30 2016-08-17 宁波三博电子科技有限公司 Intelligent home control method and system based on effective instruction scope
CN109525537A (en) * 2017-09-19 2019-03-26 中兴通讯股份有限公司 A kind of control method and device accessing smart home system
CN110134022A (en) * 2019-05-10 2019-08-16 平安科技(深圳)有限公司 Audio control method, device and the electronic device of smart home device
CN110161875A (en) * 2019-06-28 2019-08-23 青岛海尔科技有限公司 The control method and system of smart home operating system based on Internet of Things


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785268A (en) * 2021-01-22 2021-05-11 Guangzhou Fugang Wanjia Intelligent Technology Co., Ltd. Smart home service method and device, electronic equipment and storage medium
WO2023040827A1 (en) * 2021-09-16 2023-03-23 Huawei Technologies Co., Ltd. Smart home control method
CN114654477A (en) * 2022-03-25 2022-06-24 Jining Dolphin Technology Co., Ltd. Service robot control method, system and storage medium based on cloud platform
CN114654477B (en) * 2022-03-25 2023-08-18 Jining Dolphin Technology Co., Ltd. Service robot control method, system and storage medium based on cloud platform

Similar Documents

Publication Publication Date Title
CN111953572A (en) Control method and device of intelligent service equipment
CN108805091B (en) Method and apparatus for generating a model
CN109361703B (en) Voice equipment binding method, device, equipment and computer readable medium
US9583102B2 (en) Method of controlling interactive system, method of controlling server, server, and interactive device
CN106210836B (en) Interactive learning method and device in video playing process and terminal equipment
CN110572601B (en) Double-recording video recording system with real-time checking function
CN105141427B (en) Login authentication method, apparatus and system based on voiceprint recognition
CN109582581B (en) Result determining method based on crowdsourcing task and related equipment
CN111160928A (en) Identity verification method and device
CN109285542B (en) Voice interaction method, medium, device and system of karaoke system
CN110751129A (en) Express delivery service identity verification method, device, equipment and storage medium
CN111145762B (en) Electronic certificate verification method and system based on voiceprint recognition
CN114240342A (en) Conference control method and device
CN105871900A (en) Identity authentication method and system
CN114095738A (en) Video and live broadcast processing method, live broadcast system, electronic device, terminal and medium
CN110335237B (en) Method and device for generating model and method and device for recognizing image
US20230205670A1 (en) Method and electronic checking system for checking performances of an application on a target equipment of a vehicle, related computer program and applications platform
CN111882739A (en) Entrance guard verification method, entrance guard device, server and system
CN109539495B (en) Control method, air conditioning apparatus, and storage medium
CN113111759A (en) Customer confirmation detection method and device in double-record data quality inspection
CN109065056B (en) Method and device for controlling air conditioner through voice
CN113241057A (en) Interactive method, apparatus, system and medium for speech synthesis model training
CN110276942A (en) Method, apparatus, device and readable storage medium for adding a controlled device
CN112927687B (en) Function control method, device, system and storage medium of equipment
CN111078082A (en) Point reading method based on image recognition and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201117