CN113900384A - Method and device for interaction between robot and intelligent equipment and electronic equipment - Google Patents


Info

Publication number
CN113900384A
Authority
CN
China
Prior art keywords: robot, parameters, mode, intelligent, intelligent equipment
Legal status
Pending
Application number
CN202111193475.3A
Other languages
Chinese (zh)
Inventor
高斌 (Gao Bin)
Current Assignee
Cloudminds Beijing Technologies Co Ltd
Original Assignee
Cloudminds Beijing Technologies Co Ltd
Application filed by Cloudminds Beijing Technologies Co Ltd
Priority to CN202111193475.3A
Publication of CN113900384A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 Systems controlled by a computer
    • G05B15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/26 Pc applications
    • G05B2219/2642 Domotique, domestic, home control, automation, smart house
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The disclosure provides a method, an apparatus, and an electronic device for interaction between a robot and a smart device. The method comprises: generating an adjustable-parameter training model according to sensing data and the corresponding smart-device parameters; training the model to obtain different control interaction modes; establishing a network connection between the robot and the smart device; selecting the corresponding smart device and control interaction mode according to the sensing-data parameters of the scene the robot is in; acquiring the target parameters and adjustable parameters of the smart device in that control interaction mode; and adjusting and controlling the smart device with the target parameters as the reference, according to the adjustable parameters. Through interaction between the robot and the smart hardware, the smart devices can be controlled to reach an optimal environment mode, obstacles in the robot's task execution can be handled, and interference factors during task execution can be weakened or eliminated.

Description

Method and device for interaction between robot and intelligent equipment and electronic equipment
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular to a method and an apparatus for interaction between a robot and a smart device, and an electronic device.
Background
With the development of artificial intelligence, intelligent robots can solve more and more practical problems according to human needs, such as intelligent recommendation, restaurant meal delivery, and intelligent tracking, and can perform various intelligent interactions with users, adding interest while solving problems.
In the process of executing a task or interacting, an existing robot often encounters poor ambient conditions: insufficient ambient light, an ambient temperature that is too high or too low, factors in the environment that hinder recognition or reduce recognition accuracy, excessive ambient noise, inappropriate music volume, or devices obstructing the robot's travel. How to weaken or eliminate such interference factors while the robot executes tasks or interacts, and thereby improve user experience, has become an urgent problem.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a method for interaction between a robot and a smart device. When the robot encounters an environmental or task-blocking problem, for example ambient light that makes an object unrecognizable or degrades the shooting effect, an ambient temperature that is too low or too high, or a smart device that obstructs the robot's movement, the robot networks with the smart device and intelligently controls the networked device to resolve the problem.
In order to achieve the above object, in a first aspect, an embodiment of the present invention provides a method for a robot to interact with a smart device, including:
generating an adjustable-parameter training model according to sensing data and the corresponding smart-device parameters;
training through the adjustable-parameter training model to obtain different control interaction modes;
establishing a network connection between the robot and the smart device;
selecting the corresponding smart device and control interaction mode according to the sensing-data parameters of the scene the robot is in;
acquiring target parameters and adjustable parameters of the smart device in the control interaction mode;
and adjusting and controlling the smart device with the target parameters as the reference, according to the adjustable parameters.
Further, generating the adjustable-parameter training model according to the sensing data and the corresponding smart-device parameters comprises:
acquiring a plurality of sensing data through sensors on the robot;
recording the positions and adjustable parameters of the smart devices corresponding to the sensing data;
extracting sensing-data parameters from the sensing data;
and generating a convolutional neural network training model relating the sensing-data parameters to the adjustable parameters.
Further, obtaining different control interaction modes through training of the adjustable-parameter training model comprises:
inputting a plurality of sensing data into the adjustable-parameter training model;
performing scene classification on the plurality of sensing data;
training and optimizing the sensing-data parameters of the sensing data in each scene;
and outputting the control interaction mode corresponding to the sensing data.
Further, selecting the corresponding smart device and control interaction mode according to the sensing-data parameters of the scene the robot is in comprises:
collecting, by the robot, the current sensing data of the scene in real time;
extracting the current sensing-data parameters;
comparing the sensing-data parameters with a plurality of stored sensing-data parameters, and selecting the consistent sensing data;
and selecting the smart device and control interaction mode corresponding to the consistent sensing data.
Further, the control interaction mode comprises the position and adjustable parameters of at least one smart device in the corresponding scene; the target parameters of the smart device are the reference smart-device parameters preset in the corresponding control interaction mode, and the adjustable parameters are the smart-device parameters corresponding to the scene the robot is in.
Further, the method further comprises:
detecting the smart devices within a certain area around the robot's position;
selecting the corresponding control interaction mode according to the type of the smart device;
performing control interaction on the smart device according to the control interaction mode;
and after the robot finishes executing the task and leaves the area, restoring the hardware device to its original state.
Further, the method comprises:
when the smart device is detected to be a movable device, performing movement control on the smart device if, given the relative positions of the robot and the smart device, the device is located on the path along which the robot executes its task;
controlling the smart device to move a first distance away from the path along which the robot executes the task;
and after the robot has executed the task and moved a second distance away, controlling the smart device back to its original position.
Further, when the sensing data is a visually recognizable image, the method comprises:
detecting the lighting devices within a certain area around the robot's position;
selecting the corresponding lighting adjustment mode according to the image parameters of the lighting device;
and adjusting the adjustable parameters of the lighting device according to the lighting adjustment mode.
Further, when the sensing data is speech recognition data, the method comprises:
detecting the media devices and noise devices within a certain area around the robot's position;
selecting the corresponding sound control mode according to the sound parameters of the media devices and the noise devices;
and adjusting the adjustable parameters of the media devices and the noise devices according to the sound control mode.
Further, the smart device comprises at least one of a lighting device, a household appliance, a smart terminal, smart home equipment, smart transportation equipment, and a smart robot.
Further, the control interaction mode comprises at least one of a lighting adjustment mode, a position adjustment mode, an angle adjustment mode, a display adjustment mode, an air-conditioning mode, a sound control mode, and a combined mode.
In a second aspect, an embodiment of the present disclosure provides an apparatus for a robot to interact with a smart device, the apparatus comprising:
a model generation module, configured to generate an adjustable-parameter training model according to the visually recognizable image and the corresponding smart-device parameters;
a training module, configured to obtain different control interaction modes through training of the adjustable-parameter training model;
a connection module, configured to establish a network connection between the robot and the smart device;
a mode selection module, configured to select the corresponding smart device and control interaction mode according to the image parameters of the scene the robot is in;
a parameter acquisition module, configured to acquire the adjustable parameters of the smart device in the control interaction mode;
and a control module, configured to control the smart device according to the adjustable parameters.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory for storing computer readable instructions; and
a processor configured to execute the computer readable instructions to enable the electronic device to implement the method of any of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer-readable instructions which, when executed by a computer, cause the computer to implement the method of any one of the first aspect.
The embodiments of the present disclosure disclose a method, an apparatus, an electronic device, and a computer-readable storage medium for interaction between a robot and a smart device. The method comprises: generating an adjustable-parameter training model of the corresponding smart device according to sensing data; training the model to obtain different control interaction modes; establishing a network connection between the robot and the smart device; selecting the corresponding smart device and control interaction mode according to the sensing-data parameters of the scene the robot is in; acquiring the target parameters and adjustable parameters of the smart device in that control interaction mode; and adjusting and controlling the smart device with the target parameters as the reference, according to the adjustable parameters. Through this method, interaction between the robot and the smart hardware can control the smart devices to reach an optimal environment mode, handle obstacles in the robot's task execution, and weaken or eliminate interference factors during task execution.
The foregoing is a summary of the present disclosure. To make the technical means of the present disclosure clearer, the disclosure may also be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
Fig. 1 is a schematic flowchart of a method for interaction between a robot and a smart device according to an embodiment of the present disclosure;
Fig. 2 is a schematic view of a scenario of interaction between a robot and a smart device according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of the smart-device control interaction modes of a robot according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of an apparatus for interaction between a robot and a smart device according to another embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an electronic device according to another embodiment of the present disclosure.
Detailed Description
To describe the technical content of the present invention more clearly, a further description is given below in conjunction with specific embodiments.
The embodiments described below do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application, as detailed in the appended claims.
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The disclosed embodiments are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for interaction between a robot and a smart device according to an embodiment of the present disclosure. The method provided in this embodiment may be performed by an apparatus for interaction between a robot and a smart device; the apparatus may be implemented as software, or as a combination of software and hardware, and may be integrated in a device of the interaction system, such as a terminal device. As shown in Fig. 1, the method comprises the following steps:
step S101: and generating an adjustable parameter training model of the corresponding intelligent equipment according to the sensing data.
In step S101, generating the adjustable-parameter training model of the corresponding smart device according to the sensing data comprises: acquiring a plurality of sensing data (here, visually recognizable images) through sensors on the robot; recording the positions and adjustable parameters of the smart devices corresponding to each visually recognizable image; extracting the image parameters from the recognizable images; and generating a convolutional neural network training model relating the image parameters to the adjustable parameters.
With reference to Fig. 2, which shows a schematic view of a scene in which a robot interacts with smart devices according to an embodiment of the present disclosure, a vision sensor, such as an image camera and/or a depth camera, is mounted on the robot. The image camera takes pictures or records video, capturing in real time the environment image or target image the robot needs. The depth camera collects depth images around the robot and is used to calculate the size of target objects. Depth cameras can be implemented in many ways: a ToF (Time of Flight) camera, parallax between two cameras (stereo vision), a single moving camera shooting the same scene from different angles, machine-learning reconstruction of a scene model, or estimating distance by focusing repeatedly at different distances. As shown in Fig. 2, the robot and each smart device in the environment are connected through a network. The smart device may be a lighting device, an air conditioner, an electric curtain, a television, a computer, a floor-sweeping robot, a smart socket, a smart switch, and so on; these are merely examples and are not limiting. For example, the smart device may also be a drone, a smart car, a balance scooter, etc.
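As a brief illustration of the ToF principle mentioned above (the numbers are an example, not from the disclosure): the camera measures the round-trip time of emitted light, and the distance is half the total path.

```python
# Illustrative sketch of the time-of-flight relation.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance is half of the light's round-trip path."""
    return C * round_trip_time_s / 2.0

print(tof_distance_m(2.0e-8))  # ~3.0 m for a 20 ns round trip
```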
The robot collects images of the surrounding environment and of target objects in real time while executing tasks. The scene may have normal light, insufficient light, or excessively strong light, so a captured image may or may not be visually recognizable; even visually recognizable images vary in quality with the lighting of the shooting environment. When the robot takes an image, it obtains over the network the adjustable parameters of the smart hardware in the environment at the moment of shooting, for example the brightness, illumination angle, and position of lighting devices; the temperature, humidity, and mode of the air conditioner; the shading ratio and pull-up position of the curtain; the position, motion direction, and trajectory of the sweeping robot; and the state of each smart switch and smart socket. Each image is labeled with the corresponding smart-device parameters at the time of shooting, forming an image classification annotated with smart-device parameters. Convolutional neural network training is performed on these classified data to generate the adjustable-parameter training model of the corresponding smart devices; the visually recognizable image that best meets the user's requirements is selected, and the adjustment parameters of the corresponding smart devices are taken as the target adjustment parameters. Through the training model, a newly collected image can be mapped to the adjustable parameters of the smart hardware at collection time, and the robot can then adjust and control the corresponding adjustable parameters of the smart devices toward the target adjustment parameters.
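For illustration only, a minimal sketch of such a training setup, assuming PyTorch; the architecture, the five-parameter output, and all names are illustrative rather than taken from the disclosure:

```python
# Illustrative sketch: regress smart-device adjustable parameters from an image.
# Each training sample pairs a captured image with the parameter vector
# (e.g. lamp brightness, color temperature, curtain position) logged at capture.
import torch
import torch.nn as nn

class AdjustableParamNet(nn.Module):
    def __init__(self, num_params: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_params)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = AdjustableParamNet(num_params=5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(images: torch.Tensor, recorded_params: torch.Tensor) -> float:
    """One optimization step on a batch of (image, recorded-parameter) pairs."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), recorded_params)
    loss.backward()
    optimizer.step()
    return loss.item()
```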
Step S102: training through the adjustable-parameter training model to obtain different control interaction modes.
In step S102, obtaining different control interaction modes through training of the adjustable-parameter training model comprises: inputting a plurality of sensing data into the adjustable-parameter training model (taking visually recognizable images as the example of sensing data); performing scene classification on the visually recognizable images; training and optimizing the image parameters of the visually recognizable images in each scene; and outputting the control interaction mode corresponding to the visually recognizable images. The control interaction mode comprises at least one of a lighting adjustment mode, a position adjustment mode, an angle adjustment mode, a display adjustment mode, a temperature adjustment mode, a sound control mode, and a combined mode.
With reference to Fig. 2, the control interaction mode comprises the position and adjustable parameters of at least one smart device in the corresponding scene. Each mode corresponds to a group of smart devices; a group may contain one or more devices, and each device in the group is controllable and has adjustable parameters. The robot can switch lighting conditions according to the scene as it passes through different scene positions. The robot labels the training data by pairing each collected visual image with the adjustable parameters of the device group recorded at collection time and, according to the collected image and the type of the associated smart device, trains the corresponding control interaction mode, such as a lighting adjustment mode, a position adjustment mode, an angle adjustment mode, a display adjustment mode, a temperature adjustment mode, a sound control mode, or a combined mode, which this disclosure does not limit. Each mode corresponds to its own device parameters and has one set of target parameters: the smart-device parameters recorded under the optimal condition of the scene. The target parameters are the reference smart-device parameters preset in the corresponding control interaction mode, and the adjustable parameters collected by the robot are the smart-device parameters corresponding to the scene the robot is in.
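For illustration, one possible data model for a control interaction mode; the class and field names, positions, and values are assumptions, not the disclosure's format:

```python
# Illustrative data model: a control interaction mode groups the smart devices
# of a scene with the reference (target) parameters recorded for that scene.
from dataclasses import dataclass, field

@dataclass
class SmartDevice:
    device_id: str
    position: tuple       # e.g. (x, y) in the robot's map frame
    adjustable: dict      # current adjustable parameters, e.g. {"brightness": 20}

@dataclass
class ControlInteractionMode:
    name: str             # e.g. "lighting_adjustment", "position_adjustment"
    devices: list = field(default_factory=list)
    target_params: dict = field(default_factory=dict)  # device_id -> reference values

bedroom_lighting = ControlInteractionMode(
    name="lighting_adjustment",
    devices=[SmartDevice("desk_lamp_1", (2.0, 1.5), {"brightness": 20})],
    target_params={"desk_lamp_1": {"brightness": 70, "color_temp_k": 4000}},
)
```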
Step S103: establishing a network connection between the robot and the smart device.
In step S103, a network connection is established between the robot and the smart device through at least one of a WiFi wireless network, Bluetooth, a Zigbee gateway, and a multi-mode network. Through this connection the robot obtains control rights over the relevant smart devices in the network, which it uses to adjust and control the devices' parameters.
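For illustration, a minimal sketch of the connection step; the transport class, its methods, and the printed messages are placeholders rather than a real WiFi/Bluetooth/Zigbee API:

```python
# Illustrative sketch: abstract transport over which the robot acquires
# control rights before adjusting any device parameters.
from abc import ABC, abstractmethod

class DeviceLink(ABC):
    @abstractmethod
    def connect(self, device_id: str) -> bool: ...
    @abstractmethod
    def set_param(self, device_id: str, name: str, value) -> None: ...

class ZigbeeGatewayLink(DeviceLink):
    def connect(self, device_id: str) -> bool:
        print(f"[zigbee] pairing with {device_id}")   # placeholder for real pairing
        return True

    def set_param(self, device_id: str, name: str, value) -> None:
        print(f"[zigbee] {device_id}: {name} <- {value}")

def acquire_control(links: dict, device_id: str, transport: str) -> DeviceLink:
    """Pick a transport and obtain control of the device before adjustment."""
    link = links[transport]
    if not link.connect(device_id):
        raise ConnectionError(f"could not reach {device_id} over {transport}")
    return link

links = {"zigbee": ZigbeeGatewayLink()}
lamp_link = acquire_control(links, "desk_lamp_1", "zigbee")
lamp_link.set_param("desk_lamp_1", "brightness", 70)
```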
Step S104: selecting the corresponding smart device and control interaction mode according to the sensing-data parameters of the scene the robot is in.
In step S104, selecting the corresponding smart device and control interaction mode according to the sensing-data parameters of the scene the robot is in comprises: collecting the current sensing data of the scene in real time (taking an image as the example); extracting the image parameters of the current image; comparing them with the stored image parameters of a plurality of images and selecting the image whose parameters are consistent; and selecting the smart device and control interaction mode corresponding to that image. In the present disclosure, different control modes are set according to scenes and requirements, such as a lighting adjustment mode, a position adjustment mode, an angle adjustment mode, a display adjustment mode, an air-conditioning mode, a sound control mode, and a combined mode. With reference to the scenarios of Fig. 2, the corresponding interaction control modes are described below.
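For illustration, a nearest-match sketch of the comparison step; treating "consistent" as "closest within a tolerance" is an assumption, and the parameter names are illustrative:

```python
# Illustrative sketch: match the current scene's (normalized) sensing-data
# parameters against the stored ones and return the associated mode.
import math

def select_mode(current: dict, stored_modes: list, tol: float = 0.15):
    """stored_modes: list of (param_dict, mode_name) pairs from training."""
    best, best_dist = None, float("inf")
    for params, mode_name in stored_modes:
        keys = params.keys() & current.keys()
        if not keys:
            continue
        dist = math.sqrt(sum((params[k] - current[k]) ** 2 for k in keys) / len(keys))
        if dist < best_dist:
            best, best_dist = mode_name, dist
    return best if best_dist <= tol else None   # None: no consistent stored scene

mode = select_mode({"brightness": 0.32, "noise": 0.60},
                   [({"brightness": 0.30, "noise": 0.55}, "lighting_adjustment")])
# -> "lighting_adjustment"
```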
When the robot performs visual recognition, the lighting of the scene can affect recognition: the robot may fail to recognize objects because of the light, or the recognition or photographic effect may not be optimal. In that case the lighting devices need to be adjusted according to the adjustable parameters corresponding to the best effect in the training model, so that the environment reaches the best lighting effect. The lighting may be lamplight, sunlight, and so on; the robot collects images of the scene, and the control scene obtained from the training model is the lighting adjustment mode. In this mode, the smart device to cooperate with may be a smart desk lamp, a Bluetooth lamp, or a smart plug-in night lamp, but is not limited thereto.
When the robot moves along a path and encounters a movable smart device on that path, the control interaction mode can be set to the position adjustment mode. In this mode, the smart device to cooperate with may be a sweeping robot, balance scooter, smart bicycle, electric vehicle, vacuum cleaner, scooter, unmanned vehicle, smart curtain, electric clothes hanger, and so on, but is not limited thereto.
When the robot detects, in a scene that needs to be quiet, that a smart device is too loud, the control interaction mode can be set to the sound adjustment mode. In this mode, the smart device to cooperate with may be, but is not limited to, a smart television, smart speaker, sweeping robot, smart washing machine, air purifier, fan, food processor, or range hood.
When the robot is in a living or office scene where comfort needs to be provided to the user, the control interaction mode may be set to the air-conditioning mode. In this mode, the smart device to cooperate with may be, but is not limited to, a smart air conditioner, humidifier, dehumidifier, or fan.
In addition, in the embodiments of the present disclosure, a plurality of modes may be combined, for example a lighting mode with an air-conditioning mode, a position movement mode, a human-body service mode, and so on, to form a combined mode suitable for a variety of requirements.
Step S105: acquiring the target parameters and adjustable parameters of the smart device in the control interaction mode.
Corresponding to each control interaction mode in step S104, step S105 acquires the target parameters and adjustable parameters of the smart device in each control interaction mode. The control interaction mode comprises the position and adjustable parameters of at least one smart device in the corresponding scene; the target parameters are the reference smart-device parameters preset in the corresponding control interaction mode, set to the smart-device parameters recorded under the optimal condition of the scene, and the adjustable parameters collected by the robot are the smart-device parameters corresponding to the scene the robot is in. With reference to the scenarios of Fig. 2, the parameters of the different interaction control modes are described below.
Taking visual recognition as the example in this embodiment: when scene lighting affects the robot's visual recognition, so that objects cannot be recognized or the recognition or photographic effect is not optimal, the lighting devices are adjusted according to the adjustable parameters corresponding to the best effect in the training model, so that the environment reaches the best lighting effect. The lighting may be lamplight, sunlight, and so on; the robot collects images of the scene, and the control scene obtained from the training model is the lighting adjustment mode. For this mode, the server acquires, for each kind of smart lamp, the adjustable color range, adjustable color-temperature range, brightness, rotation/illumination angle (up-down or left-right), illumination range, and mode (such as sunlight mode, colored-light mode, night-light mode, reading mode, or computer mode).
While executing task movements, the robot performs operations such as picture scanning, grabbing, and recognition, collecting the current picture in real time. When the robot passes a smart device, the robot issues one-to-one and one-to-many function requests to the devices in (but not limited to) its scene and takes photographs. For example, when only the bedroom desk lamp is on (at its current color temperature, brightness, and rotation angle), object recognizability in the picture is low; but if, with the desk lamp in the same state, the bedroom ceiling lamp is also turned on, object recognizability in the picture is high. The server stores the current data of the smart devices, including the smart hardware, function values, hardware positions, photos with the light sources on (that is, the photos corresponding to the target parameters) and photos without, and labels the positions and function values of the light sources in the photos. When the picture is unclear because of backlight, too-dark light, or overexposure, so that accurate recognition is impossible, the robot communicates, according to its position, the environment, and the picture, with the cooperating smart desk lamp, Bluetooth lamp, or smart plug-in night lamp that is needed.
When the robot moves along a path and encounters a movable smart device on that path, the control interaction mode can be set to the position adjustment mode. In this mode, the server obtains the opening ratio of smart curtains (open/pause/close), vehicle speed and movable direction, and clothes-hanger state (raise/lower/pause). While executing tasks (scanning, grabbing, recognition, and so on), the robot visually recognizes obstacles in real time; when a movable device such as a curtain, balance scooter, smart bicycle, sweeping robot, smart vacuum cleaner, smart scooter, drone, or clothes hanger is detected, the robot obtains the adjustable parameters of the movable device together with its position and environment, and gives the range and direction in which the device needs to move.
When the robot detects, in a scene that needs to be quiet, that a smart device is too loud, the control interaction mode can be set to the sound adjustment mode. In this mode, the server acquires the adjustable parameters of the smart devices, such as volume control of the smart television or smart speaker; mode and open/close/pause of the sweeping robot, smart washing machine, or air purifier; gear, oscillation start/stop, on/off, and left-right air sweeping (wind speed 1-5) of the floor fan; mode and open/close/pause of the food processor; rotation angle, on/off, and day/night mode of the smart camera; on/off of the range hood; and low/high spin speed and open/close/pause of the washing machine. When the robot passes a smart device, it issues one-to-one and one-to-many function requests to the devices in (but not limited to) its scene; the device's sound is stepped from highest to lowest while the robot plays a recording and simultaneously performs speech recognition on it, and the recognized text is compared with the recording to obtain the device volume the robot can accept. The server stores the current data of the smart device, covering the smart hardware, function values, device positions, and sound data.
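For illustration, a sketch of the volume-sweep idea just described; the device handle and the recognize() callback stand in for real device-control and speech-recognition APIs:

```python
# Illustrative sketch: sweep a device's volume from highest to lowest, replay a
# known recording at each step, and keep the loudest device volume at which the
# robot's speech recognition still matches the reference transcript.
def calibrate_acceptable_volume(device, reference_text: str, recognize) -> int:
    for volume in range(100, -1, -10):           # highest to lowest
        device.set_param("volume", volume)
        heard = recognize()                      # robot plays and recognizes the recording
        if heard.strip().lower() == reference_text.strip().lower():
            return volume                        # volume the robot can accept
    return 0                                     # device must be muted for recognition
```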
In addition, in the embodiments of the present disclosure, a plurality of modes may be combined, for example a lighting mode with an air-conditioning mode, or an air-conditioning mode with a position movement mode, to form a combined mode meeting a variety of requirements. The adjustable parameters and target parameters of the smart hardware in every mode contained in the combined mode are acquired.
Step S106: adjusting and controlling the smart device with the target parameters as the reference, according to the adjustable parameters.
In step S106, according to the corresponding control interaction mode, the smart device is adjusted and controlled using the obtained adjustable parameters, with the target parameters as the reference.
When the control interaction mode corresponding to the scene acquired by the robot is the lighting adjustment mode, the acquired adjustable parameters of the smart device are adjusted toward the target parameters. When the sensing data is a visually recognizable image, the method comprises: detecting the lighting devices within a certain area around the robot's position; selecting the corresponding lighting adjustment mode according to the image parameters of the lighting devices; and adjusting the adjustable parameters of the lighting devices according to the lighting adjustment mode.
Specifically, while executing task movements, the robot performs operations such as scanning, grabbing, and recognition, collecting the current picture in real time. When the robot passes a smart device, it issues one-to-one and one-to-many function requests to the devices in (but not limited to) its scene and takes photographs, recording, for example, that with the bedroom desk lamp on at its current color temperature, brightness, and rotation angle, object recognizability in the picture is low, but that with the desk lamp in the same state and the bedroom ceiling lamp also on, object recognizability is higher. When the image is unclear because of backlight, too-dark light, or overexposure, and cannot be accurately recognized, the robot communicates, according to its position, the environment, and the image, with the smart desk lamp, Bluetooth lamp, or smart plug-in night lamp that needs to cooperate; the device is turned on or adjusted to the previously stored function values (color, color temperature, brightness, on/off; for example, turning on the bedroom desk lamp and ceiling lamp and setting them to the corresponding function values), and the lamp's rotation or illumination angle is moved. If the effect is still not good after the first adjustment, adjustment continues until a good image effect is obtained. For the smart devices known to the server, the labeled photos are added to the algorithm training (the visual recognition or neural network algorithm). If an area has not been passed before, light adjustment is performed by the visual algorithm to arrive at the number of lights to turn on and their function values. When the robot finishes its task and leaves the area, the cooperating smart lamps are restored to their previous state.
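For illustration, a sketch of the adjust-and-retry loop just described; capture(), recognition_score(), and the device handle are illustrative placeholders, and the fixed brightness step is an assumption:

```python
# Illustrative sketch: apply the stored target function values, re-capture,
# and keep nudging brightness until the image effect is good enough.
def adjust_lighting(device, target_params: dict, capture, recognition_score,
                    threshold: float = 0.8, max_attempts: int = 5) -> bool:
    for name, value in target_params.items():    # first pass: stored target values
        device.set_param(name, value)
    brightness = target_params.get("brightness", 50)
    for _ in range(max_attempts):
        if recognition_score(capture()) >= threshold:
            return True                          # good image effect obtained
        brightness = min(100, brightness + 10)   # effect still not good: continue
        device.set_param("brightness", brightness)
    return False                                 # fall back to the visual algorithm
```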
When the control interaction mode corresponding to the scene acquired by the robot is the sound control mode, the acquired adjustable parameters of the smart devices are adjusted toward the target parameters. When the sensing data is speech recognition data, the method comprises: detecting the media devices and noise devices within a certain area around the robot's position; selecting the corresponding sound control mode according to the sound parameters of the media devices and the noise devices; and adjusting their adjustable parameters according to the sound control mode. When the robot detects, in a scene that needs to be quiet, that a smart device is too loud, the control interaction mode can be set to the sound adjustment mode. In this mode the server acquires the adjustable parameters of the media devices and noise devices. The smart television, smart speaker, and other music devices are media devices: their sound is not treated as noise and is adjusted according to the user's requirements. The sweeping robot, washing machine, fan, food processor, and so on are noise devices, whose mode or gear needs to be turned down in quiet mode to reduce noise. Examples include volume control of the smart television or smart speaker; gear, oscillation start/stop, on/off, and left-right air sweeping (wind speed 1-5) of the floor fan; mode and open/close/pause of the food processor; rotation angle, on/off, and day/night mode of the smart camera; on/off of the range hood; and low/high spin speed and open/close/pause of the washing machine. When the robot passes a smart device, it issues one-to-one and one-to-many function requests to the devices in (but not limited to) its scene; the device's sound is stepped from highest to lowest while the robot plays a recording and simultaneously performs speech recognition on it, and the recognized text is compared with the recording to obtain the device volume the robot can accept.
When the control interaction mode corresponding to the scene acquired by the robot is the position adjustment mode, the acquired adjustable parameters of the smart device are adjusted toward the target parameters. In this mode: the smart devices within a certain area around the robot's position are detected; the corresponding control interaction mode is selected according to the device type; control interaction is performed on the device according to that mode; and after the robot finishes its task and leaves the area, the hardware device is restored to its original state. When the smart device is detected to be a movable device and, given the relative positions of the robot and the device, the device lies on the path along which the robot executes its task, the device is movement-controlled: it is moved a first distance away from the robot's task path, and after the robot has executed the task and moved a second distance away, the device is controlled back to its original position. According to the acquired adjustable parameters and the specific position of the device, the robot gives the moving range, direction, and speed required of the movable device (such as a curtain, balance scooter, smart bicycle, sweeping robot, smart vacuum cleaner, smart scooter, drone, or clothes hanger), communicates with the cooperating movable device according to the robot's position and environment, moves it elsewhere, and returns it to its original position when the robot leaves after executing the task.
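For illustration, a sketch of the move-aside-and-restore behavior; the distances, path representation, and motion methods are assumptions:

```python
# Illustrative sketch: move a blocking device a first distance off the task
# path, then send it back once the robot is a second distance past it.
import math

def clear_task_path(device, path_points, first_distance: float = 1.0):
    """Return the device's original position if it had to move, else None."""
    if device.position not in path_points:
        return None                              # not blocking: nothing to do
    original = device.position
    device.move_away(distance=first_distance)    # off the execution path
    return original

def restore_when_clear(robot_pose, device, original, second_distance: float = 2.0):
    if original is not None and math.dist(robot_pose, original) >= second_distance:
        device.move_to(original)                 # robot is clear: return home
```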
When the control interaction mode corresponding to the scene acquired by the robot is the sound adjustment mode, the acquired adjustable parameters of the smart devices are adjusted toward the target parameters. In this mode, control adjustments to the smart devices include, but are not limited to, volume control of the smart television or smart speaker; mode and open/close/pause of the sweeping robot, smart washing machine, or air purifier; gear, oscillation start/stop, on/off, and left-right air sweeping (wind speed 1-5) of the floor fan; mode and open/close/pause of the food processor; rotation angle, on/off, and day/night mode of the smart camera; on/off of the range hood; and low/high spin speed and open/close/pause of the washing machine. When the robot passes a smart device, it issues one-to-one and one-to-many function requests to the devices in (but not limited to) its scene; the device's sound is stepped from highest to lowest while the robot plays a recording and performs speech recognition on it, and the recognized text is compared with the recording to obtain the device volume the robot can accept. When the robot is receiving speech and interference is detected, the volume, gear, or on/off state of the sounding smart device is adjusted according to the robot's position and environment. Alternatively, when visual or audio recognition identifies that a person is within a certain range of the device, the voice interaction mode asks whether the device may be paused. When the robot leaves after its task, the device is returned to its previous state.
When the control interaction mode corresponding to the scene acquired by the robot is the air-conditioning mode, the acquired adjustable parameters (environmental parameters) of the smart devices are adjusted toward the target parameters. In this mode, the adjustable control parameters of the air conditioner are adjusted: the air-conditioning mode (such as automatic, cooling, dehumidifying, heating, or fan), air supply (such as up-down or left-right air sweeping), and wind speed (such as automatic, low, medium, or high); the fan; the humidifier (open/close, wind speed gears 1-5); and the dehumidifier (open/close, low/high wind speed). When the robot's operation is affected by an indoor temperature that is too high or too low, certain smart devices are turned on, off, or paused.
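For illustration, a sketch of pushing measured environmental values toward the mode's reference values; the parameter names and the deadband are assumptions:

```python
# Illustrative sketch: nudge climate devices toward the air-conditioning
# mode's target environmental parameters when the measurement drifts.
def adjust_climate(link, device_id: str, measured: dict, target: dict,
                   deadband: float = 0.5) -> None:
    for name in ("temperature_c", "humidity_pct"):
        if name in measured and name in target:
            if abs(measured[name] - target[name]) > deadband:
                link.set_param(device_id, name, target[name])
```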
In addition, in the combined mode, to suit multiple requirements, target-parameter adjustment control is performed on the adjustable parameters of the smart hardware in every mode contained in the combined mode.
Fig. 3 is a schematic diagram of the smart-device control interaction modes of a robot according to an embodiment of the present disclosure. The control interaction mode comprises at least one of a lighting adjustment mode, a position adjustment mode, an angle adjustment mode, a display adjustment mode, a temperature adjustment mode, a sound control mode, and a combined mode. As the figure shows, several interaction control modes can be combined, such as a lighting mode with an air-conditioning mode, or an air-conditioning mode with a position movement mode, to form a combined mode meeting a variety of requirements; the combined mode carries the adjustable parameters and target parameters of the smart hardware of every mode it contains, and target-parameter adjustment control is performed on those adjustable parameters.
Taking the lighting adjustment mode, the position adjustment mode, and the combined mode in the figure as examples: as shown, the lighting adjustment mode may define different sub-modes 1, 2, ..., n (n a natural number) according to scene and requirements. For example, in a living-room environment, the smart devices corresponding to sub-mode 1 of the lighting adjustment mode are the living-room ceiling lamp, floor lamp, corridor lamp, electric curtain, and so on, adjusted and controlled according to the target parameters corresponding to the best light effect. In a bedroom environment, the smart devices corresponding to sub-modes 2 and 3 of the lighting adjustment mode are the bedroom lamp, desk lamp, night lamp, electric curtain, and so on; sub-mode 2 is set as a daytime mode or sub-mode 3 as a nighttime mode according to the user's state, and the smart devices are adjusted and controlled according to the corresponding target parameters.
Similarly, the position adjustment mode may define different sub-modes 1, 2, ..., n (n a natural number) according to scene and requirements. When the robot moves along a path and encounters a movable smart device on that path, the control interaction mode can be set to the position adjustment mode; the devices to cooperate with may be a sweeping robot, balance scooter, smart bicycle, electric vehicle, vacuum cleaner, scooter, drone, smart curtain, electric clothes hanger, and so on. The robot selects the sub-mode corresponding to the scene and adjusts and controls the smart devices according to the corresponding target parameters.
The other control interaction modes in the embodiments of the present disclosure likewise comprise their own sub-modes 1, 2, ..., n (n a natural number), according to scene and requirements.
The different modes can be combined arbitrarily or according to requirements to form a combined mode, which may include the lighting adjustment mode, the position adjustment mode, or other modes. The robot selects a sub-mode within each relevant mode of the combined mode and adjusts and controls the smart devices according to the corresponding target parameters.
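For illustration, a sketch of applying a combined mode by applying the target parameters of each constituent sub-mode in turn, reusing the illustrative ControlInteractionMode and DeviceLink structures sketched above:

```python
# Illustrative sketch: a combined mode is just the constituent modes applied
# in sequence, each pushing its devices toward its own target parameters.
def apply_combined_mode(modes: list, links: dict) -> None:
    """modes: ControlInteractionMode instances (e.g. lighting + air conditioning);
    links: device_id -> DeviceLink used to reach each device."""
    for mode in modes:
        for device in mode.devices:
            link = links[device.device_id]
            for name, value in mode.target_params.get(device.device_id, {}).items():
                link.set_param(device.device_id, name, value)
```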
Fig. 4 is a schematic diagram of an apparatus for interaction between a robot and a smart device according to another embodiment of the present disclosure. The apparatus comprises: a model generation module 401, a training module 402, a connection module 403, a mode selection module 404, a parameter acquisition module 405, and a control module 406. Wherein:
the model generation module 401 generates an adjustable parameter training model according to the sensing data and the corresponding intelligent device parameters.
In this embodiment, the sensing data is a visually recognizable image, and the model generation module is specifically configured to: acquire a plurality of the visually recognizable images; record the positions and adjustable parameters of the smart devices corresponding to each visually recognizable image; extract the image parameters from the recognizable images; and generate a convolutional neural network training model relating the image parameters to the adjustable parameters.
When the robot takes an image, it obtains over the network the adjustable parameters of the smart hardware in the environment at the moment of shooting, for example the brightness, illumination angle, and position of lighting devices; the temperature, humidity, and mode of the air conditioner; the shading ratio and pull-up position of the curtain; the position, motion direction, and trajectory of the sweeping robot; and the state of each smart switch and smart socket. Each image is labeled with the corresponding smart-device parameters at the time of shooting, forming an image classification annotated with smart-device parameters; convolutional neural network training is performed on these classified data to generate the adjustable-parameter training model of the corresponding smart devices, the visually recognizable image that best meets the user's requirements is selected, and the adjustment parameters of the corresponding smart devices are taken as the target adjustment parameters. Through the training model, a newly collected image can be mapped to the adjustable parameters of the smart hardware at collection time, and the robot can then adjust and control the corresponding adjustable parameters of the smart devices toward the target adjustment parameters.
The training module 402 is configured to obtain different control interaction modes through training of the adjustable-parameter training model.
The training module is specifically configured to: input a plurality of visually recognizable images into the adjustable-parameter training model; perform scene classification on the plurality of visually recognizable images; train and optimize the image parameters of the visually recognizable images in each scene; and output the control interaction mode corresponding to the visually recognizable images.
The robot labels the training data by pairing each collected visual image with the adjustable parameters of the device group recorded at collection time and, according to the collected image and the type of the associated smart device, trains the corresponding control interaction mode, such as a lighting adjustment mode, a position adjustment mode, an angle adjustment mode, a display adjustment mode, a temperature adjustment mode, a sound control mode, or a combined mode, which this disclosure does not limit. Each mode corresponds to its own device parameters and has one set of target parameters: the smart-device parameters recorded under the optimal condition of the scene. The target parameters are the reference smart-device parameters preset in the corresponding control interaction mode, and the adjustable parameters collected by the robot are the smart-device parameters corresponding to the scene the robot is in.
The connection module 403 is configured to establish a network connection between the robot and the smart device.
A network connection is established between the robot and the smart device through at least one of a WiFi wireless network, Bluetooth, a Zigbee gateway, and a multi-mode network. Through this connection the robot obtains control rights over the relevant smart devices in the network, which it uses to adjust and control the devices' parameters.
The mode selection module 404 is configured to select the corresponding smart device and control interaction mode according to the sensing-data parameters of the scene the robot is in.
The mode selection module is specifically configured to: collect the current sensing data of the scene in real time; extract the current sensing-data parameters; compare the sensing-data parameters with a plurality of stored sensing-data parameters and select the consistent sensing data; and select the smart device and control interaction mode corresponding to the consistent sensing data.
The parameter obtaining module 405 is configured to obtain a target parameter and an adjustable parameter of the smart device in the control interaction mode.
The control interaction mode comprises a position and adjustable parameters of at least one intelligent device in a corresponding scene, target parameters of the intelligent device are reference intelligent device parameters preset in the corresponding control interaction mode, and the adjustable parameters are the intelligent device parameters corresponding to the scene to which the robot belongs.
Each mode corresponds to different device parameters and carries one set of target parameters, namely the intelligent device parameters set under the optimal scene condition. These target parameters are the reference intelligent device parameters preset in the corresponding control interaction mode, while the adjustable parameters acquired by the robot are the intelligent device parameters corresponding to the scene to which the robot belongs.
The control module 406 is configured to adjust and control the intelligent device according to the adjustable parameter and using a target parameter as a reference.
When the control interaction mode acquired for the scene to which the robot belongs is the lighting adjustment mode, the position adjustment mode, the sound adjustment mode or the air conditioning mode, the adjustable parameters acquired from the corresponding intelligent devices (environmental parameters, in the case of the air conditioning mode) are adjusted toward the target parameters of that mode.
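As a sketch of this adjust-and-control step, the loop below moves each acquired adjustable parameter toward the selected mode's target parameter. set_parameter() is a hypothetical placeholder for the real device-control call over the network.

def set_parameter(device, name, value):
    print(f"[{device}] {name} -> {value}")  # placeholder for a network control call

def adjust_to_targets(mode_targets, adjustable):
    """mode_targets: the mode's reference parameters; adjustable: parameters read from the scene."""
    for (device, name), target in mode_targets.items():
        current = adjustable.get((device, name))
        if current != target:  # only touch parameters that deviate from the target
            set_parameter(device, name, target)

adjust_to_targets(
    {("lamp", "brightness"): 0.7, ("ac", "temp_c"): 24.0},  # target parameters
    {("lamp", "brightness"): 0.3, ("ac", "temp_c"): 24.0},  # acquired adjustable parameters
)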
In addition, in the combination mode, in order to adapt to multiple demands at once, target-parameter adjustment control is performed on the adjustable parameters of the intelligent hardware in each mode contained in the combination mode.
When the robot leaves after performing its task, or detects that no human body is present in the environment, the intelligent devices are restored to their previous state.
The device further comprises:
a position movement recovery module configured to: detect intelligent devices within a certain area around the robot's position; select a corresponding control interaction mode according to the type of each intelligent device; perform control interaction on the intelligent devices according to that control interaction mode; and, after the robot completes the task and leaves the area, restore the hardware devices to their original state.
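A minimal sketch of the restore-on-leave behaviour, assuming the robot snapshots each device's parameters before touching them and replays the snapshot once the task in the area is finished; the device handles are invented placeholders.

saved_state = {}

def adjust_device(device, params, new_values):
    saved_state[device] = dict(params)  # remember the state before adjustment
    params.update(new_values)

def restore_all(devices):
    for device, params in devices.items():
        if device in saved_state:
            params.clear()
            params.update(saved_state.pop(device))  # back to the original state

lamp = {"brightness": 0.3}
adjust_device("lamp", lamp, {"brightness": 0.7})
# ... the robot performs its task in the area ...
restore_all({"lamp": lamp})
print(lamp)  # {'brightness': 0.3}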
The position movement recovery module is further configured to:
When an intelligent device is detected to be a movable device and, according to the relative positions of the robot and the device, the device lies on the path along which the robot executes its task, movement control is performed on it: the device is controlled to move a first distance away from the robot's task path, and after the robot has executed the task and moved a second distance away, the device is controlled back to its original position.
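The movable-device handling can be sketched with the geometry simplified to one dimension along the robot's path; the first and second distances and the device representation are invented for illustration.

FIRST_DISTANCE = 1.0   # metres the device is moved off the path (assumed)
SECOND_DISTANCE = 2.0  # metres the robot must clear before the device returns (assumed)

def handle_movable_device(device_pos_on_path, robot_progress, device):
    if device["moved"]:
        # Robot has passed: send the device back once it is a second distance away.
        if robot_progress - device_pos_on_path >= SECOND_DISTANCE:
            device["offset"] = 0.0
            device["moved"] = False
    elif device_pos_on_path >= robot_progress:
        # Device still lies ahead on the task path: move it a first distance off.
        device["offset"] = FIRST_DISTANCE
        device["moved"] = True

vacuum = {"offset": 0.0, "moved": False}
for progress in [0.0, 1.5, 3.0, 5.5]:  # robot advancing along its path
    handle_movable_device(3.0, progress, vacuum)
    print(progress, vacuum)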
The apparatus shown in fig. 4 can perform the method of the embodiment shown in fig. 1; for the parts of this embodiment not described in detail, reference may be made to the related description of the embodiment shown in fig. 1. For the implementation process and technical effects of this technical solution, likewise refer to the description in the embodiment shown in fig. 1; they are not repeated here.
Referring now to fig. 5, shown is a schematic diagram of an electronic device 500 suitable for use in implementing another embodiment of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player) and a vehicle terminal (e.g., a car navigation terminal), and stationary terminals such as a digital TV and a desktop computer. The electronic device shown in fig. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a communication line 504. An input/output (I/O) interface 505 is also connected to the communication line 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: the interaction method in the above embodiment is performed.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the preceding first aspects.
According to one or more embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium characterized by storing computer instructions for causing a computer to perform the method of any of the preceding first aspects.
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features with similar functions disclosed in this disclosure.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (13)

1. A method for a robot to interact with a smart device, comprising:
generating an adjustable parameter training model according to the sensing data and the corresponding intelligent equipment parameters;
training through the adjustable parameter training model to obtain different control interaction modes;
establishing network connection between the robot and the intelligent equipment;
selecting corresponding intelligent equipment and a control interaction mode according to the sensing data parameters of the scene to which the robot belongs;
acquiring target parameters and adjustable parameters of the intelligent equipment in the control interaction mode;
and adjusting and controlling the intelligent equipment by taking the target parameter as a reference according to the adjustable parameter.
2. The method of claim 1, wherein generating an adjustable parameter training model from the sensed data and corresponding smart device parameters comprises:
acquiring a plurality of sensing data through a sensor on the robot;
recording the position and adjustable parameters of the intelligent equipment corresponding to the sensing data;
extracting sensing data parameters in the sensing data;
and generating a convolutional neural network training model of the sensing data parameters and the adjustable parameters.
3. The method of claim 1, wherein the training by the adjustable parameter training model results in different control interaction patterns, comprising:
inputting a plurality of sensing data in an adjustable parameter training model;
performing scene classification on the plurality of sensing data;
training and optimizing sensing data parameters of sensing data in each scene;
and outputting a control interaction mode corresponding to the sensing data.
4. The method of claim 1, wherein selecting corresponding smart devices and control interaction modes according to the sensed data parameters of the scene to which the robot belongs comprises:
the robot collects the current sensing data of the scene in real time;
extracting the current sensing data parameters;
comparing the sensing data parameters with a plurality of stored sensing data parameters, and selecting consistent sensing data;
and selecting the intelligent equipment and control interaction mode corresponding to the consistent sensing data.
5. The method according to claim 1, wherein the control interaction mode comprises a position and an adjustable parameter of at least one smart device in a corresponding scene, the target parameter of the smart device is a reference smart device parameter preset in the corresponding control interaction mode, and the adjustable parameter is the smart device parameter corresponding to the scene to which the robot belongs.
6. The method of claim 1, further comprising:
detecting intelligent equipment within a certain area around the position of the robot;
selecting a corresponding control interaction mode according to the type of the intelligent equipment;
performing control interaction on the intelligent equipment according to the control interaction mode;
and after the robot completes the task and leaves the area, restoring the hardware equipment to the original state.
7. The method of claim 6, further comprising:
when the intelligent equipment is detected to be movable equipment, performing movement control on the intelligent equipment if, according to the relative positions of the robot and the intelligent equipment, the intelligent equipment is located on the path along which the robot executes its task;
controlling the intelligent equipment to move a first distance away from the path along which the robot executes its task;
and after the robot has executed the task and moved a second distance away, controlling the intelligent equipment back to its original position.
8. The method of claim 1, wherein when the sensory data is a visually recognizable image, the method comprises:
detecting lighting equipment within a certain area around the position of the robot;
selecting a corresponding illumination adjustment mode according to the image parameters of the illumination equipment;
and adjusting the adjustable parameters of the lighting equipment according to the lighting adjustment mode.
9. The method of claim 1, wherein when the sensed data is voice recognition data, the method comprises:
detecting media equipment and noise equipment within a certain area around the position of the robot;
selecting a corresponding sound control mode according to the sound parameters of the media equipment and the noise equipment;
and adjusting the adjustable parameters of the media device and the noise device according to the sound control mode.
10. The method of claim 1, wherein the smart device comprises at least one of a lighting device, a household appliance, a smart terminal, a smart home device, a smart transportation device, and a smart robot.
11. The method of claim 1, wherein the control interaction mode comprises at least one of a lighting adjustment mode, a position adjustment mode, an angle adjustment mode, a display adjustment mode, an air conditioning mode, a voice control mode, and a combination mode.
12. An apparatus for interaction between a robot and a smart device, comprising:
the model generation module is used for generating an adjustable parameter training model according to the sensing data and the corresponding intelligent equipment parameters;
the training module is used for obtaining different control interaction modes through training of the adjustable parameter training model;
the connection module is used for establishing network connection between the robot and the intelligent equipment;
the mode selection module is used for selecting corresponding intelligent equipment and controlling an interaction mode according to the sensing data parameters of the scene to which the robot belongs;
the parameter acquisition module is used for acquiring target parameters and adjustable parameters of the intelligent equipment in the control interaction mode;
and the control module is used for adjusting and controlling the intelligent equipment by taking the target parameter as a reference.
13. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor configured to execute the computer-readable instructions to cause the electronic device to implement the method according to any one of claims 1-11.
CN202111193475.3A 2021-10-13 2021-10-13 Method and device for interaction between robot and intelligent equipment and electronic equipment Pending CN113900384A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111193475.3A CN113900384A (en) 2021-10-13 2021-10-13 Method and device for interaction between robot and intelligent equipment and electronic equipment

Publications (1)

Publication Number Publication Date
CN113900384A CN113900384A (en) 2022-01-07

Family

ID=79191986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111193475.3A Pending CN113900384A (en) 2021-10-13 2021-10-13 Method and device for interaction between robot and intelligent equipment and electronic equipment

Country Status (1)

Country Link
CN (1) CN113900384A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446156A (en) * 2015-12-30 2016-03-30 百度在线网络技术(北京)有限公司 Method, device and system for controlling household electric appliance based on artificial intelligence
US20170144304A1 (en) * 2014-09-02 2017-05-25 The Johns Hopkins University System and method for flexible human-machine collaboration
CN108302723A (en) * 2018-02-06 2018-07-20 北京智能管家科技有限公司 Adjusting method, device and the storage medium of indoor air quality
US20190222432A1 (en) * 2016-08-27 2019-07-18 Beijing Vrv Software Corporation Limited Smart household control method, apparatus and system
CN110716444A (en) * 2019-11-21 2020-01-21 三星电子(中国)研发中心 Sound control method and device based on smart home and storage medium
CN112102515A (en) * 2020-09-14 2020-12-18 深圳优地科技有限公司 Robot inspection method, device, equipment and storage medium
CN112378056A (en) * 2020-11-18 2021-02-19 珠海格力电器股份有限公司 Intelligent air conditioner control method and device, computer equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114364099A (en) * 2022-01-13 2022-04-15 达闼机器人有限公司 Method for adjusting intelligent lighting equipment, robot and electronic equipment
CN114364099B (en) * 2022-01-13 2023-07-18 达闼机器人股份有限公司 Method for adjusting intelligent light equipment, robot and electronic equipment

Similar Documents

Publication Publication Date Title
US10678108B2 (en) Electrochromic filtering in a camera
US20210216787A1 (en) Methods and Systems for Presenting Image Data for Detected Regions of Interest
US11595579B2 (en) Systems and methods for automatic exposure in high dynamic range video capture systems
JP2022111133A (en) Image processing device and control method for the same
US20170195561A1 (en) Automated processing of panoramic video content using machine learning techniques
CN108388142A (en) Methods, devices and systems for controlling home equipment
CN104247388B (en) Self-propelled electronic equipment, terminal installation and the operating system with remote control electronic equipment
CN105223826B (en) Home equipment control method, apparatus and system
WO2023134743A1 (en) Method for adjusting intelligent lamplight device, and robot, electronic device, storage medium and computer program
JP7233162B2 (en) IMAGING DEVICE AND CONTROL METHOD THEREOF, PROGRAM, STORAGE MEDIUM
CN109479115A (en) Information processing unit, information processing method and program
US11483451B2 (en) Methods and systems for colorizing infrared images
CN102685207A (en) Intelligent photographic method based on cloud service and cloud service equipment
CN106737724A (en) A kind of family's social interaction server humanoid robot system
EP3398029B1 (en) Intelligent smart room control system
EP3622724A1 (en) Methods and systems for presenting image data for detected regions of interest
KR20170075625A (en) Method and system for controlling an object
WO2019065454A1 (en) Imaging device and control method therefor
CN113900384A (en) Method and device for interaction between robot and intelligent equipment and electronic equipment
JP2023057157A (en) Image capturing apparatus, method for controlling the same, and program
US11463617B2 (en) Information processing apparatus, information processing system, image capturing apparatus, information processing method, and memory
CN111158258A (en) Environment monitoring method and system
US20210152750A1 (en) Information processing apparatus and method for controlling the same
WO2016201357A1 (en) Using infrared images of a monitored scene to identify false alert regions
CN115567778A (en) Automatic focusing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination