CN113710133B - Intelligent terminal, control system and interaction method with mobile robot - Google Patents

Info

Publication number
CN113710133B
Authority
CN
China
Prior art keywords
target area
mobile robot
robot
coordinate system
intelligent terminal
Prior art date
Legal status
Active
Application number
CN201980094943.6A
Other languages
Chinese (zh)
Other versions
CN113710133A (en)
Inventor
Inventor not disclosed
Current Assignee
Shanghai Akobert Robot Co ltd
Shenzhen Akobot Robot Co ltd
Original Assignee
Shanghai Akobert Robot Co ltd
Shenzhen Akobot Robot Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Akobert Robot Co ltd, Shenzhen Akobot Robot Co ltd filed Critical Shanghai Akobert Robot Co ltd
Publication of CN113710133A publication Critical patent/CN113710133A/en
Application granted granted Critical
Publication of CN113710133B publication Critical patent/CN113710133B/en

Classifications

    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L 9/00 - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 - Control of position or course in two dimensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides an intelligent terminal, a control system, and an interaction method with a mobile robot. The method first detects a user's input while a display device previews a physical space interface, then creates at least one target area in the previewed physical space interface in response to the detected input, and finally generates an interaction instruction based on the at least one target area and sends it to the mobile robot, so that the mobile robot performs navigation movement and behavior control based on the accurate and well-defined target area created at the intelligent terminal.

Description

Intelligent terminal, control system and interaction method with mobile robot
Technical Field
The application relates to the technical field of mobile robot interaction, in particular to an intelligent terminal, a control system and an interaction method with a mobile robot.
Background
A mobile robot relies on a pre-constructed map for navigation movement and behavior control during operation. In the prior art, when a user wants the robot to perform, or refrain from performing, a predetermined operation in a target area, a specified position is usually determined from the user's voice, gesture, or similar instructions, and the mobile robot then determines a target area as a preset range centered on that position; alternatively, the user has the mobile robot determine the target area by editing the map that the mobile robot previously constructed.
However, in practical applications, users often want the mobile robot to perform predetermined navigation movement or behavior control based on a precise target area. For mobile robots such as cleaning robots and inspection robots, a precise preset range cannot be determined from such user instructions; in particular, when the target area is irregular, the user cannot accurately describe it with voice or gesture commands, and the mobile robot cannot accurately determine the precise target area. Although an accurate preset range can be determined by having the user edit the map, the map constructed by the mobile robot is not intuitive to the user, and the user cannot readily determine where in the robot map the target area of the actual physical space lies.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present application is to provide an intelligent terminal, a control system, and an interaction method with a mobile robot, so as to solve the problems in the prior art that the mobile robot cannot determine an accurate target area from a user instruction, and that, when editing the mobile robot's map, the user cannot readily determine where in the robot map the target area of the actual physical space lies.
To achieve the above and other related objects, a first aspect of the present application provides a method for interacting with a mobile robot, for a smart terminal including at least a display device, comprising the steps of: detecting an input of a user in a state where the display apparatus previews a physical space interface; creating at least one target area in the previewed physical space interface in response to the detected input; the target area comprises coordinate information of a terminal coordinate system of the intelligent terminal, and the coordinate information has a corresponding relation with coordinate information in a robot coordinate system of the mobile robot; and generating an interaction instruction to be sent to the mobile robot based on the at least one target area.
In certain embodiments of the first aspect of the present application, the step of detecting the user's input in a state where the display device previews the physical space interface comprises: displaying a video stream shot by a camera device of the intelligent terminal in real time in a physical space interface previewed by the display device; and detecting the input of a user in the physical space interface by using an input device of the intelligent terminal.
In certain embodiments of the first aspect of the present application, the detected input comprises at least one of: slide input operation, click input operation.
In certain embodiments of the first aspect of the present application, the step of detecting the user's input in a state where the display device previews the physical space interface comprises: displaying a video stream shot by a camera device of the intelligent terminal in real time in a physical space interface previewed by the display device; and detecting a mobile sensing device in the intelligent terminal to obtain the input of the user.
In certain embodiments of the first aspect of the present application, further comprising: and constructing the terminal coordinate system in a state of previewing the physical space interface so as to respond to the detected input in a state of finishing constructing the terminal coordinate system.
In certain embodiments of the first aspect of the present application, the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the smart terminal; or coordinate information in a robot coordinate system of the mobile robot is prestored in a cloud server connected with the intelligent terminal network; or the coordinate information in the robot coordinate system of the mobile robot is prestored in the mobile robot connected with the intelligent terminal network.
In certain embodiments of the first aspect of the present application, the method of interacting further comprises: determining the corresponding relation based on the coordinate information of the consensus elements extracted from the previewed physical space interface under the robot coordinate system and the coordinate information under the terminal coordinate system respectively; and determining coordinate information of the at least one target area in a robot coordinate system of the mobile robot based on the corresponding relation.
In certain embodiments of the first aspect of the present application, the step of generating an interaction instruction to send to the mobile robot based on the at least one target area comprises: generating an interaction instruction comprising the at least one target area described with the coordinate information in the robot coordinate system to send to the mobile robot.
In certain embodiments of the first aspect of the present application, the step of generating an interaction instruction to send to the mobile robot based on the at least one target area comprises: generating an interactive instruction containing the at least one target area and consensus elements associated with creating the at least one target area for transmission to the mobile robot; wherein the consensus element is used to determine a coordinate position of the at least one target area in the robot coordinate system.
In certain embodiments of the first aspect of the present application, the method further comprises at least one of the following steps: prompting the user to perform an input operation by using the physical space interface; prompting the user to perform an input operation by using sound; or prompting the user to perform an input operation by using vibration.
In certain embodiments of the first aspect of the present application, in the step of detecting the user's input in the state where the display device previews the physical space interface, the detected input of the user is taken as a first input, and the method further comprises: detecting a second input of the user, and generating an interaction instruction to be sent to the mobile robot based on the target area and the second input.
In certain embodiments of the first aspect of the present application, the second input comprises any one of: cleaning or not cleaning the target area, entering or not entering the target area, sorting or not sorting items within the target area.
The second aspect of the present application further provides an intelligent terminal, including: the display device is used for providing preview operation for a physical space interface; storage means for storing at least one program; the interface device is used for carrying out communication interaction with a mobile robot; processing means, connected to the display means, the storage means and the interface means, for executing the at least one program to coordinate the display means, the storage means and the interface means to perform the interaction method according to any one of the first aspect of the present application.
A third aspect of the present application further provides a server, including: storage means for storing at least one program; the interface device is used for assisting an intelligent terminal and the mobile robot to carry out communication interaction; the processing device is connected with the storage device and the interface device and used for executing the at least one program so as to coordinate the storage device and the interface device to execute the following interaction method: acquiring at least one target area from the intelligent terminal; the target area is obtained by detecting user input through the intelligent terminal, the target area comprises coordinate information of a terminal coordinate system of the intelligent terminal, and the coordinate information has a corresponding relation with coordinate information in a robot coordinate system of the mobile robot; generating an interaction instruction based on the at least one target area to send to the mobile robot through the interface device.
In certain embodiments of the third aspect of the present application, the storage means is pre-stored with the robot coordinate system; or the processing device acquires the robot coordinate system from the intelligent terminal or the mobile robot through an interface device.
In certain embodiments of the third aspect of the present application, the processing device further obtains, through an interface device, a video stream captured by the smart terminal; the processing device determines the corresponding relation based on the coordinate information of the consensus elements provided by the video stream under the robot coordinate system and the coordinate information under the terminal coordinate system respectively; and determining coordinate information of the at least one target area in a robot coordinate system of the mobile robot based on the correspondence.
In certain embodiments of the third aspect of the present application, the step of the processing device generating an interaction instruction based on at least one target area to send to the mobile robot comprises: generating an interaction instruction comprising the at least one target area described with the coordinate information in the robot coordinate system for transmission to the mobile robot through the interface device.
In certain embodiments of the third aspect of the present application, the step of generating an interaction instruction to send to the mobile robot based on the at least one target area comprises: acquiring a consensus element from the intelligent terminal related to the creation of the at least one target area; wherein the consensus element is used to determine a coordinate position of the at least one target area in the robot coordinate system; generating an interaction instruction comprising the at least one target area and the consensus element for transmission to the mobile robot through the interface device.
In certain embodiments of the third aspect of the present application, the processing device further obtains a second input from the smart terminal through the interface device, and the processing device further performs generating an interaction instruction based on the target area and the second input to send to the mobile robot.
In certain embodiments of the third aspect of the present application, the second input comprises any one of: cleaning or not cleaning the target area, entering or not entering the target area, and sorting or not sorting items within the target area.
A fourth aspect of the present application also provides a mobile robot including: a storage device for storing at least one program and a robot coordinate system constructed in advance; the interface device is used for carrying out communication interaction with an intelligent terminal; the execution device is used for controlling to execute corresponding operations; the processing device is connected with the storage device, the interface device and the execution device and is used for executing the at least one program so as to coordinate the storage device and the interface device to execute the following interaction method: acquiring an interactive instruction from the intelligent terminal; wherein the interactive instruction comprises at least one target area; the target area is obtained by detecting user input through the intelligent terminal, the target area comprises coordinate information of a terminal coordinate system of the intelligent terminal, and the coordinate information has a corresponding relation with coordinate information in a robot coordinate system; controlling the execution device to execute the operation related to the at least one target area.
In some embodiments of the fourth aspect of the present application, the processing device provides the robot coordinate system of the mobile robot to the smart terminal or a cloud server through the interface device for obtaining the interaction instruction.
In certain embodiments of the fourth aspect of the present application, the step of the processing device performing an operation related to the at least one target area comprises: parsing the interaction instruction to obtain at least the at least one target area described with coordinate information in the robot coordinate system; and controlling the execution device to execute the operation related to the at least one target area.
In certain embodiments of the fourth aspect of the present application, the processing device further obtains, via the interface device, a consensus element from the smart terminal associated with creating the at least one target area; wherein the consensus element is used to determine a coordinate position of the at least one target area in the robot coordinate system; the processing device further performs the steps of: determining the corresponding relation based on the coordinate information of the consensus elements in the robot coordinate system and the coordinate information of the consensus elements in the terminal coordinate system respectively; and determining coordinate information of the at least one target area in a robot coordinate system of the mobile robot based on the correspondence.
In certain embodiments of the fourth aspect of the present application, the processing device further obtains a second input from the smart terminal through the interface device, and the processing device further performs controlling the executing device to perform an operation related to the at least one target area based on the second input.
In certain embodiments of the fourth aspect of the present application, the second input comprises any one of: cleaning or not cleaning the target area, the cleaning intensity for the target area, entering or not entering the target area, and sorting or not sorting items within the target area.
In certain embodiments of the fourth aspect of the present application, the performing means comprises a mobile device, and the processing means generates a navigation route associated with the at least one target area based on the second input and controls the mobile device to perform a navigation movement based on the navigation route.
In certain embodiments of the fourth aspect of the present application, the performing means comprises a cleaning device, and the processing means controls a cleaning operation of the cleaning device within the at least one target area based on the second input.
In certain embodiments of the fourth aspect of the present application, the mobile robot comprises: cleaning robot, inspection robot, transfer robot.
The fifth aspect of the present application also provides a control system of a mobile robot, including: a smart terminal as described in the second aspect of the present application; a mobile robot as claimed in any one of the fourth aspects of the present application.
A sixth aspect of the present application also provides a computer-readable storage medium storing at least one program which, when invoked, executes and implements the interaction method as described in any one of the first aspects of the present application.
As described above, with the intelligent terminal, the control system, and the interaction method with a mobile robot of the present application, an intelligent terminal having positioning and mapping capability and a display device detects the user's input and creates at least one target area. By matching the target area, described with the intelligent terminal's coordinate information, to the mobile robot's map on at least one side of the intelligent terminal, the server, or the mobile robot, the mobile robot can perform, or refrain from performing, a predetermined operation in an accurate target area of its own map. This improves the accuracy with which the mobile robot determines the user-specified target area during human-machine interaction, and reduces the difficulty of locating the target area in the map when the user edits the mobile robot's map.
Drawings
Fig. 1 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present disclosure.
Fig. 2 is a flow chart illustrating an interaction method with a mobile robot according to an embodiment of the present disclosure.
Fig. 3a is a schematic diagram illustrating a target area created by the smart terminal of the present application in the previewed physical space interface according to an embodiment.
Fig. 3b is a schematic diagram of a target area created by the smart terminal of the present application in the previewed physical space interface in another embodiment.
Fig. 3c is a schematic diagram illustrating a target area created in the previewed physical space interface by the smart terminal according to the present application in another embodiment.
Fig. 4 shows a schematic diagram of a coordinate system established for the smart terminal of the present application in one embodiment.
Fig. 5 is a schematic diagram of a virtual key of the smart terminal according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a network architecture for interaction among the intelligent terminal, the server and the mobile robot according to the present application.
Fig. 7 is a schematic flow chart of an interaction method with a mobile robot according to another embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present disclosure.
Fig. 9 is a schematic structural diagram of a mobile robot according to an embodiment of the present invention.
Fig. 10 is a flowchart illustrating an interaction method according to another embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is provided for illustrative purposes, and other advantages and capabilities of the present application will become apparent to those skilled in the art from the present disclosure.
In the following description, reference is made to the accompanying drawings that describe several embodiments of the application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present disclosure. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. Spatially relative terms, such as "upper," "lower," "left," "right," "below," "above," and the like, may be used herein to facilitate describing one element or feature's relationship to another element or feature as illustrated in the figures.
Although the terms first, second, etc. may be used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first input may be referred to as a second input, and similarly, a second input may be referred to as a first input, without departing from the scope of the various described embodiments. The first input and the second input both describe an input, but they are not the same input unless the context clearly indicates otherwise.
Also, as used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition will occur only when a combination of elements, functions, steps or operations are inherently mutually exclusive in some way.
In practical applications, if a user needs the mobile robot to perform a predetermined operation based on a target area, a designated position is usually determined from the user's voice, gesture, or similar instructions, and the mobile robot then determines the target area as a preset range centered on that position. However, the mobile robot cannot determine an accurate preset range from such an instruction, and in particular cannot accurately determine the target area when it is irregular, so the mobile robot cannot perform navigation movement or behavior control based on an accurate target area. Taking the cleaning robot as an example, a user who wants the area around a dining table cleaned usually issues an instruction containing "dining table", and the mobile robot typically only cleans a target area determined as a preset range centered on the dining table. If, in practice, the user wants the cleaning robot to clean an irregular region that cannot be accurately described with voice or gesture commands, the cleaning robot cannot effectively clean the precise target area.
To allow the mobile robot to perform navigation movement and behavior control based on a precise target area, the user can have the mobile robot determine the coordinate information of the target area in its map by editing the map that the mobile robot constructed in advance. However, in practical applications, the map constructed by the mobile robot is not intuitive and is difficult for the user to interpret, and the user cannot readily determine where in the mobile robot map the target area of the actual physical space lies.
Therefore, the application provides an intelligent terminal, a control system and an interaction method with a mobile robot, which are used for creating at least one target area on the intelligent terminal based on input of a user so that the intelligent terminal can generate an interaction instruction sent to the mobile robot based on the at least one target area.
The mobile robot is a machine that performs specific work automatically. It can accept human commands, run pre-programmed routines, or act according to principles formulated with artificial intelligence technology. The mobile robot can be used indoors or outdoors, in industry, business, or the home; it can take the place of people in security patrol, reception, or floor cleaning, and can also serve for family companionship, office assistance, and the like. The mobile robot is provided with at least one camera device for capturing images of its operating environment, so as to perform VSLAM (Visual Simultaneous Localization and Mapping); based on the constructed map, the mobile robot can plan routes for work such as patrol, cleaning, and tidying. Generally, the mobile robot caches the map constructed during its operation in a local storage space, uploads it to a server or the cloud for storage, or uploads it to the user's intelligent terminal for storage.
The interaction method with the mobile robot is used for an intelligent terminal at least comprising a display device. Referring to fig. 1, fig. 1 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present disclosure. As shown in the figure, the smart terminal includes a display device 11, a storage device 12, an interface device 13, a processing device 14, an imaging device (not shown), and the like. For example, the smart terminal may be a smart phone, AR glasses, a tablet computer, or the like.
The display device 11 is a human-machine interface device used to provide a physical space interface for the user to preview. The display device 11 can convert the coordinate information or other data of the intelligent terminal's map into text, numbers, symbols, or visual images for display. An input device or a motion-sensing device can be used to feed user input or data into the intelligent terminal, and the processing device 14 of the intelligent terminal can add, delete, or change the displayed content at any time. Depending on the display element, the display device 11 may be a plasma, liquid crystal, light-emitting diode, or cathode-ray-tube display, among others. The physical space interface of the display device 11 shows the user the image of the actual physical space captured by the camera device of the intelligent terminal, thereby presenting a visual view of the physical space. For example, if the camera of the smart terminal is capturing a pile of rice scattered on the kitchen floor, the physical space interface displays an image containing the scattered rice and the kitchen floor. The physical space is the actual space in which the mobile robot works; for example, if the mobile robot is a cleaning robot, the physical space may be the living or working space that the user needs the cleaning robot to clean.
The storage means 12 is for storing at least one program. Wherein the at least one program is operable to cause the processing device to perform the interaction method described herein. The storage means 12 also stores coordinate information in the robot coordinate system of the mobile robot.
Here, the storage device 12 includes, but is not limited to: read-only memory (ROM), random access memory (RAM), and non-volatile RAM (NVRAM). For example, the storage device 12 includes a flash memory device or other non-volatile solid-state storage device. In certain embodiments, the storage device 12 may also include memory remote from the one or more processing devices, such as memory attached to a network that is accessed via RF circuitry or an external port and a communication network, which may be the Internet, one or more intranets, local area networks (LANs), wide area networks (WANs), storage area networks (SANs), etc., or a suitable combination thereof. The storage device 12 also includes a memory controller that controls access to the memory by components of the intelligent terminal such as the central processing unit (CPU) and the interface device 13, or other components.
The interface device 13 is used for communication interaction with a mobile robot or a server. For example, the interface device 13 may send the interaction instruction generated by the intelligent terminal to a server or the mobile robot. For another example, the interface device 13 sends an instruction to the mobile robot or the server to obtain coordinate information in the mobile robot coordinate system. The interface device 13 includes a network interface, a data line interface, and the like. Wherein the network interfaces include, but are not limited to: network interface devices based on ethernet, network interface devices based on mobile networks (3G, 4G, 5G, etc.), network interface devices based on short-range communication (WiFi, bluetooth, etc.), and the like. The data line interface includes, but is not limited to: USB interface, RS232, etc. The interface device 13 is connected with the display device 11, the storage device 12, the processing device 14, the internet, the mobile robot located in a physical space, the server and the like.
The processing device 14 is connected to the display device 11, the storage device 12 and the interface device 13, and is configured to execute the at least one program to coordinate the display device 11, the storage device 12 and the interface device 13 to perform the interaction method described herein. The processing device 14 includes one or more processors. The processing device 14 is operable to perform data read and write operations with the storage device 12. The processing device 14 performs processing such as extracting images, temporarily storing features, and locating in a map based on features. The processing device 14 includes one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), or any combination thereof. The processing device 14 is also operatively coupled with an input device that enables a user to interact with the intelligent terminal. Thus, the input device may include buttons, a keyboard, a mouse, a touch pad, and the like.
The camera device is used for capturing images in an actual physical space in real time, and includes but is not limited to: monocular camera devices, binocular camera devices, multi-view camera devices, depth camera devices, and the like.
Referring to fig. 2, fig. 2 is a flowchart illustrating an interaction method with a mobile robot according to an embodiment of the present disclosure. In an embodiment, the interaction method is used for interaction between an intelligent terminal and a mobile robot, and the intelligent terminal is provided with a display device.
In step S110, an input of the user is detected in a state where the display device previews a physical space interface. The state of previewing the physical space interface means that the physical space interface of the display device can display, in real time, the image of the actual physical space captured by the camera device of the intelligent terminal for the user to view and use. When the display device is in this state, the user can view the image captured by the intelligent terminal in real time, and can therefore relate the visual image shown in the physical space interface to areas and positions in the actual physical space. Taking a smartphone as an example of the intelligent terminal, after the user enters the AR application of the phone, the image of the actual physical space captured by the phone is displayed in real time on the AR application interface, so that the user can immediately relate the displayed image to areas of the actual physical space. Before step S110 is executed, the method further includes a step in which the intelligent terminal constructs the terminal coordinate system in the state of previewing the physical space interface, so as to respond to the detected input once construction of the terminal coordinate system is complete. In an embodiment, the intelligent terminal first constructs a map corresponding to the actual physical space in the state of previewing the physical space interface and stores the coordinate information corresponding to the map. The terminal coordinate system is constructed to describe the coordinate information corresponding to the intelligent terminal's map. The coordinate information includes: a positioning feature, the coordinates of the positioning feature in the map, and the like. The positioning features include, but are not limited to: feature points, feature lines, etc.
In one embodiment, the camera device of the intelligent terminal continuously captures images of the actual physical space while the intelligent terminal moves, and the intelligent terminal constructs a map based on these captured images and its own positions during the movement. The map constructed by the intelligent terminal describes the positions and occupied ranges, in the map, of objects in the actual physical space. The actual physical space covered by the map constructed by the intelligent terminal must overlap with that covered by the map constructed by the mobile robot in order to execute steps S110 to S130. For example, if the mobile robot map corresponds to several positioning features, at least one positioning feature corresponding to the intelligent terminal map should be the same as a positioning feature corresponding to the mobile robot map.
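To make the terminal-side map described above concrete, the following is a minimal sketch, in Python, of how positioning features and their coordinates might be stored under the terminal coordinate system; the class and field names are illustrative assumptions and are not prescribed by this application.

```python
# Illustrative sketch only: a terminal-side map holding positioning features
# and their coordinates in the terminal coordinate system. Names and fields
# are assumptions for explanation, not the actual implementation.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PositioningFeature:
    feature_id: str
    descriptor: bytes             # compact description of local appearance
    xy: Tuple[float, float]       # coordinates in the terminal coordinate system

@dataclass
class TerminalMap:
    features: List[PositioningFeature] = field(default_factory=list)

    def add_feature(self, f: PositioningFeature) -> None:
        self.features.append(f)
```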
The step S110 further includes a step of prompting the user to start the input operation after the intelligent terminal constructs the map.
In one embodiment, the physical space interface is used to prompt the user to perform an input operation. For example, after the map is constructed, the display device of the intelligent terminal displays text such as "please perform the input operation" or displays a preset graphic to prompt the user that the intelligent terminal is ready for input.
In another embodiment, the user is prompted to perform an input operation with sound. For example, the audio device of the smart terminal may play a prompt such as "please perform the input operation", or may play preset music or a preset sound, to prompt the user to perform an input operation.
In yet another embodiment, the user is prompted for an input operation using a vibration. For example, the vibration device of the intelligent terminal can generate vibration to prompt the user to perform input operation. Wherein the input includes, but is not limited to, a first input and a second input, etc., which will be described in detail later.
In one embodiment, step S110 includes displaying, in a physical space interface previewed by the display device, a video stream captured in real time by a camera device of the smart terminal, and the smart terminal detecting an input of a user in the physical space interface by using an input device of the smart terminal.
The video stream is the sequence of frames continuously captured by the camera device in real time; it can be acquired as the intelligent terminal moves and is continuously displayed in real time by the display device in the previewed physical space interface. For example, when a user frames a scene with the phone's camera application, the phone's display screen continuously shows, in the preview interface, the scene images captured by the camera device in real time, so that the user can adjust the shooting angle based on the video stream.
The input device is a device that can detect and sense a user's input in a physical space interface. For example, a touch display screen of the smart terminal, keys, buttons, etc. of the smart terminal. The detected input includes at least one of: slide input operation, click input operation. The detected input corresponds to the input device. Based on the detected input, the processing device of the intelligent terminal may determine that the input operation corresponds to a location or area in a map constructed by the intelligent terminal.
In a specific embodiment, the input device is a touch display screen, and the user input detected by the touch display screen may be a sliding operation or a clicking operation. For example, as the user slides continuously on the touch display screen, the screen continuously detects and senses the sliding track, and the processing device of the intelligent terminal can determine the position in the intelligent terminal's map that corresponds to that track.
For another example, the user clicks on the touch screen, and based on the several positions clicked, the processing device of the smart terminal can determine the positions in the smart terminal's map to which they correspond. In another embodiment, the input device is a key, and the clicking operation on the touch screen can be replaced by pressing the key. For example, a target point is shown in the video stream displayed by the display device of the intelligent terminal; the target point is displayed at a fixed position of the display device, which may be the center or some other position. By moving the intelligent terminal, the target point is made to correspond to different positions in the actual physical space, and the user can then provide input by pressing the key.
In another embodiment, step S110 includes displaying, in the physical space interface previewed by the display device, the video stream captured in real time by the camera device of the smart terminal, and using a motion-sensing device in the smart terminal to obtain the user's input. The motion-sensing device can track and record the position and orientation of the intelligent terminal as it moves, and the processing device of the intelligent terminal can determine the corresponding position of that movement in the intelligent terminal's map. The motion-sensing device includes sensors such as an accelerometer and a gyroscope; guided by the video stream shown on the display device, the user moves the intelligent terminal, so that the terminal's movement track in the physical space forms the user's input.
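As an illustration of this motion-sensing input mode, the sketch below accumulates the terminal's tracked positions into a trace that can later be treated as the user's input; the pose-providing callback is a hypothetical placeholder, not an actual API of the intelligent terminal.

```python
# Minimal sketch of the motion-sensing input mode: as the user moves the
# terminal, its tracked position in the terminal coordinate system is sampled
# and the resulting trace is treated as the user's input. The tracker
# interface is a hypothetical placeholder.
from typing import Callable, List, Tuple
import time

def record_motion_trace(get_pose: Callable[[], Tuple[float, float]],
                        duration_s: float = 5.0,
                        period_s: float = 0.1) -> List[Tuple[float, float]]:
    """get_pose: returns the terminal's current (x, y) in the terminal map."""
    trace = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        trace.append(get_pose())
        time.sleep(period_s)
    return trace
```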
The processing device of the intelligent terminal may perform step S120 based on the input of the user detected in the state where the display device previews the physical space interface.
In step S120, at least one target area is created in the previewed physical space interface in response to the detected input. The target area comprises coordinate information of a terminal coordinate system of the intelligent terminal, and the coordinate information has a corresponding relation with coordinate information in a robot coordinate system of the mobile robot.
Wherein the processing means responds in real time to input detected by the input means or the movement sensing means in order to create at least one target area in the previewed physical space interface. The at least one target area is created by the input operation of the user.
In one embodiment, the input operation is a click operation on a touch screen, which may sense the click position through a change in capacitance or in resistance. A click at a given moment causes such a change, which allows the processing device of the intelligent terminal to map the click position to a position in the image of the video stream displayed in the previewed physical space interface at that moment. At least one target area can then be created in the previewed physical space interface from several click positions in the video stream images, and the processing device can map the target area so created onto the map constructed by the intelligent terminal. The at least one target area may be created from the click positions according to a preset rule; for example, one target area may be created.
In a specific embodiment, the preset rule is to connect the click positions in sequence with connecting lines to form the target area; the connecting lines may be straight or curved. In another embodiment, the preset rule is to form the target area from a circumscribed figure of the figure obtained by connecting the click positions with connecting lines, where the circumscribed figure includes, but is not limited to, a rectangle, a circumscribed circle, a circumscribed polygon, or an irregular figure. In yet another embodiment, the preset rule is to form the target area from an inscribed figure of the figure obtained by connecting the click positions, where the inscribed figure includes, but is not limited to, a rectangle, an inscribed circle, an inscribed polygon, or an irregular figure. It should be noted that the preset rule used for the target area created by the intelligent terminal may change based on the user's selection, or the same preset rule may be applied to every click operation; the user may select the preset rule either before or after the click operation.
Taking a cleaning robot as an example of the mobile robot, suppose the user wants the cleaning robot to enter an area of scattered garbage and clean it; the user performs click operations on the touch screen of the intelligent terminal so that the intelligent terminal creates a target area.

For example, referring to fig. 3a, fig. 3a is a schematic diagram of a target area created by the smart terminal of the present application in the previewed physical space interface in one embodiment. The preset rule selected by the user is to form the target area from the circumscribed circle of the figure obtained by connecting the click positions with connecting lines. Based on the location of the scattered-garbage region in the image, the user clicks several times around that region on the touch screen with a finger or stylus, and the processing device of the intelligent terminal creates a circular target area from the user's clicks and the selected preset rule.

Referring to fig. 3b, fig. 3b is a schematic diagram of a target area created by the smart terminal of the present application in the previewed physical space interface in another embodiment. The preset rule selected by the user is to form the target area from the circumscribed rectangle of the figure obtained by connecting the click positions with connecting lines. The user clicks several times around the scattered-garbage region on the touch screen with a finger or stylus, and the processing device of the intelligent terminal creates a rectangular target area from the clicks and the selected preset rule.

For another example, referring to fig. 3c, fig. 3c is a schematic diagram of a target area created by the smart terminal of the present application in the previewed physical space interface in yet another embodiment. The preset rule selected by the user is to connect the click positions in sequence with a curve to form an irregular target area. The user clicks several times around the scattered-garbage region on the touch screen with a finger or stylus, and the processing device of the intelligent terminal creates an irregular target area from the clicks and the selected preset rule.

In another embodiment, the input operation is a sliding operation on the touch screen. The user's continuous sliding causes changes in the capacitance or resistance of the touch screen, which allow the processing device of the intelligent terminal to map the sliding position at each moment to a position in the image displayed at that moment in the previewed physical space interface; the at least one target area can then be created from the figure traced by the continuous sliding operation. The intelligent terminal can create target areas of different shapes based on different continuous sliding operations of the user.
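The preset rules described above can be illustrated with a short sketch that turns click positions, assumed to be already mapped into terminal-map coordinates, into a target area; the rule implementations below are simplified assumptions, for example the circle is approximated by the centroid and the farthest click rather than the exact minimum enclosing circle.

```python
# Hedged sketch of three possible "preset rules" for building a target area
# from click positions in terminal-map coordinates. Names and rules are
# illustrative assumptions, not the patent's actual implementation.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def polygon_rule(points: List[Point]) -> List[Point]:
    # Rule 1: connect the click positions in order to form the target area.
    return list(points)

def bounding_rect_rule(points: List[Point]) -> Tuple[Point, Point]:
    # Rule 2: circumscribed (axis-aligned) rectangle of the clicked figure.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

def enclosing_circle_rule(points: List[Point]) -> Tuple[Point, float]:
    # Rule 3: a simple circumscribed circle, approximated by the centroid
    # and the farthest click (not the exact minimum enclosing circle).
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radius = max(math.dist((cx, cy), p) for p in points)
    return (cx, cy), radius

clicks = [(1.0, 0.5), (2.2, 0.4), (2.4, 1.6), (1.1, 1.8)]
print(bounding_rect_rule(clicks))
print(enclosing_circle_rule(clicks))
```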
In some embodiments, as shown in fig. 3a, 3b, and 3c, after the intelligent terminal completes creating the target area, the user may click the "confirm" virtual key on the touch screen to enable the intelligent terminal to perform step S130 after confirming the target area, or click the "modify" virtual key on the touch screen to re-perform the input operation. In other embodiments, the user may also issue a preset confirmation voice command to indicate that the target area has been confirmed so that the intelligent terminal performs step S130, for example, the user issues a "confirmation" voice command. Alternatively, the user issues a preset modification voice command to re-execute the input operation, for example, the user issues a "modification" voice command.
When the user creates multiple target areas in the previewed physical space interface, taking click operations as an example of the user's input, the intelligent terminal may separate the target areas using a preset time interval. For example, a click on the input device that arrives after the preset time interval has elapsed is treated as creating a new target area; before the interval elapses, the smart terminal may use sound, vibration, or the physical space interface to prompt the user to finish the input operation for the current target area.
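The preset-time-interval behaviour described above can be sketched as follows; the threshold value and names are assumptions for illustration only.

```python
# Illustrative sketch: clicks arriving within the interval belong to the
# current target area, while a click after a longer pause starts a new one.
from typing import List, Tuple

def group_clicks(clicks: List[Tuple[float, float, float]],
                 max_gap_s: float = 3.0) -> List[List[Tuple[float, float]]]:
    """clicks: list of (timestamp_s, x, y); returns one point list per area."""
    areas: List[List[Tuple[float, float]]] = []
    last_t = None
    for t, x, y in clicks:
        if last_t is None or t - last_t > max_gap_s:
            areas.append([])          # start a new target area
        areas[-1].append((x, y))
        last_t = t
    return areas
```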
When a plurality of target areas are created in the previewed physical space interface by the user, the intelligent terminal may sort the plurality of target areas based on the time of creation of the plurality of target areas to generate a plurality of ordered target areas so as to generate an interactive instruction based on the plurality of ordered target areas, so that the mobile robot can perform related operations based on the sorted plurality of target areas. For example, based on user input, the first created target area is ranked as the first target area. The intelligent terminal can also generate a plurality of ordered target areas based on the user-defined ordering so that the intelligent terminal can generate an interactive instruction based on the ordered target areas, and the mobile robot can execute related operations based on the ordered target areas. For example, a user ranks a plurality of target areas based on how urgently they need to be cleaned.
In one embodiment, step S120 further includes step S121 (not shown) and step S122 (not shown). In step S121, the processing device of the smart terminal determines the correspondence relationship based on the coordinate information of the consensus elements extracted from the previewed physical space interface in the robot coordinate system and the coordinate information in the terminal coordinate system, respectively. The consensus element is an element which can enable the intelligent terminal, the mobile robot or the server to determine the corresponding relationship of the coordinate information in the two coordinate systems after acquiring the coordinate information in the robot coordinate system and the coordinate information in the terminal coordinate system. The consensus elements include, but are not limited to: a positioning feature common to the mobile robot and the intelligent terminal, an image including an object corresponding to the positioning feature of the mobile robot map, and the like.
The coordinate information in the robot coordinate system may be stored in the intelligent terminal for a long time or acquired from the mobile robot or the server when the interaction method is executed.
The robot coordinate system of the mobile robot is used to describe the coordinate information corresponding to the mobile robot's map. The coordinate information includes: a positioning feature, the coordinates of the positioning feature in the map, and the like. The position in the map of the object in the actual physical space described by a positioning feature can then be determined from that feature's coordinates. The positioning features include, but are not limited to: feature points, feature lines, etc. The positioning features are described, for example, by descriptors. For example, based on the scale-invariant feature transform (SIFT), positioning features are extracted from several images, and a sequence of gray values describing each positioning feature is obtained from the image blocks containing it; this gray-value sequence is the descriptor. As another example, a descriptor describes a positioning feature by encoding the brightness information around it: a number of points, such as but not limited to 256 or 512, are sampled in a circle around the positioning feature, the sampled points are compared in pairs to obtain the brightness relationship between them, and the comparison results are converted into a binary string or other encoding.
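The pairwise-brightness descriptor in the last example can be sketched as follows (BRIEF-style); the sampling pattern, pair count, and names are illustrative assumptions rather than the exact scheme used by the robot or terminal.

```python
# Hedged sketch of a pairwise-brightness binary descriptor: sample points
# around the feature, compare pairs, collect the results as bits. Assumes the
# feature lies at least `radius` pixels from the image border.
import random

def binary_descriptor(image, cx, cy, num_pairs=256, radius=8, seed=0):
    """image: 2D list of grayscale values; (cx, cy): feature location."""
    rng = random.Random(seed)          # fixed seed -> same pattern everywhere
    bits = []
    for _ in range(num_pairs):
        x1 = cx + rng.randint(-radius, radius)
        y1 = cy + rng.randint(-radius, radius)
        x2 = cx + rng.randint(-radius, radius)
        y2 = cy + rng.randint(-radius, radius)
        bits.append(1 if image[y1][x1] < image[y2][x2] else 0)
    return bits

def hamming_distance(d1, d2):
    # Descriptors are typically compared by Hamming distance when matching.
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))
```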
A common positioning feature is a positioning feature that appears both in the map constructed by the intelligent terminal under the terminal coordinate system and in the map constructed by the mobile robot under the robot coordinate system. When building its map, the intelligent terminal extracts, from the video stream displayed in the previewed physical space interface, several positioning features describing objects in the actual physical space, and determines their coordinates in the intelligent terminal coordinate system. For example, if the positioning features of the intelligent terminal's map include those corresponding to the legs of a dining table, and the positioning features of the mobile robot's map also include those corresponding to the same table legs, the processing device of the intelligent terminal can determine, from the coordinates of these features in the robot coordinate system and in the terminal coordinate system, the correspondence between the two sets of coordinates, and from that the correspondence between all coordinates in the terminal coordinate system of the intelligent terminal and all coordinates in the robot coordinate system of the mobile robot. Step S122 may be executed after the correspondence is obtained.
An image containing an object corresponding to a positioning feature of the mobile robot map means that the processing device of the intelligent terminal acquires the video stream captured by the intelligent terminal, and at least one frame of that video stream shows an object of the actual physical space whose positioning feature is also a positioning feature of the robot map. For example, if one of the positioning features of the mobile robot map corresponds to a chair in the actual physical space, the video stream contains an image of the chair.
The processing device of the intelligent terminal acquires the coordinate information of the mobile robot's robot coordinate system and at least one frame of image in the video stream. Through an image matching algorithm, the processing device matches the positioning features in the at least one frame of image against the map, positioning features, and coordinate information of the physical space pre-constructed by the mobile robot, thereby determining which positioning features in the image match the mobile robot's map. Here, in some examples, the intelligent terminal is configured in advance with the same extraction algorithm as the mobile robot for extracting positioning features from images, and extracts candidate positioning features from the image with that algorithm. The extraction algorithm includes, but is not limited to, algorithms based on at least one of texture, shape, and spatial-relationship features. Texture-based extraction includes, for example, gray-level co-occurrence matrix analysis, checkerboard feature methods, and random field model methods; shape-based extraction includes, for example, Fourier shape description and quantitative shape measurement; extraction based on spatial-relationship features uses, for example, the mutual spatial positions or relative orientations of several image blocks segmented from the image, including but not limited to connection/adjacency, overlap, and inclusion/containment relationships. The intelligent terminal matches the candidate positioning features fs1 in the image against the positioning features fs2 corresponding to the mobile robot map using image matching techniques, obtaining the matched positioning features fs1'. Based on the coordinates of fs1' in the intelligent terminal map and in the mobile robot map, the intelligent terminal can determine the correspondence between the coordinates of the two maps. Step S122 may be executed after the correspondence is obtained.
In step S122, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot is determined based on the correspondence. The processing device of the intelligent terminal can determine this coordinate information based on the corresponding relationship between the coordinate information in the robot coordinate system and that in the terminal coordinate system, together with the coordinate information of the target area in the terminal coordinate system of the intelligent terminal.
When the intelligent terminal acquires a video stream, the processing device may determine the correspondence based on a consensus element that is a positioning feature common to the mobile robot and the intelligent terminal.
In order to reduce the amount of calculation required by the processing device to determine the correspondence, in one embodiment, referring to fig. 4, which is a schematic diagram of the coordinate system established by the intelligent terminal of the present application in one embodiment, the intelligent terminal, when establishing its coordinate system, takes the coordinate point O' of one positioning feature in the robot map as the origin of the terminal coordinate system of the intelligent terminal. The coordinates of the at least one target area in the robot coordinate system of the mobile robot can then be determined directly from the coordinate system of the intelligent terminal established in this way and the coordinate information of the target area in the map constructed under that coordinate system. For example, if a point P in the intelligent terminal coordinate system is a point in the target area, the coordinate of the point P in the mobile robot coordinate system, i.e. the vector O"P, can be determined from the vectors O"O' and O'P, so that the coordinates of the at least one target area in the robot coordinate system of the mobile robot are determined.
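A minimal numeric sketch of this shared-origin shortcut is given below; it assumes the axes of the terminal coordinate system are aligned with those of the robot map, so that the conversion reduces to a vector addition, and all coordinate values are illustrative.

```python
import numpy as np

O_prime_in_robot = np.array([2.0, 1.5])  # robot-map coordinate of the feature chosen as the terminal origin O'
P_in_terminal = np.array([0.8, -0.3])    # a point P of the target area, in the terminal coordinate system

# Vector addition O"P = O"O' + O'P gives P directly in the robot coordinate system.
P_in_robot = O_prime_in_robot + P_in_terminal
print(P_in_robot)  # -> [2.8 1.2]
```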
In step S130, an interactive command is generated based on the at least one target area to be sent to the mobile robot. The interaction instruction includes at least one target area and a corresponding operation performed by the mobile robot. The interactive instruction is used for instructing the mobile robot to execute corresponding operation in the target area or not to execute corresponding operation in the target area.
In an embodiment, the user input detected in the step of detecting the input of the user while the display device previews the physical space interface is referred to as a first input. The intelligent terminal creates the at least one target area based on the first input. In order to generate an interaction instruction based on the at least one target area, the interaction method further comprises: detecting a second input of the user, where the second input corresponds to an operation to be performed by the mobile robot, and generating an interaction instruction to be sent to the mobile robot based on the target area and the second input. The intelligent terminal may detect the second input either before or after the first input.
The second input comprises any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, sorting or not sorting the items within the target area. For example, if the mobile robot is a cleaning robot and the target area corresponds to an area where debris is scattered on the floor, the second input is to clean the target area; if the target area corresponds to an area occupied by an obstacle, the second input is not to clean the target area. As another example, if the mobile robot is a patrol robot and the target area corresponds to an area that the user needs to view, the second input is to enter the target area; if the target area corresponds to an area that the user does not need to view, the second input is not to enter the target area. As a further example, if the mobile robot is a transfer robot and the target area corresponds to an area where the user needs the items to be sorted, the second input is to sort the items in the target area; if the target area corresponds to an area where the user does not need the items to be sorted, the second input is not to sort the items in the target area.
The second input may be provided by voice or by tapping a virtual key. For example, if the user wants the mobile robot to enter an area of scattered debris to perform a cleaning job, the user first performs the first input on the input device of the intelligent terminal so that the intelligent terminal creates a target area. Referring to fig. 5, which is a schematic view of a virtual key of the intelligent terminal according to an embodiment of the present application, the user may then complete the second input by tapping the "clean target area" virtual key in a menu bar of the intelligent terminal, so that the intelligent terminal generates the interaction instruction. The "clean target area" virtual key may be displayed not only as text but also as a pattern or icon.
In another embodiment, the interactive instruction is related to a function of the mobile robot, and the interactive instruction can be generated without a second input by a user, where the interactive instruction only includes the at least one target area in this embodiment. For example, the mobile robot is a cleaning robot performing a cleaning work, the intelligent terminal sends the at least one target area to the cleaning robot, and the cleaning robot generates a navigation route and automatically cleans the target area based on the navigation route. As another example, the mobile robot is a patrol robot for performing patrol work, the intelligent terminal sends the at least one target area to the patrol robot, and the patrol robot generates a navigation route and automatically enters the target area to perform patrol work based on the navigation route. For another example, the mobile robot is a transfer robot for performing the finishing and transfer work, the intelligent terminal sends the at least one target area to the transfer robot, and the transfer robot generates a navigation route and automatically enters the target area to perform the finishing and transfer work based on the navigation route.
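To make the content of such an interaction instruction concrete, the sketch below shows one possible JSON-serialisable structure that combines the at least one target area with an optional second input. The present application does not prescribe a wire format, so the field names and the Operation values are assumptions made purely for illustration.

```python
import json
from dataclasses import dataclass, asdict
from enum import Enum
from typing import List, Optional, Tuple

class Operation(str, Enum):
    CLEAN = "clean"
    DO_NOT_CLEAN = "do_not_clean"
    ENTER = "enter"
    DO_NOT_ENTER = "do_not_enter"
    SORT_ITEMS = "sort_items"
    DO_NOT_SORT_ITEMS = "do_not_sort_items"

@dataclass
class TargetArea:
    corners: List[Tuple[float, float]]      # polygon corners in the robot coordinate system

@dataclass
class InteractionInstruction:
    target_areas: List[TargetArea]
    operation: Optional[Operation] = None   # omitted when the robot falls back to its own function

def build_instruction(target_areas, second_input=None):
    """Combine the created target areas with an optional second input."""
    return InteractionInstruction(target_areas=target_areas, operation=second_input)

# Example: a "clean this area" instruction for one rectangular target area.
instruction = build_instruction(
    [TargetArea(corners=[(2.8, 1.2), (3.8, 1.2), (3.8, 2.2), (2.8, 2.2)])],
    second_input=Operation.CLEAN,
)
print(json.dumps(asdict(instruction)))
```

When the operation field is omitted, the robot falls back to the behaviour implied by its own function, as described in the preceding paragraph.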
With the interaction method described above, the user can make a precise input based on the visual video stream provided by the intelligent terminal, the intelligent terminal can respond to the detected input by creating at least one precise target area in the previewed physical space interface, and an interaction instruction can be generated and sent to the mobile robot based on the position of the at least one target area in the mobile robot map. The mobile robot parses the interaction instruction to obtain the position of the at least one target area in the robot map, and accordingly executes, or refrains from executing, the corresponding operation in the target area.
In an embodiment, the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the intelligent terminal. The interactive method further includes step S210, step S220, and step S230. The coordinate information in the robot coordinate system may be stored in the intelligent terminal for a long time or acquired from the mobile robot or the server when the interaction method is executed.
The processing device of the intelligent terminal executes step S210 based on the coordinate information of the mobile robot coordinate system and the coordinate information of the intelligent terminal coordinate system stored in the storage device. In step S210, the processing device of the intelligent terminal determines the correspondence relationship based on the coordinate information of the consensus elements extracted from the previewed physical space interface in the robot coordinate system and the coordinate information in the terminal coordinate system, respectively.
The consensus element is an element which can enable the intelligent terminal, the mobile robot or the server to determine the corresponding relation of the coordinate information in the two coordinate systems after acquiring the coordinate information in the robot coordinate system and the coordinate information in the terminal coordinate system. The consensus elements include, but are not limited to: a positioning feature common to the mobile robot and the intelligent terminal, an image including an object corresponding to the positioning feature of the mobile robot map, and the like.
The shared positioning feature is a positioning feature that exists both in the map constructed by the intelligent terminal under the terminal coordinate system and in the map constructed by the mobile robot under the robot coordinate system. When building its map, the intelligent terminal extracts, from the video stream displayed on the previewed physical space interface, a plurality of positioning features describing objects in the actual physical space, and determines the coordinates of these positioning features in the terminal coordinate system. For example, if the positioning features of the map constructed by the intelligent terminal under the terminal coordinate system include a positioning feature corresponding to a dining table leg, and the positioning features of the map constructed by the mobile robot under the robot coordinate system also include a positioning feature corresponding to the same dining table leg, the processing device of the intelligent terminal can determine, from the coordinates of that positioning feature in the robot coordinate system and its coordinates in the terminal coordinate system, the corresponding relationship between the two coordinates, and can thereby determine the corresponding relationship between all coordinates in the terminal coordinate system of the intelligent terminal and all coordinates in the robot coordinate system of the mobile robot. Step S220 may be executed after the corresponding relationship is obtained. In step S220, the coordinate information of the created target area in the terminal coordinate system of the intelligent terminal can be obtained by the method of creating at least one target area in the previewed physical space interface in response to the input detected in step S120. The processing device of the intelligent terminal can then determine the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on the corresponding relationship between the coordinate information in the robot coordinate system and that in the terminal coordinate system, together with the coordinate information of the target area in the terminal coordinate system of the intelligent terminal.
An image containing an object that corresponds to a positioning feature of the mobile robot map means the following: the processing device of the intelligent terminal acquires a video stream captured by the intelligent terminal, and at least one frame of the video stream depicts an object in the actual physical space whose positioning feature is also a positioning feature of the robot map. For example, if one of the positioning features of the mobile robot map corresponds to a chair in the actual physical space, the video stream contains an image of the chair.
The processing device of the intelligent terminal acquires the coordinate information in the robot coordinate system of the mobile robot and at least one frame of image in the video stream. The processing device matches the positioning features in the at least one frame of image, through an image matching algorithm, against the map, positioning features and coordinate information of the physical space pre-constructed by the mobile robot, thereby determining the positioning features in the image that match the map of the mobile robot. Here, in some examples, the intelligent terminal is configured in advance with the same extraction algorithm as the mobile robot uses for extracting positioning features from images, and extracts candidate positioning features from the image based on that extraction algorithm. The extraction algorithm includes, but is not limited to, extraction algorithms based on at least one of texture, shape and spatial-relationship features. Extraction algorithms based on texture features include texture analysis using at least one of a gray level co-occurrence matrix, a checkerboard feature method, a random field model method and the like; extraction algorithms based on shape features include at least one of a Fourier shape description method, a shape quantitative measurement method and the like; extraction algorithms based on spatial-relationship features use the mutual spatial positions or relative directional relationships among a plurality of image blocks segmented from the image, where these relationships include, but are not limited to, a connection or adjacency relationship, an overlapping or overlaying relationship, an inclusion or containment relationship and the like. The intelligent terminal matches the candidate positioning features fs1 in the image with the positioning features fs2 of the mobile robot map by using an image matching technique, thereby obtaining the matched positioning features fs1'. Based on the coordinates of fs1' in the intelligent terminal map and its coordinates in the mobile robot map, the intelligent terminal can determine the correspondence of coordinates between the intelligent terminal map and the mobile robot map. Step S220 may be executed after the corresponding relationship is obtained. In step S220, the coordinate information of the created target area in the terminal coordinate system of the intelligent terminal can be obtained by the method of creating at least one target area in the previewed physical space interface in response to the input detected in step S120, and the coordinate information of the at least one target area in the robot coordinate system of the mobile robot is determined based on the corresponding relationship between the coordinate information in the robot coordinate system and that in the terminal coordinate system, together with the coordinate information of the target area in the terminal coordinate system of the intelligent terminal.
In the case where the intelligent terminal acquires a video stream, the processing device may also determine the correspondence based on a consensus element that is a positioning feature common to the mobile robot and the intelligent terminal.
In order to reduce the amount of calculation required by the processing device to determine the correspondence, in one embodiment, referring to fig. 4, which is a schematic diagram of the coordinate system established by the intelligent terminal of the present application in one embodiment, the intelligent terminal, when establishing its coordinate system, takes the coordinate point O' of one positioning feature in the robot map as the origin of the terminal coordinate system of the intelligent terminal. The coordinates of the at least one target area in the robot coordinate system of the mobile robot can then be determined directly from the coordinate system of the intelligent terminal established in this way and the coordinate information of the target area in the map constructed under that coordinate system. For example, if a point P in the intelligent terminal coordinate system is a point in the target area, the coordinate of the point P in the mobile robot coordinate system, i.e. the vector O"P, can be determined from the vectors O"O' and O'P, so that the coordinates of the at least one target area in the robot coordinate system of the mobile robot are determined.
The processing device performs step S230 based on the coordinate information of the at least one target area in the robot coordinate system of the mobile robot, and in step S230, generates an interactive instruction including the at least one target area described by the coordinate information in the robot coordinate system to send to the mobile robot. The interactive instruction of step S230 includes at least one target area and corresponding operation performed by the mobile robot. The interactive instruction is used for instructing the mobile robot to execute corresponding operation in the target area or not to execute corresponding operation in the target area. The method for generating the interactive instruction and the corresponding description are the same as or similar to those in step S130, and are not repeated herein.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating a network architecture for interaction among the intelligent terminal 10, the server 20, and the mobile robot 30 according to the present application. The interactive instruction may be directly sent to the mobile robot 30 through an interface device of the intelligent terminal 10, or may be sent to the server 20 through the interface device and then sent to the mobile robot 30 through the server 20.
When the coordinate information in the robot coordinate system of the mobile robot is pre-stored in a cloud server network-connected to the intelligent terminal, or pre-stored in a mobile robot network-connected to the intelligent terminal, the processing device of the intelligent terminal may instead generate an interaction instruction containing the at least one target area and a consensus element related to the creation of the at least one target area, and send the interaction instruction to the mobile robot directly or via a server. The consensus element is used to determine the coordinate position of the at least one target area in the robot coordinate system. The consensus elements related to creating the at least one target area include, but are not limited to: a positioning feature common to the mobile robot and the intelligent terminal, an image containing an object corresponding to a positioning feature of the mobile robot map, and the like.
Referring to fig. 7, fig. 7 is a flowchart illustrating an interaction method with a mobile robot according to another embodiment of the present disclosure. In this embodiment, the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the server connected to the intelligent terminal network.
The server can be a single computer device, a service system based on a cloud architecture, a cloud server, or the like. The single computer device may be an autonomously configured computer device capable of executing the interaction method, and may be located in a private machine room or at a leased rack position in a public machine room. The service system based on the cloud architecture comprises a public cloud (Public Cloud) service end and a private cloud (Private Cloud) service end, where the public or private cloud service end provides Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and the like. Examples of such cloud computing service platforms include the Alibaba Cloud computing service platform, the Amazon cloud computing service platform, the Baidu cloud computing platform, the Tencent cloud computing platform, and the like.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a server according to an embodiment of the present disclosure. As shown, the server includes a storage device 21, an interface device 22, a processing device 23, and the like.
The storage means 21 is for storing at least one program. Wherein the at least one program is operable to cause the processing device 23 to perform the interaction method described in the embodiment of fig. 7. The storage device 21 may further pre-store coordinate information in a robot coordinate system of the mobile robot or the processing device 23 of the server may obtain the coordinate information of the robot coordinate system from the smart terminal or the mobile robot through the interface device 22 when the interaction method is executed.
Here, the storage device 21 includes, but is not limited to: Read-Only Memory (ROM), Random Access Memory (RAM), and non-volatile memory (NVRAM). For example, the storage device includes a flash memory device or another non-volatile solid-state storage device. In certain embodiments, the storage device 21 may also include memory remote from the one or more processing devices, such as network-attached memory accessed via RF circuitry or external ports and a communication network, which may be the internet, one or more intranets, local area networks (LANs), wide area networks (WANs), storage area networks (SANs), or the like, or a suitable combination thereof. The storage device 21 also includes a memory controller that controls access to the memory by components of the server such as the Central Processing Unit (CPU), the interface device 22, or other components.
The interface device 22 is used for assisting a smart terminal and the mobile robot to perform communication interaction. For example, the interface device 22 may receive an interaction instruction generated by the smart terminal, and send the interaction instruction generated by the smart terminal to the mobile robot. For another example, the interface device 22 of the server sends an instruction to the mobile robot or the smart terminal to acquire coordinate information in the mobile robot coordinate system. For another example, the interface device 22 further obtains a video stream captured by the intelligent terminal, a second input of the intelligent terminal, and obtains a consensus element from the intelligent terminal related to the creation of the at least one target area, and sends the consensus element to the mobile robot. The interface means 22 comprises a network interface, a data line interface, etc. Wherein the network interfaces include, but are not limited to: network interface devices based on ethernet, network interface devices based on mobile networks (3G, 4G, 5G, etc.), network interface devices based on near field communication (WiFi, bluetooth, etc.), and the like. The data line interface includes, but is not limited to: USB interface, RS232, etc. The interface device 22 is connected with the storage device 21, the processing device 23, the internet, a mobile robot located in a physical space, an intelligent terminal and the like.
The processing device 23 is connected to the storage device 21 and the interface device 22, and is configured to execute the at least one program so as to coordinate the storage device 21 and the interface device 22 to perform the interaction method described in fig. 7. The processing device 23 comprises one or more processors and is operable to perform data read and write operations with the storage device. The processing device 23 performs processing such as extracting images, temporarily storing features, and positioning in a map based on features. The processing device 23 includes one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more Digital Signal Processors (DSPs), one or more Field Programmable Gate Arrays (FPGAs), or any combination thereof.
The processing device 23 of the server performs step S310 based on the coordinate information of the mobile robot coordinate system stored in the storage device 21, and in step S310, acquires at least one target area from the intelligent terminal. The target area is obtained by detecting user input through the intelligent terminal, the target area comprises coordinate information of a terminal coordinate system of the intelligent terminal, and the coordinate information has a corresponding relation with coordinate information in a robot coordinate system of the mobile robot.
The at least one target area is a target area created by the processing device of the intelligent terminal in response to the user input detected while the display device previews the physical space interface, and it includes coordinate information in the terminal coordinate system of the intelligent terminal. The processing device of the intelligent terminal can map the at least one target area created in the previewed physical space interface onto the map constructed by the intelligent terminal, and thereby determine the coordinate information of the at least one target area in the map of the intelligent terminal. The manner in which the user input is detected and at least one target area is created in response to the input is the same as or similar to that in the interaction method described in fig. 2 and will not be described in detail here.
Here, the processing device of the server acquires the coordinate information of the mobile robot coordinate system and the coordinate information of the at least one target area in the map of the intelligent terminal. The intelligent terminal constructs its map based on the terminal coordinate system, and the mobile robot constructs its map based on the robot coordinate system; both maps correspond to the same actual physical space. The processing device of the server can therefore obtain the coordinate information of the target area in the map constructed by the mobile robot from the coordinate information of the target area in the map constructed by the intelligent terminal.
In step S320, an interaction instruction is generated based on the at least one target area to be sent to the mobile robot. The interaction instruction includes at least one target area and a corresponding operation performed by the mobile robot. The interactive instruction is used for instructing the mobile robot to execute corresponding operations in the target area or not to execute corresponding operations in the target area.
In a specific embodiment, the user input detected in the step of detecting the input of the user while the display device of the intelligent terminal previews the physical space interface is referred to as a first input, and the intelligent terminal creates the at least one target area based on the first input. In order to generate an interaction instruction based on the at least one target area, the interaction method further includes: the processing device also acquires a second input from the intelligent terminal through the interface device, and generates an interaction instruction to be sent to the mobile robot based on the target area and the second input. The second input corresponds to an operation to be performed by the mobile robot. The step of obtaining the second input may be performed before or after the first input.
The second input comprises any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, sorting or not sorting the items within the target area. For example, if the mobile robot is a cleaning robot and the target area corresponds to an area where debris is scattered on the floor, the second input is to clean the target area; if the target area corresponds to an area occupied by an obstacle, the second input is not to clean the target area. As another example, if the mobile robot is a patrol robot and the target area corresponds to an area that the user needs to view, the second input is to enter the target area; if the target area corresponds to an area that the user does not need to view, the second input is not to enter the target area. As a further example, if the mobile robot is a transfer robot and the target area corresponds to an area where the user needs the items to be sorted, the second input is to sort the items in the target area; if the target area corresponds to an area where the user does not need the items to be sorted, the second input is not to sort the items in the target area.
In another embodiment, the interactive instruction is related to a function of the mobile robot, and the interactive instruction can be generated without a second input by the user, where the interactive instruction only includes the at least one target area in this embodiment. For example, the mobile robot is a cleaning robot performing a cleaning work, the intelligent terminal transmits the at least one target area to the cleaning robot, and the cleaning robot generates a navigation route and automatically cleans the target area based on the navigation route. As another example, the mobile robot is a patrol robot for performing patrol work, the intelligent terminal sends the at least one target area to the patrol robot, and the patrol robot generates a navigation route and automatically enters the target area to perform patrol work based on the navigation route. In another example, the mobile robot is a transfer robot for performing the finishing transfer work, the intelligent terminal sends the at least one target area to the transfer robot, and the transfer robot generates a navigation route and automatically enters the target area to perform the finishing transfer work based on the navigation route.
In an embodiment, the processing device further obtains, through an interface device, a video stream captured by the intelligent terminal, and step S310 further includes steps S311 and S312. In step S311, the processing device determines the correspondence based on the coordinate information of the consensus element provided by the video stream in the robot coordinate system and its coordinate information in the terminal coordinate system, respectively.
The consensus element comprises an image containing an object that corresponds to a positioning feature of the mobile robot map. For example, if one positioning feature of the mobile robot map corresponds to a chair in the actual physical space, at least one frame of image containing the chair exists in the video stream acquired by the server. The processing device of the server acquires the coordinate information in the robot coordinate system of the mobile robot and at least one frame of image in the video stream. The processing device matches the positioning features in the at least one frame of image, through an image matching algorithm, against the map, positioning features and coordinate information of the physical space pre-constructed by the mobile robot, thereby determining the positioning features in the image that match the map of the mobile robot. Here, in some examples, the server is configured in advance with the same extraction algorithm as the mobile robot uses for extracting positioning features from images, and extracts candidate positioning features from the image based on that extraction algorithm. The extraction algorithm includes, but is not limited to, extraction algorithms based on at least one of texture, shape and spatial-relationship features. Extraction algorithms based on texture features include texture analysis using at least one of a gray level co-occurrence matrix, a checkerboard feature method, a random field model method and the like; extraction algorithms based on shape features include at least one of a Fourier shape description method, a shape quantitative measurement method and the like; extraction algorithms based on spatial-relationship features use the mutual spatial positions or relative directional relationships among a plurality of image blocks segmented from the image, where these relationships include, but are not limited to, a connection or adjacency relationship, an overlapping or overlaying relationship, an inclusion or containment relationship and the like. The server matches the candidate positioning features fs1 in the image with the positioning features fs2 of the mobile robot map by using an image matching technique, thereby obtaining the matched positioning features fs1'. Based on the coordinates of fs1' in the intelligent terminal map and its coordinates in the mobile robot map, the server can determine the correspondence of coordinates between the intelligent terminal map and the mobile robot map. For example, the processing device of the server obtains the coordinates of the positioning feature of the chair in the mobile robot coordinate system and its coordinates in the terminal coordinate system, and thereby obtains the correspondence between any coordinate in the terminal coordinate system and the coordinates in the mobile robot coordinate system.
In step S312, the processing device of the server determines the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on the corresponding relationship between the coordinate information in the robot coordinate system and that in the terminal coordinate system of the intelligent terminal, together with the coordinate information of the target area in the terminal coordinate system of the intelligent terminal. For example, if the target area corresponds to the area where a patch cord lies in the actual physical space, the server may determine a plurality of coordinates of the target area in the mobile robot map based on a plurality of coordinates of the target area in the intelligent terminal map and the corresponding relationship.
The processing device of the server generates an interaction instruction including the at least one target area described by the coordinate information in the robot coordinate system of the mobile robot based on the coordinate information of the at least one target area in the robot coordinate system obtained in step S312, and sends the interaction instruction to the mobile robot through the interface device of the server. For example, the interactive command includes a region where the patch cord is located in a corresponding physical space described by coordinate information in a coordinate system of the mobile robot, and the mobile robot may directly perform a preset operation or perform an operation specified by the second input based on the region where the patch cord is located. The interactive instruction is the same as or similar to that in step S320 and will not be described in detail here.
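Putting the server-side steps together, the sketch below reuses terminal_to_robot, TargetArea and build_instruction from the earlier sketches to show one possible flow for steps S310 to S320; the transport toward the robot is represented by a plain callable and is an assumption for illustration, not part of the present application.

```python
def handle_target_area(terminal_corners, R, t, second_input, send_to_robot):
    """Server-side handling of one target area received from the intelligent terminal."""
    # Step S312: map every corner of the target area from the terminal
    # coordinate system into the robot coordinate system.
    robot_corners = [tuple(terminal_to_robot(c, R, t)) for c in terminal_corners]
    # Step S320/S230: generate the interaction instruction and forward it to the mobile robot.
    instruction = build_instruction([TargetArea(corners=robot_corners)], second_input)
    send_to_robot(instruction)
    return instruction

# Example with a stand-in transport: in practice the instruction would be sent
# over the server's interface device (network) to the mobile robot.
sent = handle_target_area([(0.5, 1.0), (1.5, 1.0), (1.5, 2.0), (0.5, 2.0)],
                          R, t, Operation.CLEAN, send_to_robot=print)
```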
In another embodiment, the server does not determine the coordinate position of the at least one target area in the robot coordinate system based on the video stream captured by the intelligent terminal and acquired by the processing device. Step S310 further includes step S313, in step S313, the processing device of the server obtains the consensus elements related to creating the at least one target area from the intelligent terminal through the interface device. Wherein the consensus element is used to determine a coordinate position of the at least one target area in the robot coordinate system.
For example, the consensus element is a positioning feature common to the intelligent terminal and the mobile robot, i.e. a positioning feature that exists both in the map constructed by the intelligent terminal under the terminal coordinate system and in the map constructed by the mobile robot under the robot coordinate system. When building its map, the intelligent terminal extracts, from the video stream displayed on the previewed physical space interface, a plurality of positioning features describing objects in the actual physical space, and determines the coordinates of these positioning features in the terminal coordinate system. For example, if the positioning features of the map constructed by the intelligent terminal under the terminal coordinate system include a positioning feature corresponding to a dining table leg, and the positioning features of the map constructed by the mobile robot under the robot coordinate system also include a positioning feature corresponding to the same dining table leg, the processing device of the server can determine, from the coordinates of that positioning feature in the robot coordinate system and its coordinates in the terminal coordinate system, the corresponding relationship between the two coordinates, and can thereby determine the corresponding relationship between all coordinates in the terminal coordinate system of the intelligent terminal and all coordinates in the robot coordinate system of the mobile robot.
And the processing device of the server determines the coordinate position of the at least one target area in the robot coordinate system based on the corresponding relation, and generates an interactive instruction containing the at least one target area and the consensus elements so as to send the interactive instruction to the mobile robot through the interface device. The mobile robot may directly perform an operation related to the at least one target area based on the acquired target area. The mobile robot can also acquire coordinate information under the intelligent terminal coordinate system through an interface device of a server or an interface device of an intelligent terminal, and determine the coordinate position of the at least one target area in the robot coordinate system based on the consensus element and the coordinate information so as to execute related operations based on the target area.
When the intelligent terminal acquires the video stream, the processing device of the server may also determine the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on a consensus element that is a positioning feature common to the mobile robot and the intelligent terminal.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a mobile robot according to an embodiment of the present disclosure. As shown, the mobile robot includes a storage device 31, an interface device 33, a processing device 34, an execution device 32, and the like.
A mobile robot is a machine device that automatically performs a specific kind of work. It can accept human commands, run pre-programmed routines, and also act according to strategies formulated with artificial intelligence techniques. A mobile robot can be used indoors or outdoors, for industrial, commercial or household purposes; it can replace personnel in security patrol, in greeting guests or serving diners, or in cleaning the floor, and can also be used for family companionship, office assistance and the like. The mobile robot is provided with at least one camera device for capturing images of its operating environment so as to perform VSLAM (Visual Simultaneous Localization and Mapping); based on the constructed map, the mobile robot can plan routes for work such as patrol, cleaning and tidying. Generally, the mobile robot caches the map constructed during its operation in a local storage device, uploads it to a server or the cloud for storage, or uploads it to the user's intelligent terminal for storage.
In terms of function, mobile robots include, but are not limited to: cleaning robots, patrol robots and transfer robots. A cleaning robot is a mobile robot that performs cleaning and sweeping work. A patrol robot is a mobile robot that performs monitoring work. A transfer robot is a mobile robot that performs carrying and sorting work.
The execution device 32 is used for executing the corresponding operations under control, and its composition corresponds to the type of the mobile robot. For example, if the mobile robot is a cleaning robot, the execution device 32 includes a cleaning device for performing cleaning and sweeping operations and a moving device for performing navigation movement. The cleaning device includes, but is not limited to: edge brushes, roller brushes, fans and the like. The moving device includes, but is not limited to: a walking mechanism and a driving mechanism; the walking mechanism may be arranged at the bottom of the cleaning robot, and the driving mechanism is arranged in the housing of the cleaning robot. As another example, if the mobile robot is a transfer robot, the execution device 32 includes a carrying device for performing carrying and sorting operations and a moving device for performing navigation movement. The carrying device includes, but is not limited to: mechanical arms, motors and the like. The moving device includes, but is not limited to: a walking mechanism and a driving mechanism; the walking mechanism may be arranged at the bottom of the transfer robot, and the driving mechanism is arranged in the housing of the transfer robot. As a further example, if the mobile robot is a patrol robot, the execution device 32 includes a camera device for performing monitoring and a moving device for performing navigation movement. The camera device includes, but is not limited to: color camera devices, grayscale camera devices, infrared camera devices and the like. The moving device includes, but is not limited to: a walking mechanism and a driving mechanism; the walking mechanism may be arranged at the bottom of the patrol robot, and the driving mechanism is arranged in the housing of the patrol robot.
The storage means 31 is used for storing at least one program and for storing a pre-constructed robot coordinate system. Wherein the at least one program is operable to cause the processing device to perform the interaction method described in the embodiment of fig. 10.
Here, the storage device 31 includes, but is not limited to: Read-Only Memory (ROM), Random Access Memory (RAM), and non-volatile memory (NVRAM). For example, the storage device 31 comprises a flash memory device or another non-volatile solid-state storage device. In certain embodiments, the storage device 31 may also include memory remote from the one or more processing devices, such as network-attached memory accessed via RF circuitry or external ports and a communication network, which may be the internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), or the like, or a suitable combination thereof. The storage device 31 also includes a memory controller that controls access to the memory by components of the mobile robot such as the Central Processing Unit (CPU), the interface device 33, or other components.
The interface device 33 is used for communication interaction with the intelligent terminal and the server. For example, the interface device 33 may receive an interaction instruction generated by the intelligent terminal and sent by the intelligent terminal or forwarded via the server. As another example, the interface device 33 of the mobile robot obtains the video stream captured by the intelligent terminal and sent by the server or the intelligent terminal, obtains the second input of the intelligent terminal, and obtains from the intelligent terminal the consensus element related to creating the at least one target area. For another example, the mobile robot provides its robot coordinate system to the intelligent terminal or a cloud server through the interface device 33, so as to obtain the interaction instruction. The interface device 33 includes a network interface, a data line interface, and the like. The network interfaces include, but are not limited to: network interface devices based on Ethernet, network interface devices based on mobile networks (3G, 4G, 5G, etc.), network interface devices based on near field communication (WiFi, Bluetooth, etc.), and the like. The data line interface includes, but is not limited to: a USB interface, RS232, and the like. The interface device 33 is connected with the storage device 31, the processing device 34, the internet, the server, the intelligent terminal, the execution device 32, and the like.
The processing device 34 is connected to the storage device 31, the execution device 32 and the interface device 33, and is configured to execute the at least one program so as to coordinate the storage device 31 and the interface device 33 to execute the interaction method described in the embodiment of fig. 10. The processing device 34 comprises one or more processors and is operable to perform data read and write operations with the storage device 31. The processing device 34 performs functions such as extracting images, temporarily storing features, and locating in a map based on features. The processing device 34 includes one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more Digital Signal Processors (DSPs), one or more Field Programmable Gate Arrays (FPGAs), or any combination thereof.
With the foregoing embodiments of the interaction method, on the basis that the processing device provides the robot coordinate system of the mobile robot to the intelligent terminal or a server through the interface device, the intelligent terminal or the server may generate an interaction instruction including the at least one target area described by coordinate information in the robot coordinate system and send it to the mobile robot through the interface device of the intelligent terminal or of the server. The processing device of the mobile robot may parse the interaction instruction to obtain at least the at least one target area described by coordinate information in the robot coordinate system. For example, if the interaction instruction was generated based on the target area and the second input, the mobile robot parses it to obtain the at least one target area described by coordinate information in the robot coordinate system together with the second input, and the processing device of the mobile robot controls the execution device to perform the relevant operation based on the second input and the target area. The second input here is the same as or similar to the second input mentioned above and will not be described in detail again. As another example, if the interaction instruction is related to the function of the mobile robot and includes only the at least one target area, the interaction instruction may be generated without a second input from the user; the mobile robot parses the interaction instruction to obtain the at least one target area described by coordinate information in the robot coordinate system, and the processing device controls the execution device to perform the relevant operation based on the preset function of the mobile robot and the target area.
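As a hedged illustration of this robot-side parsing and dispatch, the sketch below reuses the InteractionInstruction and Operation structures from the earlier sketch; the execution-device methods (plan_route_to, follow, avoid, clean, capture, sort_items, run_default) are hypothetical placeholders, not interfaces defined by the present application.

```python
def execute_instruction(instruction, moving_device, work_device):
    """Dispatch one parsed interaction instruction to the robot's execution device."""
    for area in instruction.target_areas:
        if instruction.operation in (Operation.DO_NOT_CLEAN,
                                     Operation.DO_NOT_ENTER,
                                     Operation.DO_NOT_SORT_ITEMS):
            moving_device.avoid(area.corners)              # plan routes that bypass the area
            continue
        route = moving_device.plan_route_to(area.corners)  # navigation movement into the area
        moving_device.follow(route)
        if instruction.operation is None:
            work_device.run_default(area.corners)          # fall back to the robot's preset function
        elif instruction.operation is Operation.CLEAN:
            work_device.clean(area.corners)                # e.g. a cleaning robot's cleaning device
        elif instruction.operation is Operation.ENTER:
            work_device.capture(area.corners)              # e.g. a patrol robot's camera device
        elif instruction.operation is Operation.SORT_ITEMS:
            work_device.sort_items(area.corners)           # e.g. a transfer robot's carrying device
```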
In yet another embodiment, the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the mobile robot connected to the intelligent terminal network. Referring to fig. 10, fig. 10 is a schematic flowchart illustrating an interaction method according to another embodiment of the present application.
The processing device of the mobile robot executes step S410 based on the coordinate information of the mobile robot coordinate system stored in the storage device, and in step S410, the processing device acquires an interaction instruction from the intelligent terminal or the server. Wherein the interactive instruction comprises at least one target area; the target area is obtained by detecting user input through the intelligent terminal, the target area comprises coordinate information of a terminal coordinate system of the intelligent terminal, and the coordinate information has a corresponding relation with coordinate information in a robot coordinate system.
The at least one target area is a target area created by the processing device of the intelligent terminal in response to the user input detected while the display device previews the physical space interface, and it includes coordinate information in the terminal coordinate system of the intelligent terminal. The processing device of the intelligent terminal can map the at least one target area created in the previewed physical space interface onto the map constructed by the intelligent terminal, and thereby determine the coordinate information of the at least one target area in the map of the intelligent terminal. The manner in which the user input is detected and at least one target area is created in response to the input is the same as or similar to that in the interaction method described in fig. 2 and will not be described in detail here.
Here, the processing device of the mobile robot acquires the coordinate information of the mobile robot coordinate system and the coordinate information of the at least one target area in the intelligent terminal map. The intelligent terminal constructs its map based on the terminal coordinate system, and the mobile robot constructs its map based on the robot coordinate system; both maps correspond to the same actual physical space. The processing device of the mobile robot can therefore obtain the coordinate information of the target area in the map constructed by the mobile robot from the coordinate information of the target area in the map constructed by the intelligent terminal.
In step S420, the execution means is controlled to execute an operation related to the at least one target area.
In a specific embodiment, the user input detected in the step of detecting the input of the user while the display device of the intelligent terminal previews the physical space interface is referred to as a first input, and the intelligent terminal creates the at least one target area based on the first input to generate the interaction instruction. The processing device of the mobile robot further obtains a second input from the intelligent terminal through the interface device; the step of obtaining the second input may be performed before or after the first input. The processing device then controls the execution device to perform an operation related to the at least one target area based on the second input. For example, the execution device comprises a moving device, and the processing device generates a navigation route relating to the at least one target area based on the second input and controls the moving device to perform navigation movement based on the navigation route. As another example, the execution device includes a cleaning device, and the processing device controls the cleaning operation of the cleaning device within the at least one target area based on the second input. As a further example, the execution device includes a camera device, and the processing device controls the image-capturing operation of the camera device within the at least one target area based on the second input.
The second input comprises any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, the intensity with which the target area is cleaned, sorting or not sorting the items within the target area. For example, the mobile robot is a cleaning robot. If the target area corresponds to an area where debris is scattered on the floor, the second input is to clean the target area; the processing device generates a navigation route into the at least one target area based on the second input, controls the moving device to perform navigation movement based on the navigation route, and controls the cleaning device to clean the debris scattered on the floor when the cleaning robot reaches the at least one target area. Depending on the type of debris on the floor, the processing device can also control the intensity with which the target area is cleaned based on the second input. If the target area corresponds to an area occupied by an obstacle, the second input is not to clean the target area; the processing device generates a navigation route that does not enter the at least one target area based on the second input and controls the moving device to perform navigation movement that keeps away from or bypasses the at least one target area based on the navigation route. As another example, the mobile robot is a patrol robot. If the target area corresponds to an area that the user needs to view, the second input is to enter the target area; the processing device generates a navigation route into the at least one target area based on the second input, controls the moving device to perform navigation movement based on the navigation route, and controls the camera device to capture images of the at least one target area when the patrol robot reaches the at least one target area. If the target area corresponds to an area that the user does not need to view, the second input is not to enter the target area; the processing device generates a navigation route that does not enter the at least one target area based on the second input and controls the moving device to perform navigation movement based on the navigation route. As a further example, the mobile robot is a transfer robot. If the target area corresponds to an area where the user needs the items to be sorted, the second input is to sort the items in the target area; the processing device generates a navigation route into the at least one target area based on the second input, controls the moving device to perform navigation movement based on the navigation route, and controls the carrying device to carry and sort the items in the at least one target area when the transfer robot reaches the at least one target area. If the target area corresponds to an area where the user does not need the items to be sorted, the second input is not to sort the items in the target area, and the processing device controls the carrying device not to sort the items in the at least one target area based on the second input.
In another embodiment, the interaction instruction is related to a function of the mobile robot, and the processing device of the mobile robot controls the executing device to perform the operation related to the at least one target area without acquiring the second input from the smart terminal through the interface device. In this embodiment the instructions for interaction only include the at least one target area. For example, the mobile robot is a cleaning robot performing cleaning work, the server or the intelligent terminal sends the at least one target area to the cleaning robot, and the cleaning robot automatically cleans the target area. As another example, the mobile robot is a patrol robot for performing patrol work, the server or the intelligent terminal sends the at least one target area to the patrol robot, and the patrol robot automatically enters the target area to perform patrol work. For another example, the mobile robot is a transfer robot for performing a finishing and handling operation, the server or the intelligent terminal sends the at least one target area to the transfer robot, and the transfer robot automatically enters the target area to perform the finishing and handling operation.
It should be noted that, when there are a plurality of target areas, the mobile robot may order the target areas based on an ordering supplied by the intelligent terminal or the user, or may order them according to the distance between each target area and the current position of the mobile robot, and then performs the relevant operations on the target areas in that order. For example, if there are two target areas, the first target area is two meters away from the mobile robot and the second target area is four meters away, the mobile robot performs the relevant operation in the first target area first and then in the second target area.
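A minimal sketch of the distance-based ordering is given below; it assumes each target area is reduced to the centroid of its corners and that straight-line distance is an acceptable proxy for travel distance, both of which are illustrative simplifications.

```python
import numpy as np

def order_target_areas(target_areas, robot_position):
    """Order target areas by straight-line distance from the robot's current position."""
    robot_position = np.asarray(robot_position, dtype=float)
    def centroid(area):
        return np.asarray(area.corners, dtype=float).mean(axis=0)
    return sorted(target_areas, key=lambda a: float(np.linalg.norm(centroid(a) - robot_position)))
```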
In a specific embodiment, the mobile robot obtains an interactive instruction containing at least one target area from the intelligent terminal or the server and the processing device further obtains a consensus element related to the creation of the at least one target area from the intelligent terminal through the interface device. Wherein the consensus element is used to determine a coordinate position of the at least one target area in the robot coordinate system. And the processing device of the mobile robot analyzes the interactive instruction to obtain the at least one target area described by utilizing the coordinate information in the terminal coordinate system of the intelligent terminal. The consensus elements related to the creation of at least one target area refer to data required for determining the coordinate information of the at least one target area in the robot coordinate system including, but not limited to: the mobile robot comprises a video stream shot by the intelligent terminal, a positioning feature shared by the intelligent terminal and the mobile robot, and the like.
Here, the processing device further executes step S510 and step S520. In step S510, the correspondence relationship is determined based on the coordinate information of the common identification element in the robot coordinate system and its coordinate information in the terminal coordinate system, respectively.
For example, the common identification element is a positioning feature shared by the intelligent terminal and the mobile robot, that is, a positioning feature that appears both in the map constructed by the intelligent terminal in the terminal coordinate system and in the map constructed by the mobile robot in the robot coordinate system. When building its map, the intelligent terminal extracts, from the video stream displayed in the previewed physical space interface, a plurality of positioning features describing objects in the actual physical space and determines the coordinates of these positioning features in the terminal coordinate system. For example, if the positioning features of the map constructed by the intelligent terminal in the terminal coordinate system include the positioning features corresponding to the legs of a dining table, and the positioning features of the map constructed by the mobile robot in the robot coordinate system also include the positioning features corresponding to the same dining table legs, the processing device of the mobile robot can determine the correspondence between those positioning features based on their coordinates in the robot coordinate system and their coordinates in the terminal coordinate system, and can thereby determine the correspondence between any coordinate in the terminal coordinate system of the intelligent terminal and the corresponding coordinate in the robot coordinate system of the mobile robot.
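For illustration only, one common way to compute such a correspondence from matched positioning features is a least-squares rigid alignment (Kabsch/Procrustes); the sketch below assumes 2-D feature coordinates and is not the specific algorithm of the disclosure.

```python
import numpy as np

def estimate_rigid_transform(pts_terminal, pts_robot):
    """Estimate rotation R and translation t so that
    pts_robot ~= R @ pts_terminal + t, from matched 2-D positioning
    features (e.g. the same table-leg features seen in both maps).
    Standard Kabsch/SVD solution; both arrays have shape (N, 2)."""
    P = np.asarray(pts_terminal, dtype=float)
    Q = np.asarray(pts_robot, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t
```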
For another example, the common recognition element is an image containing an object that corresponds to a positioning feature of the mobile robot map. For example, a positioning feature of the mobile robot map corresponds to a chair in the actual physical space, and the video stream acquired by the mobile robot includes an image of that chair. Based on the coordinate information in the robot coordinate system and at least one frame of the video stream, the processing device of the mobile robot can match, through an image matching algorithm, the positioning features in the at least one frame against the map, positioning features and coordinate information of the physical space pre-constructed by the mobile robot, so as to determine the positioning features in the image that match the map of the mobile robot. Here, in some examples, the processing device of the mobile robot invokes the same extraction algorithm as was used to extract positioning features from images when the mobile robot constructed its map, and uses it to extract candidate positioning features from the image. The extraction algorithm includes, but is not limited to, algorithms based on at least one of texture, shape, and spatial-relationship features. Texture-based extraction algorithms include, for example, texture feature analysis based on at least one of a gray-level co-occurrence matrix, a checkerboard feature method, a random field model method, and the like; shape-based extraction algorithms include, for example, at least one of a Fourier shape description method, a shape quantitative measurement method, and the like; spatial-relationship-based extraction algorithms are exemplified by the mutual spatial positions or relative directional relationships among a plurality of image blocks segmented from the image, where these relationships include, but are not limited to, connection/adjacency relationships, overlapping relationships, inclusion/containment relationships, and the like. The processing device of the mobile robot uses image matching techniques to match the candidate positioning features fs1 in the image with the positioning features fs2 of the mobile robot map, resulting in matched positioning features fs1'. The processing device of the mobile robot can then determine the correspondence between coordinates in the intelligent terminal map and coordinates in the mobile robot map based on the coordinates of fs1' in the intelligent terminal map and its coordinates in the mobile robot map. For example, the processing device of the mobile robot obtains the coordinates of the positioning feature of the chair in the robot coordinate system and its coordinates in the terminal coordinate system, and thereby obtains the correspondence between any coordinate in the terminal coordinate system and the corresponding coordinate in the robot coordinate system.
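For illustration only, the extraction-and-matching step could be realized with an off-the-shelf detector such as ORB in OpenCV; this is merely one concrete stand-in for the texture/shape/spatial-relation extractors named above, not the method of the disclosure, and the function and parameter names are assumptions.

```python
import cv2

def match_frame_to_map_features(frame_gray, map_descriptors, ratio=0.75):
    """Extract candidate positioning features fs1 from one video frame and
    match them against the descriptors fs2 of the robot's map features,
    returning the matched subset fs1'."""
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(descriptors, map_descriptors, k=2)
    matched = []
    for pair in matches:
        # Lowe's ratio test: keep a match only if it is clearly better
        # than the second-best candidate.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            matched.append(keypoints[pair[0].queryIdx])
    return matched
```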
In step S520, the processing device of the mobile robot determines the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on the correspondence between coordinate information in the robot coordinate system and coordinate information in the terminal coordinate system of the intelligent terminal, and on the coordinate information of the target area in the terminal coordinate system of the intelligent terminal. For example, if the target area corresponds to an area of the actual physical space in which garbage is scattered, the mobile robot can determine the plurality of coordinates of the target area in the mobile robot map based on the plurality of coordinates of the target area in the intelligent terminal map and the correspondence.
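For illustration only, once the correspondence is known (here the rotation R and translation t from the sketch above), mapping the target-area coordinates into the robot coordinate system is a direct transformation; names are assumptions.

```python
import numpy as np

def target_area_to_robot_frame(area_vertices_terminal, R, t):
    """Map the vertices of a target area from the terminal coordinate
    system into the robot coordinate system using the correspondence
    (R, t). Vertices are given as an (N, 2) array."""
    V = np.asarray(area_vertices_terminal, dtype=float)
    return (R @ V.T).T + t
```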
The processing device of the mobile robot controls the execution device of the mobile robot to perform the operation related to the at least one target area based on the coordinate information of the at least one target area in the robot coordinate system of the mobile robot obtained in step S520. The description of how the processing device controls the executing device to perform the operation related to the at least one target area is the same as or similar to that in step S420 and is not repeated here.
Wherein, in the case that the mobile robot acquires the video stream, the processing device of the mobile robot may also determine the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on a common recognition element that is a positioning feature common to the mobile robot and the smart terminal.
In summary, the mobile robot may obtain the coordinate information of the at least one target area in the mobile robot coordinate system based on the interaction method according to any embodiment of the present disclosure, and the processing device of the mobile robot may control the executing device to execute the operation related to the at least one target area.
Taking the example of the mobile robot as a cleaning robot working indoors, the intelligent terminal detects the input of the user in a state that the display device previews the indoor space interface.
In one embodiment, the positioning features of the map constructed by the cleaning robot in the robot coordinate system and the coordinates of those positioning features in the map are pre-stored in the intelligent terminal. The processing device of the intelligent terminal responds to the detected user input by creating, in the previewed indoor space interface, a target area that includes scattered garbage, and determines the coordinates of that target area in the robot coordinate system of the mobile robot based on the coordinates of the common identification elements in the robot coordinate system and their coordinates in the terminal coordinate system, respectively. The description of the common identification element is the same as or similar to the common identification element mentioned in step S210 and is not detailed here.
The processing device of the intelligent terminal generates an interactive instruction based on the coordinates of the target area containing the scattered garbage in the robot coordinate system of the cleaning robot and sends the interactive instruction to the cleaning robot directly or through the server. The interaction instruction may contain only the coordinates of the target area including the scattered garbage in the robot coordinate system of the cleaning robot. When the processing device of the cleaning robot receives the interactive instruction through the interface device, it can directly generate a navigation route entering the target area including the scattered garbage based on the interactive instruction, control the mobile device to perform navigation movement based on the navigation route, and control the cleaning device to clean the scattered garbage on the floor when the cleaning robot reaches the target area including the scattered garbage. The interaction instruction may further comprise a second input of the user. For a cleaning robot, the second input of the user includes, but is not limited to: cleaning or not cleaning the target area, the intensity with which the target area is cleaned, and entering or not entering the target area. For example, the second input is to deep-clean the target area; when the processing device of the cleaning robot receives the interactive instruction through the interface device, it generates a navigation route entering the target area including the scattered garbage based on the interactive instruction, controls the mobile device to perform navigation movement based on the navigation route, and, when the cleaning robot reaches the target area including the scattered garbage, controls the fan, side brush and rolling brush of the cleaning device so that the cleaning device deep-cleans the garbage scattered on the floor.
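For illustration only, a minimal sketch of how a deep-clean second input might be mapped onto the fan, side brush and rolling brush; the power values and device interface are assumptions, not the disclosure's implementation.

```python
def apply_cleaning_intensity(cleaning_device, level):
    """Map a requested cleaning intensity from the second input onto the
    fan, side brush and rolling brush of the cleaning device."""
    presets = {
        "light":  {"fan": 0.4, "side_brush": 0.5, "rolling_brush": 0.5},
        "normal": {"fan": 0.7, "side_brush": 0.7, "rolling_brush": 0.7},
        "deep":   {"fan": 1.0, "side_brush": 1.0, "rolling_brush": 1.0},
    }
    settings = presets.get(level, presets["normal"])
    cleaning_device.set_fan_power(settings["fan"])
    cleaning_device.set_side_brush_power(settings["side_brush"])
    cleaning_device.set_rolling_brush_power(settings["rolling_brush"])
```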
In another embodiment, the positioning features of the map constructed by the cleaning robot in the robot coordinate system and the coordinates of those positioning features in the map are pre-stored in the server. For example, the processing device of the intelligent terminal responds to the detected user input by creating, in the previewed indoor space interface, a target area that includes pet feces. The server acquires the target area including the pet feces from the intelligent terminal, and the processing device of the server determines the coordinates of that target area in the robot coordinate system of the mobile robot based on the coordinates of the common identification elements in the robot coordinate system and their coordinates in the terminal coordinate system, respectively. The description of the common identification elements is the same as or similar to the common identification elements mentioned in steps S311 and S313 and is not detailed here.
The server generates an interactive instruction based on the coordinates of the target area including the pet feces in the robot coordinate system of the mobile robot and sends it to the mobile robot through the interface device. The interaction instruction comprises the coordinates of the target area including the pet feces in the robot coordinate system of the cleaning robot and the user's second input of not entering the target area. When the processing device of the cleaning robot receives the interactive instruction through the interface device, it generates a navigation route that does not enter the target area containing the pet feces based on the interactive instruction, and controls the mobile device to perform navigation movement based on the navigation route. It should be noted that the second input is not limited to not entering the target area; the second input may be related to the target area created by the intelligent terminal based on the actual user input.
In yet another embodiment, the positioning features of the map constructed by the cleaning robot in the robot coordinate system and the coordinates of those positioning features in the map are pre-stored in the cleaning robot. For example, the processing device of the intelligent terminal responds to the detected user input by creating, in the previewed indoor space interface, a target area that includes a winding object. The processing device of the cleaning robot acquires, through the interface device, an interactive instruction from the intelligent terminal or forwarded by the server, wherein the interactive instruction contains the coordinates of the target area including the winding object in the terminal coordinate system of the intelligent terminal. The processing device of the cleaning robot may then determine the coordinates of the target area including the winding object in the robot coordinate system of the cleaning robot based on the coordinates of the common identification elements in the robot coordinate system and their coordinates in the terminal coordinate system, respectively. The description of the common identification element is the same as or similar to the common identification element mentioned in step S510 and is not detailed here.
The cleaning robot generates a navigation route that does not enter the target area including the winding object based on the coordinates of that target area in the robot coordinate system of the cleaning robot and the user's second input of not entering the target area, and controls the mobile device to perform navigation movement based on the navigation route. It should be noted that the second input is not limited to not entering the target area; the second input is related to the target area created by the intelligent terminal based on the actual user input.
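For illustration only, one common way to make a planner bypass a do-not-enter target area is to mark the corresponding cells of an occupancy grid as non-traversable before planning; the grid conventions below are assumptions, not part of the disclosure.

```python
import numpy as np

def mark_no_go_zone(occupancy_grid, area_vertices_robot, resolution, origin):
    """Mark the cells covered by a do-not-enter target area as occupied so
    that any standard grid planner (A*, D*, etc.) routes around it.
    A simple axis-aligned bounding-box fill is used here; a real
    implementation would rasterize the polygon exactly."""
    vx = [(x - origin[0]) / resolution for x, _ in area_vertices_robot]
    vy = [(y - origin[1]) / resolution for _, y in area_vertices_robot]
    x0, x1 = int(min(vx)), int(max(vx)) + 1
    y0, y1 = int(min(vy)), int(max(vy)) + 1
    occupancy_grid[y0:y1, x0:x1] = 1    # 1 = not traversable
    return occupancy_grid
```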
The application also provides a control system of the mobile robot, which comprises an intelligent terminal and the mobile robot. The hardware devices of the mobile robot and the intelligent terminal in the control system and the interaction method performed by each are the same as or similar to those of the mobile robot and the intelligent terminal mentioned in the foregoing embodiments and the interaction method performed by each, and are not described in detail here.
The present application also provides a computer readable storage medium for storing at least one program, which when invoked, performs the interaction method described above with respect to the embodiment of fig. 2.
The interaction method, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application.
In the embodiments provided herein, the computer-readable and writable storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable and writable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are intended to be non-transitory, tangible storage media. Disk and disc, as used in this application, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (31)

1. An interaction method with a mobile robot is used for an intelligent terminal at least comprising a display device, and is characterized by comprising the following steps:
detecting an input of a user in a state where the display apparatus previews a physical space interface;
creating at least one target area in the previewed physical space interface in response to the detected input;
acquiring a consensus element related to the at least one target area; the consensus element is used for determining the coordinate position of the at least one target area in the robot coordinate system; the target area comprises coordinate information of a terminal coordinate system of the intelligent terminal, and the coordinate information has a corresponding relation with coordinate information in a robot coordinate system of the mobile robot; the mobile robot is provided with at least one camera device for shooting images of the operating environment of the mobile robot so as to enable the mobile robot to execute VSLAM work;
generating an interaction instruction to send to the mobile robot based on the at least one target area.
2. The method of claim 1, wherein the step of detecting the user's input in a state where the display device previews the physical space interface comprises:
displaying a video stream shot by a camera device of the intelligent terminal in real time in a physical space interface previewed by the display device;
and detecting the input of a user in the physical space interface by using an input device of the intelligent terminal.
3. The method of interacting with a mobile robot of claim 2, wherein the detected input comprises at least one of: slide input operation, click input operation.
4. The method of interacting with a mobile robot according to claim 1, wherein the step of detecting the input of the user in a state where the display device previews the physical space interface comprises:
displaying a video stream shot by a camera device of the intelligent terminal in real time in a physical space interface previewed by the display device;
and detecting a mobile sensing device in the intelligent terminal to obtain the input of the user.
5. The method of interacting with a mobile robot of claim 1, further comprising: and constructing the terminal coordinate system in a state of previewing the physical space interface so as to respond to the detected input in a state of finishing constructing the terminal coordinate system.
6. The interaction method with a mobile robot according to claim 1, wherein coordinate information in the robot coordinate system of the mobile robot is pre-stored in the intelligent terminal; or coordinate information in a robot coordinate system of the mobile robot is prestored in a cloud server connected with the intelligent terminal network; or the coordinate information in the robot coordinate system of the mobile robot is prestored in the mobile robot connected with the intelligent terminal network.
7. The interaction method with a mobile robot according to claim 1 or 6, further comprising:
determining the corresponding relation based on the coordinate information of the common recognition elements extracted from the previewed physical space interface under the robot coordinate system and the coordinate information under the terminal coordinate system respectively;
and determining coordinate information of the at least one target area in a robot coordinate system of the mobile robot based on the corresponding relation.
8. The method of claim 7, wherein the step of generating an interactive command to send to the mobile robot based on the at least one target area comprises:
generating an interaction instruction comprising the at least one target area described with the coordinate information in the robot coordinate system to send to the mobile robot.
9. The method of claim 1, wherein the step of generating an interactive command to send to the mobile robot based on the at least one target area comprises:
generating an interaction instruction containing the at least one target area and a consensus element related to the creation of the at least one target area for transmission to the mobile robot; wherein the consensus element is used to determine a coordinate position of the at least one target area in the robot coordinate system.
10. The method of claim 1, further comprising at least one of:
prompting a user to perform input operation by utilizing the physical space interface;
prompting a user to perform input operation by using sound; or
prompting the user to perform an input operation by using vibration.
11. The method of interacting with a mobile robot of claim 1, wherein in the step of detecting the user's input in a state where the display device previews the physical space interface, the user's input is detected as a first input, the method further comprising: and generating an interactive instruction to be sent to the mobile robot based on the target area and the second input of the detection user.
12. The method of interacting with a mobile robot of claim 11, wherein the second input comprises any of: cleaning or not cleaning the target area, entering or not entering the target area, collating or not collating items within the target area.
13. An intelligent terminal, comprising:
the display device is used for providing preview operation for a physical space interface;
storage means for storing at least one program;
the interface device is used for carrying out communication interaction with a mobile robot;
processing means coupled to said display means, storage means and interface means for executing said at least one program to coordinate said display means, storage means and interface means to perform the interaction method of any of claims 1-12.
14. A server, comprising:
storage means for storing at least one program;
the interface device is used for assisting an intelligent terminal and the mobile robot to carry out communication interaction;
the processing device is connected with the storage device and the interface device and used for executing the at least one program so as to coordinate the storage device and the interface device to execute the following interaction method:
acquiring at least one target area from the intelligent terminal and acquiring a consensus element related to the at least one target area; wherein the consensus element is used to determine a coordinate position of the at least one target area in a robot coordinate system; the target area is obtained by detecting user input through the intelligent terminal, the target area comprises coordinate information of a terminal coordinate system of the intelligent terminal, and the coordinate information has a corresponding relation with coordinate information in a robot coordinate system of the mobile robot; the mobile robot is provided with at least one camera device for shooting images of the operating environment of the mobile robot so as to enable the mobile robot to execute VSLAM work;
generating an interaction instruction based on the at least one target area to send to the mobile robot through the interface device.
15. The server according to claim 14, wherein the storage device is pre-stored with the robot coordinate system; or the processing device acquires the robot coordinate system from the intelligent terminal or the mobile robot through an interface device.
16. The server according to claim 15, wherein the processing device further obtains a video stream captured by the intelligent terminal through an interface device;
the processing device determines the corresponding relation based on the coordinate information of the consensus elements provided by the video stream under the robot coordinate system and the coordinate information under the terminal coordinate system respectively; and
and determining coordinate information of the at least one target area in a robot coordinate system of the mobile robot based on the corresponding relation.
17. The server of claim 16, wherein the processing device generates an interactive command to send to the mobile robot based on the at least one target area, comprising:
generating an interaction instruction comprising the at least one target area described with the coordinate information in the robot coordinate system for transmission to the mobile robot through the interface device.
18. The server of claim 14, wherein the step of generating an interactive command to send to the mobile robot based on the at least one target area comprises:
acquiring a consensus element related to the creation of the at least one target area from the intelligent terminal; wherein the consensus element is used to determine a coordinate position of the at least one target area in the robot coordinate system;
generating an interaction instruction comprising the at least one target area and the consensus element for transmission to the mobile robot through the interface device.
19. The server according to claim 14, wherein the processing device further obtains a second input from the smart terminal through the interface device, and the processing device further performs generating an interactive command to send to the mobile robot based on the target area and the second input.
20. The server according to claim 19, wherein the second input comprises any of: cleaning or not cleaning the target area, entering or not entering the target area, collating or not collating items within the target area.
21. A mobile robot, comprising:
a storage device for storing at least one program and a robot coordinate system constructed in advance;
the interface device is used for carrying out communication interaction with an intelligent terminal;
the execution device is used for controlling to execute corresponding operations;
the mobile robot comprises at least one camera device, a control device and a display device, wherein the camera device is used for shooting images of the operating environment of the mobile robot so as to enable the mobile robot to execute VSLAM work;
the processing device is connected with the storage device, the interface device and the execution device and is used for executing the at least one program so as to coordinate the storage device and the interface device to execute the following interaction method:
acquiring an interactive instruction from the intelligent terminal and acquiring a consensus element related to the at least one target area; wherein the consensus element is used to determine a coordinate position of the at least one target area in the robot coordinate system; the interaction instruction comprises at least one target area; the target area is obtained by detecting user input through the intelligent terminal, the target area comprises coordinate information of a terminal coordinate system of the intelligent terminal, and the coordinate information has a corresponding relation with coordinate information in a robot coordinate system;
controlling the execution device to execute the operation related to the at least one target area.
22. The mobile robot of claim 21, wherein the processing device provides the robot coordinate system of the mobile robot to the smart terminal or a cloud server through the interface device for obtaining the interaction command.
23. The mobile robot of claim 22, wherein the processing device performing operations related to the at least one target area comprises:
parsing the interactive instructions to obtain at least: including the at least one target area described with coordinate information in a robot coordinate system;
controlling the execution device to execute the operation related to the at least one target area.
24. The mobile robot of claim 21, wherein the processing device further performs the steps of:
determining the corresponding relation based on the coordinate information of the consensus elements in the robot coordinate system and the coordinate information of the consensus elements in the terminal coordinate system respectively; and
and determining coordinate information of the at least one target area in a robot coordinate system of the mobile robot based on the corresponding relation.
25. The mobile robot of claim 21, wherein the processing device further obtains a second input from the smart terminal via the interface device, and wherein the processing device further performs controlling the execution device to perform an operation associated with the at least one target area based on the second input.
26. The mobile robot of claim 25, wherein the second input comprises any one of: cleaning or not cleaning the target area, the force to clean the target area, entering or not entering the target area, and sorting or not sorting items within the target area.
27. The mobile robot of claim 25, wherein the performing means comprises a mobile device, and wherein the processing means generates a navigation route associated with the at least one target area based on the second input and controls the mobile device to perform a navigation movement based on the navigation route.
28. The mobile robot of claim 25, wherein the performing means comprises a cleaning device, and the processing means controls a cleaning operation of the cleaning device within the at least one target area based on the second input.
29. The mobile robot of claim 21, wherein the mobile robot comprises: cleaning robot, inspection robot, transfer robot.
30. A control system of a mobile robot, comprising:
the intelligent terminal of claim 13;
the mobile robot of any one of claims 21-29.
31. A computer-readable storage medium, characterized by storing at least one program which, when called, executes and implements an interaction method according to any one of claims 1 to 12.
CN201980094943.6A 2019-09-27 2019-09-27 Intelligent terminal, control system and interaction method with mobile robot Active CN113710133B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/108590 WO2021056428A1 (en) 2019-09-27 2019-09-27 Intelligent terminal, control system, and method for interaction with mobile robot

Publications (2)

Publication Number Publication Date
CN113710133A CN113710133A (en) 2021-11-26
CN113710133B true CN113710133B (en) 2022-09-09

Family

ID=75164788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980094943.6A Active CN113710133B (en) 2019-09-27 2019-09-27 Intelligent terminal, control system and interaction method with mobile robot

Country Status (2)

Country Link
CN (1) CN113710133B (en)
WO (1) WO2021056428A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113848892B (en) * 2021-09-10 2024-01-16 广东盈峰智能环卫科技有限公司 Robot cleaning area dividing method, path planning method and device
CN114153310A (en) * 2021-11-18 2022-03-08 天津塔米智能科技有限公司 Robot guest greeting method, device, equipment and medium
CN114431800B (en) * 2022-01-04 2024-04-16 北京石头世纪科技股份有限公司 Control method and device for cleaning robot zoning cleaning and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015055969A (en) * 2013-09-11 2015-03-23 学校法人常翔学園 Mobile robot, mobile robot control system, sheet where control figure is displayed and program
CN109725632A (en) * 2017-10-30 2019-05-07 速感科技(北京)有限公司 Removable smart machine control method, removable smart machine and intelligent sweeping machine
CN110147091A (en) * 2018-02-13 2019-08-20 深圳市优必选科技有限公司 Motion planning and robot control method, apparatus and robot

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102094347B1 (en) * 2013-07-29 2020-03-30 삼성전자주식회사 Auto-cleaning system, cleaning robot and controlling method thereof
KR20180024600A (en) * 2016-08-30 2018-03-08 엘지전자 주식회사 Robot cleaner and a system inlduing the same
CN106933227B (en) * 2017-03-31 2020-12-18 联想(北京)有限公司 Method for guiding intelligent robot and electronic equipment
CN109262607A (en) * 2018-08-15 2019-01-25 武汉华安科技股份有限公司 Robot coordinate system's conversion method
CN110200549A (en) * 2019-04-22 2019-09-06 深圳飞科机器人有限公司 Clean robot control method and Related product

Also Published As

Publication number Publication date
WO2021056428A1 (en) 2021-04-01
CN113710133A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN111989537B (en) System and method for detecting human gaze and gestures in an unconstrained environment
CN113710133B (en) Intelligent terminal, control system and interaction method with mobile robot
RU2644520C2 (en) Non-contact input
CN110310175A (en) System and method for mobile augmented reality
US9081419B2 (en) Natural gesture based user interface methods and systems
JP5942456B2 (en) Image processing apparatus, image processing method, and program
CN107430686A (en) Mass-rent for the zone profiles of positioning of mobile equipment creates and renewal
WO2019179442A1 (en) Interaction target determination method and apparatus for intelligent device
WO2020223975A1 (en) Method of locating device on map, server, and mobile robot
KR102032662B1 (en) Human-computer interaction with scene space monitoring
CN107438853A (en) The privacy filtering of zone profiles before uploading
KR20210029586A (en) Method of slam based on salient object in image and robot and cloud server implementing thereof
CN110533694A (en) Image processing method, device, terminal and storage medium
KR101470757B1 (en) Method and apparatus for providing augmented reality service
KR101967343B1 (en) Appartus for saving and managing of object-information for analying image data
US20200357177A1 (en) Apparatus and method for generating point cloud data
CN113116224A (en) Robot and control method thereof
CN111736709A (en) AR glasses control method, device, storage medium and apparatus
JP2021152942A (en) Dress coordination method and device, computing device and medium
CN111872928B (en) Obstacle attribute distinguishing method and system and intelligent robot
CN102799344B (en) Virtual touch screen system and method
US20230224576A1 (en) System for generating a three-dimensional scene of a physical environment
JP6256545B2 (en) Information processing apparatus, control method and program thereof, and information processing system, control method and program thereof
Chen et al. A 3-D point clouds scanning and registration methodology for automatic object digitization
JP6304305B2 (en) Image processing apparatus, image processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant