WO2021056428A1 - Intelligent terminal, control system and method for interacting with a mobile robot - Google Patents

Intelligent terminal, control system and method for interacting with a mobile robot

Info

Publication number
WO2021056428A1
Authority
WO
WIPO (PCT)
Prior art keywords
target area
mobile robot
robot
coordinate system
input
Prior art date
Application number
PCT/CN2019/108590
Other languages
English (en)
Chinese (zh)
Inventor
李重兴
崔彧玮
Original Assignee
珊口(深圳)智能科技有限公司
珊口(上海)智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 珊口(深圳)智能科技有限公司, 珊口(上海)智能科技有限公司 filed Critical 珊口(深圳)智能科技有限公司
Priority to PCT/CN2019/108590 priority Critical patent/WO2021056428A1/fr
Priority to CN201980094943.6A priority patent/CN113710133B/zh
Publication of WO2021056428A1 publication Critical patent/WO2021056428A1/fr

Classifications

    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L 9/00 - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 - Control of position or course in two dimensions

Definitions

  • This application relates to the field of mobile robot interaction technology, and in particular to an intelligent terminal, a control system, and an interaction method with a mobile robot.
  • In the prior art, a designated position is usually determined from the user's voice, gestures, or similar instructions, and the mobile robot then takes a preset range centered on that position as the target area; alternatively, the user can make the mobile robot determine the target area by editing a map pre-built by the mobile robot.
  • The purpose of this application is to provide an intelligent terminal, a control system, and an interaction method with a mobile robot, so as to solve the problem in the prior art that a mobile robot cannot determine an accurate target area from user instructions.
  • The first aspect of the present application provides an interaction method with a mobile robot, applied to a smart terminal that includes at least a display device, and including the following steps: detecting the user's input while the display device previews a physical space interface; in response to the detected input, creating at least one target area in the previewed physical space interface, where the target area includes coordinate information in the terminal coordinate system of the smart terminal and that coordinate information has a corresponding relationship with coordinate information in the robot coordinate system of the mobile robot; and generating, based on the at least one target area, an interactive command to be sent to the mobile robot.
  • The step of detecting the user's input while the display device previews the physical space interface includes: displaying, in the physical space interface previewed by the display device, the real-time video stream captured by the camera device of the smart terminal, and detecting the user's input in the physical space interface by means of an input device of the smart terminal.
  • the detected input includes at least one of the following: a sliding input operation and a tap input operation.
  • In another implementation, the step of detecting the user's input while the display device previews the physical space interface includes: displaying, in the physical space interface previewed by the display device, the real-time video stream captured by the camera device of the smart terminal, and detecting a movement sensing device in the smart terminal to obtain the user's input.
  • The method further includes: constructing the terminal coordinate system while previewing the physical space interface, so that the detected input is responded to once the construction of the terminal coordinate system is complete.
  • The coordinate information in the robot coordinate system of the mobile robot is pre-stored in the smart terminal; or it is pre-stored in a cloud server networked with the smart terminal; or it is pre-stored in the mobile robot networked with the smart terminal.
  • The interaction method further includes: determining the corresponding relationship based on the coordinate information, in the robot coordinate system and in the terminal coordinate system, of consensus elements extracted from the previewed physical space interface; and determining, based on the corresponding relationship, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
  • The step of generating an interactive command based on the at least one target area to be sent to the mobile robot includes: generating an interactive instruction containing the at least one target area described by coordinate information in the robot coordinate system, and sending it to the mobile robot.
  • The step of generating an interactive command based on the at least one target area to be sent to the mobile robot includes: generating an interactive instruction containing the at least one target area and at least one consensus element related to the creation of the target area, and sending it to the mobile robot; the consensus element is used to determine the coordinate position of the at least one target area in the robot coordinate system.
  • it further includes at least one of the following steps: using the physical space interface to prompt the user to perform an input operation; using voice to prompt the user to perform an input operation; or using vibration to prompt the user to perform an input operation .
  • The method further includes: detecting a second input of the user, and generating, based on the target area and the second input, an interactive command to be sent to the mobile robot.
  • the second input includes any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, sorting or not sorting items in the target area.
  • The second aspect of the present application also provides an intelligent terminal, including: a display device, used to provide a preview of a physical space interface; a storage device, used to store at least one program; an interface device, used to communicate and interact with a mobile robot; and a processing device, connected to the display device, storage device, and interface device, and used to execute the at least one program to coordinate the display device, storage device, and interface device in performing the interaction method described in any one of the first aspect of this application.
  • The third aspect of the present application also provides a server, including: a storage device for storing at least one program; an interface device for assisting the communication and interaction between an intelligent terminal and a mobile robot; and a processing device, connected to the storage device and the interface device, for executing the at least one program to coordinate the storage device and the interface device in performing the following interaction method: obtaining at least one target area from the smart terminal, where the target area is obtained by the smart terminal detecting user input, the target area includes coordinate information in the terminal coordinate system of the smart terminal, and that coordinate information has a corresponding relationship with coordinate information in the robot coordinate system of the mobile robot; and generating, based on the at least one target area, an interactive command to be sent to the mobile robot through the interface device.
  • the storage device prestores the robot coordinate system; or the processing device obtains the robot coordinate system from the smart terminal or mobile robot through an interface device.
  • The processing device also obtains, through the interface device, the video stream captured by the smart terminal; based on the coordinate information of consensus elements provided by the video stream in the robot coordinate system and in the terminal coordinate system, the processing device determines the corresponding relationship, and then determines, based on that corresponding relationship, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
  • The step in which the processing device generates an interactive command based on the at least one target area to be sent to the mobile robot includes: generating an interactive instruction containing the at least one target area described by coordinate information in the robot coordinate system, and sending it to the mobile robot through the interface device.
  • The step of generating an interactive command based on the at least one target area to be sent to the mobile robot includes: acquiring from the smart terminal the consensus elements related to the creation of the at least one target area, where the consensus elements are used to determine the coordinate position of the at least one target area in the robot coordinate system; and generating an interactive instruction containing the at least one target area and the consensus elements, to be sent to the mobile robot through the interface device.
  • The processing device further obtains a second input from the smart terminal through the interface device, and generates, based on the target area and the second input, an interactive command to be sent to the mobile robot.
  • the second input includes any of the following: cleaning or not cleaning the target area, entering or not entering the target area, and sorting or not sorting items in the target area.
  • The fourth aspect of the present application also provides a mobile robot, including: a storage device for storing at least one program and a pre-built robot coordinate system; an interface device for communicating and interacting with an intelligent terminal; an execution device for performing corresponding operations under control; and a processing device, connected to the storage device, interface device, and execution device, for executing the at least one program to coordinate the storage device and the interface device in performing the following interaction method: obtaining an interactive instruction from the smart terminal, where the interactive instruction includes at least one target area, the target area is obtained by the intelligent terminal detecting user input, the target area includes coordinate information in the terminal coordinate system of the intelligent terminal, and that coordinate information has a corresponding relationship with coordinate information in the robot coordinate system; and controlling the execution device to perform an operation related to the at least one target area.
  • the processing device provides the robot coordinate system of the mobile robot to the smart terminal or a cloud server through an interface device for obtaining the interactive instruction.
  • The step in which the processing device performs an operation related to the at least one target area includes: parsing the interactive instruction to obtain at least the at least one target area described by coordinate information in the robot coordinate system, and controlling the execution device to perform the operation related to the at least one target area.
  • The processing device also obtains, from the smart terminal through the interface device, the consensus elements related to the creation of the at least one target area, where the consensus elements are used to determine the coordinate position of the at least one target area in the robot coordinate system; the processing device further performs the following steps: determining the corresponding relationship based on the coordinate information of the consensus elements in the robot coordinate system and in the terminal coordinate system, and determining, based on that corresponding relationship, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
  • The processing device also obtains a second input from the smart terminal through the interface device, and based on the second input controls the execution device to perform an operation related to the at least one target area.
  • The second input includes any one of the following: cleaning or not cleaning the target area, the intensity with which the target area is cleaned, entering or not entering the target area, and sorting or not sorting items in the target area.
  • The execution device includes a mobile device, and the processing device generates a navigation route related to the at least one target area based on the second input and controls the mobile device to perform navigational movement along the navigation route.
  • the execution device includes a cleaning device, and the processing device controls a cleaning operation of the cleaning device in the at least one target area based on the second input.
  • the mobile robot includes: a cleaning robot, a patrol robot, and a handling robot.
  • the fifth aspect of the present application also provides a control system for a mobile robot, including: the smart terminal as described in the second aspect of the present application; and the mobile robot as described in any of the fourth aspect of the present application.
  • The sixth aspect of the present application also provides a computer-readable storage medium storing at least one program; when the at least one program is called, it executes and implements the interaction method described in any one of the first aspect of the present application.
  • In summary, the smart terminal, control system, and interaction method with a mobile robot of the present application use a smart terminal that has positioning and mapping functions and a display device to detect user input and create at least one target area. At least one of the smart terminal and the mobile robot then maps the target area, described by the coordinate information of the smart terminal, into the map of the mobile robot, so that the mobile robot can perform, or refrain from performing, a predetermined operation in the target area based on an accurate target area in its own map. This improves the accuracy with which the mobile robot determines the user-specified target area during human-computer interaction, and reduces the difficulty the user faces in locating the target area when editing the mobile robot map.
  • FIG. 1 shows a schematic diagram of the structure of the smart terminal of this application in an embodiment.
  • FIG. 2 shows a schematic flowchart of an embodiment of the method for interacting with a mobile robot according to the present application.
  • FIG. 3a shows a schematic diagram of a target area created in the previewed physical space interface by the smart terminal of this application in an embodiment.
  • FIG. 3b shows a schematic diagram of another embodiment of the target area created by the smart terminal of the present application in the previewed physical space interface.
  • FIG. 3c shows a schematic diagram of the target area created in the previewed physical space interface by the smart terminal of this application in another embodiment.
  • FIG. 4 shows a schematic diagram of a coordinate system established by the smart terminal of this application in a specific embodiment.
  • FIG. 5 shows a schematic diagram of a virtual button of the smart terminal of this application in an embodiment.
  • FIG. 6 shows a schematic diagram of the network architecture for interaction between the smart terminal, the server, and the mobile robot of this application.
  • FIG. 7 shows a schematic flowchart of another embodiment of the method for interacting with a mobile robot according to the present application.
  • FIG. 8 shows a schematic diagram of the structure of the server of this application in an embodiment.
  • FIG. 9 shows a schematic diagram of the structure of the mobile robot of this application in an embodiment.
  • FIG. 10 shows a schematic flowchart of another embodiment of the interaction method of this application.
  • first, second, etc. are used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element.
  • For example, the first input may be referred to as the second input, and similarly, the second input may be referred to as the first input, without departing from the scope of the various described embodiments. Both the first input and the second input describe an input, but unless the context clearly indicates otherwise, they are not the same input.
  • "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C".
  • An exception to this definition will only occur when the combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
  • In the prior art, the mobile robot determines the target area as a preset range centered on the designated position. As a result, the mobile robot cannot determine a precise target area from the corresponding user instruction; in particular, when the target area is irregular, it cannot be determined accurately, and the mobile robot is therefore unable to perform navigational movement or behavior control based on a precise target area.
  • For example, a user who wants the cleaning robot to clean around the dining table usually sends an instruction that includes the words "dining table".
  • On receiving such an instruction, the mobile robot usually determines only a target area of a preset range centered on the dining table to clean. In practical applications, if the user wants the cleaning robot to clean an irregular area and cannot accurately describe that area with voice or gesture instructions, it is difficult for the cleaning robot to effectively clean the intended target area.
  • the user edits the pre-built map of the mobile robot to make the mobile robot determine the coordinate information of the target area in its map.
  • the map constructed by the mobile robot is not intuitive and difficult to distinguish for the user, and the user cannot immediately determine the location of the target area in the actual physical space on the mobile robot map.
  • To this end, this application provides an intelligent terminal, a control system, and an interaction method with a mobile robot, in which at least one target area is created on the intelligent terminal based on user input, and the intelligent terminal generates, based on the at least one target area, an interactive instruction that is sent to the mobile robot.
  • the mobile robot is a machine device that automatically performs specific tasks. It can accept human commands, run pre-arranged programs, or act according to principles and programs formulated with artificial intelligence technology.
  • This type of mobile robot can be used indoors or outdoors. It can be used in industry, commerce or households. It can be used to replace security patrols, to replace greeters or orderers, or to replace people to clean the ground. It can also be used for family accompaniment, auxiliary office, etc.
  • The mobile robot is provided with at least one camera device for capturing images of its operating environment, so as to perform VSLAM (Visual Simultaneous Localization and Mapping); according to the constructed map, the mobile robot can plan paths for inspection, cleaning, and tidying-up tasks.
  • The mobile robot caches the map built during its operation in its local storage space, or uploads it to a cloud server.
  • FIG. 1 shows a schematic diagram of the structure of the smart terminal of this application in an embodiment.
  • the smart terminal includes a display device 11, a storage device 12, an interface device 13, a processing device 14, a camera device (not shown), and the like.
  • the smart terminal may be a device such as a smart phone, AR glasses, and a tablet computer.
  • the display device 11 is a human-machine interface device for providing a physical space interface for the user to preview.
  • the display device 11 can transform the coordinate information or various data information of the smart terminal map into various characters, numbers, symbols, or intuitive images and other electronic files for display.
  • An input device or a movement sensing device can be used to enter user input or data into the smart terminal, and the displayed content can be added to, deleted, or changed at any time with the help of the processing device 14 of the smart terminal.
  • The display device 11 can be of different types, such as plasma, liquid crystal, light-emitting diode, or cathode-ray-tube display devices, depending on the display technology used.
  • the display device 11 of the present application can provide the user with a physical space interface for the user to view and use the electronic file of the smart terminal.
  • the physical space interface of the display device 11 displays the image corresponding to the actual physical space captured by the smart terminal camera device to the user.
  • the physical space interface displays the intuitive physical space to the user by calling the image of the actual physical space captured by the camera device of the smart terminal. For example, if the camera device of the smart terminal is capturing a pile of rice scattered on the kitchen floor, the physical space interface displays an image including the scattered pile of rice and the kitchen floor.
  • the physical space is the actual space where the mobile robot works.
  • the mobile robot is a cleaning robot
  • The physical space may be the living or working space that the user needs the cleaning robot to clean.
  • the storage device 12 is used to store at least one program. Wherein, the at least one program can be used by the processing device to execute the interaction method described in this application.
  • the storage device 12 also stores coordinate information in the robot coordinate system of the mobile robot.
  • the storage device 12 includes but is not limited to: read-only memory (Read-Only Memory, ROM for short), random access memory (Random Access Memory, RAM for short), and nonvolatile RAM (Nonvolatile RAM, NVRAM for short).
  • the storage device 12 includes a flash memory device or other non-volatile solid-state storage devices.
  • The storage device 12 may also include storage remote from the one or more processing devices, for example network-attached storage accessed via an RF circuit or an external port and a communication network, where the communication network may be the Internet, one or more intranets, local area networks (LAN), wide area networks (WAN), storage area networks (SAN), etc., or an appropriate combination thereof.
  • The storage device 12 also includes a memory controller, which can control access to the memory by components of the smart terminal such as the central processing unit (CPU) and the interface device 13.
  • the interface device 13 is used to communicate and interact with a mobile robot or a server.
  • the interface device 13 may send the interactive instruction generated by the smart terminal to the server or the mobile robot.
  • the interface device 13 sends an instruction for acquiring coordinate information in the coordinate system of the mobile robot to the mobile robot or the server.
  • the interface device 13 includes a network interface, a data line interface, and the like.
  • the network interface includes, but is not limited to: an Ethernet network interface device, a network interface device based on mobile networks (3G, 4G, 5G, etc.), a network interface device based on short-distance communication (WiFi, Bluetooth, etc.), and the like.
  • the data line interface includes but is not limited to: USB interface, RS232, etc.
  • The interface device 13 connects the display device 11, the storage device 12, and the processing device 14 for data exchange with the Internet, with a mobile robot located in the physical space, with a server, and the like.
  • The processing device 14 is connected to the display device 11, the storage device 12, and the interface device 13, and is used to execute the at least one program so as to coordinate the display device 11, the storage device 12, and the interface device 13 in performing the interaction method described in this application.
  • the processing device 14 includes one or more processors.
  • the processing device 14 is operable to perform data read and write operations with the storage device 12.
  • the processing device 14 performs operations such as extracting images, temporarily storing features, and locating in a map based on features.
  • the processing device 14 includes one or more general-purpose microprocessors, one or more special-purpose processors (ASIC), one or more digital signal processors (Digital Signal Processor, DSP for short), and one or more field programmable processors.
  • the processing device 14 is also operatively coupled with an input device, which can enable a user to interact with the smart terminal. Therefore, input devices may include buttons, keyboards, mice, touch pads, and so on.
  • the camera device is used to capture images in the actual physical space in real time, and includes, but is not limited to: a monocular camera device, a binocular camera device, a multi-eye camera device, a depth camera device, and the like.
  • FIG. 2 shows a schematic flowchart of an embodiment of the method for interacting with a mobile robot according to the present application.
  • the interaction method is used for interaction between a smart terminal and a mobile robot, and the smart terminal has a display device.
  • step S110 the user's input is detected in a state where the display device previews the physical space interface.
  • the state of the preview physical space interface means that the physical space interface of the display device can display real-time images of the actual physical space captured by the camera device of the smart terminal for the user to view and use.
  • While the display device previews the physical space interface, the user can view the images captured by the smart terminal in real time, so that, based on the intuitive image displayed on the physical space interface, the user can relate what is shown to the corresponding area and position in the actual physical space.
  • For example, the AR application interface of a mobile phone can display, in real time, images of the actual physical space captured by the smart phone, and the user can immediately relate the image displayed in the AR application interface to the corresponding area of the actual physical space.
  • step S110 further includes the step of constructing the terminal coordinate system by the smart terminal in the state of previewing the physical space interface, so as to respond to the detected input in the state of completing the construction of the terminal coordinate system.
  • the smart terminal first constructs a map corresponding to the actual physical space while previewing the physical space interface and stores coordinate information corresponding to the map.
  • the terminal coordinate system is constructed to describe the coordinate information corresponding to the smart terminal map.
  • the coordinate information includes: positioning features, coordinates of the positioning features in the map, and the like. Wherein, the positioning feature includes, but is not limited to: feature points, feature lines, and so on.
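  • Purely as an illustrative sketch (the data structures and names below are assumptions, not part of this application), the coordinate information described above, i.e. positioning features together with their coordinates in the map, could be represented roughly as follows:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PositioningFeature:
    """A feature point or feature line used to localize within the map (illustrative)."""
    feature_id: int
    descriptor: bytes                  # e.g. a binary descriptor of the feature
    coord: Tuple[float, float]         # coordinates of the feature in the terminal coordinate system

@dataclass
class TerminalMap:
    """Coordinate information of the map built by the smart terminal (illustrative)."""
    features: List[PositioningFeature] = field(default_factory=list)

    def add_feature(self, feature: PositioningFeature) -> None:
        self.features.append(feature)
```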
  • the camera device of the smart terminal will continuously take images in the actual physical space during the movement of the smart terminal.
  • The smart terminal constructs the map based on the captured images of the actual physical space and on the movement of the smart terminal.
  • the map constructed by the smart terminal is used to describe the location and the area occupied by objects in the actual physical space on the map.
  • the actual physical space corresponding to the map constructed by the smart terminal and the map constructed by the mobile robot needs to overlap in order to perform steps S110 to S130. For example, if the mobile robot map corresponds to multiple positioning features, at least one of the positioning features corresponding to the smart terminal map should be the same as the positioning feature corresponding to the mobile robot map.
  • step S110 further includes the step of prompting the user to start the input operation after the smart terminal has constructed the map.
  • the physical space interface is used to prompt the user to perform an input operation.
  • For example, the display device of the smart terminal displays a text such as "please perform an input operation" to prompt the user, or displays a preset graphic to indicate that the smart terminal has constructed the map and the user can perform an input operation.
  • a voice is used to prompt the user to perform an input operation.
  • For example, the audio device of the smart terminal plays a prompt such as "please perform an input operation", or plays a preset piece of music or sound to prompt the user to perform an input operation.
  • vibration is used to prompt the user to perform an input operation.
  • the vibration device of the smart terminal may generate vibration to prompt the user to perform an input operation.
  • the input includes, but is not limited to, a first input and a second input, etc. The first input and the second input will be described in detail later.
  • In one embodiment, step S110 includes displaying the real-time video stream captured by the camera device of the smart terminal in the physical space interface previewed by the display device, and detecting, by means of an input device of the smart terminal, the user's input in the physical space interface.
  • The video stream consists of multiple frames of images captured by the camera device continuously and in real time; the video stream can be acquired by moving the smart terminal, and is continuously displayed in real time by the display device in the previewed physical space interface.
  • the display screen of the mobile phone continuously displays the scene images taken by the camera device in real time under the preview interface, so that the user can adjust the shooting angle and take pictures based on the video stream captured by the camera device.
  • the input device is a device that can detect and perceive the user's input in the physical space interface. For example, the touch screen of the smart terminal, the keys and buttons of the smart terminal, etc.
  • the detected input includes at least one of the following: a sliding input operation and a tap input operation.
  • the detected input corresponds to the input device.
  • the processing device of the smart terminal can determine that the input operation corresponds to the location or area on the map constructed by the smart terminal.
  • the input device is a touch display screen
  • the user input detected by the touch display screen may be a sliding operation or a click operation.
  • Based on the user's sliding operation, the display device can continuously detect and perceive the corresponding sliding track, and the processing device of the smart terminal can determine the location in the smart terminal's map that corresponds to the sliding track.
  • the user clicks on the touch screen, and based on the multiple locations clicked on the touch screen, the processing device of the smart terminal may determine that the multiple locations correspond to locations on the smart terminal map.
  • the input device is a button
  • the click operation on the touch screen can be converted into a click operation on the button.
  • the display position of the target point is at a fixed position of the display device, and may be displayed in the center position or in other positions.
  • The smart terminal is moved so that the target point corresponds to different positions in the actual physical space, and the user can then provide input by clicking the button.
  • In another embodiment, step S110 includes displaying the real-time video stream captured by the camera device of the smart terminal in the physical space interface previewed by the display device, and detecting a movement sensing device in the smart terminal to obtain the user's input.
  • The movement sensing device can detect and record the position and orientation of the smart terminal and its movement, and the processing device of the smart terminal can determine the locations on the smart terminal map that correspond to that movement.
  • The movement sensing device includes, but is not limited to, an accelerometer, a gyroscope, and other sensing devices. Guided by the video stream shown on the display device, the user can move the smart terminal so that the movement track of the smart terminal in the physical space constitutes a user input.
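  • A minimal sketch of how such a movement-track input could be collected is shown below; it assumes the terminal's own localization already yields its position over time, and the class and method names are illustrative, not taken from this application:

```python
from typing import List, Tuple

Point = Tuple[float, float]

class MovementTrackInput:
    """Collect the terminal's positions while the user 'draws' an area by moving the terminal."""

    def __init__(self) -> None:
        self.track: List[Point] = []

    def on_pose_update(self, x: float, y: float) -> None:
        # Called whenever the terminal's localization produces a new position estimate
        # (derived, for example, from accelerometer/gyroscope data fused with the video stream).
        self.track.append((x, y))

    def finish(self) -> List[Point]:
        """Return the recorded movement track; it can then be treated like a sliding input."""
        return list(self.track)
```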
  • the processing device of the smart device may perform step S120 based on the user's input detected in the state where the display device previews the physical space interface.
  • step S120 in response to the detected input, at least one target area is created in the previewed physical space interface.
  • the target area includes coordinate information of the terminal coordinate system of the smart terminal, and the coordinate information has a corresponding relationship with the coordinate information in the robot coordinate system of the mobile robot.
  • the processing device responds to the input detected by the input device or the movement sensing device in real time in order to create at least one target area in the previewed physical space interface.
  • the at least one target area is created by the input operation of the user.
  • the input operation is a click operation on a touch screen
  • the touch screen may be a touch screen that senses a click position based on a change in capacitance or a touch screen that senses a click position based on a change in resistance.
  • At a given moment, the click position on the touch screen causes the capacitance or resistance of the touch screen to change, and either change enables the processing device of the smart terminal to map the click position onto the previewed physical space interface.
  • Based on multiple click positions in the video stream images, at least one target area can be created in the previewed physical space interface, and the processing device can map each target area created in the previewed physical space interface onto the map constructed by the smart terminal. Specifically, at least one target area is created in the previewed physical space interface based on preset rules and the multiple click positions; the following takes the creation of a single target area as an example.
  • the preset rule is to sequentially connect each click position with a connecting line to form a target area.
  • the connecting line may be a straight line or a curved line.
  • the preset rule is to form a target area based on the circumscribed figure of the figure formed by connecting the multiple click positions with a connecting line.
  • the circumscribed graphics include but are not limited to rectangles, circumscribed circles, external polygons or irregular graphics.
  • the preset rule is to form a target area based on an inscribed graphic formed by connecting the multiple click positions with a connecting line.
  • the inscribed graphics include, but are not limited to, rectangles, inscribed circles, internal polygons, or irregular graphics.
  • the preset rules used in the target area created by the smart terminal can be changed based on the user's selection, or the same preset rules can be adopted for any click operation.
  • the operation of the user selecting the preset rule can be performed before the click operation or after the click operation.
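  • The preset rules above can be pictured with a small sketch: given the click positions, a target area is formed either as the polygon connecting them, as their circumscribed (bounding) rectangle, or as an enclosing circle. The code below only illustrates these rules; the circle uses a simple centroid-based approximation rather than the exact minimal enclosing circle, and the function names are assumptions:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def polygon_area(points: List[Point]) -> List[Point]:
    """Rule 1: connect the click positions in order to form a polygonal target area."""
    return list(points)

def bounding_rect_area(points: List[Point]) -> Tuple[Point, Point]:
    """Rule 2: the circumscribed axis-aligned rectangle of the click positions."""
    xs, ys = zip(*points)
    return (min(xs), min(ys)), (max(xs), max(ys))

def enclosing_circle_area(points: List[Point]) -> Tuple[Point, float]:
    """Rule 3: a circle enclosing the click positions (centroid-based approximation)."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radius = max(math.hypot(x - cx, y - cy) for x, y in points)
    return (cx, cy), radius

clicks = [(1.0, 1.0), (3.0, 1.5), (2.5, 3.0), (0.5, 2.0)]
print(bounding_rect_area(clicks))     # ((0.5, 1.0), (3.0, 3.0))
print(enclosing_circle_area(clicks))
```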
  • FIG. 3a shows a schematic diagram of a target area created in the previewed physical space interface by the smart terminal of this application in an embodiment.
  • the preset rule selected by the user is that the one target area is formed based on the circumscribed circle of the figure formed by connecting the multiple click positions with a connecting line. The user clicks multiple times around the garbage-scattered area with a finger or a touch screen pen on the touch screen based on the location of the garbage-scattered area in the image.
  • the processing device of the smart terminal creates a circular target area based on the user's click and the preset rule selected by the user. Please refer to FIG. 3b.
  • FIG. 3b shows a schematic diagram of another embodiment of the target area created by the smart terminal of the present application in the previewed physical space interface.
  • the preset rule selected by the user is that the one target area is formed based on the circumscribed rectangle of the figure formed by connecting the multiple click positions with a connecting line.
  • the processing device of the smart terminal creates a rectangular target area based on a user's click and a preset rule selected by the user.
  • FIG. 3c shows a schematic diagram of the target area created by the smart terminal of this application in the previewed physical space interface in another embodiment.
  • the preset rule selected by the user is to connect each click position with a curve to form an irregular target area.
  • the processing device of the smart terminal creates an irregular target area based on the user's click and the preset rule selected by the user.
  • the input operation is a sliding operation on the touch screen. The user's continuous sliding on the touch screen can cause the capacitance or resistance of the touch screen to change.
  • The user's sliding position at each moment on the touch screen can be mapped to the corresponding position in the image displayed at that moment in the previewed physical space interface, so that the figure traced by the continuous sliding operation creates at least one target area in the previewed physical space interface.
  • the smart terminal can create target areas of different shapes based on different continuous sliding operations of the user.
  • After confirming the target area, the user can click a "Confirm" virtual button on the touch screen so that the smart terminal executes step S130, or click a "Modify" virtual button on the touch screen to perform the input operation again.
  • the user may also issue a preset confirmation voice instruction to indicate that the target area has been confirmed so that the smart terminal executes step S130, for example, the user issues a voice instruction of "confirm".
  • The user may also issue a preset modification voice instruction to re-execute the input operation, for example a "modify" voice instruction.
  • The smart terminal may use a preset time interval to create multiple target areas: if the preset time interval is exceeded, the next click operation on the input device is treated as creating a new target area, and before the interval expires the smart terminal can use sound, vibration, or the physical space interface to prompt the user to finish entering the current target area.
  • The smart terminal may sort the multiple target areas by the time at which they were created, so as to generate multiple ordered areas; the interactive instruction generated from the ordered target areas then allows the mobile robot to perform the related operations on the target areas in that order. For example, based on user input, the target area created first is sorted as the first target area.
  • The smart terminal may also order the multiple target areas according to a user-defined ranking and generate the interactive instruction from the ordered target areas, so that the mobile robot performs the related operations on the target areas in that order. For example, the user sorts the multiple target areas according to how urgently each needs to be cleaned.
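  • A minimal sketch of the ordering just described (the field and function names are assumptions): target areas are sorted by a user-defined ranking when one is given for every area, otherwise by creation time:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TargetArea:
    area_id: int
    created_at: float                # timestamp at which the area was created
    user_rank: Optional[int] = None  # optional user-defined priority (e.g. cleaning urgency)

def order_target_areas(areas: List[TargetArea]) -> List[TargetArea]:
    """Order by the user-defined ranking when every area has one, otherwise by creation time."""
    if areas and all(a.user_rank is not None for a in areas):
        return sorted(areas, key=lambda a: a.user_rank)
    return sorted(areas, key=lambda a: a.created_at)
```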
  • step S120 further includes step S121 (not shown) and step S122 (not shown).
  • In step S121, the processing device of the smart terminal determines the corresponding relationship based on the coordinate information, in the robot coordinate system and in the terminal coordinate system, of the consensus elements extracted from the previewed physical space interface.
  • the consensus element enables the smart terminal, mobile robot, or server to determine the corresponding relationship between the coordinate information in the above two coordinate systems after obtaining the coordinate information in the robot coordinate system and the coordinate information in the terminal coordinate system.
  • the consensus elements include, but are not limited to: positioning features shared by the mobile robot and the smart terminal, images containing objects corresponding to the positioning features of the mobile robot map, and the like.
  • the coordinate information in the robot coordinate system may be stored in the smart terminal for a long time, or may be obtained from the mobile robot or the server when the interaction method is executed.
  • the robot coordinate system of the mobile robot is used to describe coordinate information corresponding to the mobile robot map.
  • the coordinate information includes: positioning features, coordinates of the positioning features in the map, and the like.
  • the position of the object in the actual physical space described by the positioning feature in the map can be determined through the coordinates of the positioning feature in the map.
  • The positioning feature includes, but is not limited to, feature points, feature lines, and so on, and is for example described by a descriptor. For instance, based on the SIFT (Scale-Invariant Feature Transform) algorithm, the positioning feature is extracted from multiple images, and a gray-value sequence describing the positioning feature is obtained from the image blocks that contain the positioning feature in those images; that gray-value sequence is the descriptor.
  • In another example, the descriptor describes the positioning feature by encoding the brightness information around it: with the positioning feature as the center, several points are sampled within a surrounding circle (for example, but not limited to, 256 or 512 sampling points), these sampling points are compared in pairs to obtain the brightness relationships between them, and the brightness relationships are converted into a binary string or another encoding format.
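  • The paragraph above describes a BRIEF-like binary descriptor. The sketch below shows one way such pairwise brightness comparisons could be realized; the sampling pattern, parameter values, and function name are assumptions, not the encoding actually used by this application:

```python
import random
from typing import List, Sequence, Tuple

def binary_descriptor(image: Sequence[Sequence[int]],
                      center: Tuple[int, int],
                      radius: int = 8,
                      n_bits: int = 256,
                      seed: int = 0) -> List[int]:
    """Encode the brightness relations around a positioning feature as a list of bits.

    `image` is a 2-D grayscale image and `center` is the (x, y) feature location.
    Pairs of points are sampled inside a circle around the feature; each pairwise
    brightness comparison contributes one bit of the descriptor.
    """
    rng = random.Random(seed)          # fixed seed so every device samples the same pattern
    height, width = len(image), len(image[0])

    def sample_point() -> Tuple[int, int]:
        while True:
            dx, dy = rng.uniform(-radius, radius), rng.uniform(-radius, radius)
            if dx * dx + dy * dy <= radius * radius:
                x = min(max(int(center[0] + dx), 0), width - 1)
                y = min(max(int(center[1] + dy), 0), height - 1)
                return x, y

    bits = []
    for _ in range(n_bits):
        (x1, y1), (x2, y2) = sample_point(), sample_point()
        bits.append(1 if image[y1][x1] < image[y2][x2] else 0)
    return bits
```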
  • the shared positioning feature is not only the positioning feature of the map constructed by the smart terminal in the terminal coordinate system, but also the positioning feature of the map constructed by the mobile robot in the robot coordinate system.
  • the intelligent terminal extracts multiple positioning features for describing objects in the actual physical space based on the video stream displayed on the previewed physical space interface when constructing the map. And the coordinates of the multiple positioning features in the coordinate system of the smart terminal are determined.
  • the location feature of the map constructed by the smart terminal in the terminal coordinate system includes the location feature corresponding to the table leg
  • the location feature of the map constructed by the mobile robot in the robot coordinate system also includes the location feature corresponding to the table leg.
  • The processing device of the intelligent terminal can determine the correspondence between the coordinates of the positioning feature of the table leg in the robot coordinate system and in the terminal coordinate system, and from that can determine the corresponding relationship between all coordinates in the terminal coordinate system of the smart terminal and all coordinates in the robot coordinate system of the mobile robot. After obtaining the corresponding relationship, step S122 may be executed.
  • the image containing the object corresponding to the location feature of the mobile robot map means that the processing device of the smart terminal has obtained the video stream taken by the smart terminal.
  • the location feature of the object in the actual physical space corresponding to at least one frame of the image in the video stream is the location feature of the robot map. For example, if a location feature of the mobile robot map corresponds to a chair in the actual physical space, the video stream contains an image of the chair.
  • the processing device of the smart terminal obtains the coordinate information of the robot coordinate system of the mobile robot and at least one frame of image in the video stream.
  • The processing device matches, by means of an image matching algorithm, the positioning features in the at least one frame of image against the map, positioning features, and coordinate information of the physical space pre-built by the mobile robot, thereby determining the matching positioning features in the mobile robot map.
  • For example, the smart terminal is pre-configured with the same extraction algorithm that the mobile robot uses to extract positioning features from images, and it extracts the candidate positioning features in the image based on that extraction algorithm.
  • the extraction algorithm includes, but is not limited to: an extraction algorithm based on at least one feature of texture, shape, and spatial relationship.
  • Examples of extraction algorithms based on texture features include at least one of gray-level co-occurrence matrix texture analysis, the checkerboard feature method, the random field model method, and the like; examples of extraction algorithms based on shape features include at least one of the Fourier shape description method, the shape quantitative measurement method, and the like; extraction algorithms based on spatial relationship features use, for example, the mutual spatial positions or relative direction relationships between multiple image blocks segmented from the image, such relationships including but not limited to connection/adjacency, overlap, and containment/inclusion relationships.
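  • As one concrete example of the texture-based extraction mentioned above, a gray-level co-occurrence matrix (GLCM) and a contrast feature derived from it can be computed as sketched below; this is a generic textbook formulation, not necessarily the variant used by this application:

```python
import numpy as np

def glcm(image: np.ndarray, dx: int = 1, dy: int = 0, levels: int = 8) -> np.ndarray:
    """Normalized gray-level co-occurrence matrix for a single pixel offset (dx, dy)."""
    q = (image.astype(float) / 256.0 * levels).astype(int)   # quantize to `levels` gray levels
    q = np.clip(q, 0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(m: np.ndarray) -> float:
    """A classic GLCM texture feature: contrast (weighted by squared gray-level distance)."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

# Example: texture contrast of a small synthetic checkerboard patch
patch = np.tile(np.array([[0, 255], [255, 0]], dtype=np.uint8), (8, 8))
print(glcm_contrast(glcm(patch)))
```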
  • the intelligent terminal uses image matching technology to match the candidate location feature fs1 in the image with the location feature fs2 corresponding to the mobile robot map, thereby obtaining a matching location feature fs1'.
  • the intelligent terminal can determine the correspondence between the coordinates of the intelligent terminal map and the mobile robot map based on the coordinates of fs1' in the intelligent terminal map and the coordinates in the mobile robot map. Step S122 may be executed after obtaining the corresponding relationship.
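  • The application does not spell out how the coordinate correspondence is computed from the matched positioning features fs1'; one common way, shown here only as an assumed sketch, is to estimate a rigid transform between the two maps from the coordinates of the matched features in each map (an SVD-based Kabsch/Umeyama alignment without scale):

```python
import numpy as np

def estimate_rigid_transform(terminal_pts: np.ndarray, robot_pts: np.ndarray):
    """Estimate rotation R and translation t such that robot ~= R @ terminal + t.

    `terminal_pts` and `robot_pts` are (N, 2) arrays holding the coordinates of the
    matched positioning features (e.g. fs1') in the terminal map and in the robot map.
    """
    mu_t, mu_r = terminal_pts.mean(axis=0), robot_pts.mean(axis=0)
    H = (terminal_pts - mu_t).T @ (robot_pts - mu_r)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_r - R @ mu_t
    return R, t

def terminal_to_robot(point: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map a point (e.g. a target-area vertex) from the terminal map into the robot map."""
    return R @ point + t
```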
  • step S122 the coordinate information of the at least one target area in the robot coordinate system of the mobile robot is determined based on the corresponding relationship.
  • Based on the corresponding relationship between the coordinate information in the robot coordinate system and in the terminal coordinate system, and on the coordinate information of the target area in the terminal coordinate system of the smart terminal, the processing device of the smart terminal can determine the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
  • the processing device may also determine the corresponding relationship based on a consensus element of a positioning feature shared by the mobile robot and the smart terminal.
  • FIG. 4 shows a schematic diagram of a coordinate system established by the smart terminal of this application in a specific embodiment, as shown in the figure.
  • the coordinate point O" of a positioning feature in the robot map is used as the starting coordinate point of the terminal coordinate system of the smart terminal.
  • With the coordinate system of the smart terminal and the target area established by the above method, the coordinate information in the map constructed under that coordinate system can directly determine the coordinates of the at least one target area in the robot coordinate system of the mobile robot. For example, if the point P in the coordinate system of the smart terminal is a point in the target area, the vector O'P, that is, the coordinates of the point P in the mobile robot coordinate system, can be determined from the vectors O'O" and O"P, so as to determine the coordinates of the at least one target area in the robot coordinate system of the mobile robot.
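  • Under the additional assumption that the two coordinate systems share the same axis orientation (as drawn in FIG. 4), the vector relation O'P = O'O" + O"P reduces to a simple addition; the numbers below are illustrative only:

```python
import numpy as np

# O' is the origin of the robot coordinate system; O" is the origin of the terminal
# coordinate system, i.e. a positioning feature whose robot-frame coordinates are known.
vec_OpOpp = np.array([2.0, 1.0])   # vector O'O": position of O" in the robot frame (illustrative)
vec_OppP  = np.array([0.5, 0.3])   # vector O"P: a target-area point P in the terminal frame

# O'P = O'O" + O"P gives the coordinates of P in the robot coordinate system.
vec_OpP = vec_OpOpp + vec_OppP
print(vec_OpP)   # [2.5 1.3]
```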
  • an interactive command is generated based on the at least one target area to be sent to the mobile robot.
  • the interactive instruction includes at least one target area and a corresponding operation performed by the mobile robot.
  • the interactive instruction is used to instruct the mobile robot to perform a corresponding operation in the target area or not to perform a corresponding operation in the target area.
  • In the step of detecting the user's input while the display device previews the physical space interface, the detected user input is the first input.
  • the smart terminal creates at least one target area based on the first input.
  • In some examples, the interaction method further includes detecting a second input of the user, where the second input corresponds to an operation to be performed by the mobile robot, and generating, based on the target area and the second input, an interactive command to be sent to the mobile robot.
  • the detection by the smart terminal of the second input may be performed before the first input or after the first input.
  • the second input includes any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, sorting or not sorting items in the target area.
  • the mobile robot is a cleaning robot, and if the target area corresponds to an area where garbage is scattered on the ground, the second input is to clean the target area. If the target area corresponds to an obstacle area, the second input is that the target area is not cleaned.
  • the mobile robot is a patrol robot, and if the target area corresponds to an area that the user needs to view, the second input is to enter the target area. If the target area corresponds to an area that the user does not need to view, the second input is not to enter the target area.
  • the mobile robot is a transport robot, and if the target area corresponds to an area where the user needs to sort items, the second input is to sort items in the target area. If the target area corresponds to an area where the user does not need to sort items, the second input is not to sort items in the target area.
  • the second input can be input by voice or by clicking a virtual button.
  • For example, if a user wants the mobile robot to enter an area where garbage is scattered and perform cleaning work, the user first performs the first input on the input device of the smart terminal so that the smart terminal creates the target area.
  • Figure 5 shows a schematic diagram of the virtual button of the smart terminal of this application in an embodiment. As shown in Figure 5, the user can click the virtual button of the "clean target area" in the menu bar of the smart terminal.
  • the second input is completed to enable the smart terminal to generate an interactive instruction.
  • the display form of the virtual button of the "cleaning target area" is not limited to text, but may also be a pattern.
  • the interactive instruction is related to the function of the mobile robot, and the interactive instruction can be generated without the user's second input.
  • the interactive instruction only includes the at least one target area.
  • the mobile robot is a cleaning robot that performs cleaning work
  • the smart terminal sends the at least one target area to the cleaning robot
  • the cleaning robot generates a navigation route and automatically cleans the target area based on the navigation route.
  • the mobile robot is a patrol robot that performs patrol work
  • The intelligent terminal sends the at least one target area to the patrol robot, and the patrol robot generates a navigation route, automatically enters the target area along that route, and performs the inspection work.
  • the mobile robot is a handling robot that performs sorting and handling tasks
  • the intelligent terminal sends the at least one target area to the handling robot
  • The handling robot generates a navigation route and automatically enters the target area along that route to perform the handling and tidying work.
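  • Taken together, an interactive instruction can be pictured as a small message carrying the target area(s) in robot coordinates and, optionally, the operation chosen by the second input; the sketch below is an assumed illustration of such a structure, not the actual message format of this application:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Tuple

class Operation(Enum):
    """Possible operations conveyed by the second input (illustrative values)."""
    CLEAN = "clean"
    DO_NOT_CLEAN = "do_not_clean"
    ENTER = "enter"
    DO_NOT_ENTER = "do_not_enter"
    SORT_ITEMS = "sort_items"
    DO_NOT_SORT = "do_not_sort"

Polygon = List[Tuple[float, float]]

@dataclass
class InteractiveInstruction:
    """An interactive instruction sent from the smart terminal to the mobile robot."""
    target_areas: List[Polygon]            # target areas described in robot coordinates
    operation: Optional[Operation] = None  # None: the robot falls back to its default task

# Example: ask a cleaning robot to clean one rectangular target area.
instruction = InteractiveInstruction(
    target_areas=[[(0.0, 0.0), (2.0, 0.0), (2.0, 1.5), (0.0, 1.5)]],
    operation=Operation.CLEAN,
)
```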
  • The interaction method with the mobile robot described above not only allows the user to provide precise input based on the intuitive video stream provided by the smart terminal, but also allows the smart terminal to create at least one precise target area in the previewed physical space interface in response to the detected input, and an interactive instruction sent to the mobile robot can be generated based on the position of the at least one target area in the mobile robot map.
  • the mobile robot parses the interactive instruction to obtain the position of the at least one target area on the robot map, and then performs a corresponding operation in the target area or does not perform a corresponding operation in the target area.
  • the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the smart terminal. Then, the interaction method further includes step S210, step S220, and step S230. Wherein, the coordinate information in the robot coordinate system may be stored in the smart terminal for a long time, or may be obtained from the mobile robot or the server when the interaction method is executed.
  • the processing device of the smart terminal executes step S210 based on the coordinate information of the mobile robot coordinate system and the coordinate information of the smart terminal terminal coordinate system stored in the storage device.
  • In step S210, the processing device of the smart terminal determines the corresponding relationship based on the coordinate information, in the robot coordinate system and in the terminal coordinate system, of the consensus elements extracted from the previewed physical space interface.
  • The consensus element is an element that enables the smart terminal, mobile robot, or server to determine the corresponding relationship between the coordinate information in the two coordinate systems after obtaining the coordinate information in the robot coordinate system and in the terminal coordinate system.
  • the consensus elements include, but are not limited to: positioning features shared by the mobile robot and the smart terminal, images containing objects corresponding to the positioning features of the mobile robot map, and the like.
  • the shared positioning feature is both the positioning feature of the map constructed by the smart terminal in the terminal coordinate system and the positioning feature of the map constructed by the mobile robot in the robot coordinate system.
  • when constructing the map, the intelligent terminal extracts multiple positioning features describing objects in the actual physical space based on the video stream displayed on the previewed physical space interface, and determines the coordinates of those positioning features in the coordinate system of the smart terminal.
  • the location feature of the map constructed by the smart terminal in the terminal coordinate system includes the location feature corresponding to the table leg
  • the location feature of the map constructed by the mobile robot in the robot coordinate system also includes the location feature corresponding to the table leg.
  • based on the coordinates of the positioning feature corresponding to the table leg in the robot coordinate system and in the terminal coordinate system, the processing device of the intelligent terminal can determine the corresponding relationship between the two coordinate systems, and can then determine the correspondence between all coordinates in the terminal coordinate system of the smart terminal and all coordinates in the robot coordinate system of the mobile robot, as illustrated in the sketch below.
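  • A minimal sketch (not part of the patent; it assumes both maps are planar and related by a rigid transform) of how the correspondence between the two coordinate systems could be estimated from shared positioning features such as the table legs:

```python
import numpy as np

def estimate_rigid_transform(pts_terminal, pts_robot):
    """Estimate R, t such that p_robot ~= R @ p_terminal + t, given matched
    positioning features expressed in both coordinate systems as (N, 2) arrays."""
    A = np.asarray(pts_terminal, dtype=float)
    B = np.asarray(pts_robot, dtype=float)
    c_a, c_b = A.mean(axis=0), B.mean(axis=0)   # centroids of each point set
    H = (A - c_a).T @ (B - c_b)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                              # optimal rotation (Kabsch method)
    if np.linalg.det(R) < 0:                    # guard against an improper reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_b - R @ c_a                           # optimal translation
    return R, t

# Three shared positioning features (e.g. table legs) observed in both maps.
terminal_pts = [[1.0, 2.0], [3.0, 2.0], [1.0, 4.0]]
robot_pts = [[3.0, 1.0], [3.0, 3.0], [1.0, 1.0]]  # the same legs in the robot map
R, t = estimate_rigid_transform(terminal_pts, robot_pts)
```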
  • Step S220 may be executed after obtaining the corresponding relationship.
  • the manner of creating at least one target area in the previewed physical space interface based on the detected input in step S120 yields the coordinate information of the created target area in the terminal coordinate system of the smart terminal.
  • based on the correspondence between the coordinate information in the robot coordinate system and in the terminal coordinate system, and on the coordinate information of the target area in the terminal coordinate system of the smart terminal, the processing device of the smart terminal can determine the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
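  • Continuing the sketch above (the helper name is hypothetical), the corners of a target area drawn in the terminal coordinate system can then be re-expressed in the robot coordinate system:

```python
import numpy as np

def target_area_to_robot_frame(corners_terminal, R, t):
    """corners_terminal: (N, 2) vertices of the target area in the terminal
    coordinate system; returns the same vertices in the robot coordinate system."""
    corners = np.asarray(corners_terminal, dtype=float)
    return corners @ R.T + t   # apply p_robot = R @ p_terminal + t to each vertex
```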
  • the image containing an object corresponding to a positioning feature of the mobile robot map means that the processing device of the smart terminal has obtained the video stream captured by the smart terminal, and that at least one frame of the video stream contains an object in the actual physical space whose positioning feature belongs to the robot map. For example, if a positioning feature of the mobile robot map corresponds to a chair in the actual physical space, the video stream contains an image of the chair.
  • the processing device of the smart terminal obtains the coordinate information of the robot coordinate system of the mobile robot and at least one frame of image in the video stream.
  • the processing device matches the positioning features in the at least one frame of image, through an image matching algorithm, against the map, positioning features, and coordinate information of the physical space pre-built by the mobile robot, thereby determining the positioning features in the image that match the mobile robot map.
  • the smart terminal is pre-configured with the same extraction algorithm as the one used by the mobile robot to extract positioning features from images, and extracts candidate positioning features from the image based on that extraction algorithm.
  • the extraction algorithm includes, but is not limited to: an extraction algorithm based on at least one feature of texture, shape, and spatial relationship.
  • examples of extraction algorithms based on texture features include at least one of gray-level co-occurrence matrix texture analysis, the checkerboard feature method, and the random field model method; examples of extraction algorithms based on shape features include at least one of the Fourier shape descriptor method and quantitative shape measurement methods; extraction algorithms based on spatial relationship features use the mutual spatial positions or relative directional relationships between multiple image blocks segmented from the image, such relationships including but not limited to connection/adjacency relations, overlap relations, and inclusion/containment relations.
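  • As one concrete possibility (a sketch only; the patent does not mandate a particular library), a gray-level co-occurrence matrix texture statistic for an image block could be computed with scikit-image:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)  # an image block
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast")        # texture statistics that could be
homogeneity = graycoprops(glcm, "homogeneity")  # used as part of a positioning feature
```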
  • the intelligent terminal uses image matching technology to match the candidate location feature fs1 in the image with the location feature fs2 corresponding to the mobile robot map, thereby obtaining a matching location feature fs1'.
  • the smart terminal can determine the correspondence between the smart terminal map and the mobile robot map based on the coordinates of fs1' in the smart terminal map and its coordinates in the mobile robot map; a matching sketch is given below.
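  • A minimal sketch of the matching step, assuming (as an illustration, not a requirement of the patent) that the positioning features fs1 and fs2 carry binary descriptors such as ORB, so OpenCV's brute-force matcher can be used:

```python
import cv2

def match_positioning_features(des_fs1, des_fs2, max_distance=40):
    """des_fs1: descriptors of candidate features extracted from the image;
    des_fs2: descriptors of the positioning features stored with the robot map.
    Returns the matches that define the matched feature set fs1'."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_fs1, des_fs2)
    return [m for m in matches if m.distance < max_distance]  # keep close matches only
```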
  • Step S220 may be executed after obtaining the corresponding relationship.
  • the manner of creating at least one target area in the previewed physical space interface based on the detected input in step S120 yields the coordinate information of the created target area in the terminal coordinate system of the smart terminal. Based on the correspondence between the coordinate information in the robot coordinate system and in the terminal coordinate system, and on the coordinate information of the target area in the terminal coordinate system of the smart terminal, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot is determined.
  • the processing device may also determine the corresponding relationship based on a consensus element of a positioning feature shared by the mobile robot and the smart terminal.
  • FIG. 4 shows a schematic diagram of a coordinate system established by the smart terminal of this application in a specific embodiment. As shown in the figure, the coordinate point O'' of a positioning feature in the robot map is used as the origin of the terminal coordinate system of the smart terminal. With the terminal coordinate system established in this way and the coordinate information of the target area in the map constructed under that coordinate system, the coordinates of the at least one target area in the robot coordinate system of the mobile robot can be determined directly. For example, if the point P in the coordinate system of the smart terminal is a point in the target area, the vector O'P, that is, the coordinates of the point P in the mobile robot coordinate system, can be determined from the vectors O'O'' and O''P, so that the coordinates of the at least one target area in the robot coordinate system of the mobile robot are determined; a numerical illustration follows.
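  • A numerical illustration of that vector relation (the values are hypothetical, and it assumes the two coordinate systems share their axis orientation as suggested by Figure 4):

```python
import numpy as np

O1_O2 = np.array([2.0, 1.5])   # O'O'': the shared positioning feature, in the robot frame
O2_P = np.array([0.6, -0.4])   # O''P : a target-area point P, in the terminal frame
O1_P = O1_O2 + O2_P            # O'P  : the same point P, now in the robot frame
print(O1_P)                    # -> [2.6  1.1]
```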
  • the processing device executes step S230 based on the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
  • in step S230, an interactive command containing the at least one target area described by the coordinate information in the robot coordinate system is generated and sent to the mobile robot.
  • the interactive instruction of step S230 includes at least one target area and corresponding operations performed by the mobile robot.
  • the interactive instruction is used to instruct the mobile robot to perform a corresponding operation in the target area or not to perform a corresponding operation in the target area.
  • the method of generating the interactive instruction and the corresponding description are the same as or similar to those in step S130, and will not be repeated here.
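  • For illustration only, the interactive instruction might be serialized as a small structured message; the field names below are hypothetical, since the patent does not define a wire format:

```python
import json

interactive_instruction = {
    "target_areas": [
        # polygon vertices expressed in the robot coordinate system
        {"vertices_robot_frame": [[1.0, 1.0], [3.0, 1.0], [3.0, 3.0], [1.0, 3.0]]}
    ],
    "operation": "clean",   # e.g. clean / do_not_clean / enter / do_not_enter
}
payload = json.dumps(interactive_instruction)  # sent to the mobile robot or the server
```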
  • FIG. 6 shows a schematic diagram of the network architecture for interaction among the smart terminal 10, the server 20, and the mobile robot 30 of this application.
  • the interactive instruction may be directly sent to the mobile robot 30 through the interface device of the smart terminal 10, or may be sent to the server 20 through the interface device, and then sent to the mobile robot 30 through the server 20. .
  • the processing device of the smart terminal may also generate an interactive instruction containing the at least one target area and the consensus elements related to the creation of the at least one target area, to be sent to the mobile robot directly or via the server.
  • the consensus element is used to determine the coordinate position of the at least one target area in the robot coordinate system.
  • the consensus element is related to the creation of the at least one target area, including but not limited to: positioning features shared by the mobile robot and the smart terminal, images containing objects corresponding to the positioning features of the mobile robot map, and the like.
  • FIG. 7 shows a schematic flowchart of another embodiment of the method for interacting with a mobile robot according to the present application.
  • the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the server connected to the smart terminal network.
  • the server can be a single computer device, a service system based on a cloud architecture, a cloud server, etc.
  • the single computer device may be an autonomously configured computer device that can execute the interaction method, and it may be located in a private computer room or in a rented computer room in a public computer room.
  • the service system of the cloud architecture includes a public cloud (Public Cloud) server and a private cloud (Private Cloud) server, where the public or private cloud server includes a Software-as-a-Service (SaaS) platform and the like.
  • such cloud servers include, for example, the Facebook cloud computing service platform, the Amazon cloud computing service platform, the Baidu cloud computing platform, the Tencent cloud computing platform, and so on.
  • FIG. 8 shows a schematic structural diagram of the server of this application in an embodiment.
  • the server includes a storage device 21, an interface device 22, a processing device 23, and the like.
  • the storage device 21 is used to store at least one program.
  • the at least one program can be used by the processing device 23 to execute the interaction method described in the embodiment of FIG. 7.
  • the storage device 21 also pre-stores the coordinate information in the robot coordinate system of the mobile robot; alternatively, the processing device 23 of the server obtains the coordinate information of the robot coordinate system from the smart terminal or the mobile robot through the interface device 22 when the interaction method is executed.
  • the storage device 21 includes, but is not limited to: read-only memory (Read-Only Memory, ROM for short), random access memory (Random Access Memory, RAM for short), and nonvolatile RAM (Nonvolatile RAM, NVRAM for short).
  • the storage device includes a flash memory device or other non-volatile solid-state storage devices.
  • the storage device 21 may also include storage remote from the one or more processing devices, for example network-attached storage accessed via an RF circuit or an external port and a communication network, where the communication network may be the Internet, one or more intranets, local area networks (LAN), wide area networks (WAN), storage area networks (SAN), etc., or an appropriate combination thereof.
  • the storage device 21 also includes a memory controller, which can control access to the memory by components of the server such as the central processing unit (CPU) and the interface device 22.
  • the interface device 22 is used to assist an intelligent terminal and a mobile robot to communicate and interact.
  • the interface device 22 may receive an interactive instruction generated by the smart terminal, and send the interactive instruction generated by the smart terminal to the mobile robot.
  • the interface device 22 of the server sends an instruction for acquiring coordinate information in the coordinate system of the mobile robot to the mobile robot or smart terminal.
  • the interface device 22 also obtains the video stream captured by the smart terminal and the second input from the smart terminal, and obtains from the smart terminal the consensus element related to the creation of the at least one target area and sends it to the mobile robot.
  • the interface device 22 includes a network interface, a data line interface, and the like.
  • the network interface includes, but is not limited to: an Ethernet network interface device, a network interface device based on mobile networks (3G, 4G, 5G, etc.), a network interface device based on short-distance communication (WiFi, Bluetooth, etc.), and the like.
  • the data line interface includes but is not limited to: USB interface, RS232, etc.
  • the interface device 22 is in data connection with the storage device 21, the processing device 23, the Internet, a mobile robot located in the physical space, an intelligent terminal, and the like.
  • the processing device 23 is connected to the storage device 21 and the interface device 22, and is used to execute the at least one program to coordinate the storage device 21 and the interface device 22 to perform the interaction method as described in FIG. 7.
  • the processing device 23 includes one or more processors.
  • the processing device 23 is operable to perform data read and write operations with the storage device.
  • the processing device 23 performs operations such as extracting images, temporarily storing features, positioning in a map based on features, and the like.
  • the processing device 23 includes one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASIC), one or more digital signal processors (Digital Signal Processor, DSP for short), one or more Field Programmable Gate Arrays (FPGA), or any combination thereof.
  • the processing device 23 of the server executes step S310 based on the coordinate information of the mobile robot coordinate system stored in the storage device 21.
  • step S310 at least one target area from the smart terminal is acquired.
  • the target area is obtained by the smart terminal detecting the user's input, and the target area includes coordinate information in the terminal coordinate system of the smart terminal, which has a corresponding relationship with the coordinate information in the robot coordinate system of the mobile robot.
  • the at least one target area is created by the processing device of the smart terminal in response to the user input detected while the display device previews the physical space interface, and includes coordinate information in the terminal coordinate system of the smart terminal. The processing device of the smart terminal can map the at least one target area created in the previewed physical space interface onto the map constructed by the smart terminal, and then determine the coordinate information of the at least one target area in the smart terminal map.
  • the manner of detecting user input and creating at least one target area in response to the input is the same as or similar to the manner in the interaction method described in FIG.
  • the processing device of the server obtains the coordinate information of the mobile robot coordinate system and the coordinate information of the at least one target area in the smart terminal map. Because the map constructed by the smart terminal according to its terminal coordinate system and the map constructed by the mobile robot based on the robot coordinate system correspond to the same actual physical space, the processing device of the server can obtain the coordinate information of the target area in the map constructed by the mobile robot based on the coordinate information of the target area in the map constructed by the smart terminal.
  • step S320 an interactive command is generated based on the at least one target area to be sent to the mobile robot.
  • the interactive instruction includes at least one target area and a corresponding operation performed by the mobile robot.
  • the interactive instruction is used to instruct the mobile robot to perform a corresponding operation in the target area or not to perform a corresponding operation in the target area.
  • in the step of detecting the user's input while the display device of the smart terminal previews the physical space interface, the detected input is a first input, and the smart terminal creates at least one target area based on the first input.
  • the interaction method further includes: the processing device further obtains a second input from the smart terminal through the interface device, and also generates, based on the target area and the second input, an interactive command to be sent to the mobile robot.
  • the second input corresponds to a corresponding operation performed by the mobile robot.
  • the step of obtaining the second input may be performed before the first input or may be performed after the first input.
  • the second input includes any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, sorting or not sorting items in the target area.
  • the mobile robot is a cleaning robot, and if the target area corresponds to an area where garbage is scattered on the ground, the second input is to clean the target area. If the target area corresponds to an obstacle area, the second input is that the target area is not cleaned.
  • the mobile robot is a patrol robot, and if the target area corresponds to an area that the user needs to view, the second input is to enter the target area. If the target area corresponds to an area that the user does not need to view, the second input is not to enter the target area.
  • the mobile robot is a transport robot, and if the target area corresponds to an area where the user needs to sort items, the second input is to sort items in the target area. If the target area corresponds to an area where the user does not need to sort items, the second input is not to sort items in the target area.
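  • As a sketch (the names are illustrative and not taken from the patent), the mapping from robot type and second input to the operation carried in the interactive instruction could look like this:

```python
SECOND_INPUT_OPERATIONS = {
    "cleaning": {True: "clean_target_area",  False: "do_not_clean_target_area"},
    "patrol":   {True: "enter_target_area",  False: "do_not_enter_target_area"},
    "handling": {True: "sort_items_in_area", False: "do_not_sort_items_in_area"},
}

def resolve_operation(robot_type: str, affirmative_second_input: bool) -> str:
    """Translate the user's second input into the operation placed in the instruction."""
    return SECOND_INPUT_OPERATIONS[robot_type][affirmative_second_input]

# Example: resolve_operation("patrol", False) -> "do_not_enter_target_area"
```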
  • the interactive instruction is related to the function of the mobile robot, and the interactive instruction can be generated without the user's second input.
  • the interactive instruction only includes the at least one target area.
  • the mobile robot is a cleaning robot that performs cleaning work
  • the smart terminal sends the at least one target area to the cleaning robot
  • the cleaning robot generates a navigation route and automatically cleans the target area based on the navigation route.
  • the mobile robot is a patrol robot that performs patrol work
  • the intelligent terminal sends the at least one target area to the patrol robot, and the patrol robot generates a navigation route and automatically enters the target area based on the navigation route to perform inspection work.
  • the mobile robot is a handling robot that performs sorting and handling tasks
  • the intelligent terminal sends the at least one target area to the handling robot
  • the handling robot generates a navigation route and automatically enters the target area based on the navigation route to perform handling and sorting work.
  • the processing device further obtains the video stream captured by the smart terminal through the interface device, and step S310 further includes S311 and S312.
  • in step S311, the processing device determines the corresponding relationship based on the coordinate information in the robot coordinate system and the coordinate information in the terminal coordinate system of the consensus elements provided by the video stream.
  • the consensus element includes: an image containing an object corresponding to the positioning feature of the mobile robot map.
  • a positioning feature of the mobile robot map corresponds to a chair in an actual physical space, and at least one frame of image containing the chair is present in the video stream obtained by the server.
  • the processing device of the server obtains the coordinate information of the robot coordinate system of the mobile robot and at least one frame of image in the video stream.
  • the processing device matches the positioning features in the at least one frame of image, through an image matching algorithm, against the map, positioning features, and coordinate information of the physical space pre-built by the mobile robot, thereby determining the positioning features in the image that match the mobile robot map.
  • the server is pre-configured with an extraction algorithm that uses the same extraction method as the mobile robot to extract the location features in the image, and extracts candidate location features in the image based on the extraction algorithm.
  • the extraction algorithm includes but is not limited to: an extraction algorithm based on at least one feature of texture, shape, and spatial relationship.
  • examples of extraction algorithms based on texture features include at least one of gray-level co-occurrence matrix texture analysis, the checkerboard feature method, and the random field model method; examples of extraction algorithms based on shape features include at least one of the Fourier shape descriptor method and quantitative shape measurement methods; extraction algorithms based on spatial relationship features use the mutual spatial positions or relative directional relationships between multiple image blocks segmented from the image.
  • the server uses image matching technology to match the candidate location feature fs1 in the image with the location feature fs2 corresponding to the mobile robot map, so as to obtain a matching location feature fs1'.
  • the server can determine the correspondence between the coordinates of the smart terminal map and the mobile robot map based on the coordinates of fs1' in the smart terminal map and the coordinates in the mobile robot map.
  • the processing device on the server side obtains the coordinates of the positioning feature of the chair in the mobile robot coordinate system and in the terminal coordinate system, and can then obtain the correspondence between any coordinate in the terminal coordinate system and the coordinates in the mobile robot coordinate system.
  • in step S312, the processing device of the server determines the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on the correspondence between the coordinate information in the robot coordinate system and in the terminal coordinate system, and on the coordinate information of the target area in the terminal coordinate system of the smart terminal. For example, if the target area corresponds to the area where a power strip is located in the actual physical space, the server can determine the multiple coordinates of the target area in the mobile robot map based on the multiple coordinates of the target area in the smart terminal map and the corresponding relationship.
  • based on the coordinate information of the at least one target area in the robot coordinate system of the mobile robot obtained in step S312, the processing device of the server generates an interactive instruction including the at least one target area described by the coordinate information in the robot coordinate system, and sends it to the mobile robot through the interface device of the server.
  • for example, the interactive instruction includes the area corresponding to the power strip in the actual physical space, described by coordinate information in the coordinate system of the mobile robot, and the mobile robot can directly perform a preset operation, or perform the operation required by the second input, based on the area where the power strip is located.
  • the interaction instruction is the same as or similar to that in step S320 and will not be described in detail here.
  • alternatively, the server does not determine the coordinate position of the at least one target area in the robot coordinate system based on a video stream captured by the smart terminal and acquired by the processing device.
  • Step S310 also includes step S313.
  • the processing device of the server obtains the consensus element related to the creation of the at least one target area from the smart terminal through the interface device. Wherein, the consensus element is used to determine the coordinate position of the at least one target area in the robot coordinate system.
  • the consensus element is a positioning feature shared by the smart terminal and the mobile robot.
  • the shared positioning feature is not only the positioning feature of the map constructed by the smart terminal in the terminal coordinate system, but also the positioning feature of the map constructed by the mobile robot in the robot coordinate system.
  • when constructing the map, the intelligent terminal extracts multiple positioning features describing objects in the actual physical space based on the video stream displayed on the previewed physical space interface, and determines the coordinates of those positioning features in the coordinate system of the smart terminal.
  • the location feature of the map constructed by the smart terminal in the terminal coordinate system includes the location feature corresponding to the table leg
  • the location feature of the map constructed by the mobile robot in the robot coordinate system also includes the location feature corresponding to the table leg.
  • based on the coordinates of the positioning feature corresponding to the table leg in the robot coordinate system and in the terminal coordinate system, the processing device on the server side can determine the corresponding relationship between the coordinates of that positioning feature in the two coordinate systems, and can then determine the correspondence between all coordinates in the terminal coordinate system of the smart terminal and all coordinates in the robot coordinate system of the mobile robot.
  • the processing device of the server determines the coordinate position of the at least one target area in the robot coordinate system based on the corresponding relationship, and generates an interactive instruction including the at least one target area and the consensus element to be sent to the mobile robot through the interface device.
  • the mobile robot may directly perform operations related to the at least one target area based on the acquired target area.
  • the mobile robot may also obtain the coordinate information in the coordinate system of the smart terminal through the interface device of the server or the interface device of the smart terminal, determine the coordinate position of the at least one target area in the robot coordinate system based on the consensus element and that coordinate information, and then perform the related operations based on the target area.
  • in other examples, the processing device of the server may also determine the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on a consensus element that is a positioning feature shared by the mobile robot and the smart terminal.
  • FIG. 9 shows a schematic structural diagram of the mobile robot according to an embodiment of the present application.
  • the mobile robot includes a storage device 31, an interface device 33, a processing device 34, an execution device 32, and the like.
  • the mobile robot is a machine device that automatically performs specific tasks. It can accept human commands, run pre-arranged programs, or act according to principles and programs formulated with artificial intelligence technology. This type of mobile robot can be used indoors or outdoors, in industry, commerce, or households; it can replace security patrols, greeters, or order takers, replace people in cleaning floors, and also be used for family accompaniment, auxiliary office work, and the like.
  • the mobile robot is provided with at least one camera device for capturing images of its operating environment, so as to perform VSLAM (Visual Simultaneous Localization and Mapping); according to the constructed map, the mobile robot can plan paths for inspection, cleaning, and tidying-up operations.
  • the mobile robot caches the map built during its operation in a local storage device, or uploads it to a server.
  • the mobile robot includes, but is not limited to: a cleaning robot, a patrol robot, and a handling robot.
  • the cleaning robot is a mobile robot for performing cleaning and cleaning operations.
  • the patrol robot is a mobile robot for performing monitoring operations.
  • the handling robot is a mobile robot that performs handling and sorting operations.
  • the execution device 32 is used for controlled execution of corresponding operations, which corresponds to the type of the mobile robot.
  • the robot is a cleaning robot, and the execution device 32 includes a cleaning device for performing cleaning and cleaning operations, and a moving device for performing navigation and movement operations.
  • the cleaning device includes, but is not limited to: side brushes, rolling brushes, fans and the like.
  • the moving device includes, but is not limited to: a walking mechanism and a driving mechanism. Wherein, the walking mechanism may be arranged at the bottom of the cleaning robot, and the driving mechanism is built in the housing of the cleaning robot.
  • the mobile robot is a transport robot, and the execution device 32 includes a transport device for carrying and sorting operations and a mobile device for performing navigation and movement operations.
  • the conveying device includes but is not limited to: a robotic arm, a manipulator, a motor, and the like.
  • the moving device includes, but is not limited to: a walking mechanism and a driving mechanism.
  • the walking mechanism may be arranged at the bottom of the handling robot, and the driving mechanism is built in the housing of the handling robot.
  • the mobile robot is a patrol robot, and the execution device 32 includes a camera device for performing monitoring and a mobile device for performing navigation movement operations.
  • the camera device includes but is not limited to: a color camera device, a grayscale camera device, an infrared camera device, etc.
  • the mobile device includes, but is not limited to, a walking mechanism and a driving mechanism.
  • the walking mechanism may be arranged at the bottom of the patrol robot, and the driving mechanism is built in the housing of the patrol robot.
  • the storage device 31 is used to store at least one program and a pre-built robot coordinate system. Wherein, the at least one program can be used by the processing device to execute the interaction method described in the embodiment of FIG. 10.
  • the storage device 31 includes, but is not limited to: Read-Only Memory (Read-Only Memory, ROM for short), Random Access Memory (RAM for short), and Nonvolatile RAM (Nonvolatile RAM, NVRAM for short).
  • the storage device 31 includes a flash memory device or other non-volatile solid-state storage devices.
  • the storage device 31 may also include storage remote from the one or more processing devices, for example network-attached storage accessed via an RF circuit or an external port and a communication network, where the communication network may be the Internet, one or more intranets, local area networks (LAN), wide area networks (WAN), storage area networks (SAN), etc., or an appropriate combination thereof.
  • the storage device 31 also includes a memory controller, which can control access to the memory by components of the mobile robot such as the central processing unit (CPU) and the interface device 33.
  • the interface device 33 is used to communicate and interact with an intelligent terminal and a server.
  • the interface device 33 may receive an interaction instruction sent by the smart terminal or generated by the smart terminal via the server.
  • the interface device 33 of the mobile robot obtains the video stream captured by the smart terminal and the second input of the smart terminal, sent by the server or the smart terminal, and obtains from the smart terminal the consensus element related to the creation of the at least one target area.
  • the mobile robot provides the robot coordinate system of the mobile robot to the smart terminal or a cloud server through the interface device 33 for obtaining the interactive instruction.
  • the interface device 33 includes a network interface, a data line interface, and the like.
  • the network interface includes, but is not limited to: an Ethernet network interface device, a network interface device based on mobile networks (3G, 4G, 5G, etc.), a network interface device based on short-distance communication (WiFi, Bluetooth, etc.), and the like.
  • the data line interface includes but is not limited to: USB interface, RS232, etc.
  • the interface device 33 is in data connection with the storage device 31, the processing device 34, the Internet, the server, the smart terminal, the execution device 32, and the like.
  • the processing device 34 is connected to the storage device 31, the execution device 32, and the interface device 33, and is used to execute the at least one program to coordinate the storage device 31 and the interface device 33 to execute the interaction method described in the embodiment of FIG. 10 .
  • the processing device 34 includes one or more processors.
  • the processing device 34 is operable to perform data read and write operations with the storage device 31.
  • the processing device 34 performs operations such as extracting images, temporarily storing features, positioning in a map based on features, and the like.
  • the processing device 34 includes one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASIC), one or more digital signal processors (Digital Signal Processor, DSP for short), one or more Field Programmable Gate Arrays (FPGA), or any combination thereof.
  • the intelligent terminal or the server may generate an interactive command including the at least one target area described by the coordinate information in the robot coordinate system, to be sent to the mobile robot through the interface device of the smart terminal or the interface device of the server.
  • the processing device of the mobile robot may parse the interactive instruction to obtain at least the at least one target area described by the coordinate information in the robot coordinate system. For example, if the interactive instruction is generated based on the target area and a second input, the mobile robot parses the interactive instruction to obtain the at least one target area described by the coordinate information in the robot coordinate system and the second input.
  • the mobile robot processing device controls the execution device to perform related operations based on the second input and the target area.
  • the second input is the same as or similar to the second input mentioned above, and will not be described in detail here.
  • the interactive instruction is related to the function of the mobile robot and only includes the at least one target area, and the interactive instruction can be generated without the user's second input. Then, the mobile robot parses the interactive instruction to obtain the at least one target area described by using coordinate information in the robot coordinate system.
  • the processing device controls the execution device to perform related operations based on the preset function of the mobile robot and the target area.
  • the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the mobile robot connected to the smart terminal network.
  • FIG. 10 shows a schematic flowchart of another embodiment of the interaction method of this application.
  • the processing device of the mobile robot executes step S410 based on the coordinate information of the coordinate system of the mobile robot stored in the storage device.
  • the processing device obtains an interactive instruction from the smart terminal or the server.
  • the interaction instruction includes at least one target area; the target area is obtained by detecting user input by the smart terminal, and the target area includes coordinate information of the terminal coordinate system of the smart terminal, and the coordinate information is the same as The coordinate information in the robot coordinate system has a corresponding relationship.
  • the at least one target area is created by the processing device of the smart terminal in response to the user input detected while the display device previews the physical space interface, and includes coordinate information in the terminal coordinate system of the smart terminal. The processing device of the smart terminal can map the at least one target area created in the previewed physical space interface onto the map constructed by the smart terminal, and then determine the coordinate information of the at least one target area in the smart terminal map.
  • the manner of detecting user input and creating at least one target area in response to the input is the same as or similar to the manner in the interaction method described in FIG.
  • the processing device of the mobile robot obtains the coordinate information of the mobile robot coordinate system and the coordinate information of the at least one target area in the smart terminal map. Because the map constructed by the smart terminal according to its terminal coordinate system and the map constructed by the mobile robot based on the robot coordinate system correspond to the same actual physical space, the processing device of the mobile robot can obtain the coordinate information of the target area in the map constructed by the mobile robot based on the coordinate information of the target area in the map constructed by the intelligent terminal.
  • step S420 the execution device is controlled to perform an operation related to the at least one target area.
  • in the step of detecting the user's input while the display device of the smart terminal previews the physical space interface, the detected input is a first input, and the smart terminal creates at least one target area based on the first input to generate an interactive instruction.
  • the processing device of the mobile robot also obtains a second input from the smart terminal through the interface device; the step of obtaining the second input may be performed before or after the first input.
  • the processing device further controls the execution device to perform an operation related to the at least one target area based on the second input.
  • the execution device includes a mobile device, and the processing device generates a navigation route related to the at least one target area based on the second input, and controls the mobile device to perform navigation movement based on the navigation route.
  • the execution device includes a cleaning device, and the processing device controls a cleaning operation of the cleaning device in the at least one target area based on the second input.
  • the execution device includes a camera device, and the processing device controls a camera operation of the camera device in the at least one target area based on the second input.
  • the second input includes any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, strength of cleaning the target area, sorting or not sorting the items in the target area.
  • the mobile robot is a cleaning robot
  • the target area corresponds to an area where garbage is scattered on the ground
  • the second input is a cleaning target area
  • the processing device generates, based on the second input, a navigation route entering the at least one target area and controls the mobile device to perform navigation movement based on the navigation route; when the cleaning robot reaches the at least one target area, the cleaning device is controlled to clean up the garbage scattered on the ground.
  • the processing device may also control the force of cleaning the target area based on the second input.
  • the second input is that the target area is not cleaned.
  • the processing device generates, based on the second input, a navigation route that does not enter the at least one target area, and controls the mobile device to perform navigation movement based on the navigation route so as to move away from or bypass the at least one target area, for example as in the grid-map sketch below.
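  • One way this could be realized (a sketch under the assumption that the robot plans on a 2D occupancy grid, which the patent does not require) is to mark the "do not enter" area as occupied before path planning, so that any planner routes around it:

```python
import numpy as np

def mark_area_as_no_go(grid, area_min_xy, area_max_xy, resolution=0.05):
    """grid: 2D occupancy grid (0 = free, 1 = blocked); the area corners are
    given in metres in the robot coordinate system."""
    (x0, y0), (x1, y1) = area_min_xy, area_max_xy
    j0, j1 = int(x0 / resolution), int(x1 / resolution)   # column (x) indices
    i0, i1 = int(y0 / resolution), int(y1 / resolution)   # row (y) indices
    grid[i0:i1 + 1, j0:j1 + 1] = 1     # the planner now treats the area as blocked
    return grid

grid = np.zeros((200, 200), dtype=np.uint8)
grid = mark_area_as_no_go(grid, (2.0, 2.0), (3.0, 3.0))
```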
  • the mobile robot is a patrol robot
  • the target area corresponds to an area that the user needs to view
  • the second input is to enter the target area; the processing device generates, based on the second input, a navigation route entering the at least one target area, controls the mobile device to perform navigation movement based on the navigation route, and controls the camera device to capture images of the at least one target area when the patrol robot reaches the at least one target area.
  • if the second input is not to enter the target area, the processing device generates, based on the second input, a navigation route that does not enter the at least one target area, and controls the mobile device to perform navigation movement based on the navigation route.
  • the mobile robot is a transport robot; if the target area corresponds to an area where the user needs to sort items, the second input is to sort items in the target area, and the processing device generates a navigation route entering the at least one target area, controls the mobile device to perform navigation movement based on the navigation route, and, when the handling robot reaches the at least one target area, controls the handling device to carry and sort the items within the at least one target area.
  • if the target area corresponds to an area where the user does not need to organize items, the second input is not to organize items in the target area, and the processing device controls the handling device, based on the second input, not to organize the items within the at least one target area.
  • the interactive instruction is related to the function of the mobile robot, and the processing device of the mobile robot does not need to obtain the second input from the smart terminal through the interface device to control the The execution device executes an operation related to the at least one target area.
  • the interaction instruction only includes the at least one target area.
  • the mobile robot is a cleaning robot that performs cleaning work
  • the server or smart terminal sends the at least one target area to the cleaning robot, and the cleaning robot automatically cleans the target area.
  • the mobile robot is a patrol robot that performs patrol work
  • the server or smart terminal sends the at least one target area to the patrol robot, and the patrol robot automatically enters the target area to perform the patrol work.
  • the mobile robot is a handling robot that performs sorting and handling tasks, and the server or smart terminal sends the at least one target area to the handling robot, and the handling robot automatically enters the target area to perform the handling and sorting tasks.
  • when there are multiple target areas, the mobile robot may sort them based on an ordering provided by the smart terminal or the user, or based on the distances between the multiple target areas and the current position of the mobile robot, so that the mobile robot performs the related operations on the sorted target areas in order, as in the sketch below. For example, if there are two target areas, the first target area two meters away from the mobile robot and the second target area four meters away, the mobile robot first performs the related operations on the first target area and then on the second target area.
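  • A minimal sketch of the distance-based ordering in this example (straight-line distance is assumed; the patent does not fix a metric):

```python
import math

def sort_targets_by_distance(robot_pos, target_centers):
    """robot_pos: (x, y) of the robot; target_centers: (x, y) centre points of
    the target areas, all expressed in the robot coordinate system."""
    return sorted(target_centers,
                  key=lambda c: math.hypot(c[0] - robot_pos[0], c[1] - robot_pos[1]))

# Two areas, two and four metres away: the nearer area is handled first.
print(sort_targets_by_distance((0.0, 0.0), [(4.0, 0.0), (2.0, 0.0)]))  # [(2.0, 0.0), (4.0, 0.0)]
```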
  • the mobile robot obtains, from the smart terminal or the server, an interactive instruction that includes at least one target area, and the processing device also obtains from the smart terminal, through the interface device, the consensus element related to the creation of the at least one target area; the consensus element is used to determine the coordinate position of the at least one target area in the robot coordinate system.
  • the processing device of the mobile robot analyzes the interaction instruction to obtain the at least one target area described by using coordinate information in the terminal coordinate system of the smart terminal.
  • the consensus element related to the creation of at least one target area refers to the data required to determine the coordinate information of the at least one target area in the robot coordinate system, including but not limited to: video streams captured by smart terminals, smart Positioning features shared by the terminal and the mobile robot, etc.
  • in this case, the processing device also performs steps S510 and S520. In step S510, the corresponding relationship is determined based on the coordinate information of the consensus element in the robot coordinate system and its coordinate information in the terminal coordinate system.
  • the consensus element is a positioning feature shared by the smart terminal and the mobile robot.
  • the shared positioning feature is not only the positioning feature of the map constructed by the smart terminal in the terminal coordinate system, but also the positioning feature of the map constructed by the mobile robot in the robot coordinate system.
  • when constructing the map, the intelligent terminal extracts multiple positioning features describing objects in the actual physical space based on the video stream displayed on the previewed physical space interface, and determines the coordinates of those positioning features in the coordinate system of the smart terminal.
  • the location feature of the map constructed by the smart terminal in the terminal coordinate system includes the location feature corresponding to the table leg
  • the location feature of the map constructed by the mobile robot in the robot coordinate system also includes the location feature corresponding to the table leg.
  • based on the coordinates of the positioning feature corresponding to the table leg in the robot coordinate system and in the terminal coordinate system, the processing device of the mobile robot can determine the corresponding relationship between the coordinates of that positioning feature in the two coordinate systems; furthermore, the correspondence between all coordinates in the terminal coordinate system of the smart terminal and all coordinates in the robot coordinate system of the mobile robot can be determined.
  • the consensus element is an image containing an object corresponding to the positioning feature of the mobile robot map.
  • the video stream obtained by the mobile robot contains an image of the chair.
  • the processing device of the mobile robot can use an image matching algorithm to match the positioning features in the at least one frame of image against the physical space map, positioning features, and coordinate information pre-built by the mobile robot, so as to determine the positioning features in the image that match the mobile robot map.
  • the processing device of the mobile robot invokes the same extraction algorithm that extracts the location features in the image when the mobile robot constructs the map, and extracts the candidate location features in the image based on the extraction algorithm.
  • the extraction algorithm includes, but is not limited to: an extraction algorithm based on at least one feature of texture, shape, and spatial relationship.
  • examples of extraction algorithms based on texture features include at least one of gray-level co-occurrence matrix texture analysis, the checkerboard feature method, and the random field model method; examples of extraction algorithms based on shape features include at least one of the Fourier shape descriptor method and quantitative shape measurement methods; extraction algorithms based on spatial relationship features use the mutual spatial positions or relative directional relationships between multiple image blocks segmented from the image.
  • the processing device of the mobile robot uses image matching technology to match the candidate location feature fs1 in the image with the location feature fs2 corresponding to the mobile robot map, thereby obtaining a matching location feature fs1'.
  • the processing device of the mobile robot can determine the correspondence between the coordinates of the smart terminal map and the mobile robot map based on the coordinates of fs1' in the smart terminal map and its coordinates in the mobile robot map. For example, the processing device of the mobile robot obtains the coordinates of the positioning feature of the chair in the mobile robot coordinate system and in the terminal coordinate system, and can then obtain the correspondence between any coordinate in the terminal coordinate system and the coordinates in the mobile robot coordinate system.
  • in step S520, the processing device of the mobile robot determines the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on the correspondence between the coordinate information in the robot coordinate system and in the terminal coordinate system, and on the coordinate information of the target area in the terminal coordinate system of the smart terminal. For example, if the target area corresponds to the area where garbage is scattered in the actual physical space, the mobile robot can determine the coordinates of the target area in the mobile robot map based on the multiple coordinates of the target area in the smart terminal map and the corresponding relationship.
  • the processing device of the mobile robot controls the execution device of the mobile robot to perform operations related to the at least one target region based on the coordinate information of the at least one target area in the robot coordinate system of the mobile robot obtained in step S520 .
  • the description of the processing device controlling the execution device to perform the operation related to the at least one target area is the same as or similar to that in step S420, and details are omitted here.
  • in other examples, the processing device of the mobile robot may also determine the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on a consensus element that is a positioning feature shared by the mobile robot and the smart terminal.
  • the mobile robot can obtain the coordinate information of the at least one target area in the mobile robot coordinate system based on the interaction method described in any embodiment of the present application, and then the processing device of the mobile robot can control The execution device executes an operation related to the at least one target area.
  • the smart terminal detects the user's input in a state where the display device previews the indoor space interface.
  • the location feature of the map constructed by the cleaning robot in the robot coordinate system and the coordinates of the location feature in the map are pre-stored in the smart terminal.
  • the processing device of the smart terminal responds to the detected user input to create a target area including scattered garbage in the previewed indoor space interface.
  • based on the consensus elements, the coordinates of the target area including the scattered garbage in the robot coordinate system of the mobile robot can be determined.
  • the related description of the consensus element is the same or similar to the consensus element mentioned in step S210, and will not be described in detail here.
  • the processing device of the smart terminal generates an interactive command based on the coordinates of the target area including the scattered garbage in the robot coordinate system of the cleaning robot and sends it to the cleaning robot or sends it to the cleaning robot via the server.
  • the interactive instruction may only include the coordinates of the target area where the garbage is scattered in the robot coordinate system of the cleaning robot.
  • when the processing device of the cleaning robot receives the interactive instruction through the interface device, it can directly generate, based on the interactive instruction, a navigation route entering the target area including the scattered garbage, control the mobile device to perform navigation movement based on the navigation route, and, when the cleaning robot reaches that target area, control the cleaning device to clean up the scattered garbage on the ground.
  • the interactive instruction may also include a second input of the user.
  • the user's second input includes, but is not limited to: cleaning or not cleaning the target area, the strength of cleaning the target area, and entering or not entering the target area.
  • for example, if the second input is a deep clean of the target area, then when the processing device of the cleaning robot receives the interactive instruction through the interface device, it generates, based on the interactive instruction, a navigation route entering the target area including the scattered garbage and controls the mobile device to perform navigation movement based on the navigation route; when the cleaning robot reaches the target area, it controls the fan, side brush, and rolling brush of the cleaning device so that the cleaning device deeply cleans the garbage scattered on the ground, for instance as in the sketch below.
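  • A sketch of how the deep-clean request might translate into actuator settings; the device interface and the numeric values are assumptions for illustration, not part of the patent:

```python
CLEANING_PROFILES = {
    "normal": {"fan_power": 0.6, "side_brush_rpm": 120, "rolling_brush_rpm": 800},
    "deep":   {"fan_power": 1.0, "side_brush_rpm": 180, "rolling_brush_rpm": 1200},
}

def apply_cleaning_profile(cleaning_device, profile_name: str) -> None:
    """cleaning_device is a hypothetical wrapper around the fan and the brushes."""
    profile = CLEANING_PROFILES[profile_name]
    cleaning_device.set_fan_power(profile["fan_power"])            # assumed device API
    cleaning_device.set_side_brush_rpm(profile["side_brush_rpm"])
    cleaning_device.set_rolling_brush_rpm(profile["rolling_brush_rpm"])
```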
  • the location feature of the map constructed by the cleaning robot in the robot coordinate system and the coordinates of the location feature in the map are pre-stored in the server.
  • the processing device of the smart terminal responds to the detected user input to create a target area including pet feces in the previewed indoor space interface.
  • the server obtains the target area including pet feces from the smart terminal.
  • the processing device on the server side can also determine the coordinates of the target area including pet feces in the robot coordinate system of the mobile robot based on the coordinates of the consensus elements in the robot coordinate system and the coordinates in the terminal coordinate system, respectively.
  • the related description of the consensus elements is the same as or similar to that of the consensus elements mentioned in steps S311 and S313 and will not be repeated here.
  • the server generates an interactive command based on the coordinates of the target area containing pet feces in the robot coordinate system of the mobile robot and sends it to the mobile robot through the interface device.
  • the interactive instruction includes the coordinates of the target area containing pet feces in the robot coordinate system of the cleaning robot and the user's second input of not entering the target area.
  • when the processing device of the cleaning robot receives the interactive instruction through the interface device, it generates a navigation route that does not enter the target area containing pet feces based on the interactive instruction and controls the mobile device to perform navigational movement based on that route.
  • the second input is not limited to not entering the target area; the second input may relate to the target area created by the smart terminal based on the actual user input (a sketch of one possible server-side flow follows this example).
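Again as an illustration only, a server-side flow that converts the target area received from the smart terminal into the robot coordinate system and forwards a do-not-enter command might look like the sketch below. Every function and field name here is an assumption, and fit_rigid_transform is the helper from the earlier coordinate-mapping sketch.

```python
# Illustrative server-side flow (all names are assumptions; fit_rigid_transform is the
# helper defined in the earlier sketch): the server stores the consensus-element
# coordinates, converts the terminal-frame target area into the robot frame, and
# forwards an interactive command telling the cleaning robot not to enter that area.
import numpy as np

def serve_do_not_enter_area(area_terminal, consensus_terminal, consensus_robot, send_to_robot):
    """Convert a terminal-frame target area to the robot frame and forward a do-not-enter command."""
    R, t = fit_rigid_transform(np.asarray(consensus_terminal, dtype=float),
                               np.asarray(consensus_robot, dtype=float))
    area_robot = (R @ np.asarray(area_terminal, dtype=float).T).T + t
    command = {
        "target_area": area_robot.tolist(),   # polygon corners in the robot coordinate system
        "second_input": "do_not_enter",       # e.g. keep the robot away from pet feces
    }
    send_to_robot(command)                    # e.g. push the command to the robot's interface device
    return command
```

A serialisation such as JSON over the existing server-robot channel would be one natural way to transport such a command, though the patent does not prescribe any particular format.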
  • the location features of the map constructed by the cleaning robot in the robot coordinate system, and the coordinates of those location features in the map, are pre-stored in the cleaning robot.
  • the processing device of the smart terminal responds to the detected user input by creating a target area containing a winding object in the previewed indoor space interface.
  • the processing device of the cleaning robot obtains, through the interface device, the interactive instruction sent by the smart terminal or forwarded via the server, and the interactive instruction contains the coordinates of the target area containing the winding object in the coordinate system of the smart terminal.
  • the processing device of the cleaning robot also uses the coordinates of the consensus elements in the robot coordinate system and in the terminal coordinate system to determine the coordinates of the target area containing the winding object in the robot coordinate system of the cleaning robot.
  • the related description of the consensus element is the same as or similar to that of the consensus element mentioned in step S510 and will not be repeated here.
  • the cleaning robot generates a navigation route that does not enter the target area containing the winding object, based on the coordinates of that target area in the robot coordinate system of the cleaning robot and the user's second input of not entering the target area, and controls the mobile device to perform navigational movement based on that route.
  • the second input is not limited to not entering the target area; the second input relates to the target area created by the smart terminal based on the actual user input (one possible way a robot might honour such a do-not-enter area is sketched below).
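To illustrate one way, among others, that a cleaning robot might honour a do-not-enter target area, the sketch below rasterises the target-area polygon into the robot's occupancy grid as blocked cells, so that any grid-based planner (for example A*) will route around it; the grid resolution, origin convention, and helper names are assumptions.

```python
# Illustrative sketch (grid resolution and helper names are assumptions): marking a
# "do not enter" target area as blocked cells in the robot's occupancy grid so that a
# grid-based planner will route around it rather than through it.
from typing import List, Tuple
import numpy as np

def point_in_polygon(x: float, y: float, poly: List[Tuple[float, float]]) -> bool:
    """Ray-casting (even-odd) test: is the point (x, y) inside the polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def block_keep_out_zone(grid: np.ndarray, resolution: float, origin: Tuple[float, float],
                        keep_out: List[Tuple[float, float]]) -> np.ndarray:
    """Return a copy of the occupancy grid with cells inside the keep-out polygon marked occupied (1)."""
    blocked = grid.copy()
    ox, oy = origin
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            # world coordinates of the cell centre
            wx = ox + (c + 0.5) * resolution
            wy = oy + (r + 0.5) * resolution
            if point_in_polygon(wx, wy, keep_out):
                blocked[r, c] = 1
    return blocked

# Hypothetical 5 m x 5 m map at 0.1 m resolution with a keep-out square around a winding object.
grid = np.zeros((50, 50), dtype=np.uint8)
no_go = [(2.0, 2.0), (3.0, 2.0), (3.0, 3.0), (2.0, 3.0)]
grid_for_planning = block_keep_out_zone(grid, resolution=0.1, origin=(0.0, 0.0), keep_out=no_go)
```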
  • This application also provides a control system for a mobile robot, including an intelligent terminal and a mobile robot.
  • the hardware of the mobile robot and the smart terminal in the control system, and the interaction methods they each perform, are the same as or similar to those of the mobile robot and the smart terminal described in the previous embodiments and will not be detailed here.
  • the present application also provides a computer-readable storage medium for storing at least one program, and the at least one program, when called, executes the interaction method described in the above-mentioned embodiment of this application with respect to FIG. 2.
  • if the interaction method is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the method described in each embodiment of the present application.
  • the computer readable and writable storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, USB flash drives, removable hard disks, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • in addition, any connection can properly be termed a computer-readable medium; for example, if the instructions are sent from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium.
  • it should be understood, however, that computer readable and writable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transitory, tangible storage media.
  • as used in this application, magnetic disks and optical disks include compact disks (CD), laser disks, optical disks, digital versatile disks (DVD), floppy disks, and Blu-ray disks; magnetic disks usually reproduce data magnetically, while optical disks reproduce data optically with lasers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to an intelligent terminal, a control system, and an interaction method with a mobile robot. The method comprises the following steps: first, detecting a user input while a display device is previewing a physical space interface; then, in response to the detected input, creating at least one target area in the previewed physical space interface; and finally, generating an interactive instruction on the basis of the at least one target area and sending the interactive instruction to a mobile robot, so that the mobile robot performs navigational movement and behavior control on the basis of the clear and precise target area created by the intelligent terminal.
PCT/CN2019/108590 2019-09-27 2019-09-27 Terminal intelligent, système de commande et procédé d'interaction avec un robot mobile WO2021056428A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/108590 WO2021056428A1 (fr) 2019-09-27 2019-09-27 Terminal intelligent, système de commande et procédé d'interaction avec un robot mobile
CN201980094943.6A CN113710133B (zh) 2019-09-27 2019-09-27 智能终端、控制系统及与移动机器人的交互方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/108590 WO2021056428A1 (fr) 2019-09-27 2019-09-27 Terminal intelligent, système de commande et procédé d'interaction avec un robot mobile

Publications (1)

Publication Number Publication Date
WO2021056428A1 true WO2021056428A1 (fr) 2021-04-01

Family

ID=75164788

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/108590 WO2021056428A1 (fr) 2019-09-27 2019-09-27 Terminal intelligent, système de commande et procédé d'interaction avec un robot mobile

Country Status (2)

Country Link
CN (1) CN113710133B (fr)
WO (1) WO2021056428A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113848892A (zh) * 2021-09-10 2021-12-28 广东盈峰智能环卫科技有限公司 一种机器人清扫区域划分方法、路径规划方法及装置
CN114153310A (zh) * 2021-11-18 2022-03-08 天津塔米智能科技有限公司 一种机器人迎宾方法、装置、设备和介质
CN114431800A (zh) * 2022-01-04 2022-05-06 北京石头世纪科技股份有限公司 清洁机器人划区清洁的控制方法、装置及电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105407774A (zh) * 2013-07-29 2016-03-16 三星电子株式会社 自动清扫系统、清扫机器人和控制清扫机器人的方法
CN106933227A (zh) * 2017-03-31 2017-07-07 联想(北京)有限公司 一种引导智能机器人的方法以及电子设备
US20180055312A1 (en) * 2016-08-30 2018-03-01 Lg Electronics Inc. Robot cleaner, method of operating the same, and augmented reality system
CN109262607A (zh) * 2018-08-15 2019-01-25 武汉华安科技股份有限公司 机器人坐标系转换方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6259233B2 (ja) * 2013-09-11 2018-01-10 学校法人常翔学園 移動ロボット、移動ロボット制御システム、及びプログラム
CN109725632A (zh) * 2017-10-30 2019-05-07 速感科技(北京)有限公司 可移动智能设备控制方法、可移动智能设备及智能扫地机
CN110147091B (zh) * 2018-02-13 2022-06-28 深圳市优必选科技有限公司 机器人运动控制方法、装置及机器人
CN110200549A (zh) * 2019-04-22 2019-09-06 深圳飞科机器人有限公司 清洁机器人控制方法及相关产品

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105407774A (zh) * 2013-07-29 2016-03-16 三星电子株式会社 自动清扫系统、清扫机器人和控制清扫机器人的方法
US20180055312A1 (en) * 2016-08-30 2018-03-01 Lg Electronics Inc. Robot cleaner, method of operating the same, and augmented reality system
CN106933227A (zh) * 2017-03-31 2017-07-07 联想(北京)有限公司 一种引导智能机器人的方法以及电子设备
CN109262607A (zh) * 2018-08-15 2019-01-25 武汉华安科技股份有限公司 机器人坐标系转换方法

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113848892A (zh) * 2021-09-10 2021-12-28 广东盈峰智能环卫科技有限公司 一种机器人清扫区域划分方法、路径规划方法及装置
CN113848892B (zh) * 2021-09-10 2024-01-16 广东盈峰智能环卫科技有限公司 一种机器人清扫区域划分方法、路径规划方法及装置
CN114153310A (zh) * 2021-11-18 2022-03-08 天津塔米智能科技有限公司 一种机器人迎宾方法、装置、设备和介质
CN114431800A (zh) * 2022-01-04 2022-05-06 北京石头世纪科技股份有限公司 清洁机器人划区清洁的控制方法、装置及电子设备
CN114431800B (zh) * 2022-01-04 2024-04-16 北京石头世纪科技股份有限公司 清洁机器人划区清洁的控制方法、装置及电子设备

Also Published As

Publication number Publication date
CN113710133B (zh) 2022-09-09
CN113710133A (zh) 2021-11-26

Similar Documents

Publication Publication Date Title
US11385720B2 (en) Picture selection method of projection touch
CN108885459B (zh) 导航方法、导航系统、移动控制系统及移动机器人
US11126257B2 (en) System and method for detecting human gaze and gesture in unconstrained environments
WO2021103987A1 (fr) Procédé de commande pour robot de balayage, robot de balayage et support de rangement
WO2021056428A1 (fr) Terminal intelligent, système de commande et procédé d'interaction avec un robot mobile
JP5942456B2 (ja) 画像処理装置、画像処理方法及びプログラム
KR102577785B1 (ko) 청소 로봇 및 그의 태스크 수행 방법
CN110310175A (zh) 用于移动增强现实的系统和方法
JP5807686B2 (ja) 画像処理装置、画像処理方法及びプログラム
JP2019071046A (ja) ロボット仮想境界線
WO2020223975A1 (fr) Procédé de localisation de dispositif sur carte, serveur, et robot mobile
JP5213183B2 (ja) ロボット制御システム及びロボット制御プログラム
CN113116224B (zh) 机器人及其控制方法
JP2011189481A (ja) 制御装置、制御方法およびプログラム
US9477302B2 (en) System and method for programing devices within world space volumes
US20200357177A1 (en) Apparatus and method for generating point cloud data
CN111643899A (zh) 一种虚拟物品显示方法、装置、电子设备和存储介质
CN115164906B (zh) 定位方法、机器人和计算机可读存储介质
EP3422145A1 (fr) Fourniture de contenu de réalité virtuelle
US10241588B1 (en) System for localizing devices in a room
WO2021248857A1 (fr) Procédé et système de discrimination d'attribut d'obstacle et robot intelligent
CN108874141B (zh) 一种体感浏览方法和装置
US20150153715A1 (en) Rapidly programmable locations in space
CN110962132B (zh) 一种机器人系统
WO2021125019A1 (fr) Système d'informations, procédé de traitement d'informations, programme de traitement d'informations et système de robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19947306

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19947306

Country of ref document: EP

Kind code of ref document: A1
