WO2021056428A1 - Intelligent terminal, control system, and method for interaction with mobile robot - Google Patents

Intelligent terminal, control system, and method for interaction with mobile robot

Info

Publication number
WO2021056428A1
Authority
WO
WIPO (PCT)
Prior art keywords
target area
mobile robot
robot
coordinate system
input
Application number
PCT/CN2019/108590
Other languages
French (fr)
Chinese (zh)
Inventor
李重兴
崔彧玮
Original Assignee
珊口(深圳)智能科技有限公司
珊口(上海)智能科技有限公司
Application filed by 珊口(深圳)智能科技有限公司 and 珊口(上海)智能科技有限公司
Priority to CN201980094943.6A (CN113710133B)
Priority to PCT/CN2019/108590 (WO2021056428A1)
Publication of WO2021056428A1

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L9/00: Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided are an intelligent terminal, a control system, and a method for interaction with a mobile robot. The method comprises: firstly, detecting an input of a user in a state where a display apparatus previews a physical space interface; then, in response to the detected input, creating at least one target region in the previewed physical space interface; and finally, generating an interactive instruction on the basis of the at least one target region, and sending the interactive instruction to a mobile robot, such that the mobile robot performs navigation movement and behavior control on the basis of the accurate and clear target region created by the intelligent terminal.

Description

Intelligent terminal, control system, and method for interaction with a mobile robot
Technical Field
This application relates to the technical field of mobile robot interaction, and in particular to an intelligent terminal, a control system, and a method for interaction with a mobile robot.
Background
During operation, a mobile robot relies on a pre-built map for navigation movement and behavior control. When a user wants the mobile robot to perform, or to refrain from performing, a predetermined operation in a target area, the prior art typically determines a designated position from the user's voice or gesture commands, and the mobile robot then determines a target area as a preset range centered on that designated position; alternatively, the user edits the map pre-built by the mobile robot so that the robot can determine the target area.
In practice, however, users often expect the mobile robot to perform predetermined navigation movement or behavior control based on a precise target area. For mobile robots such as cleaning robots and patrol robots, a precise preset range cannot be determined from the corresponding user commands; in particular, when the target area is irregular, the user cannot describe it accurately with voice or gesture commands, and the mobile robot therefore cannot determine the precise target area either. Although editing the map allows the mobile robot to determine a precise preset range, the map constructed by the mobile robot is not intuitive to the user, who cannot immediately tell where a target area in the actual physical space lies on the robot's map.
Summary of the Invention
In view of the above shortcomings of the prior art, the purpose of this application is to provide an intelligent terminal, a control system, and a method for interaction with a mobile robot, so as to solve the problems in the prior art that a mobile robot cannot determine a precise target area based on user commands and that a user editing the mobile robot's map cannot immediately determine the exact position, on the robot's map, of a target area in the actual physical space.
To achieve the above and other related purposes, a first aspect of this application provides a method for interaction with a mobile robot, applied to an intelligent terminal including at least a display device, and comprising the following steps: detecting a user input while the display device previews a physical space interface; in response to the detected input, creating at least one target area in the previewed physical space interface, where the target area includes coordinate information in the terminal coordinate system of the intelligent terminal and this coordinate information has a correspondence with coordinate information in the robot coordinate system of the mobile robot; and generating an interactive instruction based on the at least one target area and sending it to the mobile robot.
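To make the sequence of steps in the first aspect concrete, the following is a minimal terminal-side sketch; `preview`, `robot_link`, and `terminal_to_robot` are hypothetical stand-ins for the terminal's preview interface, its communication link to the robot, and the coordinate correspondence, none of which are named as such in this application.

```python
# Hypothetical terminal-side flow: detect input -> create target area -> build and send instruction.

def send_target_area_instruction(preview, robot_link, terminal_to_robot, action="clean"):
    """Create one target area from user input in the preview and send it to the robot."""
    tap_points = preview.collect_user_input()        # taps or a sliding track in the preview interface
    vertices_terminal = list(tap_points)              # target area in the terminal coordinate system
    # Express the same area in the robot coordinate system via the known correspondence.
    vertices_robot = [terminal_to_robot(p) for p in vertices_terminal]
    instruction = {"target_areas": [vertices_robot], "action": action}
    robot_link.send(instruction)                      # interactive instruction sent to the mobile robot
    return instruction
```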
In some implementations of the first aspect of this application, the step of detecting a user input while the display device previews the physical space interface includes: displaying, in the physical space interface previewed by the display device, the video stream captured in real time by the camera device of the intelligent terminal; and detecting, with an input device of the intelligent terminal, the user's input in the physical space interface.
In some implementations of the first aspect of this application, the detected input includes at least one of the following: a sliding input operation and a tap input operation.
In some implementations of the first aspect of this application, the step of detecting a user input while the display device previews the physical space interface includes: displaying, in the physical space interface previewed by the display device, the video stream captured in real time by the camera device of the intelligent terminal; and detecting a motion sensing device in the intelligent terminal to obtain the user's input.
In some implementations of the first aspect of this application, the method further includes: constructing the terminal coordinate system while previewing the physical space interface, so as to respond to the detected input once construction of the terminal coordinate system is complete.
In some implementations of the first aspect of this application, the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the intelligent terminal; or the coordinate information in the robot coordinate system of the mobile robot is pre-stored in a cloud server to which the intelligent terminal is connected over a network; or the coordinate information in the robot coordinate system of the mobile robot is pre-stored in a mobile robot to which the intelligent terminal is connected over a network.
In some implementations of the first aspect of this application, the interaction method further includes: determining the correspondence based on the coordinate information, in the robot coordinate system and in the terminal coordinate system respectively, of consensus elements extracted from the previewed physical space interface; and determining, based on the correspondence, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
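The correspondence between the two coordinate systems can be treated as a rigid transform estimated from the consensus elements. The sketch below assumes the consensus elements have already been matched and expressed as 2D point pairs on the floor plane of both maps; the Kabsch-style least-squares estimate is one illustrative way to compute the correspondence, not a method prescribed by this application.

```python
import numpy as np

def estimate_terminal_to_robot(pts_terminal: np.ndarray, pts_robot: np.ndarray):
    """Estimate rotation R and translation t so that R @ p_terminal + t ~= p_robot.

    pts_terminal, pts_robot: (N, 2) arrays holding the same consensus elements expressed
    in the terminal coordinate system and in the robot coordinate system, row by row.
    """
    mu_t, mu_r = pts_terminal.mean(axis=0), pts_robot.mean(axis=0)
    H = (pts_terminal - mu_t).T @ (pts_robot - mu_r)   # 2x2 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_r - R @ mu_t
    return R, t

def to_robot_frame(vertices_terminal, R, t):
    """Map target-area vertices from the terminal map into the robot map."""
    return [tuple(R @ np.asarray(v) + t) for v in vertices_terminal]
```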
In some implementations of the first aspect of this application, the step of generating an interactive instruction based on the at least one target area to send to the mobile robot includes: generating an interactive instruction containing the at least one target area described with coordinate information in the robot coordinate system, and sending it to the mobile robot.
In some implementations of the first aspect of this application, the step of generating an interactive instruction based on the at least one target area to send to the mobile robot includes: generating an interactive instruction containing the at least one target area and the consensus elements related to creating the at least one target area, and sending it to the mobile robot, where the consensus elements are used to determine the coordinate position of the at least one target area in the robot coordinate system.
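The two variants above differ only in whether the terminal or the robot resolves the coordinate correspondence. The sketch below shows what each instruction payload might contain; the field names and example values are illustrative assumptions rather than a format defined by this application.

```python
# Variant 1: the terminal has already converted the target area into robot coordinates.
instruction_robot_frame = {
    "target_areas": [{"vertices_robot": [[1.2, 0.4], [2.0, 0.4], [2.0, 1.5], [1.2, 1.5]]}],
    "action": "clean",
}

# Variant 2: the target area stays in terminal coordinates, and the consensus elements
# needed to resolve the correspondence travel with it so the robot can do the conversion.
instruction_terminal_frame = {
    "target_areas": [{"vertices_terminal": [[0.3, 0.1], [1.1, 0.1], [1.1, 1.2], [0.3, 1.2]]}],
    "consensus_elements": [
        {"descriptor": "...", "coord_terminal": [0.5, 0.2]},  # matched by the robot in its own map
    ],
    "action": "do_not_enter",
}
```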
In some implementations of the first aspect of this application, the method further includes at least one of the following steps: prompting the user to perform an input operation through the physical space interface; prompting the user to perform an input operation with sound; or prompting the user to perform an input operation with vibration.
In some implementations of the first aspect of this application, in the step of detecting a user input while the display device previews the physical space interface, the detected user input is a first input, and the method further includes: generating an interactive instruction based on the target area and a detected second input of the user, and sending it to the mobile robot.
In some implementations of the first aspect of this application, the second input includes any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, and tidying or not tidying the items in the target area.
A second aspect of this application further provides an intelligent terminal, including: a display device for providing a preview operation for a physical space interface; a storage device for storing at least one program; an interface device for communicating and interacting with a mobile robot; and a processing device, connected to the display device, the storage device, and the interface device, for executing the at least one program so as to coordinate the display device, the storage device, and the interface device to perform the interaction method according to any one of the first aspect of this application.
A third aspect of this application further provides a server, including: a storage device for storing at least one program; an interface device for assisting communication and interaction between an intelligent terminal and a mobile robot; and a processing device, connected to the storage device and the interface device, for executing the at least one program so as to coordinate the storage device and the interface device to perform the following interaction method: obtaining at least one target area from the intelligent terminal, where the target area is obtained by the intelligent terminal detecting a user input, the target area includes coordinate information in the terminal coordinate system of the intelligent terminal, and this coordinate information has a correspondence with coordinate information in the robot coordinate system of the mobile robot; and generating an interactive instruction based on the at least one target area to be sent to the mobile robot through the interface device.
In some implementations of the third aspect of this application, the storage device pre-stores the robot coordinate system; or the processing device obtains the robot coordinate system from the intelligent terminal or the mobile robot through the interface device.
In some implementations of the third aspect of this application, the processing device further obtains, through the interface device, the video stream captured by the intelligent terminal; the processing device determines the correspondence based on the coordinate information, in the robot coordinate system and in the terminal coordinate system respectively, of consensus elements provided by the video stream, and determines, based on the correspondence, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
In some implementations of the third aspect of this application, the step of the processing device generating an interactive instruction based on the at least one target area to send to the mobile robot includes: generating an interactive instruction containing the at least one target area described with coordinate information in the robot coordinate system, to be sent to the mobile robot through the interface device.
In some implementations of the third aspect of this application, the step of generating an interactive instruction based on the at least one target area to send to the mobile robot includes: obtaining, from the intelligent terminal, the consensus elements related to creating the at least one target area, where the consensus elements are used to determine the coordinate position of the at least one target area in the robot coordinate system; and generating an interactive instruction containing the at least one target area and the consensus elements, to be sent to the mobile robot through the interface device.
In some implementations of the third aspect of this application, the processing device further obtains a second input from the intelligent terminal through the interface device, and the processing device further generates an interactive instruction based on the target area and the second input, to be sent to the mobile robot.
In some implementations of the third aspect of this application, the second input includes any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, and tidying or not tidying the items in the target area.
A fourth aspect of this application further provides a mobile robot, including: a storage device for storing at least one program and a pre-built robot coordinate system; an interface device for communicating and interacting with an intelligent terminal; an execution device for performing corresponding operations under control; and a processing device, connected to the storage device, the interface device, and the execution device, for executing the at least one program so as to coordinate the storage device and the interface device to perform the following interaction method: obtaining an interactive instruction from the intelligent terminal, where the interactive instruction contains at least one target area, the target area is obtained by the intelligent terminal detecting a user input, the target area includes coordinate information in the terminal coordinate system of the intelligent terminal, and this coordinate information has a correspondence with the coordinate information in the robot coordinate system; and controlling the execution device to perform an operation related to the at least one target area.
In some implementations of the fourth aspect of this application, the processing device provides the robot coordinate system of the mobile robot to the intelligent terminal or to a cloud server through the interface device, for use in obtaining the interactive instruction.
In some implementations of the fourth aspect of this application, the step of the processing device performing an operation related to the at least one target area includes: parsing the interactive instruction to obtain at least the at least one target area described with coordinate information in the robot coordinate system; and controlling the execution device to perform an operation related to the at least one target area.
In some implementations of the fourth aspect of this application, the processing device further obtains, from the intelligent terminal through the interface device, the consensus elements related to creating the at least one target area, where the consensus elements are used to determine the coordinate position of the at least one target area in the robot coordinate system; the processing device further performs the following steps: determining the correspondence based on the coordinate information of the consensus elements in the robot coordinate system and in the terminal coordinate system respectively, and determining, based on the correspondence, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
In some implementations of the fourth aspect of this application, the processing device further obtains a second input from the intelligent terminal through the interface device, and the processing device further controls, based on the second input, the execution device to perform an operation related to the at least one target area.
In some implementations of the fourth aspect of this application, the second input includes any one of the following: cleaning or not cleaning the target area, the intensity with which to clean the target area, entering or not entering the target area, and tidying or not tidying the items in the target area.
In some implementations of the fourth aspect of this application, the execution device includes a moving device, and the processing device generates a navigation route related to the at least one target area based on the second input and controls the moving device to perform navigation movement based on the navigation route.
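One possible way for the robot side to turn a target area into such a navigation route is a simple back-and-forth (boustrophedon) sweep. The sketch below plans lanes over the area's bounding box only; clipping the lanes to the exact polygon is omitted for brevity, and nothing in this application requires this particular planner.

```python
def coverage_route(vertices_robot, lane_spacing=0.25):
    """Generate a simple back-and-forth route over the bounding box of a target area.

    vertices_robot: target-area polygon vertices in the robot coordinate system.
    lane_spacing: distance between adjacent sweep lanes, in map units (assumed meters).
    """
    xs = [v[0] for v in vertices_robot]
    ys = [v[1] for v in vertices_robot]
    x_min, x_max, y_min, y_max = min(xs), max(xs), min(ys), max(ys)
    route, y, left_to_right = [], y_min, True
    while y <= y_max:
        lane = [(x_min, y), (x_max, y)] if left_to_right else [(x_max, y), (x_min, y)]
        route.extend(lane)
        y += lane_spacing
        left_to_right = not left_to_right
    return route  # ordered waypoints for the moving device to follow
```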
In some implementations of the fourth aspect of this application, the execution device includes a cleaning device, and the processing device controls, based on the second input, the cleaning operation of the cleaning device within the at least one target area.
In some implementations of the fourth aspect of this application, the mobile robot includes a cleaning robot, a patrol robot, or a handling robot.
A fifth aspect of this application further provides a control system for a mobile robot, including: the intelligent terminal according to the second aspect of this application; and the mobile robot according to any one of the fourth aspect of this application.
A fifth aspect of this application further provides a computer-readable storage medium storing at least one program, where the at least one program, when invoked, executes and implements the interaction method according to any one of the first aspect of this application.
As described above, the intelligent terminal, the control system, and the method for interaction with a mobile robot of this application use an intelligent terminal that has a positioning-and-mapping capability and a display device to detect user input and thereby create at least one target area. By matching the target area, described with the coordinate information of the intelligent terminal, onto the mobile robot's map on at least one side among the intelligent terminal, the server, and the mobile robot, the mobile robot can, based on a precise target area on its own map, perform a predetermined operation within the target area or refrain from performing a predetermined operation within it. This improves the accuracy with which the mobile robot determines the user-specified target area during human-machine interaction, and reduces the difficulty for the user of determining the position of the target area on the map when editing the mobile robot's map.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the intelligent terminal of this application in one embodiment.
FIG. 2 is a schematic flowchart of the method for interaction with a mobile robot of this application in one embodiment.
FIG. 3a is a schematic diagram, in one embodiment, of a target area created by the intelligent terminal of this application in the previewed physical space interface.
FIG. 3b is a schematic diagram, in another embodiment, of a target area created by the intelligent terminal of this application in the previewed physical space interface.
FIG. 3c is a schematic diagram, in yet another embodiment, of a target area created by the intelligent terminal of this application in the previewed physical space interface.
FIG. 4 is a schematic diagram of a coordinate system established by the intelligent terminal of this application in a specific embodiment.
FIG. 5 is a schematic diagram of virtual buttons of the intelligent terminal of this application in one embodiment.
FIG. 6 is a schematic diagram of the network architecture for interaction among the intelligent terminal, the server, and the mobile robot of this application.
FIG. 7 is a schematic flowchart of the method for interaction with a mobile robot of this application in another embodiment.
FIG. 8 is a schematic structural diagram of the server of this application in one embodiment.
FIG. 9 is a schematic structural diagram of the mobile robot of this application in one embodiment.
FIG. 10 is a schematic flowchart of the interaction method of this application in yet another embodiment.
Detailed Description
The following specific embodiments illustrate the implementation of this application; those skilled in the art can easily understand other advantages and effects of this application from the content disclosed in this specification.
In the following description, reference is made to the accompanying drawings, which describe several embodiments of this application. It should be understood that other embodiments may also be used, and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present disclosure. The following detailed description should not be regarded as limiting, and the scope of the embodiments of this application is limited only by the claims of the issued patent. The terms used here are only for describing specific embodiments and are not intended to limit this application. Spatially relative terms, such as "upper", "lower", "left", "right", "below", "beneath", "above", and the like, may be used in the text to describe the relationship between one element or feature and another element or feature shown in the drawings.
Although the terms first, second, and so on are used herein in some instances to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first input may be referred to as a second input, and similarly, a second input may be referred to as a first input, without departing from the scope of the various described embodiments. The first input and the second input both describe an input, but unless the context clearly indicates otherwise, they are not the same input.
Furthermore, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It should be further understood that the terms "comprise" and "include" indicate the presence of the stated features, steps, operations, elements, components, items, types, and/or groups, but do not exclude the presence, occurrence, or addition of one or more other features, steps, operations, elements, components, items, types, and/or groups. The terms "or" and "and/or" used herein are interpreted as inclusive, meaning any one or any combination. Therefore, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
In practical applications, if a user wants a mobile robot to perform a predetermined operation based on a target area, a designated position is usually determined from the user's voice or gesture commands, and the mobile robot then determines the target area as a preset range centered on that designated position. However, the mobile robot cannot determine a precise preset range from the corresponding user commands; in particular, when the target area is irregular, the mobile robot cannot accurately determine the precise target area, and therefore cannot perform navigation movement or behavior control based on a precise target area. Taking a cleaning robot as an example, when the user wants the cleaning robot to clean around the dining table, the user usually issues a command containing "dining table", and the mobile robot typically just determines a target area as a preset range centered on the dining table and cleans it. If, in practice, the area the user wants the cleaning robot to clean is irregular, the user cannot accurately describe the target area with voice or gesture commands, and it is then difficult for the cleaning robot to effectively clean the correct target area.
To enable the mobile robot to perform navigation movement and behavior control based on a precise target area, the user can edit the map pre-built by the mobile robot so that the robot determines the coordinate information of the target area on its map. In practical applications, however, the map constructed by the mobile robot is not intuitive and is hard for the user to read, and the user cannot immediately determine where a target area in the actual physical space lies on the mobile robot's map.
To this end, this application provides an intelligent terminal, a control system, and a method for interaction with a mobile robot, which are used to create at least one target area on the intelligent terminal based on a user input, so that the intelligent terminal can generate, based on the at least one target area, an interactive instruction to be sent to the mobile robot.
Here, the mobile robot is a machine device that performs specific tasks automatically. It can accept human commands, run pre-programmed routines, or act according to principles formulated with artificial-intelligence technology. Such mobile robots can be used indoors or outdoors, in industrial, commercial, or household settings; they can replace security patrols, greeters or order takers, or people cleaning floors, and can also be used for home companionship, office assistance, and the like. The mobile robot is provided with at least one camera device for capturing images of its operating environment so as to perform VSLAM (Visual Simultaneous Localization and Mapping); based on the constructed map, the mobile robot can plan paths for patrolling, cleaning, tidying, and other tasks. Typically, the mobile robot caches the map built during its operation in local storage, uploads it to a server or the cloud for storage, or uploads it to the user's intelligent terminal for storage.
The method for interaction with a mobile robot is applied to an intelligent terminal including at least a display device. Please refer to FIG. 1, which is a schematic structural diagram of the intelligent terminal of this application in one embodiment. As shown, the intelligent terminal includes a display device 11, a storage device 12, an interface device 13, a processing device 14, a camera device (not shown), and so on. For example, the intelligent terminal may be a smartphone, AR glasses, a tablet computer, or a similar device.
The display device 11 is a human-machine interface device for providing a physical space interface for the user to preview. The display device 11 can convert the coordinate information of the intelligent terminal's map or various other data into electronic content such as text, numbers, symbols, or intuitive images for display. The input device or the motion sensing device can be used to feed user input or data into the intelligent terminal, and the displayed content can be added to, deleted, or changed at any time with the help of the processing device 14 of the intelligent terminal. Depending on the display component, display devices 11 can be of different types such as plasma, liquid crystal, light-emitting diode, and cathode-ray tube. The display device 11 of this application provides the user with a physical space interface for viewing and using the electronic content of the intelligent terminal. The physical space interface of the display device 11 shows the user images, captured by the camera device of the intelligent terminal, that correspond to the actual physical space, and presents an intuitive view of the physical space by calling up the images of the actual physical space captured by the camera device. For example, if the camera device of the intelligent terminal is capturing a pile of rice scattered on the kitchen floor, the physical space interface displays an image including the scattered rice and the kitchen floor. The physical space is the actual space in which the mobile robot works. For example, if the mobile robot is a cleaning robot, the physical space may be the living or working space of the user that the cleaning robot needs to clean.
The storage device 12 is used to store at least one program, which can be executed by the processing device to perform the interaction method described in this application. The storage device 12 also stores the coordinate information in the robot coordinate system of the mobile robot.
Here, the storage device 12 includes, but is not limited to, read-only memory (ROM), random access memory (RAM), and non-volatile memory (NVRAM). For example, the storage device 12 includes a flash memory device or another non-volatile solid-state storage device. In some embodiments, the storage device 12 may also include memory remote from the one or more processing devices, for example network-attached storage accessed via an RF circuit or an external port and a communication network, where the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), or an appropriate combination thereof. The storage device 12 also includes a memory controller, which can control access to the memory by components of the intelligent terminal such as the central processing unit (CPU) and the interface device 13.
The interface device 13 is used to communicate and interact with a mobile robot or a server. For example, the interface device 13 can send the interactive instruction generated by the intelligent terminal to the server or to the mobile robot. As another example, the interface device 13 sends, to the mobile robot or the server, a request for the coordinate information in the mobile robot's coordinate system. The interface device 13 includes a network interface, a data-line interface, and the like. The network interface includes, but is not limited to, an Ethernet network interface device, a network interface device based on a mobile network (3G, 4G, 5G, etc.), a network interface device based on short-range communication (WiFi, Bluetooth, etc.), and so on. The data-line interface includes, but is not limited to, a USB interface, RS232, and the like. The interface device 13 has data connections with the display device 11, the storage device 12, the processing device 14, the Internet, a mobile robot located in a physical space, a server, and so on.
The processing device 14 is connected to the display device 11, the storage device 12, and the interface device 13, and is used to execute the at least one program so as to coordinate the display device 11, the storage device 12, and the interface device 13 to perform the interaction method described in this application. The processing device 14 includes one or more processors. The processing device 14 is operable to perform data read and write operations with the storage device 12, and performs operations such as extracting images, temporarily storing features, and locating in the map based on the features. The processing device 14 includes one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), or any combination thereof. The processing device 14 is also operatively coupled with an input device that enables the user to interact with the intelligent terminal; the input device may therefore include buttons, a keyboard, a mouse, a touchpad, and so on.
The camera device is used to capture images of the actual physical space in real time, and includes, but is not limited to, a monocular camera device, a binocular camera device, a multi-camera device, a depth camera device, and the like.
Please refer to FIG. 2, which is a schematic flowchart of the method for interaction with a mobile robot of this application in one embodiment. In this embodiment, the interaction method is used for interaction between an intelligent terminal and a mobile robot, where the intelligent terminal has a display device.
In step S110, a user input is detected while the display device previews the physical space interface. The state of previewing the physical space interface means that the physical space interface of the display device can display, in real time, the images of the actual physical space captured by the camera device of the intelligent terminal for the user to view and use. While the display device is previewing the physical space interface, the user can view the images captured by the intelligent terminal in real time, so that the user can relate the intuitive images shown in the physical space interface to areas and positions in the actual physical space. Taking a smartphone as the intelligent terminal as an example, after the user opens the phone's AR application, the AR application interface can display in real time the images of the actual physical space captured by the phone, and the user can immediately relate the displayed images to areas of the actual physical space. Before step S110 is executed, the method further includes a step in which the intelligent terminal constructs the terminal coordinate system while previewing the physical space interface, so as to respond to the detected input once construction of the terminal coordinate system is complete. In this embodiment, the intelligent terminal first constructs, while previewing the physical space interface, a map corresponding to the actual physical space and stores the coordinate information corresponding to the map. The terminal coordinate system is constructed in order to describe the coordinate information corresponding to the intelligent terminal's map. The coordinate information includes positioning features, the coordinates of the positioning features in the map, and so on, where the positioning features include, but are not limited to, feature points, feature lines, and the like.
In one embodiment, the camera device of the intelligent terminal continuously captures images of the actual physical space while the intelligent terminal moves, and the intelligent terminal constructs the map based on the captured images of the actual physical space and the positions of the intelligent terminal during its movement. The map constructed by the intelligent terminal is used to describe the positions and extents, on the map, of objects in the actual physical space. Moreover, the actual physical space corresponding to the map constructed by the intelligent terminal and that corresponding to the map constructed by the mobile robot need to overlap in order to perform steps S110 to S130. For example, if the mobile robot's map corresponds to multiple positioning features, at least one of the positioning features corresponding to the intelligent terminal's map should be the same as a positioning feature corresponding to the mobile robot's map.
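Positioning features of the kind just described (feature points plus descriptors) can be extracted from the preview frames with a standard detector; the sketch below uses OpenCV's ORB detector as one common, illustrative choice rather than anything this application prescribes.

```python
import cv2

def extract_positioning_features(frame_bgr):
    """Extract feature points and descriptors from one preview frame.

    The keypoints can serve as positioning features of the terminal map; those also
    observed in the robot's map can act as consensus elements shared by the two maps.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```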
Step S110 further includes a step of prompting the user to start the input operation after the intelligent terminal has finished constructing the map.
In one embodiment, the physical space interface is used to prompt the user to perform an input operation. For example, the display device of the intelligent terminal shows text such as "please perform an input operation", or displays a preset graphic, to indicate that the intelligent terminal has finished constructing the map and the user may begin input.
In another embodiment, sound is used to prompt the user to perform an input operation. For example, the audio device of the intelligent terminal plays a prompt such as "please perform an input operation", or plays a preset piece of music or a sound, to prompt the user to perform the input operation.
In yet another embodiment, vibration is used to prompt the user to perform an input operation. For example, the vibration device of the intelligent terminal may vibrate to prompt the user to perform the input operation. The input includes, but is not limited to, a first input, a second input, and so on; the first input and the second input are described in detail later.
In one embodiment, step S110 includes: displaying, in the physical space interface previewed by the display device, the video stream captured in real time by the camera device of the intelligent terminal, and detecting, with the input device of the intelligent terminal, the user's input in the physical space interface.
The video stream is the sequence of image frames captured continuously and in real time by the camera device; it can be acquired by moving the intelligent terminal, for the display device to show continuously and in real time under the previewed physical space interface. For example, when a user photographs a scene with a phone's camera application, the phone's screen continuously displays, in the preview interface, the scene images captured by the camera device in real time, so that the user can adjust the shooting angle based on the captured video stream before taking the picture.
The input device is a device that can detect and sense the user's input in the physical space interface, for example the touch screen of the intelligent terminal, or its keys and buttons. The detected input includes at least one of the following: a sliding input operation and a tap input operation. The detected input corresponds to the input device used. Based on the detected input, the processing device of the intelligent terminal can determine the position or area, on the map constructed by the intelligent terminal, to which the input operation corresponds.
In a specific embodiment, the input device is a touch screen, and the user input detected by the touch screen may be a sliding operation or a tap operation. For example, when the user slides continuously on the touch screen, the display device can continuously detect and sense the sliding track corresponding to the user's sliding operation, and the processing device of the intelligent terminal can determine the position on the intelligent terminal's map to which the sliding track corresponds.
As another example, the user taps on the touch screen, and based on the multiple tapped positions on the touch screen, the processing device of the intelligent terminal can determine the positions on the intelligent terminal's map to which those tapped positions correspond. In another specific embodiment, the input device is a button, and the tap operation on the touch screen can be converted into a press of the button. For example, a target point is shown in the video stream displayed by the display device of the intelligent terminal at a fixed position of the display device, which may be the center or some other position. By moving the intelligent terminal, the target point is made to correspond to different positions in the actual physical space, and the user can then provide input by pressing the button.
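Relating a tapped screen position to a point on the terminal map is not spelled out in this application; one common approach, assumed in the sketch below, is to cast a ray from the tracked camera pose through the tapped pixel and intersect it with the floor plane of the terminal coordinate system.

```python
import numpy as np

def screen_tap_to_map_point(u, v, K, R_wc, t_wc, floor_z=0.0):
    """Project a tapped pixel (u, v) onto the floor plane of the terminal map.

    K: 3x3 camera intrinsic matrix; R_wc, t_wc: camera-to-world rotation and translation
    tracked by the terminal while previewing. Returns the (x, y) map coordinate on z = floor_z.
    Assumes the cast ray is not parallel to the floor plane.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in the camera frame
    ray_world = R_wc @ ray_cam                            # ray direction in the terminal map frame
    origin = t_wc                                         # camera center in the terminal map frame
    s = (floor_z - origin[2]) / ray_world[2]              # scale at which the ray meets the floor
    hit = origin + s * ray_world
    return float(hit[0]), float(hit[1])
```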
In another embodiment, step S110 includes: displaying, in the physical space interface previewed by the display device, the video stream captured in real time by the camera device of the intelligent terminal, and detecting the motion sensing device in the intelligent terminal to obtain the user's input. The motion sensing device can detect and record the position and orientation of the intelligent terminal and the positions it moves to, and the processing device of the intelligent terminal can determine the corresponding positions on the intelligent terminal's map. The motion sensing device includes, but is not limited to, sensing devices such as an accelerometer and a gyroscope. Guided by the video stream shown on the display device, the user can move the intelligent terminal so that its movement track in the physical space constitutes a user input.
The processing device of the intelligent terminal may perform step S120 based on the user input detected while the display device previews the physical space interface.
In step S120, in response to the detected input, at least one target area is created in the previewed physical space interface. The target area includes coordinate information in the terminal coordinate system of the intelligent terminal, and this coordinate information has a correspondence with the coordinate information in the robot coordinate system of the mobile robot.
Here, the processing device responds in real time to the input detected by the input device or by the motion sensing device in order to create at least one target area in the previewed physical space interface. The at least one target area is created by the user's input operation.
In one embodiment, the input operation is a tap operation on a touch screen, which may be a touch screen that senses the tap position based on a change in capacitance or one that senses it based on a change in resistance. A tap on the touch screen at a given moment causes a change in the capacitance or resistance of the touch screen, and either kind of change enables the processing device of the intelligent terminal to relate the tap position to a position in the image, corresponding to that moment, of the video stream shown in the previewed physical space interface. Based on multiple tap positions in the video stream images, at least one target area can be created in the previewed physical space interface, and the processing device can relate the at least one target area created in the previewed physical space interface to the map constructed by the intelligent terminal. At least one target area can be created in the previewed physical space interface based on preset rules and the multiple tap positions in the video stream images; the following takes the creation of one target area as an example.
In a specific embodiment, the preset rule is to connect the tap positions in sequence with connecting lines to form a target area, where the connecting lines may be straight lines or curves. In another specific embodiment, the preset rule is to form a target area based on a circumscribing figure of the figure formed by connecting the multiple tap positions with connecting lines, where the circumscribing figure includes, but is not limited to, a rectangle, a circumscribed circle, a circumscribing polygon, or an irregular figure. In yet another specific embodiment, the preset rule is to form a target area based on an inscribed figure of the figure formed by connecting the multiple tap positions with connecting lines, where the inscribed figure includes, but is not limited to, a rectangle, an inscribed circle, an inscribed polygon, or an irregular figure. It should be noted that the preset rule used for the target area created by the intelligent terminal can be changed based on the user's selection, or the same preset rule can be applied to every tap operation. The user's selection of the preset rule can be made either before or after the tap operation.
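The sketch below illustrates two of the preset rules just described, turning tap positions already mapped into terminal-map coordinates into a target area; the circumscribing circle here is simply centered at the centroid of the taps rather than being the minimal enclosing circle, and the other figures mentioned above can be substituted in the same way.

```python
import math

def region_as_polygon(tap_points):
    """Rule 1: connect the tap positions in sequence with straight lines to form the region."""
    return list(tap_points)  # the ordered vertices are the region boundary

def region_as_bounding_circle(tap_points):
    """Rule 2 (one circumscribing figure): a circle containing all tap positions."""
    cx = sum(p[0] for p in tap_points) / len(tap_points)
    cy = sum(p[1] for p in tap_points) / len(tap_points)
    radius = max(math.hypot(p[0] - cx, p[1] - cy) for p in tap_points)
    return (cx, cy), radius
```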
Taking a cleaning robot as an example of the mobile robot, suppose the user wants the cleaning robot to enter an area where garbage is scattered and perform cleaning work. The user performs tap operations on the touch screen of the smart terminal so that the smart terminal creates a target area. For example, please refer to FIG. 3a, which is a schematic diagram of a target area created by the smart terminal of this application in the previewed physical space interface in one embodiment. The preset rule selected by the user is to form the target area based on the circumscribed circle of the figure formed by connecting the multiple click positions with connecting lines. Based on the position of the garbage-scattered area in the image, the user taps multiple times around that area on the touch screen with a finger or a stylus, and the processing device of the smart terminal creates a circular target area based on the user's taps and the preset rule selected by the user. Please refer to FIG. 3b, which is a schematic diagram of a target area created by the smart terminal of this application in the previewed physical space interface in another embodiment. The preset rule selected by the user is to form the target area based on the circumscribed rectangle of the figure formed by connecting the multiple click positions with connecting lines. Based on the position of the garbage-scattered area in the image, the user taps multiple times around that area on the touch screen with a finger or a stylus, and the processing device of the smart terminal creates a rectangular target area based on the user's taps and the preset rule selected by the user. For another example, please refer to FIG. 3c, which is a schematic diagram of a target area created by the smart terminal of this application in the previewed physical space interface in yet another embodiment. The preset rule selected by the user is to connect the click positions in sequence with curves to form an irregular target area. Based on the position of the garbage-scattered area in the image, the user taps multiple times around that area on the touch screen with a finger or a stylus, and the processing device of the smart terminal creates an irregular target area based on the user's taps and the preset rule selected by the user. In another embodiment, the input operation is a sliding operation on the touch screen. The user's continuous sliding on the touch screen causes a change in the capacitance or resistance of the touch screen, and either change enables the processing device of the smart terminal to map the user's sliding position at each moment on the touch screen to the corresponding position in the image displayed in the previewed physical space interface at that moment. Based on the graphic positions corresponding to the continuous sliding operation, at least one target area can be created in the previewed physical space interface. The smart terminal can create target areas of different shapes based on different continuous sliding operations of the user.
In some embodiments, as shown in FIG. 3a, FIG. 3b, and FIG. 3c, after the smart terminal has created the target area, the user may confirm the target area and then tap the "Confirm" virtual button on the touch screen to make the smart terminal execute step S130, or tap the "Modify" virtual button on the touch screen to perform the input operation again. In other embodiments, the user may also issue a preset confirmation voice instruction indicating that the target area has been confirmed, so that the smart terminal executes step S130; for example, the user issues the voice instruction "confirm". Alternatively, the user issues a preset modification voice instruction to perform the input operation again; for example, the user issues the voice instruction "modify".
When the user creates multiple target areas in the previewed physical space interface, taking the case where the user's input is a click operation as an example, the smart terminal may use a preset time interval to create the multiple target areas. For example, once the preset time interval has elapsed, the next click operation on the input device is regarded as intended to create a new target area; before the time interval elapses, the smart terminal may use sound, vibration, or the physical space interface to prompt the user to complete the input operation for the current target area as soon as possible.
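As an illustrative sketch only, clicks could be grouped into separate target areas by comparing timestamps; the 2-second gap and the function name below are assumptions for the example and are not values fixed by this application.

```python
from typing import List, Tuple

Click = Tuple[float, Tuple[float, float]]  # (timestamp in seconds, (x, y))

def group_clicks_into_regions(clicks: List[Click],
                              max_gap_s: float = 2.0) -> List[List[Tuple[float, float]]]:
    """Split a time-ordered list of clicks into groups, starting a new group
    whenever the pause between consecutive clicks exceeds max_gap_s.
    Each resulting group is then turned into one target area."""
    groups, current = [], []
    last_t = None
    for t, point in clicks:
        if last_t is not None and (t - last_t) > max_gap_s:
            groups.append(current)
            current = []
        current.append(point)
        last_t = t
    if current:
        groups.append(current)
    return groups
```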
When the user creates multiple target areas in the previewed physical space interface, the smart terminal may sort the multiple target areas based on the times at which they were created to generate multiple ordered target areas, so that the interactive instruction is generated based on the multiple ordered target areas and the mobile robot can perform the related operations based on the sorted target areas. For example, based on the user's input, the target area created first is ranked as the first target area. The smart terminal may also generate multiple ordered target areas based on a user-defined order, so that the smart terminal generates the interactive instruction based on the multiple ordered target areas and the mobile robot can perform the related operations based on the sorted target areas. For example, the user sorts the multiple target areas based on how urgently each of them needs to be cleaned.
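A minimal sketch of the two ordering options, assuming a hypothetical TargetArea record with a creation timestamp and an optional user-assigned urgency rank (neither name is defined by this application):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TargetArea:
    area_id: int
    created_at: float                    # creation timestamp in seconds
    user_priority: Optional[int] = None  # smaller value = more urgent, if given

def order_target_areas(areas: List[TargetArea],
                       by_user_priority: bool = False) -> List[TargetArea]:
    """Order target areas by creation time (default) or by a user-defined
    urgency ranking when one has been supplied."""
    if by_user_priority:
        return sorted(areas, key=lambda a: a.user_priority
                      if a.user_priority is not None else float("inf"))
    return sorted(areas, key=lambda a: a.created_at)
```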
In an embodiment, step S120 further includes step S121 (not shown) and step S122 (not shown). In step S121, the processing device of the smart terminal determines the correspondence based on the coordinate information, in the robot coordinate system and in the terminal coordinate system respectively, of consensus elements extracted from the previewed physical space interface. A consensus element is an element that enables the smart terminal, the mobile robot, or the server to determine the correspondence between coordinate information in the two coordinate systems once the coordinate information in the robot coordinate system and the coordinate information in the terminal coordinate system have been obtained. The consensus elements include, but are not limited to: positioning features shared by the mobile robot and the smart terminal, images containing objects corresponding to positioning features of the mobile robot's map, and the like.
The coordinate information in the robot coordinate system may be stored in the smart terminal on a long-term basis, or may be obtained from the mobile robot or the server when the interaction method is executed.
The robot coordinate system of the mobile robot is used to describe the coordinate information corresponding to the mobile robot's map. The coordinate information includes positioning features, the coordinates of the positioning features in the map, and the like. From the coordinates of a positioning feature in the map, the position in the map of the object in the actual physical space described by that positioning feature can be determined. The positioning features include, but are not limited to, feature points, feature lines, and the like. The positioning features are, for example, described by descriptors. For example, based on the SIFT (Scale-Invariant Feature Transform) algorithm, positioning features are extracted from multiple images, and a sequence of gray values describing a positioning feature is obtained from the image blocks containing that positioning feature in the multiple images; this gray-value sequence serves as the descriptor. As another example, the descriptor describes a positioning feature by encoding the brightness information around it: a number of points, for example but not limited to 256 or 512, are sampled in a circle around the positioning feature, the sampled points are compared in pairs to obtain the brightness relationships between them, and the brightness relationships are converted into a binary string or another encoding format.
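The pairwise-brightness encoding described above can be sketched as follows. This is a simplified, BRIEF-style illustration; the sampling pattern, patch size, and function name are assumptions for the example rather than details fixed by this application.

```python
import numpy as np

def binary_descriptor(patch: np.ndarray, pairs: np.ndarray) -> np.ndarray:
    """Encode the brightness around a feature as a bit string: for each
    pre-chosen pair of sampling offsets (relative to the patch centre),
    emit 1 if the first sample is brighter than the second, else 0.

    patch: grayscale image patch centred on the positioning feature.
    pairs: array of shape (N, 2, 2) holding N offset pairs (dy, dx),
           e.g. N = 256 or 512.
    """
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    bits = np.empty(len(pairs), dtype=np.uint8)
    for i, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        bits[i] = 1 if patch[cy + dy1, cx + dx1] > patch[cy + dy2, cx + dx2] else 0
    return bits  # descriptors of this form are compared with Hamming distance

# usage sketch: the offset pairs are sampled once (here at random) and reused
rng = np.random.default_rng(0)
pairs = rng.integers(-8, 9, size=(256, 2, 2))
patch = rng.integers(0, 256, size=(17, 17)).astype(np.uint8)
descriptor = binary_descriptor(patch, pairs)
```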
A shared positioning feature is a positioning feature both of the map built by the smart terminal in the terminal coordinate system and of the map built by the mobile robot in the robot coordinate system. When building its map, the smart terminal extracts, from the video stream displayed in the previewed physical space interface, multiple positioning features describing objects in the actual physical space, and determines the coordinates of these positioning features in the smart terminal's coordinate system. For example, if the positioning features of the map built by the smart terminal in the terminal coordinate system include the positioning features corresponding to the legs of a dining table, and the positioning features of the map built by the mobile robot in the robot coordinate system also include the positioning features corresponding to those table legs, then, based on the coordinates of the table-leg positioning features in the robot coordinate system and in the terminal coordinate system, the processing device of the smart terminal can determine the correspondence between the coordinates of the table-leg positioning features in the two coordinate systems, and can thereby determine the correspondence between all coordinates in the terminal coordinate system of the smart terminal and all coordinates in the robot coordinate system of the mobile robot. After the correspondence is obtained, step S122 can be executed.
An image containing an object corresponding to a positioning feature of the mobile robot's map means that the processing device of the smart terminal has obtained the video stream captured by the smart terminal, and the positioning feature of an object in the actual physical space corresponding to at least one frame of the video stream is a positioning feature of the robot's map. For example, if a positioning feature of the mobile robot's map corresponds to a chair in the actual physical space, the video stream contains an image of the chair.
The processing device of the smart terminal obtains the coordinate information of the robot coordinate system of the mobile robot and at least one frame of the video stream. Using an image matching algorithm, the processing device matches the positioning features in the at least one frame against the map of the physical space, the positioning features, and the coordinate information pre-built by the mobile robot, thereby determining the positioning features in the image that match those in the mobile robot's map. Here, in some examples, the smart terminal is pre-configured with the same extraction algorithm the mobile robot uses to extract positioning features from images, and extracts candidate positioning features from the image based on that algorithm. The extraction algorithm includes, but is not limited to, extraction algorithms based on at least one of texture, shape, and spatial relationship. Examples of texture-based extraction algorithms include at least one of gray-level co-occurrence matrix texture analysis, the checkerboard feature method, and the random field model method; examples of shape-based extraction algorithms include at least one of the Fourier shape description method and quantitative shape measurement; an example of an extraction algorithm based on spatial relationship features uses the mutual spatial positions or relative direction relationships between multiple image blocks segmented from the image, including but not limited to connection/adjacency relationships, overlap relationships, and inclusion/containment relationships. The smart terminal uses image matching technology to match the candidate positioning features fs1 in the image with the positioning features fs2 corresponding to the mobile robot's map, thereby obtaining the matched positioning features fs1'. Based on the coordinates of fs1' in the smart terminal's map and in the mobile robot's map, the smart terminal can determine the correspondence between the coordinates of the smart terminal's map and those of the mobile robot's map. After the correspondence is obtained, step S122 can be executed.
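Once matched pairs fs1' are available in both maps, the correspondence between the two coordinate systems can, for instance, be estimated as a 2-D similarity transform. The following NumPy sketch is one concrete realization assumed for illustration (an Umeyama-style least-squares fit), not the specific method mandated by this application.

```python
import numpy as np

def estimate_similarity_transform(pts_terminal: np.ndarray, pts_robot: np.ndarray):
    """Estimate scale s, rotation R, and translation t mapping terminal-map
    coordinates of matched features (fs1') onto their robot-map coordinates,
    i.e. robot ~= s * R @ terminal + t, in the least-squares sense.

    pts_terminal, pts_robot: arrays of shape (N, 2) with N >= 2 matched pairs.
    """
    mu_t, mu_r = pts_terminal.mean(axis=0), pts_robot.mean(axis=0)
    a, b = pts_terminal - mu_t, pts_robot - mu_r
    cov = b.T @ a / len(a)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # keep a proper rotation
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) * len(a) / (a ** 2).sum()
    t = mu_r - s * R @ mu_t
    return s, R, t

def terminal_to_robot(point, s, R, t):
    """Map one terminal-coordinate point (e.g. a target-area vertex) into the
    robot coordinate system using the estimated correspondence."""
    return s * R @ np.asarray(point) + t

# hypothetical matched table-leg features: a 90-degree rotation plus translation
pts_t = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
pts_r = np.array([[2.0, 1.0], [2.0, 2.0], [0.0, 1.0]])
s, R, t = estimate_similarity_transform(pts_t, pts_r)
print(terminal_to_robot((1.0, 0.0), s, R, t))  # -> approximately [2.0, 2.0]
```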
In step S122, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot is determined based on the correspondence. Based on the correspondence between coordinate information in the robot coordinate system and in the terminal coordinate system, and the coordinate information of the target area in the terminal coordinate system of the smart terminal, the processing device of the smart terminal can determine the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
When the smart terminal acquires the video stream, the processing device may also determine the correspondence based on the consensus element of positioning features shared by the mobile robot and the smart terminal.
In order to reduce the amount of computation required for the processing device to determine the correspondence, in a specific embodiment, please refer to FIG. 4, which is a schematic diagram of the coordinate system established by the smart terminal of this application in a specific embodiment, as shown in the figure. When establishing its coordinate system, the smart terminal uses the coordinate point O'' of a positioning feature in the robot's map as the origin of the terminal coordinate system of the smart terminal. With the smart terminal's coordinate system established in this way and the coordinate information of the target area in the map built in this coordinate system, the coordinates of the at least one target area in the robot coordinate system of the mobile robot can be determined directly. For example, if point P in the smart terminal's coordinate system is a point in the target area, the vector O'P, i.e., the coordinates of point P in the mobile robot's coordinate system, can be determined from the vectors O'O'' and O''P, and the coordinates of the at least one target area in the robot coordinate system of the mobile robot can thereby be determined.
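A minimal sketch of the FIG. 4 relation, assuming (as the figure implies) that the two frames share the same axis directions and scale so that no rotation is needed; the coordinate values used are hypothetical.

```python
import numpy as np

def point_in_robot_frame(o_prime_o_dbl: np.ndarray, o_dbl_p: np.ndarray) -> np.ndarray:
    """With the terminal origin O'' placed on a positioning feature of the robot
    map, a target-area point P expressed as the vector O''P in the terminal
    frame maps into the robot frame (origin O') as O'P = O'O'' + O''P."""
    return o_prime_o_dbl + o_dbl_p

o_prime_o_dbl = np.array([2.5, 1.0])   # O'O'': robot-map position of the shared feature
o_dbl_p = np.array([0.8, -0.3])        # O''P: target-area point in the terminal frame
print(point_in_robot_frame(o_prime_o_dbl, o_dbl_p))  # -> O'P = [3.3, 0.7]
```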
In step S130, an interactive instruction is generated based on the at least one target area and sent to the mobile robot. The interactive instruction includes the at least one target area and the corresponding operation to be performed by the mobile robot, and is used to instruct the mobile robot to perform the corresponding operation in the target area or not to perform the corresponding operation in the target area.
In an embodiment, in the step of detecting the user's input while the display device is previewing the physical space interface, the detected user input is a first input, and the smart terminal creates the at least one target area based on the first input. In order to generate an interactive instruction based on the at least one target area, the interaction method further includes: detecting a second input of the user, the second input corresponding to the operation to be performed by the mobile robot, and generating an interactive instruction based on the target area and the second input to send to the mobile robot. The smart terminal may detect the second input before or after the first input.
The second input includes any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, and tidying or not tidying the items in the target area. For example, if the mobile robot is a cleaning robot and the target area corresponds to an area of the floor where garbage is scattered, the second input is to clean the target area; if the target area corresponds to an area containing obstacles, the second input is not to clean the target area. As another example, if the mobile robot is a patrol robot and the target area corresponds to an area the user needs to inspect, the second input is to enter the target area; if the target area corresponds to an area the user does not need to inspect, the second input is not to enter the target area. As a further example, if the mobile robot is a transport robot and the target area corresponds to an area where the user needs items to be tidied, the second input is to tidy the items in the target area; if the target area corresponds to an area where the user does not need items to be tidied, the second input is not to tidy the items in the target area.
The second input may be provided by voice or by tapping a virtual button. For example, if the user wants the mobile robot to enter the garbage-scattered area and perform cleaning work, the user performs the first input on the input device of the smart terminal so that the smart terminal creates a target area. Please refer to FIG. 5, which is a schematic diagram of the virtual buttons of the smart terminal of this application in an embodiment. As shown in FIG. 5, the user can complete the second input by tapping the "clean target area" virtual button in the menu bar of the smart terminal, so that the smart terminal generates the interactive instruction. The presentation form of the "clean target area" virtual button is not limited to text and may also be a graphic.
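For concreteness, an interactive instruction carrying the target area(s) together with the operation selected by the second input might be structured as in the sketch below. The field names, the JSON payload, and the Operation values are illustrative assumptions, since this application does not fix a wire format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple
import json

class Operation(Enum):
    CLEAN = "clean"
    DO_NOT_CLEAN = "do_not_clean"
    ENTER = "enter"
    DO_NOT_ENTER = "do_not_enter"
    TIDY = "tidy"
    DO_NOT_TIDY = "do_not_tidy"

@dataclass
class InteractiveInstruction:
    # target areas as vertex lists in robot-map coordinates, in the order the
    # robot should handle them
    target_areas: List[List[Tuple[float, float]]]
    operation: Operation

    def to_payload(self) -> str:
        """Serialize for sending to the robot, directly or via the server."""
        return json.dumps({
            "target_areas": self.target_areas,
            "operation": self.operation.value,
        })

# usage sketch: the "clean target area" button of FIG. 5 maps to Operation.CLEAN
instruction = InteractiveInstruction(
    target_areas=[[(3.3, 0.7), (4.1, 0.7), (4.1, 1.6), (3.3, 1.6)]],
    operation=Operation.CLEAN,
)
payload = instruction.to_payload()
```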
In another embodiment, the interactive instruction is related to the function of the mobile robot and can be generated without a second input from the user; in this embodiment the interactive instruction includes only the at least one target area. For example, if the mobile robot is a cleaning robot that performs cleaning work, the smart terminal sends the at least one target area to the cleaning robot, and the cleaning robot generates a navigation route and automatically cleans the target area based on the navigation route. As another example, if the mobile robot is a patrol robot that performs patrol work, the smart terminal sends the at least one target area to the patrol robot, and the patrol robot generates a navigation route and automatically enters the target area to perform patrol work based on the navigation route. As a further example, if the mobile robot is a transport robot that performs tidying and transport work, the smart terminal sends the at least one target area to the transport robot, and the transport robot generates a navigation route and automatically enters the target area to perform transport and tidying work based on the navigation route.
The interaction method with the mobile robot described above not only enables the user to provide precise input based on the intuitive video stream presented by the smart terminal, so that the smart terminal responds to the detected input by creating at least one precise target area in the previewed physical space interface, but also enables an interactive instruction to be generated and sent to the mobile robot based on the position of the at least one target area in the mobile robot's map. The mobile robot parses the interactive instruction to obtain the position of the at least one target area in the robot's map, and then performs the corresponding operation in the target area or does not perform the corresponding operation in the target area.
In an embodiment, the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the smart terminal, and the interaction method further includes step S210, step S220, and step S230. The coordinate information in the robot coordinate system may be stored in the smart terminal on a long-term basis, or may be obtained from the mobile robot or the server when the interaction method is executed.
Based on the coordinate information of the mobile robot's coordinate system and of the smart terminal's terminal coordinate system stored in the storage device, the processing device of the smart terminal executes step S210. In step S210, the processing device of the smart terminal determines the correspondence based on the coordinate information, in the robot coordinate system and in the terminal coordinate system respectively, of consensus elements extracted from the previewed physical space interface.
A consensus element is an element that enables the smart terminal, the mobile robot, or the server to determine the correspondence between coordinate information in the two coordinate systems once the coordinate information in the robot coordinate system and the coordinate information in the terminal coordinate system have been obtained. The consensus elements include, but are not limited to: positioning features shared by the mobile robot and the smart terminal, images containing objects corresponding to positioning features of the mobile robot's map, and the like.
Here, a shared positioning feature is a positioning feature both of the map built by the smart terminal in the terminal coordinate system and of the map built by the mobile robot in the robot coordinate system. When building its map, the smart terminal extracts, from the video stream displayed in the previewed physical space interface, multiple positioning features describing objects in the actual physical space, and determines the coordinates of these positioning features in the smart terminal's coordinate system. For example, if the positioning features of the map built by the smart terminal in the terminal coordinate system include the positioning features corresponding to the legs of a dining table, and the positioning features of the map built by the mobile robot in the robot coordinate system also include the positioning features corresponding to those table legs, then, based on the coordinates of the table-leg positioning features in the robot coordinate system and in the terminal coordinate system, the processing device of the smart terminal can determine the correspondence between the coordinates of the table-leg positioning features in the two coordinate systems, and can thereby determine the correspondence between all coordinates in the terminal coordinate system of the smart terminal and all coordinates in the robot coordinate system of the mobile robot. After the correspondence is obtained, step S220 can be executed. In step S220, using the method described in step S120 of creating at least one target area in the previewed physical space interface in response to the detected input, the coordinate information of the created target area in the terminal coordinate system of the smart terminal can be obtained. Based on the correspondence between coordinate information in the robot coordinate system and in the terminal coordinate system, and the coordinate information of the target area in the terminal coordinate system of the smart terminal, the processing device of the smart terminal can determine the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
An image containing an object corresponding to a positioning feature of the mobile robot's map means that the processing device of the smart terminal has obtained the video stream captured by the smart terminal, and the positioning feature of an object in the actual physical space corresponding to at least one frame of the video stream is a positioning feature of the robot's map. For example, if a positioning feature of the mobile robot's map corresponds to a chair in the actual physical space, the video stream contains an image of the chair.
The processing device of the smart terminal obtains the coordinate information of the robot coordinate system of the mobile robot and at least one frame of the video stream. Using an image matching algorithm, the processing device matches the positioning features in the at least one frame against the map of the physical space, the positioning features, and the coordinate information pre-built by the mobile robot, thereby determining the positioning features in the image that match those in the mobile robot's map. Here, in some examples, the smart terminal is pre-configured with the same extraction algorithm the mobile robot uses to extract positioning features from images, and extracts candidate positioning features from the image based on that algorithm. The extraction algorithm includes, but is not limited to, extraction algorithms based on at least one of texture, shape, and spatial relationship. Examples of texture-based extraction algorithms include at least one of gray-level co-occurrence matrix texture analysis, the checkerboard feature method, and the random field model method; examples of shape-based extraction algorithms include at least one of the Fourier shape description method and quantitative shape measurement; an example of an extraction algorithm based on spatial relationship features uses the mutual spatial positions or relative direction relationships between multiple image blocks segmented from the image, including but not limited to connection/adjacency relationships, overlap relationships, and inclusion/containment relationships. The smart terminal uses image matching technology to match the candidate positioning features fs1 in the image with the positioning features fs2 corresponding to the mobile robot's map, thereby obtaining the matched positioning features fs1'. Based on the coordinates of fs1' in the smart terminal's map and in the mobile robot's map, the smart terminal can determine the correspondence between the coordinates of the smart terminal's map and those of the mobile robot's map. After the correspondence is obtained, step S220 can be executed. In step S220, using the method described in step S120 of creating at least one target area in the previewed physical space interface in response to the detected input, the coordinate information of the created target area in the terminal coordinate system of the smart terminal can be obtained. Based on the correspondence between coordinate information in the robot coordinate system and in the terminal coordinate system, and the coordinate information of the target area in the terminal coordinate system of the smart terminal, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot is determined.
When the smart terminal acquires the video stream, the processing device may also determine the correspondence based on the consensus element of positioning features shared by the mobile robot and the smart terminal.
In order to reduce the amount of computation required for the processing device to determine the correspondence, in a specific embodiment, please refer to FIG. 4, which is a schematic diagram of the coordinate system established by the smart terminal of this application in a specific embodiment, as shown in the figure. When establishing its coordinate system, the smart terminal uses the coordinate point O'' of a positioning feature in the robot's map as the origin of the terminal coordinate system of the smart terminal. With the smart terminal's coordinate system established in this way and the coordinate information of the target area in the map built in this coordinate system, the coordinates of the at least one target area in the robot coordinate system of the mobile robot can be determined directly. For example, if point P in the smart terminal's coordinate system is a point in the target area, the vector O'P, i.e., the coordinates of point P in the mobile robot's coordinate system, can be determined from the vectors O'O'' and O''P, and the coordinates of the at least one target area in the robot coordinate system of the mobile robot can thereby be determined.
Based on the coordinate information of the at least one target area in the robot coordinate system of the mobile robot, the processing device executes step S230. In step S230, an interactive instruction containing the at least one target area described by coordinate information in the robot coordinate system is generated and sent to the mobile robot. The interactive instruction of step S230 includes the at least one target area and the corresponding operation to be performed by the mobile robot, and is used to instruct the mobile robot to perform the corresponding operation in the target area or not to perform the corresponding operation in the target area. The method of generating the interactive instruction and the corresponding description are the same as or similar to those in step S130 and are not repeated here.
Please refer to FIG. 6, which is a schematic diagram of the network architecture for interaction among the smart terminal 10, the server 20, and the mobile robot 30 of this application. The interactive instruction may be sent directly to the mobile robot 30 through the interface device of the smart terminal 10, or may be sent to the server 20 through the interface device and then forwarded by the server 20 to the mobile robot 30.
When the coordinate information in the robot coordinate system of the mobile robot is pre-stored in a cloud server to which the smart terminal is network-connected, or is pre-stored in a mobile robot to which the smart terminal is network-connected, the processing device of the smart terminal may also generate an interactive instruction containing the at least one target area and the consensus elements related to creating the at least one target area, and send it to the mobile robot directly or via the server. The consensus elements are used to determine the coordinate position of the at least one target area in the robot coordinate system. The consensus elements related to creating the at least one target area include, but are not limited to: positioning features shared by the mobile robot and the smart terminal, images containing objects corresponding to positioning features of the mobile robot's map, and the like.
Please refer to FIG. 7, which is a schematic flowchart of the method for interacting with a mobile robot of this application in another embodiment. In this embodiment, the coordinate information in the robot coordinate system of the mobile robot is pre-stored on a server to which the smart terminal is network-connected.
The server may be a single computer device, a service system based on a cloud architecture, a cloud server, or the like. The single computer device may be an independently configured computer device capable of executing the interaction method, and it may be located in a private machine room or in a rented rack position in a public machine room. The service system of the cloud architecture includes a public cloud server and a private cloud server, where the public or private cloud server includes Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), and the like. The private cloud server is, for example, the Alibaba Cloud computing service platform, the Amazon cloud computing service platform, the Baidu cloud computing platform, the Tencent cloud computing platform, and so on.
Please refer to FIG. 8, which is a schematic structural diagram of the server of this application in an embodiment. As shown in the figure, the server includes a storage device 21, an interface device 22, a processing device 23, and the like.
The storage device 21 is used to store at least one program, and the at least one program can be executed by the processing device 23 to carry out the interaction method described in the embodiment of FIG. 7. The storage device 21 also pre-stores the coordinate information in the robot coordinate system of the mobile robot; alternatively, the processing device 23 of the server may obtain the coordinate information of the robot coordinate system from the smart terminal or the mobile robot through the interface device 22 when executing the interaction method.
Here, the storage device 21 includes, but is not limited to: read-only memory (ROM), random access memory (RAM), and nonvolatile memory (NVRAM). For example, the storage device includes a flash memory device or another non-volatile solid-state storage device. In some embodiments, the storage device 21 may also include memory remote from the one or more processing devices, for example network-attached storage accessed via an RF circuit or an external port and a communication network, where the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), or the like, or an appropriate combination thereof. The storage device 21 also includes a memory controller, which can control access to the memory by components of the server such as the central processing unit (CPU) and the interface device 22.
The interface device 22 is used to assist the communication and interaction between a smart terminal and a mobile robot. For example, the interface device 22 may receive the interactive instruction generated by the smart terminal and send it to the mobile robot. As another example, the interface device 22 of the server sends, to the mobile robot or the smart terminal, an instruction to obtain coordinate information in the mobile robot's coordinate system. As a further example, the interface device 22 also obtains the video stream captured by the smart terminal and the second input of the smart terminal, and obtains from the smart terminal the consensus elements related to creating the at least one target area and sends them to the mobile robot. The interface device 22 includes a network interface, a data line interface, and the like. The network interface includes, but is not limited to: an Ethernet network interface device, a network interface device based on mobile networks (3G, 4G, 5G, etc.), a network interface device based on short-range communication (WiFi, Bluetooth, etc.), and the like. The data line interface includes, but is not limited to: a USB interface, RS232, and the like. The interface device 22 is in data connection with the storage device 21, the processing device 23, the Internet, a mobile robot located in a physical space, a smart terminal, and the like.
The processing device 23 is connected to the storage device 21 and the interface device 22 and is used to execute the at least one program so as to coordinate the storage device 21 and the interface device 22 in performing the interaction method described in FIG. 7. The processing device 23 includes one or more processors and is operable to perform data read and write operations with the storage device. The processing device 23 performs operations such as extracting images, temporarily storing features, and performing localization in the map based on the features. The processing device 23 includes one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field programmable gate arrays (FPGAs), or any combination thereof.
The processing device 23 of the server executes step S310 based on the coordinate information of the mobile robot's coordinate system stored in the storage device 21. In step S310, at least one target area from the smart terminal is obtained. The target area is obtained by the smart terminal detecting a user input; the target area includes coordinate information in the terminal coordinate system of the smart terminal, and this coordinate information has a correspondence with the coordinate information in the robot coordinate system of the mobile robot.
Here, the at least one target area is a target area, including coordinate information in the terminal coordinate system of the smart terminal, created by the processing device of the smart terminal in response to the user input detected while the display device is previewing the physical space interface. The processing device of the smart terminal can map the at least one target area created in the previewed physical space interface onto the map built by the smart terminal, and then determine the coordinate information of the at least one target area in the smart terminal's map. The manner of detecting the user input and creating at least one target area in response to the input is the same as or similar to that in the interaction method described in FIG. 2 and is not detailed again here.
Here, the processing device of the server obtains the coordinate information of the mobile robot's coordinate system and the coordinate information of the at least one target area in the smart terminal's map. Since the map built by the smart terminal according to its terminal coordinate system and the map built by the mobile robot according to the robot coordinate system correspond to overlapping portions of the actual physical space, the processing device of the server can obtain the coordinate information of the target area in the map built by the mobile robot based on the coordinate information of the target area in the map built by the smart terminal.
In step S320, an interactive instruction is generated based on the at least one target area and sent to the mobile robot. The interactive instruction includes the at least one target area and the corresponding operation to be performed by the mobile robot, and is used to instruct the mobile robot to perform the corresponding operation in the target area or not to perform the corresponding operation in the target area.
In a specific embodiment, in the step of detecting the user's input while the display device of the smart terminal is previewing the physical space interface, the detected user input is a first input, and the smart terminal creates the at least one target area based on the first input. In order for the processing device of the server to generate an interactive instruction based on the at least one target area, the interaction method further includes: the processing device further obtains a second input from the smart terminal through the interface device, and generates an interactive instruction based on the target area and the second input to send to the mobile robot. The second input corresponds to the operation to be performed by the mobile robot. The step of obtaining the second input may be performed before or after the first input.
The second input includes any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, and tidying or not tidying the items in the target area. For example, if the mobile robot is a cleaning robot and the target area corresponds to an area of the floor where garbage is scattered, the second input is to clean the target area; if the target area corresponds to an area containing obstacles, the second input is not to clean the target area. As another example, if the mobile robot is a patrol robot and the target area corresponds to an area the user needs to inspect, the second input is to enter the target area; if the target area corresponds to an area the user does not need to inspect, the second input is not to enter the target area. As a further example, if the mobile robot is a transport robot and the target area corresponds to an area where the user needs items to be tidied, the second input is to tidy the items in the target area; if the target area corresponds to an area where the user does not need items to be tidied, the second input is not to tidy the items in the target area.
In another specific embodiment, the interactive instruction is related to the function of the mobile robot and can be generated without a second input from the user; in this embodiment the interactive instruction includes only the at least one target area. For example, if the mobile robot is a cleaning robot that performs cleaning work, the smart terminal sends the at least one target area to the cleaning robot, and the cleaning robot generates a navigation route and automatically cleans the target area based on the navigation route. As another example, if the mobile robot is a patrol robot that performs patrol work, the smart terminal sends the at least one target area to the patrol robot, and the patrol robot generates a navigation route and automatically enters the target area to perform patrol work based on the navigation route. As a further example, if the mobile robot is a transport robot that performs tidying and transport work, the smart terminal sends the at least one target area to the transport robot, and the transport robot generates a navigation route and automatically enters the target area to perform transport and tidying work based on the navigation route.
In an embodiment, the processing device also obtains, through the interface device, the video stream captured by the smart terminal, and step S310 further includes step S311 and step S312. In step S311, the processing device determines the correspondence based on the coordinate information, in the robot coordinate system and in the terminal coordinate system respectively, of the consensus elements provided by the video stream.
The consensus elements include an image containing an object corresponding to a positioning feature of the mobile robot's map. For example, if a positioning feature of the mobile robot's map corresponds to a chair in the actual physical space, at least one frame of the video stream obtained by the server contains the chair. The processing device of the server obtains the coordinate information of the robot coordinate system of the mobile robot and at least one frame of the video stream. Using an image matching algorithm, the processing device matches the positioning features in the at least one frame against the map of the physical space, the positioning features, and the coordinate information pre-built by the mobile robot, thereby determining the positioning features in the image that match those in the mobile robot's map. Here, in some examples, the server is pre-configured with the same extraction algorithm the mobile robot uses to extract positioning features from images, and extracts candidate positioning features from the image based on that algorithm. The extraction algorithm includes, but is not limited to, extraction algorithms based on at least one of texture, shape, and spatial relationship. Examples of texture-based extraction algorithms include at least one of gray-level co-occurrence matrix texture analysis, the checkerboard feature method, and the random field model method; examples of shape-based extraction algorithms include at least one of the Fourier shape description method and quantitative shape measurement; an example of an extraction algorithm based on spatial relationship features uses the mutual spatial positions or relative direction relationships between multiple image blocks segmented from the image, including but not limited to connection/adjacency relationships, overlap relationships, and inclusion/containment relationships. The server uses image matching technology to match the candidate positioning features fs1 in the image with the positioning features fs2 corresponding to the mobile robot's map, thereby obtaining the matched positioning features fs1'. Based on the coordinates of fs1' in the smart terminal's map and in the mobile robot's map, the server can determine the correspondence between the coordinates of the smart terminal's map and those of the mobile robot's map. For example, once the processing device of the server has obtained the coordinates of the chair's positioning feature in the mobile robot's coordinate system and in the terminal coordinate system, the correspondence between any coordinate in the terminal coordinate system and the corresponding coordinate in the mobile robot's coordinate system can be obtained.
In step S312, the processing device of the server determines the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on the correspondence between coordinate information in the robot coordinate system and in the terminal coordinate system and the coordinate information of the target area in the terminal coordinate system of the smart terminal. For example, if the target area corresponds to the area of the actual physical space where a power strip lies, the server can determine the multiple coordinates of that target area in the mobile robot's map based on the multiple coordinates of the target area in the smart terminal's map and the correspondence.
Based on the coordinate information of the at least one target area in the robot coordinate system of the mobile robot obtained in step S312, the processing device of the server generates an interactive instruction containing the at least one target area described by coordinate information in the robot coordinate system, and sends it to the mobile robot through the interface device of the server. For example, the interactive instruction contains the area, described by coordinate information in the mobile robot's coordinate system, corresponding to the area of the actual physical space where the power strip lies, and the mobile robot can directly perform a preset operation, or perform the operation specified by the second input, based on that area. The interactive instruction is the same as or similar to that in step S320 and is not detailed again here.
In another embodiment, the server does not determine the coordinate position of the at least one target area in the robot coordinate system based on the video stream captured by the smart terminal and obtained by the processing device. Step S310 further includes step S313. In step S313, the processing device of the server obtains, from the smart terminal through the interface device, the consensus elements related to creating the at least one target area. The consensus elements are used to determine the coordinate position of the at least one target area in the robot coordinate system.
例如,所述共识要素为所述智能终端和所述移动机器人共有的定位特征。所述共有的定位特征既是智能终端在终端坐标系下构建的地图的定位特征也是移动机器人在机器人坐标系下构建的地图的定位特征。所述智能终端在构建地图时基于所预览的物理空间界面所显示的视频流中提取了多个用于描述实际物理空间中物体的定位特征。并确定了所述多个定位特征 在所述智能终端坐标系下的坐标。例如,智能终端在终端坐标系下构建的地图的定位特征包括餐桌腿所对应的定位特征,移动机器人在机器人坐标系下构建的地图的定位特征也包括餐桌腿所对应的定位特征,则所述服务端的处理装置基于餐桌腿所对应的定位特征在所述机器人坐标系下的坐标和终端坐标系下的坐标,可以确定餐桌腿所对应的定位特征在所述机器人坐标系下和终端坐标系下的坐标的对应关系,进而可以确定所述智能终端的终端坐标系下的所有坐标与所述移动机器人的机器人坐标系下的所有坐标的对应关系。For example, the consensus element is a positioning feature shared by the smart terminal and the mobile robot. The shared positioning feature is not only the positioning feature of the map constructed by the smart terminal in the terminal coordinate system, but also the positioning feature of the map constructed by the mobile robot in the robot coordinate system. The intelligent terminal extracts multiple positioning features for describing objects in the actual physical space based on the video stream displayed on the previewed physical space interface when constructing the map. And the coordinates of the multiple positioning features in the coordinate system of the smart terminal are determined. For example, the location feature of the map constructed by the smart terminal in the terminal coordinate system includes the location feature corresponding to the table leg, and the location feature of the map constructed by the mobile robot in the robot coordinate system also includes the location feature corresponding to the table leg. The processing device on the server side can determine that the positioning feature corresponding to the table leg is in the robot coordinate system and the terminal coordinate system based on the coordinates of the positioning feature corresponding to the table leg in the robot coordinate system and the coordinate in the terminal coordinate system. The corresponding relationship of the coordinates of the mobile robot can be determined to determine the corresponding relationship of all the coordinates in the terminal coordinate system of the smart terminal and all the coordinates in the robot coordinate system of the mobile robot.
所述服务端的处理装置基于所述对应关系确定所述至少一个目标区域在所述机器人坐标系中的坐标位置,生成包含所述至少一个目标区域和所述共识要素的交互指令,以通过所述接口装置发送至所述移动机器人。所述移动机器人基于获取的所述目标区域可以直接执行与所述至少一个目标区域相关的操作。所述移动机器人也可以通过服务端的接口装置或者智能终端的接口装置获取所述智能终端坐标系下的坐标信息,基于所述共识要素和所述坐标信息确定所述至少一个目标区域在所述机器人坐标系中的坐标位置进而基于所述目标区域执行相关操作。The processing device of the server determines the coordinate position of the at least one target area in the robot coordinate system based on the corresponding relationship, and generates an interactive instruction including the at least one target area and the consensus element to pass the The interface device is sent to the mobile robot. The mobile robot may directly perform operations related to the at least one target area based on the acquired target area. The mobile robot may also obtain the coordinate information in the coordinate system of the smart terminal through the interface device of the server or the interface device of the smart terminal, and determine that the at least one target area is in the robot based on the consensus element and the coordinate information. The coordinate position in the coordinate system then performs related operations based on the target area.
In the case where the smart terminal captures the video stream, the processing device of the server may likewise determine the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on the consensus element constituted by the positioning features shared by the mobile robot and the smart terminal.
请参阅图9,图9显示为本申请的移动机器人在一实施方式中的结构示意图。如图所示,所述移动机器人包括存储装置31、接口装置33、处理装置34、执行装置32等。Please refer to FIG. 9. FIG. 9 shows a schematic structural diagram of the mobile robot according to an embodiment of the present application. As shown in the figure, the mobile robot includes a storage device 31, an interface device 33, a processing device 34, an execution device 32, and the like.
所述移动机器人是自动执行特定工作的机器装置。它既可以接受人类指挥,又可以运行预先编排的程序,也可以根据以人工智能技术制定的原则纲领行动。这类移动机器人可用在室内或室外,可用于工业、商业或家庭,可用于取代保安巡视、取代迎宾员或点餐员、或取代人们清洁地面,还可用于家庭陪伴、辅助办公等。所述移动机器人设置至少一个摄像装置,用于摄取移动机器人的操作环境的图像,从而进行VSLAM(Visual Simultaneous Localization and Mapping,视觉同时定位与地图构建);根据构建的地图,移动机器人能够进行巡视、清洁、整理等工作的路径规划。通常,移动机器人将自身运行工作期间构建的地图缓存在本地存储装置,或者上传至服务端或云端进行存储,也可以上传至用户的智能终端进行存储。The mobile robot is a machine device that automatically performs specific tasks. It can accept human commands, run pre-arranged programs, or act according to principles and programs formulated with artificial intelligence technology. This type of mobile robot can be used indoors or outdoors. It can be used in industry, commerce or households. It can be used to replace security patrols, to replace greeters or orderers, or to replace people to clean the ground. It can also be used for family accompaniment, auxiliary office, etc. The mobile robot is provided with at least one camera device for capturing images of the operating environment of the mobile robot, so as to perform VSLAM (Visual Simultaneous Localization and Mapping, visual simultaneous positioning and map construction); according to the constructed map, the mobile robot can perform inspections, Path planning for cleaning and tidying up. Generally, the mobile robot caches the map built during its operation in a local storage device, or uploads it to the server or the cloud for storage, or uploads it to the user's smart terminal for storage.
按照所述移动机器人的功能分类,所述移动机器人包括但不限于:清洁机器人、巡视机器人、搬运机器人。所述清洁机器人是用于执行清洁、清扫操作的移动机器人。所述巡视机器人是用于执行监控操作的移动机器人。所述搬运机器人是执行搬运、整理操作的移动机器人。According to the functional classification of the mobile robot, the mobile robot includes, but is not limited to: a cleaning robot, a patrol robot, and a handling robot. The cleaning robot is a mobile robot for performing cleaning and cleaning operations. The patrol robot is a mobile robot for performing monitoring operations. The handling robot is a mobile robot that performs handling and sorting operations.
所述执行装置32用于受控执行相应操作,其与所述移动机器人的种类相对应。例如,所 述机器人为清洁机器人,所述执行装置32包括用于执行清洁、清扫操作的清洁装置和用于执行导航移动操作的移动装置。所述清洁装置包括但不限于:边刷、滚刷、风机等。所述移动装置包括但不限于:行走机构和驱动机构。其中,所述行走机构可设置于清洁机器人的底部,所述驱动机构内置于所述清洁机器人的壳体内。又如,所述移动机器人为搬运机器人,所述执行装置32包括用于执行搬运、整理操作的搬运装置和用于执行导航移动操作的移动装置。所述搬运装置包括但不限于:机械手、机械臂、电机等。所述移动装置包括但不限于:行走机构和驱动机构。其中,所述行走机构可设置于搬运机器人的底部,所述驱动机构内置于所述搬运机器人的壳体内。再如,所述移动机器人为巡视机器人,所述执行装置32包括用于执行监控的摄像装置和用于执行导航移动操作的移动装置。所述摄像装置包括但不限于:彩色摄像装置、灰度摄像装置、红外摄像装置等,所述移动装置包括但不限于:行走机构和驱动机构。其中,所述行走机构可设置于巡视机器人的底部,所述驱动机构内置于所述巡视机器人的壳体内。The execution device 32 is used for controlled execution of corresponding operations, which corresponds to the type of the mobile robot. For example, the robot is a cleaning robot, and the execution device 32 includes a cleaning device for performing cleaning and cleaning operations, and a moving device for performing navigation and movement operations. The cleaning device includes, but is not limited to: side brushes, rolling brushes, fans and the like. The moving device includes, but is not limited to: a walking mechanism and a driving mechanism. Wherein, the walking mechanism may be arranged at the bottom of the cleaning robot, and the driving mechanism is built in the housing of the cleaning robot. For another example, the mobile robot is a transport robot, and the execution device 32 includes a transport device for carrying and sorting operations and a mobile device for performing navigation and movement operations. The conveying device includes but is not limited to: a manipulator, a manipulator, a motor, and the like. The moving device includes, but is not limited to: a walking mechanism and a driving mechanism. Wherein, the walking mechanism may be arranged at the bottom of the handling robot, and the driving mechanism is built in the housing of the handling robot. For another example, the mobile robot is a patrol robot, and the execution device 32 includes a camera device for performing monitoring and a mobile device for performing navigation movement operations. The camera device includes but is not limited to: a color camera device, a grayscale camera device, an infrared camera device, etc., and the mobile device includes, but is not limited to, a walking mechanism and a driving mechanism. Wherein, the walking mechanism may be arranged at the bottom of the patrol robot, and the driving mechanism is built in the housing of the patrol robot.
所述存储装置31用于存储至少一个程序以及存储有预先构建的机器人坐标系。其中,所述至少一种程序可供所述处理装置执行图10实施例中所述的交互方法。The storage device 31 is used to store at least one program and a pre-built robot coordinate system. Wherein, the at least one program can be used by the processing device to execute the interaction method described in the embodiment of FIG. 10.
在此,存储装置31包括但不限于:只读存储器(Read-Only Memory,简称ROM)、随机存取存储器(Random Access Memory,简称RAM)、非易失性存储器(Nonvolatile RAM,简称NVRAM)。例如存储装置31包括闪存设备或其他非易失性固态存储设备。在某些实施例中,存储装置31还可以包括远离一个或多个处理装置的存储器,例如,经由RF电路或外部端口以及通信网络访问的网络附加存储器,其中所述通信网络可以是因特网、一个或多个内部网、局域网(LAN)、广域网(WLAN)、存储局域网(SAN)等,或其适当组合。存储装置31还包括存储器控制器,其可控制移动机器人的诸如中央处理器(CPU)和接口装置33之类或其他组件对存储器的访问控制。Here, the storage device 31 includes, but is not limited to: Read-Only Memory (Read-Only Memory, ROM for short), Random Access Memory (RAM for short), and Nonvolatile RAM (Nonvolatile RAM, NVRAM for short). For example, the storage device 31 includes a flash memory device or other non-volatile solid-state storage devices. In some embodiments, the storage device 31 may also include a storage remote from one or more processing devices, for example, a network-attached storage accessed via an RF circuit or an external port and a communication network, where the communication network may be the Internet, a Or multiple intranets, local area networks (LAN), wide area networks (WLAN), storage local area networks (SAN), etc., or appropriate combinations thereof. The storage device 31 also includes a memory controller, which can control access control of the mobile robot such as a central processing unit (CPU) and an interface device 33 or other components to the memory.
接口装置33用于与一智能终端和服务端进行通信交互。例如,所述接口装置33可以接收智能终端发送的或经由所述服务端发送的所述智能终端生成的交互指令。又如,所述移动机器人的接口装置33获取所述服务端或者智能终端发送的所述智能终端所摄取的视频流、所述智能终端的第二输入、获取来自所述智能终端的与创建所述至少一个目标区域相关的共识要素。再如,所述移动机器人通过接口装置33向所述智能终端或一云端服务器提供所述移动机器人的机器人坐标系,以供获取所述交互指令。所述接口装置33包括网络接口、数据线接口等。其中所述网络接口包括但不限于:以太网的网络接口装置、基于移动网络(3G、4G、5G等)的网络接口装置、基于近距离通信(WiFi、蓝牙等)的网络接口装置等。所述数据线接口包括但不限于:USB接口、RS232等。所述接口装置33与所述存储装置31、处理装置 34、互联网、服务端、智能终端、执行装置32等数据连接。The interface device 33 is used to communicate and interact with an intelligent terminal and a server. For example, the interface device 33 may receive an interaction instruction sent by the smart terminal or generated by the smart terminal via the server. For another example, the interface device 33 of the mobile robot obtains the video stream taken by the smart terminal and the second input of the smart terminal sent by the server or the smart terminal, and obtains the video stream from the smart terminal and the creation site. Describe at least one consensus element related to the target area. For another example, the mobile robot provides the robot coordinate system of the mobile robot to the smart terminal or a cloud server through the interface device 33 for obtaining the interactive instruction. The interface device 33 includes a network interface, a data line interface, and the like. The network interface includes, but is not limited to: an Ethernet network interface device, a network interface device based on mobile networks (3G, 4G, 5G, etc.), a network interface device based on short-distance communication (WiFi, Bluetooth, etc.), and the like. The data line interface includes but is not limited to: USB interface, RS232, etc. The interface device 33 is data connected to the storage device 31, the processing device 34, the Internet, the server, the smart terminal, the execution device 32, and the like.
处理装置34与所述存储装置31、执行装置32和接口装置33相连,用于执行所述至少一个程序,以协调所述存储装置31和接口装置33执行图10实施例中所述的交互方法。所述处理装置34包括一个或多个处理器。处理装置34可操作地与存储装置31执行数据读写操作。处理装置34执行诸如提取图像、暂存特征、基于特征在地图中进行定位等。所述处理装置34包括一个或多个通用微处理器、一个或多个专用处理器(ASIC)、一个或多个数字信号处理器(Digital Signal Processor,简称DSP)、一个或多个现场可编程逻辑阵列(Field Programmable Gate Array,简称FPGA)、或它们的任何组合。The processing device 34 is connected to the storage device 31, the execution device 32, and the interface device 33, and is used to execute the at least one program to coordinate the storage device 31 and the interface device 33 to execute the interaction method described in the embodiment of FIG. 10 . The processing device 34 includes one or more processors. The processing device 34 is operable to perform data read and write operations with the storage device 31. The processing device 34 performs operations such as extracting images, temporarily storing features, positioning in a map based on features, and the like. The processing device 34 includes one or more general-purpose microprocessors, one or more special purpose processors (ASIC), one or more digital signal processors (Digital Signal Processor, DSP for short), and one or more field programmable processors. Logic array (Field Programmable Gate Array, FPGA for short), or any combination of them.
通过前文提到的所述的交互方法的实施例,在所述处理装置通过接口装置向所述智能终端或一服务端提供所述移动机器人的机器人坐标系的基础上,所述智能终端或者所述服务端均可以生成一包含利用机器人坐标系中的坐标信息描述的所述至少一个目标区域的交互指令以通过所述智能终端的接口装置或者所述服务端的接口装置发送至所述移动机器人。并且所述移动机器人的处理装置可以解析所述交互指令以至少得到:包含利用机器人坐标系中的坐标信息描述的所述至少一个目标区域。例如,所述交互指令是基于所述目标区域以及第二输入生成的一交互指令,则所述移动机器人解析所述交互指令得到包含利用机器人坐标系中的坐标信息描述的所述至少一个目标区域和所述第二输入。所述移动机器人处理装置基于所述第二输入和所述目标区域控制所述执行装置执行相关操作。其中所述第二输入与前文提到的所述第二输入相同或相似,在此不再详述。又如,所述交互指令与所述移动机器人的功能相关且只包括所述至少一个目标区域,无需用户进行第二输入即可生成所述交互指令。则所述移动机器人解析所述交互指令得到包含利用机器人坐标系中的坐标信息描述的所述至少一个目标区域。所述处理装置基于移动机器人的预设功能和所述目标区域控制所述执行装置执行相关操作。Through the aforementioned embodiment of the interaction method, on the basis that the processing device provides the robot coordinate system of the mobile robot to the intelligent terminal or a server through the interface device, the intelligent terminal or the The server may generate an interactive command including the at least one target area described by the coordinate information in the robot coordinate system to be sent to the mobile robot through the interface device of the smart terminal or the interface device of the server. And the processing device of the mobile robot may parse the interactive instruction to obtain at least: the at least one target area described by the coordinate information in the robot coordinate system. For example, if the interactive instruction is an interactive instruction generated based on the target area and a second input, the mobile robot parses the interactive instruction to obtain the at least one target area described by the coordinate information in the robot coordinate system And the second input. The mobile robot processing device controls the execution device to perform related operations based on the second input and the target area. The second input is the same as or similar to the second input mentioned above, and will not be described in detail here. For another example, the interactive instruction is related to the function of the mobile robot and only includes the at least one target area, and the interactive instruction can be generated without the user's second input. Then, the mobile robot parses the interactive instruction to obtain the at least one target area described by using coordinate information in the robot coordinate system. The processing device controls the execution device to perform related operations based on the preset function of the mobile robot and the target area.
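A hedged sketch of how the robot-side processing device might parse such an instruction and fall back to its preset function when no second input is present; the instruction layout and the callback interface are assumptions for illustration only.

```python
# Illustrative sketch of robot-side handling of the interactive instruction.
def handle_instruction(instruction, preset_function, execute):
    areas = instruction["target_areas"]        # target areas in robot coordinates
    action = instruction.get("second_input")   # may be absent
    if action is None:
        # No second input: the instruction only carries target areas, so the
        # robot applies its own function, e.g. "clean" for a cleaning robot,
        # "patrol" for a patrol robot, "tidy" for a transport robot.
        action = preset_function
    for area in areas:
        execute(action, area)                  # e.g. navigate to the area and act
```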
在又一实施例中,所述移动机器人的机器人坐标系中的坐标信息预存在所述智能终端网络连接的移动机器人中。请参阅图10,图10显示为本申请的交互方法在又一实施例中的流程示意图。In another embodiment, the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the mobile robot connected to the smart terminal network. Please refer to FIG. 10, which shows a schematic flowchart of another embodiment of the interaction method of this application.
所述移动机器人的处理装置基于存储装置存储的所述移动机器人坐标系的坐标信息执行步骤S410,在步骤S410中,处理装置获取来自所述智能终端或者服务端的交互指令。其中,所述交互指令包含至少一个目标区域;所述目标区域经由所述智能终端检测用户输入而得到的,所述目标区域包括所述智能终端的终端坐标系的坐标信息,所述坐标信息与所述机器人坐标系中的坐标信息具有对应关系。The processing device of the mobile robot executes step S410 based on the coordinate information of the coordinate system of the mobile robot stored in the storage device. In step S410, the processing device obtains an interactive instruction from the smart terminal or the server. Wherein, the interaction instruction includes at least one target area; the target area is obtained by detecting user input by the smart terminal, and the target area includes coordinate information of the terminal coordinate system of the smart terminal, and the coordinate information is the same as The coordinate information in the robot coordinate system has a corresponding relationship.
Here, the at least one target area is created by the processing device of the smart terminal in response to the user input detected in a state where the display device previews the physical space interface, and includes coordinate information in the terminal coordinate system of the smart terminal. The processing device of the smart terminal can map the at least one target area created in the previewed physical space interface into the map constructed by the smart terminal, and thereby determine the coordinate information of the at least one target area in the smart terminal map. The manner of detecting the user input and of creating the at least one target area in response to that input is the same as or similar to that in the interaction method described with reference to FIG. 2 and is not detailed again here.
Here, the processing device of the mobile robot has obtained the coordinate information of the mobile robot coordinate system and the coordinate information of the at least one target area in the smart terminal map. Because the map constructed by the smart terminal according to its terminal coordinate system and the map constructed by the mobile robot based on the robot coordinate system correspond to overlapping portions of the same actual physical space, the processing device of the mobile robot can derive the coordinate information of the target area in the map constructed by the mobile robot from the coordinate information of the target area in the map constructed by the smart terminal.
在步骤S420中,控制所述执行装置执行与所述至少一个目标区域相关的操作。In step S420, the execution device is controlled to perform an operation related to the at least one target area.
在一具体实施例中,在所述智能终端的显示装置预览物理空间界面的状态下检测用户的输入的步骤中,检测用户的输入为第一输入。所述智能终端基于所述第一输入创建至少一个目标区域生成一交互指令。所述移动机器人处理装置还通过所述接口装置获取来自所述智能终端的第二输入,获取所述第二输入的步骤可以在所述第一输入之前执行也可以在所述第一输入之后执行。所述处理装置还执行基于所述第二输入控制所述执行装置执行与所述至少一个目标区域相关的操作。例如,所述执行装置包括移动装置,所述处理装置基于所述第二输入生成与所述至少一个目标区域相关的导航路线,并基于所述导航路线控制所述移动装置执行导航移动。又如,所述执行装置包括清洁装置,所述处理装置基于所述第二输入控制所述清洁装置在所述至少一个目标区域内的清洁操作。再如,所述执行装置包括摄像装置,所述处理装置基于所述第二输入控制所述摄像装置在所述至少一个目标区域内的摄像操作。In a specific embodiment, in the step of detecting the user's input in a state where the display device of the smart terminal previews the physical space interface, it is detected that the user's input is the first input. The smart terminal creates at least one target area based on the first input to generate an interactive instruction. The mobile robot processing device also obtains a second input from the smart terminal through the interface device, and the step of obtaining the second input may be performed before the first input or after the first input . The processing device further controls the execution device to perform an operation related to the at least one target area based on the second input. For example, the execution device includes a mobile device, and the processing device generates a navigation route related to the at least one target area based on the second input, and controls the mobile device to perform navigation movement based on the navigation route. For another example, the execution device includes a cleaning device, and the processing device controls a cleaning operation of the cleaning device in the at least one target area based on the second input. For another example, the execution device includes a camera device, and the processing device controls a camera operation of the camera device in the at least one target area based on the second input.
所述第二输入包括以下任意一种:清扫或不清扫目标区域、进入或不进入目标区域、清扫目标区域的力度、整理或不整理目标区域内的物品。例如,所述移动机器人为清洁机器人,如果所述目标区域对应了地面散落垃圾的区域,所述第二输入为清扫目标区域,所述处理装置基于所述第二输入生成进入所述至少一个目标区域的导航路线,并基于所述导航路线控制所述移动装置执行导航移动,并且当所述清洁机器人到达所述至少一个目标区域时控制所述清洁装置清扫地面散落的垃圾。基于所述地面不同种类的垃圾,所述处理装置还可以基于所述第二输入控制清扫目标区域的力度。如果所述目标区域对应了障碍物的区域,所述第二输入为不清扫目标区域。所述处理装置基于所述第二输入生成不进入所述至少一个目标区域相关的导航路线,并基于所述导航路线控制所述移动装置执行导航移动以远离、绕过所述至少一个目标区域。又如,所述移动机器人为巡视机器人,如果所述目标区域对应了用户需要查 看的区域,所述第二输入为进入目标区域,处理装置基于所述第二输入生成进入所述至少一个目标区域的导航路线,并基于所述导航路线控制所述移动装置执行导航移动,并且当所述巡视机器人到达所述至少一个目标区域时控制所述摄像装置摄取所述至少一个目标区域的影像。如果所述目标区域对应了用户不需要查看的区域,所述第二输入为不进入目标区域,处理装置基于所述第二输入生成不进入所述至少一个目标区域的导航路线,并基于所述导航路线控制所述移动装置执行导航移动。再如,所述移动机器人为搬运机器人,如果所述目标区域对应了用户需要整理物品的区域,所述第二输入为整理目标区域内的物品,处理装置基于所述第二输入生成进入所述至少一个目标区域的导航路线,并基于所述导航路线控制所述移动装置执行导航移动,并且当所述搬运机器人到达所述至少一个目标区域时控制所述搬运装置搬运整理所述至少一个目标区域的物品。如果所述目标区域对应了用户不需要整理物品的区域,所述第二输入为不整理目标区域内的物品,处理装置基于所述第二输入控制所述搬运装置不整理所述至少一个目标区域的内的物品。The second input includes any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, strength of cleaning the target area, sorting or not sorting the items in the target area. For example, the mobile robot is a cleaning robot, and if the target area corresponds to an area where garbage is scattered on the ground, the second input is a cleaning target area, and the processing device generates entry into the at least one target based on the second input The navigation route of the area, and based on the navigation route, the mobile device is controlled to perform navigation movement, and when the cleaning robot reaches the at least one target area, the cleaning device is controlled to clean up garbage scattered on the ground. Based on the different types of garbage on the ground, the processing device may also control the force of cleaning the target area based on the second input. If the target area corresponds to an obstacle area, the second input is that the target area is not cleaned. The processing device generates a navigation route related to not entering the at least one target area based on the second input, and controls the mobile device to perform a navigation movement based on the navigation route to move away from or bypass the at least one target area. For another example, the mobile robot is a patrol robot, and if the target area corresponds to an area that the user needs to view, the second input is entering the target area, and the processing device generates entry to the at least one target area based on the second input And control the mobile device to perform navigational movement based on the navigation route, and control the camera device to capture images of the at least one target area when the patrol robot reaches the at least one target area. If the target area corresponds to an area that the user does not need to view, the second input is not entering the target area, and the processing device generates a navigation route that does not enter the at least one target area based on the second input, and based on the The navigation route controls the mobile device to perform navigation movement. 
For another example, the mobile robot is a transport robot. If the target area corresponds to an area in which the user needs items to be tidied, the second input is to tidy the items in the target area; the processing device generates, based on the second input, a navigation route into the at least one target area, controls the moving device to perform navigation movement based on the navigation route, and, when the transport robot reaches the at least one target area, controls the handling device to carry and tidy the items in the at least one target area. If the target area corresponds to an area in which the user does not need items to be tidied, the second input is not to tidy the items in the target area, and the processing device, based on the second input, controls the handling device not to tidy the items within the at least one target area.
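The examples above reduce to routing either into or away from the target area depending on the second input; a simplified, illustrative dispatch is sketched below, where the behaviour names and the planner interface are assumptions rather than defined terms of this application.

```python
# Illustrative sketch: choose a route relative to the target area according
# to the second input (clean/enter/tidy vs. do-not-clean/do-not-enter).
def plan_for_area(second_input, area, planner):
    if second_input in ("clean_area", "enter_area", "tidy_area"):
        return planner.route_into(area)      # navigate into the target area
    # e.g. "do_not_clean", "do_not_enter": keep away from / bypass the area
    return planner.route_avoiding(area)
```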
在另一具体实施例中,所述交互指令与所述移动机器人的功能相关,所述移动机器人的处理装置不需通过所述接口装置获取来自所述智能终端的第二输入即可控制所述执行装置执行与所述至少一个目标区域相关的操作。在本具体实施例中所述交互指令只包括所述至少一个目标区域。例如,所述移动机器人为执行清扫工作的清洁机器人,服务端或智能终端将所述至少一个目标区域发送到所述清洁机器人,清洁机器人自动去清扫所述目标区域。又如,所述移动机器人为执行巡视工作的巡视机器人,服务端或智能终端将所述至少一个目标区域发送到所述巡视机器人,巡视机器人自动去进入所述目标区域执行巡视工作。再如,所述移动机器人为执行整理搬运工作的搬运机器人,服务端或智能终端将所述至少一个目标区域发送到所述搬运机器人,搬运机器人自动进入所述目标区域执行搬运整理工作。In another specific embodiment, the interactive instruction is related to the function of the mobile robot, and the processing device of the mobile robot does not need to obtain the second input from the smart terminal through the interface device to control the The execution device executes an operation related to the at least one target area. In this specific embodiment, the interaction instruction only includes the at least one target area. For example, the mobile robot is a cleaning robot that performs cleaning work, the server or smart terminal sends the at least one target area to the cleaning robot, and the cleaning robot automatically cleans the target area. For another example, the mobile robot is a patrol robot that performs patrol work, the server or smart terminal sends the at least one target area to the patrol robot, and the patrol robot automatically enters the target area to perform the patrol work. For another example, the mobile robot is a handling robot that performs sorting and handling tasks, and the server or smart terminal sends the at least one target area to the handling robot, and the handling robot automatically enters the target area to perform the handling and sorting tasks.
It should be noted that when there are multiple target areas, the mobile robot may order them according to a ranking supplied by the smart terminal or by the user, or according to the distance between each target area and the robot's current position, and then perform the related operations on the target areas in that order. For example, if there are two target areas, the first two meters away from the mobile robot and the second four meters away, the mobile robot first performs the related operations for the first target area and then for the second target area.
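A minimal sketch of the distance-based ordering described above; representing each target area as a polygon and measuring distance to its centroid are illustrative choices only.

```python
# Order target areas by distance from the robot's current position.
import math

def sort_areas_by_distance(areas, robot_xy):
    def centroid(polygon):
        xs, ys = zip(*polygon)
        return sum(xs) / len(xs), sum(ys) / len(ys)
    def distance(area):
        cx, cy = centroid(area)
        return math.hypot(cx - robot_xy[0], cy - robot_xy[1])
    return sorted(areas, key=distance)

# With the robot at (0, 0), an area about 2 m away is handled before one 4 m away.
```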
在一具体实施例中,所述移动机器人获取来至所述智能终端或者服务端的包含至少一个目标区域的交互指令并且处理装置通过接口装置还获取来自所述智能终端的与创建所述至少一个目标区域相关的共识要素。其中,所述共识要素用于确定所述至少一个目标区域在所述机器人坐标系中的坐标位置。所述移动机器人的处理装置解析所述交互指令可得到包含利用 智能终端的终端坐标系中的坐标信息描述的所述至少一个目标区域。所述与创建至少一个目标区域相关的共识要素是指为了确定所述至少一个目标区域在所述机器人坐标系下的坐标信息所需要的数据包括但不限于:智能终端所摄取的视频流、智能终端和所述移动机器人共有的定位特征等。In a specific embodiment, the mobile robot obtains an interactive instruction from the smart terminal or the server that includes at least one target area, and the processing device also obtains and creates the at least one target area from the smart terminal through the interface device. Consensus elements related to the region. Wherein, the consensus element is used to determine the coordinate position of the at least one target area in the robot coordinate system. The processing device of the mobile robot analyzes the interaction instruction to obtain the at least one target area described by using coordinate information in the terminal coordinate system of the smart terminal. The consensus element related to the creation of at least one target area refers to the data required to determine the coordinate information of the at least one target area in the robot coordinate system, including but not limited to: video streams captured by smart terminals, smart Positioning features shared by the terminal and the mobile robot, etc.
在此,所述处理装置还执行步骤S510和步骤S520,在步骤S510中,基于所述共识要素分别在所述机器人坐标系下的坐标信息和终端坐标系下的坐标信息,确定所述对应关系。Here, the processing device also performs steps S510 and S520. In step S510, the corresponding relationship is determined based on the coordinate information of the consensus element in the robot coordinate system and the coordinate information in the terminal coordinate system. .
例如,所述共识要素为所述智能终端和所述移动机器人共有的定位特征。所述共有的定位特征既是智能终端在终端坐标系下构建的地图的定位特征也是移动机器人在机器人坐标系下构建的地图的定位特征。所述智能终端在构建地图时基于所预览的物理空间界面所显示的视频流中提取了多个用于描述实际物理空间中物体的定位特征。并确定了所述多个定位特征在所述智能终端坐标系下的坐标。例如,智能终端在终端坐标系下构建的地图的定位特征包括餐桌腿所对应的定位特征,移动机器人在机器人坐标系下构建的地图的定位特征也包括餐桌腿所对应的定位特征,则所述移动机器人的处理装置基于餐桌腿所对应的定位特征在所述机器人坐标系下的坐标和终端坐标系下的坐标,可以确定餐桌腿所对应的定位特征在所述机器人坐标系下和终端坐标系下的坐标的对应关系。进而可以确定所述智能终端的终端坐标系下的所有坐标与所述移动机器人的机器人坐标系下的所有坐标的对应关系。For example, the consensus element is a positioning feature shared by the smart terminal and the mobile robot. The shared positioning feature is not only the positioning feature of the map constructed by the smart terminal in the terminal coordinate system, but also the positioning feature of the map constructed by the mobile robot in the robot coordinate system. The intelligent terminal extracts multiple positioning features for describing objects in the actual physical space based on the video stream displayed on the previewed physical space interface when constructing the map. And the coordinates of the multiple positioning features in the coordinate system of the smart terminal are determined. For example, the location feature of the map constructed by the smart terminal in the terminal coordinate system includes the location feature corresponding to the table leg, and the location feature of the map constructed by the mobile robot in the robot coordinate system also includes the location feature corresponding to the table leg. The processing device of the mobile robot can determine that the positioning feature corresponding to the table leg is in the robot coordinate system and the terminal coordinate system based on the coordinates of the positioning feature corresponding to the table leg in the robot coordinate system and the coordinate in the terminal coordinate system. Correspondence of the coordinates below. Furthermore, the correspondence between all coordinates in the terminal coordinate system of the smart terminal and all coordinates in the robot coordinate system of the mobile robot can be determined.
又如,所述共识要素为包含了与移动机器人地图的定位特征相对应的物体的图像。例如,所述移动机器人地图的一个定位特征对应了实际物理空间中的椅子,则移动机器人所获取的视频流中包含所述椅子的图像。所述移动机器人的处理装置基于机器人坐标系的坐标信息和所述视频流中的至少一帧图像可以通过图像匹配算法将所述至少一帧图像中的定位特征与移动机器人预先构建的所述物理空间的地图、定位特征及坐标信息进行匹配,从而确定所述图像中与所述移动机器人地图中相匹配的定位特征。在此,在一些示例中,所述移动机器人的处理装置调用与移动机器人构建地图时提取图像中定位特征相同的提取算法,并基于该提取算法提取所述图像中的候选定位特征。其中,该提取算法包括但不限于:基于纹理、形状、空间关系中至少一种特征的提取算法。其中基于纹理特征的提取算法举例包括以下至少一种灰度共生矩阵的纹理特征分析、棋盘格特征法、随机场模型法等;基于形状特征的提取算法举例包括以下至少一种傅里叶形状描述法、形状定量测度法等;基于空间关系特征的提取算法举例为将图像中分割出来的多个图像块之间的相互的空间位置或相对方向关系,这些关系包括但不限于连接/邻接关系、交叠/重叠关系和包含/包容关系等。移动机器人的处理装置利用图像匹配技术将所述图像中的候选定位特征fs1与移动机器人地图对应的定位特征fs2进行匹配,从而得到相匹配的定位特征fs1’。移动机器人的处理装置基于fs1’在智能终端地图中 的坐标和在移动机器人地图中的坐标可以确定所述智能终端地图和所述移动机器人地图之间坐标的对应关系。例如,所述移动机器人的处理装置得到了椅子的定位特征在所述移动机器人坐标系下的坐标和在所述终端坐标系下的坐标,可得到终端坐标系下任意一坐标与移动机器人坐标系中坐标的对应关系。For another example, the consensus element is an image containing an object corresponding to the positioning feature of the mobile robot map. For example, if a location feature of the mobile robot map corresponds to a chair in the actual physical space, the video stream obtained by the mobile robot contains an image of the chair. Based on the coordinate information of the robot coordinate system and the at least one frame of image in the video stream, the processing device of the mobile robot can use an image matching algorithm to compare the positioning feature in the at least one frame of image with the physical pre-built mobile robot. The spatial map, location features, and coordinate information are matched, so as to determine the location features in the image that match the mobile robot map. Here, in some examples, the processing device of the mobile robot invokes the same extraction algorithm that extracts the location features in the image when the mobile robot constructs the map, and extracts the candidate location features in the image based on the extraction algorithm. Wherein, the extraction algorithm includes, but is not limited to: an extraction algorithm based on at least one feature of texture, shape, and spatial relationship. Examples of extraction algorithms based on texture features include at least one of the following gray-level co-occurrence matrix texture feature analysis, checkerboard feature method, random field model method, etc.; examples of extraction algorithms based on shape features include the following at least one Fourier shape description Method, shape quantitative measurement method, etc.; the extraction algorithm based on spatial relationship features is an example of the mutual spatial position or relative direction relationship between multiple image blocks segmented from the image. These relationships include but are not limited to connection/adjacent relationship, Overlapping/overlapping relations and inclusion/containment relations, etc. The processing device of the mobile robot uses image matching technology to match the candidate location feature fs1 in the image with the location feature fs2 corresponding to the mobile robot map, thereby obtaining a matching location feature fs1'. The processing device of the mobile robot can determine the correspondence between the coordinates of the smart terminal map and the mobile robot map based on the coordinates of fs1' in the smart terminal map and the coordinates in the mobile robot map. 
For example, once the processing device of the mobile robot has obtained the coordinates of the chair's positioning feature in both the mobile robot coordinate system and the terminal coordinate system, it can derive the correspondence between any coordinate in the terminal coordinate system and the corresponding coordinate in the mobile robot coordinate system.
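As one possible, hedged realization of the image-matching step described above, OpenCV ORB features with brute-force descriptor matching could be used to pair candidate features fs1 in a frame with map features fs2; this is an illustrative substitute for the texture-, shape- and spatial-relationship-based extraction named in this application, not the claimed algorithm.

```python
# Illustrative feature matching between a video frame and a map keyframe.
# frame and map_keyframe are 8-bit grayscale numpy arrays.
import cv2

def match_candidate_features(frame, map_keyframe):
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(frame, None)         # candidate features fs1
    kp2, des2 = orb.detectAndCompute(map_keyframe, None)  # map features fs2
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # Each match pairs an image feature with a map feature (fs1 -> fs1').
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:50]]
```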
In step S520, the processing device of the mobile robot determines the coordinate information of the at least one target area in the robot coordinate system of the mobile robot based on the correspondence between coordinate information in the robot coordinate system and in the terminal coordinate system, and on the coordinate information of the target area in the terminal coordinate system of the smart terminal. For example, if the target area corresponds to the region of the actual physical space where scattered garbage lies, the mobile robot can determine the multiple coordinates of the target area in the mobile robot map from the multiple coordinates of the target area in the smart terminal map and the correspondence.
所述移动机器人的处理装置基于步骤S520得到的所述至少一个目标区域在所述移动机器人的机器人坐标系中的坐标信息控制所述移动机器人的执行装置执行与所述至少一个目标区域相关的操作。其中,所述处理装置控制所述执行装置执行与所述至少一个目标区域相关的操作的描述与步骤S420中的相同或相似在此不再详述。The processing device of the mobile robot controls the execution device of the mobile robot to perform operations related to the at least one target region based on the coordinate information of the at least one target area in the robot coordinate system of the mobile robot obtained in step S520 . Wherein, the description of the processing device controlling the execution device to perform the operation related to the at least one target area is the same as or similar to that in step S420, and details are omitted here.
其中,在所述移动机器人获取视频流的情况下,所述移动机器人的处理装置也可以基于移动机器人和所述智能终端共有的定位特征这一共识要素来确定所述至少一个目标区域在所述移动机器人的机器人坐标系中的坐标信息。Wherein, in the case that the mobile robot obtains the video stream, the processing device of the mobile robot may also determine that the at least one target area is located in the at least one target area based on the consensus element of the positioning feature shared by the mobile robot and the smart terminal. The coordinate information in the robot coordinate system of the mobile robot.
综上所述,所述移动机器人均可以基于本申请任一实施例所述的交互方法获取所述至少一个目标区域在所述移动机器人坐标系中的坐标信息,进而移动机器人的处理装置可以控制所述执行装置执行与所述至少一个目标区域相关的操作。In summary, the mobile robot can obtain the coordinate information of the at least one target area in the mobile robot coordinate system based on the interaction method described in any embodiment of the present application, and then the processing device of the mobile robot can control The execution device executes an operation related to the at least one target area.
以所述移动机器人为工作在室内的清洁机器人为例,智能终端在所述显示装置预览室内空间界面的状态下检测用户的输入。Taking the mobile robot as a cleaning robot working indoors as an example, the smart terminal detects the user's input in a state where the display device previews the indoor space interface.
在一实施例中,所述清洁机器人在机器人坐标系下构建的地图的定位特征及定位特征在地图中的坐标预存在所述智能终端中。智能终端的处理装置响应检测到的用户输入以在所述预览的室内空间界面中创建了一个包括散落垃圾的目标区域。基于共识要素分别在所述机器人坐标系下的坐标和终端坐标系下的坐标,进而可确定所述包括散落垃圾的目标区域在所述移动机器人的机器人坐标系中的坐标。所述共识要素的相关描述与步骤S210中提到的共识要素相同或相似,在此不在详述。In an embodiment, the location feature of the map constructed by the cleaning robot in the robot coordinate system and the coordinates of the location feature in the map are pre-stored in the smart terminal. The processing device of the smart terminal responds to the detected user input to create a target area including scattered garbage in the previewed indoor space interface. Based on the coordinates of the consensus elements in the robot coordinate system and the coordinates in the terminal coordinate system, the coordinates of the target area including scattered garbage in the robot coordinate system of the mobile robot can be determined. The related description of the consensus element is the same or similar to the consensus element mentioned in step S210, and will not be described in detail here.
智能终端的处理装置基于包括散落垃圾的目标区域在所述清洁机器人的机器人坐标系中的坐标生成一交互指令发送至所述清洁机器人或者经由服务端发送至所述清洁机器人。所述交互指令可以只包含包括散落垃圾的目标区域在所述清洁机器人的机器人坐标系中的坐标。清洁机器人的处理装置通过接口装置接收到所述交互指令时,可以直接基于交互指令生成进 入所述包括散落垃圾的目标区域的导航路线,并基于所述导航路线控制所述移动装置执行导航移动,并且当所述清洁机器人到达所述包括散落垃圾的目标区域时控制所述清洁装置清扫地面散落的垃圾。所述交互指令还可以包括用户的第二输入。对于清洁机器人,用户的第二输入包括但不限于:清扫或不清扫目标区域、清扫目标区域的力度、进入或不进入目标区域。例如,所述第二输入为深度清扫目标区域,清洁机器人的处理装置通过接口装置接收到所述交互指令时,基于交互指令生成进入所述包括散落垃圾的目标区域的导航路线,并基于所述导航路线控制所述移动装置执行导航移动,当所述清洁机器人到达所述包括散落垃圾的目标区域时控制所述清洁装置的风机、边刷、滚刷,使所述清洁装置深度清扫地面散落的垃圾。The processing device of the smart terminal generates an interactive command based on the coordinates of the target area including the scattered garbage in the robot coordinate system of the cleaning robot and sends it to the cleaning robot or sends it to the cleaning robot via the server. The interactive instruction may only include the coordinates of the target area where the garbage is scattered in the robot coordinate system of the cleaning robot. When the processing device of the cleaning robot receives the interactive instruction through the interface device, it can directly generate a navigation route into the target area including scattered garbage based on the interactive instruction, and control the mobile device to perform navigational movement based on the navigation route, And when the cleaning robot reaches the target area including scattered garbage, the cleaning device is controlled to clean the scattered garbage on the ground. The interactive instruction may also include a second input of the user. For the cleaning robot, the user's second input includes, but is not limited to: cleaning or not cleaning the target area, the strength of cleaning the target area, and entering or not entering the target area. For example, the second input is a deep cleaning of the target area, and when the processing device of the cleaning robot receives the interactive instruction through the interface device, it generates a navigation route into the target area including scattered garbage based on the interactive instruction, and based on the The navigation route controls the mobile device to perform navigational movement. When the cleaning robot reaches the target area including scattered garbage, it controls the fan, side brush, and rolling brush of the cleaning device to make the cleaning device deeply clean the scattered ground Rubbish.
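As a hedged illustration of the "deep clean" example above, the second input could be mapped onto cleaning-device settings roughly as follows; the setting names and values are assumptions for illustration, not the robot's actual control interface.

```python
# Illustrative mapping from the second input to fan / side-brush / roll-brush
# settings applied once the cleaning robot reaches the target area.
def cleaning_settings(second_input):
    if second_input == "deep_clean":
        return {"fan": "high", "side_brush": "high", "roll_brush": "high"}
    if second_input == "do_not_clean":
        return None  # the area will be skipped entirely
    return {"fan": "normal", "side_brush": "normal", "roll_brush": "normal"}
```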
在另一实施例中,所述清洁机器人在机器人坐标系下构建的地图的定位特征及定位特征在地图中的坐标预存在所述服务端中。例如,智能终端的处理装置响应检测到的用户输入以在所述预览的室内空间界面中创建了一个包括宠物粪便的目标区域。服务端获取来自所述智能终端的包括宠物粪便的目标区域。服务端的处理装置同样基于共识要素分别在所述机器人坐标系下的坐标和终端坐标系下的坐标,进而可确定所述包括宠物粪便的目标区域在所述移动机器人的机器人坐标系中的坐标。所述共识要素的相关描述与步骤S311和S313中提到的共识要素相同或相似,在此不在详述。In another embodiment, the location feature of the map constructed by the cleaning robot in the robot coordinate system and the coordinates of the location feature in the map are pre-stored in the server. For example, the processing device of the smart terminal responds to the detected user input to create a target area including pet feces in the previewed indoor space interface. The server obtains the target area including pet feces from the smart terminal. The processing device on the server side can also determine the coordinates of the target area including pet feces in the robot coordinate system of the mobile robot based on the coordinates of the consensus elements in the robot coordinate system and the coordinates in the terminal coordinate system, respectively. The related description of the consensus elements is the same or similar to the consensus elements mentioned in steps S311 and S313, and will not be described in detail here.
The server generates an interactive instruction based on the coordinates of the target area including the pet feces in the robot coordinate system of the mobile robot, and sends it to the mobile robot through the interface device. The interactive instruction contains the coordinates of the target area including the pet feces in the robot coordinate system of the cleaning robot and the user's second input of not entering the target area. When the processing device of the cleaning robot receives the interactive instruction through the interface device, it generates, based on the interactive instruction, a navigation route that does not enter the target area including the pet feces, and controls the moving device to perform navigation movement based on the navigation route. It should be noted that the second input is not limited to not entering the target area; the second input may be related to the target area created by the smart terminal based on the actual user input.
在又一实施例中,所述清洁机器人在机器人坐标系下构建的地图的定位特征及定位特征在地图中的坐标预存在所述清洁机器人中。例如,智能终端的处理装置响应检测到的用户输入以在所述预览的室内空间界面中创建了一个包括缠绕物的目标区域。清洁机器人的处理装置通过接口装置获取来自所述智能终端的或者经由服务端转发的交互指令,所述交互指令为包括缠绕物的目标区域在智能终端坐标系下的坐标。清洁机器人的处理装置同样基于共识要素分别在所述机器人坐标系下的坐标和终端坐标系下的坐标,进而可确定所述包括缠绕物的目标区域在所述清洁机器人的机器人坐标系中的坐标。所述共识要素的相关描述与步骤S510中提到的共识要素相同或相似,在此不在详述。In another embodiment, the location feature of the map constructed by the cleaning robot in the robot coordinate system and the coordinates of the location feature in the map are pre-stored in the cleaning robot. For example, the processing device of the smart terminal responds to the detected user input to create a target area including a wrap in the previewed indoor space interface. The processing device of the cleaning robot obtains the interactive instruction from the smart terminal or forwarded via the server through the interface device, and the interactive instruction is the coordinate of the target area including the winding object in the coordinate system of the smart terminal. The processing device of the cleaning robot is also based on the coordinates of the consensus elements in the robot coordinate system and the coordinates in the terminal coordinate system, so as to determine the coordinates of the target area including the winding object in the robot coordinate system of the cleaning robot . The related description of the consensus element is the same or similar to the consensus element mentioned in step S510, and will not be described in detail here.
所述清洁机器人基于所述包括缠绕物的目标区域在所述清洁机器人的机器人坐标系中的坐标和用户的不进入目标区域的第二输入生成不进入包括缠绕物的目标区域的导航路线,并基于所述导航路线控制所述移动装置执行导航移动。需要说明的是,所述第二输入并不限于不进入目标区域,所述第二输入与智能终端基于实际用户输入创建的目标区域相关。The cleaning robot generates a navigation route that does not enter the target area including the winding object based on the coordinates of the target area including the winding object in the robot coordinate system of the cleaning robot and the user's second input of not entering the target area, and The mobile device is controlled to perform navigation movement based on the navigation route. It should be noted that the second input is not limited to not entering the target area, and the second input is related to the target area created by the smart terminal based on actual user input.
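A minimal sketch of keeping planned waypoints out of such a no-go target area (for example, the region containing a tangle of cables); the ray-casting point-in-polygon test and the waypoint filtering are illustrative assumptions, not the navigation method claimed here.

```python
# Illustrative no-go filtering: drop waypoints that fall inside the target area.
def inside(point, polygon):
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge crosses the ray's level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                hit = not hit
    return hit

def filter_waypoints(waypoints, no_go_polygon):
    return [p for p in waypoints if not inside(p, no_go_polygon)]
```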
本申请还提供一种移动机器人的控制系统,包括智能终端和移动机器人。所述控制系统中的移动机器人和智能终端的硬件装置及各自所执行的交互方法与前文中实施例中提到的移动机器人和智能终端的硬件装置及各自所执行的交互方法相同或相似,在此不再详述。This application also provides a control system for a mobile robot, including an intelligent terminal and a mobile robot. The hardware devices of the mobile robot and the smart terminal in the control system and the interaction methods each performed are the same as or similar to the hardware devices of the mobile robot and the smart terminal and the interaction methods each performed in the previous embodiments. This will not be detailed here.
本申请还提供一种计算机可读存储介质,用于存储至少一种程序,所述至少一种程序在被调用时执行本申请上述针对图2实施例中所述的交互方法。The present application also provides a computer-readable storage medium for storing at least one program, and the at least one program, when called, executes the interaction method described in the above-mentioned embodiment of this application with respect to FIG. 2.
所述交互方法如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。If the interaction method is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present application essentially or the part that contributes to the existing technology or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the method described in each embodiment of the present application.
于本申请提供的实施例中,所述计算机可读写存储介质可以包括只读存储器、随机存取存储器、EEPROM、CD-ROM或其它光盘存储装置、磁盘存储装置或其它磁存储设备、闪存、U盘、移动硬盘、或者能够用于存储具有指令或数据结构形式的期望的程序代码并能够由计算机进行存取的任何其它介质。另外,任何连接都可以适当地称为计算机可读介质。例如,如果指令是使用同轴电缆、光纤光缆、双绞线、数字订户线(DSL)或者诸如红外线、无线电和微波之类的无线技术,从网站、服务器或其它远程源发送的,则所述同轴电缆、光纤光缆、双绞线、DSL或者诸如红外线、无线电和微波之类的无线技术包括在所述介质的定义中。然而,应当理解的是,计算机可读写存储介质和数据存储介质不包括连接、载波、信号或者其它暂时性介质,而是旨在针对于非暂时性、有形的存储介质。如申请中所使用的磁盘和光盘包括压缩光盘(CD)、激光光盘、光盘、数字多功能光盘(DVD)、软盘和蓝光光盘,其中,磁盘通常磁性地复制数据,而光盘则用激光来光学地复制数据。In the embodiments provided in this application, the computer readable and writable storage medium may include read-only memory, random access memory, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, U disk, mobile hard disk, or any other medium that can be used to store desired program codes in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium. For example, if the instruction is sent from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technology such as infrared, radio, and microwave, the Coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium. However, it should be understood that computer readable and writable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are intended for non-transitory, tangible storage media. For example, the magnetic disks and optical disks used in the application include compact disks (CD), laser disks, optical disks, digital versatile disks (DVD), floppy disks, and Blu-ray disks. Disks usually copy data magnetically, while optical disks use lasers for optical Copy data locally.
上述实施例仅例示性说明本申请的原理及其功效,而非用于限制本申请。任何熟悉此技术的人士皆可在不违背本申请的精神及范畴下,对上述实施例进行修饰或改变。因此,举凡所属技术领域中具有通常知识者在未脱离本申请所揭示的精神与技术思想下所完成的一切等效修饰或改变,仍应由本申请的权利要求所涵盖。The above-mentioned embodiments only exemplarily illustrate the principles and effects of the present application, and are not used to limit the present application. Anyone familiar with this technology can modify or change the above-mentioned embodiments without departing from the spirit and scope of this application. Therefore, all equivalent modifications or changes made by persons with ordinary knowledge in the technical field without departing from the spirit and technical ideas disclosed in this application should still be covered by the claims of this application.

Claims (31)

  1. 一种与移动机器人的交互方法,用于至少包含显示装置的智能终端,其特征在于,包括以下步骤:An interaction method with a mobile robot, which is used in an intelligent terminal including at least a display device, is characterized in that it includes the following steps:
    在所述显示装置预览物理空间界面的状态下检测用户的输入;Detecting user input in a state where the display device previews the physical space interface;
    响应检测到的输入以在所述预览的物理空间界面中创建至少一个目标区域;所述目标区域包括所述智能终端的终端坐标系的坐标信息,所述坐标信息与所述移动机器人的机器人坐标系中的坐标信息具有对应关系;In response to the detected input, at least one target area is created in the previewed physical space interface; the target area includes coordinate information of the terminal coordinate system of the smart terminal, and the coordinate information is the same as the robot coordinates of the mobile robot The coordinate information in the system has a corresponding relationship;
    基于所述至少一个目标区域生成一交互指令以发送至所述移动机器人。An interactive command is generated based on the at least one target area to be sent to the mobile robot.
  2. 根据权利要求1所述的与移动机器人的交互方法,其特征在于,所述在显示装置预览物理空间界面的状态下检测用户的输入的步骤包括:The interaction method with a mobile robot according to claim 1, wherein the step of detecting the user's input in a state where the display device previews the physical space interface comprises:
    在所述显示装置所预览的物理空间界面中显示所述智能终端的摄像装置实时摄取的视频流;Displaying the real-time video stream captured by the camera device of the smart terminal in the physical space interface previewed by the display device;
    利用所述智能终端的输入装置检测用户在所述物理空间界面中的输入。The input device of the smart terminal is used to detect the user's input in the physical space interface.
  3. 根据权利要求2所述的与移动机器人的交互方法,其特征在于,所检测的输入包括以下至少一种:滑动输入操作、点击输入操作。The interaction method with a mobile robot according to claim 2, wherein the detected input includes at least one of the following: a sliding input operation and a tap input operation.
  4. 根据权利要求1所述的与移动机器人的交互方法,其特征在于,所述在显示装置预览物理空间界面的状态下检测用户的输入的步骤包括:The interaction method with a mobile robot according to claim 1, wherein the step of detecting the user's input in a state where the display device previews the physical space interface comprises:
    在所述显示装置所预览的物理空间界面中显示所述智能终端的摄像装置实时摄取的视频流;Displaying the real-time video stream captured by the camera device of the smart terminal in the physical space interface previewed by the display device;
    利用检测所述智能终端中的移动传感装置以获得用户的输入。The mobile sensing device in the smart terminal is detected to obtain the user's input.
  5. 根据权利要求1所述的与移动机器人的交互方法,其特征在于,还包括:在预览所述物理空间界面的状态下构建所述终端坐标系,以在完成所述终端坐标系构建的状态下响应检测到的输入。The method for interacting with a mobile robot according to claim 1, further comprising: constructing the terminal coordinate system in a state of previewing the physical space interface, so as to complete the construction of the terminal coordinate system Respond to the detected input.
  6. 根据权利要求1所述的与移动机器人的交互方法,其特征在于,所述移动机器人的机器人坐标系中的坐标信息预存在所述智能终端中;或者所述移动机器人的机器人坐标系中的坐标信息预存在所述智能终端网络连接的云端服务器中;或者所述移动机器人的机器人坐标系中的坐标信息预存在所述智能终端网络连接的移动机器人中。The interaction method with a mobile robot according to claim 1, wherein the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the smart terminal; or the coordinates in the robot coordinate system of the mobile robot The information is pre-stored in the cloud server connected to the smart terminal network; or the coordinate information in the robot coordinate system of the mobile robot is pre-stored in the mobile robot connected to the smart terminal network.
  7. 根据权利要求1或6所述的与移动机器人的交互方法,其特征在于,所述交互方法还包括:The interaction method with a mobile robot according to claim 1 or 6, wherein the interaction method further comprises:
    基于从所预览的物理空间界面中提取到的共识要素分别在所述机器人坐标系下的坐标信息和终端坐标系下的坐标信息,确定所述对应关系;Determine the corresponding relationship based on the coordinate information of the consensus elements extracted from the previewed physical space interface in the robot coordinate system and the coordinate information in the terminal coordinate system;
    基于所述对应关系确定所述至少一个目标区域在所述移动机器人的机器人坐标系中的坐标信息。The coordinate information of the at least one target area in the robot coordinate system of the mobile robot is determined based on the corresponding relationship.
  8. 根据权利要求7所述的与移动机器人的交互方法,其特征在于,所述基于至少一个目标区域生成一交互指令以发送至所述移动机器人的步骤包括:8. The method for interacting with a mobile robot according to claim 7, wherein the step of generating an interactive command based on at least one target area to send to the mobile robot comprises:
    生成包含利用机器人坐标系中的坐标信息描述的所述至少一个目标区域的交互指令以发送至所述移动机器人。An interactive instruction including the at least one target area described by using coordinate information in the robot coordinate system is generated to be sent to the mobile robot.
  9. 根据权利要求1所述的与移动机器人的交互方法,其特征在于,所述基于至少一个目标区域生成一交互指令以发送至所述移动机器人的步骤包括:The interaction method with a mobile robot according to claim 1, wherein the step of generating an interactive command based on at least one target area to send to the mobile robot comprises:
    generating an interactive instruction containing the at least one target area and a consensus element related to the creation of the at least one target area, to be sent to the mobile robot; wherein the consensus element is used to determine the coordinate position of the at least one target area in the robot coordinate system.
  10. 根据权利要求1所述的与移动机器人的交互方法,其特征在于,还包括以下至少一种步骤:The method for interacting with a mobile robot according to claim 1, further comprising at least one of the following steps:
    利用所述物理空间界面提示用户进行输入操作;Using the physical space interface to prompt the user to perform an input operation;
    利用声音提示用户进行输入操作;或者Use voice to prompt the user to input; or
    利用振动提示用户进行输入操作。Use vibration to prompt the user to input.
  11. The interaction method with a mobile robot according to claim 1, wherein in the step of detecting the user's input in a state where the display device previews the physical space interface, the detected user input is a first input, and the method further comprises: generating an interactive instruction based on the target area and a detected second input of the user, to be sent to the mobile robot.
  12. 根据权利要求11所述的与移动机器人的交互方法,其特征在于,所述第二输入包括以下任一种:清扫或不清扫目标区域、进入或不进入目标区域、整理或不整理目标区域内的物品。The interaction method with a mobile robot according to claim 11, wherein the second input includes any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, sorting or not sorting the target area Items.
  13. 一种智能终端,其特征在于,包括:An intelligent terminal, characterized in that it comprises:
    显示装置,用于为一物理空间界面提供预览操作;The display device is used to provide a preview operation for a physical space interface;
    存储装置,用于存储至少一个程序;Storage device for storing at least one program;
    接口装置,用于与一移动机器人进行通信交互;The interface device is used to communicate and interact with a mobile robot;
    a processing device, connected to the display device, the storage device and the interface device, and configured to execute the at least one program so as to coordinate the display device, the storage device and the interface device to perform the interaction method according to any one of claims 1-12.
14. A server, characterized in that it comprises:
    a storage device, configured to store at least one program;
    an interface device, configured to assist an intelligent terminal and a mobile robot in communicating and interacting; and
    a processing device, connected to the storage device and the interface device, and configured to execute the at least one program so as to coordinate the storage device and the interface device to perform the following interaction method:
    acquiring at least one target area from the intelligent terminal; wherein the target area is obtained by the intelligent terminal detecting a user input, the target area includes coordinate information in the terminal coordinate system of the intelligent terminal, and that coordinate information has a correspondence with coordinate information in the robot coordinate system of the mobile robot;
    generating an interactive instruction based on the at least one target area and sending it to the mobile robot through the interface device.
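A schematic server-side relay corresponding to the method of claim 14 is sketched below, with the transport layer abstracted into injected callables; the class, method and parameter names are assumptions of this sketch rather than a prescribed implementation.

```python
class RelayServer:
    """Receives target areas from the intelligent terminal, converts them into
    the robot coordinate system, and forwards an interactive instruction."""

    def __init__(self, to_robot_frame, send_to_robot):
        self.to_robot_frame = to_robot_frame   # e.g. the transform fitted earlier
        self.send_to_robot = send_to_robot     # e.g. a publish/HTTP-post callable

    def on_target_area(self, target_area_terminal, action="clean"):
        # 1. express the received area in the robot coordinate system
        target_area_robot = self.to_robot_frame(target_area_terminal)
        # 2. wrap the converted area in an interactive instruction
        instruction = {"frame": "robot", "action": action,
                       "target_areas": [target_area_robot]}
        # 3. forward the instruction to the robot through the interface device
        self.send_to_robot(instruction)
```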
15. The server according to claim 14, wherein the storage device pre-stores the robot coordinate system; or the processing device acquires the robot coordinate system from the intelligent terminal or the mobile robot through the interface device.
16. The server according to claim 15, wherein the processing device further acquires, through the interface device, a video stream captured by the intelligent terminal;
    the processing device determines the correspondence based on the coordinate information, in the robot coordinate system and in the terminal coordinate system respectively, of the consensus elements provided by the video stream; and
    determines, based on the correspondence, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
17. The server according to claim 16, wherein the step of the processing device generating an interactive instruction based on the at least one target area and sending it to the mobile robot comprises:
    generating an interactive instruction that contains the at least one target area described with coordinate information in the robot coordinate system, and sending it to the mobile robot through the interface device.
18. The server according to claim 14, wherein the step of generating an interactive instruction based on the at least one target area and sending it to the mobile robot comprises:
    acquiring, from the intelligent terminal, the consensus elements related to the creation of the at least one target area; wherein the consensus elements are used to determine the coordinate position of the at least one target area in the robot coordinate system;
    generating an interactive instruction that contains the at least one target area and the consensus elements, and sending it to the mobile robot through the interface device.
19. The server according to claim 14, wherein the processing device further acquires a second input from the intelligent terminal through the interface device, and the processing device further generates an interactive instruction based on the target area and the second input and sends it to the mobile robot.
20. The server according to claim 19, wherein the second input comprises any one of the following: cleaning or not cleaning the target area, entering or not entering the target area, and tidying or not tidying items in the target area.
21. A mobile robot, characterized in that it comprises:
    a storage device, configured to store at least one program and a pre-built robot coordinate system;
    an interface device, configured to communicate and interact with an intelligent terminal;
    an execution device, configured to perform corresponding operations under control; and
    a processing device, connected to the storage device, the interface device and the execution device, and configured to execute the at least one program so as to coordinate the storage device and the interface device to perform the following interaction method:
    acquiring an interactive instruction from the intelligent terminal; wherein the interactive instruction contains at least one target area, the target area is obtained by the intelligent terminal detecting a user input, the target area includes coordinate information in the terminal coordinate system of the intelligent terminal, and that coordinate information has a correspondence with coordinate information in the robot coordinate system;
    controlling the execution device to perform an operation related to the at least one target area.
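On the robot side, the handling of a received interactive instruction might, in outline, look like the sketch below; the execution-device methods are hypothetical, and the field names simply follow the earlier illustrative payload rather than any format fixed by the claims.

```python
def handle_instruction(instruction: dict, execution_device) -> None:
    """Dispatch each target area carried by the instruction to the execution device."""
    action = instruction.get("action", "clean")
    for area in instruction.get("target_areas", []):
        if action == "clean":
            execution_device.clean_area(area)            # hypothetical method
        elif action == "do_not_enter":
            execution_device.add_forbidden_zone(area)    # treat as a virtual wall
        elif action == "tidy":
            execution_device.tidy_area(area)             # hypothetical method
```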
22. The mobile robot according to claim 21, wherein the processing device provides the robot coordinate system of the mobile robot to the intelligent terminal or a cloud server through the interface device, for use in obtaining the interactive instruction.
23. The mobile robot according to claim 22, wherein the step of the processing device performing an operation related to the at least one target area comprises:
    parsing the interactive instruction to obtain at least the at least one target area described with coordinate information in the robot coordinate system;
    controlling the execution device to perform an operation related to the at least one target area.
24. The mobile robot according to claim 21, wherein the processing device further acquires, through the interface device, the consensus elements related to the creation of the at least one target area from the intelligent terminal; wherein the consensus elements are used to determine the coordinate position of the at least one target area in the robot coordinate system;
    the processing device further performs the following steps:
    determining the correspondence based on the coordinate information of the consensus elements in the robot coordinate system and in the terminal coordinate system respectively; and
    determining, based on the correspondence, the coordinate information of the at least one target area in the robot coordinate system of the mobile robot.
25. The mobile robot according to claim 21, wherein the processing device further acquires a second input from the intelligent terminal through the interface device, and the processing device further controls, based on the second input, the execution device to perform an operation related to the at least one target area.
26. The mobile robot according to claim 25, wherein the second input comprises any one of the following: cleaning or not cleaning the target area, the intensity with which the target area is cleaned, entering or not entering the target area, and tidying or not tidying items in the target area.
27. The mobile robot according to claim 25, wherein the execution device comprises a moving device, and the processing device generates a navigation route related to the at least one target area based on the second input and controls the moving device to perform navigational movement based on the navigation route.
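One very simplified way to derive a navigation route for a target area, as recited in claim 27, is a boustrophedon sweep over the area's bounding box; a real planner would clip the lanes to the area polygon and account for obstacles in the robot's map. The sketch below is offered only under those stated simplifications and does not represent the planner actually claimed.

```python
import numpy as np

def coverage_route(area_robot: np.ndarray, lane_width: float = 0.2):
    """Zig-zag waypoints over the axis-aligned bounding box of a target area
    given as an N x 2 array of robot-frame corner coordinates (metres)."""
    x_min, y_min = area_robot.min(axis=0)
    x_max, y_max = area_robot.max(axis=0)
    waypoints, left_to_right = [], True
    y = y_min
    while y <= y_max:
        x_a, x_b = (x_min, x_max) if left_to_right else (x_max, x_min)
        waypoints += [(x_a, y), (x_b, y)]   # traverse one lane
        left_to_right = not left_to_right   # reverse direction for the next lane
        y += lane_width
    return waypoints
```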
28. The mobile robot according to claim 25, wherein the execution device comprises a cleaning device, and the processing device controls, based on the second input, a cleaning operation of the cleaning device within the at least one target area.
29. The mobile robot according to claim 21, wherein the mobile robot comprises: a cleaning robot, a patrol robot, or a transport robot.
30. A control system for a mobile robot, characterized in that it comprises:
    the intelligent terminal according to claim 13; and
    the mobile robot according to any one of claims 21-29.
31. A computer-readable storage medium, characterized in that it stores at least one program, and the at least one program, when invoked, executes and implements the interaction method according to any one of claims 1-12.
PCT/CN2019/108590 2019-09-27 2019-09-27 Intelligent terminal, control system, and method for interaction with mobile robot WO2021056428A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980094943.6A CN113710133B (en) 2019-09-27 2019-09-27 Intelligent terminal, control system and interaction method with mobile robot
PCT/CN2019/108590 WO2021056428A1 (en) 2019-09-27 2019-09-27 Intelligent terminal, control system, and method for interaction with mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/108590 WO2021056428A1 (en) 2019-09-27 2019-09-27 Intelligent terminal, control system, and method for interaction with mobile robot

Publications (1)

Publication Number Publication Date
WO2021056428A1

Family

ID=75164788

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/108590 WO2021056428A1 (en) 2019-09-27 2019-09-27 Intelligent terminal, control system, and method for interaction with mobile robot

Country Status (2)

Country Link
CN (1) CN113710133B (en)
WO (1) WO2021056428A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6259233B2 (en) * 2013-09-11 2018-01-10 学校法人常翔学園 Mobile robot, mobile robot control system, and program
CN109725632A (en) * 2017-10-30 2019-05-07 速感科技(北京)有限公司 Removable smart machine control method, removable smart machine and intelligent sweeping machine
CN110147091B (en) * 2018-02-13 2022-06-28 深圳市优必选科技有限公司 Robot motion control method and device and robot
CN110200549A (en) * 2019-04-22 2019-09-06 深圳飞科机器人有限公司 Clean robot control method and Related product

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105407774A (en) * 2013-07-29 2016-03-16 三星电子株式会社 Auto-cleaning system, cleaning robot and method of controlling the cleaning robot
US20180055312A1 (en) * 2016-08-30 2018-03-01 Lg Electronics Inc. Robot cleaner, method of operating the same, and augmented reality system
CN106933227A (en) * 2017-03-31 2017-07-07 联想(北京)有限公司 The method and electronic equipment of a kind of guiding intelligent robot
CN109262607A (en) * 2018-08-15 2019-01-25 武汉华安科技股份有限公司 Robot coordinate system's conversion method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113848892A (en) * 2021-09-10 2021-12-28 广东盈峰智能环卫科技有限公司 Robot cleaning area dividing method, path planning method and device
CN113848892B (en) * 2021-09-10 2024-01-16 广东盈峰智能环卫科技有限公司 Robot cleaning area dividing method, path planning method and device
CN114153310A (en) * 2021-11-18 2022-03-08 天津塔米智能科技有限公司 Robot guest greeting method, device, equipment and medium
CN114431800A (en) * 2022-01-04 2022-05-06 北京石头世纪科技股份有限公司 Control method and device for cleaning robot compartment and electronic equipment
CN114431800B (en) * 2022-01-04 2024-04-16 北京石头世纪科技股份有限公司 Control method and device for cleaning robot zoning cleaning and electronic equipment

Also Published As

Publication number Publication date
CN113710133A (en) 2021-11-26
CN113710133B (en) 2022-09-09

Similar Documents

Publication Publication Date Title
CN110974088B (en) Sweeping robot control method, sweeping robot and storage medium
US11385720B2 (en) Picture selection method of projection touch
CN108885459B (en) Navigation method, navigation system, mobile control system and mobile robot
US11126257B2 (en) System and method for detecting human gaze and gesture in unconstrained environments
WO2021056428A1 (en) Intelligent terminal, control system, and method for interaction with mobile robot
JP5942456B2 (en) Image processing apparatus, image processing method, and program
CN110310175A (en) System and method for mobile augmented reality
JP5807686B2 (en) Image processing apparatus, image processing method, and program
JP2019071046A (en) Robotic virtual boundaries
KR20220004607A (en) Target detection method, electronic device, roadside device and cloud control platform
WO2020223975A1 (en) Method of locating device on map, server, and mobile robot
JP5213183B2 (en) Robot control system and robot control program
KR20180118219A (en) Interfacing with a mobile telepresence robot
KR20200036678A (en) Cleaning robot and Method of performing task thereof
CN113116224B (en) Robot and control method thereof
US9477302B2 (en) System and method for programing devices within world space volumes
US10950056B2 (en) Apparatus and method for generating point cloud data
CN111643899A (en) Virtual article display method and device, electronic equipment and storage medium
CN115164906B (en) Positioning method, robot, and computer-readable storage medium
US10241588B1 (en) System for localizing devices in a room
EP3422145A1 (en) Provision of virtual reality content
WO2021248857A1 (en) Obstacle attribute discrimination method and system, and intelligent robot
CN108874141B (en) Somatosensory browsing method and device
CN110962132B (en) Robot system
WO2021125019A1 (en) Information system, information processing method, information processing program and robot system

Legal Events

Date Code Title Description
121  EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19947306; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122  EP: PCT application non-entry in European phase (Ref document number: 19947306; Country of ref document: EP; Kind code of ref document: A1)
