WO2022000755A1 - Robot and action control method and apparatus therefor, and computer-readable storage medium - Google Patents

Robot and action control method and apparatus therefor, and computer-readable storage medium Download PDF

Info

Publication number
WO2022000755A1
WO2022000755A1 PCT/CN2020/112499 CN2020112499W
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional coordinate
robot
target object
information
coordinate information
Prior art date
Application number
PCT/CN2020/112499
Other languages
English (en)
French (fr)
Inventor
安程治
王芳
李锐
金长新
Original Assignee
济南浪潮高新科技投资发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 济南浪潮高新科技投资发展有限公司
Publication of WO2022000755A1

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • The present invention relates to the technical field of artificial intelligence, and in particular to a robot action control method and apparatus, a robot, and a computer-readable storage medium.
  • The purpose of the present invention is to provide a robot action control method and apparatus, a robot, and a computer-readable storage medium, so that the robot can quickly and conveniently identify the position of a target object in the real physical world, adapt to position changes of the target object, and correctly perform the corresponding action.
  • The present invention provides an action control method for a robot, including the steps described below.
  • The action control instruction includes target object information and action control information.
  • The robot is controlled to move to the position corresponding to the action control information and perform the corresponding operation.
  • Acquiring the action control instruction includes:
  • performing speech recognition on the voice information captured by the robot's microphone to obtain the action control instruction.
  • Converting the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system includes:
  • mapping the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in a three-dimensional digital map.
  • Before using the plane detection function of augmented reality to map the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in the three-dimensional digital map, the method further includes:
  • constructing the three-dimensional digital map using the simultaneous localization and mapping (SLAM) function of augmented reality.
  • The present invention also provides a robot action control apparatus, comprising:
  • an acquisition module for acquiring an action control instruction, wherein the action control instruction includes target object information and action control information;
  • a detection module configured to detect the target object corresponding to the target object information in the two-dimensional image captured by the camera of the robot, and determine the two-dimensional coordinate information of the target object;
  • a conversion module configured to convert the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system; and
  • a control module configured to control the robot, according to the three-dimensional coordinate information, to move to the position corresponding to the action control information and perform the corresponding operation.
  • The acquisition module includes:
  • a speech recognition submodule used to perform speech recognition on the voice information captured by the robot's microphone and obtain the action control instruction.
  • The conversion module includes:
  • a plane detection submodule configured to use the plane detection function of augmented reality to map the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in the three-dimensional digital map.
  • The apparatus further includes:
  • a simultaneous localization and mapping module used to construct the three-dimensional digital map using the simultaneous localization and mapping function of augmented reality.
  • The present invention also provides a robot, comprising:
  • a memory for storing a computer program; and a processor configured to implement the steps of the above robot action control method when executing the computer program.
  • The present invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the above robot action control method.
  • An action control method for a robot includes: acquiring an action control instruction, wherein the action control instruction includes target object information and action control information; detecting, in a two-dimensional image captured by the robot's camera, the target object corresponding to the target object information, and determining the two-dimensional coordinate information of the target object; converting the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system; and, according to the three-dimensional coordinate information, controlling the robot to move to the position corresponding to the action control information and perform the corresponding operation.
  • The present invention detects the target object corresponding to the target object information in the two-dimensional image captured by the robot's camera and determines its two-dimensional coordinate information; that is, it uses a two-dimensional image of the real physical world, captured by the camera, to recognize the target object. By converting the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in the preset three-dimensional coordinate system, the target object's coordinates are expressed in the three-dimensional coordinate system used by the robot, so its actual position in that coordinate system can be determined; the robot can thus adapt to position changes of the target object, correctly perform the corresponding action, and improve the user experience.
  • The present invention also provides a robot action control apparatus, a robot, and a computer-readable storage medium, which have the same beneficial effects.
  • FIG. 1 is a flowchart of a robot action control method according to an embodiment of the present invention;
  • FIG. 2 is a structural block diagram of a robot action control apparatus according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a robot according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a robot action control method according to an embodiment of the present invention.
  • The method can include:
  • Step 101: Obtain an action control instruction, wherein the action control instruction includes target object information and action control information.
  • The action control instruction obtained by the robot's processor in this step can be an action control instruction corresponding to a target object.
  • For example, in a home scenario, the processor can generate the corresponding action control instruction from the user's voice or text input "Go to the coffee table and bring the apple over", that is, an action control instruction corresponding to the two target objects, the coffee table and the apple.
  • The specific content and type of the action control instruction in this step can be set by the designer according to the usage scenario and user needs;
  • it can be implemented in the same or a similar way as robot action control instructions in the prior art,
  • as long as the action control instruction in this embodiment corresponds to the target object, that is, the action control instruction includes not only action control information but also target object information. This embodiment places no limitation on this.
  • The processor can generate the action control instruction from touch information captured by the robot's touch screen; that is, the user can control the robot's actions by touching its touch screen.
  • The processor can also directly obtain action control instructions received by the robot's wireless receiving device (such as a Bluetooth or WiFi device); for example, a user can wirelessly send action control instructions to the robot through a smart terminal such as a mobile phone to control the robot's actions.
  • The processor can also perform speech recognition on the voice information captured by the robot's microphone to obtain action control instructions; that is, the user can control the robot's actions by voice (sound waves).
  • In other words, while the robot is running, the speech recognition function can be enabled in real time to convert the user's spoken commands into text (i.e., character strings), from which the robot extracts the corresponding action control instruction. This embodiment places no limitation on this.
  • Step 102: Detect the target object corresponding to the target object information in the two-dimensional image captured by the robot's camera, and determine the two-dimensional coordinate information of the target object.
  • In this step, the processor may use a two-dimensional image of the actual physical environment, captured by a camera mounted on the robot, to determine the target object in the two-dimensional image and its two-dimensional coordinate information.
  • The specific way the processor detects the target object corresponding to the target object information in the two-dimensional image captured by the robot's camera and determines its two-dimensional coordinate information can be set by the designer: the processor can use object detection technology to recognize the target object in the image and determine its two-dimensional coordinates, or it can use other detection technologies in the prior art for the same purpose. This embodiment places no limitation on this.
  • Correspondingly, the processor may recognize only the target object corresponding to the target object information in the two-dimensional image and determine its two-dimensional coordinate information.
  • For example, the processor can use object detection technology to process and recognize the target object in real time, attach the corresponding target object information to it, and determine its two-dimensional coordinates.
  • The processor may instead recognize all objects in the two-dimensional image (which include the target object) and determine the two-dimensional coordinate information of each object; for example, the processor can use object detection technology to process and recognize, in real time, all objects including the target object, attach the corresponding object information to each, and determine each object's two-dimensional coordinates. This embodiment places no limitation on this.
  • Step 103: Convert the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system.
  • In this step, the processor converts the target object's two-dimensional coordinate information in the image into three-dimensional coordinate information in the three-dimensional coordinate system used for the robot's actions (i.e., the preset three-dimensional coordinate system); that is, the target object's two-dimensional image coordinates are mapped to three-dimensional coordinates in the preset coordinate system, so that the robot can determine the target object's position in the coordinate system corresponding to the real physical world and carry out subsequent accurate actions and operations.
  • The preset three-dimensional coordinate system in this step may be a coordinate system, set in advance, that corresponds to the real physical world in which the robot acts.
  • Its specific type and acquisition method can be set by the designer according to practical scenarios and user needs.
  • the preset 3D coordinate system can use AR (Augmented Reality, augmented reality) SLAM ( Simultaneous localization and mapping, real-time positioning and map construction) function, the built three-dimensional digital map (ie SLAM digital map); that is to say, before this step can also include the use of augmented reality real-time positioning and map construction function, to build a three-dimensional digital map
  • AR Augmented Reality, augmented reality
  • SLAM Simultaneous localization and mapping, real-time positioning and map construction
  • the built three-dimensional digital map ie SLAM digital map
  • the processor can use the SLAM of AR to build a map in real time according to the two-dimensional image collected by the camera of the robot.
  • the function understands the physical environment in which the robot is located, and draws a three-dimensional digital map to correspond and record the objective physical structure space.
  • the preset 3D coordinate system may be a 3D coordinate system constructed by other mapping techniques. This embodiment does not impose any limitation on this.
  • The specific way the processor converts the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in the preset three-dimensional coordinate system can be set by the designer.
  • For example, the processor can use AR's plane detection (raycast) function to map the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in the three-dimensional digital map.
  • As long as the processor can convert the target object's two-dimensional coordinate information into the corresponding three-dimensional coordinate information in the preset three-dimensional coordinate system, this embodiment places no limitation on this.
  • Correspondingly, in this step the processor may convert only the target object's two-dimensional coordinate information into three-dimensional coordinate information in the preset coordinate system.
  • For example, the processor can use AR's raycast function to map the two-dimensional coordinates of the target object carrying the target object information into the three-dimensional digital map.
  • Since the digital map corresponds to the real physical world, the robot then effectively knows the target object's position in the real world. The processor may also convert the two-dimensional coordinate information of all objects in the image (including the target object) into their corresponding three-dimensional coordinates in the preset coordinate system.
  • For example, the processor can use AR's raycast function to map the two-dimensional coordinates of all objects carrying object information into the three-dimensional digital map, so that the robot knows the position of every object in the real physical world.
  • This embodiment places no limitation on this.
  • Step 104: Control the robot, according to the three-dimensional coordinate information, to move to the position corresponding to the action control information and perform the corresponding operation.
  • In this step, the processor uses the target object's three-dimensional coordinate information to control the robot to move to the position corresponding to the action control information in the action control instruction and perform the corresponding operation, thereby completing the action control instruction and achieving action control of the robot.
  • This step may further include determining the position corresponding to the action control information from the target object's three-dimensional coordinate information, and calculating an action path from that position and the robot's own three-dimensional coordinates, so as to plan the robot's path and ensure that the robot can reach the position corresponding to the action control information and perform the corresponding operation.
  • For example, after the processor obtains the action control instruction corresponding to "Go to the coffee table and bring the apple over",
  • it can find the objects carrying the coffee-table and apple information in the three-dimensional digital map, determine their corresponding three-dimensional coordinates, then plan a path and execute the operation corresponding to the instruction, such as picking up the apple.
  • The embodiment of the present invention detects the target object corresponding to the target object information in the two-dimensional image captured by the robot's camera and determines its two-dimensional coordinate information, recognizing the target object from a two-dimensional image of the real physical world captured by the camera; by converting the two-dimensional coordinate information in the image into three-dimensional coordinate information in the preset three-dimensional coordinate system, the target object's coordinates are expressed in the three-dimensional coordinate system used by the robot,
  • so the actual position of the target object in the three-dimensional coordinate system can be determined; the robot can thus adapt to position changes of the target object, correctly perform the corresponding action, and improve the user experience.
  • FIG. 2 is a structural block diagram of a robot action control apparatus according to an embodiment of the present invention.
  • The apparatus may include:
  • an acquisition module 10, configured to acquire an action control instruction, wherein the action control instruction includes target object information and action control information;
  • a detection module 20, configured to detect the target object corresponding to the target object information in the two-dimensional image captured by the robot's camera, and determine the two-dimensional coordinate information of the target object;
  • a conversion module 30, configured to convert the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in the preset three-dimensional coordinate system; and
  • a control module 40, configured to control the robot, according to the three-dimensional coordinate information, to move to the position corresponding to the action control information and perform the corresponding operation.
  • The acquisition module 10 may include:
  • a speech recognition submodule, used to perform speech recognition on the voice information captured by the robot's microphone and obtain action control instructions.
  • The conversion module 30 may include:
  • a plane detection submodule, used to map the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in the three-dimensional digital map by means of the plane detection function of augmented reality.
  • The apparatus may also include:
  • a simultaneous localization and mapping module, used to construct a three-dimensional digital map using the simultaneous localization and mapping function of augmented reality.
  • The detection module 20 may include:
  • an object detection submodule, used to recognize, with object detection technology, the target object corresponding to the target object information in the two-dimensional image captured by the camera, and determine the two-dimensional coordinate information of the target object.
  • In this embodiment, the detection module 20 detects the target object corresponding to the target object information in the two-dimensional image captured by the robot's camera and determines its two-dimensional coordinate information, recognizing the target object from a two-dimensional image of the real physical world captured by the camera;
  • the conversion module 30 converts the two-dimensional coordinate information in the image into three-dimensional coordinate information in the preset three-dimensional coordinate system, expressing the target object's coordinates in the three-dimensional coordinate system used by the robot, so the actual position of the target object can be determined; the robot can thus adapt to position changes of the target object, correctly perform the corresponding action, and improve the user experience.
  • FIG. 3 is a schematic structural diagram of a robot according to an embodiment of the present invention.
  • The device 1 may include:
  • a memory 11, used to store a computer program; and a processor 12, used to implement the steps of the robot action control method provided by the above embodiments when executing the computer program.
  • The device 1 may include the memory 11, the processor 12, and a bus 13.
  • The memory 11 includes at least one type of readable storage medium, such as flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), magnetic memory, a magnetic disk, or an optical disc.
  • In some embodiments, the memory 11 may be an internal storage unit of the device 1. In other embodiments, the memory 11 may also be an external storage device of the device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the device 1. Further, the memory 11 may include both an internal storage unit of the device 1 and an external storage device.
  • The memory 11 can be used not only to store application software installed on the device 1 and various types of data, such as the code of a program executing the robot action control method, but also to temporarily store data that has been output or will be output.
  • In some embodiments, the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, used to run the program code stored in the memory 11 or process data, for example the code of the program implementing the robot action control method.
  • The bus 13 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like.
  • The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one thick line is shown in FIG. 3, but this does not mean that there is only one bus or one type of bus.
  • The device may also include a network interface 14, which may optionally include a wired interface and/or a wireless interface (such as a Wi-Fi interface or a Bluetooth interface), typically used to establish a communication connection between the device 1 and other electronic devices.
  • The device 1 may further include a user interface 15, which may include a display and an input unit such as a keyboard; the optional user interface 15 may also include standard wired and wireless interfaces.
  • The display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (organic light-emitting diode) touch device, or the like.
  • The display may also be appropriately called a display screen or display unit, used to display the information processed in the device 1 and to present a visual user interface.
  • FIG. 3 shows only the device 1 with components 11-15. Those skilled in the art will understand that the structure shown in FIG. 3 does not limit the device 1, which may include fewer or more components than shown (such as a microphone and a camera), a combination of certain components, or a different arrangement of components.
  • An embodiment of the present invention also discloses a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the robot action control method provided by the above embodiments.
  • The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Electromagnetism (AREA)
  • Robotics (AREA)
  • Manipulator (AREA)

Abstract

A robot action control method and apparatus, a robot, and a computer-readable storage medium. The method includes: acquiring an action control instruction (S101); detecting, in a two-dimensional image captured by a camera of the robot, the target object corresponding to target object information, and determining two-dimensional coordinate information of the target object (S102); converting the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system (S103); and, according to the three-dimensional coordinate information, controlling the robot to move to the position corresponding to action control information and perform the corresponding operation (S104). The target object is recognized from a two-dimensional image of the real physical world captured by the camera, and its two-dimensional coordinate information is determined; by converting the target object's two-dimensional coordinate information into the corresponding three-dimensional coordinate information in the three-dimensional coordinate system used by the robot, the actual position of the target object in that coordinate system can be determined, so that the robot adapts to position changes of the target object and correctly performs the corresponding action.

Description

Robot and action control method and apparatus therefor, and computer-readable storage medium
Technical Field
The present invention relates to the technical field of artificial intelligence, and in particular to a robot action control method and apparatus, a robot, and a computer-readable storage medium.
Background Art
With the development of modern science and technology, robots in the field of artificial intelligence have advanced considerably. At present, a robot can perform action operations on a target object only within a fixed three-dimensional map, which makes it difficult for the robot to adapt to position changes of the target object; the corresponding action operation cannot be performed correctly, and the robot's action control is therefore poor.
Therefore, how to enable a robot to quickly and conveniently identify the position of a target object in the real physical world, so as to adapt to position changes of the target object and correctly perform the corresponding action operation, is a problem that urgently needs to be solved.
Summary of the Invention
The purpose of the present invention is to provide a robot action control method and apparatus, a robot, and a computer-readable storage medium, so that a robot can quickly and conveniently identify the position of a target object in the real physical world, adapt to position changes of the target object, and correctly perform the corresponding action operation.
To solve the above technical problem, the present invention provides a robot action control method, including:
acquiring an action control instruction, wherein the action control instruction includes target object information and action control information;
detecting, in a two-dimensional image captured by a camera of the robot, the target object corresponding to the target object information, and determining two-dimensional coordinate information of the target object;
converting the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system; and
according to the three-dimensional coordinate information, controlling the robot to move to the position corresponding to the action control information and perform the corresponding operation.
Optionally, acquiring the action control instruction includes:
performing speech recognition on voice information captured by a microphone of the robot to obtain the action control instruction.
Optionally, converting the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in the preset three-dimensional coordinate system includes:
using the plane detection function of augmented reality to map the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in a three-dimensional digital map.
Optionally, before using the plane detection function of augmented reality to map the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in the three-dimensional digital map, the method further includes:
constructing the three-dimensional digital map using the simultaneous localization and mapping function of augmented reality.
The present invention further provides a robot action control apparatus, including:
an acquisition module, configured to acquire an action control instruction, wherein the action control instruction includes target object information and action control information;
a detection module, configured to detect, in a two-dimensional image captured by a camera of the robot, the target object corresponding to the target object information, and determine two-dimensional coordinate information of the target object;
a conversion module, configured to convert the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system; and
a control module, configured to control the robot, according to the three-dimensional coordinate information, to move to the position corresponding to the action control information and perform the corresponding operation.
Optionally, the acquisition module includes:
a speech recognition submodule, configured to perform speech recognition on voice information captured by a microphone of the robot to obtain the action control instruction.
Optionally, the conversion module includes:
a plane detection submodule, configured to use the plane detection function of augmented reality to map the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in a three-dimensional digital map.
Optionally, the apparatus further includes:
a simultaneous localization and mapping module, configured to construct the three-dimensional digital map using the simultaneous localization and mapping function of augmented reality.
The present invention further provides a robot, including:
a memory, configured to store a computer program; and
a processor, configured to implement the steps of the robot action control method described above when executing the computer program.
The present invention further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the robot action control method described above.
The robot action control method provided by the present invention includes: acquiring an action control instruction, wherein the action control instruction includes target object information and action control information; detecting, in a two-dimensional image captured by a camera of the robot, the target object corresponding to the target object information, and determining two-dimensional coordinate information of the target object; converting the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system; and, according to the three-dimensional coordinate information, controlling the robot to move to the position corresponding to the action control information and perform the corresponding operation.
It can be seen that the present invention detects the target object corresponding to the target object information in the two-dimensional image captured by the robot's camera and determines its two-dimensional coordinate information, recognizing the target object from a two-dimensional image of the real physical world captured by the camera; by converting the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in the preset three-dimensional coordinate system, the target object's coordinates are expressed in the three-dimensional coordinate system used by the robot, so its actual position in that coordinate system can be determined; the robot can thus adapt to position changes of the target object, correctly perform the corresponding action operation, and improve the user experience. In addition, the present invention also provides a robot action control apparatus, a robot, and a computer-readable storage medium, which have the same beneficial effects.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a robot action control method according to an embodiment of the present invention;
FIG. 2 is a structural block diagram of a robot action control apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a robot according to an embodiment of the present invention.
Detailed Description
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Please refer to FIG. 1, which is a flowchart of a robot action control method according to an embodiment of the present invention. The method may include:
Step 101: Acquire an action control instruction, wherein the action control instruction includes target object information and action control information.
It can be understood that the action control instruction acquired by the robot's processor in this step may be an action control instruction corresponding to a target object. For example, in a home scenario, the processor may generate the corresponding action control instruction from the user's voice or text input "Go to the coffee table and bring the apple over", that is, an action control instruction corresponding to the two target objects, the coffee table and the apple.
Specifically, the concrete content and type of the action control instruction in this step may be set by the designer according to the usage scenario and user needs; for example, it may be implemented in the same or a similar way as robot action control instructions in the prior art, as long as the action control instruction in this embodiment corresponds to a target object, that is, it includes not only action control information but also target object information. This embodiment imposes no limitation on this.
It should be noted that the specific way the processor acquires the action control instruction in this step may be set by the designer according to the practical scenario and user needs. For example, the processor may generate the action control instruction from touch information captured by the robot's touch screen; that is, the user may control the robot's actions by touching its touch screen. The processor may also directly obtain an action control instruction received by the robot's wireless receiving device (such as a Bluetooth or WiFi device); for example, the user may wirelessly send an action control instruction to the robot through a smart terminal such as a mobile phone. The processor may further perform speech recognition on voice information captured by the robot's microphone to obtain the action control instruction; that is, the user may control the robot's actions by voice (sound waves). In other words, while the robot is running, the speech recognition function may be enabled in real time to convert the user's spoken commands into text (i.e., character strings), from which the robot can extract the corresponding action control instruction. This embodiment imposes no limitation on this.
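As an illustration of this step (not part of the patent text), the following is a minimal sketch of how a transcribed command string might be parsed into target object information and action control information; the keyword vocabularies and function name are hypothetical stand-ins for a real language-understanding front end:

```python
# Hypothetical vocabularies; a deployed system would use a trained
# language-understanding model rather than keyword matching.
KNOWN_OBJECTS = ["coffee table", "apple", "cup", "sofa"]
KNOWN_ACTIONS = {"bring": "fetch", "pick up": "grasp", "go to": "navigate"}

def parse_action_instruction(transcript: str) -> dict:
    """Split recognized speech into target object info and action info."""
    text = transcript.lower()
    objects = [obj for obj in KNOWN_OBJECTS if obj in text]
    action = next((op for kw, op in KNOWN_ACTIONS.items() if kw in text), None)
    if not objects or action is None:
        raise ValueError(f"could not parse instruction: {transcript!r}")
    return {"target_objects": objects, "action": action}

# "Go to the coffee table and bring the apple over" ->
# {'target_objects': ['coffee table', 'apple'], 'action': 'fetch'}
print(parse_action_instruction("Go to the coffee table and bring the apple over"))
```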
Step 102: Detect, in the two-dimensional image captured by the robot's camera, the target object corresponding to the target object information, and determine two-dimensional coordinate information of the target object.
It can be understood that the purpose of this step may be for the processor to use a two-dimensional image of the actual physical environment, captured by a camera mounted on the robot, to determine the target object in the two-dimensional image and its two-dimensional coordinate information.
Specifically, the concrete way in which the processor detects the target object corresponding to the target object information in the two-dimensional image captured by the robot's camera and determines its two-dimensional coordinate information may be set by the designer. For example, the processor may use object detection technology to recognize the target object in the two-dimensional image captured by the camera and determine its two-dimensional coordinate information; the processor may also use other detection technologies in the prior art for the same purpose. This embodiment imposes no limitation on this.
Correspondingly, regarding the scope of detection in this step, the processor may recognize only the target object corresponding to the target object information in the two-dimensional image and determine its two-dimensional coordinate information; for example, the processor may use object detection technology to process the image in real time, recognize the target object, attach the corresponding target object information to it, and determine its two-dimensional coordinates. Alternatively, the processor may recognize all objects in the two-dimensional image (which include the target object) and determine the two-dimensional coordinate information of each object; for example, the processor may use object detection technology to process and recognize, in real time, all objects including the target object, attach the corresponding object information to each, and determine each object's two-dimensional coordinates. This embodiment imposes no limitation on this.
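The patent does not prescribe a particular detector. As one concrete possibility, the following sketch uses the open-source YOLOv8 detector (the ultralytics package is an assumption here, not something the patent names) to obtain the pixel coordinates of a detected target object:

```python
from ultralytics import YOLO  # assumed off-the-shelf detector: pip install ultralytics

model = YOLO("yolov8n.pt")  # pretrained COCO weights; "apple" is a COCO class

def detect_target(image_path: str, target_name: str):
    """Return the (u, v) pixel center of the target's bounding box, or None."""
    result = model(image_path)[0]
    for box in result.boxes:
        if result.names[int(box.cls)] == target_name:
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            return (x1 + x2) / 2.0, (y1 + y2) / 2.0
    return None  # target object not visible in this frame

print(detect_target("frame.jpg", "apple"))
```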
Step 103: Convert the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system.
It can be understood that the purpose of this step may be for the processor to convert the target object's two-dimensional coordinate information in the image into three-dimensional coordinate information in the three-dimensional coordinate system used for the robot's actions (i.e., the preset three-dimensional coordinate system); that is, to map the target object's two-dimensional image coordinates to three-dimensional coordinates in the preset coordinate system, so that the robot can determine the target object's position in the preset three-dimensional coordinate system corresponding to the real physical world and carry out subsequent accurate actions and operations.
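The geometry behind this 2D-to-3D conversion can be made concrete with the standard pinhole camera model (these formulas are background knowledge, not spelled out in the patent). With intrinsic matrix $K$ and a world-to-camera pose $(R, \mathbf{t})$ such that $\mathbf{x}_{cam} = R\mathbf{X} + \mathbf{t}$, a pixel $(u, v)$ back-projects to the ray

$$\mathbf{X}(\lambda) = \mathbf{C} + \lambda\, R^{\top} K^{-1} (u, v, 1)^{\top}, \qquad \mathbf{C} = -R^{\top}\mathbf{t}, \quad \lambda > 0,$$

and intersecting that ray with a surface detected in the digital map, for example a plane $\mathbf{n}^{\top}\mathbf{X} = d$, yields the target's 3D coordinates at

$$\lambda^{*} = \frac{d - \mathbf{n}^{\top}\mathbf{C}}{\mathbf{n}^{\top} R^{\top} K^{-1} (u, v, 1)^{\top}}.$$

This is essentially what an AR framework's raycast against a detected plane computes.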
Here, the preset three-dimensional coordinate system in this step may be a coordinate system, set in advance, that corresponds to the real physical world in which the robot acts. Its specific type and the way it is obtained may be set by the designer according to the practical scenario and user needs. For example, the preset three-dimensional coordinate system may be a three-dimensional digital map (i.e., a SLAM digital map) built by the robot's processor using the SLAM (simultaneous localization and mapping) function of AR (augmented reality); in other words, before this step, the method may further include a step of constructing the three-dimensional digital map using the simultaneous localization and mapping function of augmented reality. For example, while detecting the target object and determining its two-dimensional coordinates from the images captured by the robot's camera, the processor may use AR's real-time SLAM mapping function to understand the physical environment in which the robot is located and draw a three-dimensional digital map that corresponds to and records the objective physical space. The preset three-dimensional coordinate system may also be a coordinate system constructed by other mapping techniques. This embodiment imposes no limitation on this.
Specifically, the concrete way in which the processor converts the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in the preset three-dimensional coordinate system may be set by the designer. For example, when the preset three-dimensional coordinate system is a three-dimensional digital map built with AR's SLAM function, the processor may use AR's plane detection (raycast) function to map the two-dimensional coordinate information in the image to three-dimensional coordinate information in the three-dimensional digital map. As long as the processor can convert the target object's two-dimensional coordinate information into the corresponding three-dimensional coordinate information in the preset three-dimensional coordinate system, this embodiment imposes no limitation on this.
Correspondingly, in this step the processor may convert only the target object's two-dimensional coordinate information into three-dimensional coordinate information in the preset coordinate system; for example, the processor may use AR's raycast function to map the two-dimensional coordinates of the target object carrying the target object information into the three-dimensional digital map. Since the digital map corresponds to the real physical world, the robot then effectively knows the target object's position in the real world. The processor may also convert the two-dimensional coordinate information of all objects in the image (including the target object) into their corresponding three-dimensional coordinates in the preset coordinate system; for example, it may use AR's raycast function to map the two-dimensional coordinates of all objects carrying object information into the three-dimensional digital map, so that the robot knows the position of every object in the real physical world. This embodiment imposes no limitation on this.
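As a concrete, non-normative illustration of this raycast mapping, the following NumPy sketch implements the plane-intersection formula above; the intrinsics, pose, and floor-plane values are placeholders, and in practice an AR SDK would supply them from its SLAM state:

```python
import numpy as np

def pixel_to_map_point(u, v, K, R, t, plane_n, plane_d):
    """Map pixel (u, v) to the 3D point where its viewing ray hits a plane.

    K: 3x3 intrinsics; R, t: world-to-camera pose (x_cam = R @ X + t);
    the plane is n . X = d in map coordinates.
    """
    C = -R.T @ t                                        # camera center in map frame
    direction = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    lam = (plane_d - plane_n @ C) / (plane_n @ direction)
    if lam <= 0:
        raise ValueError("plane is behind the camera")
    return C + lam * direction

K = np.array([[525.0, 0.0, 320.0],                      # placeholder intrinsics
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)                           # placeholder pose
floor_n, floor_d = np.array([0.0, 1.0, 0.0]), 0.8       # floor 0.8 m below a y-down camera
print(pixel_to_map_point(320.0, 300.0, K, R, t, floor_n, floor_d))
# -> approximately [0.   0.8  7. ]
```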
Step 104: According to the three-dimensional coordinate information, control the robot to move to the position corresponding to the action control information and perform the corresponding operation.
Specifically, the purpose of this step may be for the processor to use the target object's three-dimensional coordinate information to control the robot to move to the position corresponding to the action control information in the action control instruction and perform the operation corresponding to that information, thereby completing the action control instruction and achieving action control of the robot.
Specifically, this step may further include determining the position corresponding to the action control information according to the target object's three-dimensional coordinate information, and calculating an action path from that position and the robot's own three-dimensional coordinate information, so as to plan the robot's path and ensure that it can reach the position corresponding to the action control information and perform the corresponding operation. For example, in a home scenario, after acquiring the action control instruction corresponding to "Go to the coffee table and bring the apple over", the processor can find the objects carrying the coffee-table and apple information in the three-dimensional digital map, determine their corresponding three-dimensional coordinates, then plan a path and execute the operation corresponding to the instruction, such as picking up the apple.
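The patent leaves the path-planning algorithm open. One common concrete choice is A* search over a 2D occupancy grid derived from the digital map; the grid discretization below is an illustrative assumption, not something the patent specifies:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D occupancy grid (0 = free, 1 = obstacle), 4-connected."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]                  # (f-score, cell) min-heap
    came_from, g = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                      # reconstruct path back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt], came_from[nxt] = ng, cur
                    f = ng + abs(nx - goal[0]) + abs(ny - goal[1])  # Manhattan heuristic
                    heapq.heappush(open_set, (f, nxt))
    return None                              # no path found

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```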
In this embodiment, the embodiment of the present invention detects the target object corresponding to the target object information in the two-dimensional image captured by the robot's camera and determines its two-dimensional coordinate information, recognizing the target object from a two-dimensional image of the real physical world captured by the camera; by converting the two-dimensional coordinate information in the image into three-dimensional coordinate information in the preset three-dimensional coordinate system, the target object's coordinates are expressed in the three-dimensional coordinate system used by the robot, so the actual position of the target object in that coordinate system can be determined; the robot can thus adapt to position changes of the target object, correctly perform the corresponding action operation, and improve the user experience.
Please refer to FIG. 2, which is a structural block diagram of a robot action control apparatus according to an embodiment of the present invention. The apparatus may include:
an acquisition module 10, configured to acquire an action control instruction, wherein the action control instruction includes target object information and action control information;
a detection module 20, configured to detect, in a two-dimensional image captured by a camera of the robot, the target object corresponding to the target object information, and determine two-dimensional coordinate information of the target object;
a conversion module 30, configured to convert the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system; and
a control module 40, configured to control the robot, according to the three-dimensional coordinate information, to move to the position corresponding to the action control information and perform the corresponding operation.
Optionally, the acquisition module 10 may include:
a speech recognition submodule, configured to perform speech recognition on voice information captured by the robot's microphone to obtain the action control instruction.
Optionally, the conversion module 30 may include:
a plane detection submodule, configured to use the plane detection function of augmented reality to map the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in a three-dimensional digital map.
Optionally, the apparatus may further include:
a simultaneous localization and mapping module, configured to construct the three-dimensional digital map using the simultaneous localization and mapping function of augmented reality.
Optionally, the detection module 20 may include:
an object detection submodule, configured to use object detection technology to recognize the target object corresponding to the target object information in the two-dimensional image captured by the camera, and determine the target object's two-dimensional coordinate information.
In this embodiment, the detection module 20 detects the target object corresponding to the target object information in the two-dimensional image captured by the robot's camera and determines its two-dimensional coordinate information, recognizing the target object from a two-dimensional image of the real physical world; the conversion module 30 converts the two-dimensional coordinate information in the image into three-dimensional coordinate information in the preset three-dimensional coordinate system, expressing the target object's coordinates in the three-dimensional coordinate system used by the robot, so that the actual position of the target object can be determined; the robot can thus adapt to position changes of the target object, correctly perform the corresponding action operation, and improve the user experience.
Please refer to FIG. 3, which is a schematic structural diagram of a robot according to an embodiment of the present invention. The device 1 may include:
a memory 11, configured to store a computer program; and a processor 12, configured to implement the steps of the robot action control method provided by the above embodiments when executing the computer program.
The device 1 may include the memory 11, the processor 12, and a bus 13.
The memory 11 includes at least one type of readable storage medium, such as flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), magnetic memory, a magnetic disk, or an optical disc. In some embodiments, the memory 11 may be an internal storage unit of the device 1; in other embodiments, it may be an external storage device of the device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the device 1. The memory 11 may be used not only to store application software installed on the device 1 and various types of data, such as the code of the program executing the robot action control method, but also to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, configured to run the program code stored in the memory 11 or process data, for example the code of the program implementing the robot action control method.
The bus 13 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one thick line is shown in FIG. 3, but this does not mean that there is only one bus or one type of bus.
Further, the device may also include a network interface 14, which may optionally include a wired interface and/or a wireless interface (such as a Wi-Fi interface or a Bluetooth interface), typically used to establish a communication connection between the device 1 and other electronic devices.
Optionally, the device 1 may further include a user interface 15, which may include a display and an input unit such as a keyboard; the optional user interface 15 may also include standard wired and wireless interfaces. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (organic light-emitting diode) touch device, or the like. The display may also be appropriately called a display screen or display unit, used to show the information processed in the device 1 and to present a visual user interface.
FIG. 3 shows only the device 1 with components 11-15. Those skilled in the art will understand that the structure shown in FIG. 3 does not limit the device 1, which may include fewer or more components than shown (such as a microphone and a camera), a combination of certain components, or a different arrangement of components.
In addition, an embodiment of the present invention further discloses a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the robot action control method provided by the above embodiments.
The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts the embodiments have in common, reference may be made between them. Since the apparatus, robot, and computer-readable storage medium disclosed in the embodiments correspond to the method disclosed in the embodiments, their description is relatively brief; for relevant details, refer to the description of the method.
The robot action control method and apparatus, the robot, and the computer-readable storage medium provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. It should be noted that a person of ordinary skill in the art may make several improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (10)

  1. A robot action control method, characterized by comprising:
    acquiring an action control instruction, wherein the action control instruction includes target object information and action control information;
    detecting, in a two-dimensional image captured by a camera of the robot, the target object corresponding to the target object information, and determining two-dimensional coordinate information of the target object;
    converting the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system; and
    according to the three-dimensional coordinate information, controlling the robot to move to the position corresponding to the action control information and perform the corresponding operation.
  2. The robot action control method according to claim 1, wherein acquiring the action control instruction comprises:
    performing speech recognition on voice information captured by a microphone of the robot to obtain the action control instruction.
  3. The robot action control method according to claim 1, wherein converting the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in the preset three-dimensional coordinate system comprises:
    using the plane detection function of augmented reality to map the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in a three-dimensional digital map.
  4. The robot action control method according to claim 3, wherein before using the plane detection function of augmented reality to map the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in the three-dimensional digital map, the method further comprises:
    constructing the three-dimensional digital map using the simultaneous localization and mapping function of augmented reality.
  5. A robot action control apparatus, characterized by comprising:
    an acquisition module, configured to acquire an action control instruction, wherein the action control instruction includes target object information and action control information;
    a detection module, configured to detect, in a two-dimensional image captured by a camera of the robot, the target object corresponding to the target object information, and determine two-dimensional coordinate information of the target object;
    a conversion module, configured to convert the two-dimensional coordinate information in the two-dimensional image into three-dimensional coordinate information in a preset three-dimensional coordinate system; and
    a control module, configured to control the robot, according to the three-dimensional coordinate information, to move to the position corresponding to the action control information and perform the corresponding operation.
  6. The robot action control apparatus according to claim 5, wherein the acquisition module comprises:
    a speech recognition submodule, configured to perform speech recognition on voice information captured by a microphone of the robot to obtain the action control instruction.
  7. The robot action control apparatus according to claim 5, wherein the conversion module comprises:
    a plane detection submodule, configured to use the plane detection function of augmented reality to map the two-dimensional coordinate information in the two-dimensional image to three-dimensional coordinate information in a three-dimensional digital map.
  8. The robot action control apparatus according to claim 7, characterized by further comprising:
    a simultaneous localization and mapping module, configured to construct the three-dimensional digital map using the simultaneous localization and mapping function of augmented reality.
  9. A robot, characterized by comprising:
    a memory, configured to store a computer program; and
    a processor, configured to implement the steps of the robot action control method according to any one of claims 1 to 4 when executing the computer program.
  10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the robot action control method according to any one of claims 1 to 4.
PCT/CN2020/112499 2020-06-29 2020-08-31 Robot and action control method and apparatus therefor, and computer-readable storage medium WO2022000755A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010605094.0A CN111708366B (zh) 2020-06-29 Robot and action control method and apparatus therefor, and computer-readable storage medium
CN202010605094.0 2020-06-29

Publications (1)

Publication Number Publication Date
WO2022000755A1 true WO2022000755A1 (zh) 2022-01-06

Family

ID=72544336

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112499 WO2022000755A1 (zh) Robot and action control method and apparatus therefor, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN111708366B (zh)
WO (1) WO2022000755A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329530B (zh) * 2020-09-30 2023-03-21 北京航空航天大学 Method, device and system for detecting the installation state of a support bracket
CN113696178B (zh) * 2021-07-29 2023-04-07 大箴(杭州)科技有限公司 Control method and system, medium and device for intelligent robot grasping
CN116100537A (zh) * 2021-11-11 2023-05-12 中国科学院深圳先进技术研究院 Robot control method, robot, storage medium and grasping system
CN114425155A (zh) * 2022-01-26 2022-05-03 北京市商汤科技开发有限公司 Data processing method and apparatus, computer device and computer storage medium
CN116594408B (zh) * 2023-07-17 2023-10-13 深圳墨影科技有限公司 Path planning system and method for a mobile collaborative robot


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281023A (zh) * 2008-05-22 2008-10-08 北京中星微电子有限公司 Method and system for acquiring the shape of a three-dimensional target
CN104833360B (zh) * 2014-02-08 2018-09-18 无锡维森智能传感技术有限公司 Method for converting two-dimensional coordinates into three-dimensional coordinates
EP3701344A1 (en) * 2017-10-26 2020-09-02 Aktiebolaget Electrolux Using augmented reality to exchange spatial information with a robotic cleaning device
CN108885459B (zh) * 2018-06-08 2021-02-19 珊口(深圳)智能科技有限公司 Navigation method, navigation system, movement control system and mobile robot
CN108986161B (zh) * 2018-06-19 2020-11-10 亮风台(上海)信息科技有限公司 Three-dimensional space coordinate estimation method, apparatus, terminal and storage medium
KR102051889B1 (ko) * 2018-12-05 2019-12-06 주식회사 증강지능 Method and system for implementing 3D augmented reality based on 2D data in smart glasses

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009266224A (ja) * 2008-04-22 2009-11-12 Honeywell Internatl Inc Method and system for real-time visual odometry
CN107315410A (zh) * 2017-06-16 2017-11-03 江苏科技大学 Automatic obstacle-clearing method for a robot
CN109582147A (zh) * 2018-08-08 2019-04-05 亮风台(上海)信息科技有限公司 Method and user equipment for presenting augmented interactive content
EP3640587A1 (en) * 2018-10-19 2020-04-22 HERE Global B.V. Method and apparatus for iteratively establishing object position
CN109859274A (zh) * 2018-12-24 2019-06-07 深圳市银星智能科技股份有限公司 Robot, object calibration method therefor, and visual teaching interaction method
CN110487262A (zh) * 2019-08-06 2019-11-22 Oppo广东移动通信有限公司 Indoor positioning method and system based on an augmented reality device
CN110631586A (zh) * 2019-09-26 2019-12-31 珠海市一微半导体有限公司 Map construction method based on visual SLAM, navigation system and apparatus
CN110825079A (zh) * 2019-10-15 2020-02-21 珠海格力电器股份有限公司 Map construction method and apparatus

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230199032A1 (en) * 2021-12-22 2023-06-22 Avaya Management L.P. Endpoint control over a text channel of a real-time communication session
US11838331B2 (en) * 2021-12-22 2023-12-05 Avaya Management L.P. Endpoint control over a text channel of a real-time communication session
CN114638894A (zh) * 2022-03-18 2022-06-17 纯米科技(上海)股份有限公司 Positioning method and system for robot walking, electronic device and storage medium
CN114648615A (zh) * 2022-05-24 2022-06-21 四川中绳矩阵技术发展有限公司 Control method, apparatus, device and storage medium for interactive reproduction of a target object
CN114955455A (zh) * 2022-06-14 2022-08-30 乐聚(深圳)机器人技术有限公司 Robot control method, server, robot and storage medium

Also Published As

Publication number Publication date
CN111708366B (zh) 2023-06-06
CN111708366A (zh) 2020-09-25

Similar Documents

Publication Publication Date Title
WO2022000755A1 (zh) Robot and action control method and apparatus therefor, and computer-readable storage medium
KR102606785B1 (ko) 동시 로컬화 및 매핑을 위한 시스템 및 방법
WO2019184889A1 (zh) 增强现实模型的调整方法、装置、存储介质和电子设备
US9939961B1 (en) Virtualization of tangible interface objects
KR102078427B1 (ko) 사운드 및 기하학적 분석을 갖는 증강 현실
US11625841B2 (en) Localization and tracking method and platform, head-mounted display system, and computer-readable storage medium
US9495802B2 (en) Position identification method and system
CN109947886B (zh) 图像处理方法、装置、电子设备及存储介质
JP2020047276A (ja) センサーキャリブレーション方法と装置、コンピュータ機器、媒体及び車両
WO2019100932A1 (zh) 一种运动控制方法及其设备、存储介质、终端
US11893702B2 (en) Virtual object processing method and apparatus, and storage medium and electronic device
JP2018500645A (ja) オブジェクトをトラッキングするためのシステムおよび方法
WO2016033787A1 (zh) 一种截屏方法及装置
CN109871800A (zh) 一种人体姿态估计方法、装置和存储介质
CN109992111B (zh) 增强现实扩展方法和电子设备
CN114494487B (zh) 基于全景图语义拼接的户型图生成方法、设备及存储介质
US20150235425A1 (en) Terminal device, information processing device, and display control method
WO2018083910A1 (ja) 情報処理装置、情報処理方法、及び記録媒体
WO2021129345A1 (zh) 场景地图建立方法、设备及存储介质
KR102498597B1 (ko) 전자 장치 및 이를 이용하여 관심 영역을 설정하여 오브젝트를 식별하는 방법
CN106598422B (zh) 混合操控方法及操控系统和电子设备
CN106569716B (zh) 单手操控方法及操控系统
US20220319120A1 (en) Determining 6d pose estimates for augmented reality (ar) sessions
US20210375054A1 (en) Tracking an augmented reality device
CN114223021A (zh) 电子装置及其处理手写输入的方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20943369

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20943369

Country of ref document: EP

Kind code of ref document: A1