WO2022222490A1 - Robot control method and robot - Google Patents

Robot control method and robot Download PDF

Info

Publication number
WO2022222490A1
WO2022222490A1 (PCT/CN2021/137304)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
semantic map
information
full
map
Prior art date
Application number
PCT/CN2021/137304
Other languages
English (en)
French (fr)
Inventor
程俊
宋呈群
曾驳
吴福祥
Original Assignee
中国科学院深圳先进技术研究院
Priority date
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院
Publication of WO2022222490A1


Classifications

    • G06T 7/75: Image analysis; determining position or orientation of objects or cameras using feature-based methods involving models
    • G06F 16/29: Information retrieval; geographical information databases
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06F 18/253: Pattern recognition; fusion techniques of extracted features
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/11: Image analysis; segmentation; region-based segmentation
    • G06T 2207/10028: Image acquisition modality; range image, depth image, 3D point clouds
    • G06T 2207/30241: Subject of image; trajectory
    • G06T 2207/30244: Subject of image; camera pose

Definitions

  • The present application belongs to the technical field of artificial intelligence, and in particular relates to a robot control method and a robot.
  • Autonomous mobile robots are robots that can move purposefully and autonomously without human control. They are increasingly used in public places, workplaces and home services; for example, they can be used for security patrols, goods transport, sweeping and cleaning. However, current robots have a limited understanding of their surroundings and struggle to move adaptively in complex environments, which can easily cause a robot to tip over and damage itself or nearby objects.
  • In view of this, the embodiments of the present application provide a control method, a device and a robot to solve the problem that current robots have a limited understanding of their surroundings and struggle to move adaptively in complex environments, which can easily cause a robot to tip over and damage itself or nearby objects.
  • In a first aspect, an embodiment of the present application provides a control method applied to a robot, including: collecting multimodal information; determining a full semantic map according to the multimodal information; performing semantic understanding and trajectory prediction according to the full semantic map to obtain a predicted trajectory; and performing dynamic path planning and obstacle avoidance control according to the predicted trajectory, so as to control the movement of the robot.
  • In one implementation of the first aspect, determining the full semantic map according to the multimodal information includes: detecting whether a full semantic map exists; and, if the full semantic map does not exist, constructing the full semantic map according to the acquired multimodal information.
  • In one implementation of the first aspect, constructing the full semantic map according to the acquired multimodal information includes: constructing a local scene object model according to the multimodal information; and obtaining a global semantic map according to the local scene object model.
  • In one implementation of the first aspect, the multimodal information includes an RGB image, pose information, a depth image and laser point cloud information, and constructing the local scene object model according to the multimodal information includes: extracting image features from the RGB image; obtaining a sparse feature point cloud and camera pose information from the image features to obtain a primary model; performing weighted fusion of the pose information and the camera pose information based on the primary model; matching the depth image with the sparse feature point cloud to obtain a secondary model; and fusing the laser point cloud with the secondary model to obtain the local scene object model.
  • In one implementation of the first aspect, obtaining the global semantic map according to the local scene object model includes: segmenting the object instances in the RGB image and performing semantic recognition; projecting the semantic information of the instance segmentation into the local scene object model using the fused camera pose information; and stitching the local scene object models carrying semantic information to obtain the global semantic map.
  • In one implementation of the first aspect, after detecting whether a full semantic map exists, the method further includes: if the full semantic map exists, reading the full semantic map.
  • In one implementation of the first aspect, the control method further includes: determining, based on the full semantic map and the multimodal information, whether the current scene has changed; and, if the current scene has changed, updating the full semantic map according to the acquired multimodal information.
  • In a second aspect, an embodiment of the present application provides a robot, including:
  • a multimodal information collection module, configured to collect multimodal information;
  • a map determination module, configured to determine a full semantic map according to the multimodal information;
  • a scene understanding module, configured to perform semantic understanding and trajectory prediction according to the full semantic map to obtain a predicted trajectory; and
  • an adaptive module, configured to perform dynamic path planning and obstacle avoidance control according to the predicted trajectory, so as to control the movement of the robot.
  • In a first implementation of the second aspect, the map determination module may include a multimodal fusion three-dimensional modeling module and a map construction module.
  • The multimodal fusion three-dimensional modeling module is used to fuse the multimodal information, stitch the point clouds using the fused information, and fill and optimize holes in the stitched point cloud model to obtain a local scene object model.
  • The map construction module is used for object semantic recognition, dynamic object recognition, and semantic map construction and updating.
  • In a third aspect, an embodiment of the present application provides a robot, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method described in the first aspect or any optional manner of the first aspect is implemented.
  • In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described in the first aspect or any optional manner of the first aspect.
  • In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a robot, causes the robot to execute the method described in the first aspect or any optional manner of the first aspect.
  • The robot control method and robot provided by the embodiments of the present application determine a full semantic map from multimodal information, perform semantic understanding and trajectory prediction on the full semantic map, and then control the robot to perform dynamic path planning and obstacle avoidance control according to the predicted trajectory. This enables the robot to effectively avoid obstacles even in complex scenes, improves its adaptive ability in such scenes, and prevents it from tipping over because it cannot avoid an obstacle, which would damage the robot or nearby items.
  • FIG. 1 is a schematic structural diagram of a robot provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a robot control method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for controlling a robot according to another embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a robot provided by another embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • References in the specification of this application to "one embodiment" or "some embodiments" and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the application.
  • The phrases "in one embodiment", "in some embodiments", "in other embodiments", "in some other embodiments" and the like appearing in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized.
  • The terms "including", "comprising", "having" and their variants mean "including but not limited to", unless otherwise specifically emphasized.
  • Existing control methods for autonomous mobile robots are usually based on two-dimensional (QR) code vision combined with real-time localization and mapping of the robot to achieve guided control. Specifically, a plurality of two-dimensional codes are used to mark and guide target positions on the constructed map, and the robot is controlled to move to a two-dimensional code area to complete coarse positioning; the robot then recognizes the two-dimensional code and adjusts its speed and heading according to the pose of the recognized code and its spatial position relative to the camera, so that the robot moves to the target position.
  • Another existing control method takes the robot's current position as the center, updates the passable area by scanning, and updates a set of critical points according to the boundary of the updated passable area. When the critical point set is not empty, the critical points are clustered into several clusters; one cluster is selected and its center coordinates are taken as the target point; the robot automatically navigates to the target point, takes it as its new current position, performs a new round of laser scanning centered on that position, and updates the passable area. This is repeated until the critical point set is empty, at which point the map information of the target area is complete.
  • To address this, an embodiment of the present application provides a robot control method that determines a global semantic map from collected multimodal information and then performs dynamic path planning based on that map, so that the robot can adaptively avoid obstacles and navigate autonomously in complex environments. This effectively addresses the limited environmental understanding of current robot control methods and their difficulty in moving adaptively in complex surroundings, and prevents the robot from tipping over and damaging itself or nearby objects.
  • FIG. 1 shows a schematic structural diagram of a robot provided by an embodiment of the present application.
  • the robot may include a multimodal information acquisition module 110 , a multimodal fusion three-dimensional modeling module 120 , a map construction module 130 , a scene understanding module 140 and an autonomous environment adaptation module 150 .
  • The multimodal information collection module 110 is connected to the map construction module 130 through the multimodal fusion three-dimensional modeling module 120, and the map construction module 130 is connected to the autonomous environment adaptation module 150 through the scene understanding module 140.
  • In a specific application, the multimodal information collection module 110 is used to collect multimodal information, which may include, but is not limited to, RGB images, laser point clouds, depth images and pose information. Correspondingly, the multimodal information collection module 110 may include a camera 111, a lidar 112, a depth camera 113 and an inertial measurement unit (IMU) 114.
  • The camera 111 is used to collect RGB images. RGB refers to the colors of the three channels red, green and blue; an RGB image is an image in which each pixel is represented by different proportions of R, G and B.
  • The lidar 112 is used to collect laser point clouds. A laser point cloud is a massive set of points describing the surface features of objects, collected by the lidar.
  • The depth camera 113 is used to collect depth images. A depth image, also known as a range image, is an image in which the distance (depth) from the image collector to each point (each object) in the scene is used as the pixel value.
  • The inertial measurement unit 114 is used to collect pose information. The pose information may be three-axis attitude angles (or angular rates) and acceleration information. The inertial measurement unit 114 may contain three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the acceleration of the object along the three independent axes of the carrier coordinate system, while the gyroscopes detect the angular velocity of the carrier relative to the navigation coordinate system. The angular velocity and acceleration of the object in three-dimensional space are measured and used to solve for the object's attitude.
  • The multimodal fusion three-dimensional modeling module 120 is used to fuse the multimodal information, stitch the point clouds using the fused information, and fill and optimize holes in the stitched point cloud model.
  • Specifically, the boundary of the triangular patches around a closed hole can first be determined from the triangular mesh to detect the hole; new triangular patches are then quickly generated over the hole polygon to form an initial mesh; finally, a least-squares mesh is fused with a radial-basis-function implicit surface, and the second-order derivative is minimized to radially minimize the surface curvature while keeping the same trend as the original mesh curvature, achieving smooth fusion and repairing the holes in the laser point cloud.
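The following is a much-simplified sketch of the boundary-detection and capping part of that idea: it finds the open boundary of a triangle mesh and fills it with a fan of triangles around the boundary centroid. The least-squares / radial-basis-function smoothing stage described above is omitted, and the helper assumes a single simple hole.

```python
import numpy as np
from collections import Counter

def find_boundary_loop(faces):
    """Edges used by exactly one triangle form the hole boundary."""
    edges = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1
    boundary = [e for e, n in edges.items() if n == 1]
    # Chain the boundary edges into an ordered loop (assumes one simple hole).
    adj = {}
    for a, b in boundary:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    start = boundary[0][0]
    loop, prev, cur = [start], None, start
    while True:
        nxt = [v for v in adj[cur] if v != prev][0]
        if nxt == start:
            return loop
        loop.append(nxt)
        prev, cur = cur, nxt

def fill_hole(vertices, faces):
    """Cap the detected hole with a fan of triangles around its centroid."""
    loop = find_boundary_loop(faces)
    centroid = vertices[loop].mean(axis=0)
    vertices = np.vstack([vertices, centroid])
    c = len(vertices) - 1
    new_faces = [(loop[i], loop[(i + 1) % len(loop)], c) for i in range(len(loop))]
    return vertices, np.vstack([faces, new_faces])
```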
  • The map construction module 130 is used for object semantic recognition, dynamic object recognition, and semantic map construction and updating.
  • The scene understanding module 140 is used for terrain state recognition and intelligent recognition of passable areas, as well as semantic understanding and robot trajectory prediction in complex environments.
  • The autonomous environment adaptation module 150 is used to realize dynamic path planning and obstacle avoidance, and to control the robot to move autonomously according to the planned path and the obstacle avoidance function.
  • FIG. 2 shows a schematic flowchart of a control method provided by an embodiment of the present application. Exemplarily, the above control method is described by taking the robot shown in FIG. 1 as an example as follows:
  • As shown in FIG. 2, the control method may include S11 to S14, which are described in detail as follows:
  • S11: Collect multimodal information.
  • In this embodiment of the present application, the multimodal information may be collected by the multimodal information collection module of the robot.
  • The multimodal information may include, but is not limited to, RGB images, laser point clouds, depth images and pose information. Specifically, the information can be collected by the camera, lidar, depth camera and inertial measurement unit mounted on the robot, respectively.
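One possible way to bundle such a sample is sketched below; the field names, shapes and sensor-driver interfaces are illustrative assumptions rather than structures defined in the application.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MultimodalFrame:
    rgb: np.ndarray           # H x W x 3 image from the camera
    depth: np.ndarray         # H x W depth image from the depth camera (metres)
    lidar_points: np.ndarray  # N x 3 laser point cloud
    pose: np.ndarray          # 4 x 4 body pose from the IMU / odometry
    timestamp: float          # acquisition time in seconds

def collect_frame(camera, lidar, depth_camera, imu, clock) -> MultimodalFrame:
    """Grab one synchronized sample from each sensor driver (assumed interfaces)."""
    return MultimodalFrame(camera.read(), depth_camera.read(),
                           lidar.scan(), imu.pose(), clock())
```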
  • S12: Determine a full semantic map according to the multimodal information.
  • In this embodiment of the present application, a full semantic map of the current scene can be obtained based on the collected multimodal information. Specifically, if it is detected that a full semantic map exists, the map can be read directly; if it is detected that no full semantic map exists, a full semantic map of the current scene can be constructed from the multimodal information.
  • Referring to FIG. 3, in one embodiment of the present application, determining the full semantic map according to the multimodal information may include the following steps: S21: detect whether a full semantic map exists; S22: if the full semantic map exists, read it; S23: if it does not exist, construct it from the acquired multimodal information.
  • In a specific application, the location where the full semantic map is stored can be determined in advance, and it is then checked whether a full semantic map of the current scene is stored at that location. If it is, a full semantic map exists; otherwise, no full semantic map exists.
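A minimal sketch of this load-or-build logic follows; the storage path, pickle format and `build_map` callable are assumptions for illustration.

```python
import os
import pickle

MAP_PATH = "maps/full_semantic_map.pkl"   # assumed storage location

def determine_full_semantic_map(multimodal_frames, build_map):
    if os.path.exists(MAP_PATH):           # S21/S22: a stored map exists, read it
        with open(MAP_PATH, "rb") as f:
            return pickle.load(f)
    semantic_map = build_map(multimodal_frames)   # S23: build from multimodal info
    os.makedirs(os.path.dirname(MAP_PATH), exist_ok=True)
    with open(MAP_PATH, "wb") as f:
        pickle.dump(semantic_map, f)
    return semantic_map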
  • The above S23 may specifically include the following steps: constructing a local scene object model according to the multimodal information; and obtaining a global semantic map according to the local scene object model.
  • In a specific application, the local scene objects can be modeled by the multimodal fusion three-dimensional modeling module, and the global semantic map can then be constructed by the map construction module.
  • Specifically, constructing the local scene object model according to the multimodal information may include the following steps: extracting image features from the RGB image; obtaining a sparse feature point cloud and camera pose information from the image features to obtain a primary model; performing weighted fusion of the pose information and the camera pose information based on the primary model; matching the depth image with the sparse feature point cloud to obtain a secondary model; and fusing the laser point cloud with the secondary model to obtain the local scene object model.
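The sketch below loosely mirrors those five stages. The feature-based recovery of the sparse cloud and camera pose is left as an upstream placeholder, and the camera intrinsics and the 0.7/0.3 fusion weights are illustrative assumptions.

```python
import numpy as np
import cv2

def extract_features(rgb):
    """Step 1: ORB keypoints/descriptors from the RGB image."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    orb = cv2.ORB_create(2000)
    return orb.detectAndCompute(gray, None)

def fuse_poses(T_imu, T_cam, w_imu=0.7, w_cam=0.3):
    """Step 3: weighted fusion of two 4x4 poses; rotation re-orthonormalised via SVD."""
    T = w_imu * T_imu + w_cam * T_cam
    U, _, Vt = np.linalg.svd(T[:3, :3])
    T[:3, :3] = U @ Vt
    return T

def depth_to_points(depth, fx, fy, cx, cy):
    """Step 4 helper: back-project a depth image into a camera-frame point cloud."""
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def build_local_model(rgb, depth, lidar_points, T_imu, T_cam, K):
    extract_features(rgb)                         # step 1: image features
    # Step 2 (sparse cloud + camera pose from those features) is assumed to have
    # produced T_cam upstream of this function.
    T = fuse_poses(T_imu, T_cam)                  # step 3: weighted pose fusion
    dense = depth_to_points(depth, K[0, 0], K[1, 1], K[0, 2], K[1, 2])  # step 4
    dense_w = (T[:3, :3] @ dense.T).T + T[:3, 3]  # express in the world frame
    return np.vstack([dense_w, lidar_points])     # step 5: fuse with the lidar cloud
```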
  • Specifically, obtaining the global semantic map according to the local scene object model may include the following steps: segmenting the object instances in the RGB image and performing semantic recognition; projecting the semantic information of the instance segmentation into the local scene object model using the fused camera pose information; and stitching the local scene object models carrying semantic information to obtain the global semantic map.
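A compact sketch of the labelling and stitching steps follows; the instance-segmentation network itself is out of scope, so `label_map` is assumed to be a per-pixel array of semantic IDs already produced from the RGB image.

```python
import numpy as np

def project_semantics(depth, label_map, T, fx, fy, cx, cy):
    """Attach the per-pixel semantic ID to each back-projected 3D point."""
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
    pts = (T[:3, :3] @ pts.T).T + T[:3, 3]        # fused camera pose -> world frame
    labels = label_map[v, u]
    return np.hstack([pts, labels[:, None]])      # N x 4: x, y, z, semantic id

def stitch_global_map(labelled_local_models):
    """Concatenate labelled local models that are already in the world frame."""
    return np.vstack(labelled_local_models)
```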
  • S13: Perform semantic understanding and trajectory prediction according to the full semantic map to obtain a predicted trajectory. In a specific application, once the full semantic map has been determined, semantic understanding and trajectory prediction can be performed by the scene understanding module based on the determined map.
  • Specifically, based on the lidar and the video stream, high-frame-rate lidar and video data and low-frame-rate lidar and video data are selected as a two-channel input, and Minkowski convolutions are used to extract their differential features separately. A binary attention mechanism is used for feature fusion and enhancement, a Single Shot MultiBox Detector (SSD) method is then used to obtain the target semantic detection results, and finally a Long Short-Term Memory (LSTM) network is used to obtain the refined semantic information and the trajectory prediction result. The Minkowski convolution, binary attention mechanism, SSD and LSTM methods are all neural network models commonly used in the field and are not described in detail here.
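The PyTorch module below is only a stand-in for that pipeline: the Minkowski sparse convolutions and the SSD detection head are replaced by plain MLP encoders, and the layer sizes, gating scheme and crude repeat-decoder are all assumptions. It shows the dual-stream fusion plus LSTM trajectory-regression shape of the idea, nothing more.

```python
import torch
import torch.nn as nn

class TwoStreamTrajectoryPredictor(nn.Module):
    def __init__(self, in_dim=256, feat_dim=128, horizon=10):
        super().__init__()
        self.high_rate_enc = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.low_rate_enc = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Two-way ("binary") attention gate: how much to trust each stream per step.
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, 2)        # one (x, y) waypoint per step
        self.horizon = horizon

    def forward(self, high_rate_seq, low_rate_seq):
        # Both inputs: (batch, time, in_dim); the low-rate stream is assumed to be
        # temporally upsampled to the high-rate timeline before it gets here.
        h = self.high_rate_enc(high_rate_seq)
        l = self.low_rate_enc(low_rate_seq)
        w = self.gate(torch.cat([h, l], dim=-1))   # (batch, time, 2)
        fused = w[..., :1] * h + w[..., 1:] * l    # weighted feature fusion
        _, (hidden, _) = self.lstm(fused)
        # Crude decoder: repeat one predicted waypoint over the whole horizon.
        return self.head(hidden[-1]).unsqueeze(1).repeat(1, self.horizon, 1)

# Dummy usage: 20 time steps of pre-pooled 256-d features per stream.
model = TwoStreamTrajectoryPredictor()
pred = model(torch.randn(1, 20, 256), torch.randn(1, 20, 256))   # -> (1, 10, 2)
```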
  • Referring to FIG. 3, in one embodiment of the present application, before semantic understanding and trajectory prediction are performed according to the full semantic map to obtain the predicted trajectory, the method further includes the following steps.
  • S24: Determine whether the current scene has changed based on the full semantic map and the multimodal information. The information obtained in real time by the camera, lidar and depth camera is compared with the stored global semantic map: if the real-time information is inconsistent with the stored map, the current scene has changed; otherwise it has not.
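One simple way to implement such a comparison is sketched below: points observed now are matched against the stored map with a KD-tree, and the scene is flagged as changed when too many of them have no nearby counterpart. The 0.10 m and 5% thresholds are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def scene_changed(stored_map_xyz, live_points_xyz,
                  dist_thresh=0.10, change_ratio=0.05):
    """True if too large a fraction of live points is far from any mapped point."""
    tree = cKDTree(stored_map_xyz)
    dists, _ = tree.query(live_points_xyz)
    unexplained = np.mean(dists > dist_thresh)   # fraction of unmatched points
    return unexplained > change_ratio
```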
  • S25: If the current scene has changed, update the full semantic map according to the acquired multimodal information. In that case, the newly added local scene objects are modeled by the multimodal fusion three-dimensional modeling module, and the global semantic map is then updated by the map construction module.
  • S14: Perform dynamic path planning and obstacle avoidance control according to the predicted trajectory, so as to control the movement of the robot.
  • In a specific application, the autonomous environment adaptation module performs dynamic path planning and obstacle avoidance based on the obtained predicted trajectory, and the robot is controlled to move autonomously according to the planned path and the obstacle avoidance function.
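As a generic illustration of the planning stage (the application does not name a particular planner), the snippet below runs A* over a 2D occupancy grid; predicted trajectories of other agents could be rasterised into the grid as temporarily occupied cells.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """grid[r][c] == 1 means blocked; start and goal are (row, col) tuples."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = itertools.count()
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                       # already expanded with an equal/better cost
        came_from[cur] = parent
        if cur == goal:                    # walk the parent chain back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None                            # no collision-free path exists

# Example: plan around a small obstacle.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))
```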
  • As can be seen from the above, the robot control method provided by the embodiments of the present application determines a full semantic map from multimodal information, performs semantic understanding and trajectory prediction on that map, and then controls the robot to perform dynamic path planning and obstacle avoidance control according to the predicted trajectory. This enables the robot to effectively avoid obstacles in complex scenes, improves its adaptive ability in such scenes, and prevents it from tipping over because it cannot avoid an obstacle, which would damage the robot or nearby items.
  • FIG. 4 is a schematic structural diagram of a robot provided by another embodiment of the present application.
  • The robot 4 provided in this embodiment includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40, for example a cooperative control program. When the processor 40 executes the computer program 42, the steps in each of the above method embodiments are implemented, for example S11 to S14 shown in FIG. 2. Alternatively, when the processor 40 executes the computer program 42, the functions of the modules/units in each of the above robot embodiments are implemented.
  • Exemplarily, the computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 42 in the robot 4. For example, the computer program 42 may be divided into a first acquisition unit and a first processing unit; for the specific functions of each unit, reference may be made to the relevant descriptions of the corresponding embodiment above, which are not repeated here.
  • The robot may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will understand that FIG. 4 is only an example of the robot 4 and does not constitute a limitation on it; the robot may include more or fewer components than shown, combine certain components, or use different components. For example, the robot may also include input and output devices, network access devices, a bus, and the like.
  • The processor 40 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 41 may be an internal storage unit of the robot 4 , such as a hard disk or a memory of the robot 4 .
  • the memory 41 may also be an external storage device of the robot 4, such as a plug-in hard disk equipped on the robot 4, a smart memory card (Smart Media Card, SMC), Secure Digital (SD) card, Flash memory card (Flash Card), etc. Further, the memory 41 may also include both an internal storage unit of the robot 4 and an external storage device.
  • the memory 41 is used to store the computer program and other programs and data required by the robot.
  • the memory 41 can also be used to temporarily store data that has been output or will be output.
  • Embodiments of the present application also provide a computer-readable storage medium. Please refer to FIG. 5.
  • FIG. 5 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present application. As shown in FIG. 5, a computer program 51 is stored in the computer-readable storage medium 5, and when the computer program 51 is executed by a processor, the above robot control method can be implemented.
  • An embodiment of the present application provides a computer program product which, when run on a robot, causes the robot to implement the robot control method described above.

Abstract

The present application is applicable to the technical field of artificial intelligence and provides a robot control method and a robot, including: collecting multimodal information; determining a full semantic map according to the multimodal information; performing semantic understanding and trajectory prediction according to the full semantic map to obtain a predicted trajectory; and performing dynamic path planning and obstacle avoidance control according to the predicted trajectory, so as to control the movement of the robot. By determining a full semantic map from multimodal information, performing semantic understanding and trajectory prediction on the map, and then controlling the robot's dynamic path planning and obstacle avoidance according to the predicted trajectory, the robot can effectively avoid obstacles even in complex scenes, its adaptive ability in complex scenes is improved, and it is prevented from tipping over because it cannot avoid an obstacle, which would damage the robot or other items.

Description

Robot control method and robot
Technical Field
The present application belongs to the technical field of artificial intelligence, and in particular relates to a robot control method and a robot.
Background Art
An autonomous mobile robot is a robot that can move purposefully and autonomously without human operation. Autonomous mobile robots are increasingly used in public places, workplaces and home services; for example, they can be used for security patrols, goods transport, sweeping and cleaning. However, current robots have a limited understanding of their surroundings and struggle to move adaptively in complex environments, which can easily cause a robot to tip over and damage itself or nearby objects.
Technical Problem
In view of this, the embodiments of the present application provide a control method, a device and a robot to solve the problem that current robots have a limited understanding of their surroundings and struggle to move adaptively in complex environments, which can easily cause a robot to tip over and damage itself or nearby objects.
Technical Solution
In a first aspect, an embodiment of the present application provides a control method applied to a robot, including:
collecting multimodal information;
determining a full semantic map according to the multimodal information;
performing semantic understanding and trajectory prediction according to the full semantic map to obtain a predicted trajectory; and
performing dynamic path planning and obstacle avoidance control according to the predicted trajectory, so as to control the movement of the robot.
In one implementation of the first aspect, determining the full semantic map according to the multimodal information includes:
detecting whether a full semantic map exists; and
if the full semantic map does not exist, constructing the full semantic map according to the acquired multimodal information.
In one implementation of the first aspect, constructing the full semantic map according to the acquired multimodal information if it does not exist includes:
constructing a local scene object model according to the multimodal information; and
obtaining a global semantic map according to the local scene object model.
In one implementation of the first aspect, the multimodal information includes an RGB image, pose information, a depth image and laser point cloud information, and constructing the local scene object model according to the multimodal information includes:
extracting image features from the RGB image;
obtaining a sparse feature point cloud and camera pose information from the image features to obtain a primary model;
performing weighted fusion of the pose information and the camera pose information based on the primary model;
matching the depth image with the sparse feature point cloud to obtain a secondary model; and
fusing the laser point cloud with the secondary model to obtain the local scene object model.
In one implementation of the first aspect, obtaining the global semantic map according to the local scene object model includes:
segmenting the object instances in the RGB image and performing semantic recognition;
projecting the semantic information of the instance segmentation into the local scene object model using the fused camera pose information; and
stitching the local scene object models carrying semantic information to obtain the global semantic map.
In one implementation of the first aspect, after detecting whether a full semantic map exists, the method further includes:
if the full semantic map exists, reading the full semantic map.
In one implementation of the first aspect, the control method further includes:
determining, based on the full semantic map and the multimodal information, whether the current scene has changed; and
if the current scene has changed, updating the full semantic map according to the acquired multimodal information.
In a second aspect, an embodiment of the present application provides a robot, including:
a multimodal information collection module, configured to collect multimodal information;
a map determination module, configured to determine a full semantic map according to the multimodal information;
a scene understanding module, configured to perform semantic understanding and trajectory prediction according to the full semantic map to obtain a predicted trajectory; and
an adaptive module, configured to perform dynamic path planning and obstacle avoidance control according to the predicted trajectory, so as to control the movement of the robot.
In a first implementation of the second aspect, the map determination module may include a multimodal fusion three-dimensional modeling module and a map construction module.
The multimodal fusion three-dimensional modeling module is used to fuse the multimodal information, stitch the point clouds using the fused information, and fill and optimize holes in the stitched point cloud model to obtain a local scene object model.
The map construction module is used for object semantic recognition, dynamic object recognition, and semantic map construction and updating.
In a third aspect, an embodiment of the present application provides a robot, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method described in the first aspect or any optional manner of the first aspect is implemented.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described in the first aspect or any optional manner of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a robot, causes the robot to execute the method described in the first aspect or any optional manner of the first aspect.
Beneficial Effects
Implementing the robot control method, robot, computer-readable storage medium and computer program product provided by the embodiments of the present application has the following beneficial effects:
The robot control method and robot provided by the embodiments of the present application determine a full semantic map from multimodal information, perform semantic understanding and trajectory prediction on the full semantic map, and then control the robot to perform dynamic path planning and obstacle avoidance control according to the predicted trajectory. This enables the robot to effectively avoid obstacles even in complex scenes, improves its adaptive ability in such scenes, and prevents it from tipping over because it cannot avoid an obstacle, which would damage the robot or other items.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
FIG. 1 is a schematic structural diagram of a robot provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a robot control method provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a robot control method provided by another embodiment of the present application;
FIG. 4 is a schematic structural diagram of a robot provided by another embodiment of the present application;
FIG. 5 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present application.
Detailed Description of Embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary detail does not obscure the description of the present application.
It should be understood that the term "and/or" used in the specification and the appended claims of the present application refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations. In addition, in the description of the specification and the appended claims, the terms "first", "second", "third" and the like are used only to distinguish the descriptions and are not to be understood as indicating or implying relative importance.
It should also be understood that references in this specification to "one embodiment" or "some embodiments" and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, the phrases "in one embodiment", "in some embodiments", "in other embodiments", "in some other embodiments" and the like appearing in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "including", "comprising", "having" and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
Most autonomous mobile robots are equipped with various sensors and can move autonomously and purposefully (towards a target position) without human operation.
Existing control methods for autonomous mobile robots are usually based on two-dimensional (QR) code vision combined with real-time localization and mapping of the robot to achieve guided control. Specifically, a plurality of two-dimensional codes are used to mark and guide target positions on the constructed map, and the robot is controlled to move to a two-dimensional code area to complete coarse positioning; the robot then recognizes the two-dimensional code and adjusts its speed and heading according to the pose of the recognized code and its spatial position relative to the camera, so that the robot moves to the target position.
However, this approach requires additional two-dimensional code tags to cooperate with the camera for visual guidance, which is inconvenient to use and difficult to deploy.
To reduce the deployment difficulty, another existing control method takes the robot's current position as the center, updates the passable area by scanning, and updates a set of critical points according to the boundary of the updated passable area. When the critical point set is not empty, the critical points are clustered into several clusters; one cluster is selected and its center coordinates are taken as the target point; the robot automatically navigates to the target point, takes it as its new current position, performs a new round of laser scanning centered on that position, and updates the passable area. This is repeated until the critical point set is empty, at which point the map information of the target area is complete.
However, this approach has poor adaptability, a complex procedure, and easily accumulates errors, leading to inaccurate long-term navigation.
As can be seen from the above, as the environments in which robots are used become more and more complex, a robot disturbed by unexpected external influences cannot react appropriately to sudden situations, which can easily cause it to tip over, damaging the robot or nearby objects, and possibly even injuring people when it falls, posing a certain safety hazard.
To address these drawbacks, an embodiment of the present application provides a robot control method that determines a global semantic map from collected multimodal information and then performs dynamic path planning based on that map, so that the robot can adaptively avoid obstacles and navigate autonomously in complex environments. This effectively addresses the limited environmental understanding of current robot control methods and their difficulty in moving adaptively in complex surroundings, and prevents the robot from tipping over and damaging itself or nearby objects.
The robot control method and robot provided by the embodiments of the present application are described in detail below.
Referring to FIG. 1, FIG. 1 shows a schematic structural diagram of a robot provided by an embodiment of the present application. As shown in FIG. 1, the robot may include a multimodal information collection module 110, a multimodal fusion three-dimensional modeling module 120, a map construction module 130, a scene understanding module 140 and an autonomous environment adaptation module 150. The multimodal information collection module 110 is connected to the map construction module 130 through the multimodal fusion three-dimensional modeling module 120, and the map construction module 130 is connected to the autonomous environment adaptation module 150 through the scene understanding module 140.
In a specific application, the multimodal information collection module 110 is used to collect multimodal information, which may include, but is not limited to, RGB images, laser point clouds, depth images and pose information. Correspondingly, the multimodal information collection module 110 may include a camera 111, a lidar 112, a depth camera 113 and an inertial measurement unit (IMU) 114.
In a specific application, the camera 111 is used to collect RGB images. RGB refers to the colors of the three channels red, green and blue; an RGB image is an image in which each pixel is represented by different proportions of R, G and B.
The lidar 112 is used to collect laser point clouds. A laser point cloud is a massive set of points describing the surface features of objects, collected by the lidar.
The depth camera 113 is used to collect depth images. A depth image, also known as a range image, is an image in which the distance (depth) from the image collector to each point (each object) in the scene is used as the pixel value.
The inertial measurement unit 114 is used to collect pose information. The pose information may be three-axis attitude angles (or angular rates) and acceleration information. The inertial measurement unit 114 may contain three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the acceleration of the object along the three independent axes of the carrier coordinate system, while the gyroscopes detect the angular velocity of the carrier relative to the navigation coordinate system. The angular velocity and acceleration of the object in three-dimensional space are measured and used to solve for the object's attitude.
The multimodal fusion three-dimensional modeling module 120 is used to fuse the multimodal information, stitch the point clouds using the fused information, and fill and optimize holes in the stitched point cloud model.
Specifically, the boundary of the triangular patches around a closed hole can first be determined from the triangular mesh to detect the hole; new triangular patches are then quickly generated over the hole polygon to form an initial mesh; finally, a least-squares mesh is fused with a radial-basis-function implicit surface, and the second-order derivative is minimized to radially minimize the surface curvature while keeping the same trend as the original mesh curvature, achieving smooth fusion and repairing the holes in the laser point cloud.
The map construction module 130 is used for object semantic recognition, dynamic object recognition, and semantic map construction and updating.
The scene understanding module 140 is used for terrain state recognition and intelligent recognition of passable areas, as well as semantic understanding and robot trajectory prediction in complex environments.
The autonomous environment adaptation module 150 is used to realize dynamic path planning and obstacle avoidance, and to control the robot to move autonomously according to the planned path and the obstacle avoidance function.
Referring to FIG. 2, FIG. 2 shows a schematic flowchart of a control method provided by an embodiment of the present application. By way of example, the control method is described below using the robot shown in FIG. 1.
As shown in FIG. 2, the control method may include S11 to S14, which are described in detail as follows.
S11: Collect multimodal information.
In this embodiment of the present application, the multimodal information may be collected by the multimodal information collection module of the robot.
In this embodiment of the present application, the multimodal information may include, but is not limited to, RGB images, laser point clouds, depth images and pose information. Specifically, the information can be collected by the camera, lidar, depth camera and inertial measurement unit mounted on the robot, respectively.
S12: Determine a full semantic map according to the multimodal information.
In this embodiment of the present application, a full semantic map of the current scene can be obtained based on the collected multimodal information. Specifically, if it is detected that a full semantic map exists, the map can be read directly; if it is detected that no full semantic map exists, a full semantic map of the current scene can be constructed from the multimodal information.
Referring to FIG. 3, in one embodiment of the present application, determining the full semantic map according to the multimodal information may include the following steps.
S21: Detect whether a full semantic map exists.
In a specific application, the location where the full semantic map is stored can be determined in advance, and it is then checked whether a full semantic map of the current scene is stored at that location. If it is, a full semantic map exists; otherwise, no full semantic map exists.
S22: If the full semantic map exists, read the full semantic map.
S23: If the full semantic map does not exist, construct the full semantic map according to the acquired multimodal information.
The above S23 may specifically include the following steps:
constructing a local scene object model according to the multimodal information; and
obtaining a global semantic map according to the local scene object model.
In a specific application, the local scene objects can be modeled by the multimodal fusion three-dimensional modeling module, and the global semantic map can then be constructed by the map construction module.
Specifically, constructing the local scene object model according to the multimodal information may include the following steps:
extracting image features from the RGB image;
obtaining a sparse feature point cloud and camera pose information from the image features to obtain a primary model;
performing weighted fusion of the pose information and the camera pose information based on the primary model;
matching the depth image with the sparse feature point cloud to obtain a secondary model; and
fusing the laser point cloud with the secondary model to obtain the local scene object model.
Specifically, obtaining the global semantic map according to the local scene object model may include the following steps:
segmenting the object instances in the RGB image and performing semantic recognition;
projecting the semantic information of the instance segmentation into the local scene object model using the fused camera pose information; and
stitching the local scene object models carrying semantic information to obtain the global semantic map.
S13: Perform semantic understanding and trajectory prediction according to the full semantic map to obtain a predicted trajectory.
In a specific application, once the full semantic map has been determined, semantic understanding and trajectory prediction can be performed by the scene understanding module based on the determined map.
Specifically, based on the lidar and the video stream, high-frame-rate lidar and video data and low-frame-rate lidar and video data are selected as a two-channel input, and Minkowski convolutions are used to extract their differential features separately; a binary attention mechanism is used for feature fusion and enhancement, a Single Shot MultiBox Detector (SSD) method is then used to obtain the target semantic detection results, and finally a Long Short-Term Memory (LSTM) network is used to obtain the refined semantic information and the trajectory prediction result. It should be noted that the Minkowski convolution, binary attention mechanism, SSD and LSTM methods are all neural network models commonly used in the field and are not described in detail here.
Referring to FIG. 3, in one embodiment of the present application, before semantic understanding and trajectory prediction are performed according to the full semantic map to obtain the predicted trajectory, the method further includes the following steps.
S24: Determine whether the current scene has changed based on the full semantic map and the multimodal information.
The information obtained in real time by the camera, lidar and depth camera is compared with the stored global semantic map to determine whether the current scene has changed: if the real-time information obtained by the lidar and depth camera is inconsistent with the stored global semantic map, the current scene has changed; otherwise, it has not.
S25: If the current scene has changed, update the full semantic map according to the acquired multimodal information.
If it is determined that the current scene has changed, the newly added local scene objects are modeled by the multimodal fusion three-dimensional modeling module, and the global semantic map is then updated by the map construction module.
S14: Perform dynamic path planning and obstacle avoidance control according to the predicted trajectory, so as to control the movement of the robot.
In a specific application, the autonomous environment adaptation module performs dynamic path planning and obstacle avoidance based on the obtained predicted trajectory, and the robot is controlled to move autonomously according to the planned path and the obstacle avoidance function.
As can be seen from the above, the robot control method provided by the embodiments of the present application determines a full semantic map from multimodal information, performs semantic understanding and trajectory prediction on that map, and then controls the robot to perform dynamic path planning and obstacle avoidance control according to the predicted trajectory. This enables the robot to effectively avoid obstacles even in complex scenes, improves its adaptive ability in such scenes, and prevents it from tipping over because it cannot avoid an obstacle, which would damage the robot or other items.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
FIG. 4 is a schematic structural diagram of a robot provided by another embodiment of the present application. As shown in FIG. 4, the robot 4 provided in this embodiment includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40, for example a cooperative control program. When the processor 40 executes the computer program 42, the steps in each of the above method embodiments are implemented, for example S11 to S14 shown in FIG. 2. Alternatively, when the processor 40 executes the computer program 42, the functions of the modules/units in each of the above robot embodiments are implemented.
Exemplarily, the computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 42 in the robot 4. For example, the computer program 42 may be divided into a first acquisition unit and a first processing unit; for the specific functions of each unit, reference may be made to the relevant descriptions of the corresponding embodiment above, which are not repeated here.
The robot may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will understand that FIG. 4 is only an example of the robot 4 and does not constitute a limitation on it; the robot may include more or fewer components than shown, combine certain components, or use different components. For example, the robot may also include input and output devices, network access devices, a bus, and the like.
The processor 40 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the robot 4, such as a hard disk or memory of the robot 4. The memory 41 may also be an external storage device of the robot 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the robot 4. Further, the memory 41 may include both an internal storage unit of the robot 4 and an external storage device. The memory 41 is used to store the computer program and other programs and data required by the robot. The memory 41 may also be used to temporarily store data that has been output or is to be output.
An embodiment of the present application also provides a computer-readable storage medium. Referring to FIG. 5, FIG. 5 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present application. As shown in FIG. 5, a computer program 51 is stored in the computer-readable storage medium 5, and when the computer program 51 is executed by a processor, the above robot control method can be implemented.
An embodiment of the present application provides a computer program product which, when run on a robot, causes the robot to implement the robot control method described above.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example. In practical applications, the above functions can be assigned to different functional units and modules as needed; that is, the internal structure of the robot can be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis. For parts that are not described in detail in a particular embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
The above embodiments are only used to illustrate the technical solutions of the present application and are not intended to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or replace some of the technical features with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.

Claims (10)

  1. A control method, applied to a robot, the control method comprising:
    collecting multimodal information;
    determining a full semantic map according to the multimodal information;
    performing semantic understanding and trajectory prediction according to the full semantic map to obtain a predicted trajectory; and
    performing dynamic path planning and obstacle avoidance control according to the predicted trajectory, so as to control the movement of the robot.
  2. The method according to claim 1, wherein determining the full semantic map according to the multimodal information comprises:
    detecting whether a full semantic map exists; and
    if the full semantic map does not exist, constructing the full semantic map according to the acquired multimodal information.
  3. The method according to claim 2, wherein constructing the full semantic map according to the acquired multimodal information if the full semantic map does not exist comprises:
    constructing a local scene object model according to the multimodal information; and
    obtaining a global semantic map according to the local scene object model.
  4. The method according to claim 3, wherein the multimodal information comprises an RGB image, pose information, a depth image and laser point cloud information, and constructing the local scene object model according to the multimodal information comprises:
    extracting image features from the RGB image;
    obtaining a sparse feature point cloud and camera pose information from the image features to obtain a primary model;
    performing weighted fusion of the pose information and the camera pose information based on the primary model;
    matching the depth image with the sparse feature point cloud to obtain a secondary model; and
    fusing the laser point cloud with the secondary model to obtain the local scene object model.
  5. The method according to claim 4, wherein obtaining the global semantic map according to the local scene object model comprises:
    segmenting the object instances in the RGB image and performing semantic recognition;
    projecting the semantic information of the instance segmentation into the local scene object model using the fused camera pose information; and
    stitching the local scene object models carrying semantic information to obtain the global semantic map.
  6. The method according to claim 2, further comprising:
    if the full semantic map exists, reading the full semantic map.
  7. The method according to any one of claims 1 to 6, further comprising:
    determining, based on the full semantic map and the multimodal information, whether the current scene has changed; and
    if the current scene has changed, updating the full semantic map according to the acquired multimodal information.
  8. A robot, comprising:
    a multimodal information collection module, configured to collect multimodal information;
    a map determination module, configured to determine a full semantic map according to the multimodal information;
    a scene understanding module, configured to perform semantic understanding and trajectory prediction according to the full semantic map to obtain a predicted trajectory; and
    an adaptive module, configured to perform dynamic path planning and obstacle avoidance control according to the predicted trajectory, so as to control the movement of the robot.
  9. A robot, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the method according to any one of claims 1 to 7 is implemented.
  10. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method according to any one of claims 1 to 7 is implemented.
PCT/CN2021/137304 2021-04-21 2021-12-12 Robot control method and robot WO2022222490A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110428814.5 2021-04-21
CN202110428814.5A CN113256716B (zh) 2021-04-21 2021-04-21 一种机器人的控制方法及机器人

Publications (1)

Publication Number Publication Date
WO2022222490A1 true WO2022222490A1 (zh) 2022-10-27

Family

ID=77221491

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/137304 WO2022222490A1 (zh) 2021-04-21 2021-12-12 Robot control method and robot

Country Status (2)

Country Link
CN (1) CN113256716B (zh)
WO (1) WO2022222490A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439510A (zh) * 2022-11-08 2022-12-06 山东大学 一种基于专家策略指导的主动目标跟踪方法及系统
CN117274353A (zh) * 2023-11-20 2023-12-22 光轮智能(北京)科技有限公司 合成图像数据生成方法、控制装置及可读存储介质
CN117428792A (zh) * 2023-12-21 2024-01-23 商飞智能技术有限公司 用于机器人的作业系统及方法

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256716B (zh) * 2021-04-21 2023-11-21 中国科学院深圳先进技术研究院 一种机器人的控制方法及机器人

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110220517A (zh) * 2019-07-08 2019-09-10 紫光云技术有限公司 一种结合环境语意的室内机器人鲁棒slam方法
CN110275540A (zh) * 2019-07-01 2019-09-24 湖南海森格诺信息技术有限公司 用于扫地机器人的语义导航方法及其系统
US20200184718A1 (en) * 2018-12-05 2020-06-11 Sri International Multi-modal data fusion for enhanced 3d perception for platforms
CN111609852A (zh) * 2019-02-25 2020-09-01 北京奇虎科技有限公司 语义地图构建方法、扫地机器人及电子设备
CN111798475A (zh) * 2020-05-29 2020-10-20 浙江工业大学 一种基于点云深度学习的室内环境3d语义地图构建方法
CN112683288A (zh) * 2020-11-30 2021-04-20 北方工业大学 一种交叉口环境下辅助盲人过街的智能引导机器人系统及方法
CN113256716A (zh) * 2021-04-21 2021-08-13 中国科学院深圳先进技术研究院 一种机器人的控制方法及机器人

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200184718A1 (en) * 2018-12-05 2020-06-11 Sri International Multi-modal data fusion for enhanced 3d perception for platforms
CN111609852A (zh) * 2019-02-25 2020-09-01 北京奇虎科技有限公司 语义地图构建方法、扫地机器人及电子设备
CN110275540A (zh) * 2019-07-01 2019-09-24 湖南海森格诺信息技术有限公司 用于扫地机器人的语义导航方法及其系统
CN110220517A (zh) * 2019-07-08 2019-09-10 紫光云技术有限公司 一种结合环境语意的室内机器人鲁棒slam方法
CN111798475A (zh) * 2020-05-29 2020-10-20 浙江工业大学 一种基于点云深度学习的室内环境3d语义地图构建方法
CN112683288A (zh) * 2020-11-30 2021-04-20 北方工业大学 一种交叉口环境下辅助盲人过街的智能引导机器人系统及方法
CN113256716A (zh) * 2021-04-21 2021-08-13 中国科学院深圳先进技术研究院 一种机器人的控制方法及机器人

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439510A (zh) * 2022-11-08 2022-12-06 山东大学 一种基于专家策略指导的主动目标跟踪方法及系统
CN115439510B (zh) * 2022-11-08 2023-02-28 山东大学 一种基于专家策略指导的主动目标跟踪方法及系统
CN117274353A (zh) * 2023-11-20 2023-12-22 光轮智能(北京)科技有限公司 合成图像数据生成方法、控制装置及可读存储介质
CN117274353B (zh) * 2023-11-20 2024-02-20 光轮智能(北京)科技有限公司 合成图像数据生成方法、控制装置及可读存储介质
CN117428792A (zh) * 2023-12-21 2024-01-23 商飞智能技术有限公司 用于机器人的作业系统及方法

Also Published As

Publication number Publication date
CN113256716B (zh) 2023-11-21
CN113256716A (zh) 2021-08-13

Similar Documents

Publication Publication Date Title
WO2022222490A1 (zh) 一种机器人的控制方法及机器人
CN109084732B (zh) 定位与导航方法、装置及处理设备
US10717193B2 (en) Artificial intelligence moving robot and control method thereof
CN107160395B (zh) 地图构建方法及机器人控制系统
US10133278B2 (en) Apparatus of controlling movement of mobile robot mounted with wide angle camera and method thereof
JP6897668B2 (ja) 情報処理方法および情報処理装置
JP2020047276A (ja) センサーキャリブレーション方法と装置、コンピュータ機器、媒体及び車両
CN111337947A (zh) 即时建图与定位方法、装置、系统及存储介质
US20200257821A1 (en) Video Monitoring Method for Mobile Robot
US10347001B2 (en) Localizing and mapping platform
EP3974778B1 (en) Method and apparatus for updating working map of mobile robot, and storage medium
CN111220148A (zh) 移动机器人的定位方法、系统、装置及移动机器人
WO2019136613A1 (zh) 机器人室内定位的方法及装置
Sales et al. Vision-based autonomous navigation system using ann and fsm control
Mojtahedzadeh Robot obstacle avoidance using the Kinect
KR20200143228A (ko) 3차원 가상 공간 모델을 이용한 사용자 포즈 추정 방법 및 장치
WO2023125363A1 (zh) 电子围栏自动生成方法、实时检测方法及装置
CN114255323A (zh) 机器人、地图构建方法、装置和可读存储介质
JP7351892B2 (ja) 障害物検出方法、電子機器、路側機器、及びクラウド制御プラットフォーム
US11055341B2 (en) Controlling method for artificial intelligence moving robot
Manivannan et al. Vision based intelligent vehicle steering control using single camera for automated highway system
CN114077252A (zh) 机器人碰撞障碍区分装置及方法
WO2019014620A1 (en) CAPTURE, CONNECTION AND USE OF BUILDING INTERIOR DATA FROM MOBILE DEVICES
CN112652001A (zh) 基于扩展卡尔曼滤波的水下机器人多传感器融合定位系统
WO2022174603A1 (zh) 一种位姿预测方法、位姿预测装置及机器人

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21937712

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21937712

Country of ref document: EP

Kind code of ref document: A1