CN117824663A - Robot navigation method based on hand-drawn scene graph understanding - Google Patents

Robot navigation method based on hand-drawn scene graph understanding

Info

Publication number
CN117824663A
Authority
CN
China
Prior art keywords
robot
hand
scene graph
drawn
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410245310.3A
Other languages
Chinese (zh)
Other versions
CN117824663B (en)
Inventor
姜自茹
张朕通
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Sijia Intelligent Technology Co ltd
Original Assignee
Nanjing Sijia Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Sijia Intelligent Technology Co ltd filed Critical Nanjing Sijia Intelligent Technology Co ltd
Priority to CN202410245310.3A priority Critical patent/CN117824663B/en
Publication of CN117824663A publication Critical patent/CN117824663A/en
Application granted granted Critical
Publication of CN117824663B publication Critical patent/CN117824663B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a robot navigation method based on hand-drawn scene graph understanding, which comprises the following steps: marking the position of the robot and the start and end points of the task on a hand-drawn scene graph; extracting the hand-drawn features with a feature extraction network to obtain a standard scene graph; constructing a reinforcement learning model and calculating an initial planned path from the task start point; at fixed intervals, scanning the scene with a laser radar and correcting the robot's position on the standard scene graph by matching the structural features of the actual scene against the features of the standard scene graph; and optimizing the remaining path according to the new positioning information and parsing it into robot parameters with a large model, so that the robot iteratively optimizes its task path in an unknown environment. According to the invention, the hand-drawn scene graph gives the robot a basic cognition of the scene, and after the initial path is planned, the robot's positioning information is corrected by combining actual sensor results with the standard scene graph, so that a more efficient task route is planned.

Description

Robot navigation method based on hand-drawn scene graph understanding
Technical Field
The invention relates to a robot navigation method based on hand-drawn scene graph understanding, and belongs to the technical field of computer vision and pattern recognition.
Background
With the development of artificial intelligence technology, giving robots the ability to understand a scene has gradually become a research hotspot, as it greatly expands the complexity and flexibility of the tasks robots can perform indoors. For example, in the item delivery service provided by a home service robot, the robot needs to create a scene graph and then plan an optimal delivery route, replacing humans in automated delivery tasks and giving consumers a convenient and comfortable experience of intelligent devices. Constructing an accurate indoor scene graph to enhance the robot's cognition therefore has important practical significance for advancing machine intelligence.
Current indoor scene mapping by robots mainly consists of acquiring data through sensors, processing the data, extracting features, and building an initial map. The diversity and complexity of scenes mean that no general mapping method can be designed to cover all situations. Each scene may have unique features, layouts, and obstructions, and many difficulties can arise during mapping. For example, a door may prevent the robot from passing through easily, or certain areas may not be fully perceived by the sensors because of an obstacle. As a result, the scene graph built by the robot may be incomplete, leading to detours, unreachable targets, and similar problems during actual task execution. In a dynamic environment, processing sensor data and extracting features takes time, and the established scene graph cannot be updated in real time, so the scene graph the robot relies on while executing a task is inconsistent with the real scene. Moreover, in some unknown environments, such as rescue tasks, there is no time or no means to build a scene graph at all, so the robot can only perform tasks by inefficient exploration.
Existing robot scene understanding technology is affected by sensor limitations and complex external conditions, so the robot's cognition of the scene is incomplete, which reduces the efficiency and success rate of navigation tasks.
Therefore, a robot navigation method based on hand-drawn scene graph understanding is needed to solve the above-mentioned problems.
Disclosure of Invention
The invention aims to: aiming at the problems existing in the prior art, the invention provides a robot navigation method based on hand-drawn scene graph understanding.
A robot navigation method based on hand-drawn scene graph understanding comprises the following steps:
step S1: collecting data pairs of hand-drawn scene graphs and standard scene graphs, and marking the position of the robot, the task start point and the task end point in the hand-drawn scene graph;
step S2: training a neural network model that converts hand drawings into standardized graphs using the data of step S1, and generating a standard scene graph with this model, taking the hand-drawn scene graph, the position of the robot, the task start point and the task end point as inputs;
step S3: constructing a reinforcement learning model according to the task start point and the task end point of step S1, and calculating an initial planned path between the standardized task start point and task end point, or letting a person draw the task travel path directly on the hand-drawn scene graph, to obtain the current planned path;
step S4: building a laser radar map from the laser radar information of the robot, matching the laser radar map of frame T with the standard scene graph, adjusting the direction and scale of the standard scene graph to be consistent with the laser radar map of frame T, and updating the position information of the robot in the standard scene graph;
step S5: inputting the standard scene graph obtained in step S2 and the position information obtained in step S4 into the reinforcement learning model constructed in step S3, and optimizing the current planned path to obtain an optimized planned path;
step S6: parsing the parameters of the optimized planned path obtained in step S5 into motion parameters of the robot, and driving the robot to move;
step S7: after a time interval t₀, matching the laser radar map of frame T+N with the standard scene graph, adjusting the direction and scale of the standard scene graph to be consistent with the laser radar map of frame T+N, updating the position information of the robot in the standard scene graph, and repeating steps S4-S6 to iteratively optimize the robot's path until the task is completed.
Further, in step S2, the standard scene graph is a vector graph.
Further, in step S2, the neural network model that converts hand drawings into standardized graphs is a ControlNet model.
Further, the reinforcement learning model in step S3 is based on the PPO algorithm. The reinforcement learning model takes two inputs: a cognitive map of the standardized scene and the planning result features of the previous frame. Task planning supports two modes: the robot autonomously plans the task route through reinforcement learning, or parses a travel route drawn by the user.
Further, in steps S4 and S7, the laser radar map is matched with the standard scene graph as follows: corner features of the laser radar map are extracted with the Harris 3D algorithm, and corner matching is performed between the standard scene graph and the corner features of the laser radar map.
Further, in step S6, a large language model is used to parse the parameters of the optimized planned path obtained in step S5 into motion parameters of the robot and drive the robot to move. The large language model serves as the neural control center of the robot: it is trained to parse the robot's planned path or the hand-drawn planned path into per-frame motion parameters that control the robot's movement. Leveraging the programming capability of the large language model, it is guided to generate a program that obtains the final control parameters by calling the relevant parsing APIs and feeds them to the robot.
Beneficial effects: in the robot navigation method based on hand-drawn scene graph understanding, the hand-drawn scene graph gives the robot a basic cognition of the scene, and after the initial path is planned, the robot's positioning information is corrected by combining laser radar results with the standard scene graph, so that a more efficient task route is planned.
Drawings
FIG. 1 is a flow chart of the robot navigation method based on hand-drawn scene graph understanding.
FIG. 2 is a flow chart of matching the robot's laser radar information with the standard scene graph.
Detailed Description
The following description of the preferred embodiments of the present invention will be made with reference to the accompanying drawings, to more clearly and completely illustrate the technical aspects of the present invention.
Referring to fig. 1, the robot navigation method based on hand-drawn scene graph understanding of the present invention includes the following steps:
step S1: collecting data pairs of hand-drawn scene graphs and standard scene graphs, and marking the position of the robot, the task start point and the task end point in the hand-drawn scene graph;
the data pairs in step S1 relate to acquisition indoor scenes including residential homes, offices, commercial buildings. The hand-drawn scene graph is a hand-drawn indoor scene of human daily conditions, and comprises hand-drawn problems and errors of common human beings. The standard scene graph is a vector graph format which can be identified by a robot system and is a standardized correction of the hand graph. The annotation of the position of the robot in the hand-drawn scene graph provides knowledge of the robot's own positioning in the scene. Marking out the starting point and the end point of the task refers to providing the robot with knowledge of the location of the navigation task to be completed through a hand-drawn scene graph.
step S2: training a neural network model that converts hand drawings into standardized graphs using the data of step S1, and generating a standard scene graph with this model, taking the hand-drawn scene graph, the position of the robot, the task start point and the task end point as inputs; preferably, this neural network model is a ControlNet model.
In step S2, a deep-learning generative model is used: in the training stage it takes a hand-drawn scene graph as input and outputs a standard scene graph, and fine-tuning the weight parameters on a public data set allows the robot to establish a basic cognition of the scene from the hand-drawn scene graph. In the usage stage, the generative model from the training stage produces the standard scene graph, which is then vectorized.
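By way of illustration, the following is a minimal sketch of the inference stage built on the Hugging Face diffusers ControlNet pipeline. The checkpoint names, the prompt, and the way the robot position and task endpoints are folded into the text conditioning are assumptions for illustration, not the implementation disclosed here.

```python
# Hypothetical sketch: generating a standard scene graph from a hand-drawn
# sketch with a ControlNet-conditioned diffusion pipeline (checkpoint names
# are illustrative; the patent does not specify them).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16
).to("cuda")

hand_drawn = Image.open("hand_drawn_scene.png").convert("RGB")
# Assumption: robot position and task start/end are encoded in the prompt;
# the method described here feeds them as conditioning inputs with the sketch.
prompt = ("clean standardized indoor floor plan, straight walls, "
          "robot position, task start and goal preserved")
standard_map = pipe(prompt, image=hand_drawn, num_inference_steps=30).images[0]
standard_map.save("standard_scene_graph.png")  # then vectorized downstream
```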
step S3: constructing a reinforcement learning model according to the task start point and the task end point of step S1, and calculating an initial planned path between the standardized task start point and task end point, or letting a person draw the task travel path directly on the hand-drawn scene graph, to obtain the current planned path; preferably, the reinforcement learning model is based on the PPO algorithm. The reinforcement learning model constructed in step S3 takes two inputs: a cognitive map of the standardized scene and the planning result features of the previous frame. Task planning supports two modes: the robot autonomously plans the task route through reinforcement learning, or parses a travel route drawn by the user.
step S4: building a laser radar map from the laser radar information of the robot, matching the laser radar map of frame T with the standard scene graph, adjusting the direction and scale of the standard scene graph to be consistent with the laser radar map of frame T, and updating the position information of the robot in the standard scene graph;
referring to fig. 2, the laser radar map is matched with the standard scene map, specifically, the corner features of the laser radar map are extracted by using the harrss 3D algorithm, and the corner features of the standard scene map and the laser radar map are matched.
In step S4, the local mapping result of the laser radar is matched against the standard scene graph, which generally involves direction and scale matching relative to the radar map. Direction matching resolves the inconsistency between the orientation of the hand-drawn scene graph and that of the radar-scanned map, and scale matching resolves the inconsistency between their aspect ratios.
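For illustration, a sketch of the direction and scale alignment under stated assumptions: OpenCV's 2D Harris detector stands in for the Harris 3D variant named above, and corner correspondences are assumed given (in practice they would come from descriptor matching), with cv2.estimateAffinePartial2D recovering rotation and uniform scale.

```python
# Sketch: align the standard scene graph to the lidar map by matching corner
# features and estimating rotation + scale (assumptions noted in comments).
import cv2
import numpy as np

def harris_corners(gray, max_pts=200):
    """Return (N, 2) corner coordinates (x, y) from a grayscale map image."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    pts = np.argwhere(response > 0.01 * response.max())[:, ::-1]
    return np.float32(pts[:max_pts])

lidar_map = cv2.imread("lidar_map_T.png", cv2.IMREAD_GRAYSCALE)
scene_map = cv2.imread("standard_scene.png", cv2.IMREAD_GRAYSCALE)

src = harris_corners(scene_map)
dst = harris_corners(lidar_map)
# Assumption: correspondences src[i] <-> dst[i]; a real system would match
# corners by local descriptors before estimating the transform.
n = min(len(src), len(dst))
M, inliers = cv2.estimateAffinePartial2D(src[:n], dst[:n], method=cv2.RANSAC)
scale = np.hypot(M[0, 0], M[1, 0])                 # uniform scale factor
theta = np.degrees(np.arctan2(M[1, 0], M[0, 0]))   # rotation in degrees
aligned = cv2.warpAffine(scene_map, M, lidar_map.shape[::-1])
```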
In step S4, the laser radar map of frame T is the fusion of the radar scan at frame T with the laser radar maps sampled before frame T, so that the more complete fused result is used for matching and updating the localization.
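A minimal sketch of such fusion, assuming each scan has already been ray-cast into occupied and free grid cells; the log-odds constants and grid size are illustrative assumptions.

```python
# Sketch: fuse the frame-T scan into the accumulated lidar map with a
# log-odds occupancy grid, so matching always uses the most complete map.
import numpy as np

LO_OCC, LO_FREE, LO_CLAMP = 0.85, -0.4, 10.0  # illustrative constants

def fuse_scan(log_odds, occupied_cells, free_cells):
    """Update the accumulated grid with one frame's hit/miss cells."""
    for (r, c) in occupied_cells:
        log_odds[r, c] = min(log_odds[r, c] + LO_OCC, LO_CLAMP)
    for (r, c) in free_cells:
        log_odds[r, c] = max(log_odds[r, c] + LO_FREE, -LO_CLAMP)
    return log_odds

grid = np.zeros((512, 512))  # accumulated map before frame T
# Occupied/free cells for frame T would come from ray-casting the scan.
grid = fuse_scan(grid, occupied_cells=[(100, 200)], free_cells=[(100, 199)])
occupancy = 1.0 / (1.0 + np.exp(-grid))  # probability map used for matching
```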
step S5: inputting the standard scene graph obtained in step S2 and the position information obtained in step S4 into the reinforcement learning model constructed in step S3, and optimizing the current planned path to obtain an optimized planned path;
in step S5, the cognitive map of the standardized scene and the updated positioning parameters are used as input updating actions.
step S6: parsing the parameters of the optimized planned path obtained in step S5 into motion parameters of the robot, and driving the robot to move;
in the step S6, the large language model is used for analyzing the parameters of the optimized planning path obtained in the step S5 into the motion parameters of the robot, and the robot is driven to move. Specifically, a large language model is used as a nerve control center of the robot, the large language model is trained to analyze a planned path of the robot and a hand-drawn planned path, and the robot is controlled to move after the path is analyzed into motion parameters of each frame of robot. Based on the programming capability of the large language model, the large language model is guided to generate a program which obtains final control parameters by calling a relevant analysis API and inputs the final control parameters to the robot.
step S7: after a time interval t₀, matching the laser radar map of frame T+N with the standard scene graph, adjusting the direction and scale of the standard scene graph to be consistent with the laser radar map of frame T+N, updating the position information of the robot in the standard scene graph, and repeating steps S4-S6 to iteratively optimize the robot's path until the task is completed;
in step S7, according to the complete scene sensing result, the robot may execute the navigation task, and update the iteration rule to reduce the time and path length of the path planning task in the unknown environment and increase the success rate of the path planning task.
In the robot navigation method based on hand-drawn scene graph understanding described above, the hand-drawn scene graph gives the robot a basic cognition of the scene, and after the initial path is planned, the robot's positioning information is corrected by combining laser radar results with the standard scene graph, so that a more efficient task route is planned.
Example 1:
As shown in figs. 1 and 2, the robot navigation method based on hand-drawn scene graph understanding of the present invention comprises the following steps:
step S1: scene graph data is collected: indoor scene graphs of residential homes, offices and commercial buildings are paired with standard scene graphs usable by the robot to form data pairs. The approximate position of the robot and the start and end points of the navigation task to be executed are marked on the hand drawing;
step S2: a ControlNet generative model converts the hand-drawn scene graph into a standard scene graph. An auxiliary lightweight network block is added to each decoding layer; during training the original network parameters are frozen and only the auxiliary network parameters are updated, and the original network outputs are fused with the auxiliary network outputs. The hand-drawn scene graph, together with the robot position and the task start and end point parameters, is input as the condition, and the standard scene graph is output. In use, the standard scene graph is output directly from the hand-drawn scene graph and processed into vector-graph form;
step S3: a reinforcement learning model based on the PPO algorithm is constructed, and a simulated grid map is built from the hand-drawn map: the robot's state is its position on the hand-drawn grid, and the task target point is likewise a coordinate on the grid map. The robot state is fed into the Actor and Critic networks respectively to obtain the output policy and action, which are optimized by gradient ascent; all planning steps in the simulation constitute the robot's current planned route. The following formula is the policy objective used by the PPO algorithm:
$$L^{CLIP}(\theta)=\mathbb{E}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\left(r_t(\theta),\,1-\varepsilon,\,1+\varepsilon\right)\hat{A}_t\right)\right],\qquad r_t(\theta)=\frac{\pi_\theta(a_t\mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)}$$
where $s_t$ is the state at time t, $a_t$ is the action at time t, $\pi_\theta$ is the policy function with weight coefficients $\theta$, $\pi_{\theta_{\mathrm{old}}}$ is the policy function with the previous weight coefficients $\theta_{\mathrm{old}}$, $\hat{A}_t$ is the advantage function, used to balance the rationality of a given action in a given state, and $\varepsilon$ is the clipping coefficient;
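A minimal PyTorch sketch of the clipped surrogate objective above; the network sizes, the flattened grid-map state encoding, and the default clipping coefficient of 0.2 are assumptions.

```python
# Sketch: the PPO clipped surrogate loss from the formula above, in PyTorch.
# Hidden sizes and state encoding are illustrative assumptions.
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, n_states, n_actions, hidden=128):
        super().__init__()
        self.actor = nn.Sequential(nn.Linear(n_states, hidden), nn.Tanh(),
                                   nn.Linear(hidden, n_actions))
        self.critic = nn.Sequential(nn.Linear(n_states, hidden), nn.Tanh(),
                                    nn.Linear(hidden, 1))

def ppo_loss(new_logp, old_logp, advantage, eps=0.2):
    """Clipped surrogate objective; negated so a minimizer ascends it."""
    ratio = torch.exp(new_logp - old_logp)  # pi_theta / pi_theta_old
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()
```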
step S4: an actual scene map is built from the robot's laser radar information, the standard scene graph is matched with the laser radar map, and the direction and scale of the standard graph are adjusted to be consistent with the radar map. A matching confidence is then assigned according to scanning time and area: as the radar map becomes more complete over time, the confidence increases, and if N consecutive matching results agree within a threshold, the path planning result is considered reliable. The robot's new position on the standard graph is computed from the matching result with the radar map;
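As an illustration of the N-consecutive-match reliability test, a small sketch with assumed values for N and the tolerances (angle wrap-around is ignored for brevity):

```python
# Sketch: accept the localisation fix only after N consecutive matches agree
# within a threshold. N and the tolerances are illustrative assumptions.
from collections import deque
import math

class MatchGate:
    def __init__(self, n=5, pos_tol=0.2, ang_tol=math.radians(5)):
        self.history = deque(maxlen=n)
        self.pos_tol, self.ang_tol = pos_tol, ang_tol

    def accept(self, x, y, theta):
        """Return True once the last n matched poses agree within tolerance."""
        self.history.append((x, y, theta))
        if len(self.history) < self.history.maxlen:
            return False
        xs, ys, ts = zip(*self.history)
        return (max(xs) - min(xs) < self.pos_tol and
                max(ys) - min(ys) < self.pos_tol and
                max(ts) - min(ts) < self.ang_tol)
```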
Step S5: taking the updated position as a new input of the reinforcement learning network in the step S3, obtaining a strategy and an action of the frame, wherein the action does not relate to the learning process of the reinforcement learning model, so that the updated position searching action is used, and only the action a which is the largest possible under the current state S is needed to be searched, namely:
$$a^{*}=\arg\max_{a\in\mathcal{A}}\pi(a\mid s)$$
where $a$ is the action, i.e. the motion parameter of the robot, $s$ is the state, $\pi$ is the policy function, and $\mathcal{A}$ is the action space;
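A short sketch of this lookup, reusing the actor network from the PPO sketch above; at execution time the policy is frozen and only the argmax is taken:

```python
# Sketch: greedy action selection at execution time -- no learning update,
# just the argmax over the action space from the formula above.
import torch

@torch.no_grad()
def select_action(actor, state):
    """Pick a = argmax_a pi(a | s) from the frozen policy network."""
    logits = actor(torch.as_tensor(state, dtype=torch.float32))
    return int(torch.argmax(logits))
```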
step S6: leveraging the programming capability of the large model, an LLM (Large Language Model) serves as the control center of the robot. The parameters of the planned path are used as prompts to guide the LLM to write a corresponding adaptive control Python or C program, and the control parameters converted by the LLM determine the robot's pulse signals for the next frame;
step S7: according to the complete scene perception result, the robot executes the navigation task and, using the update iteration rule, iteratively optimizes its path while executing tasks in an unknown environment, achieving real-time correction of the planned path during motion.

Claims (6)

1. A robot navigation method based on hand-drawn scene graph understanding, characterized by comprising the following steps:
step S1: collecting data pairs of hand-drawn scene graphs and standard scene graphs, and marking the position of the robot, the task start point and the task end point in the hand-drawn scene graph;
step S2: training a neural network model that converts hand drawings into standardized graphs using the data of step S1, and generating a standard scene graph with this model, taking the hand-drawn scene graph, the position of the robot, the task start point and the task end point as inputs;
step S3: constructing a reinforcement learning model according to the task start point and the task end point of step S1, and calculating an initial planned path between the standardized task start point and task end point, or letting a person draw the task travel path directly on the hand-drawn scene graph, to obtain the current planned path;
step S4: building a laser radar map from the laser radar information of the robot, matching the laser radar map of frame T with the standard scene graph, adjusting the direction and scale of the standard scene graph to be consistent with the laser radar map of frame T, and updating the position information of the robot in the standard scene graph;
step S5: inputting the standard scene graph obtained in step S2 and the position information obtained in step S4 into the reinforcement learning model constructed in step S3, and optimizing the current planned path to obtain an optimized planned path;
step S6: parsing the parameters of the optimized planned path obtained in step S5 into motion parameters of the robot, and driving the robot to move;
step S7: after a time interval t₀, matching the laser radar map of frame T+N with the standard scene graph, adjusting the direction and scale of the standard scene graph to be consistent with the laser radar map of frame T+N, updating the position information of the robot in the standard scene graph, and repeating steps S4-S6 to iteratively optimize the robot's path until the task is completed.
2. The robot navigation method based on hand-drawn scene graph understanding according to claim 1, wherein the standard scene graph in step S2 is a vector graph.
3. The robot navigation method based on hand-drawn scene graph understanding according to claim 1, wherein the neural network model that converts hand drawings into standardized graphs in step S2 is a ControlNet model.
4. The robot navigation method based on hand-drawn scene graph understanding according to claim 1, wherein the reinforcement learning model in step S3 is based on the PPO algorithm.
5. The robot navigation method based on hand-drawn scene graph understanding according to claim 1, wherein in steps S4 and S7 the laser radar map is matched with the standard scene graph as follows: corner features of the laser radar map are extracted with the Harris 3D algorithm, and corner matching is performed between the standard scene graph and the corner features of the laser radar map.
6. The robot navigation method based on hand-drawn scene graph understanding according to claim 1, wherein in step S6 a large language model is used to parse the parameters of the optimized planned path obtained in step S5 into motion parameters of the robot and drive the robot to move.
CN202410245310.3A 2024-03-05 2024-03-05 Robot navigation method based on hand-drawn scene graph understanding Active CN117824663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410245310.3A CN117824663B (en) 2024-03-05 2024-03-05 Robot navigation method based on hand-drawn scene graph understanding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410245310.3A CN117824663B (en) 2024-03-05 2024-03-05 Robot navigation method based on hand-drawn scene graph understanding

Publications (2)

Publication Number Publication Date
CN117824663A (en) 2024-04-05
CN117824663B (en) 2024-05-10

Family

ID=90508102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410245310.3A Active CN117824663B (en) 2024-03-05 2024-03-05 Robot navigation method based on hand-drawn scene graph understanding

Country Status (1)

Country Link
CN (1) CN117824663B (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6744915B1 (en) * 1999-09-09 2004-06-01 Sony United Kingdom Limited Image identification apparatus and method of identifying images
CN102313547A (en) * 2011-05-26 2012-01-11 东南大学 Vision navigation method of mobile robot based on hand-drawn outline semantic map
CN102306145A (en) * 2011-07-27 2012-01-04 东南大学 Robot navigation method based on natural language processing
CN102853830A (en) * 2012-09-03 2013-01-02 东南大学 Robot vision navigation method based on general object recognition
CN106500684A (en) * 2016-09-30 2017-03-15 百度在线网络技术(北京)有限公司 The processing method and processing device of the routing information of navigation
CN106997056A (en) * 2017-05-15 2017-08-01 马上游科技股份有限公司 A kind of scenic spot intelligent guidance system based on hand-drawing map
CN111149072A (en) * 2017-07-28 2020-05-12 罗博艾特有限责任公司 Magnetometer for robot navigation
CN109035357A (en) * 2018-07-10 2018-12-18 深圳市前海手绘科技文化有限公司 A kind of automatic drawing method of artificial intelligence
US20210397961A1 (en) * 2019-03-05 2021-12-23 Naver Labs Corporation Method and system for training autonomous driving agent on basis of deep reinforcement learning
US20210064858A1 (en) * 2019-08-26 2021-03-04 Adobe Inc. Transformation of hand-drawn sketches to digital images
CN111308495A (en) * 2020-03-13 2020-06-19 厦门知本家科技有限公司 Method for generating indoor house type 3D data through radar ranging
US20220026920A1 (en) * 2020-06-10 2022-01-27 AI Incorporated Light weight and real time slam for robots
US20230029596A1 (en) * 2021-07-30 2023-02-02 Clearedge3D, Inc. Survey device, system and method
US20230236606A1 (en) * 2022-01-21 2023-07-27 Tata Consultancy Services Limited Systems and methods for object detection using a geometric semantic map based robot navigation
CN116774691A (en) * 2022-03-17 2023-09-19 丰田自动车株式会社 Controlled area management system and method, mobile management system, and non-transitory storage medium
CN114608549A (en) * 2022-05-10 2022-06-10 武汉智会创新科技有限公司 Building measurement method based on intelligent robot
CN115601769A (en) * 2022-10-28 2023-01-13 京东方科技集团股份有限公司(Cn) Method and device for adjusting hand-drawn graph, electronic equipment and medium
CN117433526A (en) * 2023-09-06 2024-01-23 上海大漠电子科技股份有限公司 Indoor navigation and navigation system and method based on hand-drawn layout plan
CN117249830A (en) * 2023-09-18 2023-12-19 山东旅游职业学院 Scenic spot path automatic planning method based on vector diagram analysis
CN117576259A (en) * 2023-11-29 2024-02-20 北京航空航天大学 Hand-drawing synthesis method and system based on text driving

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
R. SUZUKI: "Indoor SLAM based on line observation probability using a hand-drawn map", 2022 IEEE/SICE International Symposium on System Integration (SII), 16 February 2022, pages 695-698 *
X. HOU ET AL.: "A Novel Mobile Robot Navigation Method Based on Hand-Drawn Paths", IEEE Sensors Journal, vol. 20, no. 19, 1 October 2020, pages 11660-11673, XP011807446, DOI: 10.1109/JSEN.2020.2997055 *
ZHAO, CY: "Energy Constrained Multi-Agent Reinforcement Learning for Coverage Path Planning", 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 29 February 2024, pages 5590-5597 *
LI QIAN: "Research on the Application of an Improved Path Tracking Algorithm in Robot SLAM", Journal of System Simulation, vol. 35, no. 12, 31 December 2023, pages 2602-2613 *
LI XINDE: "A Visual Navigation Method for Dynamic Environments Based on Hand-Drawn Maps", Robot, vol. 33, no. 04, 15 July 2011, pages 490-501 *
LI LONG: "Research on Interactive Control of Mobile Robots Based on a Sketch Interface", Industrial Control Computer, vol. 32, no. 09, 30 September 2019, pages 27-28 *

Also Published As

Publication number Publication date
CN117824663B (en) 2024-05-10

Similar Documents

Publication Publication Date Title
Wen et al. Path planning for active SLAM based on deep reinforcement learning under unknown environments
Cheng et al. Topological indoor localization and navigation for autonomous mobile robot
CN111098301A (en) Control method of task type robot based on scene knowledge graph
Shiarlis et al. Rapidly exploring learning trees
CN110287941B (en) Concept learning-based thorough perception and dynamic understanding method
CN111551184B (en) Map optimization method and system for SLAM of mobile robot
CN115438856A (en) Pedestrian trajectory prediction method based on space-time interaction characteristics and end point information
Poncela et al. Efficient integration of metric and topological maps for directed exploration of unknown environments
CN112857370A (en) Robot map-free navigation method based on time sequence information modeling
CN114967680B (en) Mobile robot path planning method based on ant colony algorithm and convolutional neural network
Agand et al. Human navigational intent inference with probabilistic and optimal approaches
Zhang et al. Design of dual-LiDAR high precision natural navigation system
CN117824663B (en) Robot navigation method based on hand-drawn scene graph understanding
Urcola et al. Cooperative minimum expected length planning for robot formations in stochastic maps
CN114493013A (en) Smart agent path planning method based on reinforcement learning, electronic device and medium
CN112987720A (en) Multi-scale map construction method and construction device for mobile robot
CN116907510A (en) Intelligent motion recognition method based on Internet of things technology
Liu et al. Intelligent robot motion trajectory planning based on machine vision
CN116477505A (en) Tower crane real-time path planning system and method based on deep learning
CN110926470A (en) AGV navigation control method and system
CN115493596A (en) Semantic map construction and navigation method for mobile robot
Zheng et al. BRR-DQN: UAV path planning method for urban remote sensing images
CN111596668B (en) Mobile robot anthropomorphic path planning method based on reverse reinforcement learning
Wurm et al. Improved Simultaneous Localization and Mapping using a Dual Representation of the Environment.
Müller et al. Mdp-based motion planning for grasping in dynamic scenarios

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant