CN109901589B - Mobile robot control method and device - Google Patents

Mobile robot control method and device

Info

Publication number: CN109901589B
Application number: CN201910247676.3A
Authority: CN (China)
Prior art keywords: model, current, action, sample, information
Other versions: CN109901589A
Inventors: 尚云, 刘洋, 华仁红, 王毓玮, 冯卓玉
Application filed by: Beijing Yida Tuling Technology Co., Ltd.
Current assignee: Beijing Yida Tuling Technology Co., Ltd. (assignee list not verified by legal analysis)
Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)

Abstract

Embodiments of the invention provide a mobile robot control method and device. The method comprises: acquiring current positioning information and a current image from a positioning sensing device and a visual sensing device mounted on a mobile robot; inputting the current positioning information and the current image into an action planning model and acquiring the current action information it outputs, the action planning model having been trained on sample positioning information, sample images, sample action information and sample identifiers; and controlling the mobile robot based on the current action information. Because the current positioning information is obtained directly from the positioning sensing device, neither a high-precision sensing system nor a large body of prior knowledge is required: the method is simple to implement, lowers the entry threshold for developers, and positions efficiently. In addition, because the current action information comes from a pre-trained action planning model, the method transfers well to new scenes, has a wide application range, and is stable.

Description

Mobile robot control method and device
Technical Field
The embodiment of the invention relates to the technical field of computer vision, in particular to a mobile robot control method and device.
Background
With the development of science and technology, mobile robots are being applied ever more widely in fields such as warehousing, logistics, and power-line inspection.
In existing mobile robot control methods, a monocular camera mounted on the mobile robot typically acquires image information of the working area, and this image information is processed offline to construct a sparse 3D point-cloud map of the environment. During autonomous movement, the mobile robot processes images acquired in real time against the prior sparse 3D point-cloud map and the texture information stored in it, using visual SLAM (Simultaneous Localization and Mapping) to accurately estimate its current pose. Behavior planning is then performed based on the current pose, and the robot's actions are planned in combination with the obstacle-detection results.
However, this method is very complex to implement: map construction and localization require a high-precision sensing system, the demand for prior knowledge is large, and the entry threshold for developers is high. Moreover, the method migrates poorly: once the application scene changes, the map must be redrawn, which costs time and labor.
Disclosure of Invention
Embodiments of the invention provide a mobile robot control method and device to address the problems that existing mobile robot control methods demand a high-accuracy sensing system, require a large amount of prior knowledge, and migrate poorly between scenes.
In a first aspect, an embodiment of the present invention provides a mobile robot control method, including:
acquiring current positioning information and a current image respectively based on a positioning sensing device and a visual sensing device which are arranged on a mobile robot;
inputting the current positioning information and the current image into an action planning model, and acquiring current action information output by the action planning model; the action planning model is obtained by training based on sample positioning information, sample images, sample action information and sample identifications;
controlling the mobile robot based on the current action information.
In a second aspect, an embodiment of the present invention provides a mobile robot control apparatus, including:
the acquisition unit is used for acquiring current positioning information and a current image respectively based on a positioning sensing device and a visual sensing device which are arranged on the mobile robot;
the planning unit is used for inputting the current positioning information and the current image into an action planning model and acquiring current action information output by the action planning model; the action planning model is obtained by training based on sample positioning information, sample images, sample action information and sample identifications;
a control unit for controlling the mobile robot based on the current action information.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a processor, a communication interface, a memory, and a bus, where the processor, the communication interface, and the memory communicate with one another over the bus, and the processor can call logic instructions in the memory to perform the steps of the method provided in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method as provided in the first aspect.
According to the mobile robot control method and device provided by the embodiments of the invention, the current positioning information is obtained directly from the positioning sensing device, so neither a high-precision sensing system nor a large amount of prior knowledge is needed; the method is simple to implement, lowers the entry threshold for developers, and positions efficiently. In addition, because the current action information is produced by a pre-trained action planning model, the method migrates well, adapts to most scenes, has a wide application range, and is stable.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a mobile robot control method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a mobile robot control method according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a mobile robot control device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a mobile robot control method, aiming at the problems that the existing mobile robot control method needs a high-precision sensing system to construct and position a map, needs a great amount of prior knowledge, is poor in migration capability, needs to redraw the map once an application scene changes, and is time-consuming and labor-consuming. Fig. 1 is a schematic flow chart of a mobile robot control method according to an embodiment of the present invention, as shown in fig. 1, the method includes:
and step 110, acquiring current positioning information and a current image respectively based on a positioning sensing device and a visual sensing device which are arranged on the mobile robot.
Specifically, the mobile robot carries a positioning sensing device that acquires the robot's position at the current moment, i.e., the current positioning information. The positioning sensing device may be a GPS (Global Positioning System) module, or any sensing device capable of acquiring the robot's position based on RFID (Radio Frequency Identification), Bluetooth positioning, ultrasonic positioning, or the like; the embodiment of the present invention does not specifically limit this.
In addition, the mobile robot also carries a visual sensing device that acquires an image of the environment in front of the robot's motion path at the current moment, i.e., the current image. The visual sensing device may be monocular, binocular, or a multi-camera device integrating two or more visual sensors. It may specifically be a visible-light camera, an infrared camera, or a combination of the two; the embodiment of the present invention does not specifically limit this.
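Step 110 can be sketched as follows; the `read()` interface on the two devices is an assumption for illustration, not an API from the patent:

```python
from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass
class Observation:
    """One synchronized reading from the robot's two sensing devices."""
    positioning: Tuple[float, float]   # e.g. (latitude, longitude)
    image: List[List[float]]           # frame from the visual sensing device

def acquire_observation(positioning_device: Any, visual_device: Any) -> Observation:
    # Both devices are assumed to expose a read() method returning the
    # latest measurement; real drivers would also need time synchronization.
    return Observation(positioning=positioning_device.read(),
                       image=visual_device.read())
```

The same `Observation` pair is what the action planning model consumes in step 120.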
Step 120, inputting the current positioning information and the current image into an action planning model, and acquiring current action information output by the action planning model; the action planning model is obtained by training based on sample positioning information, sample images, sample action information and sample identifications.
Specifically, the action planning model plans the specific action the mobile robot should execute at the current moment based on the current positioning information and the current image. After the current positioning information and the current image are obtained, they are input into the action planning model, and the current action information output by the model is then obtained. Here, the current action information indicates the action the mobile robot should perform at the current moment, and may be a linear velocity and/or an angular velocity of the mobile robot.
In addition, before step 120 is executed, the action planning model may be obtained by training in advance, and specifically, the action planning model may be obtained by training in the following manner: first, a large amount of sample positioning information, sample images, sample motion information, and sample identifications are collected. The sample positioning information is positioning information acquired by a positioning sensing device arranged on the mobile robot in the process of manually controlling the mobile robot to act; the sample image is an image of the environment in front of an action path acquired by a visual sensing device arranged on the mobile robot in the action process of manually controlling the mobile robot; the sample action information is an action which is specifically executed by the mobile robot by an operator according to the environment in front of the action path represented by the sample image, and can be the adjustment of the linear velocity and/or the angular velocity of the mobile robot; the sample identifier is used to indicate a result of controlling the mobile robot to move based on the sample movement information, and the sample identifier may be used to characterize whether the mobile robot walks along the planned path or not, and may also be used to characterize whether the mobile robot successfully avoids the obstacle, which is not specifically limited in this embodiment of the present invention.
And training the initial model based on the sample positioning information, the sample image, the sample action information and the sample identification to obtain an action planning model. The initial model may be a single neural network model or a combination of a plurality of neural network models, and the embodiment of the present invention does not specifically limit the type and structure of the initial model.
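As a rough sketch of how the four sample components might be assembled for training, the snippet below treats the sample identifier as a weight on a velocity regression loss. The tuple layout, the feature shapes, and the idea of dropping negatively identified demonstrations are all assumptions layered on the text, not details from the patent:

```python
# Each sample: (positioning, image_features, action, identifier), where the
# action is a (linear, angular) velocity pair demonstrated by the operator
# and the identifier scores the outcome of that demonstration.
samples = [
    ((39.900, 116.400), [0.2, 0.8], (0.5, 0.0), 1.0),    # stayed on path
    ((39.901, 116.400), [0.9, 0.1], (0.5, 0.4), 0.8),    # slower but on path
    ((39.902, 116.401), [0.5, 0.5], (0.5, -0.8), -1.0),  # left the path
]

def weighted_action_loss(predicted, target, identifier):
    """Squared error on the (linear, angular) velocities, scaled by the
    sample identifier so better-scored demonstrations count for more."""
    v_err = (predicted[0] - target[0]) ** 2
    w_err = (predicted[1] - target[1]) ** 2
    return identifier * (v_err + w_err)

# One plausible use of negative identifiers: exclude failed demonstrations
# and fit the initial model only on the positively scored ones.
training_set = [s for s in samples if s[3] > 0]
```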
And step 130, controlling the mobile robot based on the current action information.
Specifically, after the current action information is obtained, the mobile robot is controlled to execute corresponding actions based on the current action information, so that the mobile robot can successfully avoid the obstacle while walking along the planned path.
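A minimal sketch of step 130, assuming the current action information is a (linear, angular) velocity pair and that the controller clamps it to the robot's physical limits before execution; the limit values here are invented:

```python
def clamp(value: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, value))

def apply_action(action, max_linear=1.0, max_angular=1.5):
    """Bound the planned velocities to the platform's assumed limits; the
    bounded pair would then be handed to the motor controller."""
    v, w = action
    return clamp(v, 0.0, max_linear), clamp(w, -max_angular, max_angular)
```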
The method provided by the embodiment of the invention obtains the current positioning information directly from the positioning sensing device; it needs neither a high-precision sensing system nor a large amount of prior knowledge, is simple to implement, lowers the entry threshold for developers, and positions efficiently. In addition, because the current action information comes from a pre-trained action planning model, the method migrates well, adapts to most scenes, has a wide application range, and is stable.
Based on the above embodiment, step 120 specifically includes:
step 121, inputting the current positioning information and the current image into a path planning model in the action planning model, and acquiring first action information output by the path planning model; the path planning model is obtained by training based on sample positioning information, sample images, sample action information and sample path identifications.
Specifically, the action planning model includes a path planning model for analyzing how to enable the mobile robot to walk along the planned path based on the current positioning information and the current image, and outputting the first action information. Here, the first motion information indicates motion information that should be executed at the current time to ensure that the mobile robot travels along the planned path.
In addition, before step 121 is executed, the path planning model may be trained in advance, specifically as follows: first, a large amount of sample positioning information, sample images, sample action information and sample path identifiers are collected. The sample positioning information, sample images and sample action information are obtained while the mobile robot is manually controlled to walk along the planned path, and the sample path identifier indicates whether the robot walked along the planned path; for example, the identifier is positive when the robot travels normally on the planned path and negative when it deviates from it. The value of the sample path identifier may be determined according to a preset walking rule; for example, the shorter the time to complete the planned route, the larger the identifier's value, and the longer the time, the smaller its value. The initial model is then trained on the sample positioning information, sample images, sample action information and sample path identifiers to obtain the path planning model. The initial model may be a single neural network model or a combination of multiple neural network models; the embodiment of the present invention does not specifically limit its type or structure.
Step 122, inputting the current image into an obstacle avoidance model in the action planning model, and acquiring second action information output by the obstacle avoidance model; the obstacle avoidance model is obtained by training based on sample images, sample action information and sample obstacle avoidance identifiers.
Specifically, the action planning model further comprises an obstacle avoidance model, and the obstacle avoidance model is used for analyzing whether an obstacle exists in the environment in front of the action path of the mobile robot at the current moment based on the current image, and if the obstacle exists, automatically avoiding the obstacle. Here, the second motion information indicates motion information that should be executed at the current time to ensure that the mobile robot can successfully avoid the obstacle.
In addition, before step 122 is executed, the obstacle avoidance model may be trained in advance, specifically as follows: first, a large number of sample images, sample action information and sample obstacle avoidance identifiers are collected. The sample images and sample action information are acquired while the mobile robot is manually controlled to avoid obstacles, and the sample obstacle avoidance identifier indicates whether the robot successfully avoided the obstacle; for example, the identifier is positive when the robot does not collide with the obstacle and negative when it does. The initial model is then trained on the sample images, sample action information and sample obstacle avoidance identifiers to obtain the obstacle avoidance model. The initial model may be a single neural network model or a combination of multiple neural network models; the embodiment of the present invention does not specifically limit its type or structure.
It should be noted that the embodiment of the present invention does not specifically limit the order of step 121 and step 122: step 121 may be executed before, after, or concurrently with step 122.
And step 123, acquiring current action information based on the first action information and the second action information.
Here, the first action information is action information considering only a planned path, the second action information is action information considering only obstacle avoidance, and the combination of the first action information and the second action information can obtain action information that can successfully avoid an obstacle while maintaining walking on the planned path, that is, current action information.
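The patent does not specify how the two pieces of action information are combined, so the rule below is only one plausible scheme: take the slower linear velocity, and let the obstacle-avoidance angular command take over whenever an obstacle is detected:

```python
def fuse_actions(first_action, second_action, obstacle_detected: bool):
    """first_action comes from the path planning model, second_action from
    the obstacle avoidance model; both are (linear, angular) velocity pairs."""
    v = min(first_action[0], second_action[0])   # never outrun the avoider
    w = second_action[1] if obstacle_detected else first_action[1]
    return (v, w)
```

With this rule the robot keeps following the planned heading on clear road and only steers away when the obstacle avoidance model flags a hazard.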
According to the method provided by the embodiment of the invention, through setting a path planning model, first action information for controlling the mobile robot to walk on a planned path is obtained; acquiring second action information for controlling the mobile robot to avoid the obstacle by setting an obstacle avoiding model; by combining the first action information and the second action information, the mobile robot can successfully avoid obstacles while walking along the planned path. By refining the action planning model into the path planning model and the obstacle avoidance model, the model training process is simplified, so that the action information output by a single model is more accurate, and the prediction precision of the action planning model is improved.
Based on any of the above embodiments, step 121 specifically includes: step 1211, based on the current positioning information and the current image, selecting a current path model from a road straight-going model, a road left-turn model and a road right-turn model included in the path planning model. Step 1212, inputting the current positioning information and the current image into the current path model, and obtaining first action information output by the current path model.
Specifically, the path planning model includes a road straight model, a road left-turn model, and a road right-turn model, where the road straight model, the road left-turn model, and the road right-turn model are respectively used to enable the mobile robot to walk along the planned path based on the current positioning information and the current image analysis when the planned path needs to go straight, turn left, and turn right.
Correspondingly, before step 1211 is executed, the road straight-going, road left-turn and road right-turn models may be trained in advance, specifically as follows: first, a large amount of sample positioning information, sample images, sample action information and sample path identifiers are collected and classified according to the planned path at the moment of acquisition, yielding separate sets of sample positioning information, sample images, sample action information and sample path identifiers corresponding to going straight, turning left and turning right. The initial model is then trained on the samples for each target orientation to obtain the road straight-going model, the road left-turn model and the road right-turn model respectively. The initial model may be a single neural network model or a combination of multiple neural network models; the embodiment of the present invention does not specifically limit its type or structure.
Based on the current positioning information and the current image, the current path model is selected from the path planning model: it is the road straight-going model, the road left-turn model or the road right-turn model. After the current path model is determined, the current positioning information and the current image are input into it, and the first action information it outputs is obtained.
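Selecting among the three sub-models can be expressed as a simple dispatch table. The stand-in lambdas below only illustrate the interface (positioning and image in, first action information out) and are not the trained networks:

```python
PATH_MODELS = {
    "straight": lambda positioning, image: (0.6, 0.0),   # road straight-going model
    "left":     lambda positioning, image: (0.4, 0.5),   # road left-turn model
    "right":    lambda positioning, image: (0.4, -0.5),  # road right-turn model
}

def first_action_information(target_orientation, positioning, image):
    current_path_model = PATH_MODELS[target_orientation]  # step 1211
    return current_path_model(positioning, image)         # step 1212
```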
According to the method provided by the embodiment of the invention, refining the path planning model into a road straight-going model, a road left-turn model and a road right-turn model further narrows the task each model carries and improves the prediction precision of each individual model.
Based on any of the above embodiments, step 1211 specifically includes: acquiring a task path based on the current positioning information and the destination information; acquiring a target orientation based on the task path and the current image; and selecting a current path model from the road straight-going model, the road left-turn model and the road right-turn model based on the target orientation.
Specifically, the destination information is the preset position of the place the mobile robot is expected to reach. From the current positioning information and the destination information, a task path can be obtained, i.e., the path the mobile robot needs to travel. After the task path is obtained, the environment in front of the robot's action path at the current moment can be obtained from the current image, and the robot's current heading determined; combining this heading with the task path yields the direction in which the robot needs to travel at the current moment, i.e., the target orientation, which may be straight, left turn, or right turn. Once the target orientation is determined, the corresponding model is selected as the current path model: for example, if the target orientation is straight, the road straight-going model is selected, and if it is a left turn, the road left-turn model is selected.
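The comparison of the robot's heading with the task path can be sketched as an angle test. The 20-degree band for "straight" and the sign convention (positive difference means a left turn) are assumed tuning choices, not values from the patent:

```python
def target_orientation(heading_deg: float, path_bearing_deg: float,
                       straight_band_deg: float = 20.0) -> str:
    """Classify the maneuver from the signed angle between the robot's
    current heading and the bearing of the next task-path segment."""
    # Wrap the difference into [-180, 180) so 350 deg vs 10 deg reads as +20.
    diff = (path_bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= straight_band_deg:
        return "straight"
    return "left" if diff > 0 else "right"
```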
Based on any of the above embodiments, after step 130, the method further includes: training the action planning model based on the optimized positioning information, the optimized image, the optimized action instruction and the optimized identifier.
Specifically, during the robot's automatic walking, the current positioning information, the current image, the current action instruction derived from them, and the result of walking based on that instruction may all be recorded. After the run, the recorded current positioning information serves as the optimized positioning information, the current image as the optimized image, the current action instruction as the optimized action instruction, and the walking result as the optimized identifier. More precisely, the optimized positioning information is the robot's position acquired by its positioning sensing device during automatic walking; the optimized image is the image of the environment ahead of the walking path acquired by its visual sensing device during automatic walking; the optimized action instruction is the instruction output by the original action planning model given the optimized positioning information and the optimized image; and the optimized identifier indicates the result of walking along the planned path and avoiding obstacles under that instruction.
Iteratively tuning the action planning model on the optimized positioning information, optimized image, optimized action instruction and optimized identifier can further improve the model's accuracy and patch its weaknesses. In particular, when the mobile robot deviates from the planned path or fails to avoid an obstacle, the corresponding optimized positioning information, optimized image, optimized action instruction and optimized identifier may be taken as vulnerability positioning information, a vulnerability image, a vulnerability action instruction and a vulnerability identifier respectively. The vulnerability positioning information is the robot's position, acquired by its positioning sensing device, during the deviation or failed avoidance; the vulnerability image is the image of the environment ahead of the robot's action path, acquired by its visual sensing device, during the deviation or failed avoidance; the vulnerability action instruction is the instruction output by the original action planning model from the vulnerability positioning information and vulnerability image that caused the deviation or failed avoidance; and the vulnerability identifier records the deviation from the planned path or the obstacle-avoidance failure.
Training and updating the action planning model on the vulnerability positioning information, vulnerability image, vulnerability action instruction and vulnerability identifier effectively remedies the vulnerability and further improves the model's performance.
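Collecting the vulnerability samples amounts to filtering the recorded runs by their identifier; the record layout (a dict with an `identifier` key) is an assumption for illustration:

```python
def collect_vulnerability_samples(records):
    """Keep only the runs where the robot deviated from the planned path or
    failed to avoid an obstacle (negative identifier), so the action planning
    model can be retrained specifically on its failures."""
    return [r for r in records if r["identifier"] < 0]
```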
Most existing mobile robot control methods use a visible-light visual sensing device to acquire images. However, such a device works only in daylight and cannot be used where illumination is poor, especially at night, which limits the robot's working range and working hours. To address this, in any of the above embodiments, the visual sensing device comprises an infrared camera; correspondingly, the current image comprises a current infrared thermal image.
Specifically, an infrared camera records images using infrared light and is widely applied in night-vision monitoring. Collecting the image ahead of the robot's walking path with an infrared camera overcomes the inability of visible-light visual sensing devices to work at night. Correspondingly, the image collected by the infrared camera is an infrared thermal image, and the current infrared thermal image is the image ahead of the robot's walking path at the current moment, collected by the infrared camera during control of the mobile robot.
It should be noted that the visual sensing device in the embodiment of the present invention may include not only an infrared camera but also a visible-light visual sensing device. Correspondingly, the current image may include not only a current infrared thermal image but also a current RGB image or an image in another format. Here, the current infrared thermal image is the infrared thermal image at the current moment.
The method provided by the embodiment of the invention removes the limitation that existing mobile robots, constrained by visible-light visual sensing devices, can only work in daytime; by fitting an infrared camera, the robot is no longer limited by time of day or lighting, which expands its application range and working hours.
According to any of the above embodiments, in the method, the positioning sensing device is a GNSS sensor. A GNSS (Global Navigation Satellite System) is a space-based radio navigation and positioning system that can provide users with all-weather three-dimensional coordinates, velocity and time information at any location on the Earth's surface or in near-Earth space. The GNSS sensor can be used to draw a simple road topology and to perform real-time low-precision positioning. Compared with a typical high-precision sensing system, the GNSS sensor is cheaper and simpler to use, and its slightly lower positioning precision does not affect the control precision of the mobile robot.
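As an illustration of how positioning information comes out of a GNSS sensor, the sketch below converts the ddmm.mmmm latitude/longitude fields of a standard NMEA sentence into decimal degrees; the concrete field values are hypothetical examples, not taken from the patent.

```python
def nmea_to_decimal(value: str, hemisphere: str) -> float:
    """Convert an NMEA ddmm.mmmm (or dddmm.mmmm) field to decimal degrees."""
    head, _, _ = value.partition(".")
    deg_digits = len(head) - 2              # minutes always use two digits before the point
    degrees = float(value[:deg_digits])
    minutes = float(value[deg_digits:])
    decimal = degrees + minutes / 60.0
    # Southern and western hemispheres are negative by convention.
    return -decimal if hemisphere in ("S", "W") else decimal

# Hypothetical $GPGGA fields: latitude 3955.1234 N, longitude 11623.5678 E
lat = nmea_to_decimal("3955.1234", "N")
lon = nmea_to_decimal("11623.5678", "E")
```
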
Based on any of the above embodiments, fig. 2 is a schematic flowchart of a mobile robot control method according to another embodiment of the present invention, and as shown in fig. 2, the method includes the following steps:
Step 210, sample collection.
A large amount of sample positioning information, sample images, sample action information and sample path identifiers are collected as path-planning training samples. The sample positioning information, sample images and sample action information are obtained while manually controlling the mobile robot to walk along a planned path; the sample path identifier indicates whether the mobile robot walked along the planned path. For example, the sample path identifier is positive when the mobile robot travels normally on the planned path and negative when it deviates from the planned path. The value of the sample path identifier may be determined by a preset walking rule: the shorter the time to complete the planned route, or the smaller the deviation of the sample positioning information from the planned route, the larger the value; the longer the time, or the larger the deviation, the smaller the value.
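The preset walking rule above only constrains the direction of the score (faster completion and smaller deviation give a larger value). One possible scoring function consistent with that rule is sketched below; the reference constants and the fixed negative value for off-path samples are illustrative assumptions, not specified by the patent:

```python
def path_label(elapsed_s: float, deviation_m: float, on_path: bool = True,
               elapsed_ref_s: float = 60.0, deviation_ref_m: float = 1.0) -> float:
    """Score a path-planning training sample in (0, 1]; off-path samples get -1."""
    if not on_path:
        return -1.0
    time_score = elapsed_ref_s / (elapsed_ref_s + elapsed_s)       # shorter time -> larger
    dev_score = deviation_ref_m / (deviation_ref_m + deviation_m)  # smaller deviation -> larger
    return time_score * dev_score
```

A run that finishes in 30 s with 0.2 m average deviation then scores higher than a slower run or one with larger deviation.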
Meanwhile, a large number of sample images, sample action information and sample obstacle-avoidance identifiers are collected as obstacle-avoidance training samples. The sample images and sample action information are acquired while manually controlling the mobile robot to avoid obstacles; the sample obstacle-avoidance identifier indicates whether the mobile robot successfully avoided the obstacle. For example, it is positive when the mobile robot does not collide with the obstacle and negative when it does.
It should be noted that the training samples are all-weather training samples: the sample images are captured by a visible-light camera and an infrared camera mounted on the mobile robot, so the sample images include sample RGB images and sample infrared thermal images.
Step 220 is then performed.
Step 220, model training.
To improve training speed and precision, the mean value is removed from each sample image.
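Mean removal can be sketched as follows. A per-channel mean over the whole sample set is assumed here; the patent does not specify whether the mean is computed per image or per dataset.

```python
import numpy as np

def remove_mean(images: np.ndarray) -> np.ndarray:
    """Zero-center a batch of HxWxC images by subtracting the per-channel mean."""
    mean = images.mean(axis=(0, 1, 2), keepdims=True)  # shape (1, 1, 1, C)
    return images - mean

batch = np.random.rand(8, 64, 64, 3).astype(np.float32)  # 8 dummy RGB samples
centered = remove_mean(batch)
```
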
Then, several initial models with different structures are trained based on the de-meaned sample images, the sample positioning information, the sample action information and the sample path identifiers, and the trained initial model with the highest accuracy and best effect is selected as the path planning model. During training of the path planning model, the path-planning training samples may further be divided into straight-going, left-turn and right-turn training samples based on the sample positioning information and sample images, and the initial model is then trained to obtain a road straight-going model, a road left-turn model and a road right-turn model.
Meanwhile, several initial models with different structures are trained based on the de-meaned sample images, the sample action information and the sample obstacle-avoidance identifiers, and the trained initial model with the highest accuracy and best effect is selected as the obstacle avoidance model.
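Both selections follow the same pattern: train every candidate architecture and keep the most accurate one. A generic sketch, with toy stand-ins for the architectures, training and evaluation:

```python
def select_best_model(candidates, train_fn, eval_fn):
    """Train each candidate and return the one with the highest validation accuracy."""
    best_model, best_acc = None, -1.0
    for make_model in candidates:
        model = train_fn(make_model())
        acc = eval_fn(model)
        if acc > best_acc:
            best_model, best_acc = model, acc
    return best_model, best_acc

# Toy stand-ins: each "architecture" is just a name, "training" is a no-op,
# and "evaluation" looks up a fixed accuracy.
accs = {"cnn_a": 0.91, "cnn_b": 0.87, "cnn_c": 0.94}
best, acc = select_best_model(
    candidates=[lambda n=n: n for n in accs],
    train_fn=lambda m: m,
    eval_fn=lambda m: accs[m],
)
```
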
Step 230 is then performed.
Step 230, iterative tuning.
After the action planning model, comprising the path planning model and the obstacle avoidance model, is obtained, it is judged whether tuning iteration of the action planning model is needed.
If tuning iteration is needed, the action planning model is applied to the autonomous walking of the mobile robot to obtain online optimized positioning information, optimized images, optimized action information and optimized identifiers. The optimized positioning information is the position information acquired by the positioning sensing device on the mobile robot during autonomous walking; the optimized image is the image, acquired by the visual sensing device during autonomous walking, representing the environment ahead of the walking path; the optimized action information is the action instruction output by the action planning model based on the optimized positioning information and the optimized image; and the optimized identifier indicates the result of walking along the planned path and avoiding obstacles under those action instructions. Step 220 is then executed again: the action planning model is trained on the optimized positioning information, optimized images, optimized action information and optimized identifiers, realizing iterative optimization of the action planning model. Further, the path planning model can be trained on the optimized positioning information, optimized images, optimized action information and optimized path identifiers, and the obstacle avoidance model on the optimized images, optimized action information and optimized obstacle-avoidance identifiers.
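The tuning iteration of step 230 is an alternation between online data collection during autonomous walking and retraining on the collected data. A schematic loop, with every component abstracted as a callable and a made-up round budget:

```python
def tune(model, collect_online, train, needs_tuning, max_rounds=5):
    """Alternate autonomous data collection and retraining until tuning converges."""
    for _ in range(max_rounds):
        if not needs_tuning(model):
            break
        batch = collect_online(model)  # optimized positioning/images/actions/identifiers
        model = train(model, batch)
    return model

# Toy run: the "model" is a counter that stops needing tuning once it reaches 3.
final = tune(
    model=0,
    collect_online=lambda m: None,
    train=lambda m, batch: m + 1,
    needs_tuning=lambda m: m < 3,
)
```
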
If no tuning iterations are required, step 240 is performed.
Step 240, system deployment.
The positioning sensing device, the visible light camera and the infrared camera are installed on the mobile robot to complete deployment of the mobile robot control system. Here, the positioning sensing device is a GNSS sensor, the visible light camera is a Hikvision USB camera, and the infrared camera is a Haier one. Current positioning information and a current image are obtained from the positioning sensing device and the visual sensing device respectively and input into the action planning model to obtain the current action information it outputs, and the mobile robot is controlled to walk autonomously and avoid obstacles based on that current action information.
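The deployed system then runs a sense-plan-act cycle. The sketch below abstracts the sensor readers, the action planning model and the actuator as callables; the control period is an illustrative value, not taken from the patent:

```python
import time

def control_loop(read_gnss, read_image, planner, actuate, stop, period_s=0.1):
    """Feed current positioning and image to the planner, execute its action, repeat."""
    while not stop():
        position = read_gnss()           # current positioning information
        image = read_image()             # current (RGB or infrared) image
        actuate(planner(position, image))
        time.sleep(period_s)

# Toy run with stubbed devices: three cycles, then stop.
log = []
ticks = iter(range(3))
control_loop(
    read_gnss=lambda: (39.9, 116.4),
    read_image=lambda: "frame",
    planner=lambda pos, img: "forward",
    actuate=log.append,
    stop=lambda: next(ticks, None) is None,
    period_s=0.0,
)
```
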
The method provided by the embodiment of the invention obtains the current positioning information directly through the positioning sensing device, requires neither a high-precision sensing system nor a large amount of prior knowledge, is simple to implement, lowers the entry barrier for researchers, and positions efficiently. In addition, the current action information is obtained through the pre-trained action planning model, which transfers well, adapts to most scenes, has a wide application range, and is highly stable.
Based on any of the above embodiments, experiments were performed on the method. The experimental scene was a transformer substation of about 800 m by 600 m. Data acquisition was carried out first: a Hikvision bullet camera and an infrared camera collected the daytime and nighttime road data of the whole station for training, yielding a road straight-going model, a road left-turn model, a road right-turn model and an obstacle avoidance model. Experimental results show that controlling the mobile robot with the method provided by the embodiment of the invention achieves normal walking and obstacle avoidance both in the daytime and at night, with good robustness to illumination and environmental changes.
Based on any of the above embodiments, experiments were performed on the method. The experimental scene was a transformer substation of about 400 m by 200 m. Data acquisition was carried out first: a network camera and an infrared camera collected the daytime and nighttime road data of the whole station for training, yielding a road straight-going model, a road left-turn model, a road right-turn model and an obstacle avoidance model. Experimental results show that controlling the mobile robot with the method provided by the embodiment of the invention achieves normal walking and obstacle avoidance both in the daytime and at night, with good robustness to illumination and environmental changes.
Based on any of the above embodiments, experiments were performed on the method. The experimental scene was a transformer substation of about 200 m by 100 m. Data acquisition was carried out first: a Hikvision USB camera and an infrared camera collected the daytime and nighttime road data of the whole station for training, yielding a road straight-going model, a road left-turn model, a road right-turn model and an obstacle avoidance model. Experimental results show that controlling the mobile robot with the method provided by the embodiment of the invention achieves normal walking and obstacle avoidance both in the daytime and at night, with good robustness to illumination and environmental changes.
Based on any of the above embodiments, fig. 3 is a schematic structural diagram of a mobile robot control apparatus according to an embodiment of the present invention, as shown in fig. 3, the apparatus includes an obtaining unit 310, a planning unit 320, and a control unit 330;
the acquiring unit 310 is configured to acquire current positioning information and a current image based on a positioning sensing device and a visual sensing device that are installed on the mobile robot, respectively;
the planning unit 320 is configured to input the current positioning information and the current image into an action planning model, and obtain current action information output by the action planning model; the action planning model is obtained by training based on sample positioning information, sample images, sample action information and sample identifications;
the control unit 330 is configured to control the mobile robot based on the current motion information.
The device provided by the embodiment of the invention obtains the current positioning information directly through the positioning sensing device, requires neither a high-precision sensing system nor a large amount of prior knowledge, is simple to implement, lowers the entry barrier for researchers, and positions efficiently. In addition, the current action information is obtained through the pre-trained action planning model, which transfers well, adapts to most scenes, has a wide application range, and is highly stable.
Based on any of the above embodiments, the planning unit 320 includes a path planning subunit, an obstacle avoidance subunit, and a composite subunit;
the path planning subunit is configured to input the current positioning information and the current image to a path planning model in the action planning model, and acquire first action information output by the path planning model; the path planning model is obtained by training based on the sample positioning information, the sample image, the sample action information and the sample path identifier;
the obstacle avoidance subunit is used for inputting the current image into an obstacle avoidance model in the action planning model and acquiring second action information output by the obstacle avoidance model; the obstacle avoidance model is obtained based on the sample image, the sample action information and sample obstacle avoidance identification training;
the composite subunit is configured to obtain current motion information based on the first motion information and the second motion information.
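The patent leaves open how the composite subunit merges the first and second action information. One plausible rule, assumed purely for illustration, gives priority to an evasive obstacle-avoidance action:

```python
def compose_action(first_action: str, second_action: str) -> str:
    """Combine path-planning and obstacle-avoidance outputs; evasion wins.

    "clear" is a hypothetical sentinel meaning the obstacle avoidance model
    detected no obstacle and proposes no maneuver.
    """
    return second_action if second_action != "clear" else first_action
```
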
Based on any of the above embodiments, the path planning subunit includes a selection module and a path planning module;
the selection module is used for selecting a current path model from a road straight-going model, a road left-turn model and a road right-turn model which are included in the path planning model based on the current positioning information and the current image;
and the path planning module is used for inputting the current positioning information and the current image into the current path model and acquiring the first action information output by the current path model.
Based on any of the embodiments above, the selection module is specifically configured to:
acquiring a task path based on the current positioning information and the destination information;
acquiring a target position based on the task path and the current image;
and based on the target position, selecting the current path model from the road straight-going model, the road left-turning model and the road right-turning model.
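Choosing among the three road models from the target position can be illustrated by thresholding the bearing from the robot to the target against the robot's heading; the threshold, the coordinate convention (x east, y north) and the model names are illustrative assumptions:

```python
import math

def select_path_model(current_xy, target_xy, heading_rad, turn_threshold_rad=math.pi / 6):
    """Return which road model to use based on the signed angle to the target."""
    dx = target_xy[0] - current_xy[0]
    dy = target_xy[1] - current_xy[1]
    bearing = math.atan2(dy, dx)
    # Wrap the heading error into (-pi, pi]; positive means the target is to the left.
    delta = math.atan2(math.sin(bearing - heading_rad), math.cos(bearing - heading_rad))
    if delta > turn_threshold_rad:
        return "road_left_turn_model"
    if delta < -turn_threshold_rad:
        return "road_right_turn_model"
    return "road_straight_model"
```
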
Based on any embodiment above, the apparatus further comprises an optimization unit;
the optimization unit is used for training the action planning model based on the optimized positioning information, the optimized image, the optimized action information and the optimized identification.
According to any one of the above embodiments, in the apparatus, the visual sensing device includes an infrared camera; correspondingly, the current image comprises a current infrared thermal image.
According to any one of the above embodiments, in the apparatus, the positioning sensing apparatus is a GNSS sensor.
Fig. 4 is a schematic diagram of the physical structure of an electronic device according to an embodiment of the present invention. As shown in fig. 4, the electronic device may include: a processor 401, a communication interface 402, a memory 403 and a communication bus 404, wherein the processor 401, the communication interface 402 and the memory 403 communicate with each other through the communication bus 404. The processor 401 may call a computer program stored in the memory 403 and executable on the processor 401 to execute the mobile robot control method provided by the above embodiments, for example, including: acquiring current positioning information and a current image respectively based on a positioning sensing device and a visual sensing device which are arranged on a mobile robot; inputting the current positioning information and the current image into an action planning model, and acquiring current action information output by the action planning model; the action planning model is obtained by training based on sample positioning information, sample images, sample action information and sample identifications; controlling the mobile robot based on the current motion information.
In addition, the logic instructions in the memory 403 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or make a contribution to the prior art, or may be implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
An embodiment of the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is implemented to, when executed by a processor, perform the mobile robot control method provided in the foregoing embodiments, for example, including: acquiring current positioning information and a current image respectively based on a positioning sensing device and a visual sensing device which are arranged on a mobile robot; inputting the current positioning information and the current image into an action planning model, and acquiring current action information output by the action planning model; the action planning model is obtained by training based on sample positioning information, sample images, sample action information and sample identifications; controlling the mobile robot based on the current motion information.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (7)

1. A mobile robot control method, comprising:
acquiring current positioning information and a current image respectively based on a positioning sensing device and a visual sensing device which are arranged on a mobile robot;
inputting the current positioning information and the current image into an action planning model, and acquiring current action information output by the action planning model; the action planning model is obtained by training based on sample positioning information, sample images, sample action information and sample identifications;
controlling the mobile robot based on the current motion information;
the inputting the current positioning information and the current image into an action planning model, and acquiring current action information output by the action planning model specifically includes:
inputting the current positioning information and the current image into a path planning model in the action planning models, and acquiring first action information output by the path planning model; the path planning model is obtained by training based on the sample positioning information, the sample image, the sample action information and the sample path identifier;
inputting the current image into an obstacle avoidance model in the action planning model, and acquiring second action information output by the obstacle avoidance model; the obstacle avoidance model is obtained based on the sample image, the sample action information and sample obstacle avoidance identification training;
acquiring current action information based on the first action information and the second action information;
the inputting the current positioning information and the current image into a path planning model in the action planning model to obtain first action information output by the path planning model specifically includes:
based on the current positioning information and the current image, selecting a current path model from a road straight-going model, a road left-turning model and a road right-turning model which are included in the path planning model;
inputting the current positioning information and the current image into the current path model, and acquiring the first action information output by the current path model;
based on the current positioning information and the current image, selecting a current path model from a road straight-going model, a road left-turn model and a road right-turn model which are included in the path planning model, wherein the current path model specifically comprises the following steps:
acquiring a task path based on the current positioning information and the destination information;
acquiring a target position based on the task path and the current image;
and based on the target position, selecting the current path model from the road straight-going model, the road left-turning model and the road right-turning model.
2. The method of claim 1, wherein the controlling the mobile robot based on the current motion information further comprises thereafter:
and training the action planning model based on the optimized positioning information, the optimized image, the optimized action information and the optimized identification.
3. The method according to any one of claims 1 and 2, wherein the visual sensing device comprises an infrared camera; correspondingly, the current image comprises a current infrared thermal image.
4. The method according to any one of claims 1 and 2, wherein the positioning sensing device is a GNSS sensor.
5. A mobile robot control apparatus, characterized by comprising:
the acquisition unit is used for acquiring current positioning information and a current image respectively based on a positioning sensing device and a visual sensing device which are arranged on the mobile robot;
the planning unit is used for inputting the current positioning information and the current image into an action planning model and acquiring current action information output by the action planning model; the action planning model is obtained by training based on sample positioning information, sample images, sample action information and sample identifications;
the planning unit is further configured to:
inputting the current positioning information and the current image into a path planning model in the action planning models, and acquiring first action information output by the path planning model; the path planning model is obtained by training based on the sample positioning information, the sample image, the sample action information and the sample path identifier;
inputting the current image into an obstacle avoidance model in the action planning model, and acquiring second action information output by the obstacle avoidance model; the obstacle avoidance model is obtained based on the sample image, the sample action information and sample obstacle avoidance identification training;
acquiring current action information based on the first action information and the second action information;
based on the current positioning information and the current image, selecting a current path model from a road straight-going model, a road left-turning model and a road right-turning model which are included in the path planning model;
inputting the current positioning information and the current image into the current path model, and acquiring the first action information output by the current path model;
acquiring a task path based on the current positioning information and the destination information;
acquiring a target position based on the task path and the current image;
based on the target position, selecting the current path model from the road straight-going model, the road left-turning model and the road right-turning model;
a control unit for controlling the mobile robot based on the current motion information.
6. An electronic device, comprising a processor, a communication interface, a memory and a bus, wherein the processor, the communication interface and the memory communicate with each other via the bus, and the processor can call logic instructions in the memory to execute the method according to any one of claims 1 to 4.
7. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 4.
CN201910247676.3A 2019-03-29 2019-03-29 Mobile robot control method and device Active CN109901589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910247676.3A CN109901589B (en) 2019-03-29 2019-03-29 Mobile robot control method and device


Publications (2)

Publication Number Publication Date
CN109901589A CN109901589A (en) 2019-06-18
CN109901589B true CN109901589B (en) 2022-06-07

Family

ID=66954012


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021161950A1 (en) * 2020-02-12 2021-08-19 ファナック株式会社 Robot system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103901888A (en) * 2014-03-18 2014-07-02 北京工业大学 Robot autonomous motion control method based on infrared and sonar sensors
CN107479368A (en) * 2017-06-30 2017-12-15 北京百度网讯科技有限公司 A kind of method and system of the training unmanned aerial vehicle (UAV) control model based on artificial intelligence
CN107818333A (en) * 2017-09-29 2018-03-20 爱极智(苏州)机器人科技有限公司 Robot obstacle-avoiding action learning and Target Searching Method based on depth belief network
CN108245384A (en) * 2017-12-12 2018-07-06 清华大学苏州汽车研究院(吴江) Binocular vision apparatus for guiding blind based on enhancing study
CN108320051A (en) * 2018-01-17 2018-07-24 哈尔滨工程大学 A kind of mobile robot dynamic collision-free planning method based on GRU network models
CN108960432A (en) * 2018-06-22 2018-12-07 深圳市易成自动驾驶技术有限公司 Decision rule method, apparatus and computer readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Vision navigation of mobile robots based on path recognition; Zhang Haibo et al.; Journal of Image and Graphics; 2004-07-31; pp. 853-857 *

Also Published As

Publication number Publication date
CN109901589A (en) 2019-06-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant