CN115223039A - Robot semi-autonomous control method and system for complex environment - Google Patents

Robot semi-autonomous control method and system for complex environment

Info

Publication number
CN115223039A
Authority
CN
China
Prior art keywords
point cloud
dimensional
data
depth image
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210518806.4A
Other languages
Chinese (zh)
Inventor
顾文斌 (Gu Wenbin)
徐文煜 (Xu Wenyu)
徐孝彬 (Xu Xiaobin)
苑明海 (Yuan Minghai)
裴凤雀 (Pei Fengque)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
INSTITUTE OF MARINE AND OFFSHORE ENGINEERING NANTONG HOHAI UNIVERSITY
Changzhou Campus of Hohai University
Original Assignee
INSTITUTE OF MARINE AND OFFSHORE ENGINEERING NANTONG HOHAI UNIVERSITY
Changzhou Campus of Hohai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by INSTITUTE OF MARINE AND OFFSHORE ENGINEERING NANTONG HOHAI UNIVERSITY, Changzhou Campus of Hohai University filed Critical INSTITUTE OF MARINE AND OFFSHORE ENGINEERING NANTONG HOHAI UNIVERSITY
Priority to CN202210518806.4A
Publication of CN115223039A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a robot semi-autonomous control method and system for complex environments, wherein the method comprises the following steps: acquiring pose information, three-dimensional point cloud data and depth image data of the robot; fusing the depth image with the laser radar data according to the pose information to obtain fused laser radar data, and constructing a global three-dimensional dense occupancy map from the fused laser radar data; performing point cloud segmentation on the three-dimensional point cloud data and fusing the result with the image to obtain the obstacle height estimation; performing semantic segmentation on the forward-looking color image with a deep convolutional neural network, and applying threshold discrimination to the region height data obtained by the segmentation, combined with the point cloud data corresponding to the pixels, to identify the ground road; forming a collision-free navigation path from the current position information and the constructed three-dimensional map with semantic information, for global path planning; and, on this basis, performing local path planning for dynamic obstacle avoidance during navigation.

Description

Robot semi-autonomous control method and system for complex environment
Technical Field
The invention relates to a robot semi-autonomous control method and system for complex environments, and belongs to the technical field of logistics robots.
Background
An automated guided vehicle (AGV) is a vehicle equipped with an automatic guidance device that travels along a predetermined guide path at a work site, provides safety protection and various transfer functions, and belongs to the class of wheeled mobile robots.
At present, port logistics robots generally have a low degree of intelligence and weak environment-sensing capability, and their technical bottleneck is increasingly obvious; traditional programmed and remote-controlled robots, with fixed programs and long response times, can hardly respond effectively to rapid changes in the environment.
Moreover, the operating environment of the robot is a typical unstructured scene: the outdoor scene is large and complex, the environment-recognition capability of current robots is seriously insufficient, operators must be deployed forward to work close to the machine, wireless remote control cannot provide true remote operation, and the personal safety of the operators cannot be guaranteed.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a semi-autonomous control method and system for a port logistics robot in complex environments, enabling synchronous positioning, mapping and autonomous navigation of the port logistics robot.
In order to achieve the purpose, the invention is realized by adopting the following technical scheme:
in a first aspect, the invention provides a robot semi-autonomous control method for complex environments, which comprises the following steps:
acquiring pose information, three-dimensional point cloud data and depth image data of the robot, and calibrating the three-dimensional point cloud data and the depth image data;
fusing the depth image and the three-dimensional point cloud data according to the pose information to obtain fused laser radar data, and constructing a global three-dimensional dense occupancy map from the fused laser radar data using the RTAB-Map algorithm;
performing point cloud segmentation on the three-dimensional point cloud data and fusing the result with the depth image to obtain the obstacle height estimation, thereby obtaining three-dimensional measurement data;
performing semantic segmentation on the forward-looking color image with a deep convolutional neural network, and applying threshold discrimination to the region height data obtained by the segmentation, combined with the point cloud data corresponding to the pixels, to identify the ground road and obtain scene recognition data;
adding the three-dimensional measurement data and the scene recognition data to the global three-dimensional dense occupancy map to obtain a global three-dimensional dense occupancy map with semantic information;
and performing autonomous navigation on the known map, namely the global three-dimensional dense occupancy map with semantic information.
Further, the method for acquiring the three-dimensional point cloud data and the depth image data and calibrating them comprises the following steps:
acquiring three-dimensional point cloud data through a three-dimensional laser radar, and acquiring a depth image through a laser camera;
acquiring a pixel coordinate and a three-dimensional laser radar coordinate of a laser camera;
and calibrating with a multi-triangle calibration method according to the pixel coordinates of the laser camera and the three-dimensional laser radar coordinates to obtain the camera intrinsic parameter matrix.
Further, the method for acquiring the pose data comprises the following steps:
acquiring the wheel rotation speeds, obtaining the position information of the logistics robot by integral operation, and obtaining the attitude information of the logistics robot from the difference of the wheel speeds, finally yielding the pose data of the logistics robot.
Further, the method for performing point cloud segmentation on the three-dimensional point cloud data and fusing the result with the depth image to obtain the obstacle height estimation and the three-dimensional measurement data comprises the following steps:
segmenting the point cloud with a region growing method to complete static obstacle identification of the environmental point cloud and obtain the obstacle region;
searching the corresponding region on the depth image according to the obstacle region segmented from the point cloud data and extracting an ROI;
and enlarging the ROI, and, for the point cloud data corresponding to the obstacle region found on the depth image, calculating the coordinates of the lowest and highest points of the obstacle in camera coordinates through the inverse transformation of the camera intrinsic parameter matrix, so as to obtain the height information of the obstacle.
Further, the method for performing semantic segmentation on the forward-looking color image with a deep convolutional neural network and applying threshold discrimination to the region height data obtained by the segmentation, combined with the point cloud data corresponding to the pixels, to obtain the scene recognition data comprises the following steps:
performing semantic segmentation on the depth image data with the deep convolutional neural network, and applying threshold discrimination to the region height data obtained by the segmentation, combined with the laser radar three-dimensional point cloud data corresponding to the pixel points, to identify the ground road;
and taking the ground road as the initially selected safe traveling area, identifying the other areas on the cross-section at the same distance within the safe traveling area, establishing a screening model in combination with the test conditions, and removing such areas from the safe traveling area or giving a danger warning.
Further, the method for autonomous navigation based on the known map comprises the following steps:
acquiring current position information and a target point;
forming a collision-free navigation path with a global path planning algorithm according to the global three-dimensional dense occupancy map with semantic information, the collision-free navigation path being used for global path planning;
performing local path planning with the D* heuristic path search algorithm on the basis of the global path planning, the local path planning being used for dynamic obstacle avoidance during navigation;
and searching the optimal traveling path with the D* algorithm and continuously updating it during travel until the navigation is finished.
In a second aspect, the present invention provides a robot semi-autonomous control system, comprising:
a logistics robot;
the driving device is arranged on the logistics robot chassis, is connected with the logistics robot tires and is used for driving the logistics robot to run;
the wheel type odometer is connected with the driving device and used for detecting the pose information of the logistics robot;
the three-dimensional laser radar sensor is arranged on the logistics robot and is connected with the core processor through a USB serial port; it is used for scanning the port environment to obtain three-dimensional laser radar point cloud data;
the visible light camera is arranged on the logistics robot and is connected with the core processor through a USB serial port; it is used for acquiring depth image data of the port environment;
and the core processor is arranged on the logistics robot and is respectively connected with the driving device, the wheel-type odometer, the three-dimensional laser radar sensor and the visible light camera; it is used for constructing a global three-dimensional dense occupancy map according to the pose information of the logistics robot, the three-dimensional laser radar point cloud data and the depth image data, and for performing path planning and autonomous navigation on the basis of the global three-dimensional dense occupancy map.
Further, the core processor includes the following modules:
an input module: used for acquiring pose information, three-dimensional point cloud data and depth image data of the robot, and for calibrating the three-dimensional point cloud data and the depth image data;
a map generation module: used for fusing the depth image and the three-dimensional point cloud data according to the pose information to obtain fused laser radar data, and for constructing a global three-dimensional dense occupancy map from the fused laser radar data using the RTAB-Map algorithm;
a height estimation module: used for performing point cloud segmentation on the three-dimensional point cloud data and fusing the result with the depth image to obtain the obstacle height estimation and the three-dimensional measurement data;
a scene recognition module: used for performing semantic segmentation on the forward-looking color image with a deep convolutional neural network, and for applying threshold discrimination to the region height data obtained by the segmentation, combined with the point cloud data corresponding to the pixels, to obtain scene recognition data;
a semantic map module: used for adding the three-dimensional measurement data and the scene recognition data to the global three-dimensional dense occupancy map to obtain a global three-dimensional dense occupancy map with semantic information;
and a navigation module: used for autonomous navigation on the known map, namely the global three-dimensional dense occupancy map with semantic information.
Further, the height estimation module performs point cloud segmentation on the three-dimensional point cloud data and fuses the result with the depth image to obtain the obstacle height estimation, and the method for obtaining the three-dimensional measurement data comprises:
segmenting the point cloud with a region growing method to complete static obstacle identification of the environmental point cloud and obtain the obstacle region;
searching the corresponding region on the depth image according to the obstacle region segmented from the point cloud data and extracting an ROI;
and enlarging the ROI, and, for the point cloud data corresponding to the obstacle region found on the depth image, calculating the coordinates of the lowest and highest points of the obstacle in camera coordinates through the inverse transformation of the camera intrinsic parameter matrix, so as to obtain the height information of the obstacle.
Further, the scene recognition module performs semantic segmentation on the forward-looking color image with a deep convolutional neural network and applies threshold discrimination to the region height data obtained by the segmentation, combined with the point cloud data corresponding to the pixels, and the method for obtaining the scene recognition data comprises the following steps:
performing semantic segmentation on the depth image data with the deep convolutional neural network, and applying threshold discrimination to the region height data obtained by the segmentation, combined with the laser radar three-dimensional point cloud data corresponding to the pixel points, to identify the ground road;
and taking the ground road as the initially selected safe traveling area, identifying the other areas on the cross-section at the same distance within the safe traveling area, establishing a screening model in combination with the test conditions, and removing such areas from the safe traveling area or giving a danger warning.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention addresses the defects of the traditional port logistics robot: a low degree of intelligence, weak environment perception, and weak local autonomous navigation capability. Taking complex working environments such as cargo handling, stockpiling and transfer sites in port parks as target scenes, simultaneous localization and mapping (SLAM) and autonomous navigation technologies are introduced into the port logistics robot; the robot's spatial positioning and sensing capability is built from devices such as a multi-line laser radar sensor, a visible light camera and a wheel-type odometer; research on multi-sensor information registration, calibration, fusion and positioning-navigation algorithms and the development of system software are carried out; and a semi-autonomous control system is developed that improves the degree of intelligence and the local autonomous navigation capability of the port logistics robot, providing an applicable solution for upgrading existing port logistics robots.
2. A registration and calibration technology for laser radar three-dimensional point cloud data and visible-light camera depth image data is studied, comprising synchronous acquisition control of the three-dimensional point cloud data and the depth image data, their calibration, and accurate coordinate mapping. A parameterized model is established based on the scanning parameters of the multi-line laser radar and the relative position between the visible light camera and the multi-line laser radar; on the basis of sensor data calibration, registration feature data sets formed by corresponding feature point pairs are extracted in a customized structured scene, the model parameters are solved, and a registration mapping function is established. The registration algorithm between the three-dimensional point cloud data output by the multi-line laser radar and the pixels of the visible-light depth image is studied, and a calibration and registration program module usable on practical equipment is developed. This lays the foundation for further intelligent obstacle identification, dynamic intelligent identification of the safe traveling area, and danger warning.
3. Dynamic intelligent identification of the safe traveling area and danger warning are studied. Scene recognition and three-dimensional measurement technologies based on intelligent image semantic segmentation dynamically detect parameters such as the height, width and distance of obstacles, door openings and passages within the traveling region of the port logistics robot; the safe traveling area is judged intelligently based on the physical structural parameters of the logistics robot body and the set warning parameters, providing the robot with a visual auxiliary judgment tool. Dynamic identification of obstacle and dangerous areas and the corresponding warning technologies are studied at the same time.
4. An autonomous navigation and obstacle-avoidance technology for the port logistics robot is provided. On the basis of the constructed global map, the technologies for autonomous obstacle avoidance/crossing judgment, path planning and navigation control of the logistics robot are studied. The logistics robot can intelligently plan a traveling path to the target position clicked by the operator, dynamically avoid obstacles, and navigate autonomously.
Drawings
FIG. 1 is a flow chart of the SLAM module and navigation module of the present invention;
FIG. 2 is a flow chart of the present invention for constructing a three-dimensional map with semantic information;
FIG. 3 is a flowchart of a calibration registration algorithm for the three-dimensional point cloud data and the visible light image data of the laser radar.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
The first embodiment is as follows:
This embodiment provides a robot semi-autonomous control method for complex environments, which comprises the following steps:
acquiring pose information, three-dimensional point cloud data and depth image data of the robot, and calibrating the three-dimensional point cloud data and the depth image data;
fusing the depth image and the three-dimensional point cloud data according to the pose information to obtain fused laser radar data, and constructing a global three-dimensional dense occupancy map from the fused laser radar data using the RTAB-Map algorithm;
performing point cloud segmentation on the three-dimensional point cloud data and fusing the result with the depth image to obtain the obstacle height estimation, thereby obtaining three-dimensional measurement data;
performing semantic segmentation on the forward-looking color image with a deep convolutional neural network, and applying threshold discrimination to the region height data obtained by the segmentation, combined with the point cloud data corresponding to the pixels, to identify the ground road and obtain scene recognition data;
adding the three-dimensional measurement data and the scene recognition data to the global three-dimensional dense occupancy map to obtain a global three-dimensional dense occupancy map with semantic information;
and performing autonomous navigation on the known map, namely the global three-dimensional dense occupancy map with semantic information.
The method specifically comprises the following steps:
Step 1: the three-dimensional point cloud data acquired by the laser radar and the visible light image acquired by the camera are calibrated in the customized structured scene. With the multi-triangle calibration method, the pixel coordinates of the laser camera are (u, v), the three-dimensional laser radar coordinates are (x, y, z), and the coordinate relationship between the two is as follows:
$$
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R & T \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
= \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
$$
In the formula: $f_u$ and $f_v$ are the effective focal lengths in the horizontal and vertical directions, respectively; $u_0$ and $v_0$ are the coordinates of the image center point; these constitute the camera intrinsic parameters. $R$ and $T$ are the rotation matrix and translation matrix between the camera and the laser radar, $Z_c$ is the depth of the point in the camera frame, and $m_{ij}$ are the elements of the combined intrinsic-extrinsic transformation matrix.
Since there are 12 unknown parameters, at least 6 groups of equations (6 corresponding point pairs, each contributing two equations) are needed to solve for them. The calibration-registration algorithm flow is shown in FIG. 3.
Three points are randomly selected from the scanned point cloud set and checked for collinearity; if they are collinear, they are reselected; otherwise the plane equation corresponding to the three points is calculated. The vertex coordinates of the triangle in the two-dimensional image are then determined, and each parameter value of the transformation matrix is computed based on the least squares method, as sketched below.
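By way of illustration only, the following minimal Python/numpy sketch shows this solving step under stated assumptions: correspondences between lidar points and image pixels are already available, and $m_{34}$ is normalized to 1 (a common convention that leaves eleven unknowns); all function and variable names are illustrative, not from the patent.

```python
import numpy as np

def collinear(p1, p2, p3, tol=1e-6):
    """Three 3-D points are collinear when the cross product of the two
    edge vectors is (numerically) zero."""
    return np.linalg.norm(np.cross(p2 - p1, p3 - p1)) < tol

def solve_projection(points_3d, pixels):
    """Least-squares (DLT-style) estimate of the 3x4 combined matrix
    M = (m_ij) from N >= 6 lidar-point / pixel correspondences. Each pair
    contributes two linear equations; m_34 is normalized to 1."""
    A, b = [], []
    for (x, y, z), (u, v) in zip(points_3d, pixels):
        A.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z]); b.append(u)
        A.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z]); b.append(v)
    m, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(m, 1.0).reshape(3, 4)  # rows (m_11..m_14), (m_21..), (m_31..)
```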
And 2, step: the logistics robot is placed in a port park environment with an unknown map, the movement of the robot is controlled to carry out map construction, and the specific algorithm process is as follows:
Step 2.1: the wheel rotation speeds are acquired from the wheel-type odometer connected with the driving device; the position information of the logistics robot is obtained by integral operation, and its attitude information is obtained from the difference of the wheel speeds, finally yielding the pose data of the logistics robot, as in the sketch below;
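A minimal sketch of this dead-reckoning step, assuming a known wheel base and sampling interval (names are illustrative):

```python
import numpy as np

def integrate_odometry(pose, v_left, v_right, wheel_base, dt):
    """One Euler step of differential-drive dead reckoning: the forward
    speed is the mean of the two wheel speeds and the yaw rate is their
    difference divided by the track width (wheel_base)."""
    x, y, theta = pose
    v = 0.5 * (v_left + v_right)             # linear speed [m/s]
    omega = (v_right - v_left) / wheel_base  # yaw rate [rad/s]
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta = (theta + omega * dt + np.pi) % (2.0 * np.pi) - np.pi  # wrap angle
    return x, y, theta
```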
Step 2.2: three-dimensional point cloud data are acquired from the port environment scanned by the three-dimensional laser radar sensor;
Step 2.3: RGB image data and depth image data are acquired from the port environment collected by the visible light camera;
Step 2.4: the core processor obtains a global three-dimensional dense occupancy map using the RTAB-Map algorithm from the pose data of the logistics robot provided by the wheel-type odometer, the three-dimensional point cloud data scanned by the laser radar, and the depth image data acquired by the visible light camera.
Step 3: point cloud segmentation is performed on the obtained laser radar three-dimensional point cloud data, and the result is fused with the depth image of the visible light camera to obtain the obstacle height estimation and complete the three-dimensional measurement.
For the three-dimensional point cloud segmentation, to meet the requirements of static obstacle identification and real-time navigation, a region growing method is adopted to segment the point cloud and complete static obstacle identification of the environmental point cloud. The algorithm compares the angles between point normals, merges adjacent points that satisfy a smoothness constraint, and outputs them as point clusters, each of which is considered to belong to the same plane; a sketch follows.
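A simplified sketch of such normal-based region growing, assuming the point normals and neighbor lists (e.g. from a k-d tree radius search) have already been computed; the threshold value and all names are illustrative:

```python
import numpy as np
from collections import deque

def region_growing(points, normals, neighbors, angle_thresh_deg=8.0):
    """Cluster a point cloud by growing regions over neighboring points
    whose normals differ by less than a smoothness threshold; each output
    cluster is treated as one (near-)planar patch.
    neighbors[i] lists the indices adjacent to point i."""
    cos_thresh = np.cos(np.deg2rad(angle_thresh_deg))
    labels = -np.ones(len(points), dtype=int)  # -1 = not yet visited
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in neighbors[i]:
                # abs() makes the test robust to flipped normal orientation
                if labels[j] == -1 and abs(np.dot(normals[i], normals[j])) >= cos_thresh:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels
```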
The corresponding area on the image is searched according to the obstacle region segmented from the point cloud data, and an ROI is extracted; the ROI is enlarged; and the coordinates of the lowest and highest points of the obstacle in camera coordinates are calculated through the inverse transformation of the camera intrinsic parameter matrix, giving the height information of the obstacle, as illustrated below.
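A minimal sketch of the height computation, assuming a metric depth ROI and the intrinsic parameters $v_0$ and $f_v$ from the calibration above; names are illustrative:

```python
import numpy as np

def obstacle_height(depth_roi, v0, f_v, v_top=0):
    """Back-project every valid pixel of the (enlarged) ROI through the
    inverse of the intrinsic matrix and return the vertical extent between
    the lowest and highest back-projected points, i.e. the obstacle
    height. depth_roi holds metric depths Z_c (0 = no return); v_top is
    the ROI's first row in the full image."""
    vs, us = np.nonzero(depth_roi > 0)       # pixel rows/cols with depth
    z = depth_roi[vs, us]
    y_cam = (vs + v_top - v0) * z / f_v      # Y_c = (v - v0) * Z_c / f_v
    return float(y_cam.max() - y_cam.min())  # height in metres
```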
Step 4: image semantic segmentation and safe-traveling-area judgment are performed on the obtained RGB image data of the visible light camera. The forward-looking color image data obtained by the visible light camera are semantically segmented with a deep convolutional neural network (CNN), and threshold discrimination is applied to the region height data obtained by the segmentation, combined with the laser radar three-dimensional point cloud data corresponding to the pixel points, to identify the ground road, which is taken as the initially selected safe traveling area. The other areas on the cross-section at the same distance within the safe traveling area are then identified; if narrow door openings, passages, obstacles and the like exist, a screening model is established in combination with the test conditions, and such areas are removed from the safe traveling area or a danger warning is given. A minimal sketch of the threshold-discrimination step follows.
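The sketch below assumes the network's per-pixel class map and a per-pixel height map interpolated from the registered point cloud are given; the class id and threshold value are illustrative, not from the patent:

```python
import numpy as np

def label_safe_region(seg_mask, heights, road_class=0, height_thresh=0.05):
    """Keep a pixel in the initially selected safe traveling area only if
    the CNN labels it as road AND the height associated with it by the
    registered point cloud stays below a flatness threshold.
    seg_mask: per-pixel class ids from the semantic segmentation network;
    heights: per-pixel height above the ground plane (NaN where no lidar
    return maps to the pixel)."""
    road = seg_mask == road_class                              # semantic test
    flat = np.nan_to_num(heights, nan=np.inf) < height_thresh  # geometric test
    return road & flat  # boolean mask of the safe traveling area
```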
Step 5: the three-dimensional measurement data obtained in step 3 and the scene recognition data obtained in step 4 are added to the global map to obtain a global three-dimensional dense occupancy map with semantic information.
Step 6: based on the global three-dimensional dense occupancy map with semantic information, autonomous navigation on the known map can be realized; the specific algorithm process is as follows:
Step 6.1: according to the global three-dimensional dense occupancy map with semantic information and the fused laser radar three-dimensional point cloud data and depth image data, the current position of the logistics robot is located with the adaptive Monte Carlo localization (AMCL) method; the sketch below illustrates the underlying particle-filter update;
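As a toy illustration of the principle (not the AMCL implementation actually used by the system), the measurement update and resampling of Monte Carlo localization can be sketched as follows, with map ray-casting left as an assumed callback:

```python
import numpy as np

def mcl_measurement_update(particles, weights, scan, expected_scan, sigma=0.2):
    """One measurement update plus resampling step of Monte Carlo
    localization: each pose particle (x, y, theta) is reweighted by how
    well the lidar ranges predicted from the known map match the observed
    scan, then the particle set is resampled in proportion to the weights.
    expected_scan(pose) must ray-cast the map; it is an assumed callback."""
    for i, pose in enumerate(particles):
        err = scan - expected_scan(pose)  # per-beam range error
        weights[i] *= np.exp(-0.5 * np.dot(err, err) / sigma ** 2)
    weights = weights / weights.sum()     # normalize
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```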
Step 6.2: a target point is selected on the operation interface, and a global path planning algorithm, which may include but is not limited to the A* algorithm, guides the robot to the target position according to the current position information, the constructed three-dimensional dense occupancy map with semantic information, the robot pose information provided by the wheel-type odometer, the three-dimensional point cloud data scanned by the laser radar, and the depth image data collected by the visible light camera, forming a collision-free navigation path for global path planning. To improve the search efficiency of the algorithm, after the path is generated it is checked whether a line of sight exists between separated points on the path; if so, the intermediate points between them can be deleted, yielding the collision-free navigation path used for global path planning, as in the pruning sketch below;
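A minimal sketch of this line-of-sight shortcutting on an occupancy grid (Bresenham visibility check plus greedy pruning; the grid representation and names are illustrative):

```python
def line_of_sight(grid, a, b):
    """Bresenham walk from cell a to cell b on an occupancy grid
    (grid[x][y] == 1 means occupied); True when no occupied cell lies on
    the segment."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    while (x0, y0) != (x1, y1):
        if grid[x0][y0]:
            return False
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return not grid[x1][y1]

def prune_path(grid, path):
    """Greedy shortcutting of an A* path: from each kept waypoint, jump to
    the farthest later waypoint that is still visible and delete the
    intermediate points."""
    pruned, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(grid, path[i], path[j]):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned
```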
Step 6.3: on the basis of the global path planning of step 6.2, and using the three-dimensional point cloud data scanned by the laser radar and the depth image data collected by the visible light camera, local path planning is performed with the D* heuristic path search algorithm; the local path planning is used for dynamic obstacle avoidance during navigation. Based on the identified safe traveling area, the optimal traveling path is searched with the D* algorithm and continuously updated during travel, finally realizing the autonomous navigation of the logistics robot.
The invention improves the convenience of operating the port logistics robot, its ability to cope with complex environments such as cargo handling, stockpiling and transfer sites in port parks, and the execution efficiency of port logistics tasks, thereby further reducing overall logistics costs; it is of very positive significance for ensuring port logistics production safety and improving port throughput efficiency.
Dynamic intelligent identification of the safe traveling area and danger warning are studied: scene recognition and three-dimensional measurement techniques based on intelligent image semantic segmentation dynamically detect parameters such as the height, width and distance of obstacles, door openings and passages within the traveling region of the port logistics robot; the safe traveling area is judged intelligently based on the structural characteristic parameters of the robot body and the set warning parameters, providing the robot with a visual auxiliary judgment tool. Dynamic identification of obstacles and dangerous areas and the corresponding warning techniques are studied at the same time.
A technology for autonomous navigation and obstacle avoidance within the line of sight of the port logistics robot is provided. On the basis of the constructed local map, autonomous obstacle avoidance/crossing judgment, path planning and navigation control of the robot within the visual range are realized. The robot can intelligently plan a traveling path to the target position clicked in the forward-looking image, intelligently avoid obstacles, and travel autonomously.
Example two:
This embodiment provides a logistics robot semi-autonomous control system, comprising:
a logistics robot;
the driving device is arranged on the logistics robot chassis, is connected with the logistics robot tires and is used for driving the logistics robot to run;
the wheel type odometer is connected with the logistics robot driving device and used for detecting the pose information of the logistics robot;
the three-dimensional laser radar sensor is arranged on the logistics robot and is connected with the core processor through a USB serial port; it is used for scanning the port environment to obtain three-dimensional laser radar point cloud data for subsequent map construction and autonomous navigation; specifically, the three-dimensional laser radar sensor is a LeiShen 16-line laser radar sensor;
the visible light camera is arranged on the logistics robot and is connected with the core processor through a USB serial port; it is used for acquiring depth image data and RGB image data of the port environment: the RGB images can be used for target detection to obtain semantic information, and the depth images can be used to obtain target attitude-angle and distance information and to perform calibration, registration and information fusion with the three-dimensional laser radar point cloud data; specifically, the visible light camera is a Kinect V2 camera;
the system comprises a core processor, a driving device, a wheel-type odometer, a three-dimensional laser radar sensor and a visible light camera, wherein the core processor is arranged on a logistics robot and is respectively connected with the driving device, the wheel-type odometer, the three-dimensional laser radar sensor and the visible light camera, and is used for constructing a global three-dimensional dense occupation map according to the position and posture information of the logistics robot, the three-dimensional laser radar point cloud data, the depth image data and the RGB image data, performing point cloud segmentation by adopting a region growing method on the basis to complete three-dimensional measurement, performing semantic segmentation by adopting a depth convolution neural network to complete scene recognition, and adding semantic information to the global three-dimensional dense occupation map; the global path planning algorithm can be adopted, including but not limited to an A algorithm, global path planning is carried out, a D heuristic path searching algorithm is adopted for local path planning, and the generated control signal controls the logistics robot to carry out autonomous navigation through the driving device; specifically, the core processor selects an NVIDIATX2 high-performance embedded development board which is provided with Ubuntu18.04 and an ROS Melodic operating system. The core processor comprises the following modules:
an input module: used for acquiring pose information, three-dimensional point cloud data and depth image data of the robot, and for calibrating the three-dimensional point cloud data and the depth image data;
a map generation module: used for fusing the depth image and the three-dimensional point cloud data according to the pose information to obtain fused laser radar data, and for constructing a global three-dimensional dense occupancy map from the fused laser radar data using the RTAB-Map algorithm;
a height estimation module: used for performing point cloud segmentation on the three-dimensional point cloud data and fusing the result with the depth image to obtain the obstacle height estimation and the three-dimensional measurement data;
a scene recognition module: used for performing semantic segmentation on the forward-looking color image with a deep convolutional neural network, and for applying threshold discrimination to the region height data obtained by the segmentation, combined with the point cloud data corresponding to the pixels, to obtain scene recognition data;
a semantic map module: used for adding the three-dimensional measurement data and the scene recognition data to the global three-dimensional dense occupancy map to obtain a global three-dimensional dense occupancy map with semantic information;
and a navigation module: used for autonomous navigation on the known map, namely the global three-dimensional dense occupancy map with semantic information.
This embodiment also provides a logistics robot semi-autonomous control method, based on the above system, comprising the following steps:
Step 1: the three-dimensional point cloud data acquired by the laser radar and the visible light image acquired by the camera are calibrated in a customized structured scene (namely, based on the above system). With the multi-triangle calibration method, the pixel coordinates of the laser camera are (u, v), the three-dimensional laser radar coordinates are (x, y, z), and the coordinate relationship between the two is as follows:
$$
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R & T \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
= \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
$$
In the formula: $f_u$ and $f_v$ are the effective focal lengths in the horizontal and vertical directions, respectively; $u_0$ and $v_0$ are the coordinates of the image center point; these constitute the camera intrinsic parameters. $R$ and $T$ are the rotation matrix and translation matrix between the camera and the laser radar, $Z_c$ is the depth of the point in the camera frame, and $m_{ij}$ are the elements of the combined intrinsic-extrinsic transformation matrix.
Since there are 12 unknown parameters, at least 6 groups of equations (6 corresponding point pairs, each contributing two equations) are needed to solve for them. The algorithm flow for the calibration and registration of the laser radar three-dimensional point cloud and the visible light image is shown in FIG. 3:
three points are randomly selected from the scanned point cloud set and checked for collinearity; if they are collinear, they are reselected; otherwise the plane equation corresponding to the three points is calculated. The vertex coordinates of the triangle in the two-dimensional image are then determined, and each parameter value of the transformation matrix is computed based on the least squares method.
Step 2: the logistics robot is placed in a port park environment with an unknown map, the movement of the robot is controlled to carry out map construction, and the specific algorithm process is as follows:
Step 2.1: the wheel rotation speeds are obtained from the wheel-type odometer connected with the driving device; the position information of the logistics robot is obtained by integral operation, and its attitude information is obtained from the difference of the wheel speeds, finally yielding the pose data of the logistics robot;
Step 2.2: three-dimensional point cloud data are acquired from the port environment scanned by the three-dimensional laser radar sensor;
Step 2.3: RGB image data and depth image data are acquired from the port environment collected by the visible light camera;
Step 2.4: the core processor obtains a global three-dimensional dense occupancy map using the RTAB-Map algorithm from the pose data of the logistics robot provided by the wheel-type odometer, the three-dimensional point cloud data scanned by the laser radar, and the depth image data acquired by the visible light camera.
Step 3: point cloud segmentation is performed on the obtained laser radar three-dimensional point cloud data, and the result is fused with the depth image of the visible light camera to obtain the obstacle height estimation and complete the three-dimensional measurement.
For the three-dimensional point cloud segmentation, to meet the requirements of static obstacle identification and real-time navigation, a region growing method is adopted to segment the point cloud and complete static obstacle identification of the environmental point cloud. The algorithm compares the angles between point normals, merges adjacent points that satisfy a smoothness constraint, and outputs them as point clusters, each of which is considered to belong to the same plane.
The corresponding area on the image is searched according to the obstacle region segmented from the point cloud data, and an ROI is extracted; the ROI is enlarged; and the coordinates of the lowest and highest points of the obstacle in camera coordinates are calculated through the inverse transformation of the camera intrinsic parameter matrix, giving the height information of the obstacle.
Step 4: image semantic segmentation and safe-traveling-area judgment are performed on the obtained RGB image data of the visible light camera. The forward-looking color image data obtained by the visible light camera are semantically segmented with a deep convolutional neural network (CNN), and threshold discrimination is applied to the region height data obtained by the segmentation, combined with the laser radar three-dimensional point cloud data corresponding to the pixel points, to identify the ground road, which is taken as the initially selected safe traveling area. The other areas on the cross-section at the same distance within the safe traveling area are then identified; if narrow door openings, passages, obstacles and the like exist, a screening model is established in combination with the test conditions, and such areas are removed from the safe traveling area or a danger warning is given.
Step 5: the three-dimensional measurement data obtained in step 3 and the scene recognition data obtained in step 4 are added to the global map to obtain a global three-dimensional dense occupancy map with semantic information.
Step 6: based on the global three-dimensional dense occupancy map with semantic information, autonomous navigation on the known map can be realized; the specific algorithm process is as follows:
Step 6.1: according to the global three-dimensional dense occupancy map with semantic information and the fused laser radar three-dimensional point cloud data and depth image data, the current position of the logistics robot is located with the adaptive Monte Carlo localization (AMCL) method;
Step 6.2: a target point is selected on the operation interface, and a global path planning algorithm, which may include but is not limited to the A* algorithm, guides the robot to the target position according to the current position information, the constructed three-dimensional dense occupancy map with semantic information, the robot pose information provided by the wheel-type odometer, the three-dimensional point cloud data scanned by the laser radar, and the depth image data collected by the visible light camera, forming a collision-free navigation path for global path planning. To improve the search efficiency of the algorithm, after the path is generated it is checked whether a line of sight exists between separated points on the path; if so, the intermediate points between them can be deleted, yielding the collision-free navigation path used for global path planning;
Step 6.3: on the basis of the global path planning of step 6.2, and using the three-dimensional point cloud data scanned by the laser radar and the depth image data collected by the visible light camera, local path planning is performed with the D* heuristic path search algorithm; the local path planning is used for dynamic obstacle avoidance during navigation. Based on the identified safe traveling area, the optimal traveling path is searched with the D* algorithm and continuously updated during travel, finally realizing the autonomous navigation of the logistics robot.
The invention improves the convenience of operating the port logistics robot, its ability to cope with complex environments such as cargo handling, stockpiling and transfer sites in port parks, and the execution efficiency of port logistics tasks, thereby further reducing overall logistics costs; it is of very positive significance for ensuring port logistics production safety and improving port throughput efficiency.
Synchronous map construction and positioning technology for port logistics robots in complex working environments such as cargo handling, stockpiling and transfer sites in port parks: based on the latest research results in SLAM technology, and focusing on the complex, open working environments of port parks, research and engineering design of the software and hardware system are carried out for real-time scene detection, map construction, and local and global positioning in dynamically variable unknown scenes.
The registration technology between the laser radar three-dimensional point cloud and the visible light image is studied, comprising synchronous acquisition control of the point cloud data and the visible light image, calibration of the point cloud data and the image data, and accurate coordinate mapping. A parameterized model is established based on the scanning parameters of the multi-line laser radar and the relative position between the visible light camera and the laser radar; on the basis of sensor data calibration, registration feature data sets formed by corresponding feature point pairs are extracted in a customized structured scene, the model parameters are solved, and a registration mapping function is established. The registration algorithm between the three-dimensional point cloud data output by the laser radar and the visible light image pixels is studied, and a calibration and registration program module usable on practical equipment is developed. This lays the foundation for further intelligent obstacle identification, dynamic intelligent identification of the safe traveling area, and danger warning.
Dynamic intelligent identification of the safe traveling area and danger warning are studied: scene recognition and three-dimensional measurement techniques based on intelligent image semantic segmentation dynamically detect parameters such as the height, width and distance of obstacles, door openings and passages within the traveling region of the port logistics robot; the safe traveling area is judged intelligently based on the structural characteristic parameters of the robot body and the set warning parameters, providing the robot with a visual auxiliary judgment tool. Dynamic identification of obstacles and dangerous areas and the corresponding warning techniques are studied at the same time.
An autonomous navigation and obstacle-avoidance technology within the line of sight of the port logistics robot is provided. On the basis of the constructed local map, autonomous obstacle avoidance/crossing judgment, path planning and navigation control of the robot within the visual range are realized. The robot can intelligently plan a traveling path to the target position clicked in the forward-looking image, intelligently avoid obstacles, and travel autonomously.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A robot semi-autonomous control method for complex environments, characterized by comprising the following steps:
acquiring pose information, three-dimensional point cloud data and depth image data of the robot, and calibrating the three-dimensional point cloud data and the depth image data;
fusing the depth image and the three-dimensional point cloud data according to the pose information to obtain fused laser radar data, and constructing a global three-dimensional dense occupancy map from the fused laser radar data using the RTAB-Map algorithm;
performing point cloud segmentation on the three-dimensional point cloud data and fusing the result with the depth image to obtain the obstacle height estimation, thereby obtaining three-dimensional measurement data;
performing semantic segmentation on the forward-looking color image with a deep convolutional neural network, and applying threshold discrimination to the region height data obtained by the segmentation, combined with the point cloud data corresponding to the pixels, to identify the ground road and obtain scene recognition data;
adding the three-dimensional measurement data and the scene recognition data to the global three-dimensional dense occupancy map to obtain a global three-dimensional dense occupancy map with semantic information;
and performing autonomous navigation on the known map, namely the global three-dimensional dense occupancy map with semantic information.
2. The robot semi-autonomous control method of claim 1, wherein the method for acquiring the three-dimensional point cloud data and the depth image data and calibrating them comprises:
acquiring three-dimensional point cloud data through a three-dimensional laser radar, and acquiring a depth image through a laser camera;
acquiring the pixel coordinates of the laser camera and the three-dimensional laser radar coordinates;
and calibrating by a multi-triangle calibration method according to the pixel coordinates of the laser camera and the three-dimensional laser radar coordinates, so as to obtain the camera intrinsic parameter matrix.
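
The claim names a "multi-triangle calibration method" without giving its equations. As a hedged stand-in, the following sketch recovers the pinhole intrinsics (fx, fy, cx, cy) by linear least squares from matched lidar points (already expressed in the camera frame) and their observed pixel coordinates — the projection constraint any such calibration must satisfy; it is not the patent's method, and all names are illustrative.

import numpy as np

def estimate_intrinsics(points_cam, pixels):
    """points_cam: Nx3 camera-frame points; pixels: Nx2 (u, v) observations."""
    xn = points_cam[:, 0] / points_cam[:, 2]   # normalized image coordinates
    yn = points_cam[:, 1] / points_cam[:, 2]
    ones = np.ones_like(xn)
    # u = fx*xn + cx and v = fy*yn + cy, solved independently per axis.
    fx, cx = np.linalg.lstsq(np.stack([xn, ones], 1), pixels[:, 0], rcond=None)[0]
    fy, cy = np.linalg.lstsq(np.stack([yn, ones], 1), pixels[:, 1], rcond=None)[0]
    return np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])
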
3. The robot semi-autonomous control method according to claim 1, wherein the method of acquiring the pose data comprises:
obtaining the rotating speed of the wheels and integrating it to obtain the position information of the logistics robot, and obtaining the attitude information of the logistics robot from the difference of the wheel speeds, so as to finally obtain the pose data of the logistics robot.
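
A minimal dead-reckoning sketch of this claim for a differential-drive base: position is integrated from the mean wheel speed, and heading from the wheel-speed difference. Wheel radius, track width and the timestep are assumed parameters, not values from the patent.

import math

def integrate_pose(x, y, theta, w_left, w_right, dt,
                   wheel_radius=0.08, track_width=0.40):
    """One Euler-integration step of the wheel odometry."""
    v_l = w_left * wheel_radius          # left wheel linear speed [m/s]
    v_r = w_right * wheel_radius         # right wheel linear speed [m/s]
    v = 0.5 * (v_l + v_r)                # body linear speed
    omega = (v_r - v_l) / track_width    # yaw rate from the speed difference
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
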
4. The robot semi-autonomous control method according to claim 1, wherein point cloud segmentation is performed based on the three-dimensional point cloud data and the result is fused with the depth image to obtain an obstacle height estimation, the method of obtaining the three-dimensional measurement data comprising:
segmenting the point cloud by a region-growing method to identify static obstacles in the environmental point cloud and obtain obstacle regions;
searching the corresponding region on the depth image according to the obstacle region segmented from the point cloud data, and extracting an ROI;
and enlarging the ROI, and calculating, through the inverse transformation of the camera intrinsic parameter matrix applied to the point cloud data corresponding to the obstacle region found on the depth image, the coordinates of the lowest and highest points of the obstacle in the camera coordinate system, so as to obtain the height information of the obstacle.
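
The height estimation in this claim can be illustrated as follows: back-project every valid depth pixel inside the (slightly enlarged) obstacle ROI through the inverse intrinsic relation, and take the spread of the vertical camera coordinate as the obstacle height. The ROI padding and the depth scale are assumptions for the sketch.

import numpy as np

def obstacle_height(depth, K, roi, depth_scale=0.001, pad=5):
    """roi: (u0, v0, u1, v1) box on the depth image covering the obstacle."""
    u0, v0, u1, v1 = roi
    h, w = depth.shape
    # enlarge the ROI slightly, as the claim describes
    u0, v0 = max(0, u0 - pad), max(0, v0 - pad)
    u1, v1 = min(w, u1 + pad), min(h, v1 + pad)
    patch = depth[v0:v1, u0:u1].astype(np.float64) * depth_scale
    vs, us = np.nonzero(patch > 0)
    if vs.size == 0:
        return None
    z = patch[vs, us]
    # inverse pinhole: y = (v - cy) * z / fy gives the vertical camera coordinate
    y = ((vs + v0) - K[1, 2]) * z / K[1, 1]
    return float(y.max() - y.min())   # bottom-to-top extent in metres
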
5. The robot semi-autonomous control method according to claim 1, wherein semantic segmentation is performed on the forward-looking color image based on a deep convolutional neural network and threshold discrimination is performed on the region height data obtained by the semantic segmentation in combination with the point cloud data corresponding to each pixel, the method of obtaining the scene recognition data comprising:
performing semantic segmentation on the depth image data based on a deep convolutional neural network, and performing threshold discrimination on the region height data obtained by the semantic segmentation in combination with the laser radar three-dimensional point cloud data corresponding to the pixel points, so as to identify ground roads;
and taking the ground road as the primarily selected safe traveling area, identifying the other areas on the cross-section of the safe traveling area at the same distance, establishing a screening model in combination with the test conditions, and either removing each type of area from the safe traveling area or giving a danger early warning.
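
A sketch of the threshold discrimination described above: pixels labelled "road" by the segmentation network are confirmed as safe ground only if the lidar height projected onto them stays below a threshold. The label id, the per-pixel height map and the threshold value are assumed inputs, not values from the patent.

import numpy as np

def confirm_ground(seg_mask, height_map, road_label=0, max_height=0.05):
    """seg_mask: HxW class ids; height_map: HxW per-pixel lidar heights
    (NaN where no lidar return projects). Returns a boolean safe-ground mask."""
    road = seg_mask == road_label
    has_lidar = ~np.isnan(height_map)
    flat = np.abs(height_map) <= max_height
    confirmed = road & has_lidar & flat
    # road pixels without lidar support stay provisional (the claim's
    # "primarily selected" area) -- a design choice, not from the patent
    provisional = road & ~has_lidar
    return confirmed | provisional
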
6. The robot semi-autonomous control method according to claim 1, wherein the method of performing autonomous navigation based on the known map comprises:
acquiring current position information and a target point;
forming a collision-free navigation path by a global path planning algorithm according to the global three-dimensional dense occupancy map with semantic information, the collision-free navigation path being used for global path planning;
performing local path planning on the basis of the global path planning by using the D* heuristic path search algorithm, the local path planning being used for dynamic obstacle avoidance during navigation;
and searching for the optimal travel path by using the D* algorithm, and continuously updating the optimal travel path during travel until the navigation is finished.
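
Claim 6 names the D* family of heuristic searches; a full D*/D* Lite implementation with incremental replanning is lengthy, so the sketch below shows the underlying heuristic grid search (plain A*) on a 2-D occupancy grid. In the claimed system this search would be rerun, or incrementally repaired as D* does, whenever the map updates during travel.

import heapq

def astar(grid, start, goal):
    """grid: 2-D list, 0 = free, 1 = occupied; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:                      # reconstruct the path
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if g > g_cost.get(cur, float("inf")):
            continue                         # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None                              # no collision-free path exists
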
7. A robot semi-autonomous control system, comprising:
a logistics robot;
the driving device is arranged on the logistics robot chassis, is connected with the logistics robot tires and is used for driving the logistics robot to run;
the wheel type odometer is connected with the driving device and used for detecting the pose information of the logistics robot;
the three-dimensional laser radar sensor is arranged on the logistics robot, is connected with the core processor through a USB (universal serial bus) serial port and is used for scanning a port environment to obtain three-dimensional laser radar point cloud data;
the visible light camera is arranged on the logistics robot, is connected with the core processor through a USB (universal serial bus) serial port and is used for acquiring depth image data of a port environment;
and the core processor is arranged on the logistics robot, is respectively connected with the driving device, the wheel type odometer, the three-dimensional laser radar sensor and the visible light camera, and is used for constructing a global three-dimensional dense occupancy map from the pose information, the three-dimensional laser radar point cloud data and the depth image data of the logistics robot, and for performing path planning and autonomous navigation on the basis of the global three-dimensional dense occupancy map.
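
Purely as an illustration of how the claimed components could be wired together in software, the sketch below models each sensor as a callable feeding the core processor, which in turn commands the drive. The interfaces are invented for this sketch; the patent only specifies the USB serial links and the data each device provides.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CoreProcessor:
    lidar: Callable[[], list]               # -> 3-D point cloud
    camera: Callable[[], object]            # -> depth image
    odometer: Callable[[], tuple]           # -> (x, y, theta) pose
    drive: Callable[[float, float], None]   # (v, omega) velocity command
    log: List = field(default_factory=list)

    def step(self):
        pose = self.odometer()
        cloud = self.lidar()
        depth = self.camera()
        # ...fuse cloud and depth into the occupancy map and plan here...
        self.log.append((pose, cloud, depth))
        self.drive(0.2, 0.0)                # placeholder forward command
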
8. The robot semi-autonomous control system according to claim 7, wherein the core processor comprises the following modules:
an input module: used for acquiring the pose information, the three-dimensional point cloud data and the depth image data of the robot, and calibrating the three-dimensional point cloud data and the depth image data;
a map generation module: used for fusing the depth image and the three-dimensional point cloud data according to the pose information to obtain fused laser radar data, and obtaining the global three-dimensional dense occupancy map from the fused laser radar data by using the RTAB-Map algorithm;
a height estimation module: used for performing point cloud segmentation based on the three-dimensional point cloud data, and fusing the segmentation result with the depth image to obtain obstacle height estimation and three-dimensional measurement data;
a scene recognition module: used for performing semantic segmentation on the forward-looking color image based on a deep convolutional neural network, and performing threshold discrimination on the region height data obtained by the semantic segmentation in combination with the point cloud data corresponding to each pixel, to obtain the scene recognition data;
a semantic map module: used for adding the three-dimensional measurement data and the scene recognition data into the global three-dimensional dense occupancy map to obtain the global three-dimensional dense occupancy map with semantic information;
a navigation module: used for autonomous navigation on the known map, namely the global three-dimensional dense occupancy map with semantic information.
9. The robot semi-autonomous control system according to claim 8, wherein the height estimation module performs point cloud segmentation based on the three-dimensional point cloud data and fuses the result with the depth image to obtain an obstacle height estimation, the method of obtaining the three-dimensional measurement data comprising:
segmenting the point cloud by a region-growing method to identify static obstacles in the environmental point cloud and obtain obstacle regions;
searching the corresponding region on the depth image according to the obstacle region segmented from the point cloud data, and extracting an ROI;
and enlarging the ROI, and calculating, through the inverse transformation of the camera intrinsic parameter matrix applied to the point cloud data corresponding to the obstacle region found on the depth image, the coordinates of the lowest and highest points of the obstacle in the camera coordinate system, so as to obtain the height information of the obstacle.
10. The robot semi-autonomous control system according to claim 8, wherein the scene recognition module performs semantic segmentation on the forward-looking color image based on a deep convolutional neural network and performs threshold discrimination on the region height data obtained by the semantic segmentation in combination with the point cloud data corresponding to each pixel to obtain the scene recognition data, the method comprising:
performing semantic segmentation on the depth image data based on a deep convolutional neural network, and performing threshold discrimination on the region height data obtained by the semantic segmentation in combination with the laser radar three-dimensional point cloud data corresponding to the pixel points, so as to identify ground roads;
and taking the ground road as the primarily selected safe traveling area, identifying the other areas on the cross-section of the safe traveling area at the same distance, establishing a screening model in combination with the test conditions, and either removing each type of area from the safe traveling area or giving a danger early warning.
CN202210518806.4A 2022-05-13 2022-05-13 Robot semi-autonomous control method and system for complex environment Pending CN115223039A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210518806.4A CN115223039A (en) 2022-05-13 2022-05-13 Robot semi-autonomous control method and system for complex environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210518806.4A CN115223039A (en) 2022-05-13 2022-05-13 Robot semi-autonomous control method and system for complex environment

Publications (1)

Publication Number Publication Date
CN115223039A true CN115223039A (en) 2022-10-21

Family

ID=83608242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210518806.4A Pending CN115223039A (en) 2022-05-13 2022-05-13 Robot semi-autonomous control method and system for complex environment

Country Status (1)

Country Link
CN (1) CN115223039A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115373407A (en) * 2022-10-26 2022-11-22 北京云迹科技股份有限公司 Method and device for robot to automatically avoid safety warning line
CN116630394A (en) * 2023-07-25 2023-08-22 山东中科先进技术有限公司 Multi-mode target object attitude estimation method and system based on three-dimensional modeling constraint
CN116630394B (en) * 2023-07-25 2023-10-20 山东中科先进技术有限公司 Multi-mode target object attitude estimation method and system based on three-dimensional modeling constraint

Similar Documents

Publication Publication Date Title
US11691648B2 (en) Drivable surface identification techniques
EP3635500B1 (en) Method of navigating a vehicle and system thereof
CN111874006B (en) Route planning processing method and device
CN111399505B (en) Mobile robot obstacle avoidance method based on neural network
EP3672762B1 (en) Self-propelled robot path planning method, self-propelled robot and storage medium
CN110780305A (en) Track cone bucket detection and target point tracking method based on multi-line laser radar
CN115223039A (en) Robot semi-autonomous control method and system for complex environment
CN107422730A (en) The AGV transportation systems of view-based access control model guiding and its driving control method
CN111788102A (en) Odometer system and method for tracking traffic lights
CN108475059A (en) Autonomous vision guided navigation
CN104777835A (en) Omni-directional automatic forklift and 3D stereoscopic vision navigating and positioning method
US20230063845A1 (en) Systems and methods for monocular based object detection
US11189049B1 (en) Vehicle neural network perception and localization
CN111469127A (en) Cost map updating method and device, robot and storage medium
CN111930125A (en) Low-cost obstacle detection device and method suitable for AGV
CN116576857A (en) Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar
Kim et al. Autonomous mobile robot localization and mapping for unknown construction environments
CN114488185A (en) Robot navigation system method and system based on multi-line laser radar
CN113158779B (en) Walking method, walking device and computer storage medium
Valente et al. Evidential SLAM fusing 2D laser scanner and stereo camera
CN113282088A (en) Unmanned driving method, device and equipment of engineering vehicle, storage medium and engineering vehicle
CN116578081A (en) Unmanned transport vehicle pointing and stopping method based on perception
US20230056589A1 (en) Systems and methods for generating multilevel occupancy and occlusion grids for controlling navigation of vehicles
CN115755888A (en) AGV obstacle detection system with multi-sensor data fusion and obstacle avoidance method
CN115438430A (en) Mining area vehicle driving stability prediction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination