CN112925326B - AGV obstacle avoidance method based on data fusion of laser radar and depth camera - Google Patents


Info

Publication number
CN112925326B
CN112925326B
Authority
CN
China
Prior art keywords
laser radar
data
agv
obstacle
depth camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110130478.6A
Other languages
Chinese (zh)
Other versions
CN112925326A
Inventor
李勇 (Li Yong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siyang People's Hospital
Original Assignee
Siyang People's Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siyang People's Hospital
Priority to CN202110130478.6A
Publication of CN112925326A
Application granted
Publication of CN112925326B
Status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 … specially adapted to land vehicles
    • G05D1/0212 … with means for defining a desired trajectory
    • G05D1/0214 … in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 … involving a learning process
    • G05D1/0231 … using optical position detecting means
    • G05D1/0238 … using obstacle or wall sensors
    • G05D1/024 … using obstacle or wall sensors in combination with a laser
    • G05D1/0246 … using a video camera in combination with image processing means
    • G05D1/0276 … using signals provided by a source external to the vehicle

Abstract

The invention discloses an AGV obstacle avoidance method based on data fusion of a laser radar and a depth camera. The system comprises a controller in which a data processing unit, a decision unit and a control unit are arranged; an AGV carrying a laser radar and a depth camera at its front end, together with a common camera; and the laser radar, the depth camera and the common camera are each in signal connection with the controller for signal transmission. The method scans and tracks obstacles in the AGV's direction of travel by combining laser radar data with depth camera data: the sensor data are preprocessed, corresponding data nodes are established in the system for data transmission, and the controller communicates with these nodes to obtain the different data streams. The data processing unit runs the processing algorithm on the sensor data, the decision unit determines the AGV's current situation from the processing result, and the avoidance scheme is transmitted to the bottom-layer motion control unit, which completes the avoidance action.

Description

AGV obstacle avoidance method based on data fusion of laser radar and depth camera
Technical Field
The invention relates to an automatic drug delivery system, in particular to an AGV obstacle avoidance method based on data fusion of a laser radar and a depth camera.
Background
As an emerging technology of recent decades, intelligent robots and their automation technologies have developed continuously and entered people's lives. To serve people better, robots have evolved from assembly robots on workshop production lines to indoor sweeping robots, restaurant service robots and the like. Robotics is becoming an indispensable part of human life.
However, existing obstacle avoidance schemes for intelligent robots suffer from relying on a single sensor. This singleness means the robot cannot cope well with complex environments: the acquired data do not carry enough information to support the obstacle avoidance operation. A multi-line laser radar would provide multi-dimensional information and improve the robot's adaptability, but it greatly increases hardware cost.
Disclosure of Invention
Aiming at these technical problems, this technical scheme provides an AGV obstacle avoidance method based on data fusion of a laser radar and a depth camera. It brings automatic-driving techniques into the indoor environment and improves the AGV's adaptability and responsiveness to obstacles, effectively solving the problems above.
Technical scheme
An AGV obstacle avoidance method based on data fusion of a laser radar and a depth camera uses: a controller installed on the AGV for data processing; a laser radar installed on the intelligent robot for monitoring the surrounding environment; and hardware devices for obtaining a depth image and a point cloud image, the laser radar and the hardware devices being connected with the controller for signal transmission. A data processing unit, a decision unit and a control unit are arranged in the controller. The hardware devices capable of obtaining the depth image and the point cloud image comprise a depth camera installed at the front end of the AGV for scanning and tracking obstacles in the AGV's direction of travel, and a common camera installed on the AGV for collecting doorplate images. The laser radar, the depth camera and the common camera are each in signal connection with the controller for signal transmission. After receiving the data uploaded by the laser radar, the depth camera and the common camera, the data processing unit of the controller preprocesses and integrates the data; the result obtained after integration is transmitted to the decision unit, which determines how to control the AGV to avoid the current situation and transmits the avoidance scheme to the bottom-layer motion control unit, and the motion control unit controls the node to complete the avoidance action.
Furthermore, the laser radar adopts a single-line laser radar or a multi-line laser radar; the single line laser radar can adopt RPLIDAR S1 single line laser radar;
the RPLIDAR S1 single-line laser radar is used for judging the angle information of the obstacle and judging the movement direction of the obstacle by combining with the depth camera; the RPLIDAR S1 lidar data and the depth camera data are used to accomplish real-time tracking of obstacles.
Further, the depth camera is a ZED depth camera; the ZED depth camera is used for obtaining object distance information in a certain view range and is combined with a laser radar to position an object.
Further, the controller adopts an NVIDIA Jetson TX2 control board, whose software platform is the Ubuntu operating system.
Furthermore, the NVIDIA TX2 control board reserves an interface for communication with the lower-layer control system; the two are in signal connection through a serial port and/or wirelessly, the wireless modes including WIFI, Bluetooth and the like.
Further, after receiving data uploaded by the laser radar, the depth camera and the common camera, a data processing unit of the controller performs preprocessing and integration processing on the data; transmitting the result obtained after the integration processing to a decision unit, determining how to control the AGV to avoid the current situation by the decision unit, transmitting an avoidance scheme to a motion control unit at the bottom layer, and controlling a node by the motion control unit to finish an avoidance action; the method comprises the following specific steps:
the method comprises the following steps: collecting corresponding data by a laser radar and a depth camera, transmitting the data to a system in a wired mode, and establishing corresponding nodes in the system for receiving and transmitting information;
step two: the data processing unit searches for a needed data node, establishes a communication link, receives and processes the sensor information;
step three: firstly, judging obstacles in a certain view by data transmitted by a depth camera, screening the obstacles in the advancing direction and judging the direction of the obstacles relative to the AGV;
step four: Mapping the azimuth data of the obstacle to the corresponding angle of the laser radar, which then completes the subsequent tracking of the obstacle;
step five: Comprehensively judging whether to perform an avoidance action according to the movement direction and the relative distance of the obstacle;
step six: and transmitting the avoiding action information to a bottom layer control unit to complete the avoiding of the obstacle.
Further, the third step comprises the following specific steps: after receiving the data uploaded by the laser radar and the depth camera, the data processing unit preprocesses them as follows. From the viewing-angle parameters of the depth camera, its detection wide angle is 110 degrees and the image uses 1080P resolution, i.e. the image size is 3840 x 1080, so one degree corresponds to about 35 pixels. The condition of obstacles on the road ahead is obtained by identifying the depth image of the scene.
Further, the fourth step specifically comprises: the laser radar continuously scans the surrounding environment, with its 0° direction always pointing opposite to the AGV's direction of travel. Through its 90° and 270° readings it obtains its position D_L relative to the left wall and D_R relative to the right wall. Let the vehicle body have length h and width l. Since the medicine delivery vehicle is required to judge and react to objects within 1.5 meters ahead while advancing, the detection field angle α is obtained from the vehicle width l:

α = 2·arctan((l/2) / 1.5)  (formula 1)
thus obtaining the required depth-data obstacle avoidance range corresponding to α, expressed in pixel columns of the depth image:

1980 − 35·α/2 ≤ x ≤ 1980 + 35·α/2
The depth data in this region are acquired from the depth camera and classified according to distance: points whose distance in the image lies within d_1 ± d_σ are counted and their coordinates stored in array A_d1; points whose distance lies within d_2 ± d_σ are counted and their coordinates stored in array A_d2; and so on, yielding a series of arrays.
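The preprocessing above (formula 1, the pixel window and the distance binning) can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the bin centres, tolerance d_σ and vehicle width used in the example are hypothetical values.

```python
import math

PIX_PER_DEG = 35   # ~3840 px / 110 deg, as stated in the text
CENTER_PX = 1980   # image centre column used by the patent's formulas

def detection_field_angle(vehicle_width_m, lookahead_m=1.5):
    """Formula 1: field angle (degrees) needed to cover the vehicle
    width at the 1.5 m judgment distance."""
    return 2 * math.degrees(math.atan((vehicle_width_m / 2) / lookahead_m))

def pixel_window(alpha_deg):
    """Pixel columns of the depth image falling inside the field angle."""
    half = alpha_deg / 2 * PIX_PER_DEG
    return int(CENTER_PX - half), int(CENTER_PX + half)

def bin_by_distance(depth_points, centers, tol):
    """Group (x, y, d) depth points into arrays A_d1, A_d2, ... whose
    distance d lies within centre +/- tol (the d_sigma of the text)."""
    arrays = {c: [] for c in centers}
    for x, y, d in depth_points:
        for c in centers:
            if abs(d - c) <= tol:
                arrays[c].append((x, y))
                break
    return arrays
```

For a 0.6 m wide vehicle this gives a field angle of roughly 22.6°, i.e. a window of about 790 pixel columns around the image centre.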
Further, when the decision unit judges whether to avoid an obstacle, it performs the obstacle avoidance operation only on obstacles whose distance is within 1.5 m.
Further, step five comprehensively judges whether to perform an avoidance action according to the movement direction and relative distance of the obstacle. The specific operation is as follows: after obtaining the depth arrays uploaded by the data processing unit, the decision unit performs the obstacle avoidance operation in combination with the laser radar data.
First, obtain the set of depth arrays {A_d1, A_d2, …, A_dn} together with their corresponding angles (0° of the depth camera's view corresponds to 125° of the laser radar) and distances d_1, d_2, …, d_n. The obstacle avoidance process is explained as follows:
When an obstacle appears within the detection field angle α, let the set of its points be A and take the mean x_A of the abscissas in A. If x_A > 1980, the obstacle is judged to be on the right side of the medicine delivery vehicle; otherwise it is on the left. Suppose x_A < 1980; from the correspondence between viewing angle and pixels, the deviation angle α_A of the obstacle relative to the AGV's straight-ahead direction is obtained:

α_A = (1980 − x_A) / 35  (formula 2)
It is stipulated that the delivery vehicle first attempts obstacle avoidance to the right. Combining the deflection angle with the obstacle distance d_A at this moment, the distance l_A that the AGV needs to move to the right is obtained:

l_A = d_A·tan α_A + d_safe  (formula 3)

where d_safe is a set safety distance.
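Formulas 2 and 3 can be written out directly. Note that formula 3 is reconstructed here with the reading l_A = d_A·tan α_A + d_safe, and the 0.3 m default for d_safe is a made-up example value.

```python
import math

PIX_PER_DEG = 35
CENTER_PX = 1980

def deflection_angle_deg(x_mean):
    """Formula 2: alpha_A = (1980 - x_A) / 35, the obstacle's deviation
    from the AGV's straight-ahead direction (positive = left of centre)."""
    return (CENTER_PX - x_mean) / PIX_PER_DEG

def rightward_shift(alpha_a_deg, d_a_m, d_safe_m=0.3):
    """Formula 3, read here as l_A = d_A * tan(alpha_A) + d_safe:
    the obstacle's lateral offset plus a safety margin."""
    return d_a_m * math.tan(math.radians(abs(alpha_a_deg))) + d_safe_m
```

An obstacle centred 350 pixels left of image centre at 1.5 m thus deflects 10° off axis and requires a rightward shift of a little over half a metre.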
At this moment the laser radar data are collected and distance sensing is performed within the laser radar's detection angle interval on the right side (the interval formula appears only as an image in the original). From the laser radar angle data it is judged whether an obstacle appears within l_A to the right of the AGV, using a per-angle threshold l_laser (the criterion formula is likewise shown only as an image). If every laser radar reading in the angle interval is larger than the l_laser corresponding to its angle, the right side of the AGV can be judged free of obstacles and the vehicle avoids to the right while driving. When avoidance to the right is not possible, then by left-right symmetry (derivation omitted) the laser radar senses the corresponding angle interval on the left side of the AGV to ensure that no obstacle is present there. By adjusting the position of the medicine delivery AGV, the obstacle is moved out of the travel route.
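Since the patent's angle interval and threshold l_laser survive only as images, the following sketch assumes a plausible geometry: with the lidar's 0° pointing backwards, the right-hand strip of width l_A over the next 1.5 m ahead is swept by beams between 180° and 270°, and a beam at angle 180° + φ must return a range no shorter than the distance min(l_A/sin φ, 1.5/cos φ) at which it exits that strip. This is an assumption, not the patent's exact criterion.

```python
import math

def right_side_clear(scan, l_a, lookahead=1.5):
    """Check that the strip of width l_a to the AGV's right over the
    next `lookahead` metres is free of lidar echoes. `scan` maps angle
    in degrees (0 = backwards, 180 = forwards, 270 = pure right) to
    range in metres."""
    for angle_deg, r in scan.items():
        if not 180 < angle_deg < 270:
            continue
        phi = math.radians(angle_deg - 180)   # angle off the forward axis
        # distance along this beam at which it leaves the strip
        exit_r = min(l_a / math.sin(phi), lookahead / math.cos(phi))
        if r < exit_r:                        # echo falls inside the strip
            return False
    return True
```

The left-side check is the mirror image over the 90°..180° interval, as the text notes by symmetry.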
Further, when a plurality of obstacles appear within the deflection angle range α_A, obstacle avoidance is performed first for the nearer obstacle and then for the farther one.
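The ordering rule above amounts to sorting the detected obstacles by distance; a trivial sketch, where the (deflection°, distance m) tuples are a hypothetical representation:

```python
def avoidance_order(obstacles):
    """Obstacles as (deflection_deg, distance_m) tuples; the rule above
    says: avoid the nearer obstacle first, then the farther one."""
    return sorted(obstacles, key=lambda ob: ob[1])
```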
(III) Advantageous effects
Compared with the prior art, the AGV obstacle avoidance method based on data fusion of a laser radar and a depth camera has the following beneficial effects:
(1) By combining laser radar data and depth camera data, the field of view of the binocular depth camera mainly detects obstacles in the AGV's direction of travel, which is also the main direction of judgment. The laser radar establishes a 2D plane model of the environment and judges its own position in the map, while also tracking the front obstacle detected by the binocular camera, ensuring that the AGV avoids the obstacle accurately and improving its obstacle avoidance capability. Fusing the two data sources yields the optimal judgment scheme, guarantees smooth operation of the system, increases its operating speed and ensures execution of tasks.
(2) The technical scheme improves the AGV's ability to recognize and avoid obstacles while reducing hardware cost, and guarantees the safety of the AGV while executing tasks. Meanwhile, the multi-sensor data fusion and data modularization techniques can help the system expand further functions in later work, which is of great significance for the development of the AGV.
Drawings
FIG. 1 is a schematic block diagram of the overall architecture of the system of the present invention.
Fig. 2 is a schematic block diagram of the hardware connections of the present invention.
Fig. 3 is a schematic block diagram of an obstacle avoidance system node in the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. The described embodiments are only some embodiments of the invention, not all embodiments. Various modifications and improvements of the technical solutions of the present invention may be made by those skilled in the art without departing from the design concept of the present invention, and all of them should fall into the protection scope of the present invention.
Example 1:
as shown in fig. 1-3, an AGV obstacle avoidance method based on data fusion of a laser radar and a depth camera includes a controller installed on an AGV for data processing, the controller employs an NVIDIA Jetson TX2 control panel, and a data processing unit, a decision unit and a control unit are arranged in the controller; the modeling platform of the NVIDIA Jetson TX2 control board is a Ubuntu operating system.
The system also comprises a laser radar which is arranged on the AGV and used for monitoring the surrounding environment and hardware equipment which can obtain a depth image and a point cloud image, wherein the hardware equipment for the depth image and the point cloud image comprises a depth camera which is arranged at the front end of the AGV and used for scanning and tracking a barrier in the moving direction of the AGV and a common camera which is arranged on the AGV and used for collecting a doorplate image; the laser radar, the depth camera and the common camera are respectively in signal connection with the controller for signal transmission;
after receiving data uploaded by the laser radar, the depth camera and the common camera, a data processing unit of the controller carries out preprocessing and integration processing on the data; and transmitting the result obtained after the integration processing to a decision unit, determining how to control the AGV to avoid the current situation by the decision unit, transmitting an avoidance scheme to a bottom motion control unit, and controlling the node to complete the avoidance action by the motion control unit.
In the present embodiment, the laser radar is an RPLIDAR S1 single line laser radar; the RPLIDAR S1 single-line laser radar is used for judging the angle information of the obstacle and judging the movement direction of the obstacle by combining with the depth camera; the RPLIDAR S1 lidar data and the depth camera data are used to accomplish real-time tracking of obstacles.
The depth camera is a ZED depth camera; the ZED depth camera is used for obtaining object distance information in a certain view range and is combined with a laser radar to position an object.
The controller adopts an NVIDIA Jetson TX2 control board whose software platform is the Ubuntu operating system. The control board reserves an interface for communication with the lower-layer control system; the two are in signal connection through a serial port and/or wirelessly, the wireless modes including WIFI, Bluetooth and the like.
After receiving data uploaded by the laser radar, the depth camera and the common camera, a data processing unit of the controller carries out preprocessing and integration processing on the data; transmitting the result obtained after the integration processing to a decision unit, determining how to control the AGV to avoid the current situation by the decision unit, transmitting an avoidance scheme to a motion control unit at the bottom layer, and controlling a node by the motion control unit to finish an avoidance action; the method comprises the following specific steps:
the method comprises the following steps: collecting corresponding data by a laser radar and a depth camera, transmitting the data to a system in a wired mode, and establishing corresponding nodes in the system for receiving and transmitting information;
step two: the data processing unit searches for a needed data node, establishes a communication link, receives and processes the sensor information;
step three: firstly, judging obstacles in a certain view by data transmitted by a depth camera, screening the obstacles in the advancing direction and judging the direction of the obstacles relative to the AGV;
After receiving the data uploaded by the laser radar and the depth camera, the data processing unit first processes the depth image and calculates the size of the detection field angle from the camera's detection wide angle and the obstacle avoidance distance. The data within the field angle are classified by distance, regions of similar distance are grouped into one data set, and the image within the field angle is traversed to obtain the number of obstacles on the delivery vehicle's road.
From the viewing-angle parameters of the depth camera, its detection wide angle is 110 degrees and the image uses 1080P resolution, i.e. the image size is 3840 x 1080, so one degree corresponds to about 35 pixels; the condition of obstacles on the road ahead is obtained by identifying the depth image of the scene.
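Under the pixel correspondence above and the mapping stated later in the text (0° of the camera's view corresponds to 125° of the lidar), a depth-image column can be converted to a lidar angle. The direction of increase is an assumption of this sketch:

```python
PIX_PER_DEG = 35             # ~3840 px / 110 deg wide angle
CAM_EDGE_AT_LIDAR_DEG = 125  # camera-view 0 deg <-> lidar 125 deg (per the text)

def pixel_to_lidar_angle(x_pixel):
    """Map a depth-image column to the lidar angle of the same direction.
    With the lidar's 0 deg pointing backwards, the camera's 110 deg span
    covers lidar 125..235 deg, so the image centre lands near 180 deg
    (straight ahead)."""
    return CAM_EDGE_AT_LIDAR_DEG + x_pixel / PIX_PER_DEG
```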
step four: Mapping the azimuth data of the obstacle to the corresponding angle of the laser radar, which then completes the subsequent tracking of the obstacle;
processing the data according to the sequence of near to far, obtaining the mean value of the abscissa of the data set through calculation, and obtaining the deflection angle of the barrier relative to the central axis of the distribution vehicle through the mean value and the distance;
The laser radar continuously scans the surrounding environment, with its 0° direction always pointing opposite to the AGV's direction of travel. Through its 90° and 270° readings it obtains its position D_L relative to the left wall and D_R relative to the right wall. Let the vehicle body have length h and width l. Since the medicine delivery vehicle is required to judge and react to objects within 1.5 meters ahead while advancing, the detection field angle α is obtained from the vehicle width l:

α = 2·arctan((l/2) / 1.5)  (formula 1)
thus obtaining the required depth-data obstacle avoidance range corresponding to α, expressed in pixel columns of the depth image:

1980 − 35·α/2 ≤ x ≤ 1980 + 35·α/2

The depth data in this region are acquired from the depth camera and classified according to distance: points whose distance in the image lies within d_1 ± d_σ are counted and their coordinates stored in array A_d1; points whose distance lies within d_2 ± d_σ are counted and their coordinates stored in array A_d2; and so on, yielding a series of arrays. When the obstacle avoidance judgment is made, only obstacles whose distance is within 1.5 m are subjected to the obstacle avoidance operation.
Step five: Comprehensively judging whether to perform an avoidance action according to the movement direction and the relative distance of the obstacle;
After obtaining the depth arrays uploaded by the data processing unit, the decision unit performs the obstacle avoidance operation in combination with the laser radar data. First, obtain the set of depth arrays {A_d1, A_d2, …, A_dn} together with their corresponding angles (0° of the depth camera's view corresponds to 125° of the laser radar) and distances d_1, d_2, …, d_n. The obstacle avoidance process is explained as follows:
When an obstacle appears within the detection field angle α, let the set of its points be A and take the mean x_A of the abscissas in A. If x_A > 1980, the obstacle is judged to be on the right side of the medicine delivery vehicle; otherwise it is on the left. Suppose x_A < 1980; from the correspondence between viewing angle and pixels, the deviation angle α_A of the obstacle relative to the AGV's straight-ahead direction is obtained:
α_A = (1980 − x_A) / 35  (formula 2)
It is stipulated that the delivery vehicle first attempts obstacle avoidance to the right. Combining the deflection angle with the obstacle distance d_A at this moment, the distance l_A that the AGV needs to move to the right is obtained:

l_A = d_A·tan α_A + d_safe  (formula 3)

where d_safe is a set safety distance.
At this moment the laser radar data are collected and distance sensing is performed within the laser radar's detection angle interval on the right side (the interval formula appears only as an image in the original). From the laser radar angle data it is judged whether an obstacle appears within l_A to the right of the AGV, using a per-angle threshold l_laser (the criterion formula is likewise shown only as an image). If every laser radar reading in the angle interval is larger than the l_laser corresponding to its angle, the right side of the AGV can be judged free of obstacles and the vehicle avoids to the right while driving. When avoidance to the right is not possible, then by left-right symmetry (derivation omitted) the laser radar senses the corresponding angle interval on the left side of the AGV to ensure that no obstacle is present there. By adjusting the position of the medicine delivery AGV, the obstacle is moved out of the travel route.
When a plurality of obstacles appear within the deflection angle range α_A, obstacle avoidance is performed first for the nearer obstacle and then for the farther one.
Step six: and transmitting the avoiding action information to a bottom layer control unit to complete the avoiding of the obstacle.
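Step six only says that the avoidance information is transmitted to the bottom-layer control unit; the patent does not disclose the wire format. The frame below (header byte, direction, shift in millimetres, checksum) is therefore entirely hypothetical and only illustrates the idea of handing the decision to the motion controller over a serial link:

```python
import struct

def pack_avoid_cmd(direction, shift_m):
    """Hypothetical serial frame for the bottom-layer motion controller:
    header byte 0xAA, direction (b'L'/b'R'), lateral shift in millimetres
    as a little-endian uint16, then a one-byte modular checksum. The
    patent does not specify the real protocol."""
    assert direction in (b'L', b'R')
    payload = direction + struct.pack('<H', round(shift_m * 1000))
    checksum = sum(payload) & 0xFF
    return b'\xAA' + payload + bytes([checksum])
```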

Claims (5)

1. An AGV obstacle avoidance method based on data fusion of a laser radar and a depth camera, using: a controller installed on an AGV for data processing; a laser radar installed on the intelligent robot for monitoring the surrounding environment; and hardware devices for obtaining a depth image and a point cloud image, the laser radar and the hardware devices being connected with the controller for signal transmission; characterized in that: a data processing unit, a decision unit and a control unit are arranged in the controller; the hardware devices capable of obtaining the depth image and the point cloud image comprise a depth camera installed at the front end of the AGV for scanning and tracking obstacles in the AGV's direction of travel, and a common camera installed on the AGV for collecting doorplate images; the laser radar, the depth camera and the common camera are each in signal connection with the controller for signal transmission; after receiving the data uploaded by the laser radar, the depth camera and the common camera, the data processing unit of the controller preprocesses and integrates the data; the result obtained after integration is transmitted to the decision unit, which determines how to control the AGV to avoid the current situation and transmits the avoidance scheme to the bottom-layer motion control unit, and the motion control unit controls the node to complete the avoidance action; the method comprises the following specific steps:
the method comprises the following steps: collecting corresponding data by a laser radar and a depth camera, transmitting the data to a system in a wired mode, and establishing corresponding nodes in the system for receiving and transmitting information;
step two: the data processing unit searches for a needed data node, establishes a communication link, receives and processes the sensor information;
step three: firstly, judging obstacles in a certain view by data transmitted by a depth camera, screening the obstacles in the advancing direction and judging the direction of the obstacles relative to the AGV;
after receiving the data uploaded by the laser radar and the depth camera, the data processing unit preprocesses them as follows: from the viewing-angle parameters of the depth camera, its detection wide angle is 110 degrees and the image uses 1080P resolution, i.e. the image size is 3840 x 1080, so one degree corresponds to about 35 pixels; the condition of obstacles on the road ahead is obtained by identifying the depth image of the scene;
step four: mapping the azimuth data of the obstacle onto the corresponding angular position of the laser radar, so that the laser radar completes the subsequent tracking of the obstacle;
the laser radar continuously scans the surrounding environment, with its 0° direction always pointing opposite to the AGV's direction of travel; the position D_L of the laser radar relative to the left wall and the position D_R relative to the right wall are obtained from the laser radar values at 90° and 270°; with the vehicle body length set to h and the width to l, and the medicine delivery vehicle required to judge and react to objects within 1.5 m in front of it while travelling, the detection field angle α is obtained from the vehicle width l:

α = 2·arctan(l / (2 × 1.5)) (formula 1)
the required obstacle avoidance range of the depth data is thus the pixel-column interval corresponding to α, namely [1980 − 17.5α, 1980 + 17.5α], with α in degrees and using the 35 pixels-per-degree correspondence;
the depth data within this region are acquired to obtain the depth camera data, and the data are classified by distance: the points whose distance in the image lies within d_1 ± d_σ are counted and their coordinates stored in the array A_d1; the points whose distance in the image lies within d_2 ± d_σ are counted and their coordinates stored in the array A_d2; and so on, so that a series of arrays is obtained;
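The distance-binning step above can be sketched as follows. This is a minimal illustration, not part of the claims; the helper name and the plain-list image representation are assumptions.

```python
def bin_depth_pixels(depth_m, centers_m, d_sigma):
    """Classify depth pixels into distance bins d_i +/- d_sigma and collect
    the (row, col) coordinates of each bin into its own array, in the manner
    of the claim's A_d1, A_d2, ... arrays."""
    bins = {d_i: [] for d_i in centers_m}
    for r, row in enumerate(depth_m):
        for c, depth in enumerate(row):
            for d_i in centers_m:
                if abs(depth - d_i) <= d_sigma:
                    bins[d_i].append((r, c))
    return bins

# Toy example: a 2x3 "depth image" in metres
depth_image = [[1.0, 1.5, 2.0],
               [1.1, 2.1, 3.0]]
arrays = bin_depth_pixels(depth_image, centers_m=[1.0, 2.0], d_sigma=0.15)
```

Here `arrays[1.0]` collects the coordinates of pixels near 1.0 m and `arrays[2.0]` those near 2.0 m; pixels falling in no bin (such as the 1.5 m point) are discarded.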
step five: comprehensively judging whether to perform evasion actions according to the movement direction and the relative distance of the obstacle; the specific operation mode is as follows: after the depth array uploaded by the data processing unit is obtained, the decision unit carries out obstacle avoidance operation by combining with the laser radar data;
first, the set of depth arrays {A_d1, A_d2, ..., A_dn} and the corresponding angles and distances d_1, d_2, ..., d_n are obtained; for the angular correspondence, 0° of the depth camera's view corresponds to 125° of the laser radar; the obstacle avoidance procedure is explained as follows:
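The angular correspondence just stated (0° of the camera view mapping to 125° of the laser radar) can be sketched as a constant offset. This is an illustrative sketch, not part of the claims; a linear mapping between the two angular frames is assumed.

```python
# 0 degrees of the depth camera's view corresponds to 125 degrees of the
# laser radar, per the claim; a constant offset is assumed for the mapping.
CAMERA_ZERO_AT_LIDAR_DEG = 125.0

def camera_to_lidar_angle(camera_angle_deg: float) -> float:
    """Map a depth-camera view angle (degrees) into the lidar angular frame."""
    return (CAMERA_ZERO_AT_LIDAR_DEG + camera_angle_deg) % 360.0
```

Note this is consistent with the claim's geometry: the camera's 55-degree centreline maps to 180 degrees of the lidar, which faces forward since the lidar's 0° points opposite to the direction of travel.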
when an obstacle appears within the detection field angle α, let A be its set of points and x_A the mean of the abscissae in A; if x_A > 1980, the obstacle is judged to be on the right side relative to the medicine delivery vehicle, otherwise it appears on the left side; suppose x_A < 1980; the deflection angle α_A of the obstacle relative to the forward direction of the AGV's travel is then obtained from the correspondence between view angle and pixel points:

α_A = (1980 − x_A) / 35 (formula 2)
it is stipulated that the delivery vehicle first attempts the obstacle avoidance operation to the right; combining the deflection angle with the distance d_A at this moment, the distance l_A that the AGV needs to travel to the right is obtained:

l_A = 2·tan(α_A) + d_safe (formula 3)

where d_safe is a preset safety distance;
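Formulas 2 and 3 above can be transcribed directly. This is an illustrative sketch, not part of the claims; the function names are assumptions, and formula 3 is reproduced as stated in the source, including its constant factor of 2.

```python
import math

def deflection_angle_deg(x_a_mean: float) -> float:
    """Formula 2: alpha_A = (1980 - x_A) / 35, in degrees
    (35 pixel points per degree, reference column 1980)."""
    return (1980.0 - x_a_mean) / 35.0

def rightward_distance(alpha_a_deg: float, d_safe: float) -> float:
    """Formula 3 as stated: l_A = 2 * tan(alpha_A) + d_safe,
    where d_safe is the preset safety distance."""
    return 2.0 * math.tan(math.radians(alpha_a_deg)) + d_safe
```

For example, an obstacle whose mean column is 1630 lies 10 degrees to the left of the reference column, and with a zero deflection angle the rightward distance reduces to the safety margin alone.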
at this moment the laser radar data are collected and distance sensing is performed within the corresponding angular detection interval of the laser radar; from the angle data of the laser radar it is judged whether an obstacle appears within the range l_A to the right of the AGV, the criterion being that every laser radar datum within that angular interval must be larger than the threshold l_laser corresponding to its angle; when this holds, it can be judged that there is no obstacle on the right side of the AGV, and the avoidance is performed by driving to the right; when avoidance on the right side cannot be performed, then, by left-right symmetry (the derivation is not repeated), the laser radar senses the corresponding interval on the left side of the AGV to ensure that no obstacle appears on the left side; the obstacle is moved out of the travel route by adjusting the position of the medicine delivery vehicle AGV;
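The side-clearance check described in step five can be sketched as below. This is an illustrative sketch only: the claim gives the angular interval and the per-angle threshold l_laser as formula images, so a simple lateral-clearance rule is assumed here (a return at angle θ from the travel axis clears a corridor of half-width l_A when range·sin θ > l_A), along with the scan representation and function name.

```python
import math

def side_clear(scan, window_deg, l_a):
    """Return True if no lidar return inside the angular window intrudes
    into a corridor of lateral half-width l_a beside the AGV.

    scan       -- iterable of (angle_deg, range_m) lidar returns
    window_deg -- (lo, hi) interval; lo is taken as the travel axis
    l_a        -- required lateral clearance, in metres
    """
    lo, hi = window_deg
    for angle_deg, range_m in scan:
        if lo < angle_deg <= hi:
            # lateral offset of this return from the travel axis
            lateral = range_m * math.sin(math.radians(angle_deg - lo))
            if lateral <= l_a:
                return False  # a return lies inside the corridor
    return True

# Example: travel axis at 180 degrees (the lidar's 0 degrees points backwards),
# checking one side for a 0.5 m corridor
blocked_scan = [(190.0, 1.0)]  # 1.0 * sin(10 deg) ~ 0.17 m, inside the corridor
clear_scan = [(190.0, 5.0)]    # 5.0 * sin(10 deg) ~ 0.87 m, outside the corridor
```

The threshold l_A / sin θ implied by this rule plays the role the claim assigns to l_laser; the true formula in the patent may differ.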
step six: the avoidance action information is transmitted to the bottom-layer control unit to complete the avoidance of the obstacle.
2. The AGV obstacle avoidance method based on data fusion of a laser radar and a depth camera according to claim 1, characterized in that: the laser radar is a single-line laser radar or a multi-line laser radar, and the single-line laser radar may be an RPLIDAR S1 single-line laser radar;
the RPLIDAR S1 single-line laser radar is used to judge the angle information of an obstacle and, in combination with the depth camera, to judge the movement direction of the obstacle; the RPLIDAR S1 laser radar data and the depth camera data are used to accomplish real-time tracking of obstacles.
3. The AGV obstacle avoidance method based on data fusion of a laser radar and a depth camera according to claim 1, characterized in that: the depth camera is a ZED depth camera; the ZED depth camera is used to obtain object distance information within a certain field of view and, in combination with the laser radar, to position an object.
4. The AGV obstacle avoidance method based on data fusion of a laser radar and a depth camera according to claim 1, characterized in that: the controller is an NVIDIA Jetson TX2 control board, whose development platform is the Ubuntu operating system.
5. The AGV obstacle avoidance method based on data fusion of a laser radar and a depth camera according to claim 1, characterized in that: when a plurality of obstacles appear within the deflection angle α_A range, obstacle avoidance is performed first for the nearer obstacle and then for the farther obstacle.
CN202110130478.6A 2021-01-29 2021-01-29 AGV obstacle avoidance method based on data fusion of laser radar and depth camera Active CN112925326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110130478.6A CN112925326B (en) 2021-01-29 2021-01-29 AGV obstacle avoidance method based on data fusion of laser radar and depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110130478.6A CN112925326B (en) 2021-01-29 2021-01-29 AGV obstacle avoidance method based on data fusion of laser radar and depth camera

Publications (2)

Publication Number Publication Date
CN112925326A CN112925326A (en) 2021-06-08
CN112925326B true CN112925326B (en) 2022-04-08

Family

ID=76168865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110130478.6A Active CN112925326B (en) 2021-01-29 2021-01-29 AGV obstacle avoidance method based on data fusion of laser radar and depth camera

Country Status (1)

Country Link
CN (1) CN112925326B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690149B (en) * 2022-09-27 2023-10-20 江苏盛利智能科技有限公司 Image fusion processing system and method for display

Citations (1)

Publication number Priority date Publication date Assignee Title
CN104076363A (en) * 2014-06-26 2014-10-01 广东工业大学 Rapid automatic guided vehicle obstacle detection method based on multiple sensors

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US20100076599A1 (en) * 2008-09-20 2010-03-25 Steven Jacobs Manually driven determination of a region of interest (roi) or a path of interest (poi) for a robotic device
CN104965202B (en) * 2015-06-18 2017-10-27 奇瑞汽车股份有限公司 Obstacle detection method and device
CN107831777B (en) * 2017-09-26 2020-04-10 中国科学院长春光学精密机械与物理研究所 Autonomous obstacle avoidance system and method for aircraft and aircraft

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN104076363A (en) * 2014-06-26 2014-10-01 广东工业大学 Rapid automatic guided vehicle obstacle detection method based on multiple sensors

Also Published As

Publication number Publication date
CN112925326A (en) 2021-06-08

Similar Documents

Publication Publication Date Title
JP3895238B2 (en) Obstacle detection apparatus and method
WO2020258721A1 (en) Intelligent navigation method and system for cruiser motorcycle
WO2021254367A1 (en) Robot system and positioning navigation method
CN103065323B (en) Subsection space aligning method based on homography transformational matrix
CN108536149A (en) A kind of automatic driving vehicle avoidance obstacle device and control method based on the paths Dubins
CN108873914A (en) A kind of robot autonomous navigation system and method based on depth image data
CN106569225A (en) Range-finding sensor based real-time obstacle avoidance method of driveless car
CN111258311A (en) Obstacle avoidance method of underground mobile robot based on intelligent vision
CN110162066A (en) Intelligent cruise vehicle control
CN113850102A (en) Vehicle-mounted vision detection method and system based on millimeter wave radar assistance
CN112925326B (en) AGV obstacle avoidance method based on data fusion of laser radar and depth camera
Jun et al. Autonomous driving system design for formula student driverless racecar
CN115880368A (en) Method and system for detecting obstacle of power grid inspection unmanned aerial vehicle and storage medium
CN115223039A (en) Robot semi-autonomous control method and system for complex environment
CN205537632U (en) Impact system is prevented to mobile concrete pump cantilever crane
CN110696003A (en) Water side rescue robot based on SLAM technology and deep learning
CN116352722A (en) Multi-sensor fused mine inspection rescue robot and control method thereof
CN115755888A (en) AGV obstacle detection system with multi-sensor data fusion and obstacle avoidance method
CN113081525B (en) Intelligent walking aid equipment and control method thereof
CN114690779A (en) Positioning method and device based on robot vision recognition
JP7358108B2 (en) Information processing device, information processing method and program
CN113833042A (en) Skid-steer loader and unmanned driving method thereof
CN113759787A (en) Unmanned robot for closed park and working method
Orlov et al. Machine vision system for autonomous agricultural vehicle
CN210551274U (en) Robot monitoring system and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant