CN113970753A - Unmanned aerial vehicle positioning control method and system based on laser radar and visual detection


Info

Publication number
CN113970753A
Authority
CN
China
Prior art keywords
unit
positioning
image
unmanned aerial vehicle
Legal status
Granted
Application number
CN202111157405.2A
Other languages
Chinese (zh)
Other versions
CN113970753B (en)
Inventor
单梁 (Shan Liang)
马苗苗 (Ma Miaomiao)
周逸飞 (Zhou Yifei)
吴志强 (Wu Zhiqiang)
陈佳 (Chen Jia)
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202111157405.2A
Publication of CN113970753A
Application granted
Publication of CN113970753B
Current legal status: Active

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders


Abstract

The invention discloses an unmanned aerial vehicle positioning control method and system based on laser radar and visual detection. The system comprises a main control unit, a laser radar unit, a visual detection unit, a laser ranging unit, a storage unit and a communication unit; the laser radar unit, the visual detection unit, the laser ranging unit, the storage unit and the communication unit are all connected with the main control unit. The working method comprises the following steps: the laser radar unit obtains point cloud information of the ground target object through radar scanning and derives coarse positioning information from it; the visual detection unit performs fine positioning recognition on images acquired in real time and extracts features of the target object to obtain accurate positioning information of the target image; the main control unit transmits the received positioning information, together with the unmanned aerial vehicle height measured by the laser ranging unit, to external equipment through the communication unit. By using multiple sensors to perform coarse and fine positioning of the ground target respectively, the invention realizes positioning control of the unmanned aerial vehicle, reduces the drift error caused by inertial measurement units, and improves the positioning control precision of the unmanned aerial vehicle.

Description

Unmanned aerial vehicle positioning control method and system based on laser radar and visual detection
Technical Field
The invention relates to the technical field of unmanned aerial vehicle positioning, in particular to an unmanned aerial vehicle positioning control system based on laser radar and visual detection.
Background
In recent years, the market for unmanned aerial vehicle applications has grown rapidly, and unmanned aerial vehicles are now used in many fields such as aerial photography, plant protection, transportation and security. As unmanned aerial vehicles become part of everyday life, the application market imposes increasingly strict requirements on high-precision positioning technology.
Mature unmanned aerial vehicle positioning technologies include global satellite navigation, optical-flow positioning and wireless ranging. Satellite navigation degrades sharply when buildings block or reflect satellite signals; in optical-flow positioning, accuracy depends heavily on the effectiveness of the optical-flow algorithm and its data-fusion algorithm; and wireless ranging, being built on wireless communication, suffers severe occlusion and interference during ranging, is sensitive to body attitude changes and has poor dynamic performance, so it is not universally applicable in unmanned aerial vehicle systems.
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle positioning control system based on laser radar and visual detection, together with its working method, which adopts a multi-sensor positioning mode to reduce drift errors caused by inertial measurement units and improve positioning accuracy.
The technical solution of the invention is as follows: an unmanned aerial vehicle positioning control method based on laser radar and visual detection, using a system that comprises a main control unit, a laser radar unit, a visual detection unit, a laser ranging unit, a storage unit and a communication unit;
the method comprises the following steps:
step 1, a laser ranging unit measures the flying height value of an unmanned aerial vehicle in real time and transmits height information to a main control unit;
step 2, the laser radar unit carries out coarse positioning, and the main control unit transmits coarse positioning information to external equipment through the communication unit;
step 3, the laser ranging unit transmits the measured flying height value of the unmanned aerial vehicle to the main control unit, and the main control unit transmits the received radar coarse positioning information to external equipment;
step 4, the vision detection unit carries out fine positioning identification on the image acquired in real time, carries out feature extraction on the target object to obtain accurate positioning information of the target image, and transmits the fine positioning information to the main control unit:
step 4.1, aligning the camera to the positioning cross on the upper surface of the target container, and collecting the obtained image as a target image;
step 4.2, images are collected in real time; to avoid overexposed and under-exposed images, a brightness-adaptive algorithm is designed: first the image brightness value L is calculated, where B, G, R are the pixel means of the BGR channels; the brightness range is then divided into regions, each brightness region being assigned its own gain alpha and offset beta, where alpha adjusts the contrast of the image and beta adjusts its brightness; f(i, j) denotes the pixel in row i, column j of the original image and g(i, j) the corresponding output pixel; applying the mapping below completes the brightness processing of the collected images (a code sketch follows the two formulas);
L=0.299*R+0.587*G+0.114*B
g(i,j)=α*f(i,j)+β
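A minimal sketch of this brightness-adaptive step in Python with OpenCV and NumPy is given below; the three brightness regions and their (alpha, beta) values are illustrative assumptions, since the patent does not publish its region table.

    import cv2
    import numpy as np

    def adapt_brightness(img_bgr):
        # per-channel pixel means of the BGR image
        b, g, r = (img_bgr[:, :, c].mean() for c in range(3))
        lum = 0.299 * r + 0.587 * g + 0.114 * b   # brightness value L
        # assumed region boundaries and gains (not published in the patent)
        if lum < 60:              # too dark: stretch contrast, raise brightness
            alpha, beta = 1.5, 40.0
        elif lum > 180:           # overexposed: compress contrast, lower brightness
            alpha, beta = 0.7, -30.0
        else:                     # acceptable brightness: pass through
            alpha, beta = 1.0, 0.0
        # g(i, j) = alpha * f(i, j) + beta, saturated to the 8-bit range
        out = np.clip(img_bgr.astype(np.float32) * alpha + beta, 0, 255)
        return out.astype(np.uint8)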
step 4.3, the brightness-processed image undergoes gray processing, Gaussian filtering, adaptive threshold binarization and an opening operation using algorithms from the opencv library; the threshold thresh for the adaptive binarization is selected by averaging the image pixels after Gaussian filtering, where sum is the filtered image pixel value and preS is the pixel value of the original image, and a deviation delta in the range 20 to 30 is added (a code sketch follows the formula);
thresh = (sum - preS)/(N - 1) + delta, with N the number of pixels in the filter window
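A sketch of this preprocessing chain with standard opencv calls follows; the 5x5 Gaussian kernel and the 15-pixel threshold block size are assumed values, and cv2.adaptiveThreshold's constant C plays the role of the deviation delta.

    import cv2

    def preprocess(img, delta=25):
        if img.ndim == 3:
            img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # gray processing
        blur = cv2.GaussianBlur(img, (5, 5), 0)           # Gaussian filtering
        # OpenCV thresholds at (local Gaussian-weighted mean - C), so passing
        # -delta adds the deviation delta to the local mean as in the formula
        binary = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY, 15, -delta)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
        return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)  # opening operation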
step 4.4, a PP-YOLO target detection algorithm is adopted; its mean average precision (mAP) across target classes reaches 45.2% on the COCO data set, with a detection speed of 72.9 FPS (frames per second); because only the cross target of a specific object needs to be recognized, PP-YOLO's multi-class detection is simplified into single-class detection, which further improves detection speed;
step 4.5, a cross positioning mark data set is created and expanded with image enhancement, namely rotation, flipping, cropping, scaling and deformation, to improve the generalization ability of PP-YOLO model training (see the sketch below);
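For illustration, the sketch below produces flipped, rotated, and crop-zoomed variants with plain OpenCV; the angle range and crop margin are assumed values, and deformation (e.g. a random affine warp) would follow the same pattern.

    import random
    import cv2

    def augment(img):
        h, w = img.shape[:2]
        out = [img, cv2.flip(img, 1)]                        # horizontal flip
        rot = cv2.getRotationMatrix2D((w / 2, h / 2),
                                      random.uniform(-30, 30), 1.0)
        out.append(cv2.warpAffine(img, rot, (w, h)))         # random rotation
        crop = img[h // 8: h - h // 8, w // 8: w - w // 8]   # central crop
        out.append(cv2.resize(crop, (w, h)))                 # zoom back to size
        return out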
step 4.6, adopting the trained model to carry out target detection and identifying the cross target;
step 4.7, performing cross contour detection on the identified cross target to obtain the cross center coordinate B(x1, y1);
step 4.8, comparing the cross center coordinate A(x0, y0) of the target image with the cross center coordinate B(x1, y1) of the image acquired and detected in real time to obtain the offsets in the x and y directions, which serve as horizontal position positioning information (a sketch of steps 4.7-4.8 follows);
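A sketch of steps 4.7-4.8 follows, under the assumption that the cross mark is the largest contour in the binarized detection region (the patent does not say how the contour is selected):

    import cv2

    def cross_offset(binary_img, target_center):
        contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        cross = max(contours, key=cv2.contourArea)   # assumed: cross = largest blob
        m = cv2.moments(cross)
        if m["m00"] == 0:
            return None
        x1, y1 = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centre B(x1, y1)
        x0, y0 = target_center                              # stored centre A(x0, y0)
        return x1 - x0, y1 - y0                             # offsets in x and y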
step 5, the main control unit associates the fine positioning information and the height value of the unmanned aerial vehicle measured by the laser ranging unit with the actual position of the unmanned aerial vehicle, and transmits the positioning information to external equipment through the communication unit;
step 6, the main control unit stores the coarse positioning information, the fine positioning information and the height information in the storage unit.
Furthermore, the laser radar unit performs coarse positioning, specifically:
2.1, the 16-line radar scans from a fixed distance directly above the container to obtain and store point cloud information of the target position;
2.2, an output matrix T is obtained with the ICP (iterative closest point) algorithm, from which the position information follows, where R is the rotation transformation matrix and p is the translation vector; the current point cloud X acquired in real time by the 16-line radar is mapped onto the target point cloud Y by the homogeneous transformation T (a code sketch follows the formulas);
T = [R p; 0 1]
T*X = Y
E(R, p) = (1/N) * Σ_i ||y_i - (R*x_i + p)||^2, summed over the N corresponding point pairs x_i in X, y_i in Y
(R, p) = argmin E(R, p)
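As an illustration of this coarse-positioning step, the Open3D implementation of point-to-point ICP can stand in for the patent's unspecified ICP code; the 5 cm correspondence distance and the identity initial guess are assumed values.

    import numpy as np
    import open3d as o3d

    def coarse_position(current_xyz, target_xyz):
        # current_xyz, target_xyz: (N, 3) float64 arrays of lidar points X and Y
        src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(current_xyz))
        dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_xyz))
        reg = o3d.pipelines.registration.registration_icp(
            src, dst, 0.05,                      # assumed 5 cm correspondence distance
            np.eye(4),                           # identity initial transform
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        T = reg.transformation                   # 4x4 homogeneous matrix [R p; 0 1]
        return T[:3, :3], T[:3, 3]               # rotation R and translation p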
Furthermore, the invention provides an unmanned aerial vehicle positioning control system, wherein the laser radar unit, the visual detection unit, the laser ranging unit, the storage unit and the communication unit are all connected with the main control unit; the laser radar unit obtains coarse positioning information according to the point cloud information; the visual detection unit carries out fine positioning identification on the image acquired in real time to obtain accurate positioning information of the target image; the laser ranging unit measures the flight height of the unmanned aerial vehicle; the main control unit receives the coarse positioning information, height information and fine positioning information; the storage unit stores the radar point cloud information, height information and image positioning information, and the stored data can be read out through external equipment; the communication unit connects the main control unit to external communication equipment.
Furthermore, in the laser radar unit, an RS-LIDAR-16 radar is installed at the bottom of the quad-rotor unmanned aerial vehicle body, and the 16-line radar scans the area below the unmanned aerial vehicle, with a container as the target object.
Further, in the visual detection unit, an OpenMV4 camera is mounted at the bottom of the unmanned aerial vehicle body, and the image captured while the camera is aligned with the positioning cross on the upper surface of the target container serves as the target image.
Further, the main control unit adopts an STM32F407 microcontroller.
Furthermore, the laser radar unit adopts an RS-LIDAR-16 type laser radar, communicates with the main control unit through the W5500 Ethernet module, is connected through an SPI interface, and transmits the positioning information to the main control unit by utilizing a TCP/IP protocol.
Furthermore, the laser ranging unit adopts a PANFEE L1-40 laser ranging sensor.
Further, the communication unit comprises a WIFI module, and an ATK-ESP8226 model is adopted.
Advantages of the invention
Compared with the prior art, the invention has two notable advantages. (1) Strong illumination adaptability: the brightness-adaptive algorithm solves the difficulty of target recognition under different illumination intensities, rather than working only under a single brightness condition; and because a traditional image processing algorithm is adopted, processing is fast and completes within 50 ms. (2) High precision: compared with outdoor unmanned aerial vehicle positioning systems that use GPS, the combined laser-and-vision approach reduces GPS signal transmission delay and large positioning errors; the laser ranging sensor and laser radar sensor are highly accurate and achieve precise positioning of target objects beyond 1.5 m, while the visual positioning unit achieves high-precision positioning of target objects at close range (0-1.5 m) and acquires accurate horizontal position information of the target object.
Drawings
FIG. 1 is a schematic structural view of the present invention;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 is a laser radar positioning flow chart of the method of the present invention;
FIG. 4 is a flow chart of the visual positioning method of the present invention;
FIG. 5 is a diagram illustrating the effect of the pre-processing in the visual positioning according to the method of the present invention;
FIG. 6 is a graph of the effect of the invention after PP-YOLO detection in visual positioning.
Detailed Description
A working method of an unmanned aerial vehicle positioning control system based on laser radar and visual detection comprises the following steps:
step 1, a laser radar unit scans in real time to obtain point cloud information under a current position, the current point cloud information is processed to obtain coarse positioning information, and the coarse positioning information is transmitted to a main control unit;
step 2, the main control unit transmits the coarse positioning information to the external equipment through the communication unit;
step 3, the laser ranging unit transmits the measured flying height value of the unmanned aerial vehicle to the main control unit;
step 4, the vision detection unit carries out fine positioning identification on the image acquired in real time, carries out feature extraction on the target object to obtain accurate positioning information of the target image, and transmits the fine positioning information to the main control unit;
step 5, the main control unit associates the fine positioning information and the height value of the unmanned aerial vehicle measured by the laser ranging unit with the actual position of the unmanned aerial vehicle, and transmits the positioning information to external equipment through the communication unit;
step 6, the main control unit stores the coarse positioning information, the fine positioning information and the height information in the storage unit.
Further, the radar positioning in step 1 includes:
installing an RS-LIDAR-16 radar at the bottom of the body of the quad-rotor unmanned aerial vehicle, scanning an area below the unmanned aerial vehicle by using a 16-line radar, and using a container as a target object;
scanning with the 16-line radar from a fixed distance directly above the container to obtain and store point cloud information of the target position;
obtaining an output matrix T with the ICP (iterative closest point) algorithm, from which the position information follows, where R is the rotation transformation matrix and p is the translation vector; the current point cloud X acquired in real time by the 16-line radar is mapped onto the target point cloud Y by the homogeneous transformation T.
T = [R p; 0 1]
T*X = Y
E(R, p) = (1/N) * Σ_i ||y_i - (R*x_i + p)||^2, summed over the N corresponding point pairs x_i in X, y_i in Y
(R, p) = argmin E(R, p)
Further, the visual positioning in step 4 includes:
installing a camera with the model of OpenMV4 at the bottom of the unmanned aerial vehicle body, and acquiring an image as a target image when the camera is aligned to a positioning cross on the upper surface of a target container;
acquiring images in real time; to avoid overexposed and under-exposed images, a brightness-adaptive algorithm is designed: first the image brightness value L is calculated, where B, G, R are the pixel means of the BGR channels; the brightness range is then divided into regions, each with its own gain alpha and offset beta, where alpha adjusts the contrast of the image and beta adjusts its brightness; f(i, j) denotes the pixel in row i, column j of the original image and g(i, j) the corresponding output pixel; the mapping below completes the brightness processing of the acquired images;
L=0.299*R+0.587*G+0.114*B
g(i,j)=α*f(i,j)+β
carrying out gray processing, Gaussian filtering, adaptive threshold binarization and an opening operation on the brightness-processed image using algorithms from the opencv library, where the threshold thresh for the adaptive binarization is selected by averaging the image pixels after Gaussian filtering, sum being the filtered image pixel value and preS the pixel value of the original image, with a deviation delta in the range 20 to 30 added;
thresh = (sum - preS)/(N - 1) + delta, with N the number of pixels in the filter window
the method adopts a PP-YOLO target detection algorithm, the Average value of the Average detection precision of each type of target, namely mAP (mean Average precision) reaches 45.2% on a COCO data set, the detection speed reaches 72.9FPS (Frames Per second), and the cross target identification of a specific object is adopted, so that the detection of a plurality of types of objects on the PP-YOLO is simplified into the detection of a single type of object, and the detection speed is improved;
creating a cross positioning mark data set and expanding it with image enhancement, namely rotation, flipping, cropping, scaling and deformation, to improve the generalization ability of PP-YOLO model training;
adopting a trained model to detect a target and identifying a cross target;
performing cross contour detection on the identified cross target to obtain the cross center coordinate B(x1, y1);
comparing the cross center coordinate A(x0, y0) of the target image with the cross center coordinate B(x1, y1) of the image acquired and detected in real time to obtain the offsets in the x and y directions, which serve as horizontal position positioning information.
The invention is described in further detail below with reference to the figures and specific examples.
Examples
With reference to fig. 1, the unmanned aerial vehicle positioning control system based on laser radar and visual detection comprises a main control unit, a laser radar unit, a visual detection unit, a laser ranging unit, a storage unit and a communication unit, with the laser radar unit, the visual detection unit, the laser ranging unit, the storage unit and the communication unit all connected to the main control unit; the laser radar unit obtains coarse positioning information according to the point cloud information; the visual detection unit carries out fine positioning identification on the image acquired in real time to obtain accurate positioning information of the target image; the laser ranging unit measures the flight height of the unmanned aerial vehicle; the main control unit receives the coarse positioning information, height information and fine positioning information; the storage unit stores the radar point cloud information, height information and image positioning information, and the stored data can be read out through external equipment; the communication unit connects the main control unit to external communication equipment.
The main control unit adopts an STM32F407 microcontroller; this controller has abundant peripheral resources and an efficient processing speed, can connect to multiple sensor modules, and meets the system's requirements for real-time response, speed and stability. The laser radar unit adopts an RS-LIDAR-16 laser radar, which communicates with the main control unit through a W5500 Ethernet module connected over an SPI interface and transmits positioning information to the main control unit using the TCP/IP protocol. The visual detection unit adopts an OpenMV4 camera, which integrates a 32-bit ARM-architecture STM32H7 processor with an FPU running at 480 MHz, speeding up the image processing algorithms; its image sensor uses an OV7725 photosensitive chip, and the unit communicates with the main control unit through a serial port to transmit the positioning information. The laser ranging unit adopts a PANFEE L1-40 laser ranging sensor with a 40 m measuring range, 1 mm resolution and ±1 mm repeatability, a stable and accurate sensor that communicates with the main control unit through a serial port to transmit the height information. The communication unit comprises a WIFI module of model ATK-ESP8226; the module supports a UART data communication interface, so the main control unit can connect to it through a serial port to exchange data with external communication equipment.
With reference to fig. 2, the working method of the positioning control system of the unmanned aerial vehicle with the laser radar and the visual inspection of the invention comprises the following steps:
step 1, a laser ranging unit measures the flying height value of an unmanned aerial vehicle in real time and transmits height information to a main control unit;
step 2, the main control unit makes a judgment according to the height information; if the height value is greater than 2 m, the radar positioning unit works (see fig. 3), specifically as follows:
2.1, the 16-line radar scans from a fixed distance of 2 m directly above the container to obtain and store point cloud information of the target position;
2.2, an output matrix T is obtained with the ICP (iterative closest point) algorithm, from which the position information follows, where R is the rotation transformation matrix and p is the translation vector; the current point cloud X acquired in real time by the 16-line radar is mapped onto the target point cloud Y by the homogeneous transformation T.
T = [R p; 0 1]
T*X = Y
E(R, p) = (1/N) * Σ_i ||y_i - (R*x_i + p)||^2, summed over the N corresponding point pairs x_i in X, y_i in Y
(R, p) = argmin E(R, p)
step 3, the main control unit transmits the received radar coarse positioning information to external equipment;
step 4, if the height value received by the main control unit is less than or equal to 2 m, the visual positioning unit works (see fig. 4), specifically as follows:
step 4.1, aligning the camera to the positioning cross on the upper surface of the target container, and collecting the obtained image as a target image;
step 4.2, images are collected in real time; to avoid overexposed and under-exposed images, a brightness-adaptive algorithm is designed: first the image brightness value L is calculated, where B, G, R are the pixel means of the BGR channels; the brightness range is then divided into regions, each with its own gain alpha and offset beta, where alpha adjusts the contrast of the image and beta adjusts its brightness; f(i, j) denotes the pixel in row i, column j of the original image and g(i, j) the corresponding output pixel; the mapping below completes the brightness processing of the collected images;
L=0.299*R+0.587*G+0.114*B
g(i,j)=α*f(i,j)+β
step 4.3, the brightness-processed image undergoes gray processing, Gaussian filtering, adaptive threshold binarization and an opening operation using algorithms from the opencv library; the threshold thresh for the adaptive binarization is selected by averaging the image pixels after Gaussian filtering, where sum is the filtered image pixel value and preS is the pixel value of the original image, and a deviation delta in the range 20 to 30 is added;
thresh = (sum - preS)/(N - 1) + delta, with N the number of pixels in the filter window
step 4.4, after steps 4.1 to 4.3, the target image yields the preprocessed image shown in fig. 5, a binary image in which the outline of the cross target of the target object is clearly visible, which facilitates subsequent detection; the preprocessed image is fed to the PP-YOLO target detection algorithm, whose mean average precision (mAP) across target classes reaches 45.2% on the COCO data set at a detection speed of 72.9 FPS (frames per second); because only the cross target of a specific object is recognized, the multi-class detection of PP-YOLO is simplified into single-class detection, improving detection speed;
step 4.5, a cross positioning mark data set is created and expanded with image enhancement, namely rotation, flipping, cropping, scaling and deformation, to improve the generalization ability of PP-YOLO model training;
step 4.6, adopting the trained model to carry out target detection and identifying the cross target;
step 4.7, after PP-YOLO detection, cross contour detection is performed on the identified cross target to obtain the cross center coordinate B(x1, y1); as the detection effect diagram in fig. 6 shows, the cross is accurately identified from the preprocessed image of fig. 5 and its center is obtained;
step 4.8, the cross center coordinate A(x0, y0) of the target image is compared with the cross center coordinate B(x1, y1) of the image acquired and detected in real time to obtain the offsets in the x and y directions, which serve as horizontal position positioning information;
step 5, the main control unit associates the fine positioning information and the height value of the unmanned aerial vehicle measured by the laser ranging unit with the actual position of the unmanned aerial vehicle, and transmits the positioning information to external equipment through the communication unit;
step 6, the main control unit stores the coarse positioning information, the fine positioning information and the height information in the storage unit.
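Although the embodiment gives the handover only in prose, the 2 m height-based switching between radar coarse positioning and visual fine positioning can be summarized in a minimal sketch; lidar_coarse and vision_fine are hypothetical callables standing in for the routines described above.

    HANDOVER_HEIGHT_M = 2.0   # threshold from the embodiment

    def locate(height_m, lidar_coarse, vision_fine):
        # above 2 m the radar positioning unit works; at or below, the visual unit
        if height_m > HANDOVER_HEIGHT_M:
            return "coarse", lidar_coarse()   # ICP against the stored target cloud
        return "fine", vision_fine()          # cross-mark detection offsets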

Claims (9)

1. An unmanned aerial vehicle positioning control method based on laser radar and visual detection, using a system that comprises a main control unit, a laser radar unit, a visual detection unit, a laser ranging unit, a storage unit and a communication unit;
the method is characterized by comprising the following steps:
step 1, a laser ranging unit measures the flying height value of an unmanned aerial vehicle in real time and transmits height information to a main control unit;
step 2, the laser radar unit carries out coarse positioning, and the main control unit transmits coarse positioning information to external equipment through the communication unit;
step 3, the laser ranging unit transmits the measured flying height value of the unmanned aerial vehicle to the main control unit, and the main control unit transmits the received radar coarse positioning information to external equipment;
step 4, the vision detection unit carries out fine positioning identification on the image acquired in real time, carries out feature extraction on the target object to obtain accurate positioning information of the target image, and transmits the fine positioning information to the main control unit:
step 4.1, aligning the camera to the positioning cross on the upper surface of the target container, and collecting the obtained image as a target image;
step 4.2, images are acquired in real time and their brightness is processed, wherein the brightness-adaptive algorithm comprises the following steps: first the image brightness value L is calculated, where B, G, R are the pixel means of the BGR channels; the brightness range is then divided into regions, each with its own gain alpha and offset beta, where alpha adjusts the contrast of the image and beta adjusts its brightness; f(i, j) denotes the pixel in row i, column j of the original image and g(i, j) the corresponding output pixel; the mapping below completes the brightness processing of the acquired images;
L=0.299*R+0.587*G+0.114*B
g(i,j)=α*f(i,j)+β
step 4.3, the brightness-processed image undergoes gray processing, Gaussian filtering, adaptive threshold binarization and an opening operation using algorithms from the opencv library; the threshold thresh for the adaptive binarization is selected by averaging the image pixels after Gaussian filtering, where sum is the filtered image pixel value and preS is the pixel value of the original image, and a deviation delta in the range 20 to 30 is added;
thresh = (sum - preS)/(N - 1) + delta, with N the number of pixels in the filter window
step 4.4, a PP-YOLO target detection algorithm is adopted, whose mean average precision (mAP) across target classes reaches 45.2% on the COCO data set; because only the cross target of a specific object is recognized, the multi-class detection of PP-YOLO is simplified into single-class detection, and the detection speed is improved;
step 4.5, a cross positioning mark data set is created and expanded with image enhancement, namely rotation, flipping, cropping, scaling and deformation, to improve the generalization ability of PP-YOLO model training;
step 4.6, adopting the trained model to carry out target detection and identifying the cross target;
step 4.7, performing cross contour detection on the identified cross target to obtain the cross center coordinate B(x1, y1);
step 4.8, comparing the cross center coordinate A(x0, y0) of the target image with the cross center coordinate B(x1, y1) of the image acquired and detected in real time to obtain the offsets in the x and y directions, which serve as horizontal position positioning information;
step 5, the main control unit associates the fine positioning information and the height value of the unmanned aerial vehicle measured by the laser ranging unit with the actual position of the unmanned aerial vehicle, and transmits the positioning information to external equipment through the communication unit;
step 6, the main control unit stores the coarse positioning information, the fine positioning information and the height information in the storage unit.
2. The unmanned aerial vehicle positioning control method based on laser radar and visual inspection as claimed in claim 1, wherein the laser radar unit performs coarse positioning, specifically:
2.1, scanning with the 16-line radar from a fixed distance directly above the container to obtain and store point cloud information of the target position;
2.2, obtaining an output matrix T with the ICP (iterative closest point) algorithm to further obtain position information, where R is the rotation transformation matrix, p is the translation vector, and the current point cloud information X acquired in real time by the 16-line radar is mapped onto the target point cloud information Y by the homogeneous transformation T;
T = [R p; 0 1]
T*X = Y
E(R, p) = (1/N) * Σ_i ||y_i - (R*x_i + p)||^2, summed over the N corresponding point pairs x_i in X, y_i in Y
(R, p) = argmin E(R, p)
3. An unmanned aerial vehicle positioning control system using the unmanned aerial vehicle positioning control method of claim 1 or 2, wherein the laser radar unit, the vision detection unit, the laser ranging unit, the storage unit and the communication unit are all connected with a main control unit; the laser radar unit obtains coarse positioning information according to the point cloud information; the visual detection unit carries out fine positioning identification on the image acquired in real time to obtain accurate positioning information of the target image; the laser ranging unit measures the flight height of the unmanned aerial vehicle; the main control unit receives the coarse positioning information, height information and fine positioning information; the storage unit stores the radar point cloud information, height information and image positioning information, and the stored data can be read out through external equipment; the communication unit connects the main control unit to external communication equipment.
4. The unmanned aerial vehicle positioning control system of claim 3, wherein in the laser radar unit an RS-LIDAR-16 radar is mounted at the bottom of the fuselage of the quad-rotor unmanned aerial vehicle, and the 16-line radar scans the area below the unmanned aerial vehicle, using a container as the target object.
5. The unmanned aerial vehicle positioning control system according to claim 3, wherein the visual detection unit mounts an OpenMV4 camera at the bottom of the unmanned aerial vehicle body, and the image captured while the camera is aligned with the positioning cross on the upper surface of the target container serves as the target image.
6. The unmanned aerial vehicle positioning control system according to claim 3, wherein the main control unit employs an STM32F407 microcontroller.
7. The unmanned aerial vehicle positioning control system based on LIDAR and vision detection as claimed in claim 3, wherein the LIDAR unit employs an RS-LIDAR-16 model LIDAR, communicates with the main control unit via a W5500 ethernet module, uses an SPI interface connection, and transmits positioning information to the main control unit using a TCP/IP protocol.
8. The positioning control system for unmanned aerial vehicle based on laser radar and visual inspection as claimed in claim 3, wherein the laser ranging unit employs PANFEE L1-40 laser ranging sensor.
9. The lidar and vision detection based drone positioning control system according to claim 3, wherein the communication unit comprises a WIFI module, model ATK-ESP 8226.
CN202111157405.2A (priority 2021-09-30, filed 2021-09-30): Unmanned aerial vehicle positioning control method and system based on laser radar and vision detection. Active; granted as CN113970753B.

Priority Applications (1)

Application Number: CN202111157405.2A
Priority Date: 2021-09-30
Filing Date: 2021-09-30
Title: Unmanned aerial vehicle positioning control method and system based on laser radar and vision detection


Publications (2)

Publication Number Publication Date
CN113970753A 2022-01-25
CN113970753B 2024-04-30

Family

ID=79587043

Family Applications (1)

Application Number: CN202111157405.2A (Active; granted as CN113970753B)
Priority Date: 2021-09-30
Filing Date: 2021-09-30
Title: Unmanned aerial vehicle positioning control method and system based on laser radar and vision detection

Country Status (1)

Country Link
CN (1) CN113970753B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105823478A (en) * 2016-03-14 2016-08-03 武汉卓拔科技有限公司 Autonomous obstacle avoidance navigation information sharing and using method
WO2017177533A1 (en) * 2016-04-12 2017-10-19 深圳市龙云创新航空科技有限公司 Method and system for controlling laser radar based micro unmanned aerial vehicle
CN107450577A (en) * 2017-07-25 2017-12-08 天津大学 UAV Intelligent sensory perceptual system and method based on multisensor
GB201715590D0 (en) * 2017-09-26 2017-11-08 Cambridge Consultants Delivery system
CN108827306A (en) * 2018-05-31 2018-11-16 北京林业大学 A kind of unmanned plane SLAM navigation methods and systems based on Multi-sensor Fusion
CN109444911A (en) * 2018-10-18 2019-03-08 哈尔滨工程大学 A kind of unmanned boat waterborne target detection identification and the localization method of monocular camera and laser radar information fusion
US20200301015A1 (en) * 2019-03-21 2020-09-24 Foresight Ai Inc. Systems and methods for localization
WO2020237693A1 (en) * 2019-05-31 2020-12-03 华南理工大学 Multi-source sensing method and system for water surface unmanned equipment
CN110926474A (en) * 2019-11-28 2020-03-27 南京航空航天大学 Satellite/vision/laser combined urban canyon environment UAV positioning and navigation method
CN112347840A (en) * 2020-08-25 2021-02-09 天津大学 Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN113358665A (en) * 2021-05-25 2021-09-07 同济大学 Unmanned aerial vehicle tunnel defect detection method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHENG Jian; ZU Fengshou; WANG Dongwei; MAO Shaowen; MA Yonghui; QIAN Jiansheng: "Research on an automatic unmanned aerial vehicle coal-inventory system for open-air coal storage yards" (露天储煤场无人机自动盘煤系统研究), Coal Science and Technology (煤炭科学技术), no. 05, 31 December 2016 (2016-12-31), pages 162-167 *
YAN Yao; LI Chunshu: "Vehicle recognition method based on laser radar information and monocular vision information" (基于激光雷达信息和单目视觉信息的车辆识别方法), Journal of Hebei University of Technology (河北工业大学学报), no. 06, 15 December 2019 (2019-12-15), pages 16-22 *

Also Published As

Publication number Publication date
CN113970753B (en) 2024-04-30

Similar Documents

Publication Publication Date Title
CN108932736B (en) Two-dimensional laser radar point cloud data processing method and dynamic robot pose calibration method
EP3540464B1 (en) Ranging method based on laser radar system, device and readable storage medium
AU2018282302B2 (en) Integrated sensor calibration in natural scenes
CN109598765B (en) Monocular camera and millimeter wave radar external parameter combined calibration method based on spherical calibration object
EP3792660B1 (en) Method, apparatus and system for measuring distance
CN112669393A (en) Laser radar and camera combined calibration method
CN113359097B (en) Millimeter wave radar and camera combined calibration method
CN109472831A (en) Obstacle recognition range-measurement system and method towards road roller work progress
CN107796373B (en) Distance measurement method based on monocular vision of front vehicle driven by lane plane geometric model
CN111815717A (en) Multi-sensor fusion external parameter combination semi-autonomous calibration method
CN110873879A (en) Device and method for deep fusion of characteristics of multi-source heterogeneous sensor
CN109801336B (en) Airborne target positioning system and method based on visible light and infrared light vision
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN115546741A (en) Binocular vision and laser radar unmanned ship marine environment obstacle identification method
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN109612333B (en) Visual auxiliary guide system for vertical recovery of reusable rocket
CN109146936B (en) Image matching method, device, positioning method and system
CN116429098A (en) Visual navigation positioning method and system for low-speed unmanned aerial vehicle
CN113970753B (en) Unmanned aerial vehicle positioning control method and system based on laser radar and vision detection
CN114973037B (en) Method for intelligently detecting and synchronously positioning multiple targets by unmanned aerial vehicle
CN115792912A (en) Method and system for sensing environment of unmanned surface vehicle based on fusion of vision and millimeter wave radar under weak observation condition
CN113947141B (en) Roadside beacon sensing system of urban intersection scene
CN115471555A (en) Unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching
CN115267756A (en) Monocular real-time distance measurement method based on deep learning target detection
CN111521996A (en) Laser radar installation calibration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant