CN113947665A - Method for constructing map of spherical hedge trimmer based on multi-line laser radar and monocular vision


Info

Publication number
CN113947665A
CN113947665A (application CN202111076461.3A)
Authority
CN
China
Prior art keywords
point cloud
hedge
cloud data
data
hedgerow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111076461.3A
Other languages
Chinese (zh)
Other versions
CN113947665B (en)
Inventor
蒙艳玫
韩冰
许恩永
韦锦
董振
唐治宏
吴湘柠
Current Assignee (the listed assignees may be inaccurate)
Guangxi University
Original Assignee
Guangxi University
Priority date
Filing date
Publication date
Application filed by Guangxi University filed Critical Guangxi University
Priority to CN202111076461.3A priority Critical patent/CN113947665B/en
Publication of CN113947665A publication Critical patent/CN113947665A/en
Application granted granted Critical
Publication of CN113947665B publication Critical patent/CN113947665B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS; G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
        • G06T 17/05: Geographic models
        • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tesselation
        • G06T 17/205: Re-meshing
        • G06T 7/00: Image analysis
        • G06T 7/10: Segmentation; Edge detection
        • G06T 7/11: Region-based segmentation
        • G06T 2207/00: Indexing scheme for image analysis or image enhancement
        • G06T 2207/10: Image acquisition modality
        • G06T 2207/10028: Range image; Depth image; 3D point clouds
        • G06T 2207/10032: Satellite or aerial image; Remote sensing
        • G06T 2207/10044: Radar image
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
        • G06F 18/00: Pattern recognition
        • G06F 18/20: Analysing
        • G06F 18/23: Clustering techniques

Abstract

The invention discloses a mapping method for a spherical hedge trimmer based on a multi-line laser radar and monocular vision. The mapping method can obtain an accurate three-dimensional point cloud map, on the basis of which the central control system plans the motion trajectories of the hedge trimmer and the manipulator. This not only avoids the frequent manual operations of driving the trimmer and centering on each hedge, but also greatly improves operating efficiency while guaranteeing operating quality.

Description

Method for constructing map of spherical hedge trimmer based on multi-line laser radar and monocular vision
Technical Field
The invention relates to the technical field of green belt pruning, in particular to a map building method of a spherical hedge trimmer based on a multi-line laser radar and monocular vision.
Background
With the rapid development of urban highway greening, the amount of hedge trimming required along both sides of a highway has grown steadily, placing higher demands on the mechanization and automation of hedge trimmers. At present, several domestic universities and companies have studied hedge trimmers to some extent. The College of Mechanical and Electrical Engineering of Tarim University has developed a mobile hedge trimmer that performs simple, manually controlled lifting movements and is easy to operate; Jinli Hydraulic Co. and Fangzheng Wei Co. have developed hedge trimmers that can cut simple shapes such as flat, trapezoidal and tower profiles; Guangxi University has developed a vehicle-mounted hedge trimmer that can cut various complex shapes such as planes, spheres and cylinders. Common hedge shapes are generally bodies of revolution, such as cylinders, cones and spheres. At present, mechanical equipment for trimming cylindrical hedges carries an L-shaped tool rest that can rotate 360 degrees on the manipulator, while equipment for trimming spherical hedges carries an arc-shaped tool rest.
Generally, hedge trimming equipment is operated by manually driving the vehicle to a position near the hedge to be trimmed, manually moving the end of the robotic arm above the center of the hedge, and then performing rotary trimming with the cutter mounted on the arm, using the central axis of the hedge as the axis of rotation. This workflow has two drawbacks. First, because hedges are numerous and unevenly distributed, the operator must move the vehicle frequently, which demands skilled driving and makes manual repositioning time-consuming and laborious. Second, centering the manipulator over the hedge depends entirely on manual work, chiefly on an operator aligning by eye through a video feed; this is likewise time-consuming and laborious, and centering precision cannot be guaranteed over long working hours. Consequently, trimming efficiency is difficult to improve and trimming quality difficult to ensure, so automated, intelligent hedge trimming equipment is urgently needed, and in particular its mapping method must be optimized; solving these problems is the aim of the invention.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The invention aims to provide a mapping method for a spherical hedge trimmer based on a multi-line laser radar and monocular vision, which comprises the following steps:
Step one: scanning the hedgerow environment to be pruned with a multi-line laser radar and a monocular camera, and reading the point cloud data and image data;
Step two: preprocessing the point cloud data and image data, including downsampling the point cloud data and associating depth features between the multi-line laser radar data and the monocular camera data to obtain depth information for the visual feature points;
Step three: clustering and segmenting the preprocessed point cloud data, and removing point cloud data that would adversely affect subsequent mapping;
Step four: localizing and building a three-dimensional point cloud map with a SLAM method that tightly couples a laser-inertial odometer and a visual-inertial odometer; identifying and segmenting the hedgerows and guardrails with a target detection algorithm, and removing point cloud data unrelated to them;
Step five: establishing a hedge canopy model, calculating the center coordinates of each spherical hedge, and storing all center coordinates of the hedges to be trimmed in the database of the three-dimensional point cloud map, yielding a lightweight three-dimensional point cloud map comprising the hedges, the guardrails on both sides, and the hedge center coordinates.
Further, the method for reading the point cloud data and image data in step one is as follows: first, perform extrinsic calibration of the multi-line laser radar and the monocular camera with a PnP method, extracting feature points visually and edges with the radar to obtain geometric constraints between the two sensors; then scan the hedgerow environment to be pruned with both sensors and read the point cloud data and image data.
Further, in step two, a VoxelGrid filter is used to downsample the point cloud data: the point cloud is first divided into a three-dimensional voxel grid, and all points within each voxel are then represented by their centroid.
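The voxel-grid downsampling step can be sketched in a few lines of NumPy (a minimal illustration under the description above, not the patented implementation; a point cloud library's VoxelGrid filter performs the same operation in practice):

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Collapse every point that falls in the same voxel to the voxel centroid,
    as in the VoxelGrid filtering step described above."""
    idx = np.floor(points / voxel_size).astype(np.int64)   # integer voxel index per point
    uniq, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    sums = np.zeros((len(uniq), 3))
    np.add.at(sums, inverse, points)                       # accumulate points per voxel
    return sums / counts[:, None]                          # centroid of each occupied voxel

# Two points share a 0.5 m voxel and collapse to their centroid; the third stays
pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2], [3.0, 3.0, 3.0]])
down = voxel_downsample(pts, 0.5)
```

Because each occupied voxel keeps exactly one representative point, the output distribution is more uniform while the overall shape of the cloud is preserved.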
Further, the method for obtaining the depth information of a visual feature point in step two is as follows: project the visual feature points and the radar points onto a unit circle centered on the monocular camera, downsample the depth points and store them in polar coordinates, search for the three laser points closest to the visual feature point with a KD-tree, and take the resulting distance from the monocular camera to the visual feature point as that feature point's depth.
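A minimal sketch of this depth-association step follows (hypothetical points; a brute-force direction search stands in for the KD-tree, and the lidar points are assumed to be already transformed into the camera frame):

```python
import numpy as np

def feature_depth(feature_dir: np.ndarray, lidar_pts: np.ndarray) -> float:
    """Estimate the depth of one visual feature ray from nearby lidar points.

    feature_dir: unit direction of the feature in the camera frame.
    lidar_pts:   (N, 3) lidar points already in the camera frame.
    The three lidar points whose directions are closest to the feature ray are
    selected (the method uses a KD-tree for this search; brute force is used
    here for clarity) and the ray is intersected with the plane through them.
    """
    dirs = lidar_pts / np.linalg.norm(lidar_pts, axis=1, keepdims=True)
    nearest = np.argsort(dirs @ feature_dir)[-3:]      # largest cosine = closest direction
    p0, p1, p2 = lidar_pts[nearest]
    n = np.cross(p1 - p0, p2 - p0)                     # normal of the local lidar plane
    return float((n @ p0) / (n @ feature_dir))         # ray/plane intersection depth

# Feature ray along +z; the three nearest lidar points lie on the plane z = 5
pts = np.array([[0.1, 0.0, 5.0], [0.0, 0.1, 5.0], [-0.1, -0.1, 5.0], [4.0, 4.0, 1.0]])
d = feature_depth(np.array([0.0, 0.0, 1.0]), pts)
```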
Further, the method for clustering and segmenting the preprocessed point cloud data in step three is as follows: project the point cloud data onto a range image and represent the point cloud coordinates in the image as an array; then cluster the point cloud by a recursive depth-first search, starting from the [0,0] coordinate and visiting the 4 neighboring points in front, behind, left and right of each point; two neighboring points are considered to belong to the same point cloud cluster if the relative angle between them is greater than 60 degrees; the cluster segmentation of the point cloud data is complete once all point cloud coordinates have been traversed.
Further, the method in step three for removing point cloud data that would adversely affect subsequent mapping is as follows: after the point cloud data has been clustered and segmented, discard any cluster containing fewer laser points than the minimum-point threshold.
Further, the method for localizing and building the three-dimensional point cloud map with a SLAM method that tightly couples the laser-inertial odometer and the visual-inertial odometer is as follows: combine the visual-inertial odometer constraints, the laser-inertial odometer constraints and the IMU pre-integration constraints in a factor graph, jointly optimize the constraint factors in the factor graph with iSAM2, then perceive and map the environment, storing the three-dimensional point cloud map in a three-dimensional voxel grid.
Further, the method in step four for identifying and segmenting the hedgerows and the guardrails on both sides with a target detection algorithm and removing unrelated point cloud data is as follows: identify only the hedgerows with an improved YOLOv3-based visual target detection algorithm, then semantically segment the point cloud clusters of the hedgerow and of the guardrails on both sides, map the recognized semantic information into the three-dimensional point cloud map, retain only the point cloud data of the hedgerow and the guardrails on both sides, and remove the unrelated point cloud data, making the three-dimensional point cloud map lightweight. The method for identifying only the hedgerows with the improved YOLOv3-based visual target detection algorithm is as follows: establish a data set, label it for training, adjust the weight parameters, and recognize only a single object class, the hedgerow. Further, the method for establishing the hedge canopy model in step five is as follows: take the surface perpendicular to the ground that passes through the center of symmetry of the two guardrails as the center line of the hedge, locate the cross-section at the hedge's central position from this center line, divide the curved surface into planar segments, vertically project the three-dimensional point cloud cluster of each hedge onto its corresponding plane to form a two-dimensional point cloud image, fit the hedge point cloud data with a random sample consensus method, obtain the optimal geometric parameters by iteration, and establish the hedge canopy model.
Compared with the prior art, the invention has the following beneficial effects:
the method comprises the steps of scanning the environments of the hedgerows on two sides of the highway by using the multi-line laser radar and the monocular camera to obtain point cloud and image data, processing the image and the point cloud to obtain a three-dimensional point cloud map comprising geometric center coordinates and hedgerow outline information of the hedgerows on one section of the highway and guardrail information on two sides of the hedgerows, and providing the three-dimensional point cloud map to a central control system which can perform subsequent motion planning according to the obtained information to complete spherical hedgerow trimming operation. The image building method provided by the invention can obtain an accurate three-dimensional point cloud map, and the central control system plans the motion tracks of the hedge trimmer and the manipulator on the basis, so that not only is the frequent operation of driving the hedge trimmer and centering the hedge center avoided, but also the operation efficiency is greatly improved, the operation quality is also well guaranteed, and the automation and intelligence levels of mechanical equipment of the hedge trimmer are greatly improved.
Drawings
Fig. 1 is a flow chart of a method for constructing a map of a spherical hedge trimmer based on a multiline lidar and monocular vision.
FIG. 2 is a schematic diagram of depth signature correlation of multiline lidar data with monocular camera data.
FIG. 3 is a schematic diagram of converting point cloud data into a two-dimensional map with depth information.
FIG. 4 is a schematic diagram comparing the point clouds before and after noise removal following cluster segmentation of the point cloud data.
FIG. 5 is a flow chart of localizing and building a three-dimensional point cloud map by a SLAM method that tightly couples a laser-inertial odometer and a visual-inertial odometer.
Fig. 6 is a schematic diagram of the recognition of multiple hedgerows by the improved YOLOv3 visual target detection algorithm.
Fig. 7 is a schematic diagram of fitting a spherical hedge center on a two-dimensional plane.
Detailed Description
The following detailed description of the present invention is provided in conjunction with the accompanying drawings, but it should be understood that the scope of the present invention is not limited to the specific embodiments.
Throughout the specification and claims, unless explicitly stated otherwise, the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element or component but not the exclusion of any other element or component.
Example 1
Referring to fig. 1, a method for constructing a map of a spherical hedge trimmer based on a multi-line laser radar and monocular vision comprises the following steps:
Step one: the hedgerows to be pruned are those of the highway central green belt; collect the multi-line laser radar point cloud data and the monocular camera image data;
first, perform extrinsic calibration of the multi-line laser radar and the monocular camera with a PnP method, extracting feature points visually and edges with the radar to obtain geometric constraints between the two sensors; then scan the hedgerow environment to be pruned with both sensors and read the point cloud data and image data.
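The geometric relationship that the PnP calibration establishes can be illustrated with a pinhole projection sketch (the intrinsics K and extrinsics R, t below are made-up placeholders; in the method, R and t are exactly what the calibration estimates):

```python
import numpy as np

# Illustrative pinhole intrinsics and lidar-to-camera extrinsics (placeholders)
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                     # rotation lidar -> camera (placeholder)
t = np.array([0.0, 0.0, 0.1])     # translation lidar -> camera, meters (placeholder)

def project_lidar_point(p_lidar: np.ndarray) -> np.ndarray:
    """Transform a lidar point into the camera frame and project it to pixels.
    PnP calibration chooses R and t so that such projections of radar edge
    points line up with their matched visual feature points."""
    p_cam = R @ p_lidar + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

px = project_lidar_point(np.array([0.0, 0.0, 5.0]))   # a point 5 m ahead on the optical axis
```

Minimizing the pixel distance between such projections and the matched image features over many correspondences is what fixes the geometric constraints between the two sensors.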
Step two: preprocessing the point cloud data and image data;
because the environment is complex, the raw point cloud data contains many noise points, and its sheer volume lowers overall efficiency; both problems would greatly affect subsequent point cloud processing and system operation, reducing the real-time performance of the system and the accuracy of the map. Therefore, a VoxelGrid filter is used to downsample the point cloud data: the point cloud is first divided into a three-dimensional voxel grid, and all points within each voxel are then represented by their centroid, which makes the point distribution more uniform and improves efficiency while retaining a sufficient amount of point cloud data. Because the data measured by the monocular camera carries no depth information, problems such as image mismatching and scale ambiguity would arise, greatly affecting mapping quality and estimation of the hedge center point. Radar has the advantage of robust, accurate depth measurement, so the radar and camera depth features are associated. As shown in fig. 2, the visual feature points and the radar points are projected onto a unit circle centered on the monocular camera, the depth points are downsampled and stored in polar coordinates, the three laser points closest to each visual feature point are found with a KD-tree, and the resulting distance from the monocular camera to the visual feature point is taken as that feature point's depth.
Step three: clustering and segmenting the preprocessed point cloud data, and removing the point cloud data that would adversely affect subsequent mapping;
because of uncertain environmental factors, hedges do not grow to a precise shape; a spherical hedge is irregular, which interferes with subsequent centering, and weeds, sparse leaves and the like do not stay in a fixed place under the action of the wind, so such dynamic objects would also adversely affect subsequent point cloud matching. Therefore, the point cloud data that would negatively affect the result is removed by clustering and segmenting the point cloud.
As shown in fig. 3, the point cloud data is projected onto a range image and the point cloud coordinates in the image are represented as an array; the point cloud is then clustered by a recursive depth-first search, starting from the [0,0] coordinate and visiting the 4 neighboring points in front, behind, left and right of each point; two neighboring points are considered to belong to the same point cloud cluster if the relative angle between them is greater than 60 degrees, and the cluster segmentation of the point cloud data is complete once all point cloud coordinates have been traversed.
After the point cloud data has been clustered and segmented, clusters containing fewer laser points than the minimum-point threshold are discarded; the effect is shown in fig. 4.
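The clustering above can be sketched as a flood fill over the range image (a simplified illustration: a single angular step alpha between neighboring beams is assumed, breadth-first search replaces the recursive depth-first traversal to keep the sketch iterative, and the 60-degree merge criterion follows the text):

```python
import numpy as np
from collections import deque

def cluster_range_image(img: np.ndarray, alpha: float, min_angle_deg: float = 60.0) -> np.ndarray:
    """Flood-fill clustering of a range image (rows = scan rings, cols = azimuth).
    Two neighboring returns merge when the angle beta between the farther beam
    and the line joining the two returns exceeds the threshold."""
    labels = np.zeros(img.shape, dtype=int)
    next_label = 0
    for seed in zip(*np.nonzero(img > 0)):
        if labels[seed]:
            continue
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # front/back/left/right
                nr, nc = r + dr, c + dc
                if not (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]):
                    continue
                if labels[nr, nc] or img[nr, nc] <= 0:
                    continue
                d1 = max(img[r, c], img[nr, nc])
                d2 = min(img[r, c], img[nr, nc])
                beta = np.arctan2(d2 * np.sin(alpha), d1 - d2 * np.cos(alpha))
                if np.degrees(beta) > min_angle_deg:            # smooth surface: merge
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
    return labels

# A smooth surface (near-equal ranges) plus one isolated distant return
img = np.array([[5.00, 5.01, 5.02],
                [5.00, 5.01, 20.0]])
lab = cluster_range_image(img, alpha=np.radians(1.0))
```

Clusters whose point count falls below the minimum-point threshold would then be discarded as noise.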
Step four: localizing and building a three-dimensional point cloud map with a SLAM method that tightly couples a laser-inertial odometer and a visual-inertial odometer; identifying and segmenting the hedgerows and guardrails with a target detection algorithm, and removing point cloud data unrelated to them;
each odometer has shortcomings on its own: the laser-inertial odometer often suffers mismatching or degraded matching where feature texture is poor, and the visual-inertial odometer can fail under changing illumination or bad weather, so neither achieves robust localization working alone. The two odometers are therefore fused in a tightly coupled manner;
as shown in fig. 5, the visual-inertial odometer constraints, the laser-inertial odometer constraints and the IMU pre-integration constraints are combined in a factor graph, the constraint factors in the factor graph are jointly optimized with iSAM2, and the environment is then perceived and mapped, with the three-dimensional point cloud map stored in a three-dimensional voxel grid;
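The joint optimization can be illustrated on a toy 1-D pose chain solved by batch weighted least squares (all measurements and weights below are made up; the actual system optimizes 6-DoF poses incrementally with iSAM2 rather than this batch sketch):

```python
import numpy as np

# Toy factor graph over three 1-D poses x0, x1, x2: one prior factor anchors
# x0, and each odometry source contributes one relative-motion factor per step.
prior = 0.0                          # prior on x0
lidar_odom = [1.0, 1.1]              # increments x1-x0, x2-x1 from the laser-inertial odometer
visual_odom = [0.9, 1.2]             # the same increments from the visual-inertial odometer
w_prior, w_lidar, w_visual = 100.0, 1.0, 0.5   # information weights (assumed)

rows, rhs, weights = [[1, 0, 0]], [prior], [w_prior]
for i, (dl, dv) in enumerate(zip(lidar_odom, visual_odom)):
    step = [0, 0, 0]
    step[i], step[i + 1] = -1, 1                 # residual is x_{i+1} - x_i - measurement
    rows += [step, step]
    rhs += [dl, dv]
    weights += [w_lidar, w_visual]

sqrt_w = np.sqrt(np.array(weights))[:, None]
A = np.array(rows, float) * sqrt_w               # whitened Jacobian
b = np.array(rhs, float) * sqrt_w[:, 0]          # whitened measurements
x, *_ = np.linalg.lstsq(A, b, rcond=None)        # jointly optimal poses
```

Each additional factor simply appends one weighted residual row; iSAM2 exploits the same structure incrementally instead of re-solving the whole system from scratch.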
as shown in fig. 6, an improved YOLOv3-based visual target detection algorithm is adopted. A conventional YOLOv3 detector can recognize many kinds of objects but is prone to false recognition and a reduced recognition rate; the invention therefore performs labeled training on its own data set, adjusts the weight parameters, and recognizes only a single object class, the hedgerow, which greatly improves the accuracy of hedge recognition, effectively reduces the false-judgement rate, and thus improves recognition accuracy and robustness. The point cloud clusters of the hedgerow and of the guardrails on both sides are then semantically segmented, the recognized semantic information is mapped into the three-dimensional point cloud map, only the point cloud data of the hedgerow and the guardrails on both sides is retained, and the unrelated point cloud data is removed, making the three-dimensional point cloud map lightweight.
Step five: calculating the center coordinates of the spherical hedges;
the distance between the guardrails on the two sides of a highway hedgerow is determined by reference to the national standard. The surface perpendicular to the ground that passes through the center of symmetry of the two guardrails is taken as the center line of the hedge; the cross-section at the hedge's central position is located from this center line; the curved surface is divided into planar segments; the three-dimensional point cloud cluster of each hedge is vertically projected onto its corresponding plane to form a two-dimensional point cloud image; the hedge point cloud data is fitted with a random sample consensus method; the optimal geometric parameters are obtained by iteration; and the hedge canopy model is established;
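The random-sample-consensus fit of the projected two-dimensional hedge points can be sketched as follows (a generic RANSAC circumcircle routine on synthetic data; the iteration count and inlier tolerance are illustrative assumptions, not the patented parameters):

```python
import numpy as np

def fit_circle_ransac(pts: np.ndarray, n_iter: int = 200, tol: float = 0.05):
    """RANSAC circle fit on 2-D projected hedge points: repeatedly take the
    circumcircle of three random points and keep the candidate with the most
    inliers, returning its center and radius."""
    rng = np.random.default_rng(0)
    best_center, best_radius, best_inliers = None, None, -1
    for _ in range(n_iter):
        p1, p2, p3 = pts[rng.choice(len(pts), 3, replace=False)]
        A = 2.0 * np.array([p2 - p1, p3 - p1])           # perpendicular-bisector system
        b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
        if abs(np.linalg.det(A)) < 1e-9:                 # nearly collinear sample, skip
            continue
        center = np.linalg.solve(A, b)
        radius = np.linalg.norm(p1 - center)
        inliers = np.sum(np.abs(np.linalg.norm(pts - center, axis=1) - radius) < tol)
        if inliers > best_inliers:
            best_center, best_radius, best_inliers = center, radius, inliers
    return best_center, best_radius

# Synthetic hedge cross-section: a circle of radius 2 centered at (3, 1), plus outliers
theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
circle = np.c_[3.0 + 2.0 * np.cos(theta), 1.0 + 2.0 * np.sin(theta)]
pts = np.vstack([circle, [[0.0, 0.0], [6.0, 6.0]]])
center, radius = fit_circle_ransac(pts)
```

The fitted center, lifted back onto the cutting plane, corresponds to the spherical hedge center coordinate that is stored in the map database.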
as shown in fig. 7, the center coordinates of each spherical hedge are then calculated, and all center coordinates of the hedges to be trimmed are stored in the database of the three-dimensional point cloud map, yielding a lightweight three-dimensional point cloud map comprising the hedges, the guardrails on both sides, and the hedge center coordinates.
Step six: performing subsequent motion planning for the hedge trimmer and the manipulator;
the three-dimensional point cloud map obtained in step five is provided to the central control system, which plans the motion of the hedge trimmer and the manipulator from the obtained information to complete the spherical hedge trimming operation. Hedge trimmers and manipulators are mature mechanical products; they are mentioned here only to illustrate the application of the mapping method and are not the gist of the present invention.
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.

Claims (9)

1. A mapping method for a spherical hedge trimmer based on a multi-line laser radar and monocular vision, characterized by comprising the following steps:
Step one: scanning the hedgerow environment to be pruned with a multi-line laser radar and a monocular camera, and reading the point cloud data and image data;
Step two: preprocessing the point cloud data and image data, including downsampling the point cloud data and associating depth features between the multi-line laser radar data and the monocular camera data to obtain depth information for the visual feature points;
Step three: clustering and segmenting the preprocessed point cloud data, and removing point cloud data that would adversely affect subsequent mapping;
Step four: localizing and building a three-dimensional point cloud map with a SLAM method that tightly couples a laser-inertial odometer and a visual-inertial odometer; identifying and segmenting the hedgerows and guardrails with a target detection algorithm, and removing point cloud data unrelated to them;
Step five: establishing a hedge canopy model, calculating the center coordinates of each spherical hedge, and storing all center coordinates of the hedges to be trimmed in the database of the three-dimensional point cloud map, yielding a lightweight three-dimensional point cloud map comprising the hedges, the guardrails on both sides, and the hedge center coordinates.
2. The mapping method according to claim 1, wherein the method for reading the point cloud data and image data in step one comprises: first performing extrinsic calibration of the multi-line laser radar and the monocular camera with a PnP method, extracting feature points visually and edges with the radar to obtain geometric constraints between the two sensors; then scanning the hedgerow environment to be pruned with both sensors and reading the point cloud data and image data.
3. The mapping method according to claim 1, wherein in step two a VoxelGrid filter is used to downsample the point cloud data, the downsampling comprising: first dividing the point cloud data into a three-dimensional voxel grid, and then representing all points within each voxel by their centroid.
4. The mapping method according to claim 1, wherein the method in step two for obtaining the depth information of a visual feature point comprises: projecting the visual feature points and the radar points onto a unit circle centered on the monocular camera, downsampling the depth points and storing them in polar coordinates, searching for the three laser points closest to the visual feature point with a KD-tree, and taking the resulting distance from the monocular camera to the visual feature point as that feature point's depth.
5. The mapping method according to claim 1, wherein the method in step three for clustering and segmenting the preprocessed point cloud data comprises: projecting the point cloud data onto a range image and representing the point cloud coordinates in the image as an array; then clustering the point cloud by a recursive depth-first search, starting from the [0,0] coordinate and visiting the 4 neighboring points in front, behind, left and right of each point; considering two neighboring points to belong to the same point cloud cluster if the relative angle between them is greater than 60 degrees; and completing the cluster segmentation of the point cloud data once all point cloud coordinates have been traversed.
6. The mapping method according to claim 1, wherein the method in step three for removing point cloud data that would adversely affect subsequent mapping comprises: after clustering and segmenting the point cloud data, discarding any cluster containing fewer laser points than the minimum-point threshold.
7. The mapping method according to claim 1, wherein the method in step four for localizing and building the three-dimensional point cloud map with a SLAM method that tightly couples the laser-inertial odometer and the visual-inertial odometer comprises: combining the visual-inertial odometer constraints, the laser-inertial odometer constraints and the IMU pre-integration constraints in a factor graph, jointly optimizing the constraint factors in the factor graph with iSAM2, then perceiving and mapping the environment, and storing the three-dimensional point cloud map in a three-dimensional voxel grid.
8. The mapping method according to claim 1, wherein the method in step four for identifying and segmenting the hedgerows and the guardrails on both sides with a target detection algorithm and removing the unrelated point cloud data comprises: identifying only the hedgerows with an improved YOLOv3-based visual target detection algorithm, semantically segmenting the point cloud clusters of the hedgerow and of the guardrails on both sides, mapping the recognized semantic information into the three-dimensional point cloud map, retaining only the point cloud data of the hedgerow and the guardrails on both sides, and removing the unrelated point cloud data, thereby making the three-dimensional point cloud map lightweight;
the method for identifying only the hedgerows with the improved YOLOv3-based visual target detection algorithm comprises: establishing a data set, labeling it for training, adjusting the weight parameters, and recognizing only a single object class, the hedgerow.
9. The mapping method according to claim 1, wherein the method in step five for establishing the hedge canopy model comprises: taking the surface perpendicular to the ground that passes through the center of symmetry of the two guardrails as the center line of the hedge, locating the cross-section at the hedge's central position from this center line, dividing the curved surface into planar segments, vertically projecting the three-dimensional point cloud cluster of each hedge onto its corresponding plane to form a two-dimensional point cloud image, fitting the hedge point cloud data with a random sample consensus method, obtaining the optimal geometric parameters by iteration, and establishing the hedge canopy model.
CN202111076461.3A 2021-09-14 2021-09-14 Method for constructing map of spherical hedge trimmer based on multi-line laser radar and monocular vision Active CN113947665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111076461.3A CN113947665B (en) 2021-09-14 2021-09-14 Method for constructing map of spherical hedge trimmer based on multi-line laser radar and monocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111076461.3A CN113947665B (en) 2021-09-14 2021-09-14 Method for constructing map of spherical hedge trimmer based on multi-line laser radar and monocular vision

Publications (2)

Publication Number Publication Date
CN113947665A true CN113947665A (en) 2022-01-18
CN113947665B CN113947665B (en) 2022-10-28

Family

ID=79328680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111076461.3A Active CN113947665B (en) 2021-09-14 2021-09-14 Method for constructing map of spherical hedge trimmer based on multi-line laser radar and monocular vision

Country Status (1)

Country Link
CN (1) CN113947665B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111149536A (en) * 2019-12-31 2020-05-15 广西大学 Unmanned hedge trimmer and control method thereof
CN111025323A (en) * 2020-02-17 2020-04-17 广西大学 Centering method of cylindrical hedge trimmer based on multi-line laser radar
CN112258600A (en) * 2020-10-19 2021-01-22 浙江大学 Simultaneous positioning and map construction method based on vision and laser radar
CN112561985A (en) * 2020-10-27 2021-03-26 广西大学 Hedgerow nursery stock trimming and centering method based on binocular vision
CN112484746A (en) * 2020-11-26 2021-03-12 上海电力大学 Monocular vision-assisted laser radar odometer method based on ground plane

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
TIXIAO SHAN et al.: "LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain", 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) *
TIXIAO SHAN et al.: "LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping", arXiv:2104.10831v1 *
LIANG Yuan et al.: "Data-driven uncertainty optimization method for vehicle body", Journal of Chongqing University of Technology (Natural Science) *
SHEN Xiaobin et al.: "Kinematics and dynamics simulation analysis of a garden pruning manipulator", Journal of Guangxi University (Natural Science Edition) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082924A (en) * 2022-04-26 2022-09-20 电子科技大学 Three-dimensional target detection method based on monocular vision and radar pseudo-image fusion
CN115082924B (en) * 2022-04-26 2024-03-29 电子科技大学 Three-dimensional target detection method based on monocular vision and radar pseudo-image fusion

Also Published As

Publication number Publication date
CN113947665B (en) 2022-10-28

Similar Documents

Publication Publication Date Title
CN110780305B (en) Track cone detection and target point tracking method based on multi-line laser radar
CN111551958B (en) Mining area unmanned high-precision map manufacturing method
CN111273305B (en) Multi-sensor fusion road extraction and indexing method based on global and local grid maps
CN110221311B (en) Method for automatically extracting tree height of high-canopy-closure forest stand based on TLS and UAV
CN110222626B (en) Unmanned scene point cloud target labeling method based on deep learning algorithm
CN106204705B (en) A kind of 3D point cloud dividing method based on multi-line laser radar
CN112991347B (en) Three-dimensional-based train bolt looseness detection method
CN110717983A (en) Building facade three-dimensional reconstruction method based on knapsack type three-dimensional laser point cloud data
CN108564650B (en) Lane tree target identification method based on vehicle-mounted 2D LiDAR point cloud data
CN110866934B (en) Normative coding-based complex point cloud segmentation method and system
CN102368158A (en) Navigation positioning method of orchard machine
CN114004938B (en) Urban scene reconstruction method and device based on mass data
CN111553292A (en) Rock mass structural plane identification and occurrence classification method based on point cloud data
CN112561985B (en) Hedgerow nursery stock trimming and centering method based on binocular vision
CN114821571A (en) Point cloud processing method for power cable identification and reconstruction
CN114004869A (en) Positioning method based on 3D point cloud registration
CN114119863A (en) Method for automatically extracting street tree target and forest attribute thereof based on vehicle-mounted laser radar data
CN113947665B (en) Method for constructing map of spherical hedge trimmer based on multi-line laser radar and monocular vision
Li et al. 3D autonomous navigation line extraction for field roads based on binocular vision
CN114821526A (en) Obstacle three-dimensional frame detection method based on 4D millimeter wave radar point cloud
CN111721279A (en) Tail end path navigation method suitable for power transmission inspection work
CN113724387A (en) Laser and camera fused map construction method
CN114863376A (en) Road marking segmentation method and system based on vehicle-mounted laser point cloud data
CN111142116A (en) Road detection and modeling method based on three-dimensional laser
CN112232248B (en) Method and device for extracting plane features of multi-line LiDAR point cloud data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant