CN112465832A - Single-sided tree point cloud skeleton line extraction method and system based on binocular vision - Google Patents


Info

Publication number
CN112465832A
CN112465832A (application CN202011343171.6A)
Authority
CN
China
Prior art keywords
point
skeleton
candidate
value
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011343171.6A
Other languages
Chinese (zh)
Other versions
CN112465832B (en)
Inventor
刘骥
夏张结
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202011343171.6A priority Critical patent/CN112465832B/en
Publication of CN112465832A publication Critical patent/CN112465832A/en
Application granted granted Critical
Publication of CN112465832B publication Critical patent/CN112465832B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85: Stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a binocular-vision-based single-sided tree point cloud skeleton line extraction method and system. The method comprises the following steps: S1, acquiring left and right view images obtained by shooting a target tree from a single angle with a binocular stereo camera; S2, acquiring a depth map from the left and right view images, converting the depth map into a point cloud model according to the camera imaging principle, and recording the point cloud model as the single-sided tree point cloud; S3, extracting N sampling points from the single-sided tree point cloud with a down-sampling algorithm; S4, dividing the sampling point set P into M segmented regions; S5, performing iterative contraction based on an L1-median algorithm to obtain skeleton points, taking the preset radius with the highest convergence degree as the neighborhood-radius update value of the candidate skeleton point in that iteration; and S6, connecting the skeleton points to obtain the skeleton line. Down-sampling the input point cloud distributes the points uniformly over the model surface, and adjusting the neighborhood radius according to how tightly the sampling points in the neighborhood gather around the candidate skeleton points makes the extracted skeleton line more accurate.

Description

Single-sided tree point cloud skeleton line extraction method and system based on binocular vision
Technical Field
The invention relates to the technical field of extraction of tree point cloud skeleton lines, in particular to a method and a system for extracting a single-sided tree point cloud skeleton line based on binocular vision.
Background
With the development and popularization of computer vision technology, tree point cloud models are widely applied in fields such as virtual reality, protection of ancient and famous trees, tree growth research, agriculture and forestry. The tree skeleton, as a one-dimensional representation of the tree's three-dimensional information, preserves the shape, structure and characteristic information of the tree point cloud model while reflecting its topological structure and overall shape more intuitively. A tree skeleton line expresses plant characteristics more directly than the point cloud model; for example, in research on nutrient transport during vegetation growth, replacing the whole tree model with its skeleton line makes the flow of nutrients through trunk and branches easier to follow. Extracting tree skeleton lines from three-dimensional point clouds therefore has significant application value.
To obtain a tree skeleton, three-dimensional data of real-world trees must first be acquired, and point cloud acquisition methods fall mainly into two classes: comprehensively scanning a real tree with a three-dimensional scanning device to obtain the three-dimensional point cloud of its surface, or capturing images of the tree from all directions with a camera and recovering the surface point cloud through the camera imaging principle and image processing techniques. Acquiring omnidirectional point cloud data requires sufficient time and space, and in practice the shooting environment or the application scene often makes it impossible: in a forest, for example, vegetation is luxuriant and trees occlude one another, so not all angles of a tree can be captured; or the shooting conditions only allow a single acquisition of a tree in an open area. In practical applications, for example when outdoor scenes are scanned during automatic driving, only a single-sided scan of the trees can be obtained.
In recent years, more and more researchers have studied skeleton line extraction from tree point clouds, but most existing work extracts skeleton lines from relatively complete three-dimensional tree point clouds. In the actual acquisition process, however, the shooting environment or the application scene often limits the data to tree point cloud information from a single angle, i.e., a single-sided tree point cloud. Trees have many branches and extremely complex topological information; with only single-sided point cloud data, a large amount of the point cloud is missing, the structural information of the tree is severely incomplete, and its connectivity and topology cannot be obtained accurately. In addition, equipment and algorithm errors lead to uneven point cloud distribution and noise. To address these problems, a skeleton line extraction method for single-sided tree point clouds is proposed, based on the single-sided tree point cloud obtained by shooting a tree once with a binocular stereo camera.
Disclosure of Invention
The invention aims to at least solve the technical problems in the prior art, and particularly innovatively provides a method and a system for extracting a single-sided tree point cloud skeleton line based on binocular vision.
In order to achieve the above object, according to a first aspect of the present invention, there is provided a method for extracting a single-sided tree point cloud skeleton line based on binocular vision, comprising: S1, acquiring left and right view images obtained by shooting a target tree from a single angle with a binocular stereo camera; S2, acquiring a depth map from the left and right view images, converting the depth map into a point cloud model according to the camera imaging principle, and recording the point cloud model as the single-sided tree point cloud; S3, extracting N sampling points from the single-sided tree point cloud with a down-sampling algorithm to form a sampling point set P, N being a positive integer; S4, dividing the sampling point set P into M segmented regions, and assigning each sampling point a segmentation label marking the segmented region it belongs to, the size of M being determined by N, M being a positive integer smaller than N; S5, performing iterative contraction in the M segmented regions based on an L1-median algorithm to obtain skeleton points: in each iteration, presetting a plurality of radii for the neighborhood of each candidate skeleton point, obtaining the corresponding candidate-skeleton-point update position for each preset neighborhood radius, obtaining the degree to which the sampling points in the neighborhood gather around the corresponding update position under each preset radius, and taking the preset radius with the highest gathering degree as the neighborhood-radius update value of the candidate skeleton point in that iteration; and S6, connecting the skeleton points to obtain the skeleton line.
The technical scheme is as follows: left and right images of a tree are shot at a single angle with a binocular stereo camera to obtain a single-sided tree point cloud, and the tree skeleton is extracted from it. The input point cloud is down-sampled with a down-sampling algorithm so that the points are uniformly distributed over the model surface, and the sampled points guide the point cloud segmentation, making the region segmentation more reasonable and adaptive. Iterative contraction is performed based on the L1-median algorithm, and during the iterations the neighborhood radius is adjusted according to how tightly the sampling points in the neighborhood gather around the candidate skeleton point, so that the contraction behaves more reasonably, converges faster, and the extracted skeleton line restores the tree's structure and posture more accurately.
In a preferred embodiment of the present invention, the S5 includes:

S51, a parameter setting step, specifically comprising: let the sampling point set be P = {p_i}_{i∈I}, where I represents the index set of P; set the iteration count as k, with initial value k = 0; let the set of segmented regions be S = {s_m}_{m∈M}, where s_m denotes the m-th segmented region, and the segmentation label set be C = {c_i}_{i∈I}, where c_i denotes the segmentation label of the i-th sampling point. Let the initial value h_j^0 of the neighborhood radius h_j of the j-th candidate skeleton point be:

h_j^0 = 2·d_j / N_j^(1/3)

where d_j is the diagonal length of the segmented region s_j in which the candidate skeleton point x_j is located, and N_j is the number of sampling points in s_j. Set the first objective function as:

argmin Σ_{i∈I} Σ_{j∈J} ||x_j − p_i|| · θ(||x_j − p_i||, j) + R(X)

where θ(r, j) is the weight function, θ(r, j) = exp(−r² / (h_j/2)²), r is the variable of θ(r, j), p_i is the i-th sampling point, and R(X) is the regularization term:

R(X) = Σ_{j∈J} γ_j · σ_j · Σ_{j'∈J\{j}} θ(||x_j − x_{j'}||, j) / ||x_j − x_{j'}||

The local repulsive-force adjustment parameter of the j-th candidate skeleton point is:

σ_j = λ_j^(1) / (λ_j^(1) + λ_j^(2) + λ_j^(3))

where λ_j^(1), λ_j^(2) and λ_j^(3) respectively denote the eigenvalues corresponding to the first, second and third principal components obtained by principal component analysis of the local point cloud formed by the sampling points within radius h_j centered on the j-th skeleton point, and γ_j denotes an equilibrium constant;

S52, randomly selecting J sampling points from the sampling point set P as the initial value X^0 of the candidate skeleton point set X;

S53, letting k = k + 1, performing the k-th iterative contraction process:

S531, for each candidate skeleton point in each segmented region, updating the neighborhood radius and the position of the candidate skeleton point, specifically comprising:

Step A, setting three increment hypotheses H0, H1 and H2 for the j-th candidate skeleton point:

H0: h_j^k = h_j^(k−1);  H1: h_j^k = K1·h_j^(k−1);  H2: h_j^k = K2·h_j^(k−1)

where h_j^(k−1) denotes the neighborhood radius of the j-th candidate skeleton point at the (k−1)-th iteration, h_j^k denotes its neighborhood radius at the k-th iteration, and K1 and K2 respectively denote a first and a second proportionality coefficient, with K1 > 1 and 0 < K2 < 1;

Step B, under each of the three increment hypotheses, obtaining the k-th-iteration position x_j^k of the j-th candidate skeleton point from the skeleton-point position update formula based on the k-th-iteration neighborhood radius. The skeleton-point update formula is:

x_j^k = Σ_{i∈I} p_i·α_{ij}^(k−1) / Σ_{i∈I} α_{ij}^(k−1) + μ·σ_j^(k−1)·Σ_{j'∈J\{j}} (x_j^(k−1) − x_{j'}^(k−1))·β_{jj'}^(k−1) / Σ_{j'∈J\{j}} β_{jj'}^(k−1)

where the first iteration variable is α_{ij} = θ(||x_j − p_i||, j) / ||x_j − p_i||, the second iteration variable is β_{jj'} = θ(||x_j − x_{j'}||, j) / ||x_j − x_{j'}||², and μ denotes the local repulsive-force parameter of the candidate skeleton point set X; σ_j^(k−1), α_{ij}^(k−1) and β_{jj'}^(k−1) are respectively the (k−1)-th-iteration values of σ_j, α_{ij} and β_{jj'}; x_j^(k−1) denotes the position of the j-th candidate skeleton point at the (k−1)-th iteration; and x_{j'}^(k−1) denotes the (k−1)-th-iteration position of a candidate skeleton point in X other than the j-th;

Step C, respectively calculating the value of the discriminant function under each of the three increment hypotheses, and updating the k-th-iteration values of the neighborhood radius and the position of the j-th candidate skeleton point to the h_j^k and x_j^k computed under the increment hypothesis that maximizes the discriminant function. The discriminant function is:

D(x_j'', s_j) = η(x_j'', s_j) / N_j,  D(x_j'', s_j) ∈ [0, 1]

where x_j'' denotes the candidate-skeleton-point position obtained for the j-th candidate skeleton point under an increment hypothesis; N_j denotes the number of sampling points in the segmented region s_j; η(x_j'', s_j) denotes the number of sampling points in s_j whose distance to x_j'' lies in the interval [0, σ(x_j'', s_j)]; and σ(x_j'', s_j) denotes the variance of the distances from the sampling points of s_j to x_j'';

S532, calculating the first objective function value on the candidate skeleton point set X obtained at the (k−1)-th iteration to obtain a first value, and calculating the first objective function value on the candidate skeleton point set X obtained at the k-th iteration to obtain a second value; if the decrease of the second value relative to the first value reaches the error threshold, outputting the candidate skeleton point set X of the k-th iteration as the skeleton point set and proceeding to S6; if the decrease of the second value relative to the first value does not reach the error threshold, returning to S53.
The technical scheme is as follows: in the iterative-contraction acquisition of the skeleton point set, a discriminant function is introduced to determine the neighborhood radius. In each iteration, the discriminant function value selects the best contracted neighborhood radius of a candidate skeleton point together with its updated position, ensuring that after every update the sampling points in the contracted neighborhood gather more tightly around the updated candidate skeleton point; this both accelerates contraction and improves the extraction precision of the skeleton points. Using the decrease of the objective function value as the input for the stop-iteration decision guarantees that the finally output skeleton point set is the best result. The neighborhood radii of the candidate skeleton points grow adaptively, so different parts of the point cloud, having different neighborhood radii, contract at different speeds.
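The contraction step can be sketched in NumPy. The block below is a simplified, single-region illustration under stated assumptions (the μ, σ, K1 and K2 values are illustrative defaults, not the patent's calibrated parameters; σ is passed in rather than recomputed by PCA at every step): one update of a candidate skeleton point under the three radius hypotheses, keeping the radius whose updated position scores highest on the discriminant function D.

```python
import numpy as np

def theta(r, h):
    # Weight function of the first objective: theta(r, j) = exp(-r^2 / (h/2)^2)
    return np.exp(-(r ** 2) / (h / 2.0) ** 2)

def contract_step(x, P, X_others, h, mu=0.35, sigma=0.5):
    # One L1-median update: attraction toward the samples P plus a
    # repulsion term pushing x away from the other candidate skeleton points.
    d = np.maximum(np.linalg.norm(P - x, axis=1), 1e-12)
    alpha = theta(d, h) / d                      # first iteration variable
    x_new = (P * alpha[:, None]).sum(0) / alpha.sum()
    if len(X_others):
        r = np.maximum(np.linalg.norm(x - X_others, axis=1), 1e-12)
        beta = theta(r, h) / r ** 2              # second iteration variable
        x_new += mu * sigma * ((x - X_others) * beta[:, None]).sum(0) / beta.sum()
    return x_new

def discriminant(x, S):
    # D(x, s) = eta / N: share of the region's samples whose distance to x
    # lies in [0, sigma], sigma being the spread of those distances.
    d = np.linalg.norm(S - x, axis=1)
    return float((d <= d.std()).mean())

def adaptive_update(x, P, X_others, h, K1=1.2, K2=0.8):
    # Try the three increment hypotheses H0/H1/H2 for the neighborhood
    # radius and keep the one whose updated position maximizes D.
    trials = []
    for scale in (1.0, K1, K2):
        x_new = contract_step(x, P, X_others, h * scale)
        trials.append((discriminant(x_new, P), h * scale, x_new))
    _, h_new, x_new = max(trials, key=lambda t: t[0])
    return x_new, h_new
```

Each call moves the candidate toward the weighted median of its neighborhood while the discriminant arbitrates between keeping, growing or shrinking the radius.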
In a preferred embodiment of the present invention, a first step is further included between S531 and S532, the first step comprising: for any two adjacent segmented regions s_u and s_v, respectively obtaining the mean positions x̄_u and x̄_v of the candidate skeleton points in s_u and s_v; taking the mean position of the candidate skeleton points of s_u ∪ s_v as the variable x_j'' in the discriminant function and s_u ∪ s_v as the variable s_j, obtaining a first discriminant function value; taking x̄_u as the variable x_j'' and the segmented region s_u as the variable s_j, obtaining a second discriminant function value; taking x̄_v as the variable x_j'' and s_v as the variable s_j, obtaining a third discriminant function value; calculating the sum of the second and third discriminant function values; if the first discriminant function value is greater than this sum, merging the adjacent segmented regions s_u and s_v and updating the segmentation labels of the sampling points in s_u and s_v; if the first discriminant function value is not greater than this sum, not merging s_u and s_v.
The technical scheme is as follows: after the candidate skeleton point positions are updated in each iteration, it is judged whether segmented regions need to be merged, and after neighboring regions are merged the segmentation labels and segmented regions of the sampling points are updated. At the start of the iterations the point cloud is segmented finely and every segmented region is small, so the growth of a candidate skeleton point's neighborhood radius stops once the radius has increased to a certain value.
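A sketch of this region-merge test follows. Two details are assumptions, since the corresponding inline expressions are images in the source: the "joint" discriminant is evaluated at the mean of the candidate skeleton points of both regions, and `D` is the step-C discriminant (fraction of region samples within the spread of their distances).

```python
import numpy as np

def D(x, S):
    # Step-C discriminant: fraction of the region's samples whose distance
    # to x lies within the spread of those distances.
    d = np.linalg.norm(S - x, axis=1)
    return float((d <= d.std()).mean())

def should_merge(Xu, Xv, Su, Sv):
    # Merge adjacent regions s_u, s_v when the discriminant at the joint
    # candidate mean over s_u ∪ s_v exceeds the sum of the per-region scores.
    x_u, x_v = Xu.mean(0), Xv.mean(0)
    x_uv = np.vstack([Xu, Xv]).mean(0)          # assumed joint candidate position
    first = D(x_uv, np.vstack([Su, Sv]))        # first discriminant value
    return first > D(x_u, Su) + D(x_v, Sv)      # second + third values
```

Two well-separated clusters keep their own regions, because the joint candidate sits between them and scores poorly on the union.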
In a preferred embodiment of the present invention, the S6 includes: S61, defining the skeleton line as an undirected connected graph T = (X', E_n), where T is a tree and E_n is the set of minimum-spanning-tree edges on T, E_n = {<x_i', x_j'>; i', j' ∈ J}, X' = {x_j'}_{j'∈J} represents the skeleton point set output in S5, and J represents the index set of X'; S62, calculating the connection weight between all skeleton points in X', the connection weight being the Euclidean distance between two skeleton points, and for each skeleton point x_i' in X', connecting x_i' with all skeleton points whose connection weight to x_i' is less than the weight threshold, forming a connected graph G = (X', E), where E denotes the set of connected edges on G; and S63, generating a minimum spanning tree on the connected graph G = (X', E), selecting its edges as skeleton line segments according to a greedy strategy, and thereby obtaining the skeleton line.
The technical scheme is as follows: and finally, connecting the initial skeleton points by using a minimum spanning tree algorithm according to the prior knowledge of the trees to obtain a skeleton line, so that the posture and the structure of the trees can be better and really reflected by the skeleton line.
In a preferred embodiment of the present invention, in the S4, M = ⌈N/K⌉, where K is a proportionality coefficient and K > 1.
The technical scheme is as follows: and adaptively determining the number M of the divided regions according to the number of the sampling points of the sampling point set.
In order to achieve the above object, according to a second aspect of the present invention, the present invention provides a binocular vision based single-sided tree point cloud skeleton line extraction system, comprising a binocular stereo camera and a processor, wherein the binocular stereo camera is connected with the processor, and the binocular stereo camera shoots towards a target tree at a certain angle; the processor extracts the skeleton line of the target tree by executing the binocular vision-based single-sided tree point cloud skeleton line extraction method.
The technical scheme is as follows: the binocular stereo camera of the system is cheap and easy to obtain. Left and right images of a tree are shot at a single angle with the binocular stereo camera to obtain a single-sided tree point cloud, from which the tree skeleton is extracted. The input point cloud is down-sampled with a down-sampling algorithm so that the points are uniformly distributed over the model surface, and the sampled points guide the point cloud segmentation, making the region segmentation more reasonable and adaptive. Iterative contraction is performed based on the L1-median algorithm, and during the iterations the neighborhood radius is adjusted according to how tightly the sampling points in the neighborhood gather around the candidate skeleton point, so that the contraction behaves more reasonably, converges faster, and the extracted skeleton line restores the tree's structure and posture more accurately.
Drawings
FIG. 1 is a schematic flow chart of a binocular vision-based single-sided tree point cloud skeleton line extraction method in an embodiment of the invention;
FIG. 2 is a flowchart illustrating steps S3-S6 according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of skeleton point extraction according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a local neighborhood of candidate skeleton points according to an embodiment of the present invention;
FIG. 5 is a schematic view of a split area label in accordance with an embodiment of the present invention;
FIG. 6 is a diagram illustrating merging of segmentation regions according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it is to be noted that the terms "mounted," "connected," and "connected" are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediate medium, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
The invention discloses a binocular-vision-based single-sided tree point cloud skeleton line extraction method. In a preferred embodiment, the flow of the method is schematically shown in fig. 1 and fig. 2, and comprises the following steps:
s1, acquiring left and right view images obtained by shooting a target tree from a single angle by a binocular stereo camera;
s2, acquiring a depth map based on the left and right view images, converting the depth map into a point cloud model according to a camera imaging principle, and recording the point cloud model as a single-sided tree point cloud;
s3, extracting N sampling points from the single-sided tree point cloud through a down-sampling algorithm to form a sampling point set P, wherein N is a positive integer;
s4, dividing the sampling point set P into M partition areas, and setting partition labels for marking the partition areas to which the sampling points belong for each sampling point, wherein the size of M is determined by N, M is a positive integer, and M is smaller than N;
s5, performing iterative contraction in the M segmented regions based on the L1-median algorithm to obtain skeleton points, a specific process schematic diagram being shown in fig. 3. In each iteration, a plurality of radii are preset for the neighborhood of each candidate skeleton point, the corresponding candidate-skeleton-point update position is obtained for each preset neighborhood radius, the degree to which the sampling points in the neighborhood gather around the corresponding update position under each preset radius is obtained, and the preset radius with the highest gathering degree is taken as the neighborhood-radius update value of the candidate skeleton point in that iteration;
and S6, connecting the skeleton points to obtain a skeleton line.
In the present embodiment, the specific procedures of steps S1 and S2 are:
calibrating a camera. And calibrating the camera by using a Matlab camera calibration tool to obtain camera parameters. The camera calibration method is preferably, but not limited to, selecting an existing checkerboard camera calibration method.
Second, image preprocessing. The imaging pattern is distorted by the influence of the camera's components, so the camera distortion is removed with the Zhang Zhengyou distortion model. In addition, the left and right imaging planes of a real binocular stereo system do not lie in the same plane, while the subsequent depth-from-disparity computation assumes an ideal parallel binocular stereo rig. Epipolar rectification is therefore performed: the images on the two differently oriented imaging planes undergo a rigid-body transformation through a homography matrix and are re-projected onto a common plane with parallel optical axes, so that the epipolar lines of the two images become row-aligned. For the specific method, reference may be made to the following prior art: C. Hutton, A. Bork, O. Josephs, et al. Image distortion correction in fMRI: a quantitative evaluation [J]. NeuroImage, 2002, 16(1): 217-240, which will not be described in detail herein.
Third, stereo matching. The disparity of each pixel between the left and right views is computed with the SGBM algorithm, and the disparity values are converted into depth values to obtain the depth map.
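As an illustration of the disparity-to-depth conversion for an ideal rectified rig (Z = f·B/d; the focal length and baseline values in the test are hypothetical):

```python
import numpy as np

def disparity_to_depth(disp, focal_px, baseline_m):
    # Ideal parallel stereo rig: Z = f * B / d. Zero disparity (holes or
    # points at infinity) maps to depth 0 so the repair step can find it.
    depth = np.zeros_like(disp, dtype=np.float64)
    valid = disp > 0
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth
```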
Fourth, depth map repair. The depth map acquired by the binocular stereo camera contains hole pixels whose value is 0, and the pixels with missing values are repaired so as not to degrade the quality of the point cloud. Preferably, different repair algorithms are used depending on whether a hole pixel lies on an object edge: the S-T DJBF algorithm repairs hole pixels in edge regions, and the S-T PDJBF algorithm repairs hole pixels in non-edge regions.
Fifth, point cloud acquisition. According to the principle of converting between three-dimensional coordinates and image coordinates, with the depth map and the camera parameters known, a matrix transformation solves for the coordinates of every pixel of the depth map in the three-dimensional coordinate system, yielding the point cloud model. For the specific algorithm, reference may be made to the following prior art: S. I. Helms Tillery, M. Flanders, J. F. Soechting. A coordinate system for the synthesis of visual and kinesthetic information [J]. Journal of Neuroscience, 1991, 11(3): 770-778, which will not be described in detail herein.
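The back-projection from depth map to point cloud can be sketched with a minimal pinhole model; the intrinsics in the test are placeholders, not the calibrated values of the embodiment:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    # Zero-depth hole pixels are dropped from the cloud.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    pts = np.stack([X, Y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```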
In the present embodiment, in step S3, the down-sampling algorithm is preferably, but not limited to, the ALOP point cloud down-sampling method, the existing FPS sampling method, or the existing Feature-FPS sampling method. Noise and uneven point distribution are unavoidable when the point cloud is obtained with a binocular stereo camera, and both strongly affect the quality of the skeleton line extracted later. The causes of noise and uneven distribution in a point cloud model obtained from a binocular stereo camera are complex and fall roughly into three classes: first, the hardware used to capture the experimental data has certain errors, such as the camera's pixel resolution and camera errors; second, the surface texture of the photographed object, a tree in nature, is complex; third, the measurement system has errors: during point cloud acquisition with a binocular stereo camera neither the camera nor the photographed tree can be kept absolutely still, and position changes caused by vibration during shooting inevitably introduce system errors. In addition, trees branch heavily, their foliage is dense and their trunks occlude one another, so a large amount of the point cloud is lost and the missing points make the model's topological structure inaccurate. The ALOP algorithm is therefore applied to the single-sided tree point cloud, which suffers from noise, uneven distribution and missing data, to denoise it and redistribute the data points uniformly over the model surface.
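For the down-sampling of step S3, a minimal farthest-point-sampling sketch follows (FPS is one of the options named above; ALOP's denoising and redistribution are not reproduced here):

```python
import numpy as np

def farthest_point_sampling(P, n):
    # Greedy FPS: repeatedly take the point farthest from the chosen set,
    # yielding the evenly spread sampling points that step S3 requires.
    idx = [0]
    d = np.linalg.norm(P - P[0], axis=1)
    for _ in range(n - 1):
        j = int(np.argmax(d))
        idx.append(j)
        d = np.minimum(d, np.linalg.norm(P - P[j], axis=1))
    return P[idx]
```

On ten points along a line, three FPS samples land at the two ends and near the middle, which is the even spread the skeleton contraction relies on.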
In the present embodiment, in step S4, the number M of divided regions into which the point cloud needs to be divided is determined by the size N of the sampling point set P: the sampling point set P is divided into M divided regions, and each sampling point is given a division label marking the divided region to which it belongs, where M is a positive integer and M is smaller than N. The point set P can be divided into M divided regions by the existing normalized cut algorithm; the divided regions are shown schematically in fig. 5, which shows three adjacent divided regions. Preferably, M = ⌈N/K⌉, where K is a proportionality coefficient, K > 1, and K is preferably, but not limited to, 10. For the normalized cut algorithm, reference can be made to the prior art: J. Shi, J. Malik. Normalized cuts and image segmentation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 888-905.
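The region count and labelling step can be illustrated as follows (a sketch only: Lloyd k-means is substituted for the normalized cut algorithm purely to show the M = ⌈N/K⌉ bookkeeping and the per-point division labels; all names and parameters are hypothetical):

```python
import numpy as np

def partition_point_set(points, K=10, iters=15, seed=0):
    """Split the sampled set into M = ceil(N / K) divided regions and
    give every sampling point a division label c_i. The patent uses
    normalized cuts; plain Lloyd k-means stands in here only to
    illustrate the labelling."""
    n = len(points)
    m = max(1, int(np.ceil(n / K)))                  # number of regions M
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(n, size=m, replace=False)].copy()
    labels = np.zeros(n, dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                # division labels c_i
        for j in range(m):                           # recompute region centers
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return m, labels

pts = np.random.default_rng(1).random((120, 3))
m, labels = partition_point_set(pts, K=40)
print(m)  # 3
```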
In this embodiment, in S6, it is preferable, but not limited, to use the minimum spanning tree algorithm to connect the initial skeleton points.
In a preferred embodiment, in S2, the method further includes the step of patching the holes of the depth map by using a direction-based joint bilateral filtering algorithm.
In a preferred embodiment, S5 includes:
s51, the parameter setting step specifically comprises the following steps:
let the sampling point set P be P = {p_i}_{i∈I}, where I represents the index set of the sampling point set P; setting the iteration number as k, with an initial value of k = 0; let the set of divided regions be S = {s_m}_{m∈M}, where s_m denotes the m-th divided region, and let the division label set be C = {c_i}_{i∈I}, where c_i denotes the division label of the i-th sampling point;
let the initial value h_j^0 of the neighborhood radius h_j of the j-th candidate skeleton point be h_j^0 = 2·d_j / (N_j)^(1/3), where d_j is the diagonal length of the divided region s_j in which the candidate skeleton point x_j is located, and N_j represents the number of sampling points in the divided region s_j;
setting the first objective function as: argmin_X Σ_{i∈I} Σ_{j∈J} ||x_j − p_i|| · θ(||x_j − p_i||, j) + R(X), where θ(r, j) is a first weight function, θ(r, j) = exp(−r² / (h_j/2)²), r represents the variable of the function θ(r, j), p_i denotes the i-th sampling point, and R(X) is a regularization term, R(X) = Σ_{j∈J} γ_j · Σ_{j'∈J\{j}} θ(||x_j − x_{j'}||, j) / (σ_j · ||x_j − x_{j'}||);
the local repulsive force adjustment parameter of the j-th candidate skeleton point is σ_j = λ_j^(1) / (λ_j^(1) + λ_j^(2) + λ_j^(3)), where λ_j^(1), λ_j^(2) and λ_j^(3) (λ_j^(1) ≥ λ_j^(2) ≥ λ_j^(3)) respectively represent the eigenvalues corresponding to the first, second and third principal components obtained by principal component analysis of the local point cloud formed by the sampling points within radius h_j centered on the j-th skeleton point, and γ_j represents a balance constant;
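The weight function θ and the repulsion adjustment parameter σ_j can be sketched directly (a minimal sketch; the Gaussian kernel form of θ and the eigenvalue ratio for σ_j follow the usual L1-medial-skeleton convention and are assumptions where the patent's equation images are unreadable):

```python
import numpy as np

def theta(r, h):
    """First weight function theta(r, j) = exp(-r^2 / (h_j / 2)^2):
    nearby samples dominate, influence fades past the radius h."""
    return np.exp(-(np.asarray(r) ** 2) / (h / 2.0) ** 2)

def sigma_pca(neighborhood):
    """sigma = l1 / (l1 + l2 + l3) over the sorted PCA eigenvalues of
    the local neighbourhood: near 1 on a line-like (branch-like) patch,
    near 1/3 on an isotropic blob."""
    eig = np.sort(np.linalg.eigvalsh(np.cov(neighborhood.T)))[::-1]
    return eig[0] / eig.sum()

# a noisy straight segment should give sigma close to 1
seg = np.stack([np.linspace(0.0, 1.0, 50), np.zeros(50), np.zeros(50)], axis=1)
seg += np.random.default_rng(0).normal(0.0, 1e-3, seg.shape)
print(sigma_pca(seg) > 0.95)  # True
```

A high σ_j thus signals that the local points already lie along a branch, which is exactly when stronger repulsion is useful to spread the skeleton points out along it.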
s52, randomly selecting J sampling points from the sampling point set P as the initial value X^0 of the candidate skeleton point set X;
s53, let k be k +1, perform the k-th iterative contraction process:
s531, for each candidate skeleton point in each partition region, updating the neighborhood radius and position of the candidate skeleton point as shown in fig. 4, specifically including:
step A, respectively setting three increment hypotheses H_0, H_1 and H_2 for the j-th candidate skeleton point:

H_0: h_j^k = h_j^(k−1);

H_1: h_j^k = (1 + K1) · h_j^(k−1);

H_2: h_j^k = K2 · h_j^(k−1);

wherein h_j^(k−1) represents the neighborhood radius of the j-th candidate skeleton point at the (k−1)-th iteration, h_j^k represents the neighborhood radius of the j-th candidate skeleton point at the k-th iteration, and K1 and K2 respectively represent a first proportionality coefficient and a second proportionality coefficient, with K1 > 0 and 0 < K2 < 1;
step B, based on the k-th iteration neighborhood radius under each of the three increment hypotheses, obtaining the position x_j^k of the j-th candidate skeleton point at the k-th iteration by using the skeleton point position updating formula:

x_j^k = ( Σ_{i∈I} p_i · α_{ij}^(k−1) ) / ( Σ_{i∈I} α_{ij}^(k−1) ) + μ · σ_j^(k−1) · ( Σ_{j'∈J\{j}} (x_j^(k−1) − x_{j'}^(k−1)) · β_{jj'}^(k−1) ) / ( Σ_{j'∈J\{j}} β_{jj'}^(k−1) )

wherein the first iteration variable α_{ij}^(k−1) = θ(||x_j^(k−1) − p_i||, j) / ||x_j^(k−1) − p_i||, the second iteration variable β_{jj'}^(k−1) = θ(||x_j^(k−1) − x_{j'}^(k−1)||, j) / ||x_j^(k−1) − x_{j'}^(k−1)||², and μ represents a local repulsive force parameter of the candidate skeleton point set X; σ_j^(k−1), α_{ij}^(k−1) and β_{jj'}^(k−1) are respectively the (k−1)-th iteration values of σ_j, α_{ij} and β_{jj'}; x_j^(k−1) represents the position of the j-th candidate skeleton point at the (k−1)-th iteration; x_{j'}^(k−1) represents the position at the (k−1)-th iteration of a candidate skeleton point of the candidate skeleton point set X other than the j-th candidate skeleton point;
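One iteration of the position update in step B might be sketched as follows (assuming the standard L1-median fixed-point form; the cutoff restricting repulsion to candidates within radius h, the parameter values and the synthetic data are illustrative assumptions, not the patented implementation):

```python
import numpy as np

def theta(r, h):
    return np.exp(-(r ** 2) / (h / 2.0) ** 2)

def contract_step(x, samples, others, h, mu=0.35, sigma=1.0, eps=1e-8):
    """Move one candidate skeleton point x: attraction toward the local
    weighted median of the sampling points (alpha term) plus repulsion
    away from the other candidate points within radius h (beta term)."""
    d = np.linalg.norm(samples - x, axis=1) + eps
    alpha = theta(d, h) / d                          # first iteration variable
    attract = (samples * alpha[:, None]).sum(0) / alpha.sum()
    diff = x - others
    r = np.linalg.norm(diff, axis=1) + eps
    near = r < h                                     # repel only from nearby candidates
    if not np.any(near):
        return attract
    beta = theta(r[near], h) / r[near] ** 2          # second iteration variable
    repulse = (diff[near] * beta[:, None]).sum(0) / beta.sum()
    return attract + mu * sigma * repulse

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.05, (200, 3)) + np.array([1.0, 0.0, 0.0])
x0 = np.array([0.5, 0.5, 0.5])
x1 = contract_step(x0, samples, others=np.array([[10.0, 0.0, 0.0]]), h=2.0)
# the point is pulled toward the cluster around (1, 0, 0)
print(np.linalg.norm(x1 - np.array([1.0, 0.0, 0.0])) < 0.1)  # True
```

Repeating this step across all candidate points drives them toward the branch centerlines while the repulsion keeps them from collapsing onto one another.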
step C, respectively calculating the value of the discriminant function under each of the three increment hypotheses, and updating the k-th iteration values of the neighborhood radius and the position of the j-th candidate skeleton point to the values h_j^k and x_j^k calculated under the increment hypothesis that gives the maximum value of the discriminant function; the discriminant function is:

D(x_j'', s_j) = η(x_j'', s_j) / N_j, D(x_j'', s_j) ∈ [0, 1];

wherein x_j'' represents the candidate skeleton point position x_j^k obtained for the j-th candidate skeleton point under an increment hypothesis; N_j represents the number of sampling points in the divided region s_j of the j-th candidate skeleton point; η(x_j'', s_j) represents the number of sampling points of the divided region s_j whose distance to the point x_j'' lies in the interval [0, σ(x_j'', s_j)], and σ(x_j'', s_j) represents the variance of the distances from the sampling points of the divided region s_j to the point x_j'';
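The discriminant follows directly from its definition; the one liberty taken in this sketch is using the standard deviation of the distances as the interval bound σ(x'', s), which keeps the bound on the same scale as the distances themselves (the text literally says variance):

```python
import numpy as np

def discriminant(x, region_pts):
    """D(x, s) = eta(x, s) / N: the fraction of the region's sampling
    points whose distance to x lies in [0, sigma(x, s)]. The value is
    in [0, 1] by construction and is high when the samples aggregate
    tightly around x."""
    d = np.linalg.norm(region_pts - x, axis=1)
    return float(np.mean(d <= d.std()))

rng = np.random.default_rng(0)
region = rng.normal(0.0, 1.0, (500, 3))
d_center = discriminant(np.zeros(3), region)          # at the cluster centre
d_far = discriminant(np.array([100.0, 0.0, 0.0]), region)
print(d_far < d_center <= 1.0)  # True
```

A candidate position sitting inside the aggregation of its region thus scores higher than one displaced away from it, which is what lets step C pick between the three radius hypotheses.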
s532, calculating the first objective function value on the candidate skeleton point set X obtained at the (k−1)-th iteration to obtain a first numerical value, and calculating the first objective function value on the candidate skeleton point set X obtained at the k-th iteration to obtain a second numerical value; if the decrease of the second numerical value relative to the first numerical value reaches the error threshold, merging and outputting the candidate skeleton point set X obtained at the k-th iteration as the skeleton point set, and entering S6; if the decrease of the second numerical value relative to the first numerical value does not reach the error threshold, returning to S53.
In a preferred embodiment, a first step is further included between S531 and S532, and the first step includes:
as shown in fig. 6, for any two adjacent divided regions s_u and s_v, respectively obtaining the mean position x̄_u of the candidate skeleton points in s_u, the mean position x̄_v of the candidate skeleton points in s_v, and the mean position x̄_{u∪v} of the candidate skeleton points in s_u ∪ s_v;

taking x̄_{u∪v} as the variable x_j'' in the discriminant function and s_u ∪ s_v as the variable s_j in the discriminant function, obtaining a first discrimination function value;

taking x̄_u as the variable x_j'' in the discriminant function and the divided region s_u as the variable s_j in the discriminant function, obtaining a second discrimination function value;

taking x̄_v as the variable x_j'' in the discriminant function and the divided region s_v as the variable s_j in the discriminant function, obtaining a third discrimination function value;

calculating the sum of the second discrimination function value and the third discrimination function value;

if the first discrimination function value is larger than the sum of the second discrimination function value and the third discrimination function value, merging the adjacent divided regions s_u and s_v and updating the division labels of the sampling points in s_u and s_v; if the first discrimination function value is not larger than the sum of the second discrimination function value and the third discrimination function value, not merging the adjacent divided regions s_u and s_v.
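The merge test above can be sketched end to end (a sketch under assumptions: the combined-mean variable x̄_{u∪v} and the standard-deviation bound inside the discriminant are reconstructions, and the two synthetic regions are hypothetical):

```python
import numpy as np

def discriminant(x, region_pts):
    d = np.linalg.norm(region_pts - x, axis=1)
    return float(np.mean(d <= d.std()))

def should_merge(pts_u, skel_u, pts_v, skel_v):
    """Merge two adjacent divided regions when the discriminant of the
    union, evaluated at the mean of the combined candidate skeleton
    points, exceeds the sum of the per-region discriminants evaluated
    at their own mean skeleton positions."""
    x_u = skel_u.mean(axis=0)
    x_v = skel_v.mean(axis=0)
    x_uv = np.vstack([skel_u, skel_v]).mean(axis=0)
    first = discriminant(x_uv, np.vstack([pts_u, pts_v]))
    second = discriminant(x_u, pts_u)
    third = discriminant(x_v, pts_v)
    return first > second + third

rng = np.random.default_rng(0)
pts_u = rng.normal(0.0, 0.1, (100, 3))                       # region around origin
pts_v = rng.normal(0.0, 0.1, (100, 3)) + np.array([5.0, 0.0, 0.0])
merged = should_merge(pts_u, pts_u[:5], pts_v, pts_v[:5])
print(merged)  # False
```

Two well-separated regions fail the test, as here: the combined mean lands in the empty space between them, where no sampling points aggregate.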
In a preferred embodiment, S6 includes:
s61, defining the skeleton line as an undirected connected graph T = (X', E_n), where the graph T is a tree and E_n is the set of minimum spanning tree edges on T, E_n = {<x_{i'}, x_{j'}> ; i', j' ∈ J}; X' represents the skeleton point set output in S5, X' = {x_{i'}}_{i'∈J}, and J represents the index set of the skeleton point set X';
s62, calculating the connection weight between every pair of skeleton points in X', the connection weight being the Euclidean distance between the two skeleton points; for each skeleton point x_{i'} in X', connecting x_{i'} with all skeleton points whose connection weight to x_{i'} is less than a weight threshold, thereby forming a connected graph G = (X', E), where E represents the set of connected edges on the graph G;
and S63, generating a minimum spanning tree based on the connected graph G = (X', E): selecting edges of the connected graph G = (X', E) as skeleton line segments according to a greedy strategy, thereby obtaining the skeleton line.
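Steps S61 to S63 amount to a greedy (Kruskal-style) minimum spanning tree over the thresholded Euclidean distance graph, which can be sketched as follows (illustrative names; a pure-Python union-find, not the patented implementation):

```python
import numpy as np
from itertools import combinations

def skeleton_mst(points, weight_threshold=np.inf):
    """Connect skeleton points with a minimum spanning tree: connection
    weights are Euclidean distances, only edges below the weight
    threshold enter the candidate graph G = (X', E), and Kruskal's
    greedy strategy picks the lightest edge joining two components."""
    n = len(points)
    edges = sorted(
        (np.linalg.norm(points[a] - points[b]), a, b)
        for a, b in combinations(range(n), 2)
        if np.linalg.norm(points[a] - points[b]) < weight_threshold
    )
    parent = list(range(n))
    def find(a):                       # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for w, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:                   # edge joins two separate components
            parent[ra] = rb
            tree.append((a, b))
    return tree

nodes = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [2.0, 1, 0]])
tree = skeleton_mst(nodes)
print(tree)  # [(0, 1), (1, 2), (2, 3)]
```

The threshold keeps spurious long edges between distant branches out of G, so the greedy tree follows the branch structure rather than shortcutting across it.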
The invention also discloses a binocular vision-based single-sided tree point cloud skeleton line extraction system, in a preferred embodiment, the system comprises a binocular stereo camera and a processor, wherein the binocular stereo camera is connected with the processor and shoots towards a target tree at a certain angle; and the processor extracts the skeleton line of the target tree by executing the binocular vision-based single-sided tree point cloud skeleton line extraction method.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (7)

1. A single-sided tree point cloud skeleton line extraction method based on binocular vision is characterized by comprising the following steps:
s1, acquiring left and right view images obtained by shooting a target tree from a single angle by a binocular stereo camera;
s2, acquiring a depth map based on the left and right view images, converting the depth map into a point cloud model according to a camera imaging principle, and recording the point cloud model as a single-sided tree point cloud;
s3, extracting N sampling points from the single-sided tree point cloud through a down-sampling algorithm to form a sampling point set P, wherein N is a positive integer;
s4, dividing the sampling point set P into M partition areas, and setting a partition label for marking the partition area to which the sampling point belongs for each sampling point, wherein the size of M is determined by N, M is a positive integer, and M is smaller than N;
s5, based on the M divided regions, performing iterative contraction of the L1-median algorithm to obtain skeleton points: in each iteration, presetting a plurality of radii for the neighborhood of each candidate skeleton point, obtaining the corresponding candidate skeleton point update position based on each preset neighborhood radius, obtaining the aggregation degree of the sampling points in the neighborhood under each preset radius around the corresponding candidate skeleton point update position, and taking the preset radius with the highest aggregation degree as the updated value of the neighborhood radius of the candidate skeleton point in this iteration;
and S6, connecting the skeleton points to obtain a skeleton line.
2. The binocular vision-based single-sided tree point cloud skeleton line extraction method of claim 1, wherein the S5 comprises:
s51, the parameter setting step specifically comprises the following steps:
let the sampling point set P be P = {p_i}_{i∈I}, where I represents the index set of the sampling point set P; setting the iteration number as k, with an initial value of k = 0; let the set of divided regions be S = {s_m}_{m∈M}, where s_m denotes the m-th divided region, and let the division label set be C = {c_i}_{i∈I}, where c_i denotes the division label of the i-th sampling point;
let the initial value h_j^0 of the neighborhood radius h_j of the j-th candidate skeleton point be h_j^0 = 2·d_j / (N_j)^(1/3), where d_j is the diagonal length of the divided region s_j in which the candidate skeleton point x_j is located, and N_j represents the number of sampling points in the divided region s_j;
setting the first objective function as: argmin_X Σ_{i∈I} Σ_{j∈J} ||x_j − p_i|| · θ(||x_j − p_i||, j) + R(X), where θ(r, j) is a first weight function, θ(r, j) = exp(−r² / (h_j/2)²), r represents the variable of the function θ(r, j), p_i denotes the i-th sampling point, and R(X) is a regularization term, R(X) = Σ_{j∈J} γ_j · Σ_{j'∈J\{j}} θ(||x_j − x_{j'}||, j) / (σ_j · ||x_j − x_{j'}||);
the local repulsive force adjustment parameter of the j-th candidate skeleton point is σ_j = λ_j^(1) / (λ_j^(1) + λ_j^(2) + λ_j^(3)), where λ_j^(1), λ_j^(2) and λ_j^(3) (λ_j^(1) ≥ λ_j^(2) ≥ λ_j^(3)) respectively represent the eigenvalues corresponding to the first, second and third principal components obtained by principal component analysis of the local point cloud formed by the sampling points within radius h_j centered on the j-th skeleton point, and γ_j represents a balance constant;
s52, randomly selecting J sampling points from the sampling point set P as the initial value X^0 of the candidate skeleton point set X;
S53, let k be k +1, perform the k-th iterative contraction process:
s531, for each candidate skeleton point in each partition region, updating the neighborhood radius and the position of the candidate skeleton point, specifically including:
step A, respectively setting three increment hypotheses H_0, H_1 and H_2 for the j-th candidate skeleton point:

H_0: h_j^k = h_j^(k−1);

H_1: h_j^k = (1 + K1) · h_j^(k−1);

H_2: h_j^k = K2 · h_j^(k−1);

wherein h_j^(k−1) represents the neighborhood radius of the j-th candidate skeleton point at the (k−1)-th iteration, h_j^k represents the neighborhood radius of the j-th candidate skeleton point at the k-th iteration, and K1 and K2 respectively represent a first proportionality coefficient and a second proportionality coefficient, with K1 > 0 and 0 < K2 < 1;
step B, based on the k-th iteration neighborhood radius under each of the three increment hypotheses, obtaining the position x_j^k of the j-th candidate skeleton point at the k-th iteration by using the skeleton point position updating formula:

x_j^k = ( Σ_{i∈I} p_i · α_{ij}^(k−1) ) / ( Σ_{i∈I} α_{ij}^(k−1) ) + μ · σ_j^(k−1) · ( Σ_{j'∈J\{j}} (x_j^(k−1) − x_{j'}^(k−1)) · β_{jj'}^(k−1) ) / ( Σ_{j'∈J\{j}} β_{jj'}^(k−1) )

wherein the first iteration variable α_{ij}^(k−1) = θ(||x_j^(k−1) − p_i||, j) / ||x_j^(k−1) − p_i||, the second iteration variable β_{jj'}^(k−1) = θ(||x_j^(k−1) − x_{j'}^(k−1)||, j) / ||x_j^(k−1) − x_{j'}^(k−1)||², and μ represents a local repulsive force parameter of the candidate skeleton point set X; σ_j^(k−1), α_{ij}^(k−1) and β_{jj'}^(k−1) are respectively the (k−1)-th iteration values of σ_j, α_{ij} and β_{jj'}; x_j^(k−1) represents the position of the j-th candidate skeleton point at the (k−1)-th iteration; x_{j'}^(k−1) represents the position at the (k−1)-th iteration of a candidate skeleton point of the candidate skeleton point set X other than the j-th candidate skeleton point;
step C, respectively calculating the value of the discriminant function under each of the three increment hypotheses, and updating the k-th iteration values of the neighborhood radius and the position of the j-th candidate skeleton point to the values h_j^k and x_j^k calculated under the increment hypothesis that gives the maximum value of the discriminant function; the discriminant function is:

D(x_j'', s_j) = η(x_j'', s_j) / N_j, D(x_j'', s_j) ∈ [0, 1];

wherein x_j'' represents the candidate skeleton point position x_j^k obtained for the j-th candidate skeleton point under an increment hypothesis; N_j represents the number of sampling points in the divided region s_j of the j-th candidate skeleton point; η(x_j'', s_j) represents the number of sampling points of the divided region s_j whose distance to the point x_j'' lies in the interval [0, σ(x_j'', s_j)], and σ(x_j'', s_j) represents the variance of the distances from the sampling points of the divided region s_j to the point x_j'';
s532, calculating the first objective function value on the candidate skeleton point set X obtained at the (k−1)-th iteration to obtain a first numerical value, and calculating the first objective function value on the candidate skeleton point set X obtained at the k-th iteration to obtain a second numerical value; if the decrease of the second numerical value relative to the first numerical value reaches the error threshold, merging and outputting the candidate skeleton point set X obtained at the k-th iteration as the skeleton point set, and entering S6; if the decrease of the second numerical value relative to the first numerical value does not reach the error threshold, returning to S53.
3. The binocular vision-based single-sided tree point cloud skeleton line extraction method as claimed in claim 2, wherein a first step is further included between S531 and S532, and the first step includes:
for any two adjacent divided regions s_u and s_v, respectively obtaining the mean position x̄_u of the candidate skeleton points in s_u, the mean position x̄_v of the candidate skeleton points in s_v, and the mean position x̄_{u∪v} of the candidate skeleton points in s_u ∪ s_v;

taking x̄_{u∪v} as the variable x_j'' in the discriminant function and s_u ∪ s_v as the variable s_j in the discriminant function, obtaining a first discrimination function value;

taking x̄_u as the variable x_j'' in the discriminant function and the divided region s_u as the variable s_j in the discriminant function, obtaining a second discrimination function value;

taking x̄_v as the variable x_j'' in the discriminant function and the divided region s_v as the variable s_j in the discriminant function, obtaining a third discrimination function value;

calculating the sum of the second discrimination function value and the third discrimination function value;

if the first discrimination function value is larger than the sum of the second discrimination function value and the third discrimination function value, merging the adjacent divided regions s_u and s_v and updating the division labels of the sampling points in s_u and s_v; if the first discrimination function value is not larger than the sum of the second discrimination function value and the third discrimination function value, not merging the adjacent divided regions s_u and s_v.
4. The binocular vision based single-sided tree point cloud skeleton line extraction method of claim 1, wherein the S6 comprises:
s61, defining the skeleton line as an undirected connected graph T = (X', E_n), where the graph T is a tree and E_n is the set of minimum spanning tree edges on T, E_n = {<x_{i'}, x_{j'}> ; i', j' ∈ J}; X' represents the skeleton point set output in the S5, X' = {x_{i'}}_{i'∈J}, and J represents the index set of the skeleton point set X';
s62, calculating the connection weight between all the skeleton points in the X', wherein the connection weight is the Euclidean distance between two skeleton points;
for each skeleton point x_{i'} in X', connecting x_{i'} with all skeleton points whose connection weight to x_{i'} is less than a weight threshold, thereby forming a connected graph G = (X', E), where E represents the set of connected edges on the graph G;
and S63, generating a minimum spanning tree based on the connected graph G = (X', E): selecting edges of the connected graph G = (X', E) as skeleton line segments according to a greedy strategy, thereby obtaining the skeleton line.
5. The binocular vision based single-sided tree point cloud skeleton line extraction method of claim 1, wherein in the S4, M = ⌈N/K⌉, where K is a proportionality coefficient and K > 1.
6. The binocular vision based single-sided tree point cloud skeleton line extraction method of claim 1, further comprising a step of repairing a hole of the depth map by a direction-based joint bilateral filtering algorithm in the step S2.
7. A single-sided tree point cloud skeleton line extraction system based on binocular vision is characterized by comprising a binocular stereo camera and a processor, wherein the binocular stereo camera is connected with the processor and shoots towards a target tree at a certain angle; the processor extracts the skeleton line of the target tree by executing the binocular vision-based single-sided tree point cloud skeleton line extraction method of one of claims 1 to 6.
CN202011343171.6A 2020-11-25 2020-11-25 Single-side tree point cloud skeleton line extraction method and system based on binocular vision Active CN112465832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011343171.6A CN112465832B (en) 2020-11-25 2020-11-25 Single-side tree point cloud skeleton line extraction method and system based on binocular vision

Publications (2)

Publication Number Publication Date
CN112465832A true CN112465832A (en) 2021-03-09
CN112465832B CN112465832B (en) 2024-04-16

Family

ID=74808278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011343171.6A Active CN112465832B (en) 2020-11-25 2020-11-25 Single-side tree point cloud skeleton line extraction method and system based on binocular vision

Country Status (1)

Country Link
CN (1) CN112465832B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298919A (en) * 2021-04-14 2021-08-24 江苏大学 Skeleton extraction method of three-dimensional plant point cloud model
CN113409265A (en) * 2021-06-11 2021-09-17 华中农业大学 Method and system for dynamically acquiring and analyzing 3D phenotype of tomato in whole growth period
CN113838198A (en) * 2021-08-17 2021-12-24 上海师范大学 Automatic marking method and device for characters in electronic map and electronic equipment
CN115512121A (en) * 2022-08-01 2022-12-23 南京林业大学 Branch point cloud framework extraction method for incompletely simulating tree water and nutrient transmission

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103268631A (en) * 2013-05-23 2013-08-28 中国科学院深圳先进技术研究院 Method and device for extracting point cloud framework
CN103632381A (en) * 2013-12-08 2014-03-12 中国科学院光电技术研究所 Method for tracking extended targets by means of extracting feature points by aid of frameworks
CN107223268A (en) * 2015-12-30 2017-09-29 中国科学院深圳先进技术研究院 A kind of three-dimensional point cloud model method for reconstructing and device
CN107330901A (en) * 2017-06-29 2017-11-07 西安理工大学 A kind of object component decomposition method based on skeleton
US10140733B1 (en) * 2017-09-13 2018-11-27 Siemens Healthcare Gmbh 3-D vessel tree surface reconstruction
CN111161267A (en) * 2019-12-09 2020-05-15 西安工程大学 Segmentation method of three-dimensional point cloud model
CN111968089A (en) * 2020-08-15 2020-11-20 晋江市博感电子科技有限公司 L1 median skeleton extraction method based on maximum inscribed sphere mechanism

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIXIAN FU等: "Tree Skeletonization for Raw Point Cloud Exploiting Cylindrical Shape Prior", 《IEEE ACCESS》, vol. 08, 4 February 2020 (2020-02-04), pages 27327 - 27341, XP011771909, DOI: 10.1109/ACCESS.2020.2971549 *
晁莹等: "基于区域分割的点云骨架提取算法", 《计算机工程》, vol. 43, no. 10, 15 October 2017 (2017-10-15), pages 222 - 227 *
李仁忠等: "基于点云内骨架的分割算法", 《激光与光电子学进展》, vol. 56, no. 22, 21 May 2019 (2019-05-21), pages 90 - 97 *
鲁斌等: "基于改进自适应k均值聚类的三维点云骨架提取的研究", 《自动化学报》, vol. 48, no. 08, 21 October 2020 (2020-10-21), pages 1994 - 2006 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298919A (en) * 2021-04-14 2021-08-24 江苏大学 Skeleton extraction method of three-dimensional plant point cloud model
CN113409265A (en) * 2021-06-11 2021-09-17 华中农业大学 Method and system for dynamically acquiring and analyzing 3D phenotype of tomato in whole growth period
CN113409265B (en) * 2021-06-11 2022-04-12 华中农业大学 Method and system for dynamically acquiring and analyzing 3D phenotype of tomato in whole growth period
CN113838198A (en) * 2021-08-17 2021-12-24 上海师范大学 Automatic marking method and device for characters in electronic map and electronic equipment
CN113838198B (en) * 2021-08-17 2023-12-05 上海师范大学 Automatic labeling method and device for characters in electronic map and electronic equipment
CN115512121A (en) * 2022-08-01 2022-12-23 南京林业大学 Branch point cloud framework extraction method for incompletely simulating tree water and nutrient transmission

Also Published As

Publication number Publication date
CN112465832B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN112465832A (en) Single-sided tree point cloud skeleton line extraction method and system based on binocular vision
CN110363858B (en) Three-dimensional face reconstruction method and system
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN109146948B (en) Crop growth phenotype parameter quantification and yield correlation analysis method based on vision
CN110232389B (en) Stereoscopic vision navigation method based on invariance of green crop feature extraction
CN111724433B (en) Crop phenotype parameter extraction method and system based on multi-view vision
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN107833181A (en) A kind of three-dimensional panoramic image generation method and system based on zoom stereoscopic vision
CN110796694A (en) Fruit three-dimensional point cloud real-time acquisition method based on KinectV2
CN106651900A (en) Three-dimensional modeling method of elevated in-situ strawberry based on contour segmentation
CN112200854B (en) Leaf vegetable three-dimensional phenotype measuring method based on video image
CN110070571B (en) Phyllostachys pubescens morphological parameter detection method based on depth camera
CN112465889B (en) Plant point cloud segmentation method, system and storage medium based on two-dimensional-three-dimensional integration
CN111429490A (en) Agricultural and forestry crop three-dimensional point cloud registration method based on calibration ball
CN115375842A (en) Plant three-dimensional reconstruction method, terminal and storage medium
CN112990063B (en) Banana maturity grading method based on shape and color information
CN107607053A (en) A kind of standing tree tree breast diameter survey method based on machine vision and three-dimensional reconstruction
CN108320310B (en) Image sequence-based space target three-dimensional attitude estimation method
CN114332689A (en) Citrus identification and positioning method, device, equipment and storage medium
CN112906719A (en) Standing tree factor measuring method based on consumption-level depth camera
CN115239882A (en) Crop three-dimensional reconstruction method based on low-light image enhancement
CN114463425B (en) Workpiece surface featureless point positioning method based on probability Hough straight line detection
CN111951178B (en) Image processing method and device for remarkably improving image quality and electronic equipment
CN116863357A (en) Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method
CN112686859A (en) Crop CWSI detection method based on thermal infrared and RGB-D camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant