CN110274549B - Method and device for measuring harvest target - Google Patents

Method and device for measuring harvest target

Info

Publication number
CN110274549B
CN110274549B CN201910550096.1A
Authority
CN
China
Prior art keywords
target
target object
determining
image
connecting line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910550096.1A
Other languages
Chinese (zh)
Other versions
CN110274549A (en)
Inventor
刘晋浩
吴健
刘德健
么汝婷
张昕
马英歌
祁东升
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Forestry University
Original Assignee
Beijing Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Forestry University filed Critical Beijing Forestry University
Priority to CN201910550096.1A
Publication of CN110274549A
Application granted
Publication of CN110274549B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/08 Measuring arrangements characterised by the use of optical techniques for measuring diameters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30188 Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides a measuring method and a measuring device for a harvesting and breeding target. The target object in an acquired target image and the actual position information of the target object are determined based on the visual features of the target object; first point cloud data of the target object are collected based on the actual position information and preset acquisition position information on the target object; based on the first point cloud data, a first tangent point position and a second tangent point position at which sight lines from the acquisition position are tangent to the two opposite sides of the target object are determined, together with a first length of a first connecting line between the first tangent point position and the acquisition position, a second length of a second connecting line between the second tangent point position and the acquisition position, and a first included angle between the first connecting line and the second connecting line; the chest diameter of the target object at the acquisition position is then determined based on the first length, the second length, and the first included angle.

Description

Method and device for measuring harvest target
Technical Field
The application relates to the technical field of forest region harvesting and breeding, in particular to a method and a device for measuring a harvesting and breeding target.
Background
With the rapid development of forestry-related industries, large numbers of artificially fast-grown trees often need to be felled. For the sustainable development of the forestry industry, however, each felled tree must meet a certain standard, that is, reach a certain specification, for example a breast diameter above a given value, and whether a tree qualifies can conventionally be determined only after manual measurement and screening.
However, the forest environment is complex and full of obstacles, so the measurement locations that operators can reach are very limited, and a generally complex forestry environment makes harvesting and breeding operations discontinuous. In some environments, such as areas struck by natural disasters like earthquakes and floods, or certain rugged terrain, conditions are unfavorable or even dangerous for manual measurement, which hinders the measurement of trees and keeps operating efficiency low.
Disclosure of Invention
In view of the above, an object of the present application is to provide a method and an apparatus for measuring a harvesting and breeding target, which determine the point cloud data of a target object by collecting a forest-region target image and derive the specification information of the target object from that point cloud data, so as to judge whether harvesting and breeding work should be performed. The method can operate in environments that operators cannot reach, effectively reducing the risk of manual work and improving operating efficiency.
The embodiment of the application provides a method for measuring a harvesting and breeding target, which comprises the following steps:
determining a target object in the obtained target image and actual position information of the target object based on the visual characteristics of the target object;
acquiring first point cloud data of the target object based on the actual position information and preset acquisition position information on the target object;
determining, based on the first point cloud data, a first tangent point position and a second tangent point position at which sight lines from the acquisition position are tangent to the two opposite sides of the target object, a first length of a first connecting line between the first tangent point position and the acquisition position, a second length of a second connecting line between the second tangent point position and the acquisition position, and a first included angle between the first connecting line and the second connecting line;
determining the chest diameter of the target object at the acquisition position based on the first length of the first connecting line, the second length of the second connecting line, and the first included angle between the first connecting line and the second connecting line.
Further, the determining the target object in the acquired target image and the actual position information of the target object based on the visual feature of the target object includes:
determining a target area of the target object in the target image;
respectively obtaining a pixel value of each pixel of the target area in each color channel, and calculating a pixel mean value of the target area in each channel based on the obtained pixel values;
determining a target color feature image of the target image based on whether the pixel mean of the target area in each channel falls within the preset pixel interval range of that channel;
determining a target region feature image and a target area feature image in the target image based on the target color feature image;
and determining the target object in the target image and the actual position information of the target object based on the target area feature image.
Further, the determining a target area feature image in the target image based on the target color feature image includes:
establishing a pixel coordinate system of the target image;
determining a row coordinate value and a column coordinate value, in the pixel coordinate system, of the central position of the target object in the target color feature image, as well as a target row value and a target column value within the area occupied by the target object in the target color feature image, wherein the target row value is the smallest of all row values within that occupied area, and the target column value is the smallest of all column values within that occupied area;
determining a first difference between the row coordinate value and the target row value as the width value of the target object in the target color feature image, and a second difference between the column coordinate value and the target column value as the height value of the target object in the target color feature image;
and determining the target area feature of the target area based on the width value and the height value, thereby determining the target area feature image of the target image.
Further, the measuring method calculates the chest diameter of the target object by the following formula:
D = √(R₁² + R₂² - 2·R₁·R₂·cos θ)

wherein D is the diameter at breast height of the target object, R₁ is the first length of the first connecting line, R₂ is the second length of the second connecting line, and θ is the first included angle between the first connecting line and the second connecting line.
Further, after determining the target object in the acquired target image and the actual position information of the target object, the measuring method further includes:
determining an upper boundary position and a lower boundary position of the target object based on the actual position information of the target object;
determining a second included angle between a third connecting line between the acquisition position and the upper boundary position of the target object and a fourth connecting line between the acquisition position and the lower boundary position of the target object;
acquiring second point cloud data of the target object based on any one of the upper boundary and the lower boundary of the target object and the second included angle;
and establishing a three-dimensional stereo image of the target object based on the second point cloud data of the target object.
The embodiment of the present application further provides a measuring device for harvesting and breeding targets, the measuring device includes:
the first determining module is used for determining the target object in the acquired target image and the actual position information of the target object based on the visual characteristics of the target object;
the first acquisition module is used for acquiring first point cloud data of the target object based on the actual position information determined by the first determination module and preset acquisition position information on the target object;
the second determining module is used for determining a first tangent point position and a second tangent point position at which sight lines from the acquisition position are tangent to the two opposite sides of the target object, a first length of a first connecting line between the first tangent point position and the acquisition position, a second length of a second connecting line between the second tangent point position and the acquisition position, and a first included angle between the first connecting line and the second connecting line;
a third determining module, configured to determine, based on the first length of the first connection line, the second length of the second connection line, and the first included angle between the first connection line and the second connection line determined by the second determining module, the chest diameter of the target object at the acquisition position.
Further, the first determining module comprises:
a first determination unit, configured to determine a target area of the target object in the target image;
a calculating unit, configured to respectively obtain a pixel value of each pixel in each color channel of the target region determined by the first determining unit, and calculate a pixel average value in the target region in each color channel based on the obtained pixel values;
a second determining unit, configured to determine a target color feature image of the target image based on whether the pixel mean of the target region in each channel, calculated by the calculating unit, falls within the preset pixel interval range of that channel;
a third determining unit, configured to determine a target region feature image and a target area feature image in the target image based on the target color feature image determined by the second determining unit;
a fourth determining unit, configured to determine, based on the target area feature image determined by the third determining unit, a target object in the target image and actual position information of the target object.
Further, the third determining unit is specifically configured to:
establishing a pixel coordinate system of the target image;
determining a row coordinate value and a column coordinate value, in the pixel coordinate system, of the central position of the target object in the target color feature image, as well as a target row value and a target column value within the area occupied by the target object in the target color feature image, wherein the target row value is the smallest of all row values within that occupied area, and the target column value is the smallest of all column values within that occupied area;
determining a first difference between the row coordinate value and the target row value as the width value of the target object in the target color feature image, and a second difference between the column coordinate value and the target column value as the height value of the target object in the target color feature image;
and determining the target area feature of the target area based on the width value and the height value, thereby determining the target area feature image of the target image.
Further, the third determination module calculates the chest diameter of the target object by the following formula:
D = √(R₁² + R₂² - 2·R₁·R₂·cos θ)

wherein D is the diameter at breast height of the target object, R₁ is the first length of the first connecting line, R₂ is the second length of the second connecting line, and θ is the first included angle between the first connecting line and the second connecting line.
Further, the measuring apparatus further includes:
a fourth determining module, configured to determine an upper boundary position and a lower boundary position of the target object based on the actual position information of the target object determined by the first determining module;
a fifth determining module, configured to determine a second included angle between a third connection line between the collecting position and the upper boundary position of the target object and a fourth connection line between the collecting position and the lower boundary position of the target object;
a second acquisition module, configured to acquire second point cloud data of the target object based on either one of the upper boundary and the lower boundary of the target object determined by the fourth determination module and the second included angle;
and the establishing module is used for establishing a three-dimensional stereo image of the target object based on the second point cloud data of the target object acquired by the second acquisition module.
An embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the steps of the method of measuring a procreation target as described above.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the method for measuring a harvesting target as described above.
According to the measuring method and the measuring device for the harvesting and breeding target provided by the embodiments of the application, the target object in the obtained target image and the actual position information of the target object are determined based on the visual features of the target object; first point cloud data of the target object are collected based on the actual position information and preset acquisition position information on the target object; based on the first point cloud data, a first tangent point position and a second tangent point position at which sight lines from the acquisition position are tangent to the two opposite sides of the target object are determined, together with a first length of a first connecting line between the first tangent point position and the acquisition position, a second length of a second connecting line between the second tangent point position and the acquisition position, and a first included angle between the first connecting line and the second connecting line; and the chest diameter of the target object at the acquisition position is determined based on the first length, the second length, and the first included angle.
Therefore, the point cloud data of the target object are determined by collecting forest-region target images, the breast diameter of the target object at the preset acquisition position is determined from the point cloud data, and whether harvesting and breeding work should be carried out is judged accordingly; the work can be done in environments that operators cannot reach, which reduces the danger to operators and improves operating efficiency.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a diagram of a system architecture in one possible application scenario;
Fig. 2 is a flowchart of a method for measuring a harvesting and breeding target according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the target-object chest diameter calculation in the method for measuring a harvesting and breeding target according to an embodiment of the present application;
Fig. 4 is a flowchart of a method for measuring a harvesting and breeding target according to another embodiment of the present application;
Fig. 5 is a first schematic structural diagram of a measuring apparatus for harvesting and breeding targets according to an embodiment of the present application;
Fig. 6 is a second schematic structural diagram of a measuring apparatus for harvesting and breeding targets according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of the first determining module shown in Fig. 5;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. Every other embodiment that can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present application falls within the protection scope of the present application.
First, an application scenario to which the present application is applicable will be described. The present application can be applied to the technical field of forest-region harvesting and breeding: based on the visual features of a target object, the target object and its actual position are determined in the collected target image, first point cloud data of the target object at a preset acquisition position are collected, and the breast diameter of the target object at the preset acquisition position is calculated from the first point cloud data. Referring to fig. 1, fig. 1 is a system diagram for this application scenario. As shown in fig. 1, the system includes an acquisition device, a measurement device and a target object. The acquisition device is configured to acquire the target image containing the target object that the measurement device requires; after receiving the target image acquired by the acquisition device, the measurement device determines the target object and its actual position in the target image based on the visual features of the target object, and collects point cloud data at the preset acquisition position of the target object to obtain the chest diameter of the target object at that position.
Referring to fig. 2, fig. 2 is a flowchart of a method for measuring a harvesting and breeding target according to an embodiment of the present application. As shown in fig. 2, the method for measuring a harvesting and breeding target provided in the embodiment of the present application includes:
step 201, determining a target object in the acquired target image and actual position information of the target object based on the visual characteristics of the target object.
In this step, a target image is obtained, the visual features of a target object in the target image are extracted, the target object in the target image is determined based on the extracted visual features, and the actual position information of the target object in the forest area is determined.
For example, in a forest area, when a tree is to be measured, an image of the area where the tree is located is obtained, and based on the obtained image including the tree, the tree in the image and the actual position of the tree are determined.
Step 202, collecting first point cloud data of the target object based on the actual position information and preset collecting position information on the target object.
In this step, according to the determined actual position information of the target object and the preset acquisition position information, the first point cloud data of the target object are collected at the preset acquisition position of the target object and used in the subsequent calculation.
Corresponding to the above example, after the actual position of the tree is determined, the first point cloud data of the tree at the preset acquisition position can be collected by a two-dimensional laser radar. For example, if the breast diameter of the trunk is to be measured 1.3 m above the ground, the acquisition position is first set to that height, and the two-dimensional laser radar then collects the point cloud data of the trunk cross-section 1.3 m above the ground.
Step 203, determining, based on the first point cloud data, a first tangent point position and a second tangent point position at which sight lines from the acquisition position are tangent to the two opposite sides of the target object, a first length of a first connecting line between the first tangent point position and the acquisition position, a second length of a second connecting line between the second tangent point position and the acquisition position, and a first included angle between the first connecting line and the second connecting line.
In this step, after the first point cloud data of the target object are determined, the collected first point cloud data are fitted to obtain a contour boundary diagram of the target object at the preset acquisition position. Based on the fitted first point cloud data, the first tangent point position and the second tangent point position at which sight lines from the acquisition position are tangent to the two opposite sides of the target object are determined, and the first length of the first connecting line between the first tangent point position and the acquisition position, the second length of the second connecting line between the second tangent point position and the acquisition position, and the first included angle between the first connecting line and the second connecting line are determined.
Corresponding to the above example, after the point cloud data of the tree are collected, they are fitted; since the cross-section of a tree trunk is approximately circular, the fitting yields a roughly circular boundary contour of the tree 1.3 m above the ground. The first tangent point position and the second tangent point position at which sight lines from the acquisition position are tangent to the two opposite sides of the fitted boundary contour are determined, the first length of the first connecting line between the acquisition position and the first tangent point position and the second length of the second connecting line between the acquisition position and the second tangent point position are determined, and the first included angle between the first connecting line and the second connecting line is determined. A minimal sketch of this tangent geometry follows.
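As an illustration only (not part of the patent), the tangent geometry can be read off a single two-dimensional lidar sweep: the outermost beams that still hit the trunk approximate the two tangent sight lines. The following Python sketch assumes a hypothetical (angle, range) scan layout and a precomputed mask of trunk returns:

```python
import numpy as np

def trunk_tangent_geometry(angles_rad, ranges_m, trunk_mask):
    """Recover R1, R2 and theta from one 2D lidar sweep (hypothetical
    helper; the names and the scan layout are assumptions).

    angles_rad : 1-D array of beam angles of the sweep
    ranges_m   : 1-D array of measured ranges, one per beam
    trunk_mask : boolean array marking the beams that hit the target trunk
    """
    trunk_angles = angles_rad[trunk_mask]
    trunk_ranges = ranges_m[trunk_mask]
    # The outermost beams that still hit the trunk approximate the sight
    # lines tangent to its two opposite sides.
    i_first = np.argmin(trunk_angles)
    i_last = np.argmax(trunk_angles)
    r1 = trunk_ranges[i_first]   # first length R1
    r2 = trunk_ranges[i_last]    # second length R2
    theta = trunk_angles[i_last] - trunk_angles[i_first]  # first included angle
    return r1, r2, theta
```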
Step 204, determining the chest diameter of the target object at the acquisition position based on the first length of the first connecting line, the second length of the second connecting line, and the first included angle between the first connecting line and the second connecting line.
In this step, the chest diameter of the target object at the preset acquisition position is calculated from the determined first length of the first connecting line, the determined second length of the second connecting line, and the determined first included angle between the first connecting line and the second connecting line.
Further, step 201 further includes: determining a target area of the target object in the target image; respectively obtaining the pixel value of each pixel of the target area in each color channel, and calculating the pixel mean of the target area in each channel based on the obtained pixel values; determining a target color feature image of the target image based on whether the pixel mean of the target area in each channel falls within the preset pixel interval range of that channel; determining a target region feature image and a target area feature image in the target image based on the target color feature image; and determining the target object in the target image and the actual position information of the target object based on the target area feature image.
A target area where a target object is located is determined in the acquired target image. After the target area is determined, the pixel value of each pixel of the target area in each color channel is obtained, and the pixel mean of the target area in each channel is calculated from all the obtained pixel values. The pixel mean of the target area in each channel is then compared with the preset pixel interval range of that channel, and the target areas whose mean in every color channel lies within the corresponding preset interval are selected to form the target color feature image. On the basis of the target color feature image, the target region feature image of the target image is determined, and on that basis the target area feature image of the target image is determined. Finally, the target object in the target image and the actual position information of the target object are determined based on the target area feature. The color-mean screening is sketched below.
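A minimal Python sketch of the per-channel mean screening described above; the function name, the image layout and the interval values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def color_feature_mask(image, region_mask, channel_intervals):
    """Keep a candidate area only if its mean pixel value in every color
    channel lies inside that channel's preset interval (the intervals
    are assumed parameters).

    image             : H x W x C array, e.g. an RGB image
    region_mask       : boolean H x W mask of one candidate target area
    channel_intervals : sequence of (low, high) pairs, one per channel
    """
    means = [float(image[..., c][region_mask].mean())
             for c in range(image.shape[-1])]
    in_range = all(lo <= m <= hi
                   for m, (lo, hi) in zip(means, channel_intervals))
    # The area contributes to the target color feature image only when
    # every channel mean falls within its preset interval.
    out = np.zeros(image.shape[:2], dtype=bool)
    if in_range:
        out[region_mask] = True
    return out
```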
The target region feature image is obtained by counting, on the basis of the target color feature image, the total number of connected pixels of each candidate target object and comparing that total with the preset pixel-count range, so that candidate objects whose connected pixels are too few are excluded, as sketched below.
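A sketch of that connected-pixel screening, assuming SciPy's connected-component labelling and a hypothetical preset pixel-count range:

```python
import numpy as np
from scipy import ndimage

def filter_by_pixel_count(color_feature, min_pixels, max_pixels):
    """Build the target region feature image by discarding connected
    components whose total pixel count falls outside the preset range
    (min_pixels and max_pixels are assumed parameters).
    """
    labels, n_components = ndimage.label(color_feature)
    kept = np.zeros_like(color_feature, dtype=bool)
    for lab in range(1, n_components + 1):
        component = labels == lab
        if min_pixels <= int(component.sum()) <= max_pixels:
            kept |= component  # this component survives the size screening
    return kept
```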
Further, the determining a target area feature image in the target image based on the target color feature image includes: establishing a pixel coordinate system of the target image; determining a row coordinate value and a column coordinate value, in the pixel coordinate system, of the central position of the target object in the target color feature image, as well as a target row value and a target column value within the area occupied by the target object in the target color feature image, wherein the target row value is the smallest of all row values within that occupied area and the target column value is the smallest of all column values within that occupied area; determining a first difference between the row coordinate value and the target row value as the width value of the target object in the target color feature image, and a second difference between the column coordinate value and the target column value as the height value of the target object in the target color feature image; and determining the target area feature of the target area based on the width value and the height value, thereby determining the target area feature image of the target image.
In this step, based on the acquired target image, a pixel coordinate system of the target image is established with the lower left corner of the target image as the origin O. In this pixel coordinate system, the row coordinate value and the column coordinate value of the central position of the target object in the target color feature image are determined, together with the target row value and the target column value of the pixels within the area occupied by the target object, the target row value being the smallest row value and the target column value being the smallest column value within that occupied area. For example, if the pixels of the target object in the target color feature image occupy columns 1, 2 and 3, the target column value is 1; similarly, if they occupy rows 1, 2 and 3, the target row value is 1. The first difference between the row coordinate value and the target row value is then determined as the width value of the target object in the target color feature image, and the second difference between the column coordinate value and the target column value as its height value. Finally, the target area feature of the target area is determined based on the width value and the height value, and the target area feature image of the target image is determined.
Specifically, the target area feature of the target area is calculated by the following formula:
A_WH = W × H

wherein A_WH is the target area feature of the target area, W is the width value, and H is the height value.
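A short sketch of the width, height and area-feature computation; the product form A_WH = W × H is an assumption here, since the printed formula is not legible in this copy of the patent:

```python
import numpy as np

def target_area_feature(region_mask, center_row, center_col):
    """Width value, height value and area feature of one candidate
    region, following the construction in the text (A_WH = W * H is
    an assumed reading of the illegible formula)."""
    rows, cols = np.nonzero(region_mask)
    width = center_row - rows.min()    # first difference: width value W
    height = center_col - cols.min()   # second difference: height value H
    return width, height, width * height  # target area feature A_WH
```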
Further, step 204 calculates the chest diameter of the target object by the following formula (as shown in fig. 3):
D = √(R₁² + R₂² - 2·R₁·R₂·cos θ)

wherein D is the diameter at breast height of the target object, R₁ is the first length of the first connecting line, R₂ is the second length of the second connecting line, and θ is the first included angle between the first connecting line and the second connecting line.

In this step, after the first point cloud data of the target object are obtained, the first length R₁ of the first connecting line between the acquisition position and the first tangent point and the second length R₂ of the second connecting line between the acquisition position and the second tangent point are determined; the sum of the squares of R₁ and R₂ is computed together with twice the product R₁·R₂·cos θ, and the square root of the difference between the sum and the doubled product is determined as the chest diameter of the target object.
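A minimal numeric sketch of this law-of-cosines computation in plain Python (no patent-specific API implied):

```python
import math

def breast_diameter(r1, r2, theta_rad):
    """Chord between the two tangent points by the law of cosines:
    D = sqrt(R1^2 + R2^2 - 2 * R1 * R2 * cos(theta))."""
    return math.sqrt(r1 ** 2 + r2 ** 2 - 2.0 * r1 * r2 * math.cos(theta_rad))

# Tangent lines of 5.00 m and 5.02 m separated by 4 degrees give a
# chest diameter of roughly 0.35 m.
print(breast_diameter(5.00, 5.02, math.radians(4.0)))
```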
The method for measuring the harvesting and breeding target provided by the embodiment of the application determines the target object in the obtained target image and the actual position information of the target object based on the visual features of the target object; collects first point cloud data of the target object based on the actual position information and preset acquisition position information on the target object; determines, based on the first point cloud data, a first tangent point position and a second tangent point position at which sight lines from the acquisition position are tangent to the two opposite sides of the target object, a first length of a first connecting line between the first tangent point position and the acquisition position, a second length of a second connecting line between the second tangent point position and the acquisition position, and a first included angle between the first connecting line and the second connecting line; and determines the chest diameter of the target object at the acquisition position based on the first length, the second length, and the first included angle.
Therefore, the first point cloud data of the target object are determined by collecting forest-region target images, the breast diameter of the target object at the preset acquisition position is determined from the first point cloud data, and whether harvesting and breeding work should be carried out is judged accordingly; the operation can be performed in environments that operators cannot reach, which reduces the danger to operators and improves operating efficiency.
Referring to fig. 4, fig. 4 is a flowchart of a method for measuring a harvesting and breeding target according to another embodiment of the present application. As shown in fig. 4, the method for measuring a harvesting and breeding target provided in this embodiment of the present application includes:
step 401, determining a target object in the acquired target image and actual position information of the target object based on the visual characteristics of the target object.
Step 402, determining an upper boundary position and a lower boundary position of the target object based on the actual position information of the target object.
In this step, after the target object in the target image and the actual position information of the target object are determined, the upper boundary position and the lower boundary position of the target object are determined based on the actual position information of the target object.
And 403, determining a second included angle between a third connecting line between the acquisition position and the upper boundary position of the target object and a fourth connecting line between the acquisition position and the lower boundary position of the target object.
In this step, after the upper boundary and the lower boundary of the target object are determined, a second included angle between a third connection line between the acquisition position and the upper boundary position of the target object and a fourth connection line between the acquisition position and the lower boundary position of the target object is determined.
Step 404, collecting second point cloud data of the target object based on either one of the upper boundary and the lower boundary of the target object and the second included angle.
In this step, the second point cloud data of the target object are acquired with either the upper boundary or the lower boundary of the target object as the starting end, the other boundary as the terminal end, and the second included angle as the rotation angle.
Step 405, establishing a three-dimensional stereo map of the target object based on the second point cloud data of the target object.
In this step, after second point cloud data of the target object is acquired, a three-dimensional stereogram of the target object is established based on the second point cloud data.
Thus, the condition of the target object can be analyzed through the three-dimensional stereo image of the target object.
Corresponding to the above example, after the actual position information of the tree is determined, the upper boundary and the lower boundary of the tree are determined; the third connecting line between the acquisition position and the upper boundary and the fourth connecting line between the acquisition position and the lower boundary are determined, and the second included angle between the third connecting line and the fourth connecting line is obtained. With the upper boundary (or the lower boundary) of the tree as the starting end and the second included angle as the rotation angle, the second point cloud data of the tree are collected from top to bottom (or from bottom to top), and the three-dimensional stereo image of the tree is established from the collected second point cloud data. That is to say, the method in the present application can establish a three-dimensional tree model with a two-dimensional laser radar, as sketched below.
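A geometric sketch of stacking the tilted 2D scans into the second point cloud; the scan representation and the rotation convention are simplifying assumptions for illustration (mounting offsets are ignored):

```python
import numpy as np

def sweep_to_point_cloud(scans):
    """Convert successive 2D scans, taken while the sensor tilts from
    one boundary of the tree to the other through the second included
    angle, into one 3D point cloud.

    scans : iterable of (tilt_rad, beam_angles_rad, ranges_m) triples
    """
    points = []
    for tilt, beam_angles, ranges in scans:
        # In-plane coordinates of one 2D scan line...
        x = ranges * np.cos(beam_angles)
        y = ranges * np.sin(beam_angles)
        # ...rotated about the horizontal y-axis by the current tilt.
        points.append(np.stack([x * np.cos(tilt), y, x * np.sin(tilt)], axis=1))
    return np.vstack(points)  # second point cloud: one row per 3D point
```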
In addition, in the present embodiment, the tree is taken as an example, but not limited to this, and in other embodiments, the method can also detect the breast diameters of other objects and establish a three-dimensional model map.
The description of step 401 may refer to the description of step 201, and the same technical effect may be achieved, which is not described in detail herein.
The method for measuring the harvesting and breeding target provided by the embodiment of the application determines the target object in the obtained target image and the actual position information of the target object based on the visual features of the target object; determines an upper boundary position and a lower boundary position of the target object based on the actual position information; determines a second included angle between a third connecting line between the acquisition position and the upper boundary position and a fourth connecting line between the acquisition position and the lower boundary position; acquires second point cloud data of the target object based on either boundary of the target object and the second included angle; and establishes a three-dimensional stereo image of the target object based on the second point cloud data.
Therefore, the second point cloud data of the target object are determined by collecting forest-region target images, and the three-dimensional stereo image of the target object is established from the second point cloud data, so that the target object can be further analyzed through its three-dimensional stereo image; the work can be done in environments that operators cannot reach, which effectively reduces the danger of manual operation and helps improve operating efficiency.
Referring to fig. 5 to 7, fig. 5 is a schematic structural diagram of a measuring apparatus for a harvesting target according to an embodiment of the present application, fig. 6 is a second schematic structural diagram of the measuring apparatus for a harvesting target according to the embodiment of the present application, and fig. 7 is a schematic structural diagram of a first determining module shown in fig. 5. As shown in fig. 5, the measuring apparatus 500 of the harvesting target includes:
a first determining module 510, configured to determine, based on a visual feature of a target object, the target object in the acquired target image and actual position information of the target object;
a first collecting module 520, configured to collect first point cloud data of the target object based on the actual position information determined by the first determining module 510 and preset collecting position information on the target object;
a second determining module 530, configured to determine, based on the first point cloud data acquired by the first acquiring module 520, a first tangent point position and a second tangent point position at which sight lines from the acquisition position are tangent to the two opposite sides of the target object, a first length of a first connecting line between the first tangent point position and the acquisition position, a second length of a second connecting line between the second tangent point position and the acquisition position, and a first included angle between the first connecting line and the second connecting line;
a third determining module 540, configured to determine the chest diameter of the target object at the acquisition position based on the first length of the first connection line, the second length of the second connection line, and the first included angle between the first connection line and the second connection line determined by the second determining module 530.
Further, as shown in fig. 6, the measuring apparatus 500 for harvesting target further includes:
a fourth determining module 550, configured to determine an upper boundary position and a lower boundary position of the target object based on the actual position information of the target object determined by the first determining module 510;
a fifth determining module 560, configured to determine a second included angle between a third connection line between the collecting position and the upper boundary position of the target object determined by the fourth determining module 550 and a fourth connection line between the collecting position and the lower boundary position of the target object determined by the fourth determining module 550;
a second collecting module 570, configured to collect second point cloud data of the target object based on either one of the upper boundary and the lower boundary of the target object determined by the fourth determining module 550 and the second included angle;
a creating module 580, configured to create a three-dimensional stereo image of the target object based on the second point cloud data of the target object acquired by the second acquiring module 570.
Further, as shown in fig. 7, the first determining module 510 includes:
a first determination unit 511, configured to determine a target area of the target object in the target image;
a calculating unit 512, configured to respectively obtain a pixel value of each pixel in each color channel of the target region determined by the first determining unit 511, and calculate a pixel average value in the target region at each channel based on the obtained pixel values;
a second determining unit 513, configured to determine a target color feature image of the target image based on whether the pixel mean of the target region in each channel, calculated by the calculating unit 512, falls within the preset pixel interval range of that channel;
a third determining unit 514, configured to determine a target region feature image and a target area feature image in the target image based on the target color feature image determined by the second determining unit 513;
a fourth determining unit 515, configured to determine, based on the target area feature image determined by the third determining unit 514, a target object in the target image and actual position information of the target object.
Further, the third determining unit 514 is specifically configured to:
establishing a pixel coordinate system of the target image;
determining a row coordinate value and a column coordinate value, in the pixel coordinate system, of the central position of the target object in the target color feature image, as well as a target row value and a target column value within the area occupied by the target object in the target color feature image, wherein the target row value is the smallest of all row values within that occupied area, and the target column value is the smallest of all column values within that occupied area;
determining a first difference between the row coordinate value and the target row value as the width value of the target object in the target color feature image, and a second difference between the column coordinate value and the target column value as the height value of the target object in the target color feature image;
and determining the target area feature of the target area based on the width value and the height value, thereby determining the target area feature image of the target image.
Further, the third determining module 540 calculates the chest diameter of the target object by the following formula:
D = √(R₁² + R₂² - 2·R₁·R₂·cos θ)

wherein D is the diameter at breast height of the target object, R₁ is the first length of the first connecting line, R₂ is the second length of the second connecting line, and θ is the first included angle between the first connecting line and the second connecting line.
The measuring device for the harvesting and breeding target provided by the embodiment of the application determines the target object in the obtained target image and the actual position information of the target object based on the visual features of the target object; collects first point cloud data of the target object based on the actual position information and preset acquisition position information on the target object; determines, based on the first point cloud data, a first tangent point position and a second tangent point position at which sight lines from the acquisition position are tangent to the two opposite sides of the target object, a first length of a first connecting line between the first tangent point position and the acquisition position, a second length of a second connecting line between the second tangent point position and the acquisition position, and a first included angle between the first connecting line and the second connecting line; and determines the chest diameter of the target object at the acquisition position based on the first length, the second length, and the first included angle.
Therefore, the point cloud data of the target object are determined by collecting forest-region target images, the breast diameter of the target object at the preset acquisition position is determined from the point cloud data, and whether harvesting and breeding work should be carried out is judged accordingly; the operation can be performed in environments that operators cannot reach, which reduces the danger to operators and improves operating efficiency.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 8, the electronic device 800 includes a processor 810, a memory 820, and a bus 830.
The memory 820 stores machine-readable instructions executable by the processor 810, when the electronic device 800 runs, the processor 810 and the memory 820 communicate through the bus 830, and when the machine-readable instructions are executed by the processor 810, the steps of the method for measuring an harvesting target in the method embodiment shown in fig. 2 and fig. 4 may be executed.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method for measuring an harvesting target in the method embodiments shown in fig. 2 and fig. 4 may be executed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used for illustrating the technical solutions of the present application, but not limiting the same, and the scope of the present application is not limited thereto, and although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method of measuring a harvesting target, the method comprising:
determining a target object in the obtained target image and actual position information of the target object based on the visual characteristics of the target object;
acquiring first point cloud data of the target object based on the actual position information and preset acquisition position information on the target object;
determining, based on the first point cloud data, a first tangent point position and a second tangent point position at which sight lines from the acquisition position are tangent to the two opposite sides of the target object, a first length of a first connecting line between the first tangent point position and the acquisition position, a second length of a second connecting line between the second tangent point position and the acquisition position, and a first included angle between the first connecting line and the second connecting line;
determining a chest diameter of the target object at the acquisition position based on the first length of the first connecting line, the second length of the second connecting line, and the first included angle between the first connecting line and the second connecting line;
determining the target object in the acquired target image and the actual position information of the target object based on the visual characteristics of the target object includes:
determining a target area of the target object in the target image;
respectively obtaining a pixel value of each pixel of the target area in each color channel, and calculating a pixel mean value of the target area in each channel based on the obtained pixel values;
determining a target color characteristic image of the target image based on the pixel mean value of the target area in each channel and a preset pixel interval range of each channel;
determining a target area characteristic image in the target image based on the target color characteristic image;
and determining a target object in the target image and actual position information of the target object based on the target area characteristic image.
2. The measurement method according to claim 1, wherein determining the target area characteristic image in the target image based on the target color characteristic image comprises:
establishing a pixel coordinate system of the target image;
determining a row coordinate value and a column coordinate value of the central position of the target object in the target color characteristic image in the pixel coordinate system, and a target row numerical value and a target column numerical value in an occupied area range of the target object in the target color characteristic image, wherein the target row numerical value is smaller than other row numerical values except the target row numerical value in all row numerical values in the occupied area range of the target object in the target color characteristic image, and the target column numerical value is smaller than other column numerical values except the target column numerical value in all column numerical values in the occupied area range of the target object in the target color characteristic image;
determining a first difference between the row coordinate value and the target row numerical value as a width value of the target object in the target color characteristic image, and determining a second difference between the column coordinate value and the target column numerical value as a height value of the target object in the target color characteristic image;
and determining, based on the width value and the height value, the target area characteristic of the target area and the target area characteristic image of the target image.
3. The measurement method according to claim 1, wherein the measurement method calculates the chest diameter of the target object by the following formula:
D = √(R₁² + R₂² - 2·R₁·R₂·cos θ)
wherein D is the diameter at breast height of the target object, R₁ is the first length of the first connecting line, R₂ is the second length of the second connecting line, and θ is the first included angle between the first connecting line and the second connecting line.
4. The measurement method according to claim 1, wherein after determining the target object in the acquired target image and the actual position information of the target object, the measurement method further comprises:
determining an upper boundary position and a lower boundary position of the target object based on the actual position information of the target object;
determining a second included angle between a third connecting line between the acquisition position and the upper boundary position of the target object and a fourth connecting line between the acquisition position and the lower boundary position of the target object;
acquiring second point cloud data of the target object based on either the upper boundary position or the lower boundary position of the target object and the second included angle;
and establishing a three-dimensional stereo image of the target object based on the second point cloud data of the target object.
5. A measuring device for harvesting targets, the measuring device comprising:
the first determining module is used for determining the target object in the acquired target image and the actual position information of the target object based on the visual characteristics of the target object;
the first acquisition module is used for acquiring first point cloud data of the target object based on the actual position information determined by the first determination module and preset acquisition position information on the target object;
the second determining module is used for determining, based on the first point cloud data acquired by the first acquisition module, a first tangent point position and a second tangent point position at which connecting lines from the acquisition position are tangent to two opposite sides of the target object, a first length of a first connecting line between the first tangent point position and the acquisition position, a second length of a second connecting line between the second tangent point position and the acquisition position, and a first included angle between the first connecting line and the second connecting line;
a third determining module, configured to determine, based on the first length of the first connecting line, the second length of the second connecting line, and the first included angle between the first connecting line and the second connecting line determined by the second determining module, the chest diameter of the target object at the acquisition position;
wherein the first determining module comprises:
a first determination unit, configured to determine a target area of the target object in the target image;
a calculating unit, configured to respectively obtain a pixel value of each pixel in each color channel of the target region determined by the first determining unit, and calculate a pixel average value in the target region in each color channel based on the obtained pixel values;
a second determining unit, configured to determine a target color feature image of the target image based on the pixel mean value of the target region in each channel calculated by the calculating unit and a preset pixel interval range of each channel;
a third determining unit, configured to determine a target area feature image in the target image based on the target color feature image determined by the second determining unit;
a fourth determining unit, configured to determine, based on the target area feature image determined by the third determining unit, a target object in the target image and actual position information of the target object.
6. The measurement device of claim 5, further comprising:
a fourth determining module, configured to determine an upper boundary position and a lower boundary position of the target object based on the actual position information of the target object determined by the first determining module;
a fifth determining module, configured to determine a second included angle between a third connection line between the acquisition position and the upper boundary position of the target object determined by the fourth determining module and a fourth connection line between the acquisition position and the lower boundary position of the target object determined by the fourth determining module;
a second acquisition module, configured to acquire second point cloud data of the target object based on either the upper boundary or the lower boundary of the target object, as determined by the fourth determining module, and the second included angle;
and the establishing module is used for establishing a three-dimensional stereo image of the target object based on the second point cloud data of the target object acquired by the second acquisition module.
7. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the measurement method of any one of claims 1 to 4.
8. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when executed by a processor, performs the steps of the measurement method according to any one of claims 1 to 4.
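
Illustrative sketches (not part of the claims)

Claims 1 and 2 describe the visual front end: per-channel pixel means over a seed region of the target define a preset interval in each color channel, pixels falling inside every interval form the target color characteristic image, and extreme row and column indices of that image yield the width and height of the target area. Below is a minimal NumPy sketch of this idea only, not the patent's implementation; the interval half-width delta and all function names are illustrative assumptions.

    import numpy as np

    def target_color_mask(image, region, delta=20):
        """Binary mask of pixels whose channels all lie within a preset
        interval around the per-channel means of a seed region."""
        r0, r1, c0, c1 = region                                  # seed region: rows r0:r1, cols c0:c1
        means = image[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)  # pixel mean per color channel
        lo, hi = means - delta, means + delta                    # preset pixel interval per channel
        return np.all((image >= lo) & (image <= hi), axis=2)

    def target_extent(mask):
        """Width and height of the masked target from its extreme column and
        row indices (claim 2 instead differences the centre position against
        the minimum row and column values)."""
        rows, cols = np.nonzero(mask)
        return cols.max() - cols.min(), rows.max() - rows.min()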
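The tangent-point step of claim 1 reduces, for a horizontal slice of the first point cloud data, to finding the two extreme sight lines from the acquisition position. A minimal sketch under that reading, assuming the slice arrives as (x, y) offsets from the acquisition position and that bearings do not wrap around ±π; the function name is an illustrative assumption.

    import math

    def tangent_geometry(points):
        """Return (R1, R2, theta): the lengths of the two extreme sight
        lines from the acquisition position (the origin) to the trunk
        slice, and the included angle between them in radians."""
        polar = [(math.atan2(y, x), math.hypot(x, y)) for x, y in points]
        a1, r1 = min(polar)   # sight line grazing one side of the trunk
        a2, r2 = max(polar)   # sight line grazing the opposite side
        return r1, r2, a2 - a1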
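With those three quantities, claim 3 yields the chest diameter in one line. The formula shown in claim 3 above is itself a reconstruction (the original survives only as an image reference), so the law-of-cosines reading below, which takes the chord between the two tangent points as the diameter at breast height, is an assumption consistent with the claim's inputs rather than a quotation of the patent.

    import math

    def chest_diameter(r1, r2, theta):
        """Chord between the two tangent points, taken as the diameter
        at breast height; theta is in radians."""
        return math.sqrt(r1 * r1 + r2 * r2 - 2.0 * r1 * r2 * math.cos(theta))

    # Worked example: sight lines of 5.02 m and 4.98 m with a 3.5 degree
    # included angle give a chest diameter of about 0.31 m.
    print(round(chest_diameter(5.02, 4.98, math.radians(3.5)), 3))  # 0.308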
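Claim 4's second included angle is the vertical analogue of the first: the angle at the acquisition position between the connecting lines to the upper and lower boundary positions, which bounds the scan range for the second point cloud. A minimal sketch, assuming the three positions are known 3-D points; the function name is an illustrative assumption.

    import math

    def included_angle(acq, upper, lower):
        """Angle in radians at the acquisition position between the third
        connecting line (to the upper boundary) and the fourth connecting
        line (to the lower boundary)."""
        v1 = [u - a for u, a in zip(upper, acq)]
        v2 = [l - a for l, a in zip(lower, acq)]
        dot = sum(p * q for p, q in zip(v1, v2))
        norm = math.dist(upper, acq) * math.dist(lower, acq)
        return math.acos(max(-1.0, min(1.0, dot / norm)))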
CN201910550096.1A 2019-06-24 2019-06-24 Method and device for measuring harvest target Active CN110274549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910550096.1A CN110274549B (en) 2019-06-24 2019-06-24 Method and device for measuring harvest target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910550096.1A CN110274549B (en) 2019-06-24 2019-06-24 Method and device for measuring harvest target

Publications (2)

Publication Number Publication Date
CN110274549A (en) 2019-09-24
CN110274549B (en) 2020-12-29

Family

ID=67961791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910550096.1A Active CN110274549B (en) 2019-06-24 2019-06-24 Method and device for measuring harvest target

Country Status (1)

Country Link
CN (1) CN110274549B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6931501B1 (en) * 2021-05-24 2021-09-08 株式会社マプリィ Single tree modeling system and single tree modeling method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2001275607A1 (en) * 2000-06-27 2002-01-08 Universite Catholique De Louvain Measurement of cylindrical objects through laser telemetry
CN102927921B * 2012-11-13 2016-04-27 北京林业大学 Standing tree diameter at breast height measurement method based on the optical similar-triangles method
CN103162635A (en) * 2013-02-21 2013-06-19 北京林业大学 Method for measuring diameter at breast height and tree height with camera
CN103616015B * 2013-11-29 2015-12-02 浙江农林大学 Laser panoramic scanning device for measuring forest stock volume parameters
CN103791877A (en) * 2014-03-04 2014-05-14 北京林业大学 Method for measuring diameter at breast height of standing tree in fixed point angular distance mode
CN107084672A * 2017-05-08 2017-08-22 北京林业大学 Image acquisition device, system and method for tree breast-height diameter

Also Published As

Publication number Publication date
CN110274549A (en) 2019-09-24

Similar Documents

Publication Publication Date Title
Bao et al. Field‐based robotic phenotyping of sorghum plant architecture using stereo vision
US8537337B2 (en) Method and apparatus for analyzing tree canopies with LiDAR data
Peuhkurinen et al. Preharvest measurement of marked stands using airborne laser scanning
CN110458032A Method, system, cloud server and storage medium for complete monitoring of litchi growth status
Hämmerle et al. Direct derivation of maize plant and crop height from low-cost time-of-flight camera measurements
WO2012063232A1 (en) System and method for inventorying vegetal substance
Röder et al. Application of optical unmanned aerial vehicle-based imagery for the inventory of natural regeneration and standing deadwood in post-disturbed spruce forests
CN113077476A (en) Height measurement method, terminal device and computer storage medium
CN110274549B (en) Method and device for measuring harvest target
CN113128576A (en) Crop row detection method and device based on deep learning image segmentation
CN111474443A (en) Method and device for measuring power transmission line
CN110610438B (en) Crop canopy petiole included angle calculation method and system
CN114548277B (en) Method and system for ground point fitting and crop height extraction based on point cloud data
CN109657540B (en) Withered tree positioning method and system
Smits et al. Individual tree identification using different LIDAR and optical imagery data processing methods
CN115936532A (en) Saline-alkali soil stability assessment method and system based on BP neural network
Korpela et al. Potential of aerial image-based monoscopic and multiview single-tree forest inventory: A simulation approach
Santos et al. Coffee crop coefficient prediction as a function of biophysical variables identified from RGB UAS images.
Wang et al. Automatic estimation of trunk cross sectional area using deep learning
CN113125383A (en) Farming land secondary salinization monitoring and early warning method and system based on remote sensing
Xiang et al. PhenoStereo: a high-throughput stereo vision system for field-based plant phenotyping-with an application in sorghum stem diameter estimation
Ullah et al. Comparing image-based point clouds and airborne laser scanning data for estimating forest heights
CN114913246B (en) Camera calibration method and device, electronic equipment and storage medium
CN114581450A (en) Point cloud image conversion-based corn plant height and stem thickness measuring method and device
CN113111793A (en) Tree identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant