CN106326846B - Parallel forest plant extraction method for UAV images - Google Patents

Parallel forest plant extraction method for UAV images

Info

Publication number
CN106326846B
CN106326846B (granted publication of application CN201610675005.3A; published as CN106326846A)
Authority
CN
China
Prior art keywords
image
detection point
ratio
distance
log
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610675005.3A
Other languages
Chinese (zh)
Other versions
CN106326846A (en)
Inventor
姜浩
李丹
陈水森
刘尉
王重洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Institute of Geography of GDAS
Original Assignee
Guangzhou Institute of Geography of GDAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Institute of Geography of GDAS filed Critical Guangzhou Institute of Geography of GDAS
Priority to CN201610675005.3A priority Critical patent/CN106326846B/en
Publication of CN106326846A publication Critical patent/CN106326846A/en
Application granted granted Critical
Publication of CN106326846B publication Critical patent/CN106326846B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/10 — Terrestrial scenes
    • G06V20/188 — Vegetation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a parallel forest plant extraction method for UAV (unmanned aerial vehicle) images. First, using GPU parallel processing, blob detection based on scale-space techniques is performed on a forest-area image captured by a UAV to obtain detection points; then, in a serial CPU stage, detection points containing no green are deleted, and the remaining detection points are the extraction result. The scale-space technique employed can identify forest canopies of different sizes across multiple scales, giving higher accuracy, and because the object detection is carried out with GPU parallel processing, the extraction method has higher efficiency and speed, providing support for forest plant survey and research.

Description

Parallel forest plant extraction method for UAV images
Technical field
The present invention relates to the technical field of forest plant survey and research, and in particular to a parallel forest plant extraction method for UAV (unmanned aerial vehicle) images.
Background technology
UAV passive optical remote sensing offers considerable advantages over satellite passive optical remote sensing in spatio-temporal resolution, availability, and cost, and is increasingly applied in forest plant survey and research. For plant extraction in forests (mainly orchards, plantations, and the like), digital image processing techniques are mainly used to identify individual target plants in UAV images and to count or analyze them.
However, the accuracy of current digital image processing techniques is generally low, and forest canopies of different sizes are difficult to identify simultaneously. Meanwhile, because the resolution of UAV images is generally high, digital image processing requires a large amount of computation, making the algorithms time-consuming.
Summary of the invention
In view of the shortcomings of the prior art, the object of the present invention is to provide a parallel forest plant extraction method for UAV images, so as to improve the accuracy and speed of image extraction.
To achieve this object, the present invention adopts the following technical scheme:
A parallel forest plant extraction method for UAV images, comprising the steps of:
obtaining a forest-area image captured by a UAV;
performing blob detection based on scale-space techniques on the forest-area image using GPU parallel processing, to obtain initial detection points;
screening the initial detection points and deleting detection points that contain no green;
taking the detection points remaining after screening as the extraction result.
Compared with the prior art, the beneficial effects of the present invention are:
The present invention applies scale-space detection to forest-area images obtained by a UAV, so that forest canopies of different sizes can be identified across multiple scales; because an object-analysis step is added, the accuracy is higher. Meanwhile, the key steps use GPU parallel processing, so the extraction method has higher efficiency and speed, providing support for forest plant survey and research.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the parallel forest plant extraction method for UAV images of the present invention.
Embodiment
The present invention is further illustrated below with reference to an embodiment.
As shown in Fig. 1, the parallel forest plant extraction method for UAV images of the present invention comprises the steps of:
Step s101, obtaining a forest-area image captured by a UAV;
Step s102, performing blob detection based on scale-space techniques on the forest-area image using GPU parallel processing, to obtain detection points;
Step s103, screening the initial detection points and deleting detection points that contain no green;
Step s104, taking the detection points remaining after screening as the extraction result.
The parallel forest plant extraction algorithm for UAV images is implemented on the PyCUDA platform. CUDA (Compute Unified Device Architecture) is a C-like GPU programming platform developed by NVIDIA, and PyCUDA is a Python wrapper for CUDA. The wrapper provides automatic object cleanup and, combined with Python scientific computing libraries such as NumPy, SciPy, and GDAL, allows complex applications to be developed easily.
The algorithm can be divided into four parts: 1) input; 2) blob (Binary Large Objects, BLOBs) detection based on scale-space techniques; 3) object analysis; 4) output. The image-processing portion can be carried out with GPU parallel processing to obtain improved efficiency; it is the core of the present invention and is implemented with parallel algorithms, while the remainder uses serial programming.
The BLOBs detection part comprises three core procedures:
1) multi-scale LoG (Laplacian of Gaussian) filtering of the original image;
2) local-maximum filtering of the LoG scale-space image;
3) pairwise area-based screening of the detection points.
These steps are described in detail below.
Besides the input grayscale image m_grey, the algorithm has five parameters: min_sigma, max_sigma, num_sigma, threshold, and overlap. (Note: sigma denotes the Gaussian filtering scale; the higher its value, the stronger the blurring.) min_sigma, max_sigma, and num_sigma are the smallest scale, largest scale, and number of scales of the Laplacian-of-Gaussian filtering, respectively; threshold is the gray threshold for BLOB identification; and overlap is the area-overlap threshold for BLOB screening.
The technical content is described below following the algorithm's processing steps:
(1) Input: the UAV visible-light RGB (Red, Green, Blue) image is read in via GDAL. The RGB spectrum is converted to a grayscale image using formula 1, where Y denotes the gray value.
Y=0.2125*R+0.7154*G+0.0721*B (formula 1)
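As an illustration only (this helper function is not part of the patent text), formula 1 can be sketched in Python:

```python
def rgb_to_gray(r, g, b):
    """Grayscale conversion of formula 1: Y = 0.2125*R + 0.7154*G + 0.0721*B."""
    return 0.2125 * r + 0.7154 * g + 0.0721 * b
```

In the actual pipeline this conversion would be applied per pixel to the RGB bands read in via GDAL; the coefficients sum to 1, so pure white maps to full brightness.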
(2) Blob (Binary Large Objects, BLOBs) detection based on scale-space techniques:
Preparation step: from the parameters min_sigma, max_sigma, and num_sigma, compute num_sigma equally spaced sigma values, denoted sigma_list.
1) Multi-scale LoG filtering of the original image: for the original image m_grey, LoG filtering is performed with each sigma in sigma_list, yielding the LoG scale-space images. The formula of LoG filtering (the standard Laplacian-of-Gaussian kernel) is:
LoG(x, y) = -(1/(π*σ⁴)) * (1 - (x² + y²)/(2*σ²)) * exp(-(x² + y²)/(2*σ²))
where x and y are the image x- and y-coordinates and σ (sigma) is the Gaussian filtering scale.
The implementation process is:
Because processing UAV images requires large sigma values (up to 50), and the number of threads (Thread) per block (Block) allowed by current CUDA devices is at most 1024 (32*32), a sigma of 8.0 already corresponds to a window size of 32. A 2-D filter therefore cannot meet the demand, and two 1-D LoG filters must be combined so that LoG filtering can be performed at large sigma. The combination, for a given sigma, is:
① apply column-direction LoG filtering to the original image, then row-direction Gaussian filtering, obtaining M_logx;
② apply row-direction LoG filtering to the original image, then column-direction Gaussian filtering, obtaining M_logy;
③ M_log = (M_logx + M_logy) * (-sigma²).
Finally, the filtering is performed for each sigma in sigma_list, and the results are merged into a 3-D image array.
Note: the kernel functions store the filter parameters in constant memory (Constant Memory), and each row or column is indexed into shared memory (Shared Memory) through a 1-D array to speed up computation, i.e. all 1024 threads in each block share one cached piece of image data. Because the filter windows are large, the border handling mode is reflection (Reflect), i.e. M(x - i) = M(x + i - 1), to avoid wasting edge data. Step ③ involves no filtering and operates directly in device global memory (Global Memory).
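For illustration, the two 1-D kernels used in the separable combination above can be constructed in plain Python; the normalization convention and the truncation radius below are assumptions of this sketch, not specified in the text:

```python
import math

def gauss1d(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1 (assumed convention)."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def log1d(sigma, radius):
    """1-D second derivative of the Gaussian (the 1-D 'LoG' factor).

    Combining row/column passes of these kernels as in steps ①-③ yields
    the 2-D LoG response up to the stated -sigma² normalization.
    """
    g = gauss1d(sigma, radius)
    offsets = range(-radius, radius + 1)
    return [gv * ((i * i) / sigma ** 4 - 1.0 / (sigma * sigma))
            for gv, i in zip(g, offsets)]
```

A quick sanity check on such kernels: the Gaussian kernel sums to 1 by construction, and a second-derivative kernel should sum to approximately zero.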
2) Local-maximum filtering of the LoG scale-space image.
The implementation process is:
① with a window size of 3 (radius 1) pixels, take the maximum of the 3-D LoG image array along the x, y, and z directions in turn, generating a maximum cube;
② compare the maximum cube with the LoG cube; pixels satisfying formula 2 are potential BLOBs:
Image_LoG(x, y, z) = Image_max(x, y, z) and Image_LoG(x, y, z) > Threshold (formula 2)
③ output the x- and y-coordinates of all potential BLOBs, and the sigma values corresponding to their z-axis positions.
Note: each row or column is indexed into shared memory (Shared Memory) through a 1-D array to speed up computation, i.e. all 1024 threads in each block share one cached piece of image data. For the z-axis, since the number of scales is less than 1024, threads are allocated jointly with the x-axis up to 1024 threads, and shared memory is indexed through a 2-D array. To prevent the reduced window pixel count at the image borders from causing too many pixels to be selected, the border handling mode is constant (Constant), i.e. M(x - i) = 0.
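The effect of steps ① and ② can be illustrated with a direct (CPU, pure-Python) neighborhood check; the function below is a hypothetical reference version, not the GPU code:

```python
def is_potential_blob(vol, x, y, z, threshold):
    """True if vol[z][y][x] exceeds threshold and is the maximum of its
    3x3x3 neighborhood (out-of-range neighbors treated as 0, matching the
    constant border handling described in the note)."""
    v = vol[z][y][x]
    if v <= threshold:
        return False
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                zz, yy, xx = z + dz, y + dy, x + dx
                if 0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx:
                    if vol[zz][yy][xx] > v:
                        return False
    return True
```

Separable max filtering along x, y, and z, as in step ①, produces the same 3×3×3 maximum that this direct check compares against.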
3) Pairwise area-based screening of the detection points.
The potential BLOBs are input as a Python list, each element containing the three values x, y, and sigma. This step forms all pairwise combinations of the BLOBs and compares their areas: if the degree of overlap of two BLOBs is less than the threshold overlap, the BLOB with the smaller area is deleted. This step is the difficult point of the GPU implementation: because the number of BLOBs is unknown in advance, a 2-D array cannot be used to index the two BLOB elements of a combination from the thread index. The present invention instead uses a 1-D array to index the combination sequence number, and then recovers the two BLOB elements from that sequence number. The principle is shown in code 1.
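A plausible pure-Python sketch of this index-to-pair recovery (illustrative, with hypothetical names) is:

```python
def pair_from_index(k, n):
    """Recover the pair (i, j), i < j, that the linear combination index k
    refers to, for n BLOBs enumerated as (0,1), (0,2), ..., (n-2, n-1)."""
    i = 0
    # Subtract the count of pairs led by each index i in turn
    # until k falls inside the block of pairs starting with i.
    while k >= n - 1 - i:
        k -= n - 1 - i
        i += 1
    return i, i + 1 + k
```

Each GPU thread would apply such a mapping to its own 1-D index; the total number of indices is N_BLOBS*(N_BLOBS - 1)/2, as in formula 3 below.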
The implementation process is:
① from the number of BLOBs, N_BLOBS, compute the total number of pairwise combinations N_combination on the CPU side using formula 3:
N_combination = N_BLOBS * (N_BLOBS - 1)/2 (formula 3)
② start a 1-D array of blocks and threads; each thread computes the 1-D sequence number of its combination, and then computes the sequence numbers of the corresponding two BLOBs iteratively;
③ compute the degree of overlap OL using formula 4:
R1 = sigma1 * 2^(1/2), R2 = sigma2 * 2^(1/2)
Distance = ((x1 - x2)² + (y1 - y2)²)^(1/2)
If Distance > R1 + R2:
OL = 0
If Distance ≤ |R1 - R2|:
OL = 1
Otherwise:
Ratio1 = (Distance² + R1² - R2²)/(2.0*Distance*R1)
with Ratio1 = max(Ratio1, -1.0), Ratio1 = min(Ratio1, 1.0)
Acos1 = arccos(Ratio1)
Ratio2 = (Distance² + R2² - R1²)/(2.0*Distance*R2)
with Ratio2 = max(Ratio2, -1.0), Ratio2 = min(Ratio2, 1.0)
Acos2 = arccos(Ratio2)
A = -Distance + R2 + R1
B = Distance - R2 + R1
C = Distance + R2 - R1
D = Distance + R2 + R1
Area = R1²*Acos1 + R2²*Acos2 - 0.5*(|A*B*C*D|)^(1/2)
Amin = min(R1, R2)
OL = Area/(π*Amin²) (formula 4)
In the formula, sigma1 and sigma2 are Gaussian filtering scales 1 and 2; x1, y1 are the coordinates of potential detection point 1 at Gaussian filtering scale 1, and x2, y2 those of potential detection point 2 at scale 2; Distance is the Euclidean distance; R1, R2, Ratio1, Ratio2, Acos1, Acos2, A, B, C, D, Area, and Amin are intermediate variables.
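Formula 4 can be transcribed into a CPU-side reference function as follows (an illustrative sketch, not the patent's GPU kernel; names are hypothetical):

```python
import math

def overlap_degree(x1, y1, sigma1, x2, y2, sigma2):
    """Degree of overlap OL of two blobs (formula 4); R = sigma * sqrt(2)."""
    r1 = sigma1 * math.sqrt(2.0)
    r2 = sigma2 * math.sqrt(2.0)
    dist = math.hypot(x1 - x2, y1 - y2)  # Euclidean Distance
    if dist > r1 + r2:                   # disjoint circles
        return 0.0
    if dist <= abs(r1 - r2):             # one circle contains the other
        return 1.0
    ratio1 = (dist * dist + r1 * r1 - r2 * r2) / (2.0 * dist * r1)
    ratio1 = max(-1.0, min(1.0, ratio1))
    acos1 = math.acos(ratio1)
    ratio2 = (dist * dist + r2 * r2 - r1 * r1) / (2.0 * dist * r2)
    ratio2 = max(-1.0, min(1.0, ratio2))
    acos2 = math.acos(ratio2)
    a = -dist + r2 + r1
    b = dist - r2 + r1
    c = dist + r2 - r1
    d = dist + r2 + r1
    area = (r1 * r1 * acos1 + r2 * r2 * acos2
            - 0.5 * math.sqrt(abs(a * b * c * d)))
    return area / (math.pi * min(r1, r2) ** 2)
```

This is the standard circle-circle intersection area normalized by the smaller circle's area, so OL ranges from 0 (disjoint) to 1 (containment).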
④ on the CPU side, all BLOBs whose OL value is below the overlap threshold are deleted; the remaining BLOBs are the extracted BLOBs, whose color is then analyzed and further screened in the subsequent step.
Note: owing to the symmetry of GPU chips, optimal performance is obtained only when the allocated Blocks*Threads value is an integer power of 2; the maximum Blocks*Threads allowed by a GTX960m graphics card is 2097152 (2^21), so the present invention uses this value as the kernel launch parameter.
(3) Object analysis: the original image is converted from the RGB color space to the CIE-Lab color space; the conversion is given in formula 9. From the BLOB positions and areas extracted in the previous step, the mean of the a channel is computed for every BLOB, and BLOBs whose mean a value is above 0 are deleted (a > 0 indicates absence of green). The remaining points are the detection result.
X = R*0.4124 + G*0.3576 + B*0.1805 (formula 9)
Y = R*0.2126 + G*0.7152 + B*0.0722
Z = R*0.0193 + G*0.1192 + B*0.9505
L = 116*f(Y) - 16
a = 500*(f(X) - f(Y))
b = 200*(f(Y) - f(Z))
where f(x) = x^(1/3) when x > 0.008856, and f(x) = 7.787*x + 16/116 when x ≤ 0.008856; x denotes any of X, Y, Z.
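The screening criterion can be illustrated with a small Python sketch of the a-channel computation (following the formulas as written, with R, G, B assumed normalized to [0, 1]; the usual white-point scaling of X and Z is not shown in the text and is omitted here too):

```python
def f(t):
    """Piecewise function used by the Lab conversion (formula 9)."""
    return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

def a_channel(r, g, b):
    """CIE-Lab a component; a > 0 means no green, so the point is deleted."""
    x = r * 0.4124 + g * 0.3576 + b * 0.1805
    y = r * 0.2126 + g * 0.7152 + b * 0.0722
    return 500.0 * (f(x) - f(y))
```

A pure-green pixel yields a negative a value and is kept; a pure-red pixel yields a positive a value and is deleted.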
(4) Output: the output is the list of BLOBs, each containing its x- and y-coordinates and its radius (note: the radius is sigma).
The above detailed description illustrates possible embodiments of the present invention. These embodiments are not intended to limit the scope of the claims of the present invention; all equivalent implementations or modifications that do not depart from the present invention are intended to fall within the scope of the claims of this case.

Claims (2)

1. A parallel forest plant extraction method for UAV images, characterized by comprising the steps of:
obtaining a forest-area image captured by a UAV;
performing blob detection based on scale-space techniques on the forest-area image using GPU parallel processing, to obtain initial detection points;
screening the initial detection points and deleting detection points that contain no green;
taking the detection points remaining after screening as the extraction result,
wherein the step of obtaining the forest-area image captured by the UAV comprises:
reading in the visible-light RGB image of the forest area captured by the UAV via GDAL, and converting the RGB image to a grayscale image using the following formula, where Y denotes the gray value:
Y = 0.2125*R + 0.7154*G + 0.0721*B
wherein the step of performing blob detection based on scale-space techniques on the forest-area image comprises:
determining the largest scale, smallest scale, and number of scales of the LoG filtering, computing the scales equally spaced between the largest and smallest scale, and determining the identification gray threshold of the detection points and the screening area-overlap threshold of the detection points;
performing multi-scale LoG filtering on the forest-area grayscale image using GPU parallel processing, to obtain the LoG scale-space image;
performing local-maximum filtering on the LoG scale-space image using GPU parallel processing, to obtain potential detection points;
performing pairwise area-based screening on the potential detection points using GPU parallel processing: among the pairwise combinations of potential detection points, if the area overlap is less than the screening area-overlap threshold, the potential detection point with the smaller area is deleted, and the remaining potential detection points are taken as the initial detection points,
wherein the step of performing multi-scale LoG filtering on the forest-area grayscale image comprises: using GPU parallel processing, filtering as follows for each Gaussian filtering scale sigma, and merging the results into a 3-D LoG image array:
applying column-direction LoG filtering to the forest-area grayscale image, then row-direction Gaussian filtering, obtaining M_logx;
applying row-direction LoG filtering to the forest-area grayscale image, then column-direction Gaussian filtering, obtaining M_logy;
M_log = (M_logx + M_logy) * (-sigma²),
wherein the step of performing local-maximum filtering on the LoG scale-space image to obtain potential detection points comprises:
using GPU parallel processing, with a window size of 3 pixels, taking the maximum of the 3-D LoG image array along the x, y, and z directions respectively, and generating a maximum cube;
comparing the maximum cube with the 3-D LoG image array, and taking pixels satisfying the following formula as potential detection points:
Image_LoG(x, y, z) = Image_max(x, y, z) and Image_LoG(x, y, z) > Threshold
where Threshold denotes the detection-point identification gray threshold;
outputting the x- and y-coordinates of all potential detection points, and the sigma values corresponding to the z-axis,
wherein the step of performing pairwise area-based screening on the potential detection points comprises:
using GPU parallel processing, computing from the number of potential detection points, N_BLOBS, the total number of pairwise combinations N_combination with the following formula:
N_combination = N_BLOBS * (N_BLOBS - 1)/2
starting a 1-D array of blocks and threads, each thread computing the 1-D sequence number of its combination and then computing the sequence numbers of its two potential detection points iteratively;
computing the degree of overlap OL with the following formula:
R1 = sigma1 * 2^(1/2), R2 = sigma2 * 2^(1/2)
Distance = ((x1 - x2)² + (y1 - y2)²)^(1/2)
If Distance > R1 + R2:
OL = 0
If Distance ≤ |R1 - R2|:
OL = 1
Otherwise:
Ratio1 = (Distance² + R1² - R2²)/(2.0*Distance*R1)
with Ratio1 = max(Ratio1, -1.0), Ratio1 = min(Ratio1, 1.0)
Acos1 = arccos(Ratio1)
Ratio2 = (Distance² + R2² - R1²)/(2.0*Distance*R2)
with Ratio2 = max(Ratio2, -1.0), Ratio2 = min(Ratio2, 1.0)
Acos2 = arccos(Ratio2)
A = -Distance + R2 + R1
B = Distance - R2 + R1
C = Distance + R2 - R1
D = Distance + R2 + R1
Area = R1²*Acos1 + R2²*Acos2 - 0.5*(|A*B*C*D|)^(1/2)
Amin = min(R1, R2)
OL = Area/(π*Amin²)
where sigma1 and sigma2 denote Gaussian filtering scales 1 and 2; x1, y1 denote the coordinates of potential detection point 1 at Gaussian filtering scale 1, and x2, y2 those of potential detection point 2 at scale 2; Distance denotes the Euclidean distance; and R1, R2, Ratio1, Ratio2, Acos1, Acos2, A, B, C, D, Area, and Amin denote intermediate variables,
wherein the step of screening the initial detection points and deleting detection points that contain no green comprises:
converting the forest-area image captured by the UAV from the RGB color space to the CIE-Lab color space using the following formulas, computing the mean of the a channel for each detection point, and deleting detection points whose mean is above 0,
X = R*0.4124 + G*0.3576 + B*0.1805
Y = R*0.2126 + G*0.7152 + B*0.0722
Z = R*0.0193 + G*0.1192 + B*0.9505
L = 116*f(Y) - 16
a = 500*(f(X) - f(Y))
b = 200*(f(Y) - f(Z))
where f(x) = x^(1/3) when x > 0.008856, and f(x) = 7.787*x + 16/116 when x ≤ 0.008856, x denoting any of the intermediate variables X, Y, Z.
2. The parallel forest plant extraction method for UAV images according to claim 1, characterized by further comprising the step of: after deleting detection points whose a-channel mean is above 0, outputting the coordinates and radius of each remaining detection point.
CN201610675005.3A 2016-08-16 2016-08-16 Parallel forest plant extraction method for UAV images Expired - Fee Related CN106326846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610675005.3A CN106326846B (en) 2016-08-16 2016-08-16 Parallel forest plant extraction method for UAV images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610675005.3A CN106326846B (en) 2016-08-16 2016-08-16 Parallel forest plant extraction method for UAV images

Publications (2)

Publication Number Publication Date
CN106326846A CN106326846A (en) 2017-01-11
CN106326846B true CN106326846B (en) 2018-03-16

Family

ID=57739982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610675005.3A Expired - Fee Related CN106326846B (en) 2016-08-16 2016-08-16 Parallel forest plant extraction method for UAV images

Country Status (1)

Country Link
CN (1) CN106326846B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10586347B1 (en) 2018-09-17 2020-03-10 Datalog, LLC Log scaling system and related methods

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203385417U (en) * 2013-08-28 2014-01-08 中国水利水电科学研究院 Large-scale aviation dynamic acquisition system for vegetation coverage
CN104881865B (en) * 2015-04-29 2017-11-24 北京林业大学 Forest pest and disease monitoring method for early warning and its system based on unmanned plane graphical analysis
CN104834920A (en) * 2015-05-25 2015-08-12 成都通甲优博科技有限责任公司 Intelligent forest fire recognition method and device based on multispectral image of unmanned plane
CN105241423B (en) * 2015-09-18 2017-03-08 北京林业大学 A kind of evaluation method based on unmanned plane photogram to high canopy density Stand Volume
CN105527969B (en) * 2015-12-17 2018-07-06 中国科学院测量与地球物理研究所 A kind of mountain garden belt investigation and monitoring method based on unmanned plane

Also Published As

Publication number Publication date
CN106326846A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
Shi et al. Study on modeling method of forest tree image recognition based on CCD and theodolite
CN110084292B (en) Target detection method based on DenseNet and multi-scale feature fusion
CN108154105B (en) Underwater biological detection and identification method and device, server and terminal equipment
Chen et al. Building extraction from remote sensing images with deep learning in a supervised manner
Huang et al. Local binary patterns and superpixel-based multiple kernels for hyperspectral image classification
CN106294705A (en) Batch remote sensing image preprocessing method
CN111126385A (en) Deep learning intelligent identification method for deformable living body small target
CN112288709A (en) Three-dimensional target detection method based on point cloud
Delibasoglu et al. Improved U-Nets with inception blocks for building detection
Song et al. Detection of maize tassels for UAV remote sensing image with an improved YOLOX model
CN105405138A (en) Water surface target tracking method based on saliency detection
CN110070503A (en) Scale calibration method, system and medium based on convolutional neural networks
CN116912674A (en) Target detection method and system based on improved YOLOv5s network model under complex water environment
Lin et al. A novel approach for estimating the flowering rate of litchi based on deep learning and UAV images
CN106326846B (en) Parallel forest plant extraction method for UAV images
CN110751732B (en) Method for converting 2D image into 3D image
Hu et al. Automatic detection of pecan fruits based on Faster RCNN with FPN in orchard
Wu et al. A Dense Litchi Target Recognition Algorithm for Large Scenes
Zhang et al. Multilevel feature context semantic fusion network for cloud and cloud shadow segmentation
Guo et al. An improved YOLO v4 used for grape detection in unstructured environment
CN116883303A (en) Infrared and visible light image fusion method based on characteristic difference compensation and fusion
CN109299295B (en) Blue printing layout database searching method
CN116630700A (en) Remote sensing image classification method based on introduction channel-space attention mechanism
Sari et al. The Effect of Batch Size and Epoch on Performance of ShuffleNet-CNN Architecture for Vegetation Density Classification
CN113902904B (en) Lightweight network architecture system

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20180316
Termination date: 20200816