CN113421301B - Method and system for positioning central area of field crop - Google Patents


Info

Publication number
CN113421301B
CN113421301B (application CN202110772724.8A)
Authority
CN
China
Prior art keywords
image
central area
component
crop
target crop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110772724.8A
Other languages
Chinese (zh)
Other versions
CN113421301A (en)
Inventor
仇瑞承
何勇
蒋茜静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110772724.8A
Publication of CN113421301A
Application granted
Publication of CN113421301B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/136: Segmentation; Edge detection involving thresholding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30181: Earth observation
    • G06T2207/30188: Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a field crop central area positioning method, which relates to the technical field of automatic crop identification and positioning and comprises the following steps: processing an original color image of a target crop to obtain an original grayscale image; binarizing the original grayscale image to obtain a first binary image and a second binary image; performing a logical AND operation on the first and second binary images to generate a third binary image, which contains the potential central-area pixel points of the target crop; performing cluster analysis on the potential central-area pixel points to obtain a clustering result of the central-area pixel points; determining the central-area range from the clustering result to obtain the central area of the target crop; and performing field mechanized operation, crop phenotype monitoring and agricultural machinery navigation according to the central area of the target crop. The invention can accurately and quickly obtain the central area of each crop in the field, providing help for field mechanized operation, crop phenotype monitoring and agricultural machinery navigation.

Description

Method and system for positioning central area of field crop
Technical Field
The invention relates to the technical field of automatic identification and positioning of crops, in particular to a field crop center area positioning method and system.
Background
Automatic crop identification and positioning technology based on machine vision helps realize automatic crop detection, quickly acquires the field distribution of crops, and provides help for field mechanized operation, crop phenotype monitoring and agricultural machinery navigation. Current research mostly identifies and positions crops automatically using two-dimensional image information or three-dimensional point cloud data. Two-dimensional crop images include color images, thermal infrared images, spectral images and the like, carrying the color, texture, temperature and spectral information of the crop; three-dimensional point cloud data carry the three-dimensional position and spatial information of the crop; crop identification and positioning can be realized by exploiting the specificity and distribution characteristics of this information. However, the sensors and devices for acquiring thermal infrared images, spectral images and point cloud data of crops are expensive and difficult to popularize. Color images of crops are far easier to obtain, so color-image-based crop identification and positioning has been a research focus.
Crop identification and positioning methods based on color images generally use the color image to calculate the color components and texture information of the crop, segment and extract the crop, then detect the contour and connected region of the crop, and calculate the centroid of the connected region to locate the crop central area. Locating the crop central area can assist field mechanized operation, crop phenotype monitoring (measurement) and agricultural machinery navigation. However, the connected-region centroid method struggles to accurately position crops with irregular shapes. In addition, as crops grow, leaves of adjacent crops overlap and cross, and calculating the centroid of a connected region can no longer locate the individual plants. Some research locates the crop central area by skeleton extraction, detecting and judging cross points in the crop skeleton, but this method has low efficiency and poor accuracy. With the development of machine vision, deep learning techniques (such as Fast R-CNN and Mask R-CNN) are widely applied to crop detection and positioning with good results; however, deep learning requires a large number of labeled samples and is labor-intensive.
In summary, there is a need in the art for a method that can accurately and quickly obtain the central area of each crop in a field, thereby providing assistance for field mechanization, crop phenotype monitoring and agricultural machinery navigation.
Disclosure of Invention
The invention aims to provide a field crop central area positioning method and system, which can accurately and quickly obtain the central area of each crop in a field, thereby providing help for field mechanized operation, crop phenotype monitoring and agricultural machinery navigation.
In order to achieve the purpose, the invention provides the following scheme:
a field crop central area locating method, the method comprising:
acquiring an original color image of a target crop; the original color image is a color image of the target crop obtained by vertically shooting the canopy of the target crop by a color camera;
processing the original color image to obtain an original gray image of the target crop; the original grayscale image includes a first grayscale image and a second grayscale image;
carrying out binarization processing on the original gray level image to obtain a binary image of the target crop; the binary image comprises a first binary image and a second binary image;
performing logical AND operation on the first binary image and the second binary image, eliminating the interference of background noise, and generating a third binary image; the third binary image comprises potential central region pixel points of the target crop;
carrying out clustering analysis on the potential central region pixel points of the target crops to obtain a clustering result of the central region pixel points of the target crops;
determining the central area range of the target crop according to the clustering result to obtain the central area of the target crop;
and performing field mechanized operation, crop phenotype monitoring and agricultural machinery navigation according to the central area of the target crop.
Optionally, the processing the original color image to obtain an original gray-scale image of the target crop specifically includes:
extracting R components, G components and B components of each pixel point of the original color image in an RGB color space;
determining an S component of the original color image in an HSV color space according to the R component, the G component and the B component;
obtaining a first gray image of the target crop according to the S component;
and carrying out graying processing on the original color image by adopting the R component, the G component and the B component to obtain a second gray image of the target crop.
Optionally, the determining, according to the R component, the G component, and the B component, an S component of the original color image in an HSV color space specifically includes:
determining a gray value of the R component, a gray value of the G component and a gray value of the B component;
and obtaining the S component of each pixel point in the HSV color space according to the gray value of the R component, the gray value of the G component and the gray value of the B component.
Optionally, the performing graying processing on the original color image by using the R component, the G component, and the B component to obtain a second grayscale image of the target crop specifically includes:
and carrying out graying processing on the original color image by adopting the gray value of the R component, the gray value of the G component and the gray value of the B component to obtain a second gray image of the target crop.
Optionally, the binarizing the original grayscale image to obtain a binary image of the target crop specifically includes:
acquiring a preset segmentation threshold;
carrying out segmentation processing on the first gray-scale image by adopting the preset segmentation threshold value to obtain a first binary image of a target crop;
determining a conversion threshold value of the second gray level image by using a maximum inter-class variance method;
and converting the second gray image according to the conversion threshold value to obtain a second binary image of the target crop.
Optionally, the preset segmentation threshold is 0.95.
Optionally, the binarizing the original grayscale image to obtain a binary image of the target crop, and then the method further includes:
and carrying out morphological on operation denoising processing on the first binary image and the second binary image.
Optionally, the performing cluster analysis on the potential central area pixel points of the target crop to obtain a cluster result of the target crop central area pixel points specifically includes:
extracting the coordinates of the pixel points in the central area in the third binary image, and clustering the coordinates of the pixel points in the central area by applying a DBSCAN clustering algorithm to obtain the classified coordinates of the pixel points in the central area; the coordinates of each type of the pixel points of the central area correspond to the central area of each crop; the central area is the plane distribution of the central area pixel points.
Optionally, the determining the central area range of the target crop according to the clustering result to obtain the central area of the target crop specifically includes:
respectively counting the coordinates of each class of central-area pixel points to obtain the maximum value X_max in the row coordinate direction, the maximum value Y_max in the column coordinate direction, the minimum value X_min in the row coordinate direction, and the minimum value Y_min in the column coordinate direction;
according to X_max and X_min, determining the length of the central-area pixel points in the row coordinate direction by the formula X_length = X_max - X_min;
according to Y_max and Y_min, determining the length of the central-area pixel points in the column coordinate direction by the formula Y_length = Y_max - Y_min;
drawing a rectangular frame using X_min, Y_min, X_max, Y_max, X_length and Y_length; the rectangular frame is the central area of each target crop.
The invention also provides the following scheme:
a field crop central area positioning system, the system comprising:
the color image acquisition module is used for acquiring an original color image of the target crop; the original color image is a color image of the target crop obtained by vertically shooting the canopy of the target crop by a color camera;
the color image processing module is used for processing the original color image to obtain an original gray image of the target crop; the original grayscale image includes a first grayscale image and a second grayscale image;
the gray level image processing module is used for carrying out binarization processing on the original gray level image to obtain a binary image of the target crop; the binary image comprises a first binary image and a second binary image;
the third binary image generation module is used for performing logical AND operation on the first binary image and the second binary image, eliminating the interference of background noise and generating a third binary image; the third binary image comprises potential central region pixel points of the target crop;
the cluster analysis module is used for carrying out cluster analysis on the potential central region pixel points of the target crops to obtain a cluster result of the central region pixel points of the target crops;
the central area determining module is used for determining the central area range of the target crop according to the clustering result so as to obtain the central area of the target crop;
and the central area application module is used for performing field mechanized operation, crop phenotype monitoring and agricultural machinery navigation according to the central area of the target crop.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses a field crop central area positioning method and system, which are used for processing an original color image of a target crop to obtain an original gray image, carrying out binarization processing on the original gray image to obtain a binary image, carrying out logical AND operation on the binary image to generate a new binary image, wherein the new binary image comprises potential central area pixel points of the target crop, carrying out cluster analysis on the potential central area pixel points of the target crop to obtain a cluster result of the central area pixel points, and determining a central area range according to the cluster result, so that the central area of each crop in a field is accurately and quickly obtained, and help is provided for field mechanized operation, crop phenotype monitoring and agricultural machinery navigation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a flowchart of a first embodiment of a method for locating a central area of a field crop according to the present invention;
FIG. 2 is a schematic flow chart of a second embodiment of a method for locating a central area of a field crop according to the present invention;
FIG. 3 is a schematic overall flow chart of a second embodiment of a method for locating a central area of a field crop according to the present invention;
FIG. 4 is a schematic view of a first gray scale image of a target crop according to a second embodiment of the method for locating a central area of a field crop of the present invention;
FIG. 5 is a schematic diagram of a denoised first binary image according to a second embodiment of the method for locating a central area of a field crop of the present invention;
FIG. 6 is a schematic second grayscale image of a target crop according to a second embodiment of the method for locating a central area of a field crop of the present invention;
FIG. 7 is a schematic diagram of a second denoised binary image according to a second embodiment of the method for locating a central region of a field crop of the present invention;
FIG. 8 is a schematic diagram illustrating a result of extracting pixel points in a center area of a target crop in a second embodiment of the method for locating a center area of a field crop of the present invention;
FIG. 9 is a schematic diagram illustrating a clustering result of pixel points in a potential center area of a target corn in a second embodiment of the method for locating a center area of a field crop according to the present invention;
FIG. 10 is a schematic diagram showing the center area positioning result of target corn in the second embodiment of the field crop center area positioning method according to the present invention;
fig. 11 is a block diagram of a third embodiment of a field crop center area positioning system of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a field crop central area positioning method and system, which can accurately and quickly obtain the central area of each crop in a field, thereby providing help for field mechanized operation, crop phenotype monitoring and agricultural machinery navigation.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Example one
FIG. 1 is a flowchart of a first embodiment of a method for locating a central area of a field crop according to the present invention. Referring to fig. 1, the method for positioning the central area of the field crop comprises the following steps:
step 101: acquiring an original color image of a target crop; the original color image is a color image of the target crop obtained by vertically shooting the canopy of the target crop by a color camera.
Step 102: processing the original color image to obtain an original gray image of the target crop; the original grayscale image includes a first grayscale image and a second grayscale image.
The step 102 specifically includes:
extracting the R (Red), G (Green) and B (Blue) components of each pixel point of the original color image in the RGB color space; that is, the R, G and B components of the RGB color space are extracted from each pixel point to obtain the R, G and B components of the original color image.
Determining an S component of the original color image in an HSV (Hue, Saturation, Value) color space from the R component, the G component, and the B component.
And obtaining a first gray image of the target crop according to the S component.
And carrying out graying processing on the original color image by adopting the R component, the G component and the B component to obtain a second gray image of the target crop.
Determining the S component of the original color image in the HSV color space according to the R, G and B components, and obtaining the first grayscale image of the target crop from the S component, specifically includes: determining the gray value of the R component, the gray value of the G component and the gray value of the B component; obtaining the S component of each pixel point in the HSV color space from these gray values; and obtaining the first grayscale image of the target crop from the S component. That is, the S component of the original color image in the HSV color space is calculated to obtain the first grayscale image. Specifically: the R, G and B components of the RGB color space are extracted from each pixel point of the color image, and the H, S and V components of each pixel point in the HSV color space are obtained from these three components by the standard conversion formula. Taking the S component of each pixel point as its pixel value yields the first grayscale image.
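The description invokes the "standard conversion formula" without writing it out. A minimal sketch of the conventional RGB-to-HSV saturation definition, assuming normalised inputs (the function name is illustrative, not from the patent):

```python
def hsv_saturation(r, g, b):
    """HSV S component of one pixel from normalised RGB values in
    [0, 1]: S = (max - min) / max, with S = 0 for a black pixel."""
    v = max(r, g, b)
    if v == 0:
        return 0.0
    return (v - min(r, g, b)) / v

# A vivid green leaf pixel is strongly saturated; a grey soil
# pixel is not, so thresholding S helps separate crop centre
# pixels from the background.
leaf = hsv_saturation(0.1, 0.8, 0.1)    # about 0.875
soil = hsv_saturation(0.5, 0.5, 0.45)   # about 0.1
```

Mapping this function over every pixel gives a grayscale image whose pixel values are the S components, i.e. the first grayscale image.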
Performing graying processing on the original color image by using the R, G and B components to obtain the second grayscale image of the target crop specifically includes: graying the original color image using the gray values of the R, G and B components. That is, the components of the original color image in the RGB color space are used to gray the image. Specifically, the second grayscale image is calculated by the formula I_gray(i,j) = G(i,j)*1.262 - R(i,j)*0.884 - B(i,j)*0.311, where i and j are the row and column coordinates of a pixel point; G(i,j), R(i,j) and B(i,j) are the gray values of the G, R and B components of the pixel point at position (i,j) of the original color image; and I_gray(i,j) is the gray value of the pixel point at position (i,j) of the second grayscale image.
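The weighted combination can be sketched per pixel; the function names and the list-of-tuples image layout are illustrative, not from the patent:

```python
def second_gray_value(r, g, b):
    """Patent's weighted graying for one pixel:
    I_gray = G*1.262 - R*0.884 - B*0.311.
    The positive weight on G and negative weights on R and B make
    green vegetation bright and non-green background dark."""
    return g * 1.262 - r * 0.884 - b * 0.311

def to_second_gray(image):
    # image: list of rows, each row a list of (R, G, B) tuples
    return [[second_gray_value(r, g, b) for (r, g, b) in row]
            for row in image]

green_leaf = second_gray_value(40, 180, 30)   # strong response
grey_soil = second_gray_value(100, 100, 100)  # weak response
```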
Step 103: carrying out binarization processing on the original gray level image to obtain a binary image of the target crop; the binary image includes a first binary image and a second binary image.
The step 103 specifically includes:
and acquiring a preset segmentation threshold value. The preset segmentation threshold is 0.95.
And carrying out segmentation processing on the first gray image by adopting the preset segmentation threshold value to obtain a first binary image of the target crop.
And determining the conversion threshold of the second grayscale image by using the maximum inter-class variance (Otsu) method.
And converting the second gray image according to the conversion threshold value to obtain a second binary image of the target crop.
The method specifically comprises the following steps: setting a segmentation threshold, and performing segmentation processing on the first gray level image by using the segmentation threshold to obtain a first binary image of a target crop; in the first binary image, white pixel points are potential crop center area pixel points, the gray value is 1, black pixel points are background, and the gray value is 0.
Calculating a conversion threshold value of the second gray image by using a maximum inter-class variance method, and converting the second gray image according to the conversion threshold value to obtain a second binary image of the target crop; and white pixel points in the second binary image are crops, the gray value is 1, black pixel points are backgrounds, and the gray value is 0.
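The "maximum inter-class variance method" is Otsu's method. A self-contained histogram-based sketch for 8-bit gray levels (function names are illustrative):

```python
def otsu_threshold(gray):
    """Maximum inter-class variance (Otsu) threshold for a flat
    sequence of integer gray levels in 0..255."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(level * count for level, count in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = 0      # background weight (levels <= t)
    sum_b = 0.0  # background gray-level sum
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                # background mean
        m_f = (sum_all - sum_b) / w_f    # foreground mean
        between = w_b * w_f * (m_b - m_f) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

def binarize(gray_img, t):
    # foreground (gray value 1) where the level exceeds the threshold
    return [[1 if v > t else 0 for v in row] for row in gray_img]
```

On the embodiment's second grayscale image this procedure is what produces the threshold of 139 reported in Example two.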
Step 104: performing logical AND operation on the first binary image and the second binary image, eliminating the interference of background noise, and generating a third binary image; the third binary image comprises potential central region pixel points of the target crop.
And the gray value of the pixel point of the potential central area of the target crop in the third binary image is 1.
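Step 104's pixel-wise AND can be sketched directly (pure-Python lists for illustration):

```python
def logical_and(bin_a, bin_b):
    """Pixel-wise AND of two equally sized binary images: a pixel
    is kept (1) only if it is foreground in both, which removes
    noise present in only one of the two segmentations."""
    return [[a & b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(bin_a, bin_b)]

third = logical_and([[1, 0], [1, 1]],
                    [[1, 1], [0, 1]])
# third == [[1, 0], [0, 1]]
```

Only pixels that are both highly saturated (first binary image) and classified as crop (second binary image) survive as potential central-area pixels.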
Step 105: and carrying out clustering analysis on the potential central region pixel points of the target crops to obtain a clustering result of the central region pixel points of the target crops.
The step 105 specifically includes:
extracting the pixel point coordinates of the central area in the third binary image, and Clustering all the pixel point coordinates of the central area by applying a Density-Based Spatial Clustering of applications without noise (DBSCAN) Clustering algorithm Based on Density Clustering to obtain the classified pixel point coordinates of the central area; the coordinates of each type of the pixel points of the central area correspond to the central area of each crop; the central area is the plane distribution of the central area pixel points.
By adopting the DBSCAN algorithm to perform clustering operation on the pixel points in the potential center area of the target crop, the center area pixel points of each plant of the target crop can be obtained.
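In practice one would call a library implementation (e.g. scikit-learn's `DBSCAN`); a compact O(n^2) sketch of the algorithm itself is shown below, with illustrative `eps` and `min_pts` values, since the patent does not state its parameters:

```python
def dbscan(points, eps, min_pts):
    """Naive O(n^2) DBSCAN over (row, col) pixel coordinates.
    Returns a label per point: a cluster id, or -1 for noise."""
    n = len(points)
    labels = [None] * n

    def neighbours(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1              # provisionally noise
            continue
        labels[i] = cluster             # i is a core point
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster     # border point rescued from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbours(j)
            if len(more) >= min_pts:    # j is also a core point: expand
                queue.extend(more)
        cluster += 1
    return labels

# Two plants' centre pixels plus one stray noise pixel.
pts = [(0, 0), (0, 1), (1, 0),
       (10, 10), (10, 11), (11, 10),
       (50, 50)]
labels = dbscan(pts, eps=2.0, min_pts=3)   # -> [0, 0, 0, 1, 1, 1, -1]
```

Because DBSCAN needs no preset cluster count and discards sparse noise points, it suits images containing an unknown number of plants.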
Step 106: and determining the central area range of the target crop according to the clustering result so as to obtain the central area of the target crop.
The step 106 specifically includes:
Respectively counting the coordinates of each class of central-area pixel points to obtain the maximum value X_max in the row coordinate direction, the maximum value Y_max in the column coordinate direction, the minimum value X_min in the row coordinate direction, and the minimum value Y_min in the column coordinate direction.
According to X_max and X_min, determining the length of the central-area pixel points in the row coordinate direction by the formula X_length = X_max - X_min.
According to Y_max and Y_min, determining the length of the central-area pixel points in the column coordinate direction by the formula Y_length = Y_max - Y_min.
Drawing a rectangular frame using X_min, Y_min, X_max, Y_max, X_length and Y_length; the rectangular frame is the central area of each target crop.
Step 106 calculates the maximum and minimum row and column coordinates of the central-area pixel points in each class obtained in step 105, then calculates the lengths of the central area in the row and column coordinate directions, generates an enclosing rectangle of the central-area pixel points of the target crop, and finally locates the central area of the target crop.
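Step 106 reduces to a per-cluster min/max; a sketch of the X_min/X_max/Y_min/Y_max construction (the dictionary layout is illustrative):

```python
def cluster_bounding_box(coords):
    """Axis-aligned rectangle around one cluster of (row, col)
    pixel coordinates: X_min/Y_min plus the side lengths
    X_length = X_max - X_min and Y_length = Y_max - Y_min."""
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return {"x_min": x_min, "y_min": y_min,
            "x_length": x_max - x_min, "y_length": y_max - y_min}

box = cluster_bounding_box([(2, 3), (5, 9), (4, 1)])
# box == {"x_min": 2, "y_min": 1, "x_length": 3, "y_length": 8}
```

Running this over each DBSCAN cluster draws one rectangular frame per plant.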
Step 107: and performing field mechanized operation, crop phenotype monitoring and agricultural machinery navigation according to the central area of the target crop.
Further, after step 103, the interference of ground noise in the first and second binary images is removed by a morphological opening operation, which specifically includes:
performing morphological opening (erosion followed by dilation) with square structuring elements on the first binary image and the second binary image to remove ground noise interference.
Example two
Fig. 2 is a schematic flow chart of a second embodiment of the method for positioning the central area of field crops according to the present invention. Fig. 3 is a schematic overall flow chart of a second field crop center area positioning method according to an embodiment of the present invention. In the second embodiment, corn is selected as a research object, and the corn central area is positioned by utilizing the corn color image in the field environment. Referring to fig. 2, the method for locating a central area of a field crop generally comprises:
step 1: and acquiring a gray image and a binary image of the corn crop based on the original color image of the corn crop.
Step 2: and obtaining the coordinates of the central area of the corn crop based on the binary image of the corn crop.
Referring to fig. 3, the method for positioning the central area of the field crop specifically comprises the following steps:
and adjusting the position of the color camera to enable the camera to vertically shoot the canopy of the corn crop and collect the color image of the corn crop. The color image contains a plurality of corn crops.
The gray values of the red (R), green (G) and blue (B) components of the corn color image in the RGB color space are used to calculate the gray value of the S component in the HSV color space, giving the first grayscale image shown in FIG. 4.
The color image of the corn crop is converted to grayscale using the following formula to generate the second gray image shown in fig. 6:
I_gray(i,j) = 1.262·G(i,j) − 0.884·R(i,j) − 0.311·B(i,j)
where i, j are the row and column coordinates of a pixel; G(i,j), R(i,j), and B(i,j) are the gray values of the G, R, and B color components of the pixel at image position (i,j), respectively; and I_gray(i,j) is the gray value of the pixel at position (i,j) in the converted image.
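A minimal sketch of this conversion (coefficients taken from the formula above; the function name and sample pixels are ours):

```python
import numpy as np

def gray_transform(rgb):
    """I_gray = 1.262*G - 0.884*R - 0.311*B, applied per pixel.
    Green vegetation maps to large positive values, soil to values
    near or below zero, which separates crop from background."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    return 1.262 * g - 0.884 * r - 0.311 * b

leaf = np.array([[[30, 180, 40]]], dtype=np.uint8)   # green leaf pixel
soil = np.array([[[120, 100, 80]]], dtype=np.uint8)  # brownish soil pixel
print(round(gray_transform(leaf)[0, 0], 2))  # → 188.2  (large: vegetation)
print(round(gray_transform(soil)[0, 0], 2))  # → -4.76  (small: background)
```

The coefficients emphasize the green channel against red and blue, in the spirit of excess-green vegetation indices.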
A segmentation threshold TH = 0.95 is set, and the first gray image is segmented to generate a first binary image containing the potential corn central area pixel points; in the first binary image, the white pixel points (gray value 1) are potential crop central area pixel points, and the black pixel points (gray value 0) are background.
A segmentation threshold of 139 is calculated using the maximum between-class variance (Otsu) method, and the second gray image is segmented and converted into a second binary image containing the corn crops; in the second binary image, the white pixel points (gray value 1) are corn crops, and the black pixel points (gray value 0) are background.
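The maximum between-class variance (Otsu) method can be sketched as follows (our implementation, not the patent's; the threshold of 139 reported above depends on the actual image histogram, so this example only checks that a sensible threshold is found for a synthetic bimodal image):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the gray level maximising the between-class
    variance of the foreground/background split of an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to level k
    mu = np.cumsum(p * np.arange(256))      # cumulative mean up to level k
    mu_t = mu[-1]                           # global mean
    denom = omega * (1.0 - omega)
    sigma_b2 = np.where(denom > 0, (mu_t * omega - mu) ** 2 / denom, 0.0)
    return int(np.argmax(sigma_b2))

# A synthetic bimodal image: dark background (10) and bright crop (200).
img = np.concatenate([np.full(500, 10), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
print(10 <= t < 200)  # → True: the threshold separates the two modes
```

The first binary image, by contrast, uses the fixed threshold TH = 0.95 on the normalized S channel, so no histogram analysis is needed there.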
A morphological opening operation with square structuring elements is then performed on the first binary image and the second binary image to remove the interference of ground noise points. In this embodiment, square structuring elements of 2 pixels and 5 pixels are selected for the opening operation on the first and second binary images, respectively, to eliminate fine noise; the results are shown in fig. 5 (the denoised first binary image) and fig. 7 (the denoised second binary image).
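A pure-NumPy sketch of binary opening with a square structuring element (a real pipeline would more likely call a library routine such as OpenCV's `cv2.morphologyEx`; sizes and names here are illustrative):

```python
import numpy as np

def erode(mask, k):
    """Binary erosion with a k×k square structuring element (zero padding)."""
    h, w = mask.shape
    padded = np.pad(mask, k // 2, mode="constant")
    out = np.ones((h, w), dtype=np.uint8)
    for di in range(k):
        for dj in range(k):
            out &= padded[di:di + h, dj:dj + w]
    return out

def dilate(mask, k):
    """Binary dilation with a k×k square structuring element."""
    h, w = mask.shape
    padded = np.pad(mask, k // 2, mode="constant")
    out = np.zeros((h, w), dtype=np.uint8)
    for di in range(k):
        for dj in range(k):
            out |= padded[di:di + h, dj:dj + w]
    return out

def opening(mask, k):
    """Opening = erosion then dilation: removes blobs smaller than k×k
    (the ground noise points) while preserving larger regions."""
    return dilate(erode(mask, k), k)

noisy = np.zeros((7, 7), dtype=np.uint8)
noisy[1, 1] = 1          # isolated noise pixel
noisy[3:6, 3:6] = 1      # 3×3 crop region
clean = opening(noisy, 3)
print(clean[1, 1], clean[3:6, 3:6].sum())  # → 0 9
```

The erosion step deletes any foreground region that cannot contain the structuring element, and the dilation step restores the surviving regions to roughly their original size.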
A logical AND operation is performed on the two processed binary images to eliminate the interference of background noise and generate a third binary image in which the potential central area pixel points have a gray value of 1. Specifically: first, the logical AND of the processed first and second binary images is computed, retaining only the pixel points whose gray value is 1 in both images, which generates the third binary image in which the corn central area pixel points have a gray value of 1. Then, the pixel points with gray value 1 are extracted to obtain the connected corn central area pixel points, and dilation is applied; the result, overlaid on the corn crop image, is shown in fig. 8.
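On binary arrays the logical AND is a single element-wise operation; a toy example (the arrays are illustrative, not the patent's data):

```python
import numpy as np

# mask_s: potential centre pixels from the S-channel threshold;
# mask_g: crop pixels from the vegetation-index threshold.
mask_s = np.array([[1, 1, 0],
                   [0, 1, 0],
                   [1, 0, 0]], dtype=np.uint8)
mask_g = np.array([[0, 1, 0],
                   [0, 1, 1],
                   [1, 0, 0]], dtype=np.uint8)

# A pixel survives only where both masks agree, suppressing noise that
# appears in just one of them; the result is the third binary image.
third = mask_s & mask_g
print(third.tolist())  # → [[0, 1, 0], [0, 1, 0], [1, 0, 0]]
```

Combining the two cues this way is what removes both non-crop background (absent from the vegetation mask) and saturated leaf areas (absent from the S-channel mask).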
The coordinates of the central area pixel points in the third binary image are extracted, and the DBSCAN clustering algorithm is applied to cluster them, giving the planar distribution of the corn central area pixel points. Specifically: after the central area pixel points are obtained, all of them are clustered with the DBSCAN algorithm, which determines the number of classes (the number of corn plants) and assigns each pixel point to a class, where each class of central area pixel points corresponds to the central area of one corn plant. Because there are gaps between corn plants, the central area pixel points separate clearly, and because the pixel density is highest at the centre of each plant, the DBSCAN algorithm can automatically group the extracted pixel points by density; this automatically identifies the number of corn plants in the image and partitions the pixel points into one class per plant. Cluster analysis of the central area pixel points of fig. 8 with the DBSCAN algorithm gives the result shown, overlaid on the corn crop image, in fig. 9: three classes of pixel points in total, each class corresponding to one corn plant and drawn with a distinct marker shape to distinguish the classes. In this embodiment, the search radius is set to 20 pixels and the minimum number of points is set to 30; the three detected classes correspond to the 3 corn plants in fig. 9.
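A minimal O(n²) DBSCAN sketch (our implementation, fine for a few thousand centre pixels; production code would more likely call `sklearn.cluster.DBSCAN` with `eps=20`, `min_samples=30` as in this embodiment):

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id; -1 marks noise.  Core points
    have at least min_pts neighbours within radius eps (self included)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nbrs = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(nbrs[i]) < min_pts:
            continue                     # already assigned, or not a core point
        labels[i] = cluster
        queue = list(nbrs[i])
        while queue:                     # grow the cluster outwards
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(nbrs[j]) >= min_pts:
                    queue.extend(nbrs[j])
        cluster += 1
    return labels

# Two dense 3×3 pixel blobs far apart: DBSCAN finds two clusters,
# i.e. two plants, without being told the number of plants in advance.
a = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float)
pts = np.vstack([a, a + 100.0])
labels = dbscan(pts, eps=2.0, min_pts=4)
print(len(set(labels.tolist())))  # → 2
```

Not needing the cluster count up front is exactly why a density-based method suits this step: the number of plants per image varies.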
The central area pixel points of each class are then counted to obtain their maximum values X_max, Y_max and minimum values X_min, Y_min in the row and column coordinate directions, after which the lengths X_length, Y_length of the central area pixel points in the row and column coordinate directions are calculated as:
X_length = X_max − X_min
Y_length = Y_max − Y_min
A rectangular frame is drawn using X_min, Y_min, X_max, Y_max, X_length, and Y_length, which finally locates the central area of the crop. Referring to fig. 10, the central areas of the 3 corn plants in this example were detected; the maximum and minimum values (X_max, Y_max, X_min, Y_min) of their central area pixel points were (232, 385, 221, 366), (207, 603, 195, 577), and (220, 101, 201, 85), respectively, and the lengths (X_length, Y_length) in the row and column coordinate directions were (11, 19), (12, 26), and (19, 16), respectively.
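The rectangle computation for one class of centre pixels is a direct min/max reduction; a sketch reproducing the first plant's numbers above (the point list itself is illustrative — only its extremes matter):

```python
import numpy as np

def center_box(points):
    """Axis-aligned box around one class of centre pixels:
    (X_min, Y_min, X_length, Y_length), with X_length = X_max - X_min
    and Y_length = Y_max - Y_min as in the embodiment."""
    x, y = points[:, 0], points[:, 1]
    x_min, x_max = int(x.min()), int(x.max())
    y_min, y_max = int(y.min()), int(y.max())
    return x_min, y_min, x_max - x_min, y_max - y_min

# Extremes of the first plant: X_max=232, Y_max=385, X_min=221, Y_min=366.
pts = np.array([[221, 366], [225, 371], [232, 385]])
print(center_box(pts))  # → (221, 366, 11, 19)
```

One such box is computed per DBSCAN class, so each detected plant gets its own central-area rectangle.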
The disclosed method for positioning the central area of field crops is a machine-vision-based crop central area positioning method for the field environment. It can effectively eliminate the interference caused by the irregular shapes and overlapping leaves of field crops, that is, the interference of the crop's own shape and of the leaves of adjacent crops, and improves both the precision and the speed of crop central area positioning.
Embodiment Three
Fig. 11 is a block diagram of a third embodiment of a field crop center area positioning system of the present invention. Referring to fig. 11, the field crop center area positioning system includes:
a color image acquisition module 1101, configured to acquire an original color image of a target crop; the original color image is a color image of the target crop obtained by vertically shooting the canopy of the target crop by a color camera.
The color image processing module 1102 is configured to process the original color image to obtain an original gray image of the target crop; the original grayscale image includes a first grayscale image and a second grayscale image.
A grayscale image processing module 1103, configured to perform binarization processing on the original grayscale image to obtain a binary image of the target crop; the binary image includes a first binary image and a second binary image.
A third binary image generating module 1104, configured to perform a logical and operation on the first binary image and the second binary image, eliminate interference of background noise, and generate a third binary image; the third binary image comprises potential central region pixel points of the target crop.
A cluster analysis module 1105, configured to perform cluster analysis on the potential central area pixel points of the target crop to obtain a clustering result of the central area pixel points of the target crop.
A central area determining module 1106, configured to determine the central area range of the target crop according to the clustering result, so as to obtain the central area of the target crop.
A central area application module 1107, configured to perform field mechanized operations, crop phenotype monitoring, and agricultural machinery navigation according to the central area of the target crop.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments can be referred to each other. Since the disclosed system corresponds to the disclosed method, its description is relatively brief, and the relevant points can be found in the description of the method.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and its core concept; meanwhile, a person skilled in the art may, according to the idea of the present invention, change the specific embodiments and the application range. In view of the above, this disclosure should not be construed as limiting the invention.

Claims (8)

1. A method for locating a field crop center area, the method comprising:
acquiring an original color image of a target crop; the original color image is a color image of the target crop obtained by vertically shooting the canopy of the target crop by a color camera;
processing the original color image to obtain an original gray image of the target crop; the original grayscale image includes a first grayscale image and a second grayscale image; the processing the original color image to obtain an original gray image of the target crop specifically comprises: extracting R components, G components and B components of each pixel point of the original color image in an RGB color space; determining an S component of the original color image in an HSV color space according to the R component, the G component and the B component; obtaining a first gray image of the target crop according to the S component; performing graying processing on the original color image by adopting the R component, the G component and the B component to obtain a second grayscale image of a target crop;
carrying out binarization processing on the original gray level image to obtain a binary image of the target crop; the binary image comprises a first binary image and a second binary image;
performing logical AND operation on the first binary image and the second binary image, eliminating the interference of background noise, and generating a third binary image; the third binary image comprises potential central region pixel points of the target crop;
performing clustering analysis on the potential center region pixel points of the target crops to obtain clustering results of the center region pixel points of the target crops;
determining the central area range of the target crop according to the clustering result to obtain the central area of the target crop; the determining the central area range of the target crop according to the clustering result to obtain the central area of the target crop specifically comprises: respectively counting the coordinates of each class of the central area pixel points to obtain the maximum value X_max in the row coordinate direction, the maximum value Y_max in the column coordinate direction, the minimum value X_min in the row coordinate direction, and the minimum value Y_min in the column coordinate direction of the central area pixel points; determining, from the maximum value X_max and the minimum value X_min in the row coordinate direction, the length X_length of the central area pixel points in the row coordinate direction by the formula X_length = X_max − X_min; determining, from the maximum value Y_max and the minimum value Y_min in the column coordinate direction, the length Y_length of the central area pixel points in the column coordinate direction by the formula Y_length = Y_max − Y_min; and drawing a rectangular frame using X_min, Y_min, X_max, Y_max, X_length, and Y_length, the rectangular frame being the central area of each target crop;
and performing field mechanized operation, crop phenotype monitoring and agricultural machinery navigation according to the central area of the target crop.
2. The method for locating the central area of field crops as claimed in claim 1, wherein said determining the S component of the original color image in HSV color space from the R component, the G component, and the B component specifically comprises:
determining a gray value of the R component, a gray value of the G component and a gray value of the B component;
and obtaining the S component of each pixel point in the HSV color space according to the gray value of the R component, the gray value of the G component and the gray value of the B component.
3. The method for locating the central area of the field crops according to claim 2, wherein the graying the original color image by using the R component, the G component and the B component to obtain a second grayscale image of the target crop specifically comprises:
and carrying out graying processing on the original color image by adopting the gray value of the R component, the gray value of the G component and the gray value of the B component to obtain a second gray image of the target crop.
4. The method for positioning the central area of the field crop as claimed in claim 1, wherein the binarizing the original gray level image to obtain a binary image of the target crop specifically comprises:
acquiring a preset segmentation threshold;
carrying out segmentation processing on the first gray-scale image by adopting the preset segmentation threshold value to obtain a first binary image of a target crop;
determining a conversion threshold value of the second gray level image by using a maximum inter-class variance method;
and converting the second gray image according to the conversion threshold value to obtain a second binary image of the target crop.
5. The method of claim 4, wherein the predetermined segmentation threshold is 0.95.
6. The method for positioning the central area of the field crop as claimed in claim 1, wherein the binarizing process is performed on the original gray level image to obtain a binary image of the target crop, and then further comprising:
and carrying out morphological on operation denoising processing on the first binary image and the second binary image.
7. The method for positioning the central area of the field crops according to claim 1, wherein the step of performing cluster analysis on the potential central area pixel points of the target crops to obtain the cluster result of the potential central area pixel points of the target crops specifically comprises the following steps:
extracting the coordinates of the pixel points in the central area in the third binary image, and clustering the coordinates of the pixel points in the central area by applying a DBSCAN clustering algorithm to obtain the classified coordinates of the pixel points in the central area; the coordinates of each type of the pixel points of the central area correspond to the central area of each crop; the central area is the plane distribution of the central area pixel points.
8. A field crop center area positioning system, the system comprising:
the color image acquisition module is used for acquiring an original color image of the target crop; the original color image is a color image of the target crop obtained by vertically shooting the canopy of the target crop by a color camera;
the color image processing module is used for processing the original color image to obtain an original gray image of the target crop; the original grayscale image includes a first grayscale image and a second grayscale image; the processing the original color image to obtain an original gray image of the target crop specifically comprises: extracting R components, G components and B components of each pixel point of the original color image in an RGB color space; determining an S component of the original color image in an HSV color space according to the R component, the G component and the B component; obtaining a first gray image of the target crop according to the S component; performing graying processing on the original color image by adopting the R component, the G component and the B component to obtain a second grayscale image of a target crop;
the gray level image processing module is used for carrying out binarization processing on the original gray level image to obtain a binary image of the target crop; the binary image comprises a first binary image and a second binary image;
the third binary image generation module is used for performing logical AND operation on the first binary image and the second binary image, eliminating the interference of background noise and generating a third binary image; the third binary image comprises potential central region pixel points of the target crop;
the cluster analysis module is used for carrying out cluster analysis on the potential central region pixel points of the target crops to obtain a cluster result of the central region pixel points of the target crops;
a central area determining module, configured to determine the central area range of the target crop according to the clustering result to obtain the central area of the target crop; the determining the central area range of the target crop according to the clustering result to obtain the central area of the target crop specifically comprises: respectively counting the coordinates of each class of the central area pixel points to obtain the maximum value X_max in the row coordinate direction, the maximum value Y_max in the column coordinate direction, the minimum value X_min in the row coordinate direction, and the minimum value Y_min in the column coordinate direction of the central area pixel points; determining, from the maximum value X_max and the minimum value X_min in the row coordinate direction, the length X_length of the central area pixel points in the row coordinate direction by the formula X_length = X_max − X_min; determining, from the maximum value Y_max and the minimum value Y_min in the column coordinate direction, the length Y_length of the central area pixel points in the column coordinate direction by the formula Y_length = Y_max − Y_min; and drawing a rectangular frame using X_min, Y_min, X_max, Y_max, X_length, and Y_length, the rectangular frame being the central area of each target crop;
and the central area application module is used for performing field mechanized operation, crop phenotype monitoring and agricultural machinery navigation according to the central area of the target crop.
CN202110772724.8A 2021-07-08 2021-07-08 Method and system for positioning central area of field crop Active CN113421301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110772724.8A CN113421301B (en) 2021-07-08 2021-07-08 Method and system for positioning central area of field crop


Publications (2)

Publication Number Publication Date
CN113421301A (en) 2021-09-21
CN113421301B (en) 2022-08-05

Family

ID=77720462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110772724.8A Active CN113421301B (en) 2021-07-08 2021-07-08 Method and system for positioning central area of field crop

Country Status (1)

Country Link
CN (1) CN113421301B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112614147A (en) * 2020-12-24 2021-04-06 中国农业科学院作物科学研究所 Method and system for estimating plant density of crop at seedling stage based on RGB image

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8160382B2 (en) * 2007-10-15 2012-04-17 Lockheed Martin Corporation Method of object recognition in image data using combined edge magnitude and edge direction analysis techniques
US8811751B1 (en) * 2013-12-20 2014-08-19 I.R.I.S. Method and system for correcting projective distortions with elimination steps on multiple levels
CN103984946B (en) * 2014-05-23 2017-04-26 北京联合大学 High resolution remote sensing map road extraction method based on K-means
CN105096288A (en) * 2015-08-31 2015-11-25 中国烟草总公司广东省公司 Method for detecting target positioning line of tobacco field image
US10198819B2 (en) * 2015-11-30 2019-02-05 Snap Inc. Image segmentation and modification of a video stream
CN106408025B (en) * 2016-09-20 2019-11-26 西安工程大学 Aerial Images insulator classifying identification method based on image procossing
CN107220647B (en) * 2017-06-05 2020-03-31 中国农业大学 Crop center point positioning method and system under blade crossing condition
CN109684938A (en) * 2018-12-06 2019-04-26 广西大学 It is a kind of to be taken photo by plane the sugarcane strain number automatic identifying method of top view based on crop canopies
CN112702565A (en) * 2020-12-03 2021-04-23 浙江大学 System and method for acquiring field plant phenotype information



Similar Documents

Publication Publication Date Title
CN108009542B (en) Weed image segmentation method in rape field environment
CN109447945B (en) Quick counting method for basic wheat seedlings based on machine vision and graphic processing
CN108960011B (en) Partially-shielded citrus fruit image identification method
CN107220647B (en) Crop center point positioning method and system under blade crossing condition
CN106875407B (en) Unmanned aerial vehicle image canopy segmentation method combining morphology and mark control
CN111695373B (en) Zebra stripes positioning method, system, medium and equipment
CN114067207A (en) Vegetable seedling field weed detection method based on deep learning and image processing
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision
Feng et al. A separating method of adjacent apples based on machine vision and chain code information
Malekabadi et al. A comparative evaluation of combined feature detectors and descriptors in different color spaces for stereo image matching of tree
CN114140665A (en) Dense small target detection method based on improved YOLOv5
CN107133964B (en) Image matting method based on Kinect
Hitimana et al. Automatic estimation of live coffee leaf infection based on image processing techniques
CN106709432B (en) Human head detection counting method based on binocular stereo vision
CN114431005A (en) Intelligent agricultural fruit picking, identifying and positioning method, system and device
Choudhuri et al. Crop stem width estimation in highly cluttered field environment
CN112115778B (en) Intelligent lane line identification method under ring simulation condition
Xiang et al. PhenoStereo: a high-throughput stereo vision system for field-based plant phenotyping-with an application in sorghum stem diameter estimation
Zeng et al. Detecting and measuring fine roots in minirhizotron images using matched filtering and local entropy thresholding
CN113421301B (en) Method and system for positioning central area of field crop
CN115731257A (en) Leaf form information extraction method based on image
Sun et al. A vision system based on TOF 3D imaging technology applied to robotic citrus harvesting
CN112541471B (en) Multi-feature fusion-based shielding target identification method
Yang et al. Cherry recognition based on color channel transform
Khan et al. Segmentation of single and overlapping leaves by extracting appropriate contours

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant