CN115082783A - Low-contrast fogdrop image identification method - Google Patents


Info

Publication number
CN115082783A
CN115082783A
Authority
CN
China
Prior art keywords
image
fogdrop
fog
carrying
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210598996.5A
Other languages
Chinese (zh)
Inventor
邓继忠
雷落成
叶家杭
罗明达
张子超
赵高源
刘理涵
李志勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University
Priority claimed from CN202210598996.5A
Publication of CN115082783A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/188: Vegetation
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V10/34: Smoothing or thinning of the pattern; morphological operations; skeletonisation
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a low-contrast droplet (fog-drop) image recognition method. An acquired sample image is first converted to grayscale and binarized with a threshold, segmenting the background from the droplet regions. The segmented droplet image is then median-filtered to obtain a smooth overall image, on which edge detection is performed, followed by concave-point erosion and concave-point dilation so that small droplets take on a more distinct shape. Finally, contour detection and drawing yield the outer boundary of each droplet, so the droplet distribution can be clearly recognized. The method can detect droplets on water-sensitive or coated paper, compensates for severe droplet adhesion, accurately recognizes small-particle images, and improves recognition accuracy. It also enables better assessment of the spraying effect, promoting the development of precision agriculture.

Description

Low-contrast fogdrop image identification method
Technical Field
The invention relates to the field of agriculture, in particular to a low-contrast fogdrop image identification method.
Background
At present, plant-protection unmanned aerial vehicles (UAVs) have become a trend of the times and are widely used in agricultural production, including pesticide application, sowing, and crop-condition monitoring. In pesticide application, research on UAV spraying technology has advanced greatly, making UAVs an increasingly important means of pest and disease control. Assessing UAV spraying quality is therefore particularly important. The decisive factors are the size of the droplets produced by spraying and their distribution. Droplet size is usually measured by collecting droplets on coated paper or water-sensitive paper: the paper is clipped onto field crops before the plant-protection operation, collected afterwards, and scanned to form a droplet image; image processing is then used to analyze the droplets and thereby assess the spraying quality.
With the development of imaging technology, droplet identification currently relies largely on DepositScan, software developed by the United States Department of Agriculture for analyzing droplet deposits. The software analyzes droplets accurately and quickly identifies droplet sizes from scanned images; it eliminates the tedium of manual identification, simplifies detection, improves efficiency, and supports intelligent assessment of spraying quality.
In addition, DJI has developed a droplet analyzer. The instrument is designed specifically for agriculture: it is equipped with a high-precision lens module and works together with DJI's droplet-analysis app on a camera-equipped smart device, accurately acquiring and analyzing droplet information on water-sensitive or coated paper and providing strong support for precision agriculture. The analyzer is relatively compact and easy to use, and one person can perform field droplet analysis. In use, coated or water-sensitive paper is placed at the location where droplets are to be analyzed, and the pesticide is sprayed. Droplet analysis is then carried out step by step, and the results, including the coefficient of variation, the coverage rate, and the droplet count, are displayed in the task list of the data-acquisition interface. The coefficient of variation characterizes the droplet distribution: the smaller it is, the more uniform the distribution. The coverage rate is the proportion of area covered by droplets, and the droplet count is the total number of droplets in the picture, all of which can be displayed on the application page.
However, the aforementioned DepositScan software of the U.S. Department of Agriculture and the droplet analyzer developed by DJI have the following disadvantages:
First, the image-processing method used by the DepositScan program analyzes most cases accurately, but in a small fraction of cases it cannot correctly separate severely adhered droplets on the paper, nor can it accurately identify very small droplets. Second, although DJI's droplet analyzer achieves the functions above, it requires a combination of dedicated software and hardware for detection.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide a low-contrast fogdrop image recognition method that can detect droplets on water-sensitive or coated paper, compensate for severe droplet adhesion, accurately recognize small-particle images, and improve recognition accuracy. The method also enables better assessment of the spraying effect, promoting the development of precision agriculture.
To achieve this purpose, the invention adopts the following technical scheme:
a low-contrast fogdrop image identification method comprises the following steps:
(1) collecting an image of the fog drop water drops to obtain a sample image of the fog drops;
(2) graying the collected sample image;
(3) carrying out binarization on the grayed sample image;
(4) segmenting the background image and the fogdrop image in the binarized sample image to obtain a fogdrop image without the background image;
(5) carrying out median filtering on the divided fogdrop image to obtain a smooth integral image;
(6) carrying out edge extraction on the whole image, then carrying out pit corrosion, and then carrying out pit expansion;
(7) and drawing and analyzing the shapes of the fog drops to obtain the corresponding quantity and area distribution of the fog drops.
Preferably, in step (3), a gray threshold T is preset; the gray value of pixels whose gray value is less than T is set to 0, and the gray value of pixels whose gray value is greater than T is set to 255.
Preferably, in step (4), an OpenCV-based background-segmentation technique is used to segment the background and the droplet regions in the binarized sample image.
Preferably, in step (5), the median filtering of the segmented droplet image is performed as follows: in a single channel, the gray values in the neighborhood of each pixel of the droplet image are sorted, and the median of the sorted values replaces the pixel's original gray value.
Preferably, in step (6), the edge extraction of the whole image is performed as follows: connecting the upper, lower, left, and right limit positions of the processed whole image into a closed frame; then determining the borders of the droplet contours, keeping only those borders that belong to an inclusion relation; putting the detected droplet contours into a set; and drawing the contours in the set and obtaining their positions, thereby obtaining the corresponding droplet count and area distribution.
Preferably, in step (6), the concave-point dilation uses flood filling, which comprises the following steps: first inputting a 1-channel or 3-channel picture; then determining the seed (starting) point of the flood fill; repainting the pixels of the connected region with a new value; then obtaining the minimal bounding rectangle of the repainted region; and finally setting the maximal allowed brightness difference between the pixel under consideration and its neighboring pixels or the seed pixel for the pixel to be added to the region.
Preferably, in step (7), the contour drawing uses the CV_CHAIN_APPROX_SIMPLE mode.
Preferably, in step (7), contour detection uses the findContours function of OpenCV.
Compared with the prior art, the invention has the following advantages:
(1) The low-contrast fogdrop image identification method combines contour detection and contour drawing to detect droplets on water-sensitive or coated paper; it compensates for severe droplet adhesion, accurately identifies small-particle images, and improves recognition accuracy. With the resulting droplet data as a reference, the UAV can achieve a better spraying effect during pesticide application, which benefits the development of precision agriculture.
(2) The method removes the difficulty of manual identification and can greatly improve recognition efficiency, helping precision and smart agriculture penetrate every aspect of agricultural production.
(3) The method applies morphological dilation to small droplets so that they take on a clearer form, then performs contour drawing and detection for droplet recognition; its detection precision is higher than that of traditional droplet-detection methods.
Drawings
The present invention will now be further described with reference to the accompanying drawings.
Fig. 1 is a flow chart of a low-contrast fogdrop image identification method according to the present invention.
Figs. 2 and 3 show the effect before and after background segmentation, where fig. 2 is the original image and fig. 3 is the segmented droplet image.
Figs. 4 and 5 compare the effect before and after flood filling, where fig. 4 is the original image and fig. 5 is the flood-filled image.
FIG. 6 is a pointer offset diagram.
Fig. 7 is a scanning schematic diagram of a boundary scan algorithm for contour detection, wherein (a) indicates that an outer boundary is scanned and (b) indicates that a hole is scanned.
Fig. 8 is a diagram showing effects in contour detection.
Detailed Description
The invention is further illustrated by the following figures and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings.
Referring to fig. 1 to 8, the low-contrast droplet image recognition method of the present invention includes the steps of:
(1) collecting an image of the sprayed droplets to obtain a droplet sample image;
(2) graying the collected sample image;
(3) binarizing the grayed sample image, specifically: presetting a gray threshold T, setting the gray value of pixels whose gray value is less than T to 0 and of pixels whose gray value is greater than T to 255; expressed by the formula:
g(x, y) = 255 if f(x, y) > T, and g(x, y) = 0 if f(x, y) ≤ T,
where f(x, y) is the gray value of the pixel at (x, y) and g(x, y) is its binarized value.
(4) Segmenting the background and the droplet regions in the binarized sample image to obtain a droplet image without background. This embodiment mainly adopts an OpenCV-based background-segmentation technique; background segmentation is a principal processing step in most vision-based programs. The OTSU algorithm is used: following the idea of clustering, it divides the image's pixels into two classes by gray level so that the gray difference between the two classes is maximal and the gray difference within each class is minimal, selecting a suitable threshold by computing the within-class variance. Figs. 2 and 3 compare the image before and after segmentation, where fig. 2 is the original image and fig. 3 is the segmented droplet image.
(5) Applying median filtering to the segmented droplet image to obtain a smooth overall image. Median filtering is a nonlinear filtering technique whose basic idea is, in a single channel, to sort the gray values in the neighborhood of each pixel and replace the original value with the median. If a pixel is not in an edge region, the image data around it is smooth, without large differences; a noise point, by contrast, is abnormally large or small. Sensors and long-distance transmission can corrupt the original data, producing many granular noise points in the image. Since most of the image's energy is concentrated at low and middle frequencies, the median of the sorted neighborhood values is unlikely to be a contaminated high-frequency point.
(6) Performing edge extraction on the overall image, then concave-point erosion, then concave-point dilation. Specifically:
the edge-extraction step is: connecting the upper, lower, left, and right limit positions of the processed overall image into a closed frame; then determining the borders of the droplet contours, keeping only those borders that belong to an inclusion relation; putting the detected droplet contours into a set; and drawing the contours in the set and obtaining their positions, thereby obtaining the corresponding droplet count and area distribution;
the concave-point dilation uses flood filling, which comprises the following steps: input a 1-channel picture (a single-channel image, e.g. grayscale) or a 3-channel picture (an image with all three RGB planes); determine the seed (starting) point of the flood fill; repaint the pixels of the connected region with a new value; obtain the minimal bounding rectangle of the repainted region; and set the maximal allowed brightness difference between the pixel under consideration and its neighboring pixels or the seed pixel for the pixel to be added to the region. The effect before and after flood filling is compared in figs. 4 and 5, where fig. 4 is the original image and fig. 5 is the flood-filled image.
(7) Drawing and analyzing the droplet contours, with contour detection performed in the CV_CHAIN_APPROX_SIMPLE mode. In OpenCV, CV_CHAIN_CODE stores a contour as a Freeman chain code; CV_CHAIN_APPROX_NONE converts the chain code into a set of points and stores every point on the contour; CV_CHAIN_APPROX_SIMPLE compresses horizontal, vertical, and diagonal segments, keeping only their end points. Since only the outer contour needs to be detected and droplet contours are irregular, CV_CHAIN_APPROX_SIMPLE gives a more concise and clear contour-processing result, so it is chosen as the contour-drawing parameter.
First, a 3x3 neighborhood fast-lookup table is defined. Because Mat data is stored contiguously in memory, moving the pointer by one row of data, i.e. by the step distance, moves from pixel (i, j) to pixel (i+1, j). Here, step is the number of bytes occupied by one row of pixels and nch is the per-pixel step; it is set to 1 below, i.e. the pointer moves by one byte, giving eight directions, as shown in fig. 6.
For contour detection, the findContours function of OpenCV is mainly used, in which:
the first parameter, image, is the input image, Mat-type data in the program; it must be an 8-bit single-channel image in which all non-zero pixels are treated as 1, generally a processed binary image;
the second parameter is the output contours, each contour being stored in a List<MatOfPoint>;
the third parameter, hierarchy, stores the output contour relationships in a Mat;
the fourth parameter, mode, defines which contours are detected; it affects both the contours and the hierarchy and is selected according to the application:
CV_RETR_EXTERNAL: only the outermost contours;
CV_RETR_LIST: all contours, kept independent, with no parent-child relationship;
CV_RETR_CCOMP: all contours, but only a two-level relationship is established among them; if more than two levels are related, they are decomposed from the top level into two-level groups;
CV_RETR_TREE: all contours, with a hierarchy of unlimited depth built according to their actual nesting; after repeated tests, CV_RETR_TREE was selected as the input parameter with the best effect;
the fifth parameter, method, defines whether the output contours undergo processing:
CHAIN_APPROX_NONE: the computed contours are not processed; the data is used directly;
CHAIN_APPROX_SIMPLE: the contours are compressed; all horizontal, vertical, and diagonal runs are reduced to their end points;
CHAIN_APPROX_TC89_L1: the contours are compressed with the Teh-Chin chain approximation algorithm;
CHAIN_APPROX_TC89_KCOS: the contours are compressed with another variant of the Teh-Chin chain approximation algorithm;
after repeated tests, CHAIN_APPROX_SIMPLE was selected as giving the best effect.
Therefore, the boundary-scanning algorithm for contour detection scans line by line from the upper-left corner and considers a boundary found in either of two cases: case (a) in fig. 7 indicates that an outer boundary is encountered, and case (b) in fig. 7 indicates that a hole border is encountered. For the tested droplet pictures, every starting point is the first non-zero element encountered in its row.
Also, when using the findContours function, attention must be paid to the clockwise or counterclockwise direction during the search; otherwise different results are obtained. The effect is shown in fig. 8.
The above describes only exemplary embodiments of the present invention; the invention is not limited to these embodiments, and any other changes, modifications, substitutions, combinations, or simplifications that do not depart from its spirit and principle are to be construed as equivalents within its scope.

Claims (8)

1. A low-contrast fogdrop image identification method, characterized by comprising the following steps:
(1) collecting an image of the sprayed droplets to obtain a droplet sample image;
(2) graying the collected sample image;
(3) binarizing the grayed sample image;
(4) segmenting the background and the droplet regions in the binarized sample image to obtain a droplet image without background;
(5) applying median filtering to the segmented droplet image to obtain a smooth overall image;
(6) performing edge extraction on the overall image, then concave-point erosion, then concave-point dilation;
(7) drawing and analyzing the droplet contours to obtain the corresponding droplet count and area distribution.
2. The fogdrop image identification method of claim 1, wherein in step (3) a gray threshold T is preset; the gray value of pixels whose gray value is less than T is set to 0, and the gray value of pixels whose gray value is greater than T is set to 255.
3. The low-contrast fogdrop image identification method of claim 1, wherein in step (4) an OpenCV-based background-segmentation technique is used to segment the background and the droplet regions in the binarized sample image.
4. The low-contrast fogdrop image identification method of claim 1, wherein in step (5) the median filtering of the segmented droplet image is: in a single channel, sorting the gray values in the neighborhood of each pixel of the droplet image and replacing the pixel's original gray value with the median of the sorted values.
5. The fogdrop image identification method of claim 1, wherein in step (6) the edge extraction of the overall image is: connecting the upper, lower, left, and right limit positions of the processed overall image into a closed frame; determining the borders of the droplet contours and keeping only those borders that belong to an inclusion relation; putting the detected droplet contours into a set; and drawing the contours in the set and obtaining their positions, thereby obtaining the corresponding droplet count and area distribution.
6. The low-contrast fogdrop identification method of claim 1, wherein in step (6) the concave-point dilation uses flood filling, the flood filling comprising: first inputting a 1-channel or 3-channel picture; then determining the seed point of the flood fill; repainting the pixels of the connected region with a new value; then obtaining the minimal bounding rectangle of the repainted region; and finally setting the maximal allowed brightness difference between the pixel under consideration and its neighboring pixels or the seed pixel for the pixel to be added to the region.
7. The low-contrast fogdrop image identification method of claim 1, wherein in step (7) the contour drawing uses the CV_CHAIN_APPROX_SIMPLE mode.
8. The low-contrast fogdrop image identification method of claim 1, wherein in step (7) contour detection uses the findContours function of OpenCV.
CN202210598996.5A, filed 2022-05-30 (priority date 2022-05-30): Low-contrast fogdrop image identification method, published as CN115082783A, pending.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210598996.5A CN115082783A (en) 2022-05-30 2022-05-30 Low-contrast fogdrop image identification method


Publications (1)

Publication Number Publication Date
CN115082783A 2022-09-20

Family

ID=83248757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210598996.5A Pending CN115082783A (en) 2022-05-30 2022-05-30 Low-contrast fogdrop image identification method

Country Status (1)

Country Link
CN (1) CN115082783A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226108A (en) * 2007-01-19 2008-07-23 中国农业机械化科学研究院 Method for testing droplet distribution consistency degree
CN110838126A (en) * 2019-10-30 2020-02-25 东莞太力生物工程有限公司 Cell image segmentation method, cell image segmentation device, computer equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Jian (张键): "Low-contrast droplet image segmentation" (《低对比度的雾滴图像分割》), Automation Application (《自动化应用》), no. 03, 25 March 2021, pages 61-63 *
Ma Kai (马凯): "Design and experimental study of a foliar droplet deposition detection system" (《叶面雾滴沉积检测系统设计与试验研究》), China Master's Theses Full-text Database, Agricultural Science and Technology (《中国优秀硕士学位论文全文数据库 农业科技辑》), no. 2022, 15 May 2022, pages 36-37 *


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination