CN115187788A - Crop seed automatic counting method based on machine vision - Google Patents

Crop seed automatic counting method based on machine vision

Info

Publication number
CN115187788A
Authority
CN
China
Prior art keywords
image
seed
seeds
size
pixel
Prior art date
Legal status
Pending
Application number
CN202210558202.2A
Other languages
Chinese (zh)
Inventor
杜志钢
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202210558202.2A
Publication of CN115187788A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/30: Noise filtering
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a machine-vision-based automatic counting method for crop seeds, used for automatic seed counting in agricultural production scenes and providing a technical means for measuring the thousand-seed weight of crops and for seed counting in agricultural production. The method comprises the following steps: step 1, seed image preprocessing; step 2, seed image segmentation; step 3, seed contour detection; and step 4, automatic seed counting. In step 1, gray conversion and Gaussian-blur noise reduction are applied to the input image; in step 2, the image is processed with an improved adaptive-threshold edge detection operator to obtain a binary image; in step 3, seed contours are extracted and segmented with a boundary tracking algorithm, and overlapped seeds are processed so that single seeds and groups of seeds can be identified; in step 4, the contour-size distribution is fitted with a normal distribution function, and the seed count is calculated using the average size of the contour samples within a preset confidence interval as the standard seed size.

Description

Crop seed automatic counting method based on machine vision
Technical Field
The invention relates to the field of image processing, in particular to a crop seed automatic counting method based on machine vision.
Background
Crop seed counting and weight metering are key indexes of seed quality; when the thousand-seed-weight index of crop seeds is calculated, accurate counting of the seed particles directly determines the accuracy of the final measurement. Manual counting is labor-intensive, inefficient, and inaccurate. Various automatic crop seed counting methods are currently on the market, mainly based on photoelectric-tube counting technology, but they suffer from high cost and poor precision.
Therefore, the invention provides a machine-vision-based crop seed automatic counting method that counts crop seed particles with high precision, strong anti-interference capability, and very short processing time, giving it high practical value in the modernization of agricultural production.
Disclosure of Invention
In order to realize the purpose of the invention, the following technical scheme is adopted:
a crop seed automatic counting method based on machine vision comprises the following steps: step 1, preprocessing an image; step 2, image segmentation; step 3, detecting the contour of the seeds, superposing the seeds, and step 4, automatically counting the seeds; wherein: step 1, carrying out gray level conversion and Gaussian blur noise reduction processing on an input image; step 2, processing the image by using an improved self-adaptive threshold edge detection operator to obtain a binary image; step 3, extracting a segmentation contour through a boundary tracking algorithm, and identifying a part where a plurality of seed particles are overlapped; and 4, fitting the profile size distribution by using a normal distribution function, and calculating the seed number by using the average size of the profile samples in a preset confidence interval as a standard seed size.
In the machine-vision-based seed counting method, step 1, image preprocessing, comprises:
(1) Converting an input BGR channel image into a gray image;
(2) Eliminating Gaussian noise in the image with two-dimensional Gaussian filtering: every 5 × 5 pixel neighbourhood of the gray image is convolved with a 5 × 5 Gaussian kernel to obtain the denoised gray image, where the two-dimensional Gaussian kernel is expressed as:
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
where σ is the standard deviation of the distribution and x and y are the position indices.
In the machine-vision-based seed counting method, σ = 1.4, and the corresponding Gaussian filter kernel is:
K = (1/159) ·
| 2  4  5  4  2 |
| 4  9 12  9  4 |
| 5 12 15 12  5 |
| 4  9 12  9  4 |
| 2  4  5  4  2 |
In the machine-vision-based seed counting method, step 2, detecting the image edge information with the adaptive-threshold multi-stage edge detection algorithm, comprises:
(1) Inputting the gray image preprocessed in step 1 and calculating the gradient magnitude and gradient direction of each pixel in the image with the Laplacian operator, whose formula is:
∇²f = ∂²f/∂x² + ∂²f/∂y²
the gradient value is calculated by the formula:
G = √(Gx² + Gy²)
the gradient direction is as follows:
θ = arctan(Gy / Gx)
the 3 × 3 laplacian template window is represented as:
[The 3 × 3 Laplacian template windows Kgx (horizontal) and Kgy (vertical) appear only as images in the original filing and are not reproduced here.]
(2) Performing non-maximum suppression: the edge strength of the current pixel is compared with the edge strengths of the pixels in its positive and negative gradient directions; if the current pixel's edge strength is the largest, its value is retained, otherwise it is suppressed:
p_grad = max{a_grad, b_grad, p_grad}
wherein a and b are two adjacent pixel points in the positive and negative gradient directions of a certain pixel p on the edge;
(3) Firstly, determining the high threshold ratio from the gradient difference values around the maximum-gradient pixel, where G(i)_max is the pixel with the largest gradient value and diff(i) is the pixel-count difference at position i of the image's gradient-difference histogram, i.e. the number of pixels at gradient magnitude i minus the number at magnitude i−1. The calculation formula of the high threshold ratio is:
TH_hRatio = |diff(i−1) / diff(i+1)|, taken at the position i where G(i) = G(i)_max, diff(i) = 0, and diff(i−1) · diff(i+1) < 0
the high threshold is obtained by the maximum gray value in the input gray image and the high threshold ratio:
TH_h = max(img) * TH_hRatio
calculating the low threshold by the high threshold:
TH_l = TH_h / 3
After the adaptive high and low thresholds are obtained, the pixel gradient values in the image are compared with the thresholds one by one: a pixel is set to 1 if its gradient value is above the high threshold and to 0 if below the low threshold; if the gradient value lies between the two thresholds, the neighbouring pixels are examined, and the pixel is set to 1 if any neighbour's gradient value exceeds the high threshold, otherwise to 0. The result is a binary image of 0s and 1s in which 1 marks the contours.
In the machine-vision-based seed counting method, the contour detection unit uses a boundary tracking algorithm comprising:
(1) Inputting a binary image obtained after processing by an image segmentation unit, scanning pixels in a grid from left to right and from top to bottom in sequence, and finding out a first black pixel which is not scanned before;
(2) Taking this black pixel as the starting point, scanning its neighbouring pixels clockwise; if a new black pixel is scanned, setting it as the new current point and continuing to scan its neighbours; if no black pixel surrounds the current point, abandoning the current starting point and returning to step 1;
(3) When the trace returns to the original starting point, the contour is closed: the contour count is increased by 1, and the number of pixels in the contour is computed as its size; steps 1–3 are repeated until all black pixels in the input image have been traversed.
In the machine-vision-based seed counting method, step 4 comprises:
(1) Calculating the standard seed size: a normal distribution is fitted to the contour-size data, where the probability density function of the one-dimensional normal distribution is:
f(x) = (1 / (σ · √(2π))) · exp(−(x − μ)² / (2σ²))
where μ is the location parameter, σ is the scale parameter, and x is a sample value, i.e. a contour size;
(2) The average size of the contour samples within the 95% confidence interval was taken as the seed standard size:
avg_size = (1/n) · Σ_{i=1}^{n} spl_size(i)
where avg_size is the standard seed size and the spl_size(i) are the sample contour sizes falling inside the 95% confidence interval of the fitted normal distribution;
(3) Dividing each contour size by the standard seed size to obtain the number of seeds contained in that contour:
num = Σ_{i=1}^{n} ⌈size_i / avg_size⌉
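To make the four steps concrete before the detailed embodiments, the following is a minimal end-to-end sketch in Python, assuming OpenCV, NumPy, and SciPy. Here cv2.Canny and cv2.findContours stand in for the patent's adaptive-threshold edge operator and boundary-tracking algorithm (both detailed below), and the 0.5 high-threshold ratio is an illustrative placeholder, not a value from the disclosure.

```python
# Minimal sketch of the four-step pipeline; names are illustrative.
import cv2
import numpy as np
from scipy import stats

def count_seeds(image_path: str) -> int:
    # Step 1: gray conversion + 5x5 Gaussian blur (sigma = 1.4)
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)

    # Step 2: edge detection; cv2.Canny stands in for the patent's
    # adaptive-threshold operator (these thresholds are fixed, not adaptive)
    high = int(blurred.max() * 0.5)
    edges = cv2.Canny(blurred, high // 3, high)

    # Step 3: contour extraction; findContours stands in for the
    # boundary-tracking algorithm
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    sizes = np.array([cv2.contourArea(c) for c in contours
                      if cv2.contourArea(c) > 0])

    # Step 4: fit a normal distribution, average the contour sizes inside
    # the 95% confidence interval, then divide and round up
    mu, sigma = stats.norm.fit(sizes)
    lo, hi = stats.norm.interval(0.95, loc=mu, scale=sigma)
    standard = sizes[(sizes >= lo) & (sizes <= hi)].mean()
    return int(np.ceil(sizes / standard).sum())
```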
drawings
FIG. 1 is a functional schematic of a seed counting system according to the present invention;
FIG. 2 is a flow chart of the operation of the image pre-processing unit of the present invention;
FIG. 3 is a flow chart of the operation of the image segmentation unit of the present invention;
FIG. 4 is a flow chart of the operation of the image contour detection unit of the present invention;
FIG. 5 is a flow chart of the operation of the counting unit of the present invention.
Detailed Description
The following detailed description of the embodiments of the present invention is made with reference to the accompanying drawings 1-5:
referring to fig. 1, the present invention is a crop seed automatic counting method based on machine vision, which inputs a color image of crop seeds under a solid background and outputs the number of seeds contained in the image. The system for implementing the method comprises an image preprocessing unit, an image segmentation unit, a contour detection unit and a counting unit.
Wherein: the image preprocessing unit is used for carrying out gray level conversion on an input original image and then removing noise by using Gaussian blur; the image segmentation unit is used for processing the image by using an edge detection operator of a self-adaptive threshold value to obtain a binary image, wherein a value 0 is used as a background, and a value 1 is used as a detected edge; the contour detection unit scans the pixel points one by one through a boundary tracking algorithm, extracts and segments contours and calculates the number and the size of the contours; the counting unit is used for calculating the number of the seeds according to the number of the outlines and the size of the outlines.
Fig. 2 is an image preprocessing unit work flow diagram. The function of this unit is to perform preliminary processing on the image to obtain a de-noised gray scale image for segmentation of the contours. The realization method comprises the following steps:
1. the original image is firstly divided into three channels of B, G and R, wherein B represents a blue channel, G represents a green channel, and R represents a red channel.
2. Converting the color image into a gray image, wherein the conversion formula is as follows:
gray = R*0.299 + G*0.587 + B*0.114 (1)
where gray is the gray value of the pixel and R, G, and B are the red, green, and blue channel values of the pixel, respectively.
3. Two-dimensional Gaussian filtering is then applied to the gray image. Gaussian filtering is one of the most commonly used preprocessing steps in machine vision; its main purpose is to eliminate noise in the image and output the filtered image data.
Gaussian filtering uses kernel convolution: for each pixel, a blur value is computed as a weighted average of its neighbourhood, with the weights concentrated toward the centre. Convolving a 5 × 5 filter with each 5 × 5 pixel matrix gives the pixels near the centre pixel a large weight, while those at the edges of the matrix contribute only slightly.
The two-dimensional Gaussian filter is shown in equation (2):
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)) (2)
where σ is the standard deviation of the distribution and x and y are the position indices of the pixels. The value of σ controls the variance around the mean of the gaussian distribution, which determines the extent of the blurring effect around the pixel. The kernel of the gaussian filter with σ =1.4 is shown in equation (3).
K = (1/159) ·
| 2  4  5  4  2 |
| 4  9 12  9  4 |
| 5 12 15 12  5 |
| 4  9 12  9  4 |
| 2  4  5  4  2 |   (3)
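A short sketch of this preprocessing unit, assuming OpenCV and NumPy; the function name preprocess is illustrative:

```python
# Sketch of the preprocessing unit: weighted gray conversion (formula (1))
# followed by 5x5 Gaussian filtering with sigma = 1.4 (equation (3)).
import cv2
import numpy as np

def preprocess(bgr: np.ndarray) -> np.ndarray:
    b, g, r = cv2.split(bgr)                 # split the B, G, R channels
    gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

    # Build the 5x5 Gaussian kernel and convolve it over every
    # 5x5 pixel neighbourhood of the gray image.
    k1d = cv2.getGaussianKernel(5, 1.4)      # 5x1 column vector
    kernel = k1d @ k1d.T                     # outer product -> 5x5 kernel
    return cv2.filter2D(gray, -1, kernel)
```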
FIG. 3 is a flowchart of an image segmentation unit operation for segmenting an image using an adaptive threshold multi-level edge detection algorithm.
An edge is a set of pixels across which the gray value changes sharply; within the image, the degree and direction of the gray-value change are represented by the gradient.
The specific content of the adaptive threshold edge detection algorithm comprises the following steps:
1. the gradient magnitude and direction of the image are calculated.
A gray image obtained from the previous (image preprocessing) unit is input, a 3 × 3 computation window is created, and the window is convolved with the horizontal and vertical 3 × 3 Laplacian template operators to obtain the horizontal and vertical gradient partial derivatives Gx and Gy of each pixel; the gradient magnitude is the square root of the sum of their squares.
The formula for the laplacian is as follows:
∇²f = ∂²f/∂x² + ∂²f/∂y² (4)
the gradient value calculation formula is as follows:
G = √(Gx² + Gy²) (5)
the calculation formula of the gradient direction is as follows:
θ = arctan(Gy / Gx) (6)
the 3 × 3 laplacian template window is shown in (7), and the operator can make a bright point in a darker area in the image brighter, so that the contrast at the gray abrupt change position is enhanced while each gray value in the image is retained, and the final result is that small detail information in the image is highlighted on the premise that the background of the image is retained, so that edge information can be better acquired.
[The 3 × 3 template windows of formula (7) appear only as an image in the original filing and are not reproduced here.]
where Kgx and Kgy are the Laplacian template windows of the pixel in the horizontal and vertical directions, respectively.
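A sketch of this gradient step under equations (5) and (6). Since the patent's exact Kgx and Kgy matrices appear only as an image, a standard Sobel-style derivative pair is assumed here in their place:

```python
# Convolve a horizontal/vertical template pair and derive the gradient
# magnitude and direction of every pixel.
import cv2
import numpy as np

def gradients(gray: np.ndarray):
    # Assumed 3x3 templates; substitute the patent's actual Kgx / Kgy.
    kgx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    kgy = kgx.T
    g32 = gray.astype(np.float32)
    gx = cv2.filter2D(g32, -1, kgx)          # horizontal partial derivative
    gy = cv2.filter2D(g32, -1, kgy)          # vertical partial derivative
    magnitude = np.sqrt(gx ** 2 + gy ** 2)   # equation (5)
    direction = np.arctan2(gy, gx)           # equation (6)
    return magnitude, direction
```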
2. Non-maximum suppression
The purpose of this step is to convert "blurred" edges in the gradient magnitude image to "sharp" edges. Basically, this is achieved by retaining all local maxima in the gradient image and deleting all other content. The method comprises the following specific steps:
(1) The gradient direction θ is first rounded to the nearest multiple of 45°. For example, a gradient direction of 40° lies between 0° and 45° and is rounded to 45°, the nearer of the two.
(2) The gradient magnitude G of the current pixel is compared to the edge intensities of the pixel in the positive and negative gradient directions, i.e. the gradient magnitudes. That is, if the gradient direction is north (θ =90 °), the current pixel is compared with two pixels adjacent in the north-south direction.
(3) If the gradient size of the current pixel is the largest, the value of the gradient size of the current pixel is retained. If not, the value is suppressed (i.e., deleted):
p_grad = max{a_grad, b_grad, p_grad} (8)
wherein a and b are two adjacent pixel points in the positive and negative gradient directions of a certain pixel p on the edge.
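A direct, loop-based sketch of this suppression step, written for clarity rather than speed; all names are illustrative:

```python
# Non-maximum suppression: quantize the gradient direction to 45-degree
# steps and keep a pixel only if it is the local maximum along that
# direction (equation (8)).
import numpy as np

def non_max_suppression(mag: np.ndarray, direction: np.ndarray) -> np.ndarray:
    out = np.zeros_like(mag)
    angle = (np.rad2deg(direction) + 180.0) % 180.0   # fold into [0, 180)
    h, w = mag.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:        # ~0 deg: east-west neighbours
                nbrs = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                    # ~45 deg diagonal
                nbrs = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                   # ~90 deg: north-south
                nbrs = mag[i - 1, j], mag[i + 1, j]
            else:                             # ~135 deg diagonal
                nbrs = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= max(nbrs):        # keep only local maxima
                out[i, j] = mag[i, j]
    return out
```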
3. Adaptive dual threshold detection
The result of the non-maximum suppression obtained in the last step is not perfect: some retained edges may not actually be edges, and some noise remains in the image, so double-threshold detection is needed to solve this problem.
Generally, specific high and low thresholds TH_h and TH_l are selected:
(1) If the gradient value of a pixel is above the high threshold, it is retained.
(2) If the gradient value of a certain pixel is lower than the low threshold, it is discarded.
(3) If the gradient value of a pixel lies between the high and low thresholds, its 8-neighbourhood is examined: the pixel is kept if any neighbouring pixel's gradient value exceeds the high threshold, and discarded otherwise.
However, this method needs a different pair of thresholds for each image, and adjusting them by hand every time is tedious, so an adaptive threshold greatly improves the efficiency of the program.
In actual testing it was found that multiplying an adaptive threshold ratio by the maximum pixel value in the image serves the final result better than any particular fixed threshold value. The invention therefore does not compute the adaptive threshold directly; it first obtains an adaptive high threshold ratio and multiplies it by the maximum gray value in the image to obtain the high threshold TH_h.
The calculation formula of the high threshold TH_h is as follows:
TH_h = max(img) * TH_hRatio (9)
where max(img) is the maximum gray value in the image and TH_hRatio is the high threshold ratio.
The high and low thresholds are generally related by a fixed multiple, so the high threshold TH_h is divided by 3 to obtain the low threshold TH_l:
TH_l = TH_h / 3 (10)
The adaptive high threshold ratio is determined at the zero crossing of the gradient difference values around the maximum-gradient pixel: if the gradient difference value at the maximum-gradient pixel is 0 and the difference values at the previous and next positions have opposite signs, the ratio of those two difference values is taken as the adaptive high threshold ratio; otherwise, the search continues among the remaining pixels for the largest-gradient pixel that satisfies the condition, and the ratio is computed there.
The adaptive high threshold ratio calculation formula is as follows:
TH_hRatio = |diff(i−1) / diff(i+1)| (11)
where the position i satisfies G(i) = G(i)_max, diff(i) = 0, and diff(i−1) · diff(i+1) < 0; diff(i) is the pixel-count difference at position i of the image's gradient-difference histogram, i.e. the number of pixels at gradient magnitude i minus the number at magnitude i−1, and G(i)_max is the pixel with the largest gradient value.
After the adaptive high and low thresholds are obtained, the pixel gradient values in the image are compared with the thresholds one by one: a pixel is set to 1 if its gradient value is above the high threshold and to 0 if below the low threshold; if the gradient value lies between the two thresholds, the neighbouring pixels are examined, and the pixel is set to 1 if any neighbour's gradient value exceeds the high threshold, otherwise to 0. The result is a binary image of 0s and 1s in which 1 marks the contours.
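A sketch of this binarization under equations (9) and (10), assuming the adaptive high threshold ratio of equation (11) has already been computed; the default value 0.5 is an illustrative placeholder:

```python
# Adaptive double-threshold binarization with single-pass 8-neighbour
# promotion of in-between pixels, as described above.
import numpy as np

def double_threshold(mag: np.ndarray, gray: np.ndarray,
                     th_h_ratio: float = 0.5) -> np.ndarray:
    th_h = gray.max() * th_h_ratio            # TH_h = max(img) * TH_hRatio
    th_l = th_h / 3.0                         # TH_l = TH_h / 3
    strong = mag >= th_h
    weak = (mag >= th_l) & ~strong
    out = strong.astype(np.uint8)
    h, w = mag.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # Set an in-between pixel to 1 only if some 8-neighbour
            # exceeds the high threshold.
            if weak[i, j] and strong[i - 1:i + 2, j - 1:j + 2].any():
                out[i, j] = 1
    return out                                # binary image: 1 = contour
```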
Fig. 4 is a flowchart of the operation of the contour detection unit.
This unit takes as input the binary contour image produced by the previous (image segmentation) unit and outputs the number of contours in the image together with the size of each contour.
Contour detection extracts the boundaries of patterns in a digital image. Contour pixels are typically a small fraction of the total number of pixels representing a pattern, so running a feature extraction algorithm on the contour instead of the whole pattern greatly reduces the computational load; and since the contour shares many features with the original pattern, performing feature extraction on the contour is also more efficient.
Boundary tracing is one of the common methods of contour detection, and is used in the case where a region is already divided (binary or labeled), but the boundary is unknown. The boundary tracking implementation method comprises the following steps:
1. inputting a binary image obtained after processing by the image segmentation unit, scanning pixels in the grid from left to right and from top to bottom in sequence, and finding out a first black pixel which is not scanned before.
2. With this black pixel as the starting point, the neighbouring pixels are scanned clockwise. If a new black pixel is scanned, it is set as the new current point and its neighbours are scanned in turn; if no black pixel surrounds the current point, the current starting point is abandoned and the procedure returns to step 1.
3. When the trace returns to the original starting point, the contour is closed: the contour count is increased by 1, and the number of pixels in the contour is computed as its size. Steps 1-3 are repeated until all black pixels in the input image have been traversed.
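A simplified sketch of this tracking loop, assuming contour pixels carry the value 1 in the binary image. Instead of testing explicitly for coincidence with the starting point, the trace stops when no unvisited contour pixel remains adjacent to the current point, which for a closed boundary happens back at the start:

```python
# Boundary tracking: scan for an unvisited contour pixel, then trace
# clockwise through neighbouring contour pixels; the length of each
# traced path is the contour size.
import numpy as np

CLOCKWISE = [(-1, 0), (-1, 1), (0, 1), (1, 1),
             (1, 0), (1, -1), (0, -1), (-1, -1)]

def trace_contours(binary: np.ndarray):
    visited = np.zeros(binary.shape, dtype=bool)
    contours = []
    h, w = binary.shape
    for y in range(h):                        # top-to-bottom scan
        for x in range(w):                    # left-to-right scan
            if binary[y, x] != 1 or visited[y, x]:
                continue
            path = [(y, x)]
            visited[y, x] = True
            cur = (y, x)
            moved = True
            while moved:
                moved = False
                for dy, dx in CLOCKWISE:      # clockwise neighbour scan
                    ny, nx = cur[0] + dy, cur[1] + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and binary[ny, nx] == 1 and not visited[ny, nx]):
                        visited[ny, nx] = True
                        cur = (ny, nx)
                        path.append(cur)
                        moved = True
                        break
            if len(path) > 1:                 # discard isolated pixels
                contours.append(path)         # contour size = len(path)
    return contours
```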
Fig. 5 is a counting unit work flow diagram.
The unit counts the seeds in the image according to the number and the size of the contours output by the previous unit, and the method comprises the following specific steps:
1. Calculate the standard seed size. The contour sizes obtained from the previous unit are first fitted with a normal distribution function, which yields the interval in which the contour sizes concentrate. The normal distribution is the most widely used continuous distribution; the probability density function of the one-dimensional normal distribution is:
f(x) = (1 / (σ · √(2π))) · exp(−(x − μ)² / (2σ²)) (12)
where μ is a position parameter, σ is a scale parameter, and x is a sample value, i.e., a contour size.
The contour samples inside the 95% confidence interval are selected as the actual samples, because points outside the interval correspond to low-probability events that are almost impossible in practice; this suppresses interference from abnormal points, i.e. contours that are too small (noise) or too large (large connected components). Averaging the sizes of the actual samples then gives the standard size of a single seed.
The calculation formula of the standard seed size is as follows:
avg_size = (1/n) · Σ_{i=1}^{n} spl_size(i) (13)
where avg_size is the standard seed size and the spl_size(i) are the sample contour sizes that fall inside the 95% confidence interval of the fitted normal distribution.
2. Calculate the number of seeds. The size of each contour is divided by the standard seed size and the result is rounded up, giving the number of seeds contained in that contour; the per-contour counts are accumulated to obtain the total number of seeds in the image, which is displayed on the segmented image.
The formula for calculating the number of seeds in the n contours is as follows:
num = Σ_{i=1}^{n} ⌈size_i / avg_size⌉ (14)
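A sketch of the counting unit under formulas (12)-(14), assuming SciPy's stats.norm for the fit. As a worked example of formula (14): with a standard seed size of 400 pixels, a contour of 1190 pixels counts as ⌈1190/400⌉ = 3 overlapped seeds.

```python
# Fit a normal distribution to the contour sizes, take the mean of the
# samples inside the 95% confidence interval as the standard seed size,
# then divide each contour size by it and round up.
import numpy as np
from scipy import stats

def count_from_contours(sizes: np.ndarray) -> int:
    mu, sigma = stats.norm.fit(sizes)                  # formula (12)
    lo, hi = stats.norm.interval(0.95, loc=mu, scale=sigma)
    inliers = sizes[(sizes >= lo) & (sizes <= hi)]     # drop noise / blobs
    standard = inliers.mean()                          # formula (13)
    return int(np.ceil(sizes / standard).sum())        # formula (14)
```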
In general, the method detects seed images with a machine-vision approach based on adaptive-threshold edge segmentation. Through noise reduction and a reasonable setting of the double thresholds, the adaptive-threshold edge detection and boundary tracking algorithms preserve the original contour information of the image while suppressing interference from other noise; on this basis, the number of seeds contained in each contour is computed by comparing the contour size with the standard seed size. Compared with target detection by deep learning, the method requires no time-consuming manual labeling of training images or model training, runs faster, and better satisfies the need for immediate computation; more importantly, because deep learning requires large-scale labeled samples for model training and currently achieves low accuracy on small-target detection, the present method counts the seeds in an image both faster and more accurately. Compared with other counting approaches such as manual counting, photoelectric-tube counting, and weighing, the method obtains the seed count from images alone, saving considerable cost while fully meeting practical accuracy requirements; it is more convenient and is therefore easy to popularize among agricultural practitioners and researchers.

Claims (2)

1. A crop seed automatic counting method based on machine vision comprises the following steps: step 1, preprocessing an image; step 2, image segmentation; step 3, detecting the contour of the seeds, and step 4, automatically counting the seeds; the method is characterized in that: step 1, carrying out gray level conversion and Gaussian blur noise reduction processing on an input image; step 2, processing the image by using an improved self-adaptive threshold edge detection operator to obtain a binary image; step 3, extracting a segmentation contour through a boundary tracking algorithm, and identifying a part where a plurality of seed particles are overlapped; and 4, fitting the profile size distribution by using a normal distribution function, and calculating the seed number by taking the average size of the profile samples in a preset confidence interval as a standard seed size.
2. The machine-vision-based seed counting method of claim 1, wherein: step 1 image preprocessing comprises:
(1) Converting an input BGR channel image into a gray image;
(2) Eliminating image Gaussian noise by using two-dimensional Gaussian filtering, and convolving all 5 × 5 pixel matrixes of the gray image with 5 × 5 Gaussian kernels to obtain the gray image after noise elimination, wherein the two-dimensional Gaussian kernels are expressed as follows:
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
where σ is the standard deviation of the distribution and x and y are the position indices.
CN202210558202.2A 2022-05-21 2022-05-21 Crop seed automatic counting method based on machine vision Pending CN115187788A (en)

Priority Applications (1)

Application Number: CN202210558202.2A
Priority Date: 2022-05-21
Filing Date: 2022-05-21
Title: Crop seed automatic counting method based on machine vision


Publications (1)

Publication Number: CN115187788A
Publication Date: 2022-10-14

Family

ID=83514251

Family Applications (1)

Application Number: CN202210558202.2A
Status: Pending
Priority Date: 2022-05-21
Filing Date: 2022-05-21
Title: Crop seed automatic counting method based on machine vision

Country Status (1)

Country Link
CN (1) CN115187788A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117197072A (en) * 2023-09-07 2023-12-08 石家庄铁道大学 Automatic object counting method based on machine vision
CN117197072B (en) * 2023-09-07 2024-04-05 石家庄铁道大学 Automatic object counting method based on machine vision

Similar Documents

Publication Publication Date Title
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
CN107507173B (en) No-reference definition evaluation method and system for full-slice image
CN111340824B (en) Image feature segmentation method based on data mining
CN109035273B (en) Image signal fast segmentation method of immunochromatography test paper card
CN112750106B (en) Nuclear staining cell counting method based on incomplete marker deep learning, computer equipment and storage medium
CN114118144A (en) Anti-interference accurate aerial remote sensing image shadow detection method
CN112614062B (en) Colony counting method, colony counting device and computer storage medium
WO2021109697A1 (en) Character segmentation method and apparatus, and computer-readable storage medium
CN110309806B (en) Gesture recognition system and method based on video image processing
CN114549981A (en) Intelligent inspection pointer type instrument recognition and reading method based on deep learning
CN111882561A (en) Cancer cell identification and diagnosis system
CN114882040B (en) Sewage treatment detection method based on template matching
CN114037691A (en) Carbon fiber plate crack detection method based on image processing
Srinivas et al. Remote sensing image segmentation using OTSU algorithm
CN116524196B (en) Intelligent power transmission line detection system based on image recognition technology
CN113344810A (en) Image enhancement method based on dynamic data distribution
CN112017109A (en) Online ferrographic video image bubble elimination method
CN115187788A (en) Crop seed automatic counting method based on machine vision
CN113850792A (en) Cell classification counting method and system based on computer vision
CN111815542B (en) Tree annual ring image medulla positioning and annual ring measuring method
CN117496532A (en) Intelligent recognition tool based on 0CR
CN113052234A (en) Jade classification method based on image features and deep learning technology
Zhao et al. An effective binarization method for disturbed camera-captured document images
CN112184696A (en) Method and system for counting cell nucleus and cell organelle and calculating area of cell nucleus and cell organelle
Tabatabaei et al. A novel method for binarization of badly illuminated document images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination