CN109544562B - Automatic identification counting algorithm for end faces of reinforcing steel bars based on images - Google Patents

Automatic identification counting algorithm for end faces of reinforcing steel bars based on images

Info

Publication number
CN109544562B
CN109544562B · CN201811330764.1A
Authority
CN
China
Prior art keywords
image
face
cloud
template
pixel
Prior art date
Legal status
Active
Application number
CN201811330764.1A
Other languages
Chinese (zh)
Other versions
CN109544562A (en)
Inventor
孙光民
孙凡
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201811330764.1A priority Critical patent/CN109544562B/en
Publication of CN109544562A publication Critical patent/CN109544562A/en
Application granted granted Critical
Publication of CN109544562B publication Critical patent/CN109544562B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/187 Segmentation involving region growing, region merging, or connected component labelling
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30242 Counting objects in image

Abstract

The invention relates to an image-based automatic identification and counting algorithm for steel bar end faces. First, the end-face region to be processed is extracted from the image: the image is preprocessed (scaling and Gaussian filtering), pixel colors are classified with a cloud model, a preliminary segmentation is obtained from the classification result, a closing operation is applied to the many segmented connected domains, parameters including the area ratio, center of gravity, and concentration are extracted and linearly weighted into a reference value, and the end-face region is selected from it. The end-face region is then separated and counted bar by bar: the end-face part of the image is preprocessed (graying, histogram equalization, and adaptive-threshold binarization) to obtain a binary image of the end-face region; the estimated radius of a single end face is obtained by granulometry, a template is constructed from a formula, and template matching yields the center position of each end face. Finally, limiting conditions improve the robustness of the algorithm. The average accuracy of the invention reaches 97%.

Description

Automatic identification counting algorithm for end faces of reinforcing steel bars based on images
Technical Field
The invention provides an automatic identification and counting algorithm for steel bar end-face images. It draws on traditional image processing, cloud-model-based color image segmentation, granulometry, and template matching to achieve high counting accuracy, and belongs to the field of image processing.
Background
In actual production, irregular round-like objects are ubiquitous: tubular workpieces, reinforcing steel bars, and other bars are all round-like, and recognition research on such objects is generally called circle-like recognition. Circle-like recognition is an important topic in digital image processing, and one of its important applications is bar counting in steel production.
Bar production is an extremely important link in the industrial output of steel plants; round steel and deformed steel, as finished products of the plants, are widely used in the construction industry. For example, the main engineering works of the Three Gorges hydro junction used a great quantity of screw-thread (deformed) steel. Traditional packaging for bar production is by weight per ton, but building designs specify bars by count, and customers, especially on the international market, expect bars to be packaged in fixed numbers.
Automatic bar counting has long been a difficult problem for bar producers in China. At the present stage, Chinese steel enterprises mainly rely on two modes, photoelectric tubes and manual counting, for on-line bar counting. Photoelectric-tube counting is affected by tube aging and by overlapping bars, so its precision is hard to guarantee; manual counting is greatly affected by human factors and cannot provide reliable long-term counts.
In recent years, charge-coupled device (CCD) technology has developed rapidly, and attempts have been made to solve automatic bar counting with image processing methods. However, image-based automatic counting faces the following problems: (1) boundary lines between packed bars are indistinct, so the images show severe adhesion; (2) the bar end faces are uneven, forming holes, and the illumination is non-uniform; (3) bending and irregular shapes introduced during shearing affect the count.
Many scholars have proposed solutions to these problems. Distance-transform methods determine the bar centers to count automatically, but cannot handle bar deformation and deep adhesion well and easily misjudge. Template-covering methods struggle because shearing deforms the bars, making template matching difficult and accurate counts hard to obtain. Edge-detection methods obtain bar center information for counting, but edge information is strongly affected by external conditions and is hard to extract reliably.
In summary, accurate counting of the finished products of bar factories has very important practical significance.
Disclosure of Invention
The purpose of the present invention is to solve the above technical drawbacks, to reduce the errors of the traditional bar end face counting methods and to reduce manual interventions.
In order to achieve the purpose, the invention provides an image-based steel bar end face identification and counting algorithm, which comprises the following steps:
step 1, shooting an image of an end face of a steel bar containing a background, and extracting an end face area;
step 1.1, preprocessing the shot image;
step 1.2, segmenting the end face image by utilizing a cloud model;
step 1.3, automatically selecting an end face area;
step 2, performing per-bar (fixed-count) separation and counting on the end-face region image;
step 2.1, preprocessing the end face image part;
step 2.2, measuring granulometry, constructing a template, performing template matching, and marking the end faces.
Advantageous effects
The method has two advantages. First, the algorithm flow is fully automatic, requiring no or minimal manual intervention, so it adapts to more conditions and is more universal. Second, the algorithm is robust: errors are reduced as much as possible during automatic operation to achieve high accuracy.
Drawings
The above advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of an automatic identification and counting algorithm for an end face of a steel bar based on an image according to an embodiment of the present invention;
FIG. 2 is a raw diagram of one embodiment of the present invention.
FIG. 3 is a diagram illustrating the effect of step 1 according to an embodiment of the present invention.
Fig. 4 is a diagram illustrating the final effect of step 2 according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
As shown in fig. 1, the image-based automatic identification and counting algorithm processes an existing end-face image automatically. The specific implementation steps are as follows:
Step 1, shoot an image of the steel bar end faces including the background and extract the end-face area to be counted; fig. 2 is taken as the sample for demonstration.
Step 1.1, preprocess the captured image: adjust it to a uniform size of 560 × 420 by scaling and cropping, and apply Gaussian filtering with a 5 × 5 kernel of standard deviation 0.5 to remove noise.
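The preprocessing step above can be sketched as follows. This is a minimal NumPy/SciPy stand-in, assuming a grayscale input in [0, 1]; `zoom` and `gaussian_filter` approximate the scaling/cropping and the 5 × 5, σ = 0.5 Gaussian kernel described in the text (a color image would be filtered per channel).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def preprocess(img, target=(420, 560), sigma=0.5):
    """Resize to a uniform size, then denoise with a small Gaussian."""
    h, w = img.shape[:2]
    # Bilinear rescale to the target height x width (560 x 420 in the patent).
    scaled = zoom(img, (target[0] / h, target[1] / w), order=1)
    # Gaussian smoothing; sigma=0.5 matches the kernel described in the text.
    return gaussian_filter(scaled, sigma=sigma)

# Demonstrate on a synthetic grayscale frame twice the target size.
frame = np.random.default_rng(0).random((840, 1120))
out = preprocess(frame)
```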
And 1.2, segmenting the end face image by using a cloud model.
First, color space conversion and color information quantization are performed on an image.
The image is converted from the RGB color space to the HSV color space. Color quantization techniques are divided into two categories, uniform quantization and non-uniform quantization, where non-uniform quantization is selected.
The HSV color space is non-uniformly quantized into 256 levels: hue (H) into 16 levels, and saturation (S) and value (V) into 4 levels each. A one-dimensional feature vector G is constructed:

G = H·δ_S·δ_V + S·δ_V + V (1)

where δ_S and δ_V are the numbers of quantization levels of the S and V components respectively; here δ_S = 4 and δ_V = 4, so:

G = 16H + 4S + V (2)

Thus the three components H, S, V are mapped onto a one-dimensional vector. Counting the frequency of each value of G gives the distribution histogram H(g) of the feature vector G, with g ∈ [0, 255]. H(g) is normalized so that the frequencies lie in [0, 1], and values of H(g) smaller than 0.05 are set to 0, yielding the frequency distribution function f(g).
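A sketch of eq. (2) and the frequency distribution f(g) follows. The bin edges (h in [0, 360), s and v in [0, 1], uniform bins) are assumptions for illustration; the patent uses a non-uniform quantization whose exact edges it does not give in the text.

```python
import numpy as np

def quantize_hsv(h, s, v):
    """Eq. (2): G = 16*H + 4*S + V with H in 16 levels, S and V in 4 each."""
    H = np.minimum((h / 360.0 * 16).astype(int), 15)   # hue bin, 0..15
    S = np.minimum((s * 4).astype(int), 3)             # saturation bin, 0..3
    V = np.minimum((v * 4).astype(int), 3)             # value bin, 0..3
    return 16 * H + 4 * S + V                          # G in [0, 255]

def frequency_distribution(G, cutoff=0.05):
    """Histogram of G normalised to [0, 1], with rare values set to 0."""
    hist = np.bincount(np.asarray(G).ravel(), minlength=256).astype(float)
    f = hist / hist.max()
    f[f < cutoff] = 0.0
    return f
```

Extreme inputs map to the ends of the range: pure black lands in bin 0 and a fully saturated, bright, high-hue color in bin 255.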
The frequency distribution function f(g) is decomposed into several normal clouds; each normal cloud C(Ex, En, He) has three digital features, the expectation Ex, the entropy En, and the hyper-entropy He, and each cloud represents one color. The mathematical expression of this process is:

f(g) → Σ_{i=1}^{n₁} C_i(Ex_i, En_i, He_i) (3)

where i is the cloud index and n₁ is the number of clouds obtained by the transformation, which is also the number of color classes obtained by the division.
The expectation Ex, entropy En and hyper-entropy He of each cloud are determined as follows:
First, the peak positions g_i in f(g) are extracted as the expectation Ex_i of each normal cloud.
Second, the nearest valley positions g₁ and g₂ on either side of Ex_i in f(g) are found, and the difference Δg = |g₁ − g₂| is computed; the entropy of the cloud is then En_i = Δg/6.
Third, the value of f(g) at position Ex_i is f(g_i), from which the hyper-entropy He_i of the cloud is computed [the formula is rendered as an image in the original].
At this point the parameters of every normal cloud are determined, each cloud representing one color. Finally, the color feature value g_xy of each pixel in the image is used to compute the pixel's membership μ_i in each normal cloud:

μ_i = exp( −(g_xy − Ex_i)² / (2·(En′_i)²) ) (4)

where En′_i is a random number generated from a normal distribution with mean En_i and standard deviation He_i, and i ∈ [1, n₁].
In this way each pixel obtains a membership degree for every normal cloud, and the cloud with the highest membership is selected as the pixel's class, giving classification results for all pixels in the image. Since the classification results of neighboring pixels may be the same, many connected domains arise, which yields a segmentation of the image.
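The membership computation of eq. (4) and the winner-take-all classification can be sketched as below, assuming the standard X-conditional normal cloud generator (En′ drawn from N(En, He²)); the cloud parameters in the demo are made up for illustration.

```python
import numpy as np

def cloud_membership(g, Ex, En, He, rng):
    """Eq. (4): mu = exp(-(g - Ex)^2 / (2 * En'^2)), En' ~ N(En, He^2)."""
    En_prime = rng.normal(En, He)
    return np.exp(-(g - Ex) ** 2 / (2.0 * En_prime ** 2))

def classify_pixel(g, clouds, rng=None):
    """Assign colour feature value g to the cloud with highest membership."""
    if rng is None:
        rng = np.random.default_rng(0)
    memberships = [cloud_membership(g, Ex, En, He, rng) for Ex, En, He in clouds]
    return int(np.argmax(memberships))

# Two well-separated colour clouds: g = 12 should fall in the first one.
clouds = [(10.0, 3.0, 0.2), (200.0, 3.0, 0.2)]
label = classify_pixel(12.0, clouds)
```

Because En′ is random, repeated evaluations of μ_i jitter slightly; for well-separated clouds the argmax is stable.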
Step 1.3, automatically select the end-face area.
After the segmentation of step 1.2, a number of connected domains are obtained. Connected domains with area smaller than 2000 pixels are removed, and a closing operation with a circular kernel of radius 8 is applied to the pixels of each class, yielding n₂ candidate connected domains.
Three parameters are computed for each connected domain: the area ratio SW, the center-of-gravity measure MW, and the concentration CL. Their formulas, eqs. (5)-(7), are rendered as images in the original; the area ratio is SW(j) = s/(H·W). In these formulas, j is the connected-domain index, j ∈ [1, n₂]; p indexes the pixels in domain j, p ∈ [1, s]; x_p and y_p are the abscissa and ordinate of pixel p; s is the area (number of pixels) of domain j; H and W are the height and width of the image; Mx_j is the sum of the abscissas of all pixels in domain j, and My_j is the sum of the ordinates of all pixels in domain j.
The three parameters are weighted and summed to obtain a reference value F (j):
F(j)=SW(j)+1.7*MW(j)+CL(j) (8)
The connected domain j that maximizes F is retained; the set of pixels in this domain is the image of the end-face area to be counted, and the color of the pixels outside the domain is set to black.
The effect of this step is shown in fig. 3.
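The region-selection step can be sketched as below. SW = s/(H·W) follows the text; because the patent's formulas for MW and CL survive only as images, the MW and CL used here are assumed stand-ins (MW rewards a center of gravity near the image center, CL rewards a component that fills its bounding box), not the patent's exact definitions.

```python
import numpy as np
from scipy.ndimage import label

def select_end_face_region(component_labels, n_components, H, W):
    """Score each connected domain with F(j) = SW + 1.7*MW + CL (eq. 8)."""
    best_j, best_F = 0, -np.inf
    for j in range(1, n_components + 1):
        ys, xs = np.nonzero(component_labels == j)
        s = xs.size
        SW = s / float(H * W)                                  # area ratio
        cx, cy = xs.mean(), ys.mean()                          # centre of gravity
        # Assumed MW: 1 at the image centre, 0 at the corners.
        MW = 1.0 - np.hypot(cx - W / 2, cy - H / 2) / np.hypot(W / 2, H / 2)
        # Assumed CL: fraction of the bounding box that the component fills.
        CL = s / float((np.ptp(xs) + 1) * (np.ptp(ys) + 1))
        F = SW + 1.7 * MW + CL
        if F > best_F:
            best_j, best_F = j, F
    return best_j

# A large central blob should beat a small corner blob.
mask = np.zeros((100, 100), dtype=int)
mask[0:5, 0:5] = 1        # corner component
mask[40:60, 40:60] = 1    # central component
labels, n = label(mask)
chosen = select_end_face_region(labels, n, 100, 100)
```

The 1.7 weight on MW comes straight from eq. (8); it biases the choice toward the domain nearest the frame center, which is where the bundle of end faces normally sits.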
Step 2, perform per-bar separation and counting on the end-face region image.
Step 2.1, apply graying, histogram equalization and adaptive-threshold binarization to the end-face image part to obtain a binary image I of the end-face region, in which the end faces are white and everything else is black.
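The equalization and binarization can be sketched as below. The block size and offset of the adaptive threshold are assumed values (the patent does not state them), and the mean-based threshold is one common choice of adaptive scheme.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def equalize(gray):
    """Histogram equalisation for a uint8 image via the cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]
    return (cdf[gray] * 255).astype(np.uint8)

def adaptive_binarize(gray, block=31, offset=10):
    """Mean-based adaptive threshold: a pixel is foreground (white end face)
    if it exceeds the local mean by `offset`."""
    local_mean = uniform_filter(gray.astype(float), size=block)
    return (gray > local_mean + offset).astype(np.uint8)

# Bright disc (an end face) on a dark background.
yy, xx = np.mgrid[0:100, 0:100]
gray = np.where((xx - 50) ** 2 + (yy - 50) ** 2 <= 8 ** 2, 230, 50).astype(np.uint8)
eq = equalize(gray)
binary = adaptive_binarize(gray)
```

A local (rather than global) threshold is what makes the step tolerant of the uneven illumination called out in the Background section.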
Step 2.2, estimate the end-face radius by granulometry, construct a template, and perform template matching to obtain the end-face center positions.
For the granulometry, the image is opened with structuring elements of increasing size, and after each opening the area difference a(k) from the previous opening is computed:

a(k) = Σ_{x=1}^{W} Σ_{y=1}^{H} [ (I ∘ V_{k−1})(x, y) − (I ∘ V_k)(x, y) ] (9)

where x and y are the abscissa and ordinate of each pixel, I ∘ V_k denotes the opening of image I by a circular structuring element V_k of radius k, and H and W are the height and width of the image.
And (c) calculating the maximum value of a (k), wherein the corresponding k value is the estimated radius r of the end face.
Since the end face radii in the actual image are not exactly the same and have a certain error, the template is constructed by setting the radius fluctuation range to Δ r, where Δ r is set to 0.5 × r.
The x-direction template is given by eq. (10) and the y-direction template by eq. (11); both formulas are rendered as images in the original. In them, d is the Euclidean distance from a pixel (x, y) in the template to the template center, and (x₀, y₀) is the center coordinate of the template.
After construction, the binary image I from step 2.1 is filtered with Sobel operators in the x and y directions respectively, and template matching is performed with the constructed templates to obtain matching result maps in the x and y directions. The two result maps are normalized. For each pixel, if the average of the two normalized matching results exceeds 0.2, the pixel is marked as a center; the number of marked centers is the number of end faces.
The effect of this step is shown in fig. 4.
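Step 2.2 can be sketched end to end as below. Because eqs. (10)-(11) survive only as images, the templates used here are an assumed form: unit inward radial gradients on the annulus r − Δr ≤ d ≤ r + Δr (Δr = 0.5·r as in the text), matching the gradient direction of a bright disc on a dark background. The 0.2 threshold is the patent's.

```python
import numpy as np
from scipy.ndimage import sobel
from scipy.signal import fftconvolve

def ring_templates(r, dr):
    """Assumed x/y templates: inward radial gradient on r-dr <= d <= r+dr."""
    R = int(np.ceil(r + dr))
    y, x = np.mgrid[-R:R + 1, -R:R + 1].astype(float)
    d = np.hypot(x, y)
    ring = (np.abs(d - r) <= dr) & (d > 0)
    tx = np.where(ring, -x / np.maximum(d, 1e-9), 0.0)
    ty = np.where(ring, -y / np.maximum(d, 1e-9), 0.0)
    return tx, ty

def mark_centres(binary, r, thresh=0.2):
    """Sobel-filter the image, correlate with both templates, normalise,
    and mark pixels whose mean response exceeds thresh."""
    tx, ty = ring_templates(r, 0.5 * r)
    gx = sobel(binary.astype(float), axis=1)       # x-direction gradient
    gy = sobel(binary.astype(float), axis=0)       # y-direction gradient
    # Correlation == convolution with the template flipped in both axes.
    mx = fftconvolve(gx, tx[::-1, ::-1], mode='same')
    my = fftconvolve(gy, ty[::-1, ::-1], mode='same')
    score = (mx / mx.max() + my / my.max()) / 2.0
    return score > thresh

# One bright disc of radius 6 centred at (30, 30).
yy, xx = np.mgrid[0:60, 0:60]
disc = ((xx - 30) ** 2 + (yy - 30) ** 2 <= 36).astype(float)
centres = mark_centres(disc, 6)
```

In a full pipeline the marked-center mask would be reduced to one point per blob (e.g. by connected-component labelling) before counting.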

Claims (1)

1. The automatic identification and counting algorithm for the end faces of the steel bars based on the images is characterized by comprising the following steps of:
step 1, shooting an image of an end face of a steel bar containing a background, and extracting an end face area needing to be counted;
step 1.1, preprocessing the shot image: adjusting it to a uniform size of 560 × 420 by scaling and cropping, and performing Gaussian filtering with a 5 × 5 kernel of standard deviation 0.5 to remove noise;
step 1.2, segmenting the end face image by utilizing a cloud model;
firstly, performing color space conversion and color information quantization on an image;
converting the image from an RGB color space to an HSV color space; the color quantization technology is divided into a uniform quantization and a non-uniform quantization, wherein the non-uniform quantization is selected;
the HSV color space is non-uniformly quantized into 256 levels, hue H into 16 levels and saturation S and value V into 4 levels each, and a one-dimensional feature vector G is constructed:

G = H·δ_S·δ_V + S·δ_V + V (1)

wherein δ_S and δ_V are the numbers of quantization levels of the S and V components respectively, here δ_S = 4 and δ_V = 4, so:

G = 16H + 4S + V (2)

thus the three components H, S, V are mapped onto a one-dimensional vector; the frequency of each value of G is counted to obtain the distribution histogram H(g) of the feature vector G, with g ∈ [0, 255]; H(g) is normalized so that the frequencies lie in [0, 1], and values of H(g) smaller than 0.05 are set to 0, yielding the frequency distribution function f(g);
the frequency distribution function f(g) is decomposed into several normal clouds, each normal cloud C(Ex, En, He) having three digital features, the expectation Ex, the entropy En and the hyper-entropy He, each cloud representing one color; the mathematical expression of this process is:

f(g) → Σ_{i=1}^{n₁} C_i(Ex_i, En_i, He_i) (3)

wherein i is the cloud index and n₁ is the number of clouds obtained by the transformation, which is also the number of color classes;
the expectation Ex, entropy En and hyper-entropy He of each cloud are determined as follows:
first, the peak positions g_i in f(g) are extracted as the expectation Ex_i of each normal cloud;
second, the nearest valley positions g₁ and g₂ on either side of Ex_i in f(g) are found and the difference Δg = |g₁ − g₂| is computed, the entropy of the cloud being En_i = Δg/6;
third, the value of f(g) at position Ex_i is f(g_i), from which the hyper-entropy He_i of the cloud is computed [the formula is rendered as an image in the original];
so far the parameters of every normal cloud are determined, each cloud representing one color; finally the color feature value g_xy of each pixel in the image is used to compute the pixel's membership μ_i in each normal cloud:

μ_i = exp( −(g_xy − Ex_i)² / (2·(En′_i)²) ) (4)

wherein En′_i is a random number generated from a normal distribution with mean En_i and standard deviation He_i, and i ∈ [1, n₁];
In this way, each pixel point calculates a membership degree to all normal clouds, and the cloud with the highest membership degree is selected as the classification of the current pixel, so that the classification results of all pixel points in the whole image are obtained; since the classification results of the adjacent pixels may be the same, many connected domains are generated, thereby forming the segmentation of the image;
step 1.3, automatically selecting an end face area;
after the segmentation of step 1.2, a number of connected domains are obtained; connected domains with area smaller than 2000 pixels are removed, and a closing operation with a circular kernel of radius 8 is applied to the pixels of each class, yielding n₂ candidate connected domains;
three parameters are calculated for each connected domain, namely the area ratio SW, the center-of-gravity measure MW and the concentration CL; their formulas, eqs. (5)-(7), are rendered as images in the original, the area ratio being SW(j) = s/(H·W); wherein j is the connected-domain index, j ∈ [1, n₂]; p indexes the pixels in domain j, p ∈ [1, s]; x_p and y_p are the abscissa and ordinate of pixel p; s is the number of pixels, i.e. the area, of domain j; H and W are the height and width of the image; Mx_j is the sum of the abscissas of all pixels in domain j and My_j is the sum of the ordinates of all pixels in domain j;
the three parameters are weighted and summed to obtain a reference value F (j):
F(j)=SW(j)+1.7*MW(j)+CL(j) (8)
the connected domain j for which F is maximum is the domain to be retained; the set of pixels in this domain is the image of the end-face area to be counted, and the color of the pixels outside the domain is set to black;
step 2, performing per-bar (fixed-count) separation and counting on the end-face region image;
step 2.1, carrying out graying, histogram equalization and adaptive threshold binarization on the end face image part to obtain an end face region binary image I, wherein the end face is white, and the part which is not the end face is black;
step 2.2, obtaining the estimated radius of the end face by using a granularity measurement method, constructing a template, and performing template matching to obtain the central position of the end face;
when the granulometry is measured, the image is opened with structuring elements of increasing size, and after each opening the area difference a(k) from the previous opening is calculated:

a(k) = Σ_{x=1}^{W} Σ_{y=1}^{H} [ (I ∘ V_{k−1})(x, y) − (I ∘ V_k)(x, y) ] (9)

wherein x and y are the abscissa and ordinate of each pixel, I ∘ V_k denotes the opening of image I by a circular structuring element V_k of radius k, and H and W are the height and width of the image;
calculating the maximum value of a (k), wherein the corresponding k value is the estimated radius r of the end face;
since the end-face radii in the actual image are not exactly the same and carry a certain error, a radius fluctuation range Δr is set, with Δr = 0.5 × r, and the template is then constructed;
the x-direction template is given by eq. (10) and the y-direction template by eq. (11), both formulas being rendered as images in the original; wherein d is the Euclidean distance from a pixel (x, y) in the template to the template center, and (x₀, y₀) is the center coordinate of the template;
after construction, the binary image I obtained in step 2.1 is filtered with Sobel operators in the x and y directions respectively, and template matching is performed with the constructed templates to obtain matching result maps in the x and y directions; the two result maps are normalized; for each pixel, if the average of the two normalized matching results exceeds 0.2, the pixel is marked as a center, and the number of marked centers is the number of end faces.
CN201811330764.1A 2018-11-09 2018-11-09 Automatic identification counting algorithm for end faces of reinforcing steel bars based on images Active CN109544562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811330764.1A CN109544562B (en) 2018-11-09 2018-11-09 Automatic identification counting algorithm for end faces of reinforcing steel bars based on images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811330764.1A CN109544562B (en) 2018-11-09 2018-11-09 Automatic identification counting algorithm for end faces of reinforcing steel bars based on images

Publications (2)

Publication Number Publication Date
CN109544562A CN109544562A (en) 2019-03-29
CN109544562B true CN109544562B (en) 2022-03-22

Family

ID=65846610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811330764.1A Active CN109544562B (en) 2018-11-09 2018-11-09 Automatic identification counting algorithm for end faces of reinforcing steel bars based on images

Country Status (1)

Country Link
CN (1) CN109544562B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815950B (en) * 2018-12-28 2020-08-21 汕头大学 Reinforcing steel bar end face identification method based on deep convolutional neural network
CN110288634A (en) * 2019-06-05 2019-09-27 成都启泰智联信息科技有限公司 A kind of method for tracking target based on Modified particle swarm optimization algorithm
CN110766690B (en) * 2019-11-07 2020-08-14 四川农业大学 Wheat ear detection and counting method based on deep learning point supervision thought
CN111126415B (en) * 2019-12-12 2022-07-08 创新奇智(合肥)科技有限公司 Tunnel steel bar detection counting system and method based on radar detection image
CN112037198B (en) * 2020-08-31 2023-04-07 中冶赛迪信息技术(重庆)有限公司 Hot-rolled bar fixed support separation detection method, system, medium and terminal
CN113674200A (en) * 2021-07-08 2021-11-19 浙江大华技术股份有限公司 Method and device for counting articles on production line and computer storage medium
CN115100196B (en) * 2022-08-24 2022-11-18 聊城市洛溪信息科技有限公司 Method for evaluating derusting effect of stamping part based on image segmentation
CN116385435B (en) * 2023-06-02 2023-09-26 济宁市健达医疗器械科技有限公司 Pharmaceutical capsule counting method based on image segmentation
CN116542968A (en) * 2023-06-29 2023-08-04 中国铁路设计集团有限公司 Intelligent counting method for steel bars based on template matching

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102073850A (en) * 2010-12-24 2011-05-25 北京理工大学 Image processing based quasi-cylinder counting statistic method
CN104866857A (en) * 2015-05-26 2015-08-26 大连海事大学 Bar material counting method
CN105976390A (en) * 2016-05-25 2016-09-28 南京信息职业技术学院 Steel tube counting method by combining support vector machine threshold statistics and spot detection
CN107767388A (en) * 2017-11-01 2018-03-06 重庆邮电大学 A kind of image partition method of combination cloud model and level set

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US20130204743A1 (en) * 2012-02-07 2013-08-08 Zencolor Corporation Mobile shopping tools utilizing color-based identification, searching and matching enhancement of supply chain and inventory management systems
US10181188B2 (en) * 2016-02-19 2019-01-15 International Business Machines Corporation Structure-preserving composite model for skin lesion segmentation


Non-Patent Citations (3)

Title
"A convolution approach to the circle Hough transform for arbitrary radius"; Christopher Hollitt; Machine Vision and Applications; 2012-04-05; pp. 683-694 *
"A counting method for bundled steel bars based on image processing" (一种基于图像处理的打捆钢筋计数方法); 李篪; Journal of Shenyang University of Technology; September 2016; vol. 38, no. 5, pp. 551-554 *
"Color image segmentation method based on rough set and cloud model" (基于粗糙集和云模型的彩色图像分割方法); 姚红 et al.; Journal of Chinese Computer Systems; November 2013; vol. 34, no. 11, pp. 2615-2620 *

Also Published As

Publication number Publication date
CN109544562A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN109544562B (en) Automatic identification counting algorithm for end faces of reinforcing steel bars based on images
CN106404793B (en) Bearing sealing element defect detection method based on vision
CN116205919B (en) Hardware part production quality detection method and system based on artificial intelligence
CN107038416B (en) Pedestrian detection method based on binary image improved HOG characteristics
CN109801283B (en) Composite insulator hydrophobicity grade determination method based on water drop offset distance
CN115331119B (en) Solid waste identification method
CN114882026B (en) Sensor shell defect detection method based on artificial intelligence
CN115131359B (en) Method for detecting pitting defects on surface of metal workpiece
CN108829711B (en) Image retrieval method based on multi-feature fusion
CN110929713A (en) Steel seal character recognition method based on BP neural network
CN116934740B (en) Plastic mold surface defect analysis and detection method based on image processing
CN116309577B (en) Intelligent detection method and system for high-strength conveyor belt materials
CN115880699B (en) Food packaging bag detection method and system
CN110473174A (en) A method of pencil exact number is calculated based on image
CN111445512A (en) Hub parameter feature extraction method in complex production line background
CN112435272A (en) High-voltage transmission line connected domain removing method based on image contour analysis
CN115601379A (en) Surface crack accurate detection technology based on digital image processing
CN110728286B (en) Abrasive belt grinding material removal rate identification method based on spark image
CN109146853B (en) Bridge pitted surface defect detection method based on HIS different optical characteristics
CN117253024B (en) Industrial salt quality inspection control method and system based on machine vision
CN112102189B (en) Line structure light bar center line extraction method
CN114155226A (en) Micro defect edge calculation method
CN116862871A (en) Wood counting method based on mixed characteristics
CN109448030B (en) Method for extracting change area
CN106408029A (en) Image texture classification method based on structural difference histogram

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant