CN111507956B - Nanowire quantity statistical method and system - Google Patents

Nanowire quantity statistical method and system

Info

Publication number
CN111507956B
Authority
CN
China
Prior art keywords
image
nanowire
nodes
points
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010294470.9A
Other languages
Chinese (zh)
Other versions
CN111507956A (en)
Inventor
李政林
王妙妙
蒋春利
王志
关磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dragon Totem Technology Hefei Co ltd
Hefei Jiuzhou Longteng Scientific And Technological Achievement Transformation Co ltd
Original Assignee
Guangxi University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi University of Science and Technology filed Critical Guangxi University of Science and Technology
Priority to CN202010294470.9A priority Critical patent/CN111507956B/en
Publication of CN111507956A publication Critical patent/CN111507956A/en
Application granted granted Critical
Publication of CN111507956B publication Critical patent/CN111507956B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30242 Counting objects in image
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The invention provides a method and a system for counting the number of nanowires. The method comprises: acquiring a nanowire image to be counted; preprocessing the nanowire image to obtain a preprocessed image; thinning the preprocessed image and extracting the nanowire skeleton to obtain a thinned nanowire image; counting the number of end points and the number of isolated points in the thinned nanowire image; identifying nodes in the thinned nanowire image; merging any two nodes whose connectivity distance is smaller than a preset threshold, and marking all nodes produced by the merging as merged nodes; identifying, among the merged nodes, those whose neighborhoods contain an odd number of foreground points, to obtain the number of odd-connection merged nodes; and calculating the number of nanowires from the number of end points, the number of isolated points, and the number of odd-connection merged nodes. The invention avoids generating a large number of redundant short wires and improves the accuracy of the nanowire count.

Description

Nanowire quantity statistical method and system
Technical Field
The invention relates to the field of image processing, in particular to a nanowire quantity statistical method and a nanowire quantity statistical system.
Background
Since Iijima first reported one-dimensional nanomaterials such as carbon nanotubes in 1991, one-dimensional nanomaterials have rapidly become a focus of attention for experts and scholars, and many new nanowire materials have since been developed. In many applications, the size and density of the nanowires are important parameters that affect performance. However, because nanowires are typically dense and highly overlapping, it is difficult to count their actual number accurately. As a result, data such as nanowire density are hard to obtain, which hinders analysis of the relationship between density and application performance.
To obtain the number of nanowires accurately, image processing can be used to first identify the wire-like objects and then compute their characteristic parameters. However, conventional image thinning algorithms have drawbacks in this application: they easily generate a large number of redundant short wires, which degrades the accuracy of the nanowire count.
Disclosure of Invention
Therefore, there is a need for a method and a system for counting the number of nanowires that avoid generating a large number of redundant short wires and improve the accuracy of the count.
In order to achieve the purpose, the invention provides the following scheme:
a method of counting the number of nanowires, comprising:
acquiring a nanowire image to be counted;
preprocessing the nanowire image to obtain a preprocessed image;
thinning the preprocessed image, and extracting a nanowire framework to obtain a thinned nanowire image;
counting the number of end points and the number of isolated points in the thinned nanowire image;
identifying nodes in the refined nanowire image;
merging any two nodes whose connectivity distance is smaller than a preset threshold, and marking all nodes produced by the merging as merged nodes;
identifying, among the merged nodes, those whose neighborhoods contain an odd number of foreground points, to obtain the number of odd-connection merged nodes;
calculating the number of nanowires from the number of end points, the number of isolated points, and the number of odd-connection merged nodes, according to the formula:
m = Σ(D + J_odd)/2 + G
where m is the number of nanowires, D is the number of end points, G is the number of isolated points, and J_odd is the number of odd-connection merged nodes.
Optionally, the preprocessing the nanowire image to obtain a preprocessed image specifically includes:
adjusting the nanowire image into a gray level image;
determining a gray threshold of the gray image by using a gray histogram;
converting the gray level image into a binary image by adopting a threshold segmentation method according to the gray level threshold;
and performing opening operation and closing operation on the binary image to obtain the preprocessed image.
Optionally, the step of refining the preprocessed image and extracting the nanowire framework to obtain a refined nanowire image specifically includes:
and changing edge pixel points of the nanowire area in the preprocessed image into background pixel points until the remaining pixel points of the nanowire area are all edge pixel points.
Optionally, the counting the number of end points and the number of isolated points in the refined nanowire image specifically includes:
for a target pixel point p, when p satisfies the following formula, it is marked as an end point:
N(p) = 1
where N(p) = p_0 + p_1 + … + p_7 is the sum of the 8-neighborhood pixels of the target pixel p; p_i is the i-th 8-neighborhood pixel of p, taking the value 1 when it is a foreground point and 0 when it is a background point;
for a target pixel point p, when p satisfies the following formula, it is marked as an isolated point:
N(p)=0
and counting the number of the end points and the number of the isolated points.
Optionally, the identifying the node in the refined nanowire image specifically includes:
for a target pixel point p, when p satisfies the following formula, it is marked as a node:
C_4(p) ≥ 3, or C_8(p) ≠ 1 and N(p) ≥ 3
where C_4(p) and C_8(p) are connectivity numbers computed over the 8-neighborhood of p (their defining equations are given only as images in the original filing); in the first definition, p_(i+1) = p_0 when i = 7, and in the second, p_(2j) = p_0 when j = 4, with p_(2j-2), p_(2j-1) and p_(2j) all 8-neighborhood pixels of the target pixel p.
A nanowire count system, comprising:
the image acquisition module is used for acquiring a nanowire image to be counted;
the preprocessing module is used for preprocessing the nanowire image to obtain a preprocessed image;
the thinning module is used for thinning the preprocessed image and extracting a nanowire framework to obtain a thinned nanowire image;
an end point isolated point counting module used for counting the number of end points and the number of isolated points in the thinned nanowire image;
a node identification module for identifying nodes in the refined nanowire image;
the node merging module is used for merging any two nodes whose connectivity distance is smaller than a preset threshold, and marking all nodes produced by the merging as merged nodes;
the odd connection merged node counting module is used for identifying nodes of which neighborhoods contain odd number of foreground points in the merged nodes to obtain the number of the odd connection merged nodes;
the nanowire number calculating module is used for calculating the number of nanowires from the number of end points, the number of isolated points, and the number of odd-connection merged nodes, according to the formula:
m = Σ(D + J_odd)/2 + G
where m is the number of nanowires, D is the number of end points, G is the number of isolated points, and J_odd is the number of odd-connection merged nodes.
Optionally, the preprocessing module includes:
a gray scale adjustment unit for adjusting the nanowire image into a gray scale image;
a gray threshold determination unit for determining a gray threshold of the gray image using a gray histogram;
a binarization unit for converting the grayscale image into a binary image by adopting a threshold segmentation method according to the grayscale threshold value;
and the opening and closing operation unit is used for performing opening operation and closing operation on the binary image to obtain the preprocessed image.
Optionally, the refining module includes:
and the thinning unit is used for changing the edge pixel points of the nanowire area in the preprocessed image into background pixel points until the remaining pixel points of the nanowire area are all edge pixel points.
Optionally, the end point and isolated point counting module includes:
an end point marking unit, configured to mark a target pixel point p as an end point when p satisfies:
N(p) = 1
where N(p) = p_0 + p_1 + … + p_7 is the sum of the 8-neighborhood pixels of the target pixel p; p_i is the i-th 8-neighborhood pixel of p, taking the value 1 when it is a foreground point and 0 when it is a background point;
the isolated point marking unit is used for marking a target pixel point p as an isolated point when the p satisfies the following formula:
N(p)=0
and the endpoint isolated point counting unit is used for counting the number of the endpoints and the number of the isolated points.
Optionally, the node identification module includes:
the node identification unit is used for marking a target pixel point p as a node when p satisfies:
C_4(p) ≥ 3, or C_8(p) ≠ 1 and N(p) ≥ 3
where C_4(p) and C_8(p) are connectivity numbers computed over the 8-neighborhood of p (their defining equations are given only as images in the original filing); in the first definition, p_(i+1) = p_0 when i = 7, and in the second, p_(2j) = p_0 when j = 4, with p_(2j-2), p_(2j-1) and p_(2j) all 8-neighborhood pixels of the target pixel p.
Compared with the prior art, the invention has the following beneficial effects: the method and system count the end points and isolated points of the thinned nanowire image, identify nodes, and merge nodes whose connectivity distance is short, which effectively reduces the generation of redundant short wires; finally, the number of nanowires is calculated from the number of end points, the number of isolated points, and the number of odd-connection merged nodes, which effectively improves the accuracy of the nanowire count.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a flowchart of the nanowire counting method of embodiment 1 of the present invention;
FIG. 2 is an image of the nanowires obtained in example 1;
FIG. 3 is a gray scale image obtained in this example 1;
fig. 4 is a binary image obtained in this embodiment 1;
FIG. 5 is a pre-processed image obtained in example 1;
FIG. 6 is a refined nanowire image obtained in this example 1;
FIG. 7 is a diagram of the relationship between a target pixel p and its 8-neighborhood pixels p_i;
FIG. 8 is an enlarged view of a part of FIG. 6 in embodiment 1 of the present invention;
FIG. 9 is an enlarged view of a portion of FIG. 6, taken across the area indicated in FIG. 8;
fig. 10 is a system configuration diagram of a nanowire number statistical system according to embodiment 2 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Example 1:
Fig. 1 is a flowchart of the nanowire counting method according to embodiment 1 of the present invention.
Referring to fig. 1, the method for counting the number of nanowires includes:
step 101: and acquiring a nanowire image to be counted.
Fig. 2 is a nanowire image obtained in this example 1.
Step 102: preprocessing the nanowire image to obtain a preprocessed image.
The step 102 specifically includes:
and adjusting the nanowire image into a gray image by using a curve adjusting method in Photoshop software. Fig. 3 is a grayscale image obtained in this example 1.
And determining a gray threshold value of the gray image by utilizing the gray histogram.
And converting the gray level image into a binary image by adopting a threshold segmentation method according to the gray level threshold. Fig. 4 is a binary image obtained in this example 1.
And performing opening operation and closing operation on the binary image, thereby removing noise and obtaining the preprocessed image. Fig. 5 is a pre-processed image obtained in this example 1.
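As an illustration only, the following is a minimal Python sketch of this preprocessing stage. It assumes OpenCV is available; Otsu's method stands in for the histogram-based threshold described above, the 3x3 structuring element is an assumption, and the function name preprocess is illustrative.

# Sketch of the preprocessing stage (step 102), under the assumptions stated above.
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)        # nanowire image as grayscale
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # threshold segmentation
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # opening removes small noise
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # closing fills small gaps
    return closed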
Step 103: thinning the preprocessed image and extracting the nanowire skeleton to obtain a thinned nanowire image.
The step 103 specifically includes:
and changing edge pixel points of the nanowire area in the preprocessed image into background pixel points until the remaining pixel points of the nanowire area are all edge pixel points.
Thinning uses an image thinning algorithm that produces a strictly single-pixel-wide skeleton and is rotation invariant: edge pixels of the white nanowire regions are changed into black background pixels until all remaining white pixels of the nanowire regions are edge pixels. Fig. 6 is the thinned nanowire image obtained in embodiment 1.
A strictly single-pixel-wide skeleton makes the count more accurate, and a rotation-invariant thinning algorithm keeps the count unchanged when the image is rotated.
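A minimal sketch of this stage, assuming scikit-image is available; skeletonize() is a generic single-pixel-width thinning used here for illustration and is not guaranteed to have the rotation-invariance property required above. The function name thin is illustrative.

# Sketch of the thinning stage (step 103), with skeletonize() as an assumed stand-in
# for the single-pixel-width, rotation-invariant thinning algorithm described in the text.
import numpy as np
from skimage.morphology import skeletonize

def thin(binary: np.ndarray) -> np.ndarray:
    mask = binary > 0                   # foreground (nanowire) pixels as booleans
    skeleton = skeletonize(mask)        # reduce each wire to a one-pixel-wide skeleton
    return skeleton.astype(np.uint8)    # 1 = skeleton pixel, 0 = background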
Step 104: counting the number of end points and the number of isolated points in the thinned nanowire image.
The step 104 specifically includes:
for a target pixel point p, when p satisfies the following formula, it is marked as an end point:
N(p) = 1
where N(p) = p_0 + p_1 + … + p_7 is the sum of the 8-neighborhood pixels of the target pixel p; p_i denotes the i-th 8-neighborhood pixel of p, and Fig. 7 shows the relationship between p and its 8-neighborhood pixels p_i. p_i takes the value 1 when it is a foreground point and 0 when it is a background point.
For a target pixel point p, when p satisfies the following formula, it is marked as an isolated point:
N(p)=0
and counting the number D of the end points and the number G of the isolated points.
Step 105: identifying nodes in the refined nanowire image.
For a target pixel point p, when p satisfies the following formula, it is marked as a node:
C_4(p) ≥ 3, or C_8(p) ≠ 1 and N(p) ≥ 3
where C_4(p) and C_8(p) are connectivity numbers computed over the 8-neighborhood of p (their defining equations are given only as images in the original filing); in the first definition, p_(i+1) = p_0 when i = 7, and in the second, p_(2j) = p_0 when j = 4, with p_(2j-2), p_(2j-1) and p_(2j) all 8-neighborhood pixels of the target pixel p.
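Since the exact C_4/C_8 definitions are available only as images, the following sketch uses a simplified criterion, flagging skeleton pixels with three or more foreground neighbours as candidate nodes; it is an approximation, not the patent's exact criterion, and the function name is illustrative.

# Simplified sketch of step 105: mark skeleton pixels with N(p) >= 3 as candidate
# junction (node) pixels, as a stand-in for the C_4/C_8 criterion.
import numpy as np
from scipy.ndimage import convolve

def candidate_nodes(skel: np.ndarray) -> np.ndarray:
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    n = convolve(skel, kernel, mode="constant", cval=0)
    return (skel == 1) & (n >= 3)       # boolean mask of candidate node pixels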
Step 106: merging any two nodes whose connectivity distance is smaller than a preset threshold, and marking all nodes produced by the merging as merged nodes. The preset threshold W may be set to 1 to 10 pixels according to the actual situation.
The connectivity distance D_E of two nodes a and b is defined as follows: if there exists a set of skeleton pixels {d_1, d_2, …, d_(L-1)} with the smallest possible L such that a and d_1, every pair d_k and d_(k+1), and d_(L-1) and b are all 8-connected, then L is the connectivity distance D_E of a and b.
Merging the two nodes a and b means that the set {a, d_1, d_2, …, d_(L-1), b} is identified as a single new merged node containing multiple pixels.
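A minimal sketch of this merging step, assuming the node mask from the previous sketch; the breadth-first search along skeleton pixels approximates the connectivity distance D_E (it merges nodes found within at most w skeleton steps), and the union-find bookkeeping and the function name merge_nodes are illustrative assumptions.

# Sketch of step 106: group node pixels whose connectivity distance along the skeleton
# is below the threshold w into merged nodes. node_mask is the boolean output of the
# node-identification step; skel is the thinned image (1 = skeleton pixel).
from collections import deque
import numpy as np

def merge_nodes(skel: np.ndarray, node_mask: np.ndarray, w: int = 5) -> list[set]:
    nodes = [(int(r), int(c)) for r, c in zip(*np.nonzero(node_mask))]
    index = {p: i for i, p in enumerate(nodes)}
    parent = list(range(len(nodes)))                 # union-find over node pixels

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for start in nodes:
        seen = {start}
        queue = deque([(start, 0)])                  # BFS along the skeleton, up to w steps
        while queue:
            (r, c), dist = queue.popleft()
            if dist >= w:
                continue
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    q = (r + dr, c + dc)
                    if q == (r, c) or q in seen:
                        continue
                    if 0 <= q[0] < skel.shape[0] and 0 <= q[1] < skel.shape[1] and skel[q]:
                        seen.add(q)
                        if q in index:               # another node within distance w: merge
                            parent[find(index[start])] = find(index[q])
                        queue.append((q, dist + 1))

    groups: dict[int, set] = {}
    for p, i in index.items():
        groups.setdefault(find(i), set()).add(p)
    return list(groups.values())                     # each set of pixels is one merged node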
Step 107: among the merged nodes, identifying those whose neighborhoods contain an odd number of foreground points, to obtain the number of odd-connection merged nodes.
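A sketch of this step, under the assumption that "foreground points in the neighborhood" means skeleton pixels adjacent to the merged node but not belonging to it (i.e. the branches leaving the node); the function name is illustrative.

# Sketch of step 107: count merged nodes with an odd number of foreground points in
# their neighbourhood. merged_nodes is the list of pixel sets from the merge_nodes() sketch.
import numpy as np

def count_odd_connection_nodes(skel: np.ndarray, merged_nodes: list[set]) -> int:
    j_odd = 0
    for node in merged_nodes:
        neighbours = set()
        for (r, c) in node:
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    q = (r + dr, c + dc)
                    if (dr == 0 and dc == 0) or q in node:
                        continue
                    if 0 <= q[0] < skel.shape[0] and 0 <= q[1] < skel.shape[1] and skel[q]:
                        neighbours.add(q)             # foreground pixel touching the node
        if len(neighbours) % 2 == 1:                  # odd number of outgoing branches
            j_odd += 1
    return j_odd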
Step 108: calculating the number of nanowires from the number of end points, the number of isolated points, and the number of odd-connection merged nodes, according to the formula:
m = Σ(D + J_odd)/2 + G
where m is the number of nanowires, D is the number of end points, G is the number of isolated points, and J_odd is the number of odd-connection merged nodes.
Fig. 8 is a partially enlarged view of Fig. 6 in embodiment 1 of the present invention. The connectivity distance D_E between the nodes in Fig. 8 is smaller than the preset threshold W, so they are merged into one large merged node. Fig. 9 is an enlarged view of the portion of Fig. 6 covering the area shown in Fig. 8. The number of real nanowires in Fig. 9 is 5. Without node merging, the count would be (7 + 5)/2 = 6 wires, which is wrong; after merging, the count is (7 + 3)/2 = 5 wires, which is correct.
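As a minimal sketch, the counting formula of step 108 applied to the worked example above (the function name is illustrative):

# Sketch of step 108: m = (D + J_odd)/2 + G.
def nanowire_count(d: int, j_odd: int, g: int) -> float:
    return (d + j_odd) / 2 + g

# Worked example from the text: without merging, (7 + 5)/2 = 6 wires (wrong);
# after merging, (7 + 3)/2 = 5 wires (correct), with no isolated points.
print(nanowire_count(7, 5, 0))   # -> 6.0
print(nanowire_count(7, 3, 0))   # -> 5.0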
Counting the nanowires in Fig. 6 with the method of the present invention gives 153 wires; manual counting from Fig. 2 gives 152. The method therefore greatly reduces counting error and improves the accuracy of the nanowire statistics.
Example 2:
fig. 10 is a system configuration diagram of a nanowire number statistical system according to embodiment 2 of the present invention.
Referring to fig. 10, the system for counting the number of nanowires includes:
an image obtaining module 201, configured to obtain a nanowire image to be counted;
a preprocessing module 202, configured to preprocess the nanowire image to obtain a preprocessed image;
a thinning module 203, configured to thin the preprocessed image, extract a nanowire skeleton, and obtain a thinned nanowire image;
an end point isolated point counting module 204, configured to count the number of end points and the number of isolated points in the refined nanowire image;
a node identification module 205, configured to identify a node in the refined nanowire image;
a node merging module 206, configured to merge any two nodes whose connectivity distance is smaller than a preset threshold, and to mark all nodes produced by the merging as merged nodes;
the odd-connection merged node counting module 207 is configured to identify nodes in which the neighborhoods include odd number of foreground points from the merged nodes to obtain the number of the odd-connection merged nodes;
a nanowire number calculating module 208, configured to calculate the number of nanowires by using the number of endpoints, the number of isolated points, and the number of nodes after odd connection merging, where the calculation formula is as follows:
m = Σ(D + J_odd)/2 + G
where m is the number of nanowires, D is the number of end points, G is the number of isolated points, and J_odd is the number of odd-connection merged nodes.
Optionally, the preprocessing module 202 includes:
a gray scale adjustment unit for adjusting the nanowire image into a gray scale image;
a gray threshold determination unit for determining a gray threshold of the gray image using a gray histogram;
a binarization unit for converting the grayscale image into a binary image by using a threshold segmentation method according to the grayscale threshold value;
and the opening and closing operation unit is used for performing opening operation and closing operation on the binary image to obtain the preprocessed image.
Optionally, the refining module 203 includes:
and the thinning unit is used for changing the edge pixel points of the nanowire areas in the preprocessed image into background pixel points until the remaining pixel points of the nanowire areas are all edge pixel points.
Optionally, the end point and isolated point counting module 204 includes:
an end point marking unit, configured to mark a target pixel point p as an end point when p satisfies:
N(p) = 1
where N(p) = p_0 + p_1 + … + p_7 is the sum of the 8-neighborhood pixels of the target pixel p; p_i is the i-th 8-neighborhood pixel of p, taking the value 1 when it is a foreground point and 0 when it is a background point;
the isolated point marking unit is used for marking a target pixel point p as an isolated point when the p satisfies the following formula:
N(p)=0
and the endpoint isolated point counting unit is used for counting the number of the endpoints and the number of the isolated points.
Optionally, the node identifying module 205 includes:
the node identification unit is used for marking a target pixel point p as a node when p satisfies:
C_4(p) ≥ 3, or C_8(p) ≠ 1 and N(p) ≥ 3
where C_4(p) and C_8(p) are connectivity numbers computed over the 8-neighborhood of p (their defining equations are given only as images in the original filing); in the first definition, p_(i+1) = p_0 when i = 7, and in the second, p_(2j) = p_0 when j = 4, with p_(2j-2), p_(2j-1) and p_(2j) all 8-neighborhood pixels of the target pixel p.
Compared with the prior art, the invention has the following beneficial effects: the method and system count the end points and isolated points of the thinned nanowire image, identify nodes, and merge nodes whose connectivity distance is short, which effectively reduces the generation of redundant short wires; finally, the number of nanowires is calculated from the number of end points, the number of isolated points, and the number of odd-connection merged nodes, which effectively improves the accuracy of the nanowire count.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (2)

1. A method for counting the number of nanowires, comprising:
acquiring a nanowire image to be counted;
preprocessing the nanowire image to obtain a preprocessed image;
thinning the preprocessed image, and extracting a nanowire framework to obtain a thinned nanowire image;
counting the number of end points and the number of isolated points in the thinned nanowire image;
identifying nodes in the refined nanowire image;
merging any two nodes whose connectivity distance is smaller than a preset threshold, and marking all nodes produced by the merging as merged nodes;
identifying nodes of which neighborhoods contain odd number of foreground points in the merged nodes to obtain the number of the nodes after odd connection merging;
calculating the number of nanowires from the number of end points, the number of isolated points, and the number of odd-connection merged nodes, according to the formula:
m = Σ(D + J_odd)/2 + G
where m is the number of nanowires, D is the number of end points, G is the number of isolated points, and J_odd is the number of odd-connection merged nodes;
the preprocessing the nanowire image to obtain a preprocessed image specifically comprises:
adjusting the nanowire image into a grayscale image;
determining a gray threshold of the gray image by using a gray histogram;
converting the gray level image into a binary image by adopting a threshold segmentation method according to the gray level threshold;
performing opening operation and closing operation on the binary image to obtain the preprocessed image;
the method for refining the preprocessed image and extracting the nanowire framework to obtain the refined nanowire image specifically comprises the following steps:
changing edge pixel points of the nanowire regions in the preprocessed image into background pixel points until the remaining pixel points of the nanowire regions are all edge pixel points;
the counting of the number of end points and the number of isolated points in the refined nanowire image specifically includes:
for a target pixel point p, when p satisfies the following formula, it is marked as an end point:
N(p) = 1
where N(p) = p_0 + p_1 + … + p_7 is the sum of the 8-neighborhood pixels of the target pixel p; p_i is the i-th 8-neighborhood pixel of p, taking the value 1 when it is a foreground point and 0 when it is a background point;
for a target pixel point p, when p satisfies the following formula, it is marked as an isolated point:
N(p)=0
counting the number of the end points and the number of the isolated points;
the identifying the nodes in the refined nanowire image specifically includes:
for a target pixel point p, when p satisfies the following formula, it is marked as a node:
C_4(p) ≥ 3, or C_8(p) ≠ 1 and N(p) ≥ 3
where C_4(p) and C_8(p) are connectivity numbers computed over the 8-neighborhood of p (their defining equations are given only as images in the original filing); in the first definition, p_(i+1) = p_0 when i = 7, and in the second, p_(2j) = p_0 when j = 4, with p_(2j-2), p_(2j-1) and p_(2j) all 8-neighborhood pixels of the target pixel p.
2. A system for counting the number of nanowires, comprising:
the image acquisition module is used for acquiring a nanowire image to be counted;
the preprocessing module is used for preprocessing the nanowire image to obtain a preprocessed image;
the thinning module is used for thinning the preprocessed image and extracting a nanowire framework to obtain a thinned nanowire image;
an end point isolated point counting module used for counting the number of end points and the number of isolated points in the thinned nanowire image;
a node identification module for identifying nodes in the refined nanowire image;
the node merging module is used for merging any two nodes whose connectivity distance is smaller than a preset threshold, and marking all nodes produced by the merging as merged nodes;
the odd connection merged node counting module is used for identifying nodes of which neighborhoods contain odd number of foreground points in the merged nodes to obtain the number of the odd connection merged nodes;
the nanowire number calculating module is used for calculating the number of nanowires from the number of end points, the number of isolated points, and the number of odd-connection merged nodes, according to the formula:
m = Σ(D + J_odd)/2 + G
where m is the number of nanowires, D is the number of end points, G is the number of isolated points, and J_odd is the number of odd-connection merged nodes;
the preprocessing module comprises:
a gray scale adjustment unit for adjusting the nanowire image into a gray scale image;
a gray threshold determination unit for determining a gray threshold of the gray image using a gray histogram;
a binarization unit for converting the grayscale image into a binary image by adopting a threshold segmentation method according to the grayscale threshold value;
the opening and closing operation unit is used for performing opening operation and closing operation on the binary image to obtain the preprocessed image;
the refining module comprises:
the thinning unit is used for changing edge pixel points of the nanowire area in the preprocessed image into background pixel points until the remaining pixel points of the nanowire area are all edge pixel points;
the end point and isolated point counting module comprises:
an end point marking unit, configured to mark a target pixel point p as an end point when p satisfies:
N(p) = 1
where N(p) = p_0 + p_1 + … + p_7 is the sum of the 8-neighborhood pixels of the target pixel p; p_i is the i-th 8-neighborhood pixel of p, taking the value 1 when it is a foreground point and 0 when it is a background point;
the isolated point marking unit is used for marking a target pixel point p as an isolated point when the p satisfies the following formula:
N(p)=0
an endpoint isolated point counting unit used for counting the number of the endpoints and the number of the isolated points;
the node identification module includes:
the node identification unit is used for marking a target pixel point p as a node when p satisfies:
C_4(p) ≥ 3, or C_8(p) ≠ 1 and N(p) ≥ 3
where C_4(p) and C_8(p) are connectivity numbers computed over the 8-neighborhood of p (their defining equations are given only as images in the original filing); in the first definition, p_(i+1) = p_0 when i = 7, and in the second, p_(2j) = p_0 when j = 4, with p_(2j-2), p_(2j-1) and p_(2j) all 8-neighborhood pixels of the target pixel p.
CN202010294470.9A 2020-04-15 2020-04-15 Nanowire quantity statistical method and system Active CN111507956B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010294470.9A CN111507956B (en) 2020-04-15 2020-04-15 Nanowire quantity statistical method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010294470.9A CN111507956B (en) 2020-04-15 2020-04-15 Nanowire quantity statistical method and system

Publications (2)

Publication Number Publication Date
CN111507956A CN111507956A (en) 2020-08-07
CN111507956B true CN111507956B (en) 2023-04-07

Family

ID=71870931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010294470.9A Active CN111507956B (en) 2020-04-15 2020-04-15 Nanowire quantity statistical method and system

Country Status (1)

Country Link
CN (1) CN111507956B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101767564B1 (en) * 2015-11-12 2017-08-11 성균관대학교산학협력단 A method of analysing images of rod-like particles

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5556764A (en) * 1993-02-17 1996-09-17 Biometric Imaging, Inc. Method and apparatus for cell counting and cell classification
CN102222216A (en) * 2011-06-02 2011-10-19 天津理工大学 Identification system based on biological characteristics of fingerprints
CN103514612A (en) * 2012-06-27 2014-01-15 中山大学 Color image processing method
CN103150730A (en) * 2013-03-07 2013-06-12 南京航空航天大学 Round small target accurate detection method based on image
CN103246920A (en) * 2013-03-22 2013-08-14 浙江理工大学 Automatic counting method and system for silkworm cocoons
CN105427275A (en) * 2015-10-29 2016-03-23 中国农业大学 Filed environment wheat head counting method and device
CN106383334A (en) * 2016-08-31 2017-02-08 广西科技大学 Mobile object detecting method based on sound waves and wireless positioning
JP2018151791A (en) * 2017-03-10 2018-09-27 富士通株式会社 Similar case image search program, similar case image search apparatus, and similar case image search method
CN109118540A (en) * 2018-07-20 2019-01-01 宁波智哲信息科技有限公司 The sturgeon faster statistical approach extracted based on crestal line
CN109685783A (en) * 2018-12-18 2019-04-26 东北大学 A kind of method for cell count based on skeletal extraction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wang Miaomiao. "An Improved Image Thinning Algorithm and Its Application in Chinese Character Image Refining." 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference, 2019, 1870-1874. *
Li Dongjie; Wang Debao; Zhang Yue. Nanowire recognition and localization based on kernel density estimation. Nanotechnology and Precision Engineering, 2016(01), full text. *
Chen Guangxin. Automatic fingerprint recognition technology and its application. Jiangsu Ship, 2004(03), full text. *

Also Published As

Publication number Publication date
CN111507956A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
Raghunandan et al. Riesz fractional based model for enhancing license plate detection and recognition
CN108447062B (en) Pathological section unconventional cell segmentation method based on multi-scale mixed segmentation model
CN111145209B (en) Medical image segmentation method, device, equipment and storage medium
Gatos et al. ICFHR 2010 handwriting segmentation contest
CN104077577A (en) Trademark detection method based on convolutional neural network
CN111445457B (en) Network model training method and device, network model identification method and device, and electronic equipment
CN114820625B (en) Automobile top block defect detection method
CN114897806A (en) Defect detection method, electronic device and computer readable storage medium
CN113160185A (en) Method for guiding cervical cell segmentation by using generated boundary position
CN114359288A (en) Medical image cerebral aneurysm detection and positioning method based on artificial intelligence
CN113793357A (en) Bronchopulmonary segment image segmentation method and system based on deep learning
CN111754441A (en) Passive detection method for image copy-paste forgery
CN115908142A (en) Contact net tiny part damage testing method based on visual recognition
CN112132854A (en) Image segmentation method and device and electronic equipment
CN115497109A (en) Character and image preprocessing method based on intelligent translation
Bamford Empirical comparison of cell segmentation algorithms using an annotated dataset
Wu et al. Towards robust text-prompted semantic criterion for in-the-wild video quality assessment
Liu et al. Splicing forgery exposure in digital image by detecting noise discrepancies
CN116824168B (en) Ear CT feature extraction method based on image processing
CN111507956B (en) Nanowire quantity statistical method and system
CN111445456B (en) Classification model, training method and device of network model, and recognition method and device
JP2013080389A (en) Vanishing point estimation method, vanishing point estimation device, and computer program
CN115331014B (en) Machine vision-based pointer instrument reading method and system and storage medium
CN116543373A (en) Block chain-based live video big data intelligent analysis and optimization method and system
CN111931689B (en) Method for extracting video satellite data identification features on line

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231008

Address after: 230000 Room 203, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Hefei Jiuzhou Longteng scientific and technological achievement transformation Co.,Ltd.

Address before: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Dragon totem Technology (Hefei) Co.,Ltd.

Effective date of registration: 20231008

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: 545006 268 East Ring Road, Central District, Liuzhou, the Guangxi Zhuang Autonomous Region

Patentee before: GUANGXI University OF SCIENCE AND TECHNOLOGY