CN117635694B - Method, device and equipment for measuring secondary sphere size of electron microscope image - Google Patents


Info

Publication number
CN117635694B
CN117635694B (application CN202410100478.5A)
Authority
CN
China
Prior art keywords
image
measured
size
electron microscope
network
Prior art date
Legal status
Active
Application number
CN202410100478.5A
Other languages
Chinese (zh)
Other versions
CN117635694A (en)
Inventor
何敏 (He Min)
李劼 (Li Jie)
张红亮 (Zhang Hongliang)
刘洋 (Liu Yang)
Current Assignee
Hunan Changyuan Lithium New Energy Co ltd
Central South University
Original Assignee
Hunan Changyuan Lithium New Energy Co ltd
Central South University
Priority date
Filing date
Publication date
Application filed by Hunan Changyuan Lithium New Energy Co Ltd and Central South University
Priority to CN202410100478.5A
Publication of CN117635694A
Application granted
Publication of CN117635694B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a method, device and equipment for measuring the size of secondary spheres in an electron microscope image. The method uses computer vision to detect the secondary spheres in the image, identifies the scale information carried by the image, determines the in-image size of each secondary sphere from its detected position and boundary information, and finally converts that size to an actual size using the magnification scale of the electron microscope. The method achieves high-precision recognition of secondary spheres in electron microscope images, quickly and accurately locates their position and boundary information, automatically reads the scale information carried by the image, and computes the secondary sphere size from this position, boundary and scale information, improving both the accuracy and the efficiency of secondary sphere size measurement.

Description

Method, device and equipment for measuring secondary sphere size of electron microscope image
Technical Field
Embodiments of the application relate to the technical field of lithium batteries, and in particular to a method, device and equipment for measuring the size of secondary spheres in an electron microscope image.
Background
In the technical field of lithium batteries, measurement of the size of secondary spherical particles (secondary spheres) is an important task.
Conventional secondary sphere size measurement relies mainly on manual operation: an operator must manually draw boundaries or mark key points in the high-resolution image provided by the electron microscope in order to measure the diameter of each secondary sphere. This approach is time-consuming and inefficient, and because it depends on the operator's subjective judgment, it is error-prone and yields inaccurate measurements.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
A main object of the embodiments of the invention is to provide a method, device and equipment for measuring the size of secondary spheres in an electron microscope image, which can improve the accuracy and efficiency of secondary sphere size measurement.
To achieve the above object, a first aspect of the embodiments of the invention provides a method for measuring the size of secondary spheres in an electron microscope image, comprising:
acquiring an image to be measured output by an electron microscope;
inputting the image to be measured into a deep-learning-based measurement model to obtain the secondary spheres detected by the measurement model in the image to be measured;
extracting, from the image to be measured, the scale information displayed in the image by the electron microscope;
determining the position and boundary information of each secondary sphere in the image to be measured, and determining the in-image size of the secondary sphere from that position and boundary information;
and calculating the actual size of the secondary sphere from its in-image size and the scale information.
In some embodiments, after the measurement model detects the secondary spheres in the image to be measured, the method further comprises:
converting the shape features and geometric properties of a preset standard secondary sphere into standard parameter values;
the determining the in-image size of the secondary sphere from the position and boundary information comprises:
if the measurement model detects that a secondary sphere in the image to be measured satisfies the standard parameter values, determining the in-image size of that secondary sphere from its position and boundary information;
the calculating the actual size of the secondary sphere from its in-image size and the scale information comprises:
calculating the actual size of the secondary sphere satisfying the standard parameter values from its in-image size and the scale information.
In some embodiments, the extracting, from the image to be measured, the scale information displayed in the image by the electron microscope comprises:
locating the scale information display area shown in the image to be measured by the electron microscope;
and extracting the scale information from the scale information display area with a character recognition tool.
In some embodiments, the training method of the measurement model comprises:
selecting electron microscope image samples annotated with secondary sphere position and boundary information;
and selecting a semantic segmentation network and training it on the image samples to obtain the trained measurement model.
In some embodiments, the semantic segmentation network is based on the UNet architecture.
In some embodiments, the semantic segmentation network is based on an improved UNet. The improved UNet comprises a UNet-style encoder-decoder architecture in which the encoder and decoder contain a plurality of improved convolution blocks. Each improved convolution block comprises a first network branch and a second network branch. The first branch consists, in order, of an upsampling layer, two groups of first convolution layers and a downsampling layer, with a skip connection between the first and second groups; each first convolution layer comprises one 3×1 and one 1×3 convolution. The second branch consists, in order, of a downsampling layer, two groups of second convolution layers and an upsampling layer, with a skip connection between the first and second groups; each second convolution layer likewise comprises one 3×1 and one 1×3 convolution.
The processing of input features by the improved convolution block comprises:
inputting the input features into the first and second network branches respectively, to obtain a first-scale feature output by the first branch and a second-scale feature output by the second branch;
and fusing the first-scale and second-scale features into a fused feature, which serves as the input feature of the next improved convolution block or as the output feature of the improved-UNet-based semantic segmentation network.
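As a rough PyTorch sketch of the two-branch block described above (the channel widths, the ReLU activation, bilinear upsampling, max-pool downsampling and fusion by addition are all assumptions the text does not specify):

```python
import torch
import torch.nn as nn

class AsymConv(nn.Module):
    """One convolution group: a 3x1 followed by a 1x3 convolution, channel-preserving."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=(3, 1), padding=(1, 0)),
            nn.Conv2d(ch, ch, kernel_size=(1, 3), padding=(0, 1)),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class ImprovedConvBlock(nn.Module):
    """Two-branch multi-scale block: up->convs->down and down->convs->up, fused."""
    def __init__(self, ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.down = nn.MaxPool2d(2)
        self.b1_g1, self.b1_g2 = AsymConv(ch), AsymConv(ch)
        self.b2_g1, self.b2_g2 = AsymConv(ch), AsymConv(ch)

    def forward(self, x):
        # first branch: upsample -> conv group 1 -(skip)-> conv group 2 -> downsample
        f = self.up(x)
        g = self.b1_g1(f)
        first_scale = self.down(self.b1_g2(g) + g)  # skip connection between the groups
        # second branch: downsample -> conv group 1 -(skip)-> conv group 2 -> upsample
        f = self.down(x)
        g = self.b2_g1(f)
        second_scale = self.up(self.b2_g2(g) + g)
        # fuse the two scale features (addition assumed; the text only says "fuse")
        return first_scale + second_scale
```

Because each branch restores its input resolution (up then down, or down then up), the fused output has the same shape as the input, so the block can replace a standard convolution block at any encoder or decoder stage.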
In some embodiments, training the semantic segmentation network on the image samples comprises:
setting an iteration end condition for the semantic segmentation network;
iteratively training the semantic segmentation network on the image samples until the iteration end condition is reached, then ending the iteration to obtain the measurement model;
one iteration of the training process comprises:
inputting the image samples into the semantic segmentation network to obtain its segmentation results for the samples;
calculating, with a loss function, the loss value between the segmentation result and the label of each image sample;
optimizing the weights and parameters of the semantic segmentation network by back-propagation according to the loss value;
and adjusting the learning rate of the semantic segmentation network with an adaptive-period cosine annealing schedule.
In some embodiments, the loss function comprises a cross entropy loss function and/or a Dice loss function, wherein the cross entropy loss function is:

$$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log p_{i,c}$$

where $N$ is the number of pixels, $C$ is the number of classes, $p_{i,c}$ is the probability in the segmentation result that pixel $i$ belongs to class $c$, and $y_{i,c}$ is the label value indicating whether pixel $i$ belongs to class $c$;

the Dice loss function is:

$$L_{Dice} = 1 - \frac{2\sum_{i} p_i\,y_i + \varepsilon}{\sum_{i} p_i + \sum_{i} y_i + \varepsilon}$$

where $p_i$ is the value of pixel $i$ in the segmentation result, $y_i$ is the value of pixel $i$ in the label, and $\varepsilon$ is a preset positive number.
In some embodiments, when the loss function comprises both the cross entropy loss function and the Dice loss function, the loss function is:

$$L = \lambda\,L_{CE} + (1-\lambda)\,L_{Dice}$$

where $\lambda$ is a preset weight.
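A minimal NumPy sketch of the cross entropy and Dice losses and their weighted combination (the array layouts, the `smooth` term standing in for the preset positive number, and `lam` for the preset weight are illustrative choices):

```python
import numpy as np

def cross_entropy_loss(p: np.ndarray, y: np.ndarray, eps: float = 1e-12) -> float:
    # p: predicted class probabilities, shape (N, C); y: one-hot labels, shape (N, C)
    return float(-np.mean(np.sum(y * np.log(p + eps), axis=1)))

def dice_loss(p: np.ndarray, y: np.ndarray, smooth: float = 1.0) -> float:
    # p: per-pixel foreground probabilities; y: binary labels;
    # smooth plays the role of the preset positive number
    inter = np.sum(p * y)
    return float(1.0 - (2.0 * inter + smooth) / (np.sum(p) + np.sum(y) + smooth))

def combined_loss(p_cls, y_cls, p_fg, y_fg, lam: float = 0.5) -> float:
    # weighted combination of the two losses; lam is the preset weight
    return lam * cross_entropy_loss(p_cls, y_cls) + (1.0 - lam) * dice_loss(p_fg, y_fg)
```

A perfect segmentation drives both terms, and hence the combined loss, to zero.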
In some embodiments, before the image samples are input into the semantic segmentation network, the method further comprises:
preprocessing the image samples, wherein the preprocessing includes at least one of image denoising, resizing, contrast enhancement and histogram equalization.
In some embodiments, before the image samples are input into the semantic segmentation network, the method further comprises:
randomly cropping the image samples by up to 20%;
rotating the cropped image samples by a random angle in the range of -10° to 10°;
and adjusting the brightness and contrast of the rotated image samples by a random factor in the range of 30% to 130%.
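The three augmentation steps above can be sketched as follows (the fixed random seed, SciPy's `rotate`, and the interpolation/padding modes are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    # img: 2-D grayscale array with values in [0, 255]
    h, w = img.shape
    # random crop removing up to 20% of each dimension
    ch, cw = int(h * 0.8), int(w * 0.8)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    img = img[top:top + ch, left:left + cw]
    # random rotation between -10 and 10 degrees, keeping the array shape
    angle = rng.uniform(-10, 10)
    img = rotate(img, angle, reshape=False, mode="nearest")
    # random brightness/contrast scaling between 30% and 130%
    factor = rng.uniform(0.3, 1.3)
    return np.clip(img * factor, 0, 255)
```

Cropping before rotation keeps the rotated region fully inside the frame for small angles; the clip keeps scaled pixel values in the valid range.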
In some embodiments, before the image samples are input into the semantic segmentation network, the method further comprises:
scaling the image samples to 576×576;
and normalizing the scaled image samples so that their pixel values are distributed within the interval [-1, 1].
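A minimal sketch of the scaling and normalization (nearest-neighbour resizing stands in for whatever interpolation is actually used, and an 8-bit [0, 255] input range is assumed):

```python
import numpy as np

def resize_nearest(img: np.ndarray, size: int = 576) -> np.ndarray:
    # simple nearest-neighbour resize; a library resize would normally be used
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def normalize(img: np.ndarray) -> np.ndarray:
    # map pixel values from [0, 255] into [-1, 1]
    return img.astype(np.float32) / 127.5 - 1.0
```

After these two steps every sample presented to the network has a fixed 576×576 shape and zero-centred values.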
A second aspect of the embodiments of the invention provides a device for measuring the size of secondary spheres in an electron microscope image, comprising:
an image acquisition unit, configured to acquire an image to be measured output by an electron microscope;
a secondary sphere detection unit, configured to input the image to be measured into a deep-learning-based measurement model to obtain the secondary spheres detected by the measurement model in the image to be measured;
a scale information acquisition unit, configured to extract, from the image to be measured, the scale information displayed in the image by the electron microscope;
a first size acquisition unit, configured to determine the position and boundary information of each secondary sphere in the image to be measured and to determine its in-image size from that information;
and a second size acquisition unit, configured to calculate the actual size of the secondary sphere from its in-image size and the scale information.
To achieve the above object, a third aspect of the embodiments of the invention provides an electronic device, comprising: at least one control processor, and a memory communicatively connected to the at least one control processor; the memory stores instructions executable by the at least one control processor to enable it to perform the method for measuring the size of secondary spheres in an electron microscope image described above.
To achieve the above object, a fourth aspect of the embodiments of the invention provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method for measuring the size of secondary spheres in an electron microscope image described above.
An embodiment of the application provides a method for measuring the size of secondary spheres in an electron microscope image. The method uses computer vision to detect the secondary spheres in the image, identifies the scale information from the image, determines the in-image size of each secondary sphere from its detected position and boundary information, and finally calculates the actual size from that size and the magnification scale of the electron microscope. The method achieves high-precision recognition of secondary spheres in electron microscope images, quickly and accurately locates their position and boundary information, automatically reads the scale information carried by the image, and computes the secondary sphere size from this position, boundary and scale information, improving both the accuracy and the efficiency of secondary sphere size measurement.
It is to be understood that the advantages of the second to fourth aspects compared with the related art are the same as those of the first aspect compared with the related art, and reference may be made to the related description in the first aspect, which is not repeated herein.
Drawings
In order to illustrate the technical solutions of the embodiments of the application more clearly, the drawings required by the embodiments or the related technical description are briefly introduced below. The drawings described below show only some embodiments of the application; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a method for measuring the size of secondary spheres in an electron microscope image according to an embodiment of the application;
FIG. 2 is a schematic illustration of an image to be measured according to an embodiment of the application;
FIG. 3 is a schematic diagram of a segmentation result of an image to be measured according to an embodiment of the application;
FIG. 4 is a schematic flow chart of extracting the scale information in step S130 of FIG. 1;
FIG. 5 is a schematic flow chart of training the measurement model used in step S120 of FIG. 1;
FIG. 6 is a flow chart of training the semantic segmentation network in step S420 of FIG. 5;
FIG. 7 is a flow chart of one iteration of the training process of the semantic segmentation network in step S422 of FIG. 6;
FIG. 8 is a schematic block diagram of an improved convolution block according to an embodiment of the application;
FIG. 9 is a schematic diagram of the processing flow of the improved convolution block on input features according to an embodiment of the application;
FIG. 10 is a flow chart of preprocessing and image enhancement of image samples according to an embodiment of the application;
FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
In the technical field of lithium batteries, measurement of the size of secondary spherical particles (secondary spheres) is an important task. A secondary sphere is a secondary spherical particle formed by the close packing of primary particles. Taking the doped basic cobalt carbonate/cobalt carbonate composite precursor used in the lithium battery field as an example, the primary particles are lamellar in morphology, and the secondary particles formed by the close packing of these lamellar primary particles have good sphericity.
Electron microscopes provide high-resolution images for observing microstructures and particles, and the secondary sphere size is commonly used to characterize the shape and size of such particles.
Conventional secondary sphere size measurement relies mainly on manual operation: an operator must manually draw boundaries or mark key points in the high-resolution image provided by the electron microscope in order to measure the diameter of each secondary sphere. This approach has the following drawbacks:
because the measurement depends on the operator's subjective judgment, the accuracy of the result is affected by individual differences and subjective errors, including inaccurate boundary drawing and deviations in key point marking, leading to inaccurate results;
manual measurement requires a great deal of time and labor, and is inefficient, especially for large-scale data sets.
To address the above drawbacks, and referring to FIG. 1, an embodiment of the application provides a method for measuring the size of secondary spheres in an electron microscope image, comprising the following steps S110 to S150:
Step S110: acquire an image to be measured output by the electron microscope.
Step S120: input the image to be measured into a deep-learning-based measurement model to obtain the secondary spheres detected by the measurement model in the image to be measured.
Step S130: extract, from the image to be measured, the scale information displayed in the image by the electron microscope.
Step S140: determine the position and boundary information of each secondary sphere in the image to be measured, and determine its in-image size from that information.
Step S150: calculate the actual size of each secondary sphere from its in-image size and the scale information.
In step S110, the image to be measured is an image in which the secondary spheres are to be measured by the deep-learning-based measurement model. It is generated by feeding a sample into an electron microscope and magnifying it; the specific model of electron microscope is not limited.
In step S120, the measurement model is generated by training a deep-learning neural network, which may be a UNet or another type of neural network, as described later. Inputting the image to be measured into the trained measurement model (whose training process is detailed in a later embodiment) yields a segmentation result in which the model identifies the secondary spheres in the image, chiefly their position and boundary information. During model training, the training set is annotated with secondary sphere position and boundary information. With this computer vision approach, the position and boundary information of the secondary spheres can be recognized accurately by the deep-learning network, which is more accurate and more efficient than manual processing.
In step S130, when the electron microscope magnifies the image to be measured, the surface layer of the image carries the microscope's scale information, i.e. operating parameters of the microscope such as the magnification scale. This scale information is extracted from the image to be measured for the subsequent size calculation.
In step S140, the position and boundary information of the secondary spheres in the image to be measured can be determined from the measurement model, and the in-image size of each secondary sphere can then be determined from this information; the size includes, but is not limited to, diameter, circumference or area.
In step S150, after the in-image size of a secondary sphere is obtained, its actual size is calculated from the in-image size and the scale information.
In this embodiment, computer vision is used: the measurement model detects the secondary spheres in the image, the scale information is identified from the image, the in-image size of each secondary sphere is determined from its detected position and boundary information, and the actual size is finally calculated from that size and the magnification scale of the electron microscope. These steps achieve high-precision recognition of secondary spheres in electron microscope images, quickly and accurately locate their position and boundary information, automatically read the scale information carried by the surface layer of the image to be measured, and compute the secondary sphere size from this information, improving both the accuracy and the efficiency of secondary sphere size measurement.
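Step S150 reduces to a unit conversion: given a scale bar whose physical length and on-image pixel length are both known, the in-image size converts to an actual size as sketched below (the function and argument names are illustrative):

```python
def actual_size_um(size_px: float, scale_bar_px: float, scale_bar_um: float) -> float:
    # microns represented by one pixel, derived from the scale bar,
    # then applied to the measured in-image size
    um_per_px = scale_bar_um / scale_bar_px
    return size_px * um_per_px
```

For instance, if the "10um" bar of FIG. 2 spans 50 pixels, a secondary sphere with a 100-pixel diameter is 20 µm across.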
In some embodiments of the application, after the measurement model detects the secondary spheres in the image to be measured, the method further comprises:
Step S210: convert the shape features and geometric properties of a preset standard secondary sphere into standard parameter values.
Determining the in-image size of the secondary sphere from the position and boundary information in step S140 then comprises:
Step S1310: if the measurement model detects that a secondary sphere in the image to be measured satisfies the standard parameter values, determine the in-image size of that secondary sphere from its position and boundary information.
Calculating the actual size of the secondary sphere from its in-image size and the scale information in step S150 then comprises:
Step S1510: calculate the actual size of the secondary sphere satisfying the standard parameter values from its in-image size and the scale information.
Secondary spheres may overlap and occlude one another during their formation; as can be seen in FIG. 2, many secondary spheres overlap or are occluded, and many secondary particles are irregular in shape. Consequently, not every secondary sphere detected by the measurement model can be measured completely, and not every output spherical particle meets the requirements for a secondary sphere. In this embodiment, therefore, the shape features and geometric properties of a standard secondary sphere are defined and converted into standard parameter values. A standard parameter value may be a range, in which case any secondary sphere falling within the range meets the requirement, or a specific value, in which case only secondary spheres matching that value qualify; a range is preferred. The secondary spheres output by the measurement model are then compared against the standard parameter values to judge whether each detected shape conforms to a regular secondary sphere; those that conform are processed in the subsequent steps S130 to S150, and non-conforming secondary particles are rejected.
In this embodiment, using the standard parameter values as a filtering criterion allows the secondary spheres identified by the measurement model to be further screened, excluding irregular particles and non-secondary-sphere interference and improving the accuracy and reliability of secondary sphere recognition.
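The patent does not state which geometric criteria make up the standard parameter values; one plausible example is a circularity test on each detected region's area and perimeter, sketched here under that assumption:

```python
import math

def circularity(area: float, perimeter: float) -> float:
    # 4*pi*A / P^2: exactly 1.0 for a perfect circle, smaller for irregular shapes
    return 4.0 * math.pi * area / (perimeter ** 2)

def meets_standard(area: float, perimeter: float,
                   lo: float = 0.85, hi: float = 1.0) -> bool:
    # range-interval form of a standard parameter value, as the text prefers
    return lo <= circularity(area, perimeter) <= hi
```

A detected region whose circularity falls outside the range would be rejected as an occluded or irregular particle before the size calculation.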
Referring to FIG. 4, in some embodiments of the application, extracting the scale information displayed by the electron microscope from the image to be measured in step S130 comprises the following steps S310 to S320:
Step S310: locate the scale information display area shown in the image to be measured by the electron microscope.
Step S320: extract the scale information from the scale information display area with a character recognition tool.
In step S310, the scale information display area in the image to be measured is first located. For example, in FIG. 2 and FIG. 3 the information generated by the electron microscope appears below the image: "SU8000 1.5KV 4.1mm x 3.00j LA30 (U)", "10um"; the area displaying this information is called the scale information display area.
After the scale information display area is determined, the scale information is extracted from it with a character recognition tool; that is, the character portion of the image can be extracted and converted to text by OCR (Optical Character Recognition). Note that the position of the scale information display area, and the information it records, may differ between electron microscope models; for example, the magnification scale may be recorded directly, in which case the recorded scale information must be converted. This is not specifically limited.
In this embodiment, the scale information of the electron microscope is extracted automatically by character recognition and then used in the subsequent secondary sphere size calculation; compared with extracting the scale figures manually, this improves the efficiency of secondary sphere size measurement.
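Once OCR has produced the text of the scale information display area, the physical scale-bar length can be pulled out with a simple pattern match. This sketch assumes the "10um"-style format shown in FIG. 2 and returns the value in microns (the function name and supported units are illustrative):

```python
import re

def parse_scale_text(text: str):
    # hypothetical parser: find a value like "10um" or "500nm" in the OCR output
    m = re.search(r"(\d+(?:\.\d+)?)\s*(um|µm|nm)", text)
    if not m:
        return None
    value = float(m.group(1))
    unit = m.group(2)
    # convert nanometres to microns so callers get a single unit
    return value / 1000.0 if unit == "nm" else value
```

Real OCR output is noisy, so a production parser would also correct common misreads (e.g. "urn" for "um") before matching.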
Referring to FIG. 5, in some embodiments of the application, the training method of the measurement model used in step S120 comprises:
Step S410: select electron microscope image samples annotated with secondary sphere position and boundary information.
Step S420: select a semantic segmentation network and train it on the image samples to obtain the trained measurement model.
In step S410, electron microscope images suitable for secondary sphere size measurement are selected, preferring images of higher quality and resolution. The selected images are annotated with secondary sphere position and boundary information; the secondary sphere regions can be marked in each image with an annotation tool or software.
In step S420, the semantic segmentation network is a UNet. The UNet backbone is divided into symmetric left and right parts: the left side is a feature extraction network (encoder), in which the original input image undergoes four downsamplings by convolution and max pooling to obtain four levels of feature maps; the right side is a feature fusion network (decoder), in which each level of feature map is fused, via skip connections, with the feature map obtained by deconvolution; the last layer predicts the semantic map by computing the loss against the label. As UNet is a relatively mature network, it is not discussed in detail here.
Referring to fig. 6, training the semantic segmentation network using image samples in step S420 includes steps S421 to S422:
step S421, setting iteration ending conditions of the semantic segmentation network.
And step 422, performing repeated iterative training on the semantic segmentation network by adopting the image sample until reaching the iteration ending condition, ending the iteration and obtaining a measurement model.
Referring to fig. 7, one of the iterative training processes of the semantic segmentation network includes:
Step S4221, inputting the image sample into a semantic segmentation network to obtain a segmentation result of the semantic segmentation network on the image sample.
Step S4222, calculating a loss value between the segmentation result and the label of the image sample according to the loss function.
And step S4223, carrying out back propagation optimization on the weight and the parameter of the semantic segmentation network according to the loss value.
Step S4224, adjusting the learning rate of the semantic segmentation network according to the cosine annealing self-adaptive period.
In step S421, the iteration end condition includes reaching a maximum number of iterations, or the error between the results of two successive iterations falling below a preset value. Note that if this is the first iteration, the model parameters, i.e., the weights and parameters of the UNet model, need to be initialized. Initialization may use random initialization or the weights of a pre-trained model.
Step S4221, inputting the image sample into the semantic segmentation network to obtain the segmentation result, is the forward propagation process of the model. Step S4222 computes the loss value between the segmentation result and the label of the image sample based on the loss function. Steps S4223 and S4224 form the back propagation process: the weights and parameters of the model are updated by an optimization algorithm (such as the common gradient descent method) to minimize the loss function, and the learning rate is adjusted with an adaptive-period cosine annealing schedule, which helps escape local convergence and improves training stability.
In this embodiment, the learning rate adjustment policy may be: the cosine annealing is used for adjusting the learning rate, and the initial learning rate and the cycle number are set to control the change of the learning rate. The total number of training cycles is determined according to the size of the training data set and the batch size, and steps S4221 to S4224 are training actions in one complete cycle. The learning rate is adjusted using a cosine function, and gradually decreases in each cycle in accordance with the form of the cosine function.
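A minimal sketch of the cosine-annealed learning rate within one cycle (the initial and minimum rates here are assumed values, not taken from the patent):

```python
import math

def cosine_annealing_lr(t, T, eta_max=1e-3, eta_min=1e-5):
    """Learning rate at step t of a cycle of length T, decayed along a
    cosine from eta_max down to eta_min."""
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t / T))

# Rate starts at eta_max and decreases along a cosine to eta_min by cycle end.
lrs = [cosine_annealing_lr(t, T=100) for t in range(101)]
```

In practice the cycle would restart (warm restarts) or its length would adapt over training, in line with the adaptive-period wording of step S4224.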
According to the method, the loss function and the back propagation optimization algorithm (such as gradient descent) are used for back propagation, and meanwhile, the cosine annealing learning rate adjustment method is combined, so that network parameters can be continuously corrected in the training process, the loss function is gradually reduced, the training stability and the convergence speed of the model can be improved, the problem of local convergence is solved, and the accuracy of secondary ball measurement is further improved.
In some embodiments of the application, the loss function comprises a set cross entropy loss function and/or a Dice loss function. The cross entropy loss function measures the difference between the segmentation result and the real label; it is particularly suitable for multi-category segmentation tasks, and the cross entropy loss can be computed for each pixel point.
The cross entropy loss function set in this embodiment is:

$$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log p_{i,c}$$

where $N$ represents the number of pixel points, $C$ represents the number of categories, $p_{i,c}$ represents the probability that pixel $i$ in the segmentation result belongs to category $c$, and $y_{i,c}$ represents the label value indicating whether pixel $i$ in the label belongs to category $c$;
Dice loss function: the method is used for measuring the similarity between the predicted result and the real label, and is particularly suitable for unbalanced class segmentation tasks.
The Dice loss function set in this embodiment is:

$$L_{Dice} = 1 - \frac{2\sum_{i=1}^{N} p_i\, y_i + \varepsilon}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} y_i + \varepsilon}$$

where $p_i$ represents the value of pixel $i$ in the segmentation result, $y_i$ represents the value of pixel $i$ in the label, and $\varepsilon$ represents a preset small positive number (for numerical stability, avoiding division by zero).
These loss functions may be selected and combined according to a specific application scenario, for example, a cross entropy loss function may be used as an overall loss function, or a weighted combination of a cross entropy loss function and a Dice loss function may be used as an overall loss function. By minimizing the loss function, the UNet network can be trained and its parameters optimized, enabling more accurate semantic segmentation.
In some embodiments, when the loss function includes both the cross entropy loss function and the Dice loss function, the overall loss function is:

$$L = \lambda\, L_{CE} + (1 - \lambda)\, L_{Dice}$$

where $\lambda$ is a preset weight.
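A NumPy sketch of such a weighted combination for the binary (single-category) case is given below; the multi-category cross entropy reduces to this form here, and `lam` plays the role of the preset weight (its value of 0.5 is an assumption):

```python
import numpy as np

def ce_dice_loss(p, y, lam=0.5, eps=1e-6):
    """Weighted sum of pixel-wise binary cross entropy and Dice loss.

    p, y: flattened arrays of predicted probabilities and 0/1 labels.
    lam:  preset weight balancing the two terms.
    """
    p = np.clip(p, eps, 1 - eps)           # numerical stability
    ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    dice = 1 - (2 * np.sum(p * y) + eps) / (np.sum(p) + np.sum(y) + eps)
    return lam * ce + (1 - lam) * dice

loss_perfect = ce_dice_loss(np.array([1.0, 0.0]), np.array([1.0, 0.0]))
loss_wrong = ce_dice_loss(np.array([0.0, 1.0]), np.array([1.0, 0.0]))
```

A perfect prediction gives a loss near zero, while a fully wrong prediction is heavily penalized by both terms.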
In some embodiments of the application, the semantic segmentation network in step S420 is an improved UNet-based semantic segmentation network. The improved UNet-based semantic segmentation network comprises a UNet-based encoder and decoder architecture, wherein the encoder and the decoder comprise a plurality of improved convolution blocks; each improved convolution block comprises a first network branch and a second network branch. The first network branch comprises, in sequence, an upsampling layer, two groups of first convolution layers and a downsampling layer, with a skip connection between the first and second groups of first convolution layers, each first convolution layer comprising one 3×1 convolution and one 1×3 convolution. The second network branch comprises, in sequence, a downsampling layer, two groups of second convolution layers and an upsampling layer, with a skip connection between the first and second groups of second convolution layers, each second convolution layer comprising one 3×1 convolution and one 1×3 convolution.

Referring to fig. 8 and 9, the processing of input features by the improved convolution block includes:
Step S510, inputting the input features into the first network branch and the second network branch respectively, and obtaining a first scale feature output by the first network branch and a second scale feature output by the second network branch.
And step S520, fusing the first scale feature and the second scale feature to obtain a fused feature, and taking the fused feature as an input feature of a convolution block after next improvement or an output feature of a semantic segmentation network based on improved UNet.
A UNet network includes an encoder and a decoder. In the encoder, four downsamplings are performed by convolution-max pooling, yielding feature maps at four different levels. In the decoder, each level of feature map is fused, via skip connections, with the feature map obtained through deconvolution. To better integrate the features of different levels, this embodiment improves the UNet network by changing only the convolution blocks of the encoder and decoder, replacing the traditional convolution blocks with improved convolution blocks. The improved convolution block is divided into two branches; the first branch network and the second branch network are described as follows:
Assuming the improved convolution block receives the input feature and feeds it into the first branch network, the flow in the first branch network is: upsampling, a 3×1 convolution, a 1×3 convolution, another 3×1 convolution and 1×3 convolution, and downsampling, yielding the output feature. The outputs of the first 1×3 and second 1×3 convolution layers are connected by a skip connection.
The improved convolution block also feeds the input feature into the second branch network, where the flow is: downsampling, a 3×1 convolution, a 1×3 convolution, another 3×1 convolution and 1×3 convolution, and upsampling, yielding the output feature. The outputs of the first 1×3 and second 1×3 convolution layers are likewise connected by a skip connection.
With this structure, the channel of the convolution block is divided into the first branch network and the second branch network; higher-scale features are extracted through the first branch network, lower-scale features through the second, and the output features of the two branches are finally fused. The high-scale and low-scale features are thus fully combined, and using the fused features allows the measurement model to improve the detection accuracy of secondary ball detection. A group of one 3×1 and one 1×3 convolution layer achieves the effect of a 3×3 convolution layer while increasing the network depth of the branch and reducing network parameters; similarly, two groups of 3×1 + 1×3 convolution layers achieve the effect of a 5×5 convolution layer, again increasing network depth and reducing network parameters.
Unlike the above embodiment, this embodiment improves the convolution blocks of the encoder and decoder in the UNet architecture by dividing the channel of the convolution block into a first branch network and a second branch network, extracting higher-scale features through the first and lower-scale features through the second. Higher-scale features contain more semantic information, which helps identify the positions of the lower-scale features, while lower-scale features contain more spatial information, which helps reconstruct the details of the higher-scale features; fully fusing the two realizes mutual complementation of high-level and low-level features. Not only is the combination of scale features across convolution blocks retained from the original UNet framework, but high- and low-level features are also fully fused within a single convolution block, which can improve the detection accuracy of secondary ball detection.
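The parameter saving claimed for the factorized convolutions can be checked by simple counting (per input-output channel pair, biases ignored):

```python
# A single 3x3 convolution versus one 3x1 followed by one 1x3:
k3x3 = 3 * 3                        # 9 weights
k_factored = 3 * 1 + 1 * 3          # 6 weights, same 3x3 receptive field

# Two stacked 3x1+1x3 groups reach the 5x5 receptive field of one 5x5 conv:
k5x5 = 5 * 5                        # 25 weights
k_two_groups = 2 * (3 * 1 + 1 * 3)  # 12 weights

print(k3x3, k_factored, k5x5, k_two_groups)  # 9 6 25 12
```

The factorized form also doubles the number of convolution layers traversed, which is the network-depth increase mentioned above.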
Referring to fig. 10, in some embodiments of the present application, before the image sample is input to the semantic segmentation network in step S4221, the method further comprises steps S610 to S660:
Step S610, preprocessing is performed on the image sample, where the preprocessing includes at least one of image denoising, resizing, contrast enhancement, and histogram equalization.
Step S620, randomly cropping the image sample with a crop ratio of 20%.
Step S630, rotating the cropped image sample, with a rotation angle in the range of -10° to 10°.
Step S640, adjusting the brightness and contrast of the rotated image sample to between 30% and 130%.
Step S650, scaling the image samples to 576×576.
Step S660, performing data normalization on the image samples scaled to 576×576, so that the pixel values of the image samples are distributed in the [-1, 1] interval.
In step S610, the image samples are preprocessed to improve the training effect of the measurement model.
The steps S620 to S640 are image enhancement on the image samples, and the purpose of the image sample enhancement in this embodiment is to increase the diversity and robustness of the data set and improve the training effect of the measurement model. Common data enhancement methods include image rotation, flipping, scaling, panning, adding noise, and the like.
Steps S650 to S660 standardize the image samples before they are input into the measurement model, in order to improve the training efficiency of the measurement model.
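A minimal sketch of the input standardization in steps S650 to S660, using nearest-neighbour resizing (the resizing method and the 8-bit input range are assumptions, not taken from the patent):

```python
import numpy as np

def to_model_input(img):
    """Scale an 8-bit grayscale image to 576x576 (nearest neighbour) and
    normalize its pixel values into [-1, 1], as in steps S650-S660."""
    h, w = img.shape
    rows = np.arange(576) * h // 576    # nearest-neighbour source rows
    cols = np.arange(576) * w // 576    # nearest-neighbour source columns
    resized = img[rows][:, cols].astype(np.float32)
    return resized / 127.5 - 1.0        # map [0, 255] -> [-1, 1]

sample = np.zeros((600, 800), dtype=np.uint8)
out = to_model_input(sample)
```

Pixel value 0 maps to -1 and 255 maps to +1, matching the [-1, 1] interval required for the model input.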
The following provides a complete embodiment, which discloses a secondary sphere size measurement method of an electron microscope image and specifically includes:
step S711, collecting an electron microscope image:
In this embodiment, an electron microscope image dataset suitable for secondary sphere size measurement is selected, ensuring that the dataset contains image samples with secondary spheres. When selecting images, samples of higher quality and resolution are preferred.
And marking the selected electron microscope image data set, and marking the position and boundary information of the secondary sphere. The area of the secondary sphere can be marked in each image using a marking tool or software. The marked electron microscope image dataset was taken as the initial dataset.
Step S712, dividing the initial data set into: training sets and test sets.
Step S713, preprocessing the image samples in the training set, where the preprocessing includes operations such as image denoising, resizing, contrast enhancement, histogram equalization, and the like.
To increase the diversity and robustness of the training set, in this embodiment the random crop ratio of the pictures is 20%, the rotation angle ranges from -10° to 10°, and the brightness and contrast are adjusted to 30%-130%. This data enhancement of the training image dataset expands the diversity of the data.
In this embodiment, the image data is uniformly scaled to 576×576 as the input of the model, and a data normalization operation is applied so that the pixel values of the image are distributed over the interval [-1, 1].
Step S714: building a semantic segmentation network based on UNet. The UNet network consists of an encoder and a decoder, performing feature extraction and upsampling through multi-layer convolution and pooling operations to achieve pixel-level segmentation. The improved UNet network described above can also be selected to improve the measurement effect; for convenience of description, a traditional UNet network is used here.
The input image is defined as $X$ and the output segmentation result as $\hat{Y}$.

The encoder section is composed of a plurality of convolution blocks, where each convolution block consists of a convolution kernel $W$, an activation function $\sigma$, and a batch normalization operation BN.

The output of the encoder is an encoding feature map $F$. The mathematical representation of the encoder section is:

$$F = \mathrm{MaxPool}\big(\mathrm{BN}(\sigma(W * X))\big)$$

applied successively at each downsampling level.

The decoder section is composed of a plurality of upsampling layers and convolution blocks; each upsampling layer upsamples the feature map to a higher resolution and concatenates it (skip connection) with the feature map of the corresponding encoder level.

The output of the decoder is the segmentation result $\hat{Y}$. The mathematical representation of the decoder section is:

$$\hat{Y} = \mathrm{BN}\big(\sigma(W' * [\,\mathrm{Up}(F);\, F_{enc}\,])\big)$$

where $\mathrm{Up}$ denotes upsampling, $F_{enc}$ the skip-connected encoder feature map, and $[\,\cdot\,;\,\cdot\,]$ channel concatenation.
step S715: the training set after the enhancement processing in step S713 is input into the UNet network, and the segmentation result is output, and the process is one forward propagation.
For the first iteration, the model parameters need to be initialized: the UNet weights and parameters are initialized either randomly or with the weights of a pre-trained model.
Otherwise, the total number of training cycles is determined according to the size of the training dataset and the batch size.
Training within each cycle: in each cycle, the following steps are performed:
(1) Forward propagation: and inputting the training data set into the network model to obtain a predicted semantic segmentation result.
(2) Calculating loss: and calculating the loss error between the segmentation result and the real label by using a self-defined loss function.
(3) Back propagation: network parameters are updated by stochastic gradient descent on the loss function, and the network weights and parameters are modified using the AdamW algorithm to minimize the loss function.
(4) And (3) learning rate adjustment: according to the cosine annealing learning rate adjustment method, the learning rate is updated in each period.
Repeating the steps, and performing repeated iterative training on the whole training set until the preset training round number is reached or the stopping condition is met.
By performing back propagation with the self-defined loss function and AdamW (an optimization algorithm for model training), combined with the cosine annealing learning rate adjustment method, the network parameters are continuously corrected during training and the loss function gradually decreases. This improves the training stability and convergence speed of the network, overcomes the problem of local convergence, and further improves the accuracy of the network model in measuring the secondary sphere.
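As an illustration of the update in step (3), a minimal NumPy implementation of a decoupled-weight-decay (AdamW-style) step on a toy scalar loss; the hyperparameters are conventional defaults (assumptions, not from the patent), and weight decay is set to zero here so the toy problem converges to its exact minimum:

```python
import numpy as np

def adamw_step(w, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, wd=0.0):
    """One AdamW update: Adam moment estimates plus decoupled weight decay."""
    m = b1 * m + (1 - b1) * g                 # first-moment estimate
    v = b2 * v + (1 - b2) * g * g             # second-moment estimate
    m_hat = m / (1 - b1 ** t)                 # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * w)
    return w, m, v

# Toy loss L(w) = (w - 3)^2 with gradient 2(w - 3); minimum at w = 3.
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 501):
    g = 2.0 * (w - 3.0)
    w, m, v = adamw_step(w, g, m, v, t)
```

In the actual training, `w` would be the full set of UNet weights and `g` the gradient produced by back propagation, with `lr` supplied by the cosine annealing schedule each cycle.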
In training of the UNet network model, the loss function is:

$$L = \lambda\, L_{CE} + (1 - \lambda)\, L_{Dice}$$

The character explanations are detailed in the above embodiments and will not be repeated here.
Step S716: when the training number reaches the maximum iteration number, the training is ended. And (3) testing through the test set in the step S712, and storing the UNet network model with the highest accuracy as a measurement model for subsequent use.
Step S717: and applying the trained measurement model to secondary sphere particle identification, and carrying out secondary sphere identification and positioning on the input electron microscope image.
Step S718, locating the scale information display area. OCR character recognition is performed on the located display area, converting the characters into editable text. The proportion number is then extracted from the OCR recognition result and converted into numerical form.
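A sketch of the extraction-and-conversion step on the OCR text is given below; the helper name and the assumed scale-bar text format (a number followed by a length unit) are hypothetical, as the patent does not fix an exact format:

```python
import re

def parse_scale_text(text):
    """Extract a scale-bar value and unit (e.g. '500 nm', '2 um') from OCR
    output and return the bar length in nanometres, or None if absent."""
    m = re.search(r"(\d+(?:\.\d+)?)\s*(nm|um|µm|mm)", text)
    if not m:
        return None
    value, unit = float(m.group(1)), m.group(2)
    factor = {"nm": 1.0, "um": 1e3, "µm": 1e3, "mm": 1e6}[unit]
    return value * factor

print(parse_scale_text("500 nm"))  # 500.0
print(parse_scale_text("2 um"))    # 2000.0
```

Dividing the parsed bar length by the bar's pixel length in the image yields the nanometres-per-pixel ratio used for the size reduction in step S720.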
And step S719, judging the shape of the identified secondary ball by utilizing the shape characteristics and the geometric attributes, and judging whether the shape meets the regular shape of the secondary ball.
Step S720, determining the accurate position and boundary information of the secondary balls identified in step S717 or filtered in step S719. The size of the secondary ball, including but not limited to diameter, circumference or area, is calculated from its position and boundary information; then, using this size and the proportion number, the size magnified by the electron microscope is reduced to obtain the accurate actual size of the secondary ball.
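A minimal sketch of this size calculation and scale reduction from a single segmented sphere (the equivalent-circle diameter and the nm-per-pixel value are illustrative assumptions):

```python
import numpy as np

def sphere_size_from_mask(mask, nm_per_pixel):
    """Compute the area and equivalent-circle diameter of one segmented
    secondary sphere from a binary mask, converting from image pixels to
    real units; nm_per_pixel would come from the recognized scale bar."""
    area_px = int(mask.sum())                     # pixel count inside the mask
    diameter_px = 2.0 * np.sqrt(area_px / np.pi)  # equivalent-circle diameter
    return {
        "area_nm2": area_px * nm_per_pixel ** 2,
        "diameter_nm": diameter_px * nm_per_pixel,
    }

# Toy mask: a filled disc of radius 10 px on a 64x64 grid.
yy, xx = np.mgrid[:64, :64]
mask = ((yy - 32) ** 2 + (xx - 32) ** 2) <= 10 ** 2
out = sphere_size_from_mask(mask, nm_per_pixel=5.0)
```

With 5 nm per pixel, the 20-pixel-diameter disc comes out close to 100 nm across, i.e. the measured size reduced from the magnified image to real units.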
By using UNet networks for high precision segmentation of the secondary sphere in the electron microscope image, more accurate shape and boundary information can be provided.
In step S719, a filtering standard is set to further screen the identified secondary balls, eliminating irregular or non-secondary-ball interfering objects and improving the accuracy and reliability of secondary ball identification.
And extracting the proportion information in the secondary ball image by using an OCR character recognition technology, and measuring the size of the secondary ball to realize accurate size measurement of the secondary ball.
The embodiment can automatically realize secondary ball segmentation, can improve the working efficiency, and reduces the time and cost of manual operation. At the same time, accurate segmentation and dimensional measurement results also help to improve the quality and efficiency of research and production.
In one embodiment of the present application, there is provided a secondary sphere size measuring apparatus of an electron microscope image, the secondary sphere size measuring apparatus of an electron microscope image including: the device comprises an image acquisition unit, a secondary ball detection unit, a proportion information acquisition unit, a first size acquisition unit and a second size acquisition unit, wherein the specific contents comprise:
and the image acquisition unit is used for acquiring an image to be measured output by the electron microscope.
The secondary ball detection unit is used for inputting the image to be measured into the deep-learning-based measurement model, to obtain the secondary sphere detected by the measurement model in the image to be measured.
And the proportion information acquisition unit is used for extracting proportion information displayed in the image to be measured by the electron microscope from the image to be measured.
The first size acquisition unit is used for determining the position and boundary information of the secondary ball in the image to be measured, and determining the size of the secondary ball in the image to be measured according to the position and the boundary information.
And the second size acquisition unit is used for calculating the actual size of the secondary ball according to the size and the proportion information of the secondary ball in the image to be measured.
It should be noted that the secondary sphere size measurement device of the electron microscope image provided in this embodiment and the secondary sphere size measurement method embodiment of the electron microscope image are based on the same inventive concept, so that the relevant content of the secondary sphere size measurement method embodiment of the electron microscope image is also applicable to the secondary sphere size measurement device embodiment of the electron microscope image, and will not be described in detail herein.
In this embodiment, computer vision technology is used: a measurement model detects the secondary sphere in the image, scale information is identified from the image, the size of the secondary sphere in the image to be measured is determined based on the identified position and boundary information, and the actual size is then calculated from that size and the proportion number of the electron microscope magnification. The device realizes high-precision identification of the secondary ball in the electron microscope image, rapidly and accurately locates its position and boundary information, automatically identifies the proportion information carried by the image to be measured, and then automatically calculates the size of the secondary ball from the position, boundary and proportion information, improving the accuracy and efficiency of secondary ball size measurement.
As shown in fig. 11, the embodiment of the present application further provides an electronic device, where the electronic device includes:
At least one memory;
At least one processor;
at least one program;
The program is stored in the memory, and the processor executes at least one program to implement the secondary sphere size measurement method of the electron microscope image described above.
The electronic device may be any intelligent terminal, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a vehicle-mounted computer, and the like.
The electronic device according to the embodiment of the application is described in detail below.
The processor 1600 may be implemented by a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, etc., for executing related programs to implement the technical solutions provided by the embodiments of the present invention;
The memory 1700 may be implemented in the form of read-only memory (ROM), static storage, dynamic storage, or random access memory (RAM). Memory 1700 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present disclosure are implemented by software or firmware, the relevant program code is stored in memory 1700 and invoked by processor 1600 to perform the secondary sphere size measurement method of the electron microscope image of the embodiment of the present invention.
An input/output interface 1800 for implementing information input and output;
The communication interface 1900 is used for realizing communication interaction between the device and other devices, and can realize communication in a wired manner (such as USB, network cable, etc.), or can realize communication in a wireless manner (such as mobile network, WIFI, bluetooth, etc.);
Bus 2000, which transfers information between the various components of the device (e.g., processor 1600, memory 1700, input/output interface 1800, and communication interface 1900);
Wherein processor 1600, memory 1700, input/output interface 1800, and communication interface 1900 enable communication connections within the device between each other via bus 2000.
The embodiment of the invention also provides a storage medium which is a computer readable storage medium, wherein the computer readable storage medium stores computer executable instructions for causing a computer to execute the secondary sphere size measurement method of the electron microscope image.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device.
In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the present invention are for more clearly describing the technical solutions of the embodiments of the present invention, and do not constitute a limitation on the technical solutions provided by the embodiments of the present invention, and those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present invention are applicable to similar technical problems.
It will be appreciated by persons skilled in the art that the embodiments of the invention are not limited by the illustrations, and that more or fewer steps than those shown may be included, or certain steps may be combined, or different steps may be included.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein.
Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship.
"At least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including multiple instructions for causing an electronic device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application.
The aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other various media capable of storing a program.
While the preferred embodiments of the present application have been described in detail, the embodiments of the present application are not limited to the above-described embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the embodiments of the present application, and these equivalent modifications or substitutions are included in the scope of the embodiments of the present application as defined in the appended claims.

Claims (12)

1. A secondary sphere size measurement method of an electron microscope image, characterized by comprising:
acquiring an image to be measured output by an electron microscope;
inputting the image to be measured into a deep-learning-based measurement model to obtain the secondary sphere detected by the measurement model in the image to be measured;
extracting, from the image to be measured, the scale information displayed in the image to be measured by the electron microscope;
determining the position and boundary information of the secondary sphere in the image to be measured, and determining the size of the secondary sphere in the image to be measured according to the position and boundary information;
calculating the actual size of the secondary sphere according to the size of the secondary sphere in the image to be measured and the scale information;
wherein the training method of the measurement model comprises:
selecting electron microscope image samples containing annotated secondary sphere position and boundary information;
selecting a semantic segmentation network, and training the semantic segmentation network with the image samples to obtain the trained measurement model; the semantic segmentation network is a semantic segmentation network based on an improved UNet; the improved-UNet-based semantic segmentation network comprises a UNet-based encoder and decoder architecture, wherein the encoder and the decoder each comprise a plurality of improved convolution blocks, each improved convolution block comprises a first network branch and a second network branch, the first network branch comprises, in sequence, an upsampling layer, two groups of first convolution layers, and a downsampling layer, a skip connection is used between the first group of first convolution layers and the second group of first convolution layers, and each first convolution layer comprises one 3×1 convolution and one 1×3 convolution; the second network branch comprises, in sequence, a downsampling layer, two groups of second convolution layers, and an upsampling layer, a skip connection is used between the first group of second convolution layers and the second group of second convolution layers, and each second convolution layer comprises one 3×1 convolution and one 1×3 convolution;
wherein the processing of input features by the improved convolution block comprises:
inputting the input features into the first network branch and the second network branch respectively, to obtain a first scale feature output by the first network branch and a second scale feature output by the second network branch;
fusing the first scale feature and the second scale feature to obtain a fused feature, and taking the fused feature as the input feature of the next improved convolution block or as the output feature of the improved-UNet-based semantic segmentation network.
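The last two steps of claim 1 reduce to a pixel-to-physical proportion: once the sphere's extent in pixels and the scale bar's pixel length and physical value are known, the actual size follows by ratio. A minimal sketch of that conversion (the function and parameter names are illustrative, not from the patent):

```python
def actual_size(sphere_pixels: float,
                scale_bar_pixels: float,
                scale_bar_value_um: float) -> float:
    """Convert a diameter measured in image pixels to micrometres.

    sphere_pixels      -- secondary-sphere diameter derived from boundary info
    scale_bar_pixels   -- pixel length of the scale bar shown in the image
    scale_bar_value_um -- physical value printed next to the scale bar
    """
    if scale_bar_pixels <= 0:
        raise ValueError("scale bar length must be positive")
    return sphere_pixels / scale_bar_pixels * scale_bar_value_um
```

For example, a sphere spanning 50 px in an image whose 100 px scale bar represents 10 µm measures 5 µm.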
2. The secondary sphere size measurement method of an electron microscope image according to claim 1, wherein after the measurement model detects the secondary sphere in the image to be measured, the method further comprises:
converting the shape characteristics and geometric properties of a preset standard secondary sphere into standard parameter values;
wherein the determining the size of the secondary sphere in the image to be measured according to the position and boundary information comprises:
if a secondary sphere detected by the measurement model in the image to be measured meets the standard parameter values, determining, according to the position and boundary information, the size in the image to be measured of the secondary sphere meeting the standard parameter values;
and the calculating the actual size of the secondary sphere according to the size of the secondary sphere in the image to be measured and the scale information comprises:
calculating the actual size of the secondary sphere meeting the standard parameter values according to its size in the image to be measured and the scale information.
3. The secondary sphere size measurement method of an electron microscope image according to claim 1 or 2, wherein the extracting, from the image to be measured, the scale information displayed in the image to be measured by the electron microscope comprises:
locating the scale information display area displayed in the image to be measured by the electron microscope;
and extracting the scale information from the scale information display area with a character recognition tool.
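After the character recognition tool of claim 3 returns the scale-bar caption as text, the numeric value and unit still have to be parsed out. A sketch of that parsing step (the supported unit set and the micrometre output unit are assumptions; the patent does not specify them):

```python
import re

# Conversion factors to micrometres for units commonly printed on SEM scale bars.
# Both the micro sign (U+00B5) and Greek mu (U+03BC) appear in OCR output.
_UNIT_TO_UM = {"nm": 1e-3, "um": 1.0, "µm": 1.0, "μm": 1.0, "mm": 1e3}

def parse_scale_text(text: str) -> float:
    """Parse OCR output such as '10 μm' or '500nm' into micrometres."""
    m = re.search(r"(\d+(?:\.\d+)?)\s*(nm|um|µm|μm|mm)", text)
    if m is None:
        raise ValueError(f"no scale value found in {text!r}")
    value, unit = float(m.group(1)), m.group(2)
    return value * _UNIT_TO_UM[unit]
```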
4. The method of claim 1, wherein training the semantic segmentation network with the image samples comprises:
setting an iteration ending condition for the semantic segmentation network;
performing repeated iterative training of the semantic segmentation network with the image samples until the iteration ending condition is reached, ending the iteration, and obtaining the measurement model;
wherein one iteration of the training of the semantic segmentation network comprises:
inputting the image samples into the semantic segmentation network to obtain a segmentation result of the semantic segmentation network on the image samples;
calculating a loss value between the segmentation result and the labels of the image samples according to a loss function;
optimizing the weights and parameters of the semantic segmentation network by back propagation according to the loss value;
and adjusting the learning rate of the semantic segmentation network according to a cosine annealing schedule with adaptive period.
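The learning-rate adjustment in the last step of claim 4 can be sketched with the standard cosine-annealing formula; the patent does not give the formula itself, so the form lr(t) = lr_min + ½(lr_max − lr_min)(1 + cos(πt/T)) and the default rates below are assumptions:

```python
import math

def cosine_annealing_lr(step: int, period: int,
                        lr_max: float = 1e-3, lr_min: float = 1e-6) -> float:
    """Learning rate at `step` within a cosine-annealing period of length `period`."""
    t = step % period  # restart (warm restart) at the beginning of each period
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / period))
```

The rate starts at lr_max, decays along a half cosine to lr_min at the end of the period, then restarts; an adaptive period would vary `period` between restarts.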
5. The secondary sphere size measurement method of an electron microscope image according to claim 4, wherein the loss function comprises a set cross entropy loss function and/or a Dice loss function, wherein the cross entropy loss function is:
L_CE = -(1/N) · Σ_{i=1..N} Σ_{c=1..C} y_{i,c} · log(p_{i,c})
where N represents the number of pixel points, C represents the number of categories, p_{i,c} represents the probability that pixel point i in the segmentation result belongs to category c, and y_{i,c} represents the label value of pixel point i belonging to category c in the label;
and the Dice loss function is:
L_Dice = 1 - (2 · Σ_i p_i · g_i + ε) / (Σ_i p_i + Σ_i g_i + ε)
where p_i represents the value of pixel point i in the segmentation result, g_i represents the value of pixel point i in the label, and ε represents a preset positive number.
6. The method of claim 5, wherein when the loss function comprises both the cross entropy loss function and the Dice loss function, the loss function is:
L = λ · L_CE + (1 - λ) · L_Dice
where λ is a preset weight.
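The losses of claims 5 and 6 combine pixel-wise cross entropy with a Dice term. A pure-Python sketch under the stated definitions (the smoothing constant ε and the convex-combination weighting λ·CE + (1−λ)·Dice are assumptions where the patent leaves constants unspecified):

```python
import math

def cross_entropy(p, y, eps=1e-12):
    """-(1/N) * sum_i sum_c y[i][c] * log(p[i][c]); p: per-class probabilities, y: one-hot labels."""
    n = len(p)
    total = 0.0
    for p_i, y_i in zip(p, y):
        for p_ic, y_ic in zip(p_i, y_i):
            total += y_ic * math.log(max(p_ic, eps))  # clip to avoid log(0)
    return -total / n

def dice_loss(p, g, eps=1e-6):
    """1 - (2*sum(p*g) + eps) / (sum(p) + sum(g) + eps) over flat per-pixel values."""
    inter = sum(pi * gi for pi, gi in zip(p, g))
    return 1.0 - (2.0 * inter + eps) / (sum(p) + sum(g) + eps)

def combined_loss(p_probs, y_onehot, p_flat, g_flat, lam=0.5):
    """Assumed combination form: lam * CE + (1 - lam) * Dice."""
    return lam * cross_entropy(p_probs, y_onehot) + (1 - lam) * dice_loss(p_flat, g_flat)
```

A perfect prediction drives both terms to (near) zero, while a fully wrong mask pushes the Dice term toward 1.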
7. The method of claim 4, further comprising, before inputting the image samples into the semantic segmentation network:
preprocessing the image samples, wherein the preprocessing comprises at least one of image denoising, resizing, contrast enhancement, and histogram equalization.
8. The method of claim 7, further comprising, before inputting the image samples into the semantic segmentation network:
randomly cropping the image samples by 20%;
rotating the cropped image samples, the rotation angle ranging from -10° to 10°;
and adjusting the brightness and contrast of the rotated image samples within the range of 30% to 130%.
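The augmentation of claim 8 amounts to drawing per-sample parameters inside the claimed ranges. A sketch of that sampling (the uniform distribution and the dictionary keys are assumptions for illustration):

```python
import random

def sample_augmentation(rng: random.Random) -> dict:
    """Draw one set of augmentation parameters within the ranges of claim 8."""
    return {
        "crop_ratio": 0.20,                        # fixed random crop of 20%
        "rotation_deg": rng.uniform(-10.0, 10.0),  # rotation angle in [-10°, 10°]
        "brightness": rng.uniform(0.30, 1.30),     # brightness factor, 30%-130%
        "contrast": rng.uniform(0.30, 1.30),       # contrast factor, 30%-130%
    }
```

Passing an explicit `random.Random` instance keeps the augmentation reproducible across training runs.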
9. The method of claim 8, further comprising, before inputting the image samples into the semantic segmentation network:
scaling the image samples to 576×576;
and normalizing the data of the image samples scaled to 576×576 so that the pixel values of the image samples are distributed within the interval [-1, 1].
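The normalization in claim 9 maps 8-bit pixel values into [-1, 1]. A one-line sketch assuming the usual linear mapping v/127.5 − 1 (the patent claims only the target interval, not the mapping):

```python
def normalize_pixel(v: int) -> float:
    """Map an 8-bit pixel value (0..255) linearly into [-1.0, 1.0]."""
    if not 0 <= v <= 255:
        raise ValueError("expected an 8-bit pixel value")
    return v / 127.5 - 1.0

def normalize_image(pixels):
    """Normalize a flat list of 8-bit pixel values."""
    return [normalize_pixel(v) for v in pixels]
```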
10. A secondary sphere size measurement apparatus of an electron microscope image, characterized in that the secondary sphere size measurement apparatus of an electron microscope image comprises:
an image acquisition unit, configured to acquire an image to be measured output by an electron microscope;
a secondary sphere detection unit, configured to input the image to be measured into a deep-learning-based measurement model to obtain the secondary sphere detected by the measurement model in the image to be measured;
a scale information obtaining unit, configured to extract, from the image to be measured, the scale information displayed in the image to be measured by the electron microscope;
a first size obtaining unit, configured to determine the position and boundary information of the secondary sphere in the image to be measured, and determine the size of the secondary sphere in the image to be measured according to the position and boundary information;
a second size obtaining unit, configured to calculate the actual size of the secondary sphere according to the size of the secondary sphere in the image to be measured and the scale information;
wherein the training method of the measurement model comprises:
selecting electron microscope image samples containing annotated secondary sphere position and boundary information;
selecting a semantic segmentation network, and training the semantic segmentation network with the image samples to obtain the trained measurement model; the semantic segmentation network is a semantic segmentation network based on an improved UNet; the improved-UNet-based semantic segmentation network comprises a UNet-based encoder and decoder architecture, wherein the encoder and the decoder each comprise a plurality of improved convolution blocks, each improved convolution block comprises a first network branch and a second network branch, the first network branch comprises, in sequence, an upsampling layer, two groups of first convolution layers, and a downsampling layer, a skip connection is used between the first group of first convolution layers and the second group of first convolution layers, and each first convolution layer comprises one 3×1 convolution and one 1×3 convolution; the second network branch comprises, in sequence, a downsampling layer, two groups of second convolution layers, and an upsampling layer, a skip connection is used between the first group of second convolution layers and the second group of second convolution layers, and each second convolution layer comprises one 3×1 convolution and one 1×3 convolution;
wherein the processing of input features by the improved convolution block comprises:
inputting the input features into the first network branch and the second network branch respectively, to obtain a first scale feature output by the first network branch and a second scale feature output by the second network branch;
fusing the first scale feature and the second scale feature to obtain a fused feature, and taking the fused feature as the input feature of the next improved convolution block or as the output feature of the improved-UNet-based semantic segmentation network.
11. An electronic device, comprising: at least one control processor and a memory communicatively connected to the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the secondary sphere size measurement method of an electron microscope image according to any one of claims 1 to 9.
12. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the secondary sphere size measurement method of an electron microscope image according to any one of claims 1 to 9.
CN202410100478.5A 2024-01-24 2024-01-24 Method, device and equipment for measuring secondary sphere size of electron microscope image Active CN117635694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410100478.5A CN117635694B (en) 2024-01-24 2024-01-24 Method, device and equipment for measuring secondary sphere size of electron microscope image


Publications (2)

Publication Number Publication Date
CN117635694A CN117635694A (en) 2024-03-01
CN117635694B true CN117635694B (en) 2024-04-19

Family

ID=90032360


Country Status (1)

Country Link
CN (1) CN117635694B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111210418A (en) * 2020-01-09 2020-05-29 中国电建集团华东勘测设计研究院有限公司 Method for inspecting municipal water supply pipeline by using transparent camera ball
KR20200106763A (en) * 2019-03-05 2020-09-15 서울대학교산학협력단 Method for analyzing electron microscopy image
CN112419293A (en) * 2020-12-01 2021-02-26 杭州市第一人民医院 Method, device, apparatus and storage medium for counting cells in container
CN116399314A (en) * 2023-06-02 2023-07-07 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Calibrating device for photogrammetry and measuring method thereof
CN116953006A (en) * 2023-07-05 2023-10-27 长三角先进材料研究院 Casting material scanning electron microscope image defect identification and quantification method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9696897B2 (en) * 2011-10-19 2017-07-04 The Regents Of The University Of California Image-based measurement tools
CL2017000574A1 (en) * 2017-03-09 2018-02-23 Lmagne Ingenieria Ltda A system and a process to determine online the characteristics of spent balls and the pieces thereof, which have been expelled from a semi-autogenous mineral grinding mill (sag)
US20230368374A1 (en) * 2022-05-13 2023-11-16 City University Of Hong Kong Label-free Liquid Biopsy-based Disease Model, Analytical Platform and Method for Predicting Disease Prognosis


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DS-UNet: A dual streams UNet for refined image forgery localization; Yuanhang Huang et al.; Information Sciences; 2022-09-30; vol. 610; pp. 73-89 *
FusionNet: A Deep Fully Residual Convolutional Neural Network for Image Segmentation in Connectomics; Tran Minh Quan et al.; Frontiers in Computer Science; 2021-03-12; vol. 3; pp. 1-12 *
ERDUnet: An Efficient Residual Double-coding Unet for Medical Image Segmentation; Hao Li et al.; IEEE Transactions on Circuits and Systems for Video Technology; pp. 1-15 *
Benjamin Planche et al.; Hands-On Computer Vision with TensorFlow 2 (《计算机视觉实战 基于TensorFlow 2》); China Machine Press; 2021; p. 145 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant