CN111798452A - Carotid artery handheld ultrasonic image segmentation method, system and device - Google Patents
- Publication number
- CN111798452A CN111798452A CN202010639059.0A CN202010639059A CN111798452A CN 111798452 A CN111798452 A CN 111798452A CN 202010639059 A CN202010639059 A CN 202010639059A CN 111798452 A CN111798452 A CN 111798452A
- Authority
- CN
- China
- Prior art keywords
- carotid artery
- hidden layer
- ultrasonic image
- size
- handheld ultrasonic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/10 — Image analysis: Segmentation; Edge detection
- G06N3/045 — Neural networks: Architecture; Combinations of networks
- G06N3/08 — Neural networks: Learning methods
- G06T2207/10132 — Image acquisition modality: Ultrasound image
- G06T2207/20081 — Special algorithmic details: Training; Learning
- G06T2207/20084 — Special algorithmic details: Artificial neural networks [ANN]
- G06T2207/30101 — Subject of image: Blood vessel; Artery; Vein; Vascular
Abstract
The invention discloses a carotid artery handheld ultrasonic image segmentation method, system and device based on a neural structure search network. The method comprises the following steps: acquiring a carotid artery handheld ultrasonic image and resizing it to a predetermined size; inputting the resized image into a pre-trained hnasnet model, which searches the network structure and the unit structure simultaneously and outputs a semantic segmentation result of the carotid artery handheld ultrasonic image. The invention can assist doctors in analyzing ultrasonic images and reduce their workload, which is of significant practical value.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a carotid artery handheld ultrasonic image segmentation method, system and device based on a neural structure search network.
Background
Computed Tomography (CT) performs cross-sectional scans around a selected part of the human body using precisely collimated X-ray beams, gamma rays, ultrasonic waves, etc., together with detectors of extremely high sensitivity. It offers fast scanning and clear images and can be used to examine a wide range of diseases. CT can be classified according to the radiation used: X-ray CT (X-CT) and gamma-ray CT (γ-CT).
Ultrasound (US) medicine is a combined science of acoustics, medicine, optics and electronics; it applies acoustic technology at frequencies above the audible range to the medical field. The US-CT image is an important means for doctors to perform preliminary screening of a patient's condition: its acquisition cost is lower than that of an X-CT image, it involves no electromagnetic radiation, and it is safer for the patient; its drawback is that the imaging is less clear than in X-CT. Handheld ultrasonic devices are portable ultrasonic devices developed in recent years: compared with the large-scale ultrasonic equipment used in hospitals, they are cheaper, easier to carry, easier for small institutions or individuals to obtain and simpler to operate, though their imaging is less clear than that of large-scale equipment.
The significance of US-CT images lies in preliminary screening, especially for handheld ultrasound images. If ultrasound screening could be made widely available in communities and to individuals, the burden on large tertiary (Grade 3A) hospitals would be greatly relieved, and the time and economic costs borne by patients would also be greatly reduced. Therefore, a technical scheme for performing semantic analysis on US-CT images is urgently needed.
Disclosure of Invention
The invention aims to provide a carotid artery handheld ultrasonic image segmentation method, a system and a device based on a neural structure search network, and aims to solve the problems in the prior art.
The invention provides a carotid artery handheld ultrasonic image segmentation method based on a neural structure search network, which comprises the following steps:
acquiring a carotid artery handheld ultrasonic image, and performing size processing on the carotid artery handheld ultrasonic image to obtain a carotid artery handheld ultrasonic image with a preset size;
inputting a carotid artery handheld ultrasonic image with a preset size into a pre-trained hnasnet model, simultaneously searching a network structure and a unit structure through the hnasnet model, and outputting a semantic segmentation result of the carotid artery handheld ultrasonic image.
The invention provides a carotid artery handheld ultrasonic image segmentation system based on a neural structure search network, which comprises:
the preprocessing module is used for acquiring a carotid artery handheld ultrasonic image and performing size processing on the carotid artery handheld ultrasonic image to obtain a carotid artery handheld ultrasonic image with a preset size;
and the semantic segmentation module is used for inputting the carotid artery handheld ultrasonic image with the preset size into a pre-trained hnasnet model, searching a network structure and a unit structure simultaneously through the hnasnet model, and outputting a semantic segmentation result of the carotid artery handheld ultrasonic image.
The embodiment of the invention also provides a carotid artery handheld ultrasonic image segmentation device based on a neural structure search network, comprising: a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the steps of the carotid artery handheld ultrasonic image segmentation method based on the neural structure search network.
The embodiment of the invention also provides a computer-readable storage medium on which a program is stored; when the program is executed by a processor, the steps of the carotid artery handheld ultrasonic image segmentation method based on the neural structure search network are implemented.
By adopting the embodiment of the invention, a pixel-level semantic segmentation result can be output by the model from the US-CT image acquired by the handheld ultrasonic device; this result can assist a doctor in analyzing the ultrasonic image and reduce the doctor's workload, which is of significant practical value.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a method for hand-held segmentation of ultrasound images of carotid arteries in accordance with an embodiment of the present invention;
FIG. 2 is a schematic illustration of an original image of a trained hnasnet model according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a first processing manner of an original image according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a second processing manner of an original image according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the prior art pnasnet model according to an embodiment of the present invention;
FIG. 6 is a schematic representation of a good result output by the pnasnet model of an embodiment of the present invention;
FIG. 7 is a schematic representation of a poor result output by the pnasnet model of an embodiment of the present invention;
FIG. 8 is a schematic diagram of a null-scan result output by the pnasnet model of an embodiment of the present invention;
FIG. 9 is a schematic diagram of the hnasnet model of an embodiment of the present invention;
FIG. 10 is a schematic diagram of a network structure and cell structure according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a carotid artery handheld ultrasound image segmentation system in accordance with an embodiment of the invention;
FIG. 12 is a schematic diagram of a handheld ultrasound image segmentation apparatus for carotid arteries, in accordance with an embodiment of the invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise. Furthermore, the terms "mounted," "connected," and "coupled" are to be construed broadly and may, for example, denote fixed, detachable, or integral connection; mechanical or electrical connection; direct connection or indirect connection through intervening media, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific cases.
Method embodiment
According to an embodiment of the invention, a carotid artery handheld ultrasonic image segmentation method based on a neural structure search network is provided; fig. 1 is a flow chart of the method. The embodiment requires a trained hnasnet model, which is a deep neural network model, and a data set is needed first to obtain it. The original images are US-CT grayscale images of unfixed width and height acquired by a handheld ultrasonic device, 20460 images in total, of which 17771 are used as the training and validation sets and 2689 as the test set. The data type is uint8 with a range of 0-255; an original image such as that of FIG. 2 has a height of 1004 pixels and a width of 1712 pixels.
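As a rough illustration of the dataset split above — the filename pattern and the fixed seed are hypothetical, since the patent only specifies the counts (20460 total, 17771 training/validation, 2689 test) — a sketch in Python:

```python
import random

# Hypothetical filenames -- the patent only shows one example, "frm-0005.png".
all_images = [f"frm-{i:04d}.png" for i in range(20460)]

random.seed(0)                      # illustrative fixed seed for reproducibility
random.shuffle(all_images)

test_set = all_images[:2689]        # 2689 test images
train_val_set = all_images[2689:]   # 17771 training + validation images
```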
The original labels are classification labels annotated by experts using a labeling tool, which generates one JSON file per image. The label file corresponding to fig. 2 reads as follows:
{"shapes":[{"label":"Plaque","line_color":null,"fill_color":null,"points":[[753.0,544.0],[814.0,537.0],[878.0,517.0],[898.0,528.0],[870.0,552.0],[844.0,566.0],[802.0,576.0],[776.0,567.0]]},{"label":"CA","line_color":null,"fill_color":null,"points":[[726.0,371.0],[690.0,422.0],[670.0,488.0],[703.0,557.0],[761.0,598.0],[859.0,598.0],[930.0,527.0],[934.0,446.0],[876.0,362.0],[788.0,350.0]]},{"label":"JV","line_color":null,"fill_color":null,"points":[[373.0,373.0],[471.0,406.0],[589.0,478.0],[650.0,461.0],[680.0,381.0],[597.0,357.0],[477.0,334.0],[374.0,314.0]]}],"lineColor":[0,255,0,128],"fillColor":[255,0,0,128],"imagePath":"frm-0005.png","imageData":null}。
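The polygon annotations in such a JSON file can be rasterized into a per-pixel class mask. The sketch below (using NumPy and Pillow) assumes an illustrative class convention — 0 = background, 1 = CA (carotid artery), 2 = JV (jugular vein), 3 = plaque — and a draw order in which plaque, which lies inside the artery, is drawn last; neither detail is specified by the patent.

```python
import json
import numpy as np
from PIL import Image, ImageDraw

# Assumed class indices: 0=background, 1=artery (CA), 2=vein (JV), 3=plaque.
CLASS_IDS = {"CA": 1, "JV": 2, "Plaque": 3}

def json_label_to_mask(label_json: str, height: int, width: int) -> np.ndarray:
    """Rasterize the labeling tool's polygon annotations into a per-pixel
    class mask of shape (height, width), dtype uint8."""
    data = json.loads(label_json)
    mask = Image.new("L", (width, height), 0)     # background = 0
    draw = ImageDraw.Draw(mask)
    # Draw in ascending class order so plaque (drawn last) overwrites
    # the artery region it lies inside.
    for shape in sorted(data["shapes"], key=lambda s: CLASS_IDS[s["label"]]):
        pts = [tuple(p) for p in shape["points"]]
        draw.polygon(pts, fill=CLASS_IDS[shape["label"]])
    return np.array(mask, dtype=np.uint8)
```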
as shown in fig. 1, the method for segmenting a handheld ultrasound image of a carotid artery according to an embodiment of the present invention specifically includes:
step 101, obtaining a carotid artery handheld ultrasonic image, and performing size processing on the carotid artery handheld ultrasonic image (namely an original image) to obtain a carotid artery handheld ultrasonic image with a preset size;
in step 101, the size processing of the carotid artery hand-held ultrasound image can include the following two ways:
1. resizing the acquired carotid artery handheld ultrasound image to a size of 96 × 96; or,
2. resizing the acquired carotid artery handheld ultrasound image to a size of 128 × 128, and then cropping the 128 × 128 image into several 96 × 96 images.
Specifically, as shown in fig. 3 and 4, 2 image processing methods are used. The first is to directly resize the original image to 96 × 96. The second adds a multi-crop (multi-scale) strategy on top of the first: after the original image is resized to 128 × 128, the top-left, top-right, bottom-left, bottom-right and center crops are taken, giving 5 images of 96 × 96.
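The two preprocessing modes can be sketched as follows; the nearest-neighbour resize is a stand-in for whatever resize routine the implementation actually uses:

```python
import numpy as np

def resize_nn(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize (illustrative stand-in for any image resize)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def five_crop(img: np.ndarray, size: int = 96) -> list:
    """Second mode: from a 128x128 image, take the four corner crops
    plus the center crop, each size x size."""
    h, w = img.shape[:2]
    c0, c1 = (h - size) // 2, (w - size) // 2
    return [
        img[:size, :size],                 # top-left
        img[:size, -size:],                # top-right
        img[-size:, :size],                # bottom-left
        img[-size:, -size:],               # bottom-right
        img[c0:c0 + size, c1:c1 + size],   # center
    ]

original = np.zeros((1004, 1712), dtype=np.uint8)    # e.g. the FIG. 2 image
mode1 = resize_nn(original, 96, 96)                  # first mode: one 96x96 image
mode2 = five_crop(resize_nn(original, 128, 128))     # second mode: five 96x96 crops
```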
And 102, inputting the carotid artery handheld ultrasonic image with the preset size into a pre-trained hnasnet model, simultaneously searching a network structure and a unit structure through the hnasnet model, and outputting a semantic segmentation result of the carotid artery handheld ultrasonic image.
In step 102, when selecting the hnasnet model, the inventors found that the existing approach uses the pnasnet model shown in fig. 5, which searches for an optimal unit structure on top of a manually designed network structure. After training for 8 × 10^6 iterations (steps), the model reached a performance bottleneck and its accuracy no longer improved. Since the existing model did not achieve the expected effect, other models had to be tried to obtain a better result. FIGS. 6-8 show 3 results from the pnasnet model. Each result is divided into 3 parts: the middle is the original image, the left is the ground truth labeled by the doctor, and the right is the model's prediction; the classes include artery, vein, plaque and background. FIG. 6 is a typical case with a good result; FIG. 7 shows that in complex cases the model's prediction is slightly worse; fig. 8 is a null-scan image output by the ultrasound probe when no patient is being examined.
In order to improve the semantic segmentation performance, the embodiment of the present invention uses an hnasnet model, as shown in fig. 9, which employs a hierarchical neural structure search to search the network structure and the unit structure simultaneously.
In step 102, the searching for the network structure and the unit structure simultaneously through the hnasnet model specifically includes:
for the network structure determined according to the formulas 1-3, 12 units are set, 4, 8, 16 and 32 downsampling spaces are adopted, namely the dimension of the feature map space is 4, 8, 16 and 32 times of that of the original image, and the features are extracted through the last unit by adopting a hole space convolution pooling pyramid ASPP method.
That is, as shown in the left side of fig. 10, for the network structure the embodiment of the present invention adopts downsampling factors of 4, 8, 16 and 32, i.e. the feature-map spatial dimension is 1/4, 1/8, 1/16 or 1/32 of that of the original image. Between adjacent cells the downsampling factor is multiplied by 2 (downsampling), 1 (same scale) or 0.5 (upsampling). There are 12 cells in total in the network, and the last cell extracts features through an atrous spatial pyramid pooling (ASPP) module:

$${}^{s}H^{l} = \beta_{\frac{s}{2}\rightarrow s}^{l}\,\mathrm{Cell}\left({}^{\frac{s}{2}}H^{l-1},\, {}^{s}H^{l-2};\, \alpha\right) + \beta_{s\rightarrow s}^{l}\,\mathrm{Cell}\left({}^{s}H^{l-1},\, {}^{s}H^{l-2};\, \alpha\right) + \beta_{2s\rightarrow s}^{l}\,\mathrm{Cell}\left({}^{2s}H^{l-1},\, {}^{s}H^{l-2};\, \alpha\right) \quad \text{(formula 1)}$$

$$\beta_{\frac{s}{2}\rightarrow s}^{l} + \beta_{s\rightarrow s}^{l} + \beta_{2s\rightarrow s}^{l} = 1 \quad \text{(formula 2)} \qquad \beta \geq 0 \quad \text{(formula 3)}$$

where $l$ denotes the $l$-th layer and $s$ the downsampling factor; ${}^{s}H^{l}$ is the hidden-layer output under the $l$ and $s$ conditions; $s/2 \rightarrow s$ denotes downsampling, $s \rightarrow s$ same-scale sampling and $2s \rightarrow s$ upsampling; $\beta$ are the weights of the downsampling, same-scale and upsampling transitions; $\mathrm{Cell}(\cdot)$ denotes the cell structure. The cell output of layer $l$ is related to the hidden-layer output $H^{l-1}$ of layer $l-1$, the hidden-layer output $H^{l-2}$ of layer $l-2$ and the cell parameter $\alpha$ of layer $l$. Formulas 2 and 3 are the constraints on the weights.
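A toy NumPy sketch of the network-level relaxation in formulas 1-3; the cell here is a placeholder, the resampling of the three input branches to the common scale s is omitted, and the softmax is one assumed way to satisfy the constraints of formulas 2-3:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cell(h_prev, h_prev_prev, alpha=None):
    """Placeholder for the searched cell; a trivial illustrative mix."""
    return 0.5 * (h_prev + h_prev_prev)

def network_layer(h_down, h_same, h_up, h_prev_prev, beta_logits, alpha=None):
    """Formula 1: the scale-s output of layer l is a beta-weighted sum of
    cells fed from the s/2, s and 2s branches of layer l-1. The softmax
    makes the beta weights non-negative and sum to 1 (formulas 2-3)."""
    beta = softmax(beta_logits)
    return (beta[0] * cell(h_down, h_prev_prev, alpha)
          + beta[1] * cell(h_same, h_prev_prev, alpha)
          + beta[2] * cell(h_up, h_prev_prev, alpha))
```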
For the cell structure, determined according to formulas 4-7, a densely connected pattern is adopted. There are 5 hidden layers in the cell; the input of a hidden layer $H_i$ is connected to the outputs of all previous hidden layers in the cell, as well as to the outputs of the nearest 2 previous cells. For a hidden layer H there are 8 candidate operations, of which 2 (repetition allowed) are selected as the two branches of H. The 8 candidate operations specifically comprise: 3 × 3 depthwise separable convolution, 5 × 5 depthwise separable convolution, 3 × 3 atrous convolution with rate 2, 5 × 5 atrous convolution with rate 2, 3 × 3 average pooling, 3 × 3 max pooling, identity (skip connection), and no connection (zero).
That is, as shown on the right side of fig. 10, a densely connected pattern is adopted for the cell structure. There are 5 hidden layers (blocks) in the cell; the input of a hidden layer $H_i^l$ is connected to the outputs of all previous blocks in the cell, as well as to the outputs of the nearest 2 previous cells. For example, the input of $H_5^l$ is connected with $H_1^l$ through $H_4^l$, and with the cell outputs $H^{l-1}$ and $H^{l-2}$. For a hidden layer H there are 8 candidate operations: 3 × 3 depthwise separable convolution, 5 × 5 depthwise separable convolution, 3 × 3 atrous convolution with rate 2, 5 × 5 atrous convolution with rate 2, 3 × 3 average pooling, 3 × 3 max pooling, identity (skip connection), and no connection (zero). Of these, 2 operations (repetition allowed) are selected as the two branches of one hidden layer H (block):

$$H_{i}^{l} = \sum_{0<j<i} O_{j\rightarrow i}\left(H_{j}^{l}\right) \quad \text{(formula 4)}$$

$$\bar{O}_{j\rightarrow i}\left(H_{j}^{l}\right) = \sum_{k=1}^{8} \alpha_{j\rightarrow i}^{k}\, O^{k}\left(H_{j}^{l}\right) \quad \text{(formula 5)}$$

$$\sum_{k=1}^{8} \alpha_{j\rightarrow i}^{k} = 1 \quad \text{(formula 6)} \qquad \alpha_{j\rightarrow i}^{k} \geq 0 \quad \text{(formula 7)}$$

where $l$ denotes the $l$-th layer, $i$ the $i$-th block and $j$ the $j$-th block with $0 < j < i \le 5$; $H_i^l$ is the output of the $i$-th hidden layer and $O_{j\rightarrow i}$ is one of the 8 candidate operations connecting block $j$ to block $i$. To make the gradient backpropagable, $O_{j\rightarrow i}$ in formula 4 is continuously relaxed into $\bar{O}_{j\rightarrow i}$ as in formula 5, where $\alpha_{j\rightarrow i}^{k}$ is the weight of the $k$-th of the 8 operations from block $j$ to block $i$. Formulas 6 and 7 are the constraints on the weights $\alpha$.
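The continuous relaxation of formulas 4-7 can be sketched the same way; only 3 toy stand-in operations are shown in place of the 8 real candidates:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy stand-ins for the candidate operations. In the real model these are
# the separable/atrous convolutions, poolings, identity and zero ops.
OPS = [
    lambda h: h,        # identity (skip connection)
    lambda h: 0 * h,    # no connection (zero)
    lambda h: h + 1,    # stand-in for a parameterized op
]

def mixed_op(h, alpha_logits):
    """Formula 5: a softmax-weighted sum of all candidate ops, so the
    choice of operation becomes differentiable (formulas 6-7 hold)."""
    alpha = softmax(alpha_logits)
    return sum(a * op(h) for a, op in zip(alpha, OPS))

def block_output(prev_outputs, alpha_logits_per_edge):
    """Formula 4: block i sums the relaxed ops over all earlier blocks j."""
    return sum(mixed_op(h, logits)
               for h, logits in zip(prev_outputs, alpha_logits_per_edge))
```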
In step 102, outputting a semantic segmentation result of the carotid artery handheld ultrasound image specifically includes:
and outputting a semantic segmentation result of 96 × 4 of the carotid artery handheld ultrasonic image. That is, the above model can output semantic segmentation results at the pixel level: arteries, veins, plaques and background.
Specifically, matching the actual usage scenario, the model takes a 96 × 96 × 1 grayscale image as input and outputs a 96 × 96 × 4 semantic segmentation result. When the first preprocessing method is used, the resize size is 96 × 96; in the second method, the resize size is 128 × 128 and the crop size is 96 × 96. The test data are evaluated in 2 modes, including and excluding null scans, and the model is trained for either 4 × 10^6 or 8 × 10^6 steps, 2 settings in total; comparative experiments and verification are performed on the hnasnet model as follows.
Model performance is measured by the plaque Dice coefficient; the Dice calculation is standard and is not repeated here. The comparison of the two raw-image processing methods is given in Table 1:
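For reference, a minimal sketch of the plaque Dice coefficient on a 96 × 96 × 4 model output; the plaque class index 3 is an assumed convention, not stated in the patent:

```python
import numpy as np

PLAQUE = 3   # assumed class index: 0=background, 1=artery, 2=vein, 3=plaque

def dice_coefficient(pred_logits: np.ndarray, true_mask: np.ndarray,
                     cls: int = PLAQUE, eps: float = 1e-7) -> float:
    """Dice coefficient for one class: 2|P ∩ T| / (|P| + |T|).
    pred_logits has shape (96, 96, 4); true_mask has shape (96, 96)."""
    pred_mask = pred_logits.argmax(axis=-1)          # pixel-level class map
    p = pred_mask == cls
    t = true_mask == cls
    return (2.0 * np.logical_and(p, t).sum() + eps) / (p.sum() + t.sum() + eps)
```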
TABLE 1
The training results for the 2 models at different steps are shown in table 2:
TABLE 2
According to experimental results, the performance of the hnasnet model is greatly improved.
In summary, from the US-CT image acquired by a handheld ultrasound device, the model outputs a pixel-level semantic segmentation result, which can assist a doctor in analyzing the ultrasound image and reduce the doctor's workload; this is of significant practical value.
System embodiment
According to an embodiment of the present invention, a carotid artery handheld ultrasound image segmentation system based on a neural structure search network is provided, fig. 11 is a schematic diagram of the carotid artery handheld ultrasound image segmentation system according to the embodiment of the present invention, and as shown in fig. 11, the carotid artery handheld ultrasound image segmentation system according to the embodiment of the present invention specifically includes:
the preprocessing module 110 is configured to obtain a carotid artery handheld ultrasound image, and perform size processing on the carotid artery handheld ultrasound image to obtain a carotid artery handheld ultrasound image with a predetermined size; the preprocessing module 110 is specifically configured to:
resizing the acquired carotid artery handheld ultrasound image to a size of 96 × 96; or,
resizing the acquired carotid artery handheld ultrasound image to a size of 128 × 128, and then cropping the 128 × 128 image into several 96 × 96 images.
And the semantic segmentation module 112 is configured to input the carotid artery handheld ultrasound image with the predetermined size into a pre-trained hnasnet model, perform search on a network structure and a unit structure simultaneously through the hnasnet model, and output a semantic segmentation result of the carotid artery handheld ultrasound image.
The semantic segmentation module 112 is specifically configured to:
for the network structure determined according to formulas 1-3, 12 cells are set and downsampling factors of 4, 8, 16 and 32 are adopted, i.e. the feature-map spatial dimension is 1/4, 1/8, 1/16 or 1/32 of that of the original image, and features are extracted from the last cell by the ASPP method:

$${}^{s}H^{l} = \beta_{\frac{s}{2}\rightarrow s}^{l}\,\mathrm{Cell}\left({}^{\frac{s}{2}}H^{l-1},\, {}^{s}H^{l-2};\, \alpha\right) + \beta_{s\rightarrow s}^{l}\,\mathrm{Cell}\left({}^{s}H^{l-1},\, {}^{s}H^{l-2};\, \alpha\right) + \beta_{2s\rightarrow s}^{l}\,\mathrm{Cell}\left({}^{2s}H^{l-1},\, {}^{s}H^{l-2};\, \alpha\right) \quad \text{(formula 1)}$$

$$\beta_{\frac{s}{2}\rightarrow s}^{l} + \beta_{s\rightarrow s}^{l} + \beta_{2s\rightarrow s}^{l} = 1 \quad \text{(formula 2)} \qquad \beta \geq 0 \quad \text{(formula 3)}$$

where $l$ denotes the $l$-th layer and $s$ the downsampling factor; ${}^{s}H^{l}$ is the hidden-layer output under the $l$ and $s$ conditions; $s/2 \rightarrow s$ denotes downsampling, $s \rightarrow s$ same-scale sampling and $2s \rightarrow s$ upsampling; $\beta$ are the weights of the downsampling, same-scale and upsampling transitions; $\mathrm{Cell}(\cdot)$ denotes the cell structure; the cell output of layer $l$ is related to the hidden-layer output $H^{l-1}$ of layer $l-1$, the hidden-layer output $H^{l-2}$ of layer $l-2$ and the cell parameter $\alpha$ of layer $l$; formulas 2 and 3 are the constraints on the weights;
for the cell structure determined according to formulas 4-7, a densely connected pattern is adopted. There are 5 hidden layers in the cell; the input of a hidden layer $H_i^l$ is connected to the outputs of all previous blocks in the cell, as well as to the outputs of the nearest 2 previous cells. For each hidden layer H there are 8 candidate operations, of which 2 (repetition allowed) are selected as the two branches of H; the 8 candidate operations specifically comprise: 3 × 3 depthwise separable convolution, 5 × 5 depthwise separable convolution, 3 × 3 atrous convolution with rate 2, 5 × 5 atrous convolution with rate 2, 3 × 3 average pooling, 3 × 3 max pooling, identity (skip connection), and no connection (zero):

$$H_{i}^{l} = \sum_{0<j<i} O_{j\rightarrow i}\left(H_{j}^{l}\right) \quad \text{(formula 4)}$$

$$\bar{O}_{j\rightarrow i}\left(H_{j}^{l}\right) = \sum_{k=1}^{8} \alpha_{j\rightarrow i}^{k}\, O^{k}\left(H_{j}^{l}\right) \quad \text{(formula 5)}$$

$$\sum_{k=1}^{8} \alpha_{j\rightarrow i}^{k} = 1 \quad \text{(formula 6)} \qquad \alpha_{j\rightarrow i}^{k} \geq 0 \quad \text{(formula 7)}$$

where $l$ denotes the $l$-th layer, $i$ the $i$-th block and $j$ the $j$-th block with $0 < j < i \le 5$; $H_i^l$ is the output of the $i$-th hidden layer; $O_{j\rightarrow i}$ is one of the 8 candidate operations, $\bar{O}_{j\rightarrow i}$ is its continuous relaxation, and $\alpha_{j\rightarrow i}^{k}$ is the weight of the $k$-th of the 8 operations from block $j$ to block $i$.
A 96 × 96 × 4 semantic segmentation result of the carotid artery handheld ultrasound image is output.
The embodiment of the present invention is a system embodiment corresponding to the above method embodiment, and specific operations of each module may be understood with reference to the description of the method embodiment, which is not described herein again.
Apparatus embodiment one
The embodiment of the invention provides a carotid artery handheld ultrasonic image segmentation device based on a neural structure search network, as shown in fig. 12, comprising: a memory 120, a processor 122 and a computer program stored on the memory 120 and executable on the processor 122, which computer program, when executed by the processor 122, carries out the following method steps:
step 101, obtaining a carotid artery handheld ultrasonic image, and performing size processing on the carotid artery handheld ultrasonic image (namely an original image) to obtain a carotid artery handheld ultrasonic image with a preset size;
in step 101, the size processing of the carotid artery hand-held ultrasound image can include the following two ways:
1. resizing the acquired carotid artery handheld ultrasound image to a size of 96 × 96; or,
2. resizing the acquired carotid artery handheld ultrasound image to a size of 128 × 128, and then cropping the 128 × 128 image into several 96 × 96 images.
Step 102, inputting the carotid artery handheld ultrasonic image with the preset size into a pre-trained hnasnet model, searching a network structure and a unit structure simultaneously through the hnasnet model, and outputting a semantic segmentation result of the carotid artery handheld ultrasonic image.
In step 102, the searching for the network structure and the unit structure simultaneously through the hnasnet model specifically includes:
for the network structure determined according to formulas 1-3, 12 units are set, with downsampling rates of 4, 8, 16 and 32, i.e. the spatial size of the feature map is 1/4, 1/8, 1/16 and 1/32 of that of the original image, and features are extracted from the last unit with the atrous spatial pyramid pooling (ASPP) method:

$${}^{s}H^{l}=\beta_{s/2\to s}\,\mathrm{Cell}\big({}^{s/2}H^{l-1},{}^{s}H^{l-2};\alpha\big)+\beta_{s\to s}\,\mathrm{Cell}\big({}^{s}H^{l-1},{}^{s}H^{l-2};\alpha\big)+\beta_{2s\to s}\,\mathrm{Cell}\big({}^{2s}H^{l-1},{}^{s}H^{l-2};\alpha\big) \quad (1)$$

$$\beta_{s/2\to s}+\beta_{s\to s}+\beta_{2s\to s}=1 \quad (2)$$

$$\beta_{s/2\to s},\ \beta_{s\to s},\ \beta_{2s\to s}\ \ge\ 0 \quad (3)$$

where l denotes the l-th layer and s the downsampling rate; ${}^{s}H^{l}$ is the hidden-layer output at layer l and rate s; s/2 → s denotes downsampling, s → s keeping the resolution, and 2s → s upsampling, and β denotes the corresponding connection weights; Cell(·) denotes the unit structure, so the unit output of the l-th layer is related to the hidden-layer output $H^{l-1}$ of the (l-1)-th layer, the hidden-layer output $H^{l-2}$ of the (l-2)-th layer, and the unit parameters α of the l-th layer; formulas 2 and 3 are the constraints on the weights β.
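The β-weighted combination of formula 1 can be sketched as below (numpy). `toy_cell` is a stand-in for the real searched unit, and the three inputs are assumed to be the layer l-1 hidden states from rates s/2, s and 2s, already resampled to rate s.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def toy_cell(h_prev, h_prev2):
    """Stand-in for Cell(.): any op mapping two inputs to one output."""
    return 0.5 * (h_prev + h_prev2)

def network_level_update(h_down, h_same, h_up, h_prev2, beta_logits):
    """Formula 1 sketch: the hidden state at rate s is a beta-weighted sum
    of the unit applied to the l-1 states coming from rates s/2, s and 2s,
    with beta normalised as in formulas 2-3 (softmax)."""
    beta = softmax(beta_logits)          # beta >= 0, sums to 1
    return (beta[0] * toy_cell(h_down, h_prev2)
            + beta[1] * toy_cell(h_same, h_prev2)
            + beta[2] * toy_cell(h_up, h_prev2))
```

After the search, the discrete network path is recovered by keeping, at each layer, the incoming connection with the largest β.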
For the unit structure determined according to formulas 4-7, a dense connection pattern is adopted: 5 hidden layers are arranged inside the unit, and the input of a hidden layer ${}^{l}H_{i}$ is connected to the outputs of all earlier hidden layers within the unit, and at the same time to the hidden-layer outputs of the previous 2 units. For a given hidden layer H there are 8 selectable operations, from which 2 operations may be chosen (repetition allowed) as the two branches of H. The 8 selectable operations specifically comprise: 3 × 3 depthwise separable convolution, 5 × 5 depthwise separable convolution, 3 × 3 atrous convolution with rate 2, 5 × 5 atrous convolution with rate 2, 3 × 3 average pooling, 3 × 3 max pooling, skip connection (identity), and no connection (zero):

$${}^{l}H_{i}=\sum_{j<i}O_{j\to i}\big({}^{l}H_{j}\big) \quad (4)$$

$$\bar{O}_{j\to i}(H_{j})=\sum_{k=1}^{8}\alpha_{j\to i}^{k}\,O^{k}(H_{j}) \quad (5)$$

$$\sum_{k=1}^{8}\alpha_{j\to i}^{k}=1 \quad (6)$$

$$\alpha_{j\to i}^{k}\ \ge\ 0 \quad (7)$$

where l denotes the l-th layer, i the i-th block and j the j-th block, with 0 < j < i < 5; ${}^{l}H_{i}$ is the output of the i-th hidden layer, ${}^{l}H_{j}$ is a hidden layer connected to ${}^{l}H_{i}$, and $O_{j\to i}$ is one of the 8 selectable operations; so that gradients can be back-propagated, $O_{j\to i}$ is continuously relaxed to $\bar{O}_{j\to i}$ as in formula 5, where $\alpha_{j\to i}^{k}$ is the weight of the k-th of the 8 operations from block j to block i; formulas 6 and 7 are the constraints on the weights α.
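The continuous relaxation of formulas 4-7 can be sketched with toy operations (numpy). The real candidates are the eight convolution/pooling/skip/zero operations listed above; here they are replaced by identity and zero stand-ins so that the α-weighting itself is visible.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy stand-ins for the 8 candidate operations: seven identity ops and one
# "no connection" (zero) op. In the real model these are the separable and
# atrous convolutions, poolings, skip connection and zero listed above.
OPS = [lambda h, k=k: h * (0.0 if k == 7 else 1.0) for k in range(8)]

def mixed_op(h, alpha_logits):
    """Formula 5 sketch: a softmax-weighted sum of all candidate operations,
    which makes the discrete choice differentiable."""
    alpha = softmax(alpha_logits)        # formulas 6-7: alpha >= 0, sums to 1
    return sum(a * op(h) for a, op in zip(alpha, OPS))

def hidden_state(prev_states, alpha_logits_per_edge):
    """Formula 4 sketch: hidden state i sums the mixed operation applied to
    every earlier state j < i inside the unit."""
    return sum(mixed_op(h, al)
               for h, al in zip(prev_states, alpha_logits_per_edge))
```

After the search, each edge keeps its highest-α operation, recovering a discrete unit.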
In step 102, outputting a semantic segmentation result of the carotid artery handheld ultrasound image specifically includes:
and outputting a 96 × 96 × 4 semantic segmentation result of the carotid artery handheld ultrasonic image. That is, the model outputs pixel-level semantic segmentation results for four classes: artery, vein, plaque and background.
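A per-pixel label map is obtained from the four-channel output by taking the argmax over channels; a minimal sketch follows. The class index order here is a hypothetical choice — the patent names the four classes but not their channel order.

```python
import numpy as np

# Hypothetical class index order; the patent only names the four classes.
CLASSES = ["background", "artery", "vein", "plaque"]

def to_label_map(scores):
    """Collapse a (96, 96, 4) per-class score map into a (96, 96) map of
    per-pixel class indices."""
    assert scores.shape[-1] == len(CLASSES)
    return scores.argmax(axis=-1)
```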
Specifically, according to the actual usage scenario, the model takes a 96 × 96 × 1 grayscale image as input and outputs a 96 × 96 × 4 semantic segmentation result. When the original image is processed with the first method, the resize size is 96 × 96; with the second method, the resize size is 128 × 128 and the crop size is 96 × 96. Test data are computed in 2 modes, null scan and non-null scan, and training uses either 4 × 10⁶ or 8 × 10⁶ steps, 2 modes in total.
Device embodiment II
The embodiment of the present invention provides a computer-readable storage medium on which an implementation program for information transmission is stored; when executed by a processor, the program implements the following method steps:
step 101, obtaining a carotid artery handheld ultrasonic image, and performing size processing on the carotid artery handheld ultrasonic image (namely an original image) to obtain a carotid artery handheld ultrasonic image with a preset size;
in step 101, the size processing of the carotid artery hand-held ultrasound image can include the following two ways:
1. processing the acquired carotid artery handheld ultrasound image to a size of 96 × 96; or,

2. processing the acquired carotid artery handheld ultrasound image to a size of 128 × 128, and then segmenting the 128 × 128 image into a plurality of 96 × 96 images.
Step 102, inputting the carotid artery handheld ultrasonic image with the preset size into a pre-trained hnasnet model, searching a network structure and a unit structure simultaneously through the hnasnet model, and outputting a semantic segmentation result of the carotid artery handheld ultrasonic image.
In order to improve the semantic segmentation performance, the embodiment of the present invention uses an hnasnet model, as shown in fig. 9, which uses a hierarchical neural structure search to search a network structure and a unit structure at the same time.
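Hierarchical NAS of this kind is typically trained as a bilevel problem: the model weights follow the training loss while the architecture parameters (α, β) follow a validation loss, in alternating gradient steps. The toy sketch below uses scalar quadratic losses purely to show the alternation; nothing here is from the patent.

```python
def train_loss(w, arch):
    """Toy stand-in for the segmentation training loss."""
    return (w - arch) ** 2

def val_loss(w, arch):
    """Toy stand-in for the validation loss."""
    return (w - 1.0) ** 2 + (arch - 1.0) ** 2

def search(steps=200, lr=0.1):
    """Alternating first-order bilevel updates: the weight w descends the
    training loss, the architecture parameter descends the validation loss."""
    w, arch = 0.0, 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - arch)        # d(train_loss)/d(w)
        arch -= lr * 2 * (arch - 1.0)   # d(val_loss)/d(arch)
    return w, arch
```

In the real model w is the set of convolution kernels and (α, β) are the continuous relaxation weights of formulas 1-7.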
In step 102, the searching for the network structure and the unit structure simultaneously through the hnasnet model specifically includes:
for the network structure determined according to formulas 1-3, 12 units are set, with downsampling rates of 4, 8, 16 and 32, i.e. the spatial size of the feature map is 1/4, 1/8, 1/16 and 1/32 of that of the original image, and features are extracted from the last unit with the atrous spatial pyramid pooling (ASPP) method:

$${}^{s}H^{l}=\beta_{s/2\to s}\,\mathrm{Cell}\big({}^{s/2}H^{l-1},{}^{s}H^{l-2};\alpha\big)+\beta_{s\to s}\,\mathrm{Cell}\big({}^{s}H^{l-1},{}^{s}H^{l-2};\alpha\big)+\beta_{2s\to s}\,\mathrm{Cell}\big({}^{2s}H^{l-1},{}^{s}H^{l-2};\alpha\big) \quad (1)$$

$$\beta_{s/2\to s}+\beta_{s\to s}+\beta_{2s\to s}=1 \quad (2)$$

$$\beta_{s/2\to s},\ \beta_{s\to s},\ \beta_{2s\to s}\ \ge\ 0 \quad (3)$$

where l denotes the l-th layer and s the downsampling rate; ${}^{s}H^{l}$ is the hidden-layer output at layer l and rate s; s/2 → s denotes downsampling, s → s keeping the resolution, and 2s → s upsampling, and β denotes the corresponding connection weights; Cell(·) denotes the unit structure, so the unit output of the l-th layer is related to the hidden-layer output $H^{l-1}$ of the (l-1)-th layer, the hidden-layer output $H^{l-2}$ of the (l-2)-th layer, and the unit parameters α of the l-th layer; formulas 2 and 3 are the constraints on the weights β.
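The ASPP step applied after the last unit can be sketched as parallel dilated branches plus an image-level pooling branch (numpy, single-channel input). Fixed averaging weights replace the learned convolution kernels, and the rates (1, 6, 12, 18) are common DeepLab defaults, not values stated in the patent.

```python
import numpy as np

def dilated_conv3x3(x, rate):
    """3x3 'convolution' with dilation `rate` (fixed averaging weights,
    zero padding) -- a minimal stand-in for a learned atrous conv."""
    h, w = x.shape
    pad = rate
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for dy in (-rate, 0, rate):
        for dx in (-rate, 0, rate):
            out += xp[pad + dy: pad + dy + h, pad + dx: pad + dx + w]
    return out / 9.0

def aspp(x, rates=(1, 6, 12, 18)):
    """ASPP sketch: parallel atrous branches at several rates, plus a
    global-average-pooling branch, concatenated along the channel axis."""
    branches = [dilated_conv3x3(x, r) for r in rates]
    branches.append(np.full_like(x, x.mean()))  # image-level pooling branch
    return np.stack(branches, axis=-1)
```

The multiple rates let the head aggregate context at several effective receptive-field sizes without further downsampling.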
For the unit structure determined according to formulas 4-7, a dense connection pattern is adopted: 5 hidden layers are arranged inside the unit, and the input of a hidden layer ${}^{l}H_{i}$ is connected to the outputs of all earlier hidden layers within the unit, and at the same time to the hidden-layer outputs of the previous 2 units. For a given hidden layer H there are 8 selectable operations, from which 2 operations may be chosen (repetition allowed) as the two branches of H. The 8 selectable operations specifically comprise: 3 × 3 depthwise separable convolution, 5 × 5 depthwise separable convolution, 3 × 3 atrous convolution with rate 2, 5 × 5 atrous convolution with rate 2, 3 × 3 average pooling, 3 × 3 max pooling, skip connection (identity), and no connection (zero):

$${}^{l}H_{i}=\sum_{j<i}O_{j\to i}\big({}^{l}H_{j}\big) \quad (4)$$

$$\bar{O}_{j\to i}(H_{j})=\sum_{k=1}^{8}\alpha_{j\to i}^{k}\,O^{k}(H_{j}) \quad (5)$$

$$\sum_{k=1}^{8}\alpha_{j\to i}^{k}=1 \quad (6)$$

$$\alpha_{j\to i}^{k}\ \ge\ 0 \quad (7)$$

where l denotes the l-th layer, i the i-th block and j the j-th block, with 0 < j < i < 5; ${}^{l}H_{i}$ is the output of the i-th hidden layer, ${}^{l}H_{j}$ is a hidden layer connected to ${}^{l}H_{i}$, and $O_{j\to i}$ is one of the 8 selectable operations; so that gradients can be back-propagated, $O_{j\to i}$ is continuously relaxed to $\bar{O}_{j\to i}$ as in formula 5, where $\alpha_{j\to i}^{k}$ is the weight of the k-th of the 8 operations from block j to block i; formulas 6 and 7 are the constraints on the weights α.
In step 102, outputting a semantic segmentation result of the carotid artery handheld ultrasound image specifically includes:
and outputting a 96 × 96 × 4 semantic segmentation result of the carotid artery handheld ultrasonic image. That is, the model outputs pixel-level semantic segmentation results for four classes: artery, vein, plaque and background.
Specifically, according to the actual usage scenario, the model takes a 96 × 96 × 1 grayscale image as input and outputs a 96 × 96 × 4 semantic segmentation result. When the original image is processed with the first method, the resize size is 96 × 96; with the second method, the resize size is 128 × 128 and the crop size is 96 × 96. Test data are computed in 2 modes, null scan and non-null scan, and training uses either 4 × 10⁶ or 8 × 10⁶ steps, 2 modes in total.
By adopting the embodiment of the invention, pixel-level semantic segmentation results can be output by the model from the US-CT images acquired by the handheld ultrasonic device. These results can assist doctors in analyzing ultrasonic images and reduce their workload, which is of great practical significance.
The computer-readable storage medium of this embodiment includes, but is not limited to: ROM, RAM, magnetic or optical disks, and the like.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. They may also be fabricated separately into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A carotid artery handheld ultrasonic image segmentation method based on a neural structure search network is characterized by comprising the following steps:
acquiring a carotid artery handheld ultrasonic image, and performing size processing on the carotid artery handheld ultrasonic image to obtain a carotid artery handheld ultrasonic image with a preset size;
inputting a carotid artery handheld ultrasonic image with a preset size into a pre-trained hnasnet model, simultaneously searching a network structure and a unit structure through the hnasnet model, and outputting a semantic segmentation result of the carotid artery handheld ultrasonic image.
2. The method of claim 1, wherein the performing the size processing on the carotid artery handheld ultrasound image to obtain a carotid artery handheld ultrasound image with a predetermined size specifically comprises:
processing the acquired carotid artery handheld ultrasound image to a size of 96 × 96; or,

processing the acquired carotid artery handheld ultrasound image to a size of 128 × 128, and then segmenting the 128 × 128 image into a plurality of 96 × 96 images.
3. The method according to claim 1, wherein the searching for the network structure and the unit structure simultaneously through the hnasnet model specifically comprises:
for the network structure determined according to formulas 1-3, 12 units are set, with downsampling rates of 4, 8, 16 and 32, i.e. the spatial size of the feature map is 1/4, 1/8, 1/16 and 1/32 of that of the original image, and features are extracted from the last unit with the atrous spatial pyramid pooling (ASPP) method:

$${}^{s}H^{l}=\beta_{s/2\to s}\,\mathrm{Cell}\big({}^{s/2}H^{l-1},{}^{s}H^{l-2};\alpha\big)+\beta_{s\to s}\,\mathrm{Cell}\big({}^{s}H^{l-1},{}^{s}H^{l-2};\alpha\big)+\beta_{2s\to s}\,\mathrm{Cell}\big({}^{2s}H^{l-1},{}^{s}H^{l-2};\alpha\big) \quad (1)$$

$$\beta_{s/2\to s}+\beta_{s\to s}+\beta_{2s\to s}=1 \quad (2)$$

$$\beta_{s/2\to s},\ \beta_{s\to s},\ \beta_{2s\to s}\ \ge\ 0 \quad (3)$$

where l denotes the l-th layer and s the downsampling rate; ${}^{s}H^{l}$ is the hidden-layer output at layer l and rate s; s/2 → s denotes downsampling, s → s keeping the resolution, and 2s → s upsampling, and β denotes the corresponding connection weights; Cell(·) denotes the unit structure, so the unit output of the l-th layer is related to the hidden-layer output $H^{l-1}$ of the (l-1)-th layer, the hidden-layer output $H^{l-2}$ of the (l-2)-th layer, and the unit parameters α of the l-th layer;
for the unit structure determined according to formulas 4-7, a dense connection pattern is adopted: 5 hidden layers are arranged inside the unit, and the input of a hidden layer ${}^{l}H_{i}$ is connected to the outputs of all earlier hidden layers within the unit, and at the same time to the hidden-layer outputs of the previous 2 units. For a given hidden layer H there are 8 selectable operations, from which 2 operations may be chosen (repetition allowed) as the two branches of H. The 8 selectable operations specifically comprise: 3 × 3 depthwise separable convolution, 5 × 5 depthwise separable convolution, 3 × 3 atrous convolution with rate 2, 5 × 5 atrous convolution with rate 2, 3 × 3 average pooling, 3 × 3 max pooling, skip connection (identity), and no connection (zero):

$${}^{l}H_{i}=\sum_{j<i}O_{j\to i}\big({}^{l}H_{j}\big) \quad (4)$$

$$\bar{O}_{j\to i}(H_{j})=\sum_{k=1}^{8}\alpha_{j\to i}^{k}\,O^{k}(H_{j}) \quad (5)$$

$$\sum_{k=1}^{8}\alpha_{j\to i}^{k}=1 \quad (6)$$

$$\alpha_{j\to i}^{k}\ \ge\ 0 \quad (7)$$

where l denotes the l-th layer, i the i-th block and j the j-th block, with 0 < j < i < 5; ${}^{l}H_{i}$ is the output of the i-th hidden layer, ${}^{l}H_{j}$ is a hidden layer connected to ${}^{l}H_{i}$, and $O_{j\to i}$ is one of the 8 selectable operations, continuously relaxed to $\bar{O}_{j\to i}$ as in formula 5, where $\alpha_{j\to i}^{k}$ is the weight of the k-th of the 8 operations from block j to block i.
4. The method of claim 2, wherein outputting the semantic segmentation result for the carotid artery handheld ultrasound image specifically comprises:
and outputting a 96 × 96 × 4 semantic segmentation result of the carotid artery handheld ultrasonic image.
5. A carotid artery handheld ultrasonic image segmentation system based on a neural structure search network is characterized by comprising:
the preprocessing module is used for acquiring a carotid artery handheld ultrasonic image and performing size processing on the carotid artery handheld ultrasonic image to obtain a carotid artery handheld ultrasonic image with a preset size;
and the semantic segmentation module is used for inputting the carotid artery handheld ultrasonic image with the preset size into a pre-trained hnasnet model, searching a network structure and a unit structure simultaneously through the hnasnet model, and outputting a semantic segmentation result of the carotid artery handheld ultrasonic image.
6. The system of claim 5, wherein the preprocessing module is specifically configured to:
processing the acquired carotid artery handheld ultrasound image to a size of 96 × 96; or,

processing the acquired carotid artery handheld ultrasound image to a size of 128 × 128, and then segmenting the 128 × 128 image into a plurality of 96 × 96 images.
7. The system of claim 5, wherein the semantic segmentation module is specifically configured to:
for the network structure determined according to formulas 1-3, 12 units are set, with downsampling rates of 4, 8, 16 and 32, i.e. the spatial size of the feature map is 1/4, 1/8, 1/16 and 1/32 of that of the original image, and features are extracted from the last unit with the ASPP method:

$${}^{s}H^{l}=\beta_{s/2\to s}\,\mathrm{Cell}\big({}^{s/2}H^{l-1},{}^{s}H^{l-2};\alpha\big)+\beta_{s\to s}\,\mathrm{Cell}\big({}^{s}H^{l-1},{}^{s}H^{l-2};\alpha\big)+\beta_{2s\to s}\,\mathrm{Cell}\big({}^{2s}H^{l-1},{}^{s}H^{l-2};\alpha\big) \quad (1)$$

$$\beta_{s/2\to s}+\beta_{s\to s}+\beta_{2s\to s}=1 \quad (2)$$

$$\beta_{s/2\to s},\ \beta_{s\to s},\ \beta_{2s\to s}\ \ge\ 0 \quad (3)$$

where l denotes the l-th layer and s the downsampling rate; ${}^{s}H^{l}$ is the hidden-layer output at layer l and rate s; s/2 → s denotes downsampling, s → s keeping the resolution, and 2s → s upsampling, and β denotes the corresponding connection weights; Cell(·) denotes the unit structure, so the unit output of the l-th layer is related to the hidden-layer output $H^{l-1}$ of the (l-1)-th layer, the hidden-layer output $H^{l-2}$ of the (l-2)-th layer, and the unit parameters α of the l-th layer;
for the unit structure determined according to formulas 4-7, a dense connection pattern is adopted: 5 hidden layers are arranged inside the unit, and the input of a hidden layer ${}^{l}H_{i}$ is connected to the outputs of all earlier hidden layers within the unit, and at the same time to the hidden-layer outputs of the previous 2 units. For a given hidden layer H there are 8 selectable operations, from which 2 operations may be chosen (repetition allowed) as the two branches of H. The 8 selectable operations specifically comprise: 3 × 3 depthwise separable convolution, 5 × 5 depthwise separable convolution, 3 × 3 atrous convolution with rate 2, 5 × 5 atrous convolution with rate 2, 3 × 3 average pooling, 3 × 3 max pooling, skip connection (identity), and no connection (zero):

$${}^{l}H_{i}=\sum_{j<i}O_{j\to i}\big({}^{l}H_{j}\big) \quad (4)$$

$$\bar{O}_{j\to i}(H_{j})=\sum_{k=1}^{8}\alpha_{j\to i}^{k}\,O^{k}(H_{j}) \quad (5)$$

$$\sum_{k=1}^{8}\alpha_{j\to i}^{k}=1 \quad (6)$$

$$\alpha_{j\to i}^{k}\ \ge\ 0 \quad (7)$$

where l denotes the l-th layer, i the i-th block and j the j-th block, with 0 < j < i < 5; ${}^{l}H_{i}$ is the output of the i-th hidden layer, ${}^{l}H_{j}$ is a hidden layer connected to ${}^{l}H_{i}$, and $O_{j\to i}$ is one of the 8 selectable operations, continuously relaxed to $\bar{O}_{j\to i}$ as in formula 5, where $\alpha_{j\to i}^{k}$ is the weight of the k-th of the 8 operations from block j to block i.
8. The system of claim 6, wherein the semantic segmentation module is specifically configured to:
and outputting a 96 × 96 × 4 semantic segmentation result of the carotid artery handheld ultrasonic image.
9. A carotid artery handheld ultrasonic image segmentation device based on a neural structure search network is characterized by comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the computer program when executed by the processor implementing the steps of the method for carotid artery handheld ultrasound image segmentation based on neural structure search network according to any of claims 1 to 4.
10. A computer-readable storage medium, on which an information transfer implementing program is stored, which when executed by a processor implements the steps of the carotid artery handheld ultrasound image segmentation method based on neural structure search network according to any of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010639059.0A CN111798452A (en) | 2020-07-06 | 2020-07-06 | Carotid artery handheld ultrasonic image segmentation method, system and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111798452A true CN111798452A (en) | 2020-10-20 |
Family
ID=72811227
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010639059.0A Pending CN111798452A (en) | 2020-07-06 | 2020-07-06 | Carotid artery handheld ultrasonic image segmentation method, system and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111798452A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985343A (en) * | 2018-06-22 | 2018-12-11 | 深源恒际科技有限公司 | Automobile damage detecting method and system based on deep neural network |
CN110136157A (en) * | 2019-04-09 | 2019-08-16 | 华中科技大学 | A kind of three-dimensional carotid ultrasound image vascular wall dividing method based on deep learning |
US20200051238A1 (en) * | 2018-08-13 | 2020-02-13 | International Business Machines Corporation | Anatomical Segmentation Identifying Modes and Viewpoints with Deep Learning Across Modalities |
CN111243042A (en) * | 2020-02-28 | 2020-06-05 | 浙江德尚韵兴医疗科技有限公司 | Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning |
Non-Patent Citations (4)
Title |
---|
CHENXI LIU et al.: "Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 82 - 92 *
YU WENG et al.: "NAS-Unet: Neural Architecture Search for Medical Image Segmentation", Special Section on Advanced Optical Imaging for Extreme Environments, vol. 7, pages 44247 - 44257, XP011718950, DOI: 10.1109/ACCESS.2019.2908991 *
SUN, Xia et al.: "Feature recognition of carotid plaque ultrasound images based on convolutional neural network", China Medical Device Information, pages 4 - 9 *
LIANG, Xinyu et al.: "Research progress of image semantic segmentation technology based on deep learning", Computer Engineering and Applications, pages 18 - 28 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112863647A (en) * | 2020-12-31 | 2021-05-28 | 北京小白世纪网络科技有限公司 | Video stream processing and displaying method, system and storage medium |
CN113947593A (en) * | 2021-11-03 | 2022-01-18 | 北京航空航天大学 | Method and device for segmenting vulnerable plaque in carotid artery ultrasonic image |
CN113947593B (en) * | 2021-11-03 | 2024-05-14 | 北京航空航天大学 | Segmentation method and device for vulnerable plaque in carotid ultrasound image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12067725B2 (en) | Image region localization method, image region localization apparatus, and medical image processing device | |
EP3852054A1 (en) | Method and system for automatically detecting anatomical structures in a medical image | |
US10478130B2 (en) | Plaque vulnerability assessment in medical imaging | |
CN110853111B (en) | Medical image processing system, model training method and training device | |
CN109829880A (en) | A kind of CT image detecting method based on deep learning, device and control equipment | |
Pradhan et al. | Transforming view of medical images using deep learning | |
CN109118487B (en) | Bone age assessment method based on non-subsampled contourlet transform and convolutional neural network | |
Feng et al. | Convolutional neural network‐based pelvic floor structure segmentation using magnetic resonance imaging in pelvic organ prolapse | |
Selvathi et al. | Fetal biometric based abnormality detection during prenatal development using deep learning techniques | |
CN111798452A (en) | Carotid artery handheld ultrasonic image segmentation method, system and device | |
CN111968108A (en) | CT intelligent imaging method, device and system based on intelligent scanning protocol | |
CN116309615A (en) | Multi-mode MRI brain tumor image segmentation method | |
Wang et al. | Deep transfer learning-based multi-modal digital twins for enhancement and diagnostic analysis of brain mri image | |
Reddy et al. | A deep learning based approach for classification of abdominal organs using ultrasound images | |
Lin et al. | Method for carotid artery 3-D ultrasound image segmentation based on cswin transformer | |
Ou et al. | RTSeg-Net: a lightweight network for real-time segmentation of fetal head and pubic symphysis from intrapartum ultrasound images | |
Raina et al. | Deep Learning Model for Quality Assessment of Urinary Bladder Ultrasound Images using Multi-scale and Higher-order Processing | |
CN114241261B (en) | Dermatological identification method, dermatological identification device, dermatological identification equipment and dermatological identification storage medium based on image processing | |
Liu et al. | Multislice left ventricular ejection fraction prediction from cardiac MRIs without segmentation using shared SptDenNet | |
CN115035207A (en) | Method and device for generating fetal craniocerebral standard section and ultrasonic imaging display system | |
Xu et al. | A Multi-scale Attention-based Convolutional Network for Identification of Alzheimer's Disease based on Hippocampal Subfields | |
CN115115567A (en) | Image processing method, image processing device, computer equipment and medium | |
Amuthadevi et al. | Development of fuzzy approach to predict the fetus safety and growth using AFI | |
CN111862014A (en) | ALVI automatic measurement method and device based on left and right ventricle segmentation | |
Magesh et al. | Fetal heart disease detection via deep reg network based on ultrasound images |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20201020 |