CN113517056A - Medical image target area identification method, neural network model and application - Google Patents

Medical image target area identification method, neural network model and application

Info

Publication number
CN113517056A
CN113517056A
Authority
CN
China
Prior art keywords
image
backbone network
layer
network model
network structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110680955.6A
Other languages
Chinese (zh)
Other versions
CN113517056B (en)
Inventor
单淳劼
赵维佳
梁振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Medical University
Original Assignee
Anhui Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Medical University
Priority to CN202110680955.6A
Publication of CN113517056A
Application granted
Publication of CN113517056B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Abstract

The invention discloses a method for identifying a target area of a medical image, which comprises the following steps: reading the information of the current Dicom file and, if the information comprises a target keyword, processing the Dicom file to obtain an image to be identified; inputting the image to be identified into a neural network model based on a residual structure and outputting it through different detection heads to obtain target images with different receptive fields. The neural network model based on the residual structure comprises a backbone network structure, an SPP layer and detection heads. The invention also discloses a neural network model based on the residual structure. The network model established by the invention offers good nonlinear expression capability, a reduced parameter count, faster computation, improved accuracy and robustness, and good sensitivity to targets of different sizes, so that targets can be identified accurately.

Description

Medical image target area identification method, neural network model and application
Technical Field
The invention relates to the field of medicine, in particular to a method for identifying a target region of a medical image, a neural network model, and an application thereof.
Background
Currently, image processing, particularly the identification of MRI images, often employs image segmentation techniques, such as the magnetic resonance image segmentation method, apparatus, terminal device and storage medium disclosed in patent application 201911243400.4, and the improved glioma segmentation method using cross-sequence MRI disclosed in patent application 202011164826.3.
However, traditional segmentation network models suffer from inaccurate target region segmentation, susceptibility to interference from surrounding targets, and low inference speed. Such models are also difficult to train: they consume large amounts of video memory, their training iteration periods are long, and their many hyper-parameters are hard to tune. In addition, as network depth increases, the model's sensitivity to smaller targets decreases.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a medical image target area identification method, a neural network model and an application that optimize the nonlinear expression capability of the network model, reduce its parameter count, increase its computation speed, improve its accuracy and robustness, and remain sensitive to targets of different sizes so that targets can be identified accurately.
The invention mainly solves the technical problems through the following technical means: a method of identifying a target region of a medical image, comprising the steps of:
step one, reading information of a current Dicom file, and if the information comprises a target keyword, processing the Dicom file to obtain an image to be identified;
inputting the image to be identified into a neural network model based on a residual structure, and outputting through different detection heads to respectively obtain target images with different receptive fields;
the neural network model based on the residual structure comprises a backbone network structure, an SPP layer and a detection head;
each backbone network structure comprises a residual structure made up of basic units in which a 1 × 1 Conv layer is applied first and a 3 × 3 Conv layer second; each residual structure contains at least one basic unit;
the backbone network structures are cascaded in the order in which they process the image; the output end of each backbone network structure is matched with the input end of a corresponding detection head; and the SPP layer is arranged between the backbone network structure that processes the image last and its corresponding detection head.
Preferably: in the first step, the information comprises MRI image weight information, and the target keyword is T2;
the first step is specifically as follows: sequentially reading the sequence information of each Dicom file by using the open source tool PyDicom and judging whether the sequence information contains the keyword T2; if the current Dicom file contains the keyword T2, the Dicom file is a target file, and the matrix information in the Dicom file is read with the PyDicom third-party library and stored as an image in JPG format; if the current Dicom file does not contain the keyword T2, the next Dicom file is examined.
Preferably: further comprising the steps of:
preprocessing of data is used to uniformly adjust the size of Dicom data to 412 × 412 and to normalize the pixel values of the image.
Preferably: the backbone network structure comprises a shallow backbone network structure, a middle backbone network structure and a deep backbone network structure; the detection head comprises a first detection head, a second detection head and a third detection head;
each backbone network structure comprises a residual structure made up of basic units in which a 1 × 1 Conv layer is applied first and a 3 × 3 Conv layer second; each residual structure contains at least one basic unit; when an image to be identified is processed by the shallow backbone network structure and then input to the first detection head, a target image with a first receptive field can be output; when the image to be recognized is processed by the shallow backbone network structure and the middle-layer backbone network structure in sequence and then input to the second detection head, a target image with a second receptive field can be output; and when the image to be identified is processed by the shallow backbone network structure, the middle-layer backbone network structure, the deep backbone network structure and the SPP layer in sequence and then input to the third detection head, a target image with a third receptive field can be output.
Preferably: the shallow backbone network structure comprises, in the order of image processing, a first residual structure, a second residual structure and a third residual structure; the first residual structure comprises one basic unit; the second residual structure comprises two sequentially cascaded basic units; the third residual structure comprises eight sequentially cascaded basic units; the middle-layer backbone network structure comprises a fourth residual structure of eight sequentially cascaded basic units; the deep backbone network structure comprises a fifth residual structure of four sequentially cascaded basic units.
Preferably: in each residual structure, the number of channels of the 1 × 1 Conv layer is less than the number of channels of the 3 × 3 Conv layer.
Preferably: the shallow backbone network structure comprises, in the order of image processing, two 3 × 3 Conv layers, the first residual structure, a 3 × 3 Conv layer, the second residual structure, a 3 × 3 Conv layer and the third residual structure; the middle-layer backbone network structure comprises, in the same order, a 3 × 3 Conv layer and the fourth residual structure; the deep backbone network structure comprises, in the same order, a 3 × 3 Conv layer and the fifth residual structure.
Preferably: the SPP layer comprises a 5 × 5 Max Pooling layer, a 9 × 9 Max Pooling layer and a 13 × 13 Max Pooling layer arranged in parallel; after being processed by the deep backbone network structure, the image to be recognized is input separately to the 5 × 5, 9 × 9 and 13 × 13 Max Pooling layers for compression, and the results are then fused for output.
Preferably: a Convolution Set is also arranged between each backbone network structure and its matching detection head; the Convolution Set matched with the backbone network structure that processes the image last is positioned between the SPP layer and the corresponding detection head; each Convolution Set consists of alternately stacked 1 × 1 Conv layers and 3 × 3 Conv layers and always ends with a 1 × 1 Conv layer.
Preferably: each of the aforementioned Conv layers includes a Conv2D layer and a Batch Normalization layer.
Preferably: the neural network model based on the residual structure (hereinafter referred to as the network model) is trained before processing an image to be recognized, as follows: the images in the training set are input into the network model and the network model is trained; training is complete when the parameters of the network model make the loss function converge.
Preferably: before the images in the training set are input into the network model, data enhancement processing is also applied, comprising at least one of random contrast adjustment, random-angle rotation and the Mosaic enhancement method.
Preferably: the optimizer adopted for training the network model is the Adam optimizer, and the loss function is the Focal Loss function;
expression of the Focal Loss function:
FL(p) = -α_t × (1 - p)^γ × log(p)    (1)
wherein p in expression (1) represents the confidence for the sample's true class; α_t = α = 0.25 for positive samples (y = 1) and α_t = 1 - α for negative samples (y = 0); γ = 2.
The invention also discloses a neural network model based on the residual structure, comprising a shallow backbone network structure, a middle-layer backbone network structure, a deep backbone network structure, an SPP layer, a first detection head, a second detection head and a third detection head.
Each backbone network structure comprises a residual structure made up of basic units in which a 1 × 1 Conv layer is applied first and a 3 × 3 Conv layer second; each residual structure contains at least one basic unit. When an image to be identified is processed by the shallow backbone network structure and then input to the first detection head, a target image with a first receptive field can be output; when the image is processed by the shallow and middle-layer backbone network structures in sequence and then input to the second detection head, a target image with a second receptive field can be output; when the image is processed by the shallow, middle-layer and deep backbone network structures and the SPP layer in sequence and then input to the third detection head, a target image with a third receptive field can be output; the first, second and third receptive fields are all different.
The invention also discloses an application of the neural network based on the residual structure in identifying a region of interest in non-medical images.
The invention has the following advantages. The calculation speed of the network model is improved by greatly reducing the amount of information in the output items; at the same time, to maximize robustness, the network model has several detection heads of different sizes, which keeps it sensitive to both large and small targets. The basic unit of the network model is a residual structure formed by alternately combining a Conv layer with a 3 × 3 convolution kernel and a Conv layer with a 1 × 1 convolution kernel, which greatly improves accuracy and prevents the network from degrading beyond a certain depth.
Further, with the residual structure of the invention, after the feature matrix of the image information passes through two Conv layer operations, the result is superposed once onto the original feature matrix. The two Conv layers of the basic unit of a conventional residual structure are both 3 × 3, whereas the residual structure of the invention uses a Conv layer with a 1 × 1 kernel followed by a Conv layer with a 3 × 3 kernel as the basic unit, and each Conv layer includes a 2-dimensional convolution (Conv2D) layer, i.e., the 2D Convolutional Layer and Batch Normalization layer in FIG. 4, followed by the ReLU activation function of a ReLU layer. Compared with the conventional two-layer 3 × 3 design, the weights of the network are sufficiently activated, which allows a deeper network model to be established. A deeper network model performs far better: it extracts deeper detail features from the data, and judging the target category and locating the target position become more accurate. The network model of the invention reduces the number of parameters by nearly 80% or more.
Further, in the invention the number of channels of the 1 × 1 Conv layer in each residual structure is smaller than that of the 3 × 3 Conv layer; reducing the dimension with the 1 × 1 convolution and then increasing it with the 3 × 3 convolution reduces the number of parameters.
Further, the SPP structure used in the invention, with Max Pooling layers of sizes 5 × 5, 9 × 9 and 13 × 13, increases the final accuracy by 3.8%. Compared with a conventional SPP structure using Max Pooling layers of sizes 6 × 6, 3 × 3 and 2 × 2, the experimental results of the invention are more robust; with the conventional SPP structure the final result improves by only 2.1%.
Furthermore, before each detection head, a Convolution Set formed by alternately stacking 1 × 1 Conv layers and 3 × 3 Conv layers is arranged to reduce dimensionality and improve the nonlinear fitting capability of the network model.
Further, the three detection heads of the invention output from the shallow 26th layer, the middle 43rd layer and the deep 52nd layer respectively, giving receptive fields of different sizes: the shallow network, with its smaller receptive field, is suited to predicting smaller targets, while the deep network, with its larger receptive field, is suited to predicting larger targets. This addresses the pain point of traditional target detection algorithms being insensitive to small targets. By establishing detection heads at different scales, the sensitivity of the network model to targets of different sizes is greatly improved.
Drawings
FIG. 1 is a graph of the comparison of four weighted MRI images in accordance with the present invention.
FIG. 2 is a schematic structural diagram of basic units of a residual structure in the present invention;
FIG. 3 is a schematic diagram of the SPP structure in the present invention;
FIG. 4 is a schematic view of a convolutional layer structure of the present invention;
FIG. 5 is a schematic structural diagram of a neural network model based on a residual structure according to the present invention;
FIG. 6 is a detailed structural diagram of the residual structure marked × 2 in the dashed box of FIG. 5;
FIG. 7 is a schematic structural diagram of the Convolution Set in the present invention;
FIG. 8 is a schematic diagram of a target image in the present invention;
FIG. 9 is a schematic illustration of an image to be identified in the present invention without processing by the network model of the present invention;
FIG. 10 is a schematic diagram of a target image of the image to be recognized of FIG. 9 output via a network model of the present invention;
FIG. 11 is a schematic illustration of a stitched image in accordance with the present invention;
FIG. 12 is a schematic diagram of a target image output by the network model in human motion recognition according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
The present embodiment discloses a method for identifying an image target region, wherein the image of the present embodiment is described by taking an MRI image as an example, and the target region identified by the present embodiment is described by taking a brain tumor in the MRI image as an example. The Dicom file comprises MRI image weight information, and the target keyword is T2;
the method comprises the following steps:
step one, traversing all the Dicom files, reading sequence information of each Dicom file by using an open source tool PyDicom, judging whether the sequence information contains a keyword T2, if the sequence information contains the keyword T2, determining that the Dicom file is a target file, reading matrix information in the Dicom file by using a PyDicom third party library, and storing the matrix information as an image in a JPG format.
Pictures taken by nuclear magnetic resonance in hospitals are in Dicom format, i.e., Digital Imaging and Communications in Medicine, the international standard for medical images and related information (ISO 12052). It defines a medical image format that can be used for data exchange with a quality that meets clinical needs. A Dicom file contains not only the patient's MRI image information but also basic information about the image, such as the weighting of the MRI image, in addition to basic patient information such as name and age.
Since a Dicom file cannot be input to the model directly as an image file, format conversion is required using a format conversion module.
As shown in FIG. 1, the MRI images are composed of many slices, and the image of each slice is divided into four different weights, FLAIR, T1, T1CE (i.e., T1c in FIG. 1) and T2.
The MRI image weight information of each slice is stored in a Dicom file. A brain tumor appears as a high signal under T2 weighting, which makes the position of a tumor easy to distinguish; therefore, to reduce the workload of the system and save labor time, only the T2-weighted Dicom image of each slice needs to be found and converted into a JPG image file, yielding the image to be identified. Concretely, the weight information in the Dicom file is read; if it is judged to be T2-weighted the file is converted, and otherwise it is not.
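What follows is a minimal sketch of this step, assuming pydicom and Pillow are installed and that the T2 keyword appears in the SeriesDescription tag (which tag carries the keyword is an assumption; adjust it to the scanner's conventions):

    import os
    import numpy as np
    import pydicom
    from PIL import Image

    def convert_t2_slices(dicom_dir, out_dir):
        os.makedirs(out_dir, exist_ok=True)
        for name in sorted(os.listdir(dicom_dir)):
            ds = pydicom.dcmread(os.path.join(dicom_dir, name))
            # Skip any slice whose sequence information lacks the T2 keyword.
            if "T2" not in str(ds.get("SeriesDescription", "")):
                continue
            # Read the pixel matrix and rescale it to 8-bit for JPG storage.
            arr = ds.pixel_array.astype(np.float32)
            arr = (arr - arr.min()) / max(float(arr.max() - arr.min()), 1e-6) * 255.0
            Image.fromarray(arr.astype(np.uint8)).save(
                os.path.join(out_dir, name + ".jpg"))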
Step two, inputting the image to be identified into a neural network model based on a residual structure for processing, and outputting through different detection heads to respectively obtain target images with different receptive fields.
The neural network model based on the residual structure of this embodiment comprises a shallow backbone network structure, a middle-layer backbone network structure, a deep backbone network structure, an SPP layer, a first detection head (Detection Head 1), a second detection head (Detection Head 2) and a third detection head (Detection Head 3).
As shown in FIG. 2, each backbone network structure comprises a residual structure made up of basic units in which a 1 × 1 Conv layer is applied first and a 3 × 3 Conv layer (Convolutional layer) second; each residual structure contains at least one basic unit. The × 1 in FIG. 2 represents one basic unit.
For deep learning algorithms, deeper networks are often used because they yield better results: a deeper network can extract deeper levels of detail from the image. However, a convolutional neural network with a plain linear structure easily suffers network degradation, in which the weights of several individual layers stay at 0 because they cannot be activated.
With the residual structure, the Feature Map, i.e., the feature matrix of the image information, passes through two Conv layer operations and the result is superposed once onto the original feature matrix. The two Conv layers of the basic unit of a conventional residual structure are both 3 × 3, whereas the residual structure of the invention uses a Conv layer with a 1 × 1 kernel followed by a Conv layer with a 3 × 3 kernel as the basic unit, and each Conv layer includes a 2-dimensional convolution (Conv2D) layer, i.e., the 2D Convolutional Layer and Batch Normalization layer in FIG. 4, followed by the ReLU activation function of a ReLU layer. Compared with the conventional two-layer 3 × 3 design, the weights of the network are sufficiently activated, which allows a deeper network model to be established. A deeper network model performs far better: it extracts deeper detail features from the data, and judging the target category and locating the target position become more accurate. The network model of the invention reduces the number of parameters by nearly 80% or more.
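As a concrete illustration, a basic unit of this kind could be written in PyTorch as below. This is a minimal sketch under stated assumptions (the channel counts and the exact placement of the ReLU layers are illustrative), not the patent's reference implementation:

    import torch
    import torch.nn as nn

    class BasicUnit(nn.Module):
        """1 x 1 Conv (Conv2D + Batch Normalization + ReLU) followed by a
        3 x 3 Conv, with the input superposed onto the result."""
        def __init__(self, channels, bottleneck):
            super().__init__()
            # The 1 x 1 layer has fewer channels than the 3 x 3 layer
            # (dimension reduction); the 3 x 3 layer restores the count.
            self.conv1 = nn.Sequential(
                nn.Conv2d(channels, bottleneck, 1, bias=False),
                nn.BatchNorm2d(bottleneck),
                nn.ReLU(inplace=True))
            self.conv2 = nn.Sequential(
                nn.Conv2d(bottleneck, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True))

        def forward(self, x):
            # One superposition of the twice-convolved result and the input.
            return x + self.conv2(self.conv1(x))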
Further, in each residual structure, the number of channels of the 1 × 1Conv layer is less than the number of channels of the 3 × 3Conv layer.
In the present invention, the number of channels of the 1 × 1 Conv layer in each residual structure is smaller than the number of channels of the 3 × 3 Conv layer, and performing dimension reduction with the 1 × 1 convolution before dimension expansion with the 3 × 3 convolution reduces the number of parameters. If the convolution kernel size of the previous layer is 3 × 3 with C1 channels and the current layer has C2 channels, the computation is 3 × 3 × C1 × C2. If a 1 × 1 convolution first reduces the dimension to C3 channels and a 3 × 3 convolution then expands it to C2 channels, the computation is 1 × 1 × C1 × C3 + 3 × 3 × C3 × C2 = C3 × (C1 + 9C2).
Therefore, if the previous layer has C1 = 192 channels and the current layer has C2 = 128 channels, the computation is 3 × 3 × 192 × 128 = 221184; if a 1 × 1 convolution first reduces the dimension to C3 = 96 channels before expansion back to 128 channels, the computation is 96 × (192 + 9 × 128) = 129024. The computation is thus nearly halved, and the added Conv layer further increases the nonlinear fitting capability of the network model. The operation of reducing and then increasing the dimension with a 1 × 1 convolution therefore not only compresses the size of the network model and improves inference speed but also improves its performance.
In the invention, when an image to be identified is processed by the shallow backbone network structure and then input to the first detection head, a target image with a first receptive field can be output; when the image to be recognized is processed by the shallow backbone network structure and the middle backbone network structure in sequence and then input to a second detection head, a target image with a second receptive field can be output; when the image to be identified is processed by the shallow backbone network structure, the middle backbone network structure, the deep backbone network structure and the SPP layer in sequence, the image is input to a third detection head, and a target image with a third receptive field can be output;
as shown in fig. 3, the SPP layer includes a 5 × 5 MaxPooling layer, a 9 × 9 Max Pooling layer, and a 13 × 13 Max Pooling layer arranged in parallel; after being processed by a deep backbone network structure, the images to be recognized are respectively input into a Max Pooling layer of 5 multiplied by 5, a Max Pooling layer of 9 multiplied by 9 and a Max Pooling layer of 13 multiplied by 13 for compression and then fusion output. The fusion here means that the feature matrices output by the three Max Pooling layers are spliced in the Channel dimension. If the feature matrix size of the input SPP structure is 16 × 16 × 512 at this time, the final size after splicing is 16 × 16 × 1536.
In order to further increase the nonlinear fitting capability of the network model and improve the robustness of the network model to data with severe Spatial distortion, the invention adds an SPP layer, namely a Spatial Pyramid Pooling layer, at the end of the backbone network. Because the invention is oriented to MRI images, and has the characteristic that after the shape is changed, the spatial distortion is easy to appear, the network model algorithm needs to be optimized by using the SPP structure.
The SPP structure used in the present invention, including the Max Pooling layers of 5 × 5, 9 × 9 and 13 × 13, increases the final accuracy by 3.8%. Compared with the SPP structure with the conventional structure which adopts Max Pooling layers with the sizes of 6 × 6, 3 × 3 and 2 × 2, the experimental result of the invention has stronger robustness. If the SPP structure with the conventional structure is adopted, the final result is improved by only 2.1 percent.
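A minimal PyTorch sketch of such an SPP layer is given below; stride 1 with half-kernel padding is assumed so that the three branches keep the spatial size and can be spliced along the channel dimension (a 16 × 16 × 512 input then yields 16 × 16 × 1536):

    import torch
    import torch.nn as nn

    class SPP(nn.Module):
        def __init__(self):
            super().__init__()
            # Three parallel Max Pooling branches of sizes 5, 9 and 13.
            self.pools = nn.ModuleList(
                nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
                for k in (5, 9, 13))

        def forward(self, x):
            # Splice the three outputs along the Channel dimension.
            return torch.cat([pool(x) for pool in self.pools], dim=1)

    # A 512-channel 16 x 16 feature map becomes 1536 channels at 16 x 16.
    out = SPP()(torch.randn(1, 512, 16, 16))  # shape (1, 1536, 16, 16)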
As shown in FIG. 5, the shallow backbone network structure of the invention comprises, in the order of image processing, two 3 × 3 Conv layers, a first residual structure, a 3 × 3 Conv layer, a second residual structure, a 3 × 3 Conv layer and a third residual structure; the first residual structure comprises one basic unit, the second residual structure comprises two sequentially cascaded basic units, and the third residual structure comprises eight sequentially cascaded basic units.
The middle-layer backbone network structure comprises, in the order of image processing, a 3 × 3 Conv layer and a fourth residual structure; the fourth residual structure comprises eight sequentially cascaded basic units.
The deep backbone network structure comprises, in the order of image processing, a 3 × 3 Conv layer and a fifth residual structure; the fifth residual structure comprises four sequentially cascaded basic units.
When a residual structure includes several sequentially cascaded basic units, the basic unit (a 1 × 1 Conv layer followed by a 3 × 3 Conv layer) is simply repeated: two cascaded basic units repeat the unit once, four repeat it three times, and eight repeat it seven times. The numbers within the dashed boxes in FIG. 5 indicate how many residual structures of the same structure are stacked. As the number of layers of the network model increases, the number of channels in each layer grows and more residual structures of the same structure are stacked, so that the network model can learn the detail features of the image.
In FIG. 5, × 1 indicates that the residual structure has only one basic unit, × 4 that it has four, and × 8 that it has eight; Conv layers in FIG. 5 without a marked stride have stride 1; Residual in a dashed box indicates that the structure inside the box is a residual structure.
FIG. 6 shows the specific structure of the residual structure marked × 2 in FIG. 5, i.e., a residual structure with two sequentially cascaded basic units; the residual structures in FIG. 5 with several cascaded basic units follow the same pattern.
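The stage layout just described could be assembled as in the sketch below, which reuses the BasicUnit class from the earlier sketch; the stride-2 downsampling in the 3 × 3 Conv layers and the channel widths are illustrative assumptions, since FIG. 5 itself is not reproduced here:

    import torch.nn as nn

    def conv3x3(c_in, c_out, stride=1):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True))

    def residual_stage(channels, n_units):
        # Stack n_units basic units of the same structure, as in FIG. 5.
        return nn.Sequential(*[BasicUnit(channels, channels // 2)
                               for _ in range(n_units)])

    shallow = nn.Sequential(
        conv3x3(3, 32), conv3x3(32, 64, 2), residual_stage(64, 1),
        conv3x3(64, 128, 2), residual_stage(128, 2),
        conv3x3(128, 256, 2), residual_stage(256, 8))
    middle = nn.Sequential(conv3x3(256, 512, 2), residual_stage(512, 8))
    deep = nn.Sequential(conv3x3(512, 1024, 2), residual_stage(1024, 4))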
Further, after the SPP corrects spatial distortion, the signal reaches the output ends of the neural network model based on the residual structure (the network model for short). The network model has three output ends of different sizes serving as detection heads, namely the first, second and third detection heads.
Further, before each detection head, a Convolution Set formed by alternately stacking 1 × 1 Conv layers and 3 × 3 Conv layers is arranged to reduce dimensionality and improve the nonlinear fitting capability of the network model. The Convolution Set matched with the backbone network structure that processes the image last (i.e., the deep backbone network structure) is located between the SPP layer and the corresponding detection head (the third detection head); that is, a Convolution Set is arranged between the SPP layer and the third detection head.
The structure of the Convolution Set is shown in FIG. 7: 1 × 1 Conv layers and 3 × 3 Conv layers alternate twice and the set ends with a 1 × 1 Conv layer.
As shown in FIG. 5, a 3 × 3 Conv layer and a 1 × 1 Conv layer follow each Convolution Set in sequence, and the detection head is the 1 × 1 Conv layer after the Convolution Set.
As shown in FIG. 5, the three detection heads output from the shallow 26th layer, the middle 43rd layer and the deep 52nd layer respectively, giving receptive fields of different sizes: the shallow network, with its smaller receptive field, is suited to predicting smaller targets, while the deep network, with its larger receptive field, is suited to predicting larger targets. This addresses the pain point of traditional target detection algorithms being insensitive to small targets. By establishing detection heads at different scales, the sensitivity of the network model to targets of different sizes is greatly improved.
Conversely, if the network model used only one detection head to output target position information, the information of small targets (targets occupying a small proportion of the image) might go undetected. With multiple detection heads, the shallow detection head has a smaller receptive field and is therefore sensitive to small targets, which can thus be detected effectively; the deeper detection head has a larger receptive field and is more sensitive to large targets (targets occupying a larger area of the image), which can likewise be detected effectively.
As shown in fig. 8, the output content of the detection head includes the position of the box and the category information of the object.
The position information of the box consists of the coordinates (x, y) of its center point and its width and height (w, h); these values are relative to the size of the original image (i.e., the image to be recognized). If the original image size is W × H, the actual coordinate parameters of the box are obtained by the transformation Xreal = x × W, Yreal = y × H, Wreal = w × W, Hreal = h × H, where (Xreal, Yreal) is the actual center point of the box, Wreal its actual width and Hreal its actual height; W denotes the width and H the height of the original image.
If the input image to be recognized is 412 × 412 × 3, the output matrices of the three detection heads have sizes 52 × 52 × 6, 26 × 26 × 6 and 13 × 13 × 6 respectively. The third dimension, 6, decomposes as 4 + 1 + 1: 4 numbers describe the box in the figure, i.e., its center coordinates (x, y) and its width and height; 1 number gives the category; and 1 number gives the confidence. After the information of all boxes has been traversed, boxes with confidence above 0.5 are drawn on the image and their categories are output.
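A minimal sketch of this decoding step; the per-cell layout (x, y, w, h, category, confidence) and the function name are illustrative assumptions:

    import numpy as np

    def decode_head(output, img_w, img_h, thr=0.5):
        """Turn one head's output (e.g. 13 x 13 x 6) into pixel-space boxes."""
        boxes = []
        for x, y, w, h, cls, conf in output.reshape(-1, 6):
            if conf < thr:
                continue
            # Scale the relative box back to the original image size.
            boxes.append((x * img_w, y * img_h, w * img_w, h * img_h,
                          int(cls), float(conf)))
        return boxes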
In the output shown in FIG. 8, the location of the tumor is indicated by a box, and the category label Brain Tumor is displayed above the box.
Further, the invention also preprocesses the data before inputting the image to be identified into the neural network model based on the residual structure.
The preprocessing of the data includes unifying the data size. The Dicom data has a size of 512 × 512, and the present invention converts the Dicom data into a size of 412 × 412 uniformly, thereby reducing the input size of the image and reducing the parameters of the network model of the present invention.
Further, in order to make the network model of the present invention converge better, the present invention normalizes all the images by dividing all the images by 127.5 and subtracting 1.
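Both preprocessing steps together, as a minimal sketch (the interpolation mode of the resize is an assumption):

    import numpy as np
    from PIL import Image

    def preprocess(path):
        # Resize to 412 x 412, then map pixel values into [-1, 1].
        img = Image.open(path).convert("RGB").resize((412, 412))
        return np.asarray(img, dtype=np.float32) / 127.5 - 1.0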
In summary, unlike conventional medical image processing algorithms, whose segmentation output is a mask of the tumor region, the output of the network model of the invention is the position and category of the tumor; greatly reducing the amount of information in the output improves the calculation speed of the model.
P and N in Table 1 denote positive samples and negative samples, respectively. Interpretation: in 300 test pictures, doctors manually marked 401 tumors; the network model correctly found 396 of them, failed to identify 5, and falsely detected a further 45, for 441 detections in total.
TABLE 1
                  Marked as tumor (P)   Not a tumor (N)
Detected          TP = 396              FP = 45
Not detected      FN = 5                TN = 0
TABLE 2
Recognition result   Correct recognition   Missed detection   Misidentification
Number               396                   5                  45
As can be seen from Table 2: TP = 396, FN = 5, FP = 45, TN = 0. Thus, with the threshold set to 0.4, the precision of the network model of the invention is Precision = TP ÷ (TP + FP) = 89.80%, and its recall is Recall = TP ÷ (TP + FN) = 98.75%.
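Reproduced as a quick check of the arithmetic:

    tp, fn, fp = 396, 5, 45
    precision = tp / (tp + fp)   # 396 / 441 = 0.8980 -> 89.80%
    recall = tp / (tp + fn)      # 396 / 401 = 0.9875 -> 98.75%
    print(f"precision={precision:.2%}, recall={recall:.2%}")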
To better highlight the superiority of the network model of the invention, the classical two-stage algorithm Faster-RCNN and the single-stage algorithm Yolo were each trained on the same data set and then tested on the same test data. As shown in Table 3, the network model of the invention outperforms Faster-RCNN and Yolo in both precision and recall.
TABLE 3
[Table 3 in the original is an image: it lists the precision and recall of Faster-RCNN, Yolo and the network model of the invention.]
Brain tumor screening requires the recall to be as high as possible, i.e., as many brain tumors as possible must be detected. The recall of the network model of the invention reaches 98.75%, so it can find patients with brain tumors with as few omissions as possible. Recall and precision are often a pair of contradictory values, but the network model of the invention keeps precision near 90% even at this high recall. It is therefore well suited to identifying brain tumors.
The calculated amount and the user friendliness of the network model are judged by testing the detection speed of the network model.
The computing platform used was NVIDIA RTX2070, and the average computation time of the network model of the present invention on the test data set was 20.81ms, i.e., 48.05 fps. A comparison of the inference speed of the network model of the present invention with that of fast-RCNN and Yolo was also made, as shown in Table 4.
TABLE 4
Algorithm             Faster-RCNN   Yolo    Network model of the invention
Inference time (ms)   164.16        27.83   20.81
As is well known, an excessive number of parameters slows a model's inference to some extent, which is very unfriendly to doctors and patients. Thanks to its reasonable and innovative backbone network design, the network model of the invention keeps inference as fast as possible while maintaining high precision and recall; its inference speed exceeds the current mainstream single-stage and two-stage algorithms, and it can finish recognizing an image in about 20 ms, effectively saving doctors' waiting time.
Further, the neural network model based on the residual structure is trained before processing the image to be recognized, as follows: the images in the training set are input into the network model and the network model is trained; training is complete when the parameters of the network model make the loss function converge.
Further, the optimizer adopted for training the network model is the Adam optimizer, and the loss function is the Focal Loss function.
When the network model is trained, forward propagation is performed first: the data are fed through the network model to compute a result, and the loss function is used to calculate the loss value. After the loss is computed, back propagation is performed to update and optimize the parameters of each convolutional layer. The Adam optimizer provides a parameter optimization algorithm that converges faster than conventional optimization algorithms such as stochastic gradient descent.
Focal Loss effectively alleviates the imbalance between positive and negative samples in the training data. In a practical data set, only one target per picture needs to be detected (the positive sample), and most of the picture is unrelated background (negative samples), so positive samples are far fewer than negative samples. With the Focal Loss function, if y = 0 and the confidence p for the negative sample is very high, the value of 1 - p is very small, which greatly reduces the loss contributed by negative samples and lets the network model concentrate on optimizing the loss of the positive samples.
Expression of the Focal Loss function:
FL(p) = -α_t × (1 - p)^γ × log(p)    (1)
wherein p in expression (1) represents the confidence for the sample's true class; α_t = α = 0.25 for positive samples (y = 1) and α_t = 1 - α for negative samples (y = 0); γ = 2. Positive samples here are those containing a target to be detected; negative samples are background.
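A minimal PyTorch sketch of this loss, written in the equivalent two-branch form in which p denotes the predicted probability of the positive class:

    import torch

    def focal_loss(p, y, alpha=0.25, gamma=2.0):
        p = p.clamp(1e-6, 1.0 - 1e-6)  # numerical stability
        pos = -alpha * (1.0 - p) ** gamma * torch.log(p)          # y = 1
        neg = -(1.0 - alpha) * p ** gamma * torch.log(1.0 - p)    # y = 0
        return torch.where(y == 1, pos, neg).mean()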
Further, the images in the training set are also pre-processed before training.
Further, the images in the training set are also subjected to data enhancement processing, which occurs after the data preprocessing.
The data enhancement processing used by the invention comprises at least one of random contrast adjustment, random-angle rotation and the Mosaic enhancement method.
The Mosaic enhancement method randomly adjusts the contrast of several images and then stitches them into one image for training; the effect is shown in FIG. 11. Stitching multiple pictures into one can be understood as a strong way to increase the network model's learning of small objects. Existing single-stage algorithms on the market struggle to identify small targets precisely because they do not adopt the Mosaic enhancement method; the invention thereby further improves the accuracy of small-target identification.
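A minimal sketch of the Mosaic idea (box and label handling is omitted; the contrast range and the 2 × 2 tile layout are illustrative assumptions):

    import random
    from PIL import Image, ImageEnhance

    def mosaic(paths, size=412):
        # Randomly adjust contrast, then stitch four images into one.
        canvas = Image.new("RGB", (size, size))
        half = size // 2
        corners = [(0, 0), (half, 0), (0, half), (half, half)]
        for (px, py), path in zip(corners, paths[:4]):
            img = Image.open(path).convert("RGB").resize((half, half))
            img = ImageEnhance.Contrast(img).enhance(random.uniform(0.7, 1.3))
            canvas.paste(img, (px, py))
        return canvas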
Of course, other prior-art optimizers and loss functions may also be used, and a back propagation algorithm with gradient descent may be adopted to make the loss value of the loss function converge and so complete the training of the network model.
Of course, the image of the invention may also be another medical image or a non-medical image, and the identified target region is not limited to tumors: it may extend to other cells or tissues in an image, to cell tissue in an ex vivo tissue image, or to a portion of interest in a non-medical image, such as the identification and detection of human faces, masks and fire hazards in images.
Example 2
The embodiment discloses a neural network model based on a residual structure, comprising a shallow backbone network structure, a middle-layer backbone network structure, a deep backbone network structure, an SPP layer, a first detection head, a second detection head and a third detection head.
Each backbone network structure comprises a residual structure made up of basic units in which a 1 × 1 Conv layer is applied first and a 3 × 3 Conv layer second; each residual structure contains at least one basic unit. When an image to be identified is processed by the shallow backbone network structure and then input to the first detection head, a target image with a first receptive field can be output; when the image is processed by the shallow and middle-layer backbone network structures in sequence and then input to the second detection head, a target image with a second receptive field can be output; when the image is processed by the shallow, middle-layer and deep backbone network structures and the SPP layer in sequence and then input to the third detection head, a target image with a third receptive field can be output; the first, second and third receptive fields are all different.
The neural network model based on the residual structure is deployed on a Jetson Nano through TensorRT. For convenient operation, the Jetson Nano is connected to the nuclear magnetic resonance instrument by USB, and after data acquisition finishes the instrument transmits the data to the Jetson Nano over the USB cable. The Jetson Nano can be connected to an external touch display, through which an operator controls the system. The specific operation is as follows: browse to the path where the data are stored; the system automatically converts all data to picture format and identifies all pictures; when identification finishes, the pictures marked with the target area are displayed in the picture display area. When the target is a tumor, the upper left corner of the image shows the patient's name, and the detailed-information display area of the system shows the size and number of the tumors.
TensorRT is a CUDA-based inference acceleration tool for neural network models. TensorRT restructures the neural network model, merges all computations that can be merged, and enables them to use GPU acceleration. This greatly speeds up inference and greatly reduces the video memory the neural network occupies during computation, saving considerable computing resources. In actual use the product must be both highly accurate and real-time, so the TensorRT acceleration tool is used to speed up inference of the network model on the Jetson Nano.
The Jetson Nano is a computer currently usable by embedded designers, researchers and DIY makers; it brings the power of modern AI to a compact, easy-to-use platform and is small, high-performance and fully featured. With a quad-core 64-bit ARM CPU and an integrated 128-core NVIDIA GPU, the Jetson Nano delivers 472 GFLOPS of computing performance. It also includes 4 GB of LPDDR4 memory in an efficient, low-power package with 5 W/10 W power modes and 5 V DC input. The Jetson Nano provides a complete desktop Linux environment, supports NVIDIA CUDA Toolkit 10.0 and libraries such as cuDNN 7.3 and TensorRT, and supports accelerated graphics operations. The development kit also ships with popular open-source machine learning frameworks such as TensorFlow, PyTorch, Caffe and Keras, plus computer vision and robotics frameworks such as OpenCV and ROS. For artificial intelligence developers, the Jetson Nano offers a complete and efficient solution that helps them build complex AI applications in a short time.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A method for identifying a target region of a medical image, characterized by comprising the following steps:
step one, reading information of a current Dicom file, and if the information comprises a target keyword, processing the Dicom file to obtain an image to be identified;
inputting the image to be identified into a neural network model based on a residual structure, and outputting through different detection heads to respectively obtain target images with different receptive fields;
wherein the neural network model based on the residual structure comprises a backbone network structure, an SPP layer and a detection head;
the backbone network structure comprises a residual structure made up of basic units in which a 1 × 1 Conv layer is applied first and a 3 × 3 Conv layer second; the backbone network structures are cascaded in the order in which they process the image; the output end of each backbone network structure is matched with the input end of a corresponding detection head; and the SPP layer is arranged between the backbone network structure that processes the image last and its corresponding detection head.
2. The method for identifying a target region of a medical image according to claim 1, wherein: in the first step, the information comprises MRI image weight information, and the target keyword is T2;
traversing all the Dicom files, reading the sequence information of each Dicom file by using an open source tool PyDicom, judging whether the sequence information contains a keyword T2, if so, determining that the Dicom file is a target file, reading the matrix information in the Dicom file, and storing the matrix information as an image in a JPG format.
3. The method for identifying a target region of a medical image according to claim 1, wherein: the backbone network structure comprises a shallow backbone network structure, a middle backbone network structure and a deep backbone network structure; the detection head comprises a first detection head, a second detection head and a third detection head;
when an image to be identified is processed by the shallow backbone network structure and then input to the first detection head, a target image with a first receptive field can be output; when the image to be recognized is processed by the shallow backbone network structure and the middle backbone network structure in sequence and then input to a second detection head, a target image with a second receptive field can be output; and when the image to be identified is processed by the shallow backbone network structure, the middle backbone network structure, the deep backbone network structure and the SPP layer in sequence, the image is input to a third detection head, and a target image with a third receptive field can be output.
4. The method for identifying a target region of a medical image according to claim 3, characterized in that: the shallow backbone network structure comprises, in the order of image processing, two 3 × 3 Conv layers, a first residual structure, a 3 × 3 Conv layer, a second residual structure, a 3 × 3 Conv layer and a third residual structure; the middle-layer backbone network structure comprises, in the same order, a 3 × 3 Conv layer and a fourth residual structure; the deep backbone network structure comprises, in the same order, a 3 × 3 Conv layer and a fifth residual structure;
the first residual structure comprises one basic unit; the second residual structure comprises two sequentially cascaded basic units; the third residual structure comprises eight sequentially cascaded basic units; the fourth residual structure comprises eight sequentially cascaded basic units; the fifth residual structure comprises four sequentially cascaded basic units.
5. The method for identifying a target region of a medical image according to claim 1, characterized in that: the SPP layer comprises a 5 × 5 Max Pooling layer, a 9 × 9 Max Pooling layer and a 13 × 13 Max Pooling layer arranged in parallel; after being processed by the deep backbone network structure, the image to be recognized is input separately to the 5 × 5, 9 × 9 and 13 × 13 Max Pooling layers for compression, and the results are then fused for output.
6. The method for identifying a target region of a medical image according to claim 1, characterized in that: a Convolution Set is also arranged between each backbone network structure and its matching detection head; the Convolution Set matched with the backbone network structure that processes the image last is positioned between the SPP layer and the corresponding detection head; each Convolution Set consists of alternately stacked 1 × 1 Conv layers and 3 × 3 Conv layers and always ends with a 1 × 1 Conv layer.
7. The method for identifying a target region of a medical image according to claim 1, characterized by further comprising training the neural network model based on the residual structure, as follows: inputting the images of a training set into the neural network model based on the residual structure and training it; training is complete when the parameters of the neural network model based on the residual structure make the loss function converge.
8. The method for identifying a target region of a medical image according to claim 7, characterized in that: the optimizer adopted for training the neural network model based on the residual structure is the Adam optimizer, and the loss function is the Focal Loss function;
expression of the Focal Loss function:
FL(p) = -α_t × (1 - p)^γ × log(p)    (1)
wherein p in expression (1) represents the confidence for the sample's true class; α_t = α = 0.25 for positive samples (y = 1) and α_t = 1 - α for negative samples (y = 0); γ = 2.
9. A neural network model based on a residual structure, comprising: a shallow backbone network structure, a middle backbone network structure, a deep backbone network structure, an SPP layer, a first detection head, a second detection head and a third detection head;
each backbone network structure comprises a residual structure, which consists of basic units in which a 1×1 Conv layer is applied first and a 3×3 Conv layer second; each residual structure contains at least one basic unit; when an image to be identified is processed by the shallow backbone network structure and then input to the first detection head, a target image with a first receptive field is output; when the image to be identified is processed by the shallow backbone network structure and the middle backbone network structure in sequence and then input to the second detection head, a target image with a second receptive field is output; and when the image to be identified is processed by the shallow backbone network structure, the middle backbone network structure, the deep backbone network structure and the SPP layer in sequence and then input to the third detection head, a target image with a third receptive field is output.
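Assembling the pieces of claims 3-9 into one sketch shows where the three receptive fields come from: the first head reads the shallow features, the second the middle features, and the third the SPP-fused deep features. Channel widths, the stride-2 downsampling convs and the 1×1 head convs are assumptions, and the Convolution Sets of claim 6 are omitted for brevity.

class ResidualBackboneDetector(nn.Module):
    # Shallow / middle / deep backbone + SPP + three detection heads (sketch).
    def __init__(self, num_outputs=255):  # head width is a placeholder
        super().__init__()
        self.shallow = nn.Sequential(
            ConvBNAct(3, 32, 3), ConvBNAct(32, 64, 3, stride=2),          # two 3x3 Convs
            residual_structure(64, 1),                                    # first: 1 unit
            ConvBNAct(64, 128, 3, stride=2), residual_structure(128, 2),  # second: 2 units
            ConvBNAct(128, 256, 3, stride=2), residual_structure(256, 8), # third: 8 units
        )
        self.middle = nn.Sequential(
            ConvBNAct(256, 512, 3, stride=2), residual_structure(512, 8))    # fourth: 8 units
        self.deep = nn.Sequential(
            ConvBNAct(512, 1024, 3, stride=2), residual_structure(1024, 4))  # fifth: 4 units
        self.spp = SPP()
        self.head1 = nn.Conv2d(256, num_outputs, 1)   # first receptive field
        self.head2 = nn.Conv2d(512, num_outputs, 1)   # second receptive field
        self.head3 = nn.Conv2d(4096, num_outputs, 1)  # third, after the 4x SPP concat

    def forward(self, x):
        s = self.shallow(x)
        m = self.middle(s)
        d = self.spp(self.deep(m))
        return self.head1(s), self.head2(m), self.head3(d)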
10. Use of the neural network model based on a residual structure according to claim 9 for identifying a region of interest in a non-medical image.
CN202110680955.6A 2021-06-18 2021-06-18 Medical image target area identification method, neural network model and application Active CN113517056B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110680955.6A CN113517056B (en) 2021-06-18 2021-06-18 Medical image target area identification method, neural network model and application

Publications (2)

Publication Number Publication Date
CN113517056A true CN113517056A (en) 2021-10-19
CN113517056B CN113517056B (en) 2023-09-19

Family

ID=78065969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110680955.6A Active CN113517056B (en) 2021-06-18 2021-06-18 Medical image target area identification method, neural network model and application

Country Status (1)

Country Link
CN (1) CN113517056B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017106645A1 (en) * 2015-12-18 2017-06-22 The Regents Of The University Of California Interpretation and quantification of emergency features on head computed tomography
WO2018015414A1 (en) * 2016-07-21 2018-01-25 Siemens Healthcare Gmbh Method and system for artificial intelligence based medical image segmentation
CN110969245A (en) * 2020-02-28 2020-04-07 北京深睿博联科技有限责任公司 Target detection model training method and device for medical image
CN111401202A (en) * 2020-03-11 2020-07-10 西南石油大学 Pedestrian mask wearing real-time detection method based on deep learning
CN112365438A (en) * 2020-09-03 2021-02-12 杭州电子科技大学 Automatic pelvis parameter measuring method based on target detection neural network
CN112085113A (en) * 2020-09-14 2020-12-15 四川大学华西医院 Severe tumor image recognition system and method
CN112308822A (en) * 2020-10-10 2021-02-02 杭州电子科技大学 Intervertebral disc CT image detection method based on deep convolutional neural network
CN112419332A (en) * 2020-11-16 2021-02-26 复旦大学 Skull stripping method and device for thick-layer MRI (magnetic resonance imaging) image
CN112560741A (en) * 2020-12-23 2021-03-26 中国石油大学(华东) Safety wearing detection method based on human body key points

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALOM, MD ZAHANGIR: "Improved inception-residual convolutional neural network for object recognition", Neural Computing & Applications *
ZHOU Tao; HUO Bingqiang; LU Huiling; REN Hailing: "Residual neural networks and their application in medical image processing" (in Chinese), Acta Electronica Sinica, no. 07
CHENG Yequn: "A lightweight object detection network based on convolutional neural networks" (in Chinese), Laser & Optoelectronics Progress, vol. 58, no. 16

Also Published As

Publication number Publication date
CN113517056B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
US11887311B2 (en) Method and apparatus for segmenting a medical image, and storage medium
US20220051405A1 (en) Image processing method and apparatus, server, medical image processing device and storage medium
CN111524106B (en) Skull fracture detection and model training method, device, equipment and storage medium
CN109166130B (en) Image processing method and image processing device
CN111160269A (en) Face key point detection method and device
CN109389129A (en) A kind of image processing method, electronic equipment and storage medium
CN110648331B (en) Detection method for medical image segmentation, medical image segmentation method and device
CN112614133B (en) Three-dimensional pulmonary nodule detection model training method and device without anchor point frame
WO2023151237A1 (en) Face pose estimation method and apparatus, electronic device, and storage medium
CN113920309B (en) Image detection method, image detection device, medical image processing equipment and storage medium
CN110135304A (en) Human body method for recognizing position and attitude and device
CN112836653A (en) Face privacy method, device and apparatus and computer storage medium
CN112614573A (en) Deep learning model training method and device based on pathological image labeling tool
CN111723688B (en) Human body action recognition result evaluation method and device and electronic equipment
CN111429414B (en) Artificial intelligence-based focus image sample determination method and related device
WO2022227193A1 (en) Liver region segmentation method and apparatus, and electronic device and storage medium
CN111881803A (en) Livestock face recognition method based on improved YOLOv3
CN111553250A (en) Accurate facial paralysis degree evaluation method and device based on face characteristic points
CN113517056B (en) Medical image target area identification method, neural network model and application
CN115880358A (en) Construction method of positioning model, positioning method of image mark points and electronic equipment
CN111598144B (en) Training method and device for image recognition model
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
CN113222989A (en) Image grading method and device, storage medium and electronic equipment
CN110570417A (en) Pulmonary nodule classification method and device and image processing equipment
CN111062935A (en) Breast tumor detection method, storage medium and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Liang Zhen

Inventor after: Shan Chunjie

Inventor after: Zhao Weijia

Inventor before: Shan Chunjie

Inventor before: Zhao Weijia

Inventor before: Liang Zhen

GR01 Patent grant