CN114648496A - Intelligent medical system - Google Patents

Intelligent medical system Download PDF

Info

Publication number
CN114648496A
CN114648496A (application CN202210200007.2A)
Authority
CN
China
Prior art keywords
feature
feature map
layer
neural network
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210200007.2A
Other languages
Chinese (zh)
Inventor
刘盼 (Liu Pan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sixinkang Medical Technology Co ltd
Original Assignee
Shanghai Sixinkang Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sixinkang Medical Technology Co ltd filed Critical Shanghai Sixinkang Medical Technology Co ltd
Priority to CN202210200007.2A priority Critical patent/CN114648496A/en
Publication of CN114648496A publication Critical patent/CN114648496A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain

Abstract

The application relates to the field of intelligent medical treatment and discloses an intelligent medical system. The system extracts local features of a patient's brain image through the m-th (shallow) layer and the last layer of a convolutional neural network serving as a feature map extractor, obtaining a first feature map and a second feature map. A second convolutional neural network serving as a feature descriptor then enhances the expression of the shallow features in the first feature map, and the second feature map undergoes dimensionality reduction based on singular value decomposition, so that the dimensional difference between the two feature maps is taken into account and the classification accuracy is improved. In this way, abnormal regions in the patient's brain image can be accurately detected and judged, helping to safeguard the patient's health.

Description

Intelligent medical system
Technical Field
The present invention relates to the field of smart medical treatment and, more particularly, to a smart medical system.
Background
Currently, brain imaging techniques are an important means of studying the brain. Brain imaging mainly comprises structural imaging and functional imaging. Structural imaging can clearly reflect the structural form of organs but cannot provide their functional information, whereas functional imaging can accurately provide metabolic information and real-time activity of organs but cannot display the structural detail of the brain.
Therefore, an intelligent medical system is desired that comprehensively utilizes this information to detect and determine abnormal regions in a patient's brain image, thereby helping to safeguard the patient's health.
Disclosure of Invention
The present application is proposed to solve the above technical problems. The embodiments of the application provide an intelligent medical system that extracts local features of a patient's brain image through the m-th (shallow) layer and the last layer of a convolutional neural network serving as a feature map extractor, obtaining a first feature map and a second feature map. A second convolutional neural network serving as a feature descriptor then enhances the expression of the shallow features in the first feature map, and the second feature map undergoes dimensionality reduction based on singular value decomposition, so that the dimensional difference between the two feature maps is taken into account and the classification accuracy is improved. In this way, abnormal regions in the patient's brain image can be accurately detected and judged, helping to safeguard the patient's health.
According to one aspect of the present application, there is provided an intelligent medical system, comprising:
a source data acquisition unit for acquiring a brain image of a patient;
a neural network coding unit for inputting the acquired brain image into a convolutional neural network serving as a feature map extractor, so as to extract a first feature map from the m-th (shallow) layer of the convolutional neural network and output a second feature map from its last layer;
a feature enhancement unit, configured to perform enhancement coding on the first feature map using a second convolutional neural network as a feature descriptor to obtain a third feature map, where the feature descriptor and the convolutional neural network as a feature extractor have a symmetric network structure;
an eigenvalue decomposition unit, configured to perform eigenvalue decomposition on each feature matrix along the channel dimension of the second feature map to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the plurality of eigenvalues;
a feature vector screening unit, configured to select the eigenvectors corresponding to eigenvalues greater than a threshold from the plurality of eigenvectors and perform two-dimensional splicing on them to obtain a principal-dimension feature matrix;
the dimension reduction unit is used for multiplying each feature matrix along the channel dimension in the second feature map by the main dimension feature matrix to obtain a dimension reduction feature matrix;
the feature matrix arrangement unit is used for arranging the dimension reduction feature matrix into a fourth feature map along the channel dimension;
a feature map fusion unit, configured to fuse the third feature map and the fourth feature map to obtain a classification feature map; and
a diagnosis result generating unit, configured to pass the classification feature map through a classifier to obtain a classification result, wherein the classification result indicates whether an abnormal region exists in the brain image of the patient.
In the above smart medical system, each layer of the first convolutional neural network includes a convolutional layer, a pooling layer, and an activation layer. In the forward pass of each layer, the convolutional layer performs convolution processing on the input data based on a convolution kernel, the pooling layer performs pooling processing on the convolution feature map output by the convolutional layer, and the activation layer performs activation processing on the pooled feature map output by the pooling layer.
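The conv → pool → activation forward pass described above can be sketched in NumPy. This is a hedged illustration only: the single-channel image, single 3×3 kernel, 2×2 max pooling, and ReLU activation are assumptions for a minimal example, not the patent's actual network.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution (cross-correlation) of one channel with one kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that do not divide evenly."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def relu(x):
    return np.maximum(x, 0.0)

def cnn_layer(x, kernel):
    """One layer as described: convolution, then pooling, then activation."""
    return relu(max_pool(conv2d(x, kernel)))

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))   # toy single-channel stand-in for a brain image
kernel = rng.standard_normal((3, 3))
feature_map = cnn_layer(image, kernel)
print(feature_map.shape)                # (7, 7): conv gives 14x14, pooling halves it
```

Stacking such layers yields the shallow maps (layers 1–3) and deeper maps (layers 4–6) the description distinguishes.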
In the above intelligent medical system, the value of m ranges from 2 to 6.
In the above smart medical system, the convolution kernel of each convolutional layer of the second convolutional neural network is the transpose of the convolution kernel of the corresponding convolutional layer of the feature extractor, each anti-pooling layer of the second convolutional neural network corresponds to one pooling layer of the first convolutional neural network, and the second convolutional neural network shares weights with the first convolutional neural network.
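The relationship between a convolutional layer and its transposed counterpart is easiest to see when convolution is written as a matrix product. The 1-D sketch below is a hedged illustration of the weight-sharing idea only (the patent's layers are 2-D, and the anti-pooling step is omitted): if the encoder layer computes y = Cx, the symmetric decoder layer applies Cᵀ with the same weights.

```python
import numpy as np

def conv_matrix(kernel, n_in):
    """Dense matrix C such that C @ x is the valid 1-D sliding-window convolution."""
    k = len(kernel)
    n_out = n_in - k + 1
    C = np.zeros((n_out, n_in))
    for i in range(n_out):
        C[i, i:i + k] = kernel
    return C

kernel = np.array([1.0, -2.0, 0.5])
C = conv_matrix(kernel, n_in=8)   # forward (feature extractor) layer: 8 -> 6

x = np.arange(8, dtype=float)
y = C @ x                         # feature vector produced by the encoder layer

# The symmetric feature-descriptor layer applies the *transpose* of the same
# shared weights, mapping the 6-dim feature back to the 8-dim input space.
x_back = C.T @ y
print(y.shape, x_back.shape)      # (6,) (8,)
```

x_back is not equal to x in general; transposed convolution restores the shape and routes activations back through the same connections, which is the "restoring activated neurons" behavior described above.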
In the above intelligent medical system, the eigenvalue decomposition unit is further configured to perform eigenvalue decomposition on each feature matrix along the channel dimension of the second feature map by the following formula to obtain the plurality of eigenvalues and the plurality of eigenvectors corresponding to the plurality of eigenvalues; wherein the formula is: M = QΛQ^T, where Λ = diag(λ1, λ2, λ3, …, λn), λ1, λ2, λ3, …, λn are the eigenvalues, Q = (q1, q2, q3, …, qn), and q1, q2, q3, …, qn are the eigenvectors corresponding to the eigenvalues.
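The decomposition M = QΛQ^T can be checked numerically with NumPy. One caveat, flagged as an assumption: this factorization with orthogonal Q holds exactly for symmetric matrices, while a CNN feature matrix is generally not symmetric (which is presumably why the application also speaks of singular value decomposition). The sketch symmetrizes a toy matrix so the stated formula applies.

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((5, 5))
M = A + A.T                      # symmetric toy "feature matrix" so M = Q Λ Q^T holds

eigvals, Q = np.linalg.eigh(M)   # eigh: eigendecomposition of a symmetric matrix
Lam = np.diag(eigvals)           # Λ = diag(λ1, ..., λn)

# Verify the decomposition from the formula above
print(np.allclose(Q @ Lam @ Q.T, M))  # True
```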
In the above intelligent medical system, the dimension reduction unit is further configured to perform matrix multiplication on each feature matrix along the channel dimension of the second feature map and the principal-dimension feature matrix by the following formula to obtain the dimension-reduced feature matrix; wherein the formula is: M′ = M · (q1, q2, …, qN), where M is the original feature matrix, q1, q2, …, qN are the selected eigenvectors, and · denotes matrix multiplication.
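The screening and reduction steps together look as follows in NumPy. The threshold value and the symmetrization of the toy matrix are assumptions for illustration; the "two-dimensional splicing" of the kept eigenvectors is simply stacking them as columns.

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 6))
M = A + A.T                            # symmetric toy feature matrix (assumption)

eigvals, eigvecs = np.linalg.eigh(M)   # columns of eigvecs are the eigenvectors

threshold = 1.0                        # illustrative eigenvalue threshold
keep = eigvals > threshold
Q_main = eigvecs[:, keep]              # principal-dimension matrix: spliced eigenvectors

M_reduced = M @ Q_main                 # M' = M · (q1, ..., qN)
print(M.shape, Q_main.shape, M_reduced.shape)
```

The column count of M_reduced equals the number of eigenvalues above the threshold, which is how the second feature map's dimensionality is reduced before fusion.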
In the above intelligent medical system, the diagnostic result generating unit is further configured to process the classification feature map using the classifier according to the following formula to generate the classification result; wherein the formula is: softmax{(Wn, Bn) : … : (W1, B1) | Project(F)}, where Project(F) denotes projecting the classification feature map into a vector, W1 to Wn are the weight matrices of the fully connected layers, and B1 to Bn are the bias matrices of the fully connected layers.
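A minimal sketch of this classifier formula: Project(F) flattens the feature map, the (Wi, Bi) pairs are fully connected layers, and softmax produces class probabilities. The ReLU on hidden layers, the layer sizes, and the two-class (normal/abnormal) output are assumptions not fixed by the text.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(feature_map, weights, biases):
    """softmax{(Wn,Bn):...:(W1,B1) | Project(F)} with ReLU hidden layers (assumed)."""
    v = feature_map.reshape(-1)                      # Project(F): flatten to a vector
    for W, B in zip(weights[:-1], biases[:-1]):
        v = np.maximum(W @ v + B, 0.0)               # hidden fully connected layers
    logits = weights[-1] @ v + biases[-1]            # final fully connected layer
    return softmax(logits)

rng = np.random.default_rng(1)
F = rng.standard_normal((4, 4, 2))                   # toy classification feature map
W1, B1 = rng.standard_normal((8, 32)), rng.standard_normal(8)
W2, B2 = rng.standard_normal((2, 8)), rng.standard_normal(2)  # 2 classes assumed
probs = classify(F, [W1, W2], [B1, B2])
print(probs)                                         # two probabilities summing to 1
```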
According to another aspect of the present application, a method of operating an intelligent medical system includes:
acquiring a brain image of a patient;
inputting the acquired brain image into a convolutional neural network serving as a feature map extractor, so as to extract a first feature map from the m-th (shallow) layer of the convolutional neural network and output a second feature map from its last layer;
performing enhancement coding on the first feature map by using a second convolutional neural network as a feature descriptor to obtain a third feature map, wherein the feature descriptor and the convolutional neural network as a feature extractor have a symmetrical network structure;
performing eigenvalue decomposition on each eigenvalue matrix along a channel dimension in the second eigenvalue graph to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the plurality of eigenvalues;
selecting eigenvectors corresponding to eigenvalues larger than a threshold value from the plurality of eigenvectors, and performing two-dimensional splicing on the eigenvectors corresponding to the eigenvalues larger than the threshold value to obtain a principal-dimension eigenvector matrix;
multiplying each feature matrix along the channel dimension in the second feature map by the main dimension feature matrix to obtain a dimension reduction feature matrix;
arranging the dimensionality reduction feature matrix into a fourth feature map along the channel dimensionality;
fusing the third feature map and the fourth feature map to obtain a classification feature map; and
passing the classification feature map through a classifier to obtain a classification result, wherein the classification result indicates whether an abnormal region exists in the brain image of the patient.
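The method steps above can be strung together on toy data. Everything below is a hedged sketch: the tensors stand in for outputs of trained networks, the eigenvalue threshold (median) and the fusion method (flatten-and-concatenate) are assumptions, since the text does not fix them.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H = 4, 8                                      # channels, spatial size (illustrative)
third_map = rng.standard_normal((C, H, H))       # enhanced shallow features (step 3)
second_map = rng.standard_normal((C, H, H))      # deep features from the last layer

# Steps 4-7: per-channel eigendecomposition, screening, reduction, rearrangement
fourth_channels = []
for c in range(C):
    M = second_map[c]
    M = M + M.T                                  # symmetrize so eigh applies (assumption)
    eigvals, eigvecs = np.linalg.eigh(M)
    keep = eigvals > np.median(eigvals)          # illustrative threshold
    Q_main = eigvecs[:, keep]                    # principal-dimension feature matrix
    fourth_channels.append(M @ Q_main)           # dimension-reduced feature matrix

# Step 8: fuse the third and fourth feature maps (concatenation assumed)
classification_vec = np.concatenate(
    [third_map.reshape(-1)] + [m.reshape(-1) for m in fourth_channels])

# Step 9: linear classifier + softmax -> abnormal-region indicator (2 classes assumed)
W = rng.standard_normal((2, classification_vec.size)) * 0.01
logits = W @ classification_vec
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)  # (2,)
```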
In the above method of operating the smart medical system, each layer of the first convolutional neural network includes a convolutional layer, a pooling layer, and an activation layer. In the forward pass of each layer, the convolutional layer performs convolution processing on the input data based on a convolution kernel, the pooling layer performs pooling processing on the convolution feature map output by the convolutional layer, and the activation layer performs activation processing on the pooled feature map output by the pooling layer.
In the above method of operating the intelligent medical system, the value of m ranges from 2 to 6.
In the above method, the convolution kernel of each convolutional layer of the second convolutional neural network is the transpose of the convolution kernel of the corresponding convolutional layer of the feature extractor, each anti-pooling layer of the second convolutional neural network corresponds to one pooling layer of the first convolutional neural network, and the second convolutional neural network shares weights with the first convolutional neural network.
In the above method of operating the intelligent medical system, performing eigenvalue decomposition on each feature matrix along the channel dimension of the second feature map to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the eigenvalues includes: performing eigenvalue decomposition on each feature matrix along the channel dimension of the second feature map by the following formula to obtain the plurality of eigenvalues and the plurality of eigenvectors corresponding to the plurality of eigenvalues; wherein the formula is: M = QΛQ^T, where Λ = diag(λ1, λ2, λ3, …, λn), λ1, λ2, λ3, …, λn are the eigenvalues, Q = (q1, q2, q3, …, qn), and q1, q2, q3, …, qn are the eigenvectors corresponding to the eigenvalues.
In the above method of operating the intelligent medical system, performing matrix multiplication on each feature matrix along the channel dimension of the second feature map and the principal-dimension feature matrix to obtain the dimension-reduced feature matrix includes: multiplying each feature matrix along the channel dimension of the second feature map by the principal-dimension feature matrix according to the following formula; wherein the formula is: M′ = M · (q1, q2, …, qN), where M is the original feature matrix, q1, q2, …, qN are the selected eigenvectors, and · denotes matrix multiplication.
In the above method of operating the intelligent medical system, passing the classification feature map through a classifier to obtain a classification result includes: processing the classification feature map using the classifier according to the following formula to generate the classification result; wherein the formula is: softmax{(Wn, Bn) : … : (W1, B1) | Project(F)}, where Project(F) denotes projecting the classification feature map into a vector, W1 to Wn are the weight matrices of the fully connected layers, and B1 to Bn are the bias matrices of the fully connected layers.
Compared with the prior art, the intelligent medical system provided by the application extracts local features of a patient's brain image through the m-th (shallow) layer and the last layer of a convolutional neural network serving as a feature map extractor, obtaining a first feature map and a second feature map. A second convolutional neural network serving as a feature descriptor then enhances the expression of the shallow features in the first feature map, and the second feature map undergoes dimensionality reduction based on singular value decomposition, so that the dimensional difference between the two feature maps is taken into account and the classification accuracy is improved. In this way, abnormal regions in the patient's brain image can be accurately detected and judged, helping to safeguard the patient's health.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a diagram of an application scenario of an intelligent medical system according to an embodiment of the present application.
Fig. 2 is a block diagram of an intelligent medical system according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating a method of operating an intelligent medical system according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram illustrating an operating method of an intelligent medical system according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Overview of a scene
As mentioned above, currently, brain imaging techniques are important means for studying the brain. The imaging mode of the brain imaging technology mainly comprises structural imaging and functional imaging.
Structural imaging can clearly reflect the structural form of organs but cannot provide their functional information, whereas functional imaging can accurately provide metabolic information and real-time activity but cannot display the structural detail of the brain. Therefore, an intelligent medical system is desired that comprehensively utilizes this information to detect and determine abnormal regions in a patient's brain image, thereby helping to safeguard the patient's health.
Specifically, in the technical solution of the present application, the image is first input into a convolutional neural network; a first feature map is extracted from the m-th (shallow) layer, and a second feature map is output from the last layer.
Because the first feature map, as a shallow feature, differs greatly in expression from the second feature map, as a deep feature, the expression of the shallow features is first enhanced using a feature descriptor corresponding to the feature extractor. Specifically, the feature descriptor has a structure symmetric to that of the feature extractor: each convolutional layer corresponds to one transposed convolutional layer, whose convolution kernel is the transpose of the kernel of the corresponding convolutional layer, and each pooling layer corresponds to one anti-pooling layer. The feature descriptor shares weights with the feature extractor, so that the activated neurons in each convolutional layer can be restored.
Thus, the third feature map is obtained from the first feature map.
Further, considering the dimension difference between the first feature map and the second feature map, dimensionality reduction based on singular value decomposition is performed on the second feature map. That is, eigenvalue decomposition is performed on each feature matrix along the channel dimension of the second feature map, the eigenvectors corresponding to eigenvalues greater than a threshold are selected, and the original feature matrix is multiplied by the two-dimensional splicing of those eigenvectors to obtain the dimension-reduced feature matrix, expressed as: M′ = M · (q1, q2, …, qN), where M is the original feature matrix, q1, q2, …, qN are the selected eigenvectors, and · denotes matrix multiplication.
Then, the dimension-reduced feature matrices M′ are arranged into a fourth feature map along the channel dimension and fused with the third feature map for image classification.
Based on this, the present application proposes an intelligent medical system, which comprises: a source data acquisition unit for acquiring a brain image of a patient; a neural network coding unit for inputting the acquired brain image into a convolutional neural network as a feature map extractor to extract a first feature map from an m-th layer of a shallow layer of the convolutional neural network and outputting a second feature map from a last layer of the convolutional neural network; a feature enhancing unit, configured to perform enhancement coding on the first feature map by using a second convolutional neural network as a feature descriptor to obtain a third feature map, where the feature descriptor and the convolutional neural network as a feature extractor have a symmetric network structure; an eigenvalue decomposition unit, configured to perform eigenvalue decomposition on each eigenvector along a channel dimension in the second eigen map to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the plurality of eigenvalues; the feature vector screening unit is used for selecting feature vectors corresponding to feature values larger than a threshold value from the feature vectors and performing two-dimensional splicing on the feature vectors corresponding to the feature values larger than the threshold value to obtain a main-dimensional feature matrix; the dimension reduction unit is used for multiplying each feature matrix along the channel dimension in the second feature map by the main dimension feature matrix to obtain a dimension reduction feature matrix; the feature matrix arrangement unit is used for arranging the dimension reduction feature matrix into a fourth feature map along the channel dimension; a feature map fusion unit, configured to fuse the third feature map and the fourth feature map to obtain a classification feature map; and a diagnosis result generation unit for passing the classification feature map 
through a classifier to obtain a classification result, wherein the classification result is used for indicating whether an abnormal region exists in the brain image of the patient.
Fig. 1 illustrates an application scenario of an intelligent medical system according to an embodiment of the present application. As shown in Fig. 1, in this application scenario, a brain image of a patient is first acquired by a brain scanner. The acquired brain image is then input into a server deployed with an intelligent medical algorithm (e.g., the cloud server S illustrated in Fig. 1), which processes the brain image with the algorithm to generate a classification result indicating whether an abnormal region exists in the brain image. Abnormal regions in the patient's brain image can then be accurately detected and judged based on the classification result, helping to safeguard the patient's health.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
Fig. 2 illustrates a block diagram of an intelligent medical system according to an embodiment of the present application. As shown in fig. 2, the smart medical system 200 according to the embodiment of the present application includes: a source data acquisition unit 210 for acquiring a brain image of a patient; a neural network encoding unit 220 for inputting the acquired brain image into a convolutional neural network as a feature map extractor to extract a first feature map from an m-th layer of a shallow layer of the convolutional neural network and outputting a second feature map from a last layer of the convolutional neural network; a feature enhancement unit 230, configured to perform enhancement coding on the first feature map using a second convolutional neural network as a feature descriptor to obtain a third feature map, where the feature descriptor and the convolutional neural network as a feature extractor have a symmetric network structure; an eigenvalue decomposition unit 240, configured to perform eigenvalue decomposition on each eigenvector along the channel dimension in the second eigen map to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the plurality of eigenvalues; a feature vector screening unit 250, configured to select a feature vector corresponding to a feature value greater than a threshold from the multiple feature vectors, and perform two-dimensional stitching on the feature vectors corresponding to the feature values greater than the threshold to obtain a principal-dimension feature matrix; a dimension reduction unit 260, configured to perform matrix multiplication on each feature matrix along a channel dimension in the second feature map and the principal dimension feature matrix to obtain a dimension reduction feature matrix; a feature matrix arrangement unit 270, configured to arrange the dimension-reduced feature matrix into a fourth feature map along a channel dimension; a feature map fusion 
unit 280, configured to fuse the third feature map and the fourth feature map to obtain a classification feature map; and a diagnostic result generating unit 290, configured to pass the classification feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether an abnormal region exists in the brain image of the patient.
Specifically, in this embodiment of the present application, the source data obtaining unit 210 and the neural network encoding unit 220 are configured to obtain a brain image of a patient, and input the obtained brain image into a convolutional neural network serving as a feature map extractor to extract a first feature map from an m-th layer of a shallow layer of the convolutional neural network and output a second feature map from a last layer of the convolutional neural network. As previously mentioned, it should be understood that structural imaging can clearly reflect the structural morphology of an organ but cannot provide functional information of the organ, while functional imaging can accurately provide metabolic information and real-time activity of the organ but cannot display structural morphology details of the brain. Therefore, in the technical solution of the present application, it is desirable to accurately determine an abnormal region in a brain image of a patient by more comprehensively using feature information in the brain image.
That is, specifically, in the technical solution of the present application, a brain image of the patient is first acquired by a brain scanner. The acquired brain image is then input into a convolutional neural network serving as a feature map extractor, so as to extract a first feature map from the m-th (shallow) layer of the network and output a second feature map from its last layer. It is worth mentioning that the value of m ranges from 2 to 6. It should be understood that the convolutional neural network extracts shallow features such as shapes, edges, and corners in layers 1 to 3, while the feature maps in layers 4 to 6 focus more on the texture features of the brain image. In this way, the shallow and deep features of the brain image can be extracted separately, so that different kinds of feature information can be better utilized for accurate judgment.
In particular, in one specific example, each layer of the first convolutional neural network includes a convolutional layer, a pooling layer, and an activation layer, and each layer of the first convolutional neural network performs convolution processing based on a convolution kernel on the input data using the convolutional layer, performs pooling processing on a convolution feature map output by the convolutional layer using the pooling layer, and performs activation processing on a pooled feature map output by the pooling layer using the activation layer during forward transfer of the layer.
Specifically, in this embodiment of the present application, the feature enhancement unit 230 is configured to perform enhancement coding on the first feature map using a second convolutional neural network serving as a feature descriptor to obtain a third feature map, where the feature descriptor and the convolutional neural network serving as the feature extractor have symmetric network structures. It should be understood that, because the first feature map, as a shallow feature, differs greatly in expression from the second feature map, as a deep feature, the technical solution of the present application first enhances the expression of the shallow features using a feature descriptor corresponding to the feature extractor. Specifically, the feature descriptor has a structure symmetric to that of the feature extractor: each convolutional layer corresponds to one transposed convolutional layer, whose convolution kernel is the transpose of the kernel of the corresponding convolutional layer, and each pooling layer corresponds to one anti-pooling layer. The feature descriptor shares weights with the feature extractor, so that the activated neurons in each convolutional layer can be restored. That is, the first feature map is enhancement-coded using the second convolutional neural network to obtain the third feature map. Accordingly, in one specific example, the convolution kernel of each convolutional layer of the second convolutional neural network is the transpose of the convolution kernel of the corresponding convolutional layer of the feature extractor, each anti-pooling layer of the second convolutional neural network corresponds to one pooling layer of the first convolutional neural network, and the second convolutional neural network shares weights with the first convolutional neural network.
Specifically, in this embodiment, the eigenvalue decomposition unit 240 and the feature vector screening unit 250 are configured to perform eigenvalue decomposition on each feature matrix along the channel dimension of the second feature map to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the plurality of eigenvalues, to select the eigenvectors corresponding to eigenvalues greater than a threshold from the plurality of eigenvectors, and to perform two-dimensional splicing on those eigenvectors to obtain a principal-dimension feature matrix. It should be understood that, further considering the dimension difference between the first feature map and the second feature map, the technical solution of the present application performs dimensionality reduction based on singular value decomposition on the second feature map. That is, eigenvalue decomposition is first performed on each feature matrix along the channel dimension of the second feature map to obtain a plurality of eigenvalues and the corresponding eigenvectors. The eigenvectors corresponding to eigenvalues greater than the threshold are then selected from the obtained eigenvectors and spliced two-dimensionally to obtain the principal-dimension feature matrix.
More specifically, in this embodiment, the eigenvalue decomposition unit is further configured to: perform eigenvalue decomposition on each feature matrix along the channel dimension in the second feature map to obtain the plurality of eigenvalues and the plurality of eigenvectors corresponding to the plurality of eigenvalues; wherein the formula is: M = QΛQ^T, where Λ = diag(λ1, λ2, λ3, ..., λn), λ1, λ2, λ3, ..., λn are the eigenvalues, Q = (q1, q2, q3, ..., qn), and q1, q2, q3, ..., qn are the eigenvectors corresponding to the eigenvalues.
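The decomposition and screening steps can be sketched in NumPy; a toy symmetric matrix stands in for one channel slice of the second feature map, and the threshold value is an assumption for illustration:

```python
import numpy as np

# Toy stand-in for one feature matrix along the channel dimension (symmetric,
# so its eigenvalues are real and np.linalg.eigh applies).
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# Eigenvalue decomposition: M = Q @ diag(lam) @ Q.T
lam, Q = np.linalg.eigh(M)   # eigenvalues ascending: 2 - sqrt(2), 2, 2 + sqrt(2)

# Screening: keep the eigenvectors whose eigenvalues exceed the threshold, then
# splice them column-wise into the principal-dimension feature matrix.
threshold = 1.0              # assumed value for illustration
Qp = Q[:, lam > threshold]   # shape (3, 2): two eigenvalues exceed 1.0
```

Column-wise splicing of the retained eigenvectors is one natural reading of the patent's "two-dimensional splicing"; each retained column spans one principal direction of the feature matrix.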
Specifically, in this embodiment of the present application, the dimension reduction unit 260 and the feature matrix arrangement unit 270 are configured to matrix-multiply each feature matrix along the channel dimension in the second feature map with the principal-dimension feature matrix to obtain a dimension-reduced feature matrix, and to arrange the dimension-reduced feature matrices into a fourth feature map along the channel dimension. That is, in the technical solution of the present application, after the principal-dimension feature matrix is obtained, each original feature matrix along the channel dimension in the second feature map is further multiplied by the matrix formed by two-dimensional splicing of the selected eigenvectors, that is, the principal-dimension feature matrix, so as to obtain the dimension-reduced feature matrix. The obtained feature matrices are then arranged into the fourth feature map along the channel dimension, so as to facilitate subsequent feature fusion and further improve the classification accuracy.
More specifically, in this embodiment of the present application, the dimension reduction unit is further configured to: matrix-multiply each feature matrix along the channel dimension in the second feature map with the principal-dimension feature matrix by the following formula to obtain the dimension-reduced feature matrix; wherein the formula is:

M' = M ⊗ (q1, q2, ..., qN)

where M' is the dimension-reduced feature matrix, M is the original feature matrix, q1, q2, ..., qN are the selected eigenvectors, and ⊗ represents matrix multiplication.
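Continuing the sketch (all sizes are illustrative assumptions), each channel's feature matrix is multiplied with the principal-dimension matrix and the results are re-stacked along the channel dimension:

```python
import numpy as np

rng = np.random.default_rng(1)
C, H, W = 4, 8, 8                                   # assumed feature-map sizes
second_feature_map = rng.standard_normal((C, H, W))

# Stand-in for the principal-dimension feature matrix: 3 orthonormal columns
# playing the role of the screened, two-dimensionally spliced eigenvectors.
Qp, _ = np.linalg.qr(rng.standard_normal((W, 3)))

# Matrix-multiply each channel's feature matrix with Qp, then arrange the
# dimension-reduced matrices along the channel dimension as the fourth feature map.
fourth_feature_map = np.stack(
    [second_feature_map[c] @ Qp for c in range(C)], axis=0)
```

Each 8x8 channel slice becomes an 8x3 slice, so the fourth feature map keeps the channel count while its last spatial dimension shrinks to the number of retained principal directions.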
Specifically, in the embodiment of the present application, the feature map fusing unit 280 and the diagnosis result generating unit 290 are configured to fuse the third feature map and the fourth feature map to obtain a classification feature map, and pass the classification feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether an abnormal region exists in a brain image of a patient. That is, in the technical solution of the present application, the third feature map and the fourth feature map are further fused to better fuse the shallow feature information and the deep feature information in the brain image, so as to obtain a classification feature map, which can make the classification result more accurate. Then, the classification feature map is passed through a classifier to obtain a classification result representing whether an abnormal region exists in the brain image of the patient.
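The patent does not fix the fusion operator. One common choice, shown here purely as an assumption, is concatenation along the channel dimension (element-wise addition would be an alternative when the two maps share a shape):

```python
import numpy as np

rng = np.random.default_rng(2)
third_feature_map = rng.standard_normal((4, 8, 8))    # assumed shapes
fourth_feature_map = rng.standard_normal((6, 8, 8))

# Fuse the shallow-enhanced and dimension-reduced deep features by
# concatenating along the channel axis (axis 0 here).
classification_feature_map = np.concatenate(
    [third_feature_map, fourth_feature_map], axis=0)
```

Concatenation preserves both sources of information intact and leaves it to the downstream classifier to weigh shallow against deep features.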
More specifically, in an embodiment of the present application, the diagnosis result generating unit is further configured to: process the classification feature map using the classifier in the following formula to generate the classification result; wherein the formula is: softmax{(Wn, Bn) : ... : (W1, B1) | Project(F)}, where Project(F) represents projecting the classification feature map as a vector, W1 to Wn are the weight matrices of the fully connected layers, and B1 to Bn are the bias matrices of the fully connected layers.
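A minimal NumPy sketch of the classifier formula follows; the layer widths and the two-class output are assumptions, and Project(F) is realized as flattening:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(3)
F = rng.standard_normal((10, 8, 8))   # classification feature map (assumed size)

v = F.reshape(-1)                     # Project(F): flatten the map to a vector

# Two fully connected layers (W1, B1) and (W2, B2); widths are assumptions.
W1, B1 = rng.standard_normal((64, v.size)) * 0.05, np.zeros(64)
W2, B2 = rng.standard_normal((2, 64)) * 0.05, np.zeros(2)  # 2 classes: abnormal / normal

h = np.tanh(W1 @ v + B1)
probs = softmax(W2 @ h + B2)          # classification result as probabilities
```

The final softmax output can be read directly as the probability that an abnormal region exists versus does not exist in the input image.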
In summary, the intelligent medical system 200 according to the embodiment of the present application has been illustrated. It extracts local features of the brain image of the patient through the m-th (shallow) layer and the last layer of the convolutional neural network serving as the feature extractor to obtain a first feature map and a second feature map, respectively; it then uses a second convolutional neural network serving as a feature descriptor to enhance the expression of the shallow features in the first feature map, and performs dimension reduction based on singular value decomposition on the second feature map to account for the dimension difference between the two feature maps, thereby improving the classification accuracy. In this way, abnormal regions in the brain image of the patient can be accurately detected and judged so as to safeguard the health of the patient.
As described above, the smart medical system 200 according to the embodiment of the present application may be implemented in various terminal devices, such as a server of a smart medical algorithm. In one example, the smart medical system 200 according to the embodiment of the present application may be integrated into a terminal device as a software module and/or a hardware module. For example, the intelligent medical system 200 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the intelligent medical system 200 can also be one of the hardware modules of the terminal device.
Alternatively, in another example, the intelligent medical system 200 and the terminal device may be separate devices, and the intelligent medical system 200 may be connected to the terminal device through a wired and/or wireless network and transmit the interactive information according to an agreed data format.
Exemplary method
Fig. 3 illustrates a flow chart of a working method of the intelligent medical system. As shown in fig. 3, the working method of the intelligent medical system according to the embodiment of the present application includes the steps of: S110, acquiring a brain image of a patient; S120, inputting the acquired brain image into a convolutional neural network serving as a feature map extractor to extract a first feature map from the m-th (shallow) layer of the convolutional neural network and output a second feature map from the last layer of the convolutional neural network; S130, performing enhancement coding on the first feature map using a second convolutional neural network serving as a feature descriptor to obtain a third feature map, wherein the feature descriptor and the convolutional neural network serving as the feature map extractor have symmetric network structures; S140, performing eigenvalue decomposition on each feature matrix along the channel dimension in the second feature map to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the eigenvalues; S150, selecting the eigenvectors corresponding to eigenvalues greater than a threshold from the plurality of eigenvectors, and two-dimensionally splicing the eigenvectors corresponding to the eigenvalues greater than the threshold to obtain a principal-dimension feature matrix; S160, matrix-multiplying each feature matrix along the channel dimension in the second feature map with the principal-dimension feature matrix to obtain a dimension-reduced feature matrix; S170, arranging the dimension-reduced feature matrices into a fourth feature map along the channel dimension; S180, fusing the third feature map and the fourth feature map to obtain a classification feature map; and S190, passing the classification feature map through a classifier to obtain a classification result, wherein the classification result is used for indicating whether an abnormal region exists in the brain image of the patient.
Fig. 4 is a schematic diagram illustrating an architecture of the working method of the intelligent medical system according to an embodiment of the present application. As shown in fig. 4, in the network architecture of the working method, first, an acquired brain image (e.g., IN as illustrated in fig. 4) is input to a convolutional neural network (e.g., CNN1 as illustrated in fig. 4) serving as a feature map extractor to extract a first feature map (e.g., F1 as illustrated in fig. 4) from the m-th (shallow) layer of the convolutional neural network and output a second feature map (e.g., F2 as illustrated in fig. 4) from the last layer of the convolutional neural network; next, the first feature map is enhancement-coded using a second convolutional neural network (e.g., CNN2 as illustrated in fig. 4) serving as a feature descriptor to obtain a third feature map (e.g., F3 as illustrated in fig. 4); then, eigenvalue decomposition is performed on each feature matrix along the channel dimension in the second feature map to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the plurality of eigenvalues (e.g., VF1 as illustrated in fig. 4); then, the eigenvectors corresponding to eigenvalues greater than a threshold (e.g., VF2 as illustrated in fig. 4) are selected from the plurality of eigenvectors and two-dimensionally spliced to obtain a principal-dimension feature matrix (e.g., MF1 as illustrated in fig. 4); then, each feature matrix along the channel dimension in the second feature map is matrix-multiplied with the principal-dimension feature matrix to obtain a dimension-reduced feature matrix (e.g., MF2 as illustrated in fig. 4); then, the dimension-reduced feature matrices are arranged into a fourth feature map along the channel dimension (e.g., F4 as illustrated in fig. 4); then, the third feature map and the fourth feature map are fused to obtain a classification feature map (e.g., FC as illustrated in fig. 4); and, finally, the classification feature map is passed through a classifier (e.g., the classifier as illustrated in fig. 4) to obtain a classification result, wherein the classification result is used for indicating whether an abnormal region exists in the brain image of the patient.
More specifically, in steps S110 and S120, a brain image of a patient is acquired, and the acquired brain image is input to a convolutional neural network as a feature map extractor to extract a first feature map from an m-th layer of a shallow layer of the convolutional neural network and output a second feature map by a last layer of the convolutional neural network. It should be understood that structural imaging can clearly reflect the structural morphology of an organ but cannot provide functional information of the organ, while functional imaging can accurately provide metabolic information and real-time activities of the organ but cannot display structural morphology details of the brain. Therefore, in the technical solution of the present application, it is desirable to more comprehensively use the feature information in the brain image to accurately determine the abnormal region in the brain image of the patient.
That is, specifically, in the technical solution of the present application, a brain image of a patient is first acquired by a brain scanner. Then, the acquired brain image is input to a convolutional neural network serving as a feature map extractor to extract a first feature map from the m-th (shallow) layer of the convolutional neural network and output a second feature map from the last layer of the convolutional neural network. It is worth mentioning that, here, m ranges from 2 to 6. It should be understood that the convolutional neural network extracts shallow features such as shapes, edges, and corners in layers 1 to 3, while the feature maps extracted in layers 4 to 6 focus more on texture features of the brain image. In this way, the shallow features and the deep features of the brain image can be extracted separately, so that the different kinds of feature information of the brain image can be better utilized for accurate judgment.
In particular, in one specific example, each layer of the first convolutional neural network includes a convolutional layer, a pooling layer, and an activation layer, and during the forward pass of each layer, the network performs convolution processing based on a convolution kernel on the input data using the convolutional layer, performs pooling processing on the convolution feature map output by the convolutional layer using the pooling layer, and performs activation processing on the pooled feature map output by the pooling layer using the activation layer.
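The conv-pool-activation layer structure and the shallow/deep taps can be sketched as follows; the single-channel images, tiny kernels, and layer count are all assumptions, and the loop-based convolution is written for clarity rather than speed:

```python
import numpy as np

def conv2d_valid(x, k):
    """Minimal single-channel 'valid' convolution (illustrative, not optimized)."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h * s, :w * s].reshape(h, s, w, s).max(axis=(1, 3))

def layer(x, k):
    """One layer's forward pass: convolution, then pooling, then ReLU activation,
    matching the order described in the text above."""
    return np.maximum(max_pool(conv2d_valid(x, k)), 0.0)

rng = np.random.default_rng(4)
image = rng.standard_normal((33, 33))           # stand-in brain image
kernels = [rng.standard_normal((2, 2)) for _ in range(4)]

m = 2                                           # shallow tap, within the 2-6 range
x, taps = image, []
for k in kernels:
    x = layer(x, k)
    taps.append(x)
first_feature_map = taps[m - 1]                 # output of the m-th layer
second_feature_map = taps[-1]                   # output of the last layer
```

Capturing every layer's output in `taps` is the sketch's analogue of tapping the extractor at both its m-th layer and its last layer.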
More specifically, in step S130, the first feature map is enhancement-coded using a second convolutional neural network serving as a feature descriptor, the feature descriptor having a network structure symmetric to that of the convolutional neural network serving as the feature map extractor, to obtain a third feature map. It should be understood that because the first feature map, as a shallow feature, differs greatly in expression from the second feature map, as a deep feature, in the technical solution of the present application the expression of the shallow feature is first enhanced using a feature descriptor corresponding to the feature extractor. Specifically, the feature descriptor has a structure symmetric to that of the feature extractor: each convolutional layer corresponds to one transposed convolutional layer, the convolution kernel of the transposed convolutional layer being the transpose of the convolution kernel of the corresponding convolutional layer; each pooling layer corresponds to one anti-pooling layer; and the feature descriptor shares weights with the feature extractor, so that the neurons activated in each convolutional layer can be restored. That is, the first feature map is enhancement-coded using the second convolutional neural network serving as the feature descriptor to obtain the third feature map. Accordingly, in one specific example, the convolution kernel of each convolutional layer of the second convolutional neural network is the transpose of the convolution kernel of the corresponding convolutional layer of the feature extractor, each anti-pooling layer of the second convolutional neural network corresponds to one pooling layer of the first convolutional neural network, and the second convolutional neural network shares weights with the first convolutional neural network.
More specifically, in steps S140 and S150, eigenvalue decomposition is performed on each feature matrix along the channel dimension in the second feature map to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the plurality of eigenvalues; the eigenvectors corresponding to eigenvalues greater than a threshold are then selected from the plurality of eigenvectors and two-dimensionally spliced to obtain a principal-dimension feature matrix. It should be understood that, further considering the dimension difference between the first feature map and the second feature map, in the technical solution of the present application, dimension reduction based on singular value decomposition needs to be performed on the second feature map. That is, specifically, first, eigenvalue decomposition is performed on each of the feature matrices along the channel dimension in the second feature map to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the plurality of eigenvalues. Further, the eigenvectors corresponding to the eigenvalues greater than the threshold are selected from the obtained eigenvectors and two-dimensionally spliced to obtain the principal-dimension feature matrix.
Specifically, in this embodiment of the present application, the process of performing eigenvalue decomposition on each feature matrix along the channel dimension in the second feature map to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the eigenvalues includes: performing eigenvalue decomposition on each feature matrix along the channel dimension in the second feature map to obtain the plurality of eigenvalues and the plurality of eigenvectors corresponding to the plurality of eigenvalues; wherein the formula is: M = QΛQ^T, where Λ = diag(λ1, λ2, λ3, ..., λn), λ1, λ2, λ3, ..., λn are the eigenvalues, Q = (q1, q2, q3, ..., qn), and q1, q2, q3, ..., qn are the eigenvectors corresponding to the eigenvalues.
More specifically, in steps S160 and S170, each feature matrix along the channel dimension in the second feature map is matrix-multiplied with the principal-dimension feature matrix to obtain a dimension-reduced feature matrix, and the dimension-reduced feature matrices are arranged into a fourth feature map along the channel dimension. That is, in the technical solution of the present application, after the principal-dimension feature matrix is obtained, each original feature matrix along the channel dimension in the second feature map is further multiplied by the matrix formed by two-dimensional splicing of the selected eigenvectors, that is, the principal-dimension feature matrix, so as to obtain the dimension-reduced feature matrix. Then, the obtained feature matrices are arranged into the fourth feature map along the channel dimension, so as to facilitate subsequent feature fusion and further improve the classification accuracy.
Specifically, in this embodiment of the present application, the process of performing matrix multiplication on each feature matrix along the channel dimension in the second feature map and the principal-dimension feature matrix to obtain a dimension-reduced feature matrix includes: matrix-multiplying each feature matrix along the channel dimension in the second feature map with the principal-dimension feature matrix by the following formula to obtain the dimension-reduced feature matrix; wherein the formula is:

M' = M ⊗ (q1, q2, ..., qN)

where M' is the dimension-reduced feature matrix, M is the original feature matrix, q1, q2, ..., qN are the selected eigenvectors, and ⊗ represents matrix multiplication.
More specifically, in step S180 and step S190, the third feature map and the fourth feature map are fused to obtain a classification feature map, and the classification feature map is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether an abnormal region exists in the brain image of the patient. That is, in the technical solution of the present application, the third feature map and the fourth feature map are further fused to better fuse the shallow feature information and the deep feature information in the brain image, so as to obtain a classification feature map, which can make the classification result more accurate. Then, the classification feature map is passed through a classifier to obtain a classification result representing whether an abnormal region exists in the brain image of the patient.
Accordingly, in one particular example, the classification feature map is processed using the classifier in the following formula to generate the classification result; wherein the formula is: softmax{(Wn, Bn) : ... : (W1, B1) | Project(F)}, where Project(F) represents projecting the classification feature map as a vector, W1 to Wn are the weight matrices of the fully connected layers, and B1 to Bn are the bias matrices of the fully connected layers.
In summary, the working method of the intelligent medical system according to the embodiment of the present application has been illustrated. The method extracts local features of the brain image of the patient through the m-th (shallow) layer and the last layer of the convolutional neural network serving as the feature map extractor, respectively, so as to obtain a first feature map and a second feature map; it further uses a second convolutional neural network serving as a feature descriptor to enhance the expression of the shallow features in the first feature map, and performs dimension reduction based on singular value decomposition on the second feature map to account for the dimension difference between the two feature maps, thereby improving the classification accuracy. In this way, abnormal regions in the brain image of the patient can be accurately detected and judged so as to safeguard the health of the patient.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (7)

1. An intelligent medical system, comprising:
a source data acquisition unit for acquiring a brain image of a patient;
a neural network coding unit for inputting the acquired brain image into a convolutional neural network as a feature map extractor to extract a first feature map from an m-th layer of a shallow layer of the convolutional neural network and outputting a second feature map from a last layer of the convolutional neural network;
a feature enhancement unit, configured to perform enhancement coding on the first feature map using a second convolutional neural network as a feature descriptor to obtain a third feature map, where the feature descriptor and the convolutional neural network as a feature extractor have a symmetric network structure;
an eigenvalue decomposition unit, configured to perform eigenvalue decomposition on each feature matrix along the channel dimension in the second feature map to obtain a plurality of eigenvalues and a plurality of eigenvectors corresponding to the plurality of eigenvalues;
a feature vector screening unit, configured to select the eigenvectors corresponding to eigenvalues greater than a threshold from the plurality of eigenvectors and to two-dimensionally splice the eigenvectors corresponding to the eigenvalues greater than the threshold to obtain a principal-dimension feature matrix;
a dimension reduction unit, configured to matrix-multiply each feature matrix along the channel dimension in the second feature map with the principal-dimension feature matrix to obtain a dimension-reduced feature matrix;
a feature matrix arrangement unit, configured to arrange the dimension-reduced feature matrices into a fourth feature map along the channel dimension;
a feature map fusion unit, configured to fuse the third feature map and the fourth feature map to obtain a classification feature map; and
a diagnosis result generating unit, configured to pass the classification feature map through a classifier to obtain a classification result, wherein the classification result is used for indicating whether an abnormal region exists in the brain image of the patient.
2. The intelligent medical system of claim 1, wherein each layer of the first convolutional neural network includes a convolutional layer, a pooling layer, and an activation layer, and during the forward pass of each layer, the first convolutional neural network performs convolution processing based on a convolution kernel on the input data using the convolutional layer, performs pooling processing on the convolution feature map output by the convolutional layer using the pooling layer, and performs activation processing on the pooled feature map output by the pooling layer using the activation layer.
3. The intelligent medical system of claim 2, wherein m ranges from 2 to 6.
4. The intelligent medical system of claim 3, wherein the convolution kernel of each convolutional layer of the second convolutional neural network is the transpose of the convolution kernel of the corresponding convolutional layer of the feature extractor, each anti-pooling layer of the second convolutional neural network corresponds to one pooling layer of the first convolutional neural network, and the second convolutional neural network shares weights with the first convolutional neural network.
5. The intelligent medical system according to claim 4, wherein the eigenvalue decomposition unit is further configured to perform eigenvalue decomposition on each feature matrix along the channel dimension in the second feature map to obtain the plurality of eigenvalues and the plurality of eigenvectors corresponding to the plurality of eigenvalues;
wherein the formula is: M = QΛQ^T, where Λ = diag(λ1, λ2, λ3, ..., λn), λ1, λ2, λ3, ..., λn are the eigenvalues, Q = (q1, q2, q3, ..., qn), and q1, q2, q3, ..., qn are the eigenvectors corresponding to the eigenvalues.
6. The intelligent medical system of claim 5, wherein the dimension reduction unit is further configured to matrix-multiply each feature matrix along a channel dimension in the second feature map with the principal-dimension feature matrix to obtain the dimension-reduced feature matrix;
wherein the formula is:

M' = M ⊗ (q1, q2, ..., qN)

where M' is the dimension-reduced feature matrix, M is the original feature matrix, q1, q2, ..., qN are the feature vectors, and ⊗ represents matrix multiplication.
7. The intelligent medical system of claim 6, wherein the diagnostic result generation unit is further configured to process the classification feature map using the classifier in the following formula to generate the classification result;
wherein the formula is: softmax{(Wn, Bn) : ... : (W1, B1) | Project(F)}, where Project(F) represents projecting the classification feature map as a vector, W1 to Wn are the weight matrices of the fully connected layers, and B1 to Bn are the bias matrices of the fully connected layers.
CN202210200007.2A 2022-03-02 2022-03-02 Intelligent medical system Withdrawn CN114648496A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210200007.2A CN114648496A (en) 2022-03-02 2022-03-02 Intelligent medical system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210200007.2A CN114648496A (en) 2022-03-02 2022-03-02 Intelligent medical system

Publications (1)

Publication Number Publication Date
CN114648496A true CN114648496A (en) 2022-06-21

Family

ID=81993739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210200007.2A Withdrawn CN114648496A (en) 2022-03-02 2022-03-02 Intelligent medical system

Country Status (1)

Country Link
CN (1) CN114648496A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115013298A (en) * 2022-06-22 2022-09-06 浙江石水泵业科技有限公司 Real-time performance on-line monitoring system and monitoring method of sewage pump
CN115143128A (en) * 2022-06-28 2022-10-04 浙江石水泵业科技有限公司 Fault diagnosis method and system for small submersible electric pump
CN115375980A (en) * 2022-06-30 2022-11-22 杭州电子科技大学 Block chain-based digital image evidence storing system and method
CN115375980B (en) * 2022-06-30 2023-05-09 杭州电子科技大学 Digital image certification system and certification method based on blockchain

Similar Documents

Publication Publication Date Title
CN114648496A (en) Intelligent medical system
Wang et al. G2DeNet: Global Gaussian distribution embedding network and its application to visual recognition
Luo et al. Adaptive image denoising by targeted databases
Li et al. Linestofacephoto: Face photo generation from lines with conditional self-attention generative adversarial networks
CN110659727B (en) Sketch-based image generation method
JP6244059B2 (en) Face image verification method and face image verification system based on reference image
CN110490247B (en) Image processing model generation method, image processing method and device and electronic equipment
EP1709572A2 (en) Method, system, storage medium, and data structure for image recognition using multilinear independent component analysis
CN112149720A (en) Fine-grained vehicle type identification method
WO2022166797A1 (en) Image generation model training method, generation method, apparatus, and device
CN110211205B (en) Image processing method, device, equipment and storage medium
CN112784660A (en) Face image reconstruction method and system
CN115013298A (en) Real-time performance on-line monitoring system and monitoring method of sewage pump
CN115205543A (en) Intelligent manufacturing method and system of stainless steel cabinet
US20230281751A1 (en) Systems and methods for multi-modal multi-dimensional image registration
EP4281908A1 (en) Cross-domain adaptive learning
Gupta et al. Toward unaligned guided thermal super-resolution
CN111210382A (en) Image processing method, image processing device, computer equipment and storage medium
CN113191390A (en) Image classification model construction method, image classification method and storage medium
CN115131194A (en) Method for determining image synthesis model and related device
CN113592769B (en) Abnormal image detection and model training method, device, equipment and medium
CN114627369A (en) Environment monitoring system, method and computer device thereof
Hladnik Image compression and face recognition: Two image processing applications of principal component analysis
CN113379597A (en) Face super-resolution reconstruction method
CN113762277A (en) Multi-band infrared image fusion method based on Cascade-GAN

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220621

WW01 Invention patent application withdrawn after publication