CN112348082B - Deep learning model construction method, image processing method and readable storage medium - Google Patents


Info

Publication number
CN112348082B
CN112348082B (application CN202011230732.1A)
Authority
CN
China
Prior art keywords
image
breast
region
interest
neural network
Prior art date
Legal status
Active
Application number
CN202011230732.1A
Other languages
Chinese (zh)
Other versions
CN112348082A (en)
Inventor
石磊
张麒
曹一迪
吕君蔚
Current Assignee
Hangzhou Shenrui Bolian Technology Co., Ltd.
Beijing Shenrui Bolian Technology Co., Ltd.
Original Assignee
Shanghai Yizhi Medical Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Yizhi Medical Technology Co., Ltd.
Priority to CN202011230732.1A
Publication of CN112348082A
Application granted
Publication of CN112348082B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images

Abstract

The present disclosure relates to a deep learning model construction method, an image processing method, and a readable storage medium. The construction method comprises: labeling a region of interest of a breast base image according to a breast reference image; constructing, based on a neural network, a corresponding pre-training model for performing a corresponding analysis on the region of interest of the breast base image; and training the pre-training model according to the region-of-interest labels of the breast base image to obtain the deep learning model, wherein the breast reference image and the breast base image have different imaging modalities. Through the embodiments of the present disclosure, image analysis of breast images can be performed accurately with the constructed deep learning model.

Description

Deep learning model construction method, image processing method and readable storage medium
Technical Field
The present disclosure relates to the technical field of intelligent computer-aided medical classification, and in particular to a deep learning model construction method, a breast image processing method, and a computer-readable storage medium.
Background
In the prior art, AI analysis methods for breast images often use only a single modality, such as breast molybdenum target (MG) images, digital breast tomosynthesis (DBT), or ultrasound breast images, but the performance of single-modality models is limited. Multi-modal breast image AI analysis methods require multi-modal images in both training (model building) and testing (inference, practical application). In breast cancer screening scenarios, and in the many medical institutions without multi-modal breast image acquisition equipment, usually only the single MG modality is available, so the multi-modal idea is not widely applicable to these scenarios or institutions.
Disclosure of Invention
The present disclosure aims to provide a deep learning model construction method, a breast image processing method, and a computer-readable storage medium that, in combination with deep learning, can accurately perform image analysis of breast images based on a constructed deep learning model.
According to one aspect of the present disclosure, a deep learning model construction method is provided, comprising:
labeling a region of interest of a breast base image according to a breast reference image;
constructing, based on a neural network, a corresponding pre-training model for performing a corresponding analysis on the region of interest of the breast base image; and
training the pre-training model according to the region-of-interest labels of the breast base image to obtain the deep learning model;
wherein the breast reference image and the breast base image have different imaging modalities.
In some embodiments, constructing the corresponding pre-training model based on a neural network for performing a corresponding analysis on the region of interest of the breast base image comprises:
constructing, based on a first neural network, a first pre-training model for identifying the region of interest of the breast base image;
wherein the first neural network is configured to include at least:
a feature extraction structure configured as a VGG-like network structure comprising a plurality of convolutional layers and pooling layers;
a region selection structure configured to select regions through a plurality of preset reference frames of different size ratios; and
a region pooling structure configured to pool the selected regions to the same size.
In some embodiments, constructing the corresponding pre-training model based on a neural network for performing a corresponding analysis on the region of interest of the breast base image comprises:
constructing, based on a second neural network, a second pre-training model for delineating the region of interest of the breast base image;
wherein the second neural network is configured to include at least:
max pooling layers configured to downsample the convolved and pooled features several times; and
deconvolution layers configured to upsample several times;
wherein the numbers of downsampling and upsampling operations are the same.
In some embodiments, constructing the corresponding pre-training model based on a neural network for performing a corresponding analysis on the region of interest of the breast base image comprises:
constructing, based on a third neural network, a third pre-training model for classifying the region of interest of the breast base image;
wherein the third neural network is configured to include at least:
a fully connected layer with a number of neurons; and
an output layer whose number of neurons is configured according to the classification task, the normalized probability values of all classes being obtained from the output layer to determine the classification result.
In some embodiments, labeling the region of interest of the breast base image according to the breast reference image comprises:
selecting a specific image layer of a first breast reference image;
extracting a region of interest of the specific image layer;
mapping the region of interest of the specific image layer onto the breast base image; and
labeling the region of interest of the breast base image by box selection and/or delineation.
In some embodiments, labeling the region of interest of the breast base image according to the breast reference image comprises:
extracting image parameters of the region of interest labeled on a second breast reference image according to the scanning mode of the second breast reference image;
mapping the region of interest labeled on the second breast reference image onto the breast base image; and
labeling the region of interest of the breast base image by box selection and/or delineation.
In some embodiments, the breast base image comprises a breast molybdenum target image, and
the breast reference image comprises a digital breast tomosynthesis image or an ultrasound breast image.
According to one aspect of the present disclosure, a breast image processing method is provided, comprising:
acquiring a breast base image;
performing a corresponding analysis on the breast base image based on the deep learning model constructed by the deep learning model construction method; and
obtaining a corresponding analysis result for the region of interest of the breast base image.
In some embodiments, performing the corresponding analysis on the breast base image comprises at least one of:
identifying a region of interest of the breast base image;
delineating a region of interest of the breast base image;
classifying the region of interest of the breast base image.
According to one aspect of the present disclosure, a computer-readable storage medium is provided, having computer-executable instructions stored thereon that, when executed by a processor, implement:
the deep learning model construction method described above; or
the breast image processing method described above.
According to the deep learning model construction method, the breast image processing method, and the computer-readable storage medium of the embodiments of the present disclosure, a region of interest of a breast base image whose imaging modality differs from that of a breast reference image is labeled according to the breast reference image; a corresponding pre-training model for performing a corresponding analysis on the region of interest of the breast base image is constructed based on a neural network; and the pre-training model is trained according to the region-of-interest labels of the breast base image to obtain the deep learning model. This improves the classification accuracy of the molybdenum target AI, given that AI work divides into a training stage (model building) and a testing stage (inference, practical application). In this method, an additional modality (DBT or ultrasound) is acquired during the training stage of the molybdenum target AI model, and MG+DBT or MG+ultrasound are used together during training to obtain a more robust and accurate molybdenum target AI model; the model then achieves higher region of interest (ROI) detection, segmentation, and classification accuracy in practical application scenarios (the testing stage) where only MG is available. Using an additional modality in the training stage but not in the testing stage is known as learning using privileged information (LUPI); combined with deep learning it forms privileged-information deep learning. DBT or ultrasound serves as privileged information in MG classification model training, improving the analysis precision and performance on MG images while respecting the accessibility of multi-modal data acquisition. The training stage uses an additionally acquired privileged-information modality (DBT or ultrasound), while the testing (inference) stage needs only the conventional modality (MG), making the approach applicable to MG-only medical scenarios (e.g., primary breast cancer screening, routine physical examination).
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure, as claimed.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may designate like components in different views. Like reference numerals with letter suffixes or like reference numerals with different letter suffixes may represent different instances of like components. The drawings illustrate various embodiments generally, by way of example and not by way of limitation, and together with the description and claims, serve to explain the disclosed embodiments.
Fig. 1 shows a flowchart of a deep learning model construction method according to an embodiment of the present disclosure;
fig. 2 shows a flowchart of a method for processing a breast image according to an embodiment of the present disclosure;
fig. 3 shows a process flow diagram of an embodiment of the disclosure.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments will be described clearly and completely below with reference to the accompanying drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments derived by a person skilled in the art from the described embodiments without any inventive step fall within the protection scope of the present disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items.
To keep the following description of the embodiments of the present disclosure clear and concise, detailed descriptions of known functions and known components have been omitted from the present disclosure.
The present disclosure relates to the analysis of region-of-interest image information based on breast images, and the embodiments below take the breast image as the primary illustrative example. It should be understood that in machine vision and image processing, a region to be processed is delineated on the image in the form of a box, circle, ellipse, irregular polygon, or the like, and is called a region of interest (ROI). Machine vision software such as Halcon, OpenCV, and Matlab commonly uses various operators and functions to obtain the ROI before processing the image further. Any region of interest with clinical analysis or classification significance fits the application scenarios of the embodiments of the present disclosure. In clinical terms, a person skilled in the art will understand that a lesion is the part of the body where pathological change occurs, while signs are the manifestations of the lesion in the image scan and may include, for example, masses, calcifications, structural distortions, and asymmetries; corresponding accompanying signs may include swollen lymph nodes, thickened skin, retracted nipples, and the like. In short, taking breast classification as an example, a sign describes a focal point, whereas an accompanying sign describes the whole breast; for example, focal point A is a mass, and the left breast has a retracted nipple. That is, the lesion is the general concept behind the signs: the parts of the body where pathological changes occur are collectively called lesions, and signs are how those lesions appear in image scans. Molybdenum target mammography (MG) is a routine imaging technique for breast cancer screening, while the more sensitive digital breast tomosynthesis (DBT) is not a routine examination tool because the equipment is not widely available; DBT may be 20% more sensitive than MG in the classification of breast tumors. Current AI methods for breast lesion detection, segmentation, and classification usually use only a single modality (e.g., MG, DBT, or ultrasound), but the performance of single-modality models is limited. Recent work has used multiple modalities simultaneously for breast cancer classification, but the multi-modal idea relies on multiple modalities in both the training and testing phases. In breast cancer screening scenarios, and in the many medical institutions without multi-modal breast image acquisition equipment, usually only the single MG modality is available, so the multi-modal idea is not widely applicable to these scenarios or institutions.
As one solution, as shown in Fig. 1 in combination with Fig. 3, an embodiment of the present disclosure provides a deep learning model construction method, comprising:
S101: labeling a region of interest of a breast base image according to a breast reference image;
S102: constructing, based on a neural network, a corresponding pre-training model for performing a corresponding analysis on the region of interest of the breast base image;
S103: training the pre-training model according to the region-of-interest labels of the breast base image to obtain the deep learning model;
wherein the breast reference image and the breast base image have different imaging modalities.
One of the inventive concepts of the present disclosure is to improve the accuracy of conventional MG examination using artificial intelligence (AI), departing from approaches that use multiple modalities in both the AI training and testing phases. AI work divides into a training stage (model building) and a testing stage (inference, practical application). One goal of the present disclosure can be understood as improving the accuracy of image information and image parameters for lesions and signs in breast images: the region of interest of the breast base image is labeled according to the breast reference image, and a corresponding pre-training model is constructed based on a neural network to perform a corresponding analysis on that region of interest. Taking the molybdenum target image as an example, an additional modality (DBT or ultrasound) is acquired during the training stage of the molybdenum target AI model, and MG+DBT or MG+ultrasound are used together in training to obtain a more robust and accurate molybdenum target AI model, which also achieves higher lesion detection, segmentation, and classification accuracy in practical application scenarios (the testing stage) where only MG is available. Using an additional modality in the training stage but not in the testing stage follows the learning using privileged information (LUPI) technique; combined with deep learning, it forms privileged-information deep learning, which can further improve performance. In this method, DBT or ultrasound serves as privileged information in MG classification model training, improving lesion detection and disease classification performance on MG.
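By way of illustration only, the two-stage flow just described might be sketched in PyTorch as below. The epoch counts, loss function, and dataset objects are hypothetical placeholders rather than values given by the present disclosure; only the pattern is taken from the text, namely that privileged DBT/ultrasound shapes the supervision while the model inputs stay MG-only.

    import torch
    from torch.utils.data import DataLoader

    def run_stage(model, dataset, loss_fn, optimizer, epochs):
        """One training stage over MG inputs only; the targets differ per stage."""
        model.train()
        for _ in range(epochs):
            for mg_image, target in DataLoader(dataset, batch_size=32, shuffle=True):
                optimizer.zero_grad()
                loss_fn(model(mg_image), target).backward()
                optimizer.step()

    def build_privileged_model(model, mg_labeled_ds, mg_mapped_ds, loss_fn,
                               epochs_pretrain=10, epochs_finetune=5):  # assumed counts
        opt = torch.optim.Adam(model.parameters(), lr=1e-5)
        # Stage 1: pre-train on lesions labeled directly on conventional MG.
        run_stage(model, mg_labeled_ds, loss_fn, opt, epochs_pretrain)
        # Stage 2: fine-tune on "fed-back" labels mapped from DBT/ultrasound onto MG.
        # The privileged modality shapes the supervision but never the inputs,
        # so inference needs only a plain MG image.
        run_stage(model, mg_mapped_ds, loss_fn, opt, epochs_finetune)
        return model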
In some specific embodiments, step S101 may specifically include the following; that is, labeling the region of interest of the breast base image according to the breast reference image comprises:
selecting a specific image layer of the first breast reference image;
extracting a region of interest of the specific image layer;
mapping the region of interest of the specific image layer onto the breast base image; and
labeling the region of interest of the breast base image by box selection and/or delineation.
In conjunction with the above, the DBT image serves as the first breast reference image of this embodiment, and the MG image as the breast base image. Where the sign contained in the region of interest is a mass, the mass is labeled at its largest level in the DBT image, and the position coordinates of the labeled region of interest are extracted and mapped back to the MG molybdenum target image. This can be implemented as follows: the MG image and the DBT image are displayed separately, with the DBT image displayed and interpreted on a professional screen. Although DBT is a multi-layer image and glandular tissue appears at different layers, there is often a clearest layer, usually the layer with the largest cross-section of the mass, which can be regarded as the specific image layer of the embodiments of the present disclosure. The image of this layer is matched to the molybdenum target image on the other display interface; specifically, after post-processing, the DBT image has the same size as the molybdenum target image and their contours can be overlapped, so the label can be placed at the same position on the molybdenum target. Box selection suited to the mass detection task includes, but is not limited to, rectangular, circular, or elliptical labels. Delineation includes, but is not limited to, manual edge tracing; when edges are traced, some contours of the mass are also revealed, which makes labeling on the molybdenum target convenient, and the surrounding glandular shape helps localization, enabling more accurate edge labeling. Through the labeling approaches of the embodiments of the present disclosure, glandular interference with the mass can be removed; compared with labeling using the molybdenum target alone, mass edges can be labeled more clearly, and information such as radiomics features can be extracted more accurately.
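A minimal sketch of this mapping step, assuming the DBT layer has already been post-processed and registered so that its contour overlaps the molybdenum target image as described above; the function name and box format are illustrative, not from the disclosure:

    import numpy as np

    def map_roi_to_mg(roi_xyxy, dbt_slice_shape, mg_shape):
        """Transfer a box labeled on the clearest DBT layer onto the MG image.

        After registration the transfer reduces to a scale between the two
        pixel grids (the identity when the sizes already match).
        """
        sy = mg_shape[0] / dbt_slice_shape[0]
        sx = mg_shape[1] / dbt_slice_shape[1]
        x0, y0, x1, y1 = roi_xyxy
        return np.array([x0 * sx, y0 * sy, x1 * sx, y1 * sy])  # same anatomy, MG coords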
It will be appreciated by those skilled in the art that the labeling for other indications, such as structural distortion, asymmetric densification, calcification, lymph node lesions, is similar to the labeling of masses in the previous embodiment, i.e., labeling at the largest level of the lesion, extracting the position coordinates of the labeled region of interest and mapping back to the molybdenum target.
In some specific embodiments, step S101 may specifically include the following; that is, labeling the region of interest of the breast base image according to the breast reference image comprises:
extracting image parameters of the region of interest labeled on the second breast reference image according to the scanning mode of the second breast reference image;
mapping the region of interest labeled on the second breast reference image onto the breast base image; and
labeling the region of interest of the breast base image by box selection and/or delineation.
In conjunction with the above, an ultrasound image serves as the second breast reference image of this embodiment, and the MG image as the breast base image. The lesion is mapped back to the MG image based on the location and size of the lesion recorded on the ultrasound image. In clinical practice, an ultrasound image may be acquired by transverse, longitudinal, or oblique scanning. Preferably, the ultrasound scanning of the embodiments of the present disclosure may use transverse scanning: scanning starts from the uppermost transverse section of the breast, the probe is moved in parallel from left to right and then downward layer by layer, so the acquired transverse ultrasound sections are easier to localize against the MG image. Based on the image parameters calibrated during ultrasound scanning, the lesion position is recorded on the ultrasound section where the lesion appears largest, and the same position is labeled on the corresponding MG image. Box selection includes, but is not limited to, rectangular, circular, or elliptical labels suited to the lesion detection task; delineation, including but not limited to fine edge tracing of the lesion, suits the segmentation task. When the ultrasound examination uses longitudinal (vertical) or oblique (radial) scanning, the lesion position is mapped and labeled onto the MG image as follows: positional information of the lesion is recorded during ultrasound acquisition, including left or right breast, azimuth, and depth; the azimuth is described in clock directions centered on the nipple (e.g., 3 o'clock or 7 o'clock), and the depth is recorded as the distance from the ultrasound probe surface to the lesion center; the lesion position is then mapped back to the MG image based on this positional information.
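A sketch of the kind of positional record this implies, with a rough clock-to-coordinate conversion as a placement aid for the annotator. The radius field, the pixel-spacing handling, and the neglect of the left/right mirror convention are simplifying assumptions added for illustration, not parameters specified by the disclosure:

    import math
    from dataclasses import dataclass

    @dataclass
    class UltrasoundLesionRecord:
        side: str          # "left" or "right" breast
        clock_hour: float  # azimuth around the nipple, e.g. 3.0 for "3 o'clock"
        radius_mm: float   # nipple-to-lesion distance (assumed field, for illustration)
        depth_mm: float    # probe surface to lesion center

    def approx_mg_position(rec, nipple_xy_px, mm_per_px):
        """Rough mapping of a clock-face ultrasound record to MG pixel
        coordinates; a hint for the human labeler, not a registration."""
        theta = math.radians(rec.clock_hour * 30.0)  # 12 o'clock points up, clockwise
        r = rec.radius_mm / mm_per_px
        x = nipple_xy_px[0] + r * math.sin(theta)
        y = nipple_xy_px[1] - r * math.cos(theta)
        return x, y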
The main idea of labeling the region of interest of the breast base image according to the breast reference image in the embodiments of the present disclosure is "fed-back" labeling: the DBT image is interpreted accurately by professionals or by AI means, and regions of interest found on the DBT image, such as masses, structural distortion, asymmetric density, calcification, and lymph nodes, are labeled on the MG image. Labels drawn as rectangular boxes or ellipses suit a detection task; fine delineations suit a segmentation task. Lesions labeled on conventional MG (without DBT) are used to pre-train AI models such as a convolutional neural network (CNN), while the "fed-back" labels are used to continue iterating and further optimize the AI lesion detection or segmentation model.
In some embodiments, constructing the corresponding pre-training model based on a neural network for performing a corresponding analysis on the region of interest of the breast base image comprises:
constructing, based on a first neural network, a first pre-training model for identifying the region of interest of the breast base image;
wherein the first neural network is configured to include at least:
a feature extraction structure configured as a VGG-like network structure comprising a plurality of convolutional layers and pooling layers;
a region selection structure configured to select regions through a plurality of preset reference frames of different size ratios; and
a region pooling structure configured to pool the selected regions to the same size.
Specifically, based on the above description, the breast DBT image is taken as the example; the implementation for ultrasound images is entirely analogous. Lesion labels are made on the breast DBT image and mapped onto the two-dimensional molybdenum target image, while the pre-training model of this embodiment is obtained by training directly on molybdenum target image labels. The specific steps can be implemented at least as follows (a code sketch of the detection model follows step 4):
1. The three-dimensional breast DBT image is mapped onto the two-dimensional MG image and spatially registered with the two-dimensional breast molybdenum target image; the mass labels on the breast DBT are thereby mapped onto the two-dimensional breast molybdenum target image.
2. An image of a certain size is cropped from the edge of the breast molybdenum target image as the neural network input; for example, a rectangle with an aspect ratio of 1024:384 is cropped and scaled to 1024×384 as the network input size, with bilinear interpolation used for scaling. The left half of the image is cropped for the left breast and the right half for the right breast, ensuring that the input image contains the breast body.
3. The pre-training model uses a model suitable for breast lesion detection, whose structure can be constructed, for example, as a Faster RCNN network. Faster RCNN divides into four main parts. Conv layers first extract feature maps from the breast molybdenum target image using a set of basic conv+ReLU+pooling layers; these feature maps are shared by the subsequent RPN layers and fully connected layers. Region Proposal Networks (RPN) generate candidate regions (region proposals). ROI Pooling collects the input feature maps and proposals, extracts the proposal feature maps after integrating this information, and sends them to the subsequent fully connected layers to judge the target category. Classification uses the proposal feature maps to compute each proposal's category, while bounding box regression is performed again to obtain the final accurate position of the detection box. In the embodiment of the present disclosure, specifically, a checkpoint previously trained for 60000 steps may be used. The network comprises four structural divisions: feature extraction, region selection (RPN), region pooling (ROI Pooling), and classification output. The feature extraction part is a series of conv+ReLU+pooling layers in a VGG-like network structure, with 12 convolutional layers and 4 pooling layers in total. The RPN part uses four preset reference frames (anchors); in this specific embodiment they may all be rectangles with an aspect ratio of 1, with size ratios varying from 0.05 to 0.8. The RPN part includes a binary classification of each anchor and a size correction with 4 degrees of freedom. In this embodiment, a sigmoid loss function is used for the anchor binary classification, and smooth L1 is selected as the size-correction loss function. The selected regions are then cut from the original feature map to obtain a series of regions of interest (ROI). Region pooling pools the selected regions to the same size and concatenates them. Finally, the pooled information passes through fully connected layers to obtain the final binary classification result, while the regions are corrected with reference to the previously obtained size-correction regression values.
4. Fine-tune training is performed on the pre-trained neural network model using the labels mapped from the DBT image onto the MG image.
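As a hedged illustration of step 3, a Faster RCNN with a VGG-like backbone and four square anchors can be assembled from torchvision's building blocks roughly as follows. The exact anchor pixel sizes are an assumed reading of "size ratio 0.05 to 0.8" against the 1024×384 input, and the grayscale MG is taken as replicated to 3 channels to fit torchvision's default transform; this is a sketch, not the patented implementation:

    import torch
    from torch import nn
    from torchvision.models.detection import FasterRCNN
    from torchvision.models.detection.rpn import AnchorGenerator
    from torchvision.ops import MultiScaleRoIAlign

    def make_vgg_like_backbone() -> nn.Sequential:
        """Conv+ReLU+Pooling feature extractor: 12 conv layers, 4 pooling layers."""
        layers, in_ch = [], 3  # grayscale MG replicated to 3 channels (assumption)
        for out_ch in (64, 128, 256, 512):
            for _ in range(3):
                layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True)]
                in_ch = out_ch
            layers.append(nn.MaxPool2d(2))
        backbone = nn.Sequential(*layers)
        backbone.out_channels = 512  # attribute required by torchvision's FasterRCNN
        return backbone

    # Four square reference frames (aspect ratio 1); pixel sizes are assumed.
    anchor_gen = AnchorGenerator(sizes=((20, 80, 160, 300),), aspect_ratios=((1.0,),))
    roi_pool = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)

    model = FasterRCNN(make_vgg_like_backbone(),
                       num_classes=2,  # region of interest vs. background
                       rpn_anchor_generator=anchor_gen,
                       box_roi_pool=roi_pool)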
In some embodiments, constructing the corresponding pre-training model based on a neural network for performing a corresponding analysis on the region of interest of the breast base image comprises:
constructing, based on a second neural network, a second pre-training model for delineating the region of interest of the breast base image;
wherein the second neural network is configured to include at least:
max pooling layers configured to downsample the convolved and pooled features several times; and
deconvolution layers configured to upsample several times;
wherein the numbers of downsampling and upsampling operations are the same.
Specifically, based on the above description, the breast DBT image is again taken as the example; the implementation for ultrasound images is entirely analogous. Lesion labels are made on the breast DBT image and mapped onto the two-dimensional molybdenum target image, while the pre-training model of this embodiment is obtained by training directly on molybdenum target image labels. The specific steps can be implemented at least as follows (a sketch of the segmentation network follows step 4):
1. The three-dimensional breast DBT image is mapped onto the two-dimensional MG image and spatially registered with the two-dimensional breast molybdenum target image; the mass labels on the breast DBT are thereby mapped onto the two-dimensional breast molybdenum target image.
2. An image of a certain size is cropped from the edge of the breast molybdenum target image as the neural network input; for example, a rectangle with an aspect ratio of 1024:384 is cropped and scaled to 1024×384 as the network input size, with bilinear interpolation used for scaling. The left half of the image is cropped for the left breast and the right half for the right breast, ensuring that the input image contains the breast body.
3. The pre-training model adopts a model suitable for lesion segmentation, whose structure can be constructed as a U-Net network mainly comprising convolutional layers, max pooling layers (downsampling), deconvolution layers (upsampling), and ReLU nonlinear activation functions. The network includes max pooling layers configured to downsample several times and deconvolution layers configured to upsample several times; for example, the U-Net of this embodiment may include four pooling and four upsampling processes, with bilinear interpolation used for upsampling. There are two convolutional layers at each resolution. The input picture size may be 512×512. A score threshold is applied to the image output by the network for binarization; for example, with a threshold of 0.5, positions with a pixel value of 1 are regarded as mass regions and positions with a value of 0 as background.
4. Fine-tune training is performed on the pre-trained neural network model using the labels mapped from the DBT image onto the MG image.
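A minimal PyTorch sketch of such a U-Net: four downsamplings, four bilinear upsamplings, two convolutions per resolution, and a sigmoid output thresholded at 0.5. The channel widths and the single-channel input are assumptions for illustration:

    import torch
    import torch.nn as nn

    def double_conv(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                             nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

    class UNet(nn.Module):
        """4 poolings down, 4 bilinear upsamplings up, two convs per resolution."""
        def __init__(self, cin=1, base=64):
            super().__init__()
            chs = [base * 2**i for i in range(5)]  # 64, 128, 256, 512, 1024 (assumed)
            self.downs = nn.ModuleList([double_conv(cin, chs[0])] +
                                       [double_conv(chs[i], chs[i + 1]) for i in range(4)])
            self.pool = nn.MaxPool2d(2)
            self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
            self.ups = nn.ModuleList([double_conv(chs[i + 1] + chs[i], chs[i])
                                      for i in range(4)])
            self.head = nn.Conv2d(chs[0], 1, 1)

        def forward(self, x):
            skips = []
            for i, block in enumerate(self.downs):
                x = block(x)
                if i < 4:                      # keep a skip connection, then downsample
                    skips.append(x)
                    x = self.pool(x)
            for i in reversed(range(4)):       # upsample and fuse with the skip
                x = self.ups[i](torch.cat([self.up(x), skips[i]], dim=1))
            return torch.sigmoid(self.head(x))  # per-pixel mass probability

    # Threshold 0.5: pixels at 1 are mass regions, pixels at 0 are background.
    mask = UNet()(torch.rand(1, 1, 512, 512)) > 0.5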
In some embodiments, constructing the corresponding pre-training model based on a neural network for performing a corresponding analysis on the region of interest of the breast base image comprises:
constructing, based on a third neural network, a third pre-training model for classifying the region of interest of the breast base image;
wherein the third neural network is configured to include at least:
a fully connected layer with a number of neurons; and
an output layer whose number of neurons is configured according to the classification task, the normalized probability values of all classes being obtained from the output layer to determine the classification result.
Specifically, based on the above description, the breast DBT image is again taken as the example; the implementation for ultrasound images is entirely analogous. Lesion labels are made on the breast DBT image and mapped onto the two-dimensional molybdenum target image, while the pre-training model of this embodiment is obtained by training directly on molybdenum target image labels. The specific steps can be implemented at least as follows (a sketch of the classification network follows step 4):
1. The three-dimensional breast DBT image is mapped onto the two-dimensional MG image and spatially registered with the two-dimensional breast molybdenum target image; the mass labels on the breast DBT are thereby mapped onto the two-dimensional breast molybdenum target image.
2. An image of a certain size is cropped from the edge of the breast molybdenum target image as the neural network input; for example, a rectangle with an aspect ratio of 1024:384 is cropped and scaled to 1024×384 as the network input size, with bilinear interpolation used for scaling. The left half of the image is cropped for the left breast and the right half for the right breast, ensuring that the input image contains the breast body.
3. The pre-training model adopts a model suitable for lesion classification, whose structure can be constructed as a modified ResNet18 residual neural network comprising at least a fully connected layer with a number of neurons, and an output layer whose number of neurons is configured per classification task, the normalized probability values of all classes being obtained from the output layer to determine the classification result. This embodiment aims to improve lesion classification: high-throughput quantitative features are extracted from the conventional modality (MG) and the privileged-information modality (DBT or ultrasound) through a deep neural network, and privileged-information learning classifiers, such as the LUPI version of the support vector machine (SVM+) and the LUPI version of the random vector functional link network (RVFL+), are fused to form a corresponding privileged-information deep learning classification algorithm suitable for MG breast cancer classification, realizing higher-precision intelligent analysis of breast images and outputting accurate image analysis results. Classification is defined according to the uniform clinical standards of breast image analysis and may correspond to the following classification tasks: a) pathological benign/malignant binary labels, making the task a pathological benign/malignant binary classification; b) multi-class BI-RADS labels judged manually by experts, making the task a BI-RADS multi-classification task; c) expert-judged BI-RADS categories converted into benign/malignant binary labels, for example with BI-RADS 3, 4a, or 4b as the boundary, making the task a BI-RADS benign/malignant binary classification; d) negative/positive labels of molecular pathology markers, such as Her2, ER, PR, and HR status, making the task a molecular-marker negative/positive binary classification. In this embodiment, the specific ResNet18 residual network may be configured with 32 neurons in the fully connected layer, while the number of neurons in the output layer L is set per task: 2 for the binary tasks (pathological benign/malignant, molecular-marker negative/positive, and BI-RADS binary), and 6 or 8 for the BI-RADS multi-classification task, which can be subdivided further according to actual conditions. The output layer yields the normalized probability value of each category, and the highest value gives the classification result.
4. Fine-tune training is performed on the pre-trained neural network model using the labels mapped from the DBT image onto the MG image.
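A sketch of the modified ResNet18 head under the stated configuration (32 fully connected neurons, task-dependent output size, softmax normalization). Whether and on what the backbone is pre-trained is left open here, and for training with nn.CrossEntropyLoss one would keep the logits and drop the Softmax:

    from torch import nn
    from torchvision.models import resnet18

    def make_roi_classifier(num_classes: int) -> nn.Module:
        """ResNet18 with a 32-neuron fully connected layer and a task-sized
        output layer; softmax yields normalized per-class probabilities and
        argmax the predicted class."""
        net = resnet18(weights=None)  # pre-training choice left open in this sketch
        net.fc = nn.Sequential(nn.Linear(net.fc.in_features, 32), nn.ReLU(inplace=True),
                               nn.Linear(32, num_classes), nn.Softmax(dim=1))
        return net

    benign_malignant = make_roi_classifier(2)  # pathological / molecular / BI-RADS binary
    birads_multi = make_roi_classifier(6)      # or 8, per the chosen BI-RADS subdivision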
Based on the foregoing embodiments in which pre-training models are constructed from the first, second, and third neural networks, the fine-tune training of the pre-trained neural network models may further be configured as follows: the number of data samples grabbed per training step (batch_size) can be set to 32; an adaptive moment estimation (Adam) optimizer is used with an exponentially decaying learning rate, initial learning rate 0.00001, decaying by a factor of 0.1 every 30000 steps, for 60000 training steps in total; the neural networks of the embodiments use a weight decay of 0.0005 to reduce overfitting; and hard example mining can be used to balance positive and negative samples in detection. Furthermore, non-maximum suppression is used for detection post-processing, and the Intersection over Union (IoU), the standard for measuring detection accuracy on a breast-image region-of-interest dataset, is selected as 0.5.
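A hedged sketch of this optimization recipe and the NMS post-processing, assuming a DataLoader with batch_size=32 that yields (inputs, targets) pairs, and leaving the loss function and hard example mining to the caller:

    import torch
    from torchvision.ops import nms

    def train_model(model, loader, loss_fn, total_steps=60000):
        """Adam, initial lr 1e-5 decayed by 0.1 every 30000 steps, weight
        decay 5e-4, as described in the paragraph above."""
        opt = torch.optim.Adam(model.parameters(), lr=1e-5, weight_decay=5e-4)
        sched = torch.optim.lr_scheduler.StepLR(opt, step_size=30000, gamma=0.1)
        model.train()
        step = 0
        while step < total_steps:
            for inputs, targets in loader:
                opt.zero_grad()
                loss_fn(model(inputs), targets).backward()
                opt.step()
                sched.step()
                step += 1
                if step >= total_steps:
                    return model
        return model

    def postprocess_detections(boxes, scores, iou_threshold=0.5):
        """Non-maximum suppression at the IoU threshold of 0.5 chosen above."""
        keep = nms(boxes, scores, iou_threshold)
        return boxes[keep], scores[keep]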
As one of the solutions of the present disclosure, as shown in Fig. 2 in combination with Fig. 3, the present disclosure further provides a breast image processing method, comprising:
S201: acquiring a breast base image;
S202: performing a corresponding analysis on the breast base image based on the deep learning model constructed by the deep learning model construction method described above;
S203: obtaining a corresponding analysis result for the region of interest of the breast base image.
Specifically, in this embodiment, for scenarios and institutions with only MG image scanning, an MG image is obtained by MG scanning. Based on the deep learning model constructed in the embodiments of the present disclosure, trained with MG+DBT or MG+ultrasound together, a more robust and accurate molybdenum target AI model is obtained, which also achieves higher lesion detection, segmentation, and classification accuracy in practical application scenarios (the testing stage) with only MG. Through the idea of multi-modal learning, this embodiment respects the accessibility of multi-modal data acquisition: the training stage uses an additionally acquired privileged-information modality (DBT or ultrasound), while the testing (inference) stage requires only the conventional modality (MG). Once trained, the model is applicable to medical scenarios with only MG, such as primary breast cancer screening and routine physical examination.
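A minimal sketch of this MG-only test-time flow, reusing the crop-and-rescale preprocessing described for training; the (C, H, W) tensor layout and function names are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def preprocess_mg(image: torch.Tensor, side: str) -> torch.Tensor:
        """Crop the breast-bearing half (left breast -> left half, right
        breast -> right half) and rescale bilinearly to the 1024x384 input."""
        w = image.shape[-1]
        half = image[..., : w // 2] if side == "left" else image[..., w // 2 :]
        return F.interpolate(half.unsqueeze(0), size=(1024, 384),
                             mode="bilinear", align_corners=False)

    @torch.no_grad()
    def analyze_mg(image: torch.Tensor, side: str, model: torch.nn.Module):
        """Test-time flow of S201-S203: a plain MG image in, region-of-interest
        analysis out; no DBT or ultrasound image is needed at this stage."""
        model.eval()
        return model(preprocess_mg(image, side))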
In some embodiments, performing the corresponding analysis on the breast base image in the breast image processing method of the present disclosure comprises at least one of:
identifying a region of interest of the breast base image;
delineating a region of interest of the breast base image;
classifying the region of interest of the breast base image.
Specifically, one of the inventive concepts of the present disclosure is to label, according to a breast reference image, a region of interest of a breast base image whose imaging modality differs from that of the breast reference image; to construct, based on a neural network, a corresponding pre-training model for performing a corresponding analysis on the region of interest of the breast base image; and to train the pre-training model according to the region-of-interest labels of the breast base image to obtain the deep learning model, thereby improving the classification accuracy of the molybdenum target AI across the training stage (model building) and the testing stage (inference, practical application). In this method, an additional modality (DBT or ultrasound) is acquired during the training stage of the molybdenum target AI model, and MG+DBT or MG+ultrasound are used together during training to obtain a more robust and accurate molybdenum target AI model, which also achieves higher region of interest (ROI) detection, segmentation, and classification accuracy in practical application scenarios (the testing stage) with only MG. Using an additional modality in the training stage but not in the testing stage is known as learning using privileged information (LUPI); combined with deep learning it forms privileged-information deep learning. DBT or ultrasound serves as privileged information in MG classification model training, improving analysis precision and performance on MG images while respecting the accessibility of multi-modal data acquisition. The training stage uses an additionally acquired privileged-information modality (DBT or ultrasound), while the testing (inference) stage needs only the conventional modality (MG), making the approach applicable to MG-only medical scenarios.
As one of the solutions of the present disclosure, a computer-readable storage medium is further provided, on which computer-executable instructions are stored; when executed by a processor, the instructions mainly implement the deep learning model construction method described above, at least comprising:
labeling, according to a breast reference image, a region of interest of a breast base image whose imaging modality differs from that of the breast reference image;
constructing, based on a neural network, a corresponding pre-training model for performing a corresponding analysis on the region of interest of the breast base image; and
training the pre-training model according to the region-of-interest labels of the breast base image to obtain the deep learning model.
As one of the aspects of the present disclosure, a computer-readable storage medium is also provided, having computer-executable instructions stored thereon which, when executed by a processor, mainly implement the breast image processing method described above, at least comprising:
acquiring a breast base image;
performing a corresponding analysis on the breast base image based on the deep learning model constructed by the deep learning model construction method; and
obtaining a corresponding analysis result for the region of interest of the breast base image.
In some embodiments, a processor executing computer-executable instructions may be a processing device including more than one general-purpose processing device, such as a microprocessor, Central Processing Unit (CPU), Graphics Processing Unit (GPU), or the like. More specifically, the processor may be a Complex Instruction Set Computing (CISC) microprocessor, Reduced Instruction Set Computing (RISC) microprocessor, Very Long Instruction Word (VLIW) microprocessor, processor running other instruction sets, or processors running a combination of instruction sets. The processor may also be one or more special-purpose processing devices such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), a system on a chip (SoC), or the like.
In some embodiments, the computer-readable storage medium may be a memory, such as a read-only memory (ROM), a random-access memory (RAM), a phase-change random-access memory (PRAM), a static random-access memory (SRAM), a dynamic random-access memory (DRAM), an electrically erasable programmable read-only memory (EEPROM), other types of random-access memory (RAM), a flash disk or other form of flash memory, a cache, a register, a static memory, a compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD) or other optical storage, a tape cartridge or other magnetic storage device, or any other potentially non-transitory medium that may be used to store information or instructions that may be accessed by a computer device, and so forth.
In some embodiments, the computer-executable instructions may be implemented as a plurality of program modules that collectively implement the methods according to any one of the embodiments of the present disclosure.
The present disclosure describes various operations or functions that may be implemented as or defined as software code or instructions. Such operations or functions may be implemented as software code or instruction modules stored in a memory which, when executed by a processor, implement the corresponding steps and methods.
Such content may be directly executable ("object" or "executable" form) source code or differential code ("delta" or "patch" code). A software implementation of the embodiments described herein may be provided through an article of manufacture having code or instructions stored thereon, or through a method of operating a communication interface to transmit data through the communication interface. A machine or computer-readable storage medium may cause a machine to perform the functions or operations described, and includes any mechanism for storing information in a form accessible by a machine (e.g., a computing device, an electronic system, etc.), such as recordable/non-recordable media (e.g., read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). The communication interface includes any mechanism for interfacing with a hardwired, wireless, optical, or other medium to communicate with other devices, such as a memory bus interface, a processor bus interface, an internet connection, a disk controller, and the like. The communication interface may be configured by providing configuration parameters and/or transmitting signals to prepare it to provide data signals describing the software content, and may be accessed by sending one or more commands or signals to it.
The computer-executable instructions of embodiments of the present disclosure may be organized into one or more computer-executable components or modules. Aspects of the disclosure may be implemented with any number and combination of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other. For example, other embodiments may be used by those of ordinary skill in the art upon reading the above description. In addition, in the foregoing detailed description, various features may be grouped together to streamline the disclosure. This should not be interpreted as an intention that a disclosed feature not claimed is essential to any claim. Rather, the subject matter of the present disclosure may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with each other in various combinations or permutations. The scope of the disclosure should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are merely exemplary embodiments of the present disclosure, which is not intended to limit the present disclosure, and the scope of the present disclosure is defined by the claims. Various modifications and equivalents of the disclosure may occur to those skilled in the art within the spirit and scope of the disclosure, and such modifications and equivalents are considered to be within the scope of the disclosure.

Claims (7)

1. A deep learning model construction method, comprising:
labeling a region of interest of a breast base image according to a breast reference image;
constructing, based on a neural network, a corresponding pre-training model for performing a corresponding analysis on the region of interest of the breast base image; and
training the pre-training model according to the region-of-interest labels of the breast base image to obtain the deep learning model;
wherein the breast reference image and the breast base image have different imaging modalities;
the breast base image comprises a breast molybdenum target image;
the breast reference image comprises a digital breast tomosynthesis image or an ultrasound breast image;
in a case where the digital breast tomosynthesis image is a first breast reference image, labeling the region of interest of the breast base image according to the breast reference image comprises:
selecting a specific image layer of the first breast reference image;
extracting a region of interest of the specific image layer;
mapping the region of interest of the specific image layer onto the breast base image; and
labeling the region of interest of the breast base image by box selection and/or delineation;
in a case where the ultrasound breast image is a second breast reference image, labeling the region of interest of the breast base image according to the breast reference image comprises:
extracting image parameters of the region of interest labeled on the second breast reference image according to a scanning mode of the second breast reference image;
mapping the region of interest labeled on the second breast reference image onto the breast base image; and
labeling the region of interest of the breast base image by box selection and/or delineation.
2. The method according to claim 1, wherein constructing the corresponding pre-training model based on a neural network for performing a corresponding analysis on the region of interest of the breast base image comprises:
constructing, based on a first neural network, a first pre-training model for identifying the region of interest of the breast base image;
wherein the first neural network is configured to include at least:
a feature extraction structure configured as a VGG-like network structure comprising a plurality of convolutional layers and pooling layers;
a region selection structure configured to select regions through a plurality of preset reference frames of different size ratios; and
a region pooling structure configured to pool the selected regions to the same size.
3. The method according to claim 1, wherein constructing the corresponding pre-training model based on a neural network for performing a corresponding analysis on the region of interest of the breast base image comprises:
constructing, based on a second neural network, a second pre-training model for delineating the region of interest of the breast base image;
wherein the second neural network is configured to include at least:
max pooling layers configured to downsample the convolved and pooled features several times; and
deconvolution layers configured to upsample several times;
wherein the numbers of downsampling and upsampling operations are the same.
4. The method according to claim 1, wherein constructing, based on a neural network, a corresponding pre-training model for performing a corresponding analysis on the region of interest of the breast base image comprises:
constructing, based on a third neural network, a third pre-training model for classifying the region of interest of the breast base image;
wherein the third neural network is configured to comprise at least:
a fully-connected layer having a plurality of neurons;
and an output layer whose number of neurons is configured according to the classification task, normalized probability values of the classes being obtained from the output layer to determine the classification result.
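Claim 4's classifier can be read as a small fully-connected head over CNN features. A sketch under that assumption; the feature dimension, hidden width, and two-class task are hypothetical:

```python
# A sketch of one plausible reading of claim 4: a fully-connected layer
# followed by an output layer sized per task (here 2 classes, e.g.
# benign vs. malignant), with softmax yielding the normalized per-class
# probabilities from which the result is taken.
import torch
import torch.nn as nn

def make_classifier(feature_dim: int, num_classes: int) -> nn.Module:
    return nn.Sequential(
        nn.Linear(feature_dim, 256),   # fully-connected layer
        nn.ReLU(inplace=True),
        nn.Linear(256, num_classes))   # output layer, sized by the task

classifier = make_classifier(feature_dim=1024, num_classes=2)
logits = classifier(torch.rand(4, 1024))
probs = torch.softmax(logits, dim=1)   # normalized probability per class
print(probs.argmax(dim=1))             # classification result per sample
```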
5. A breast image processing method, comprising:
acquiring a breast base image;
performing a corresponding analysis on the breast base image based on a deep learning model constructed according to the method of any one of claims 1 to 4;
and obtaining a corresponding analysis result for the region of interest of the breast base image.
6. The method according to claim 5, wherein performing a corresponding analysis on the breast base image comprises at least one of:
identifying a region of interest of the breast base image;
delineating a region of interest of the breast base image;
classifying a region of interest of the breast base image.
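Claims 5 and 6 apply the constructed models to a newly acquired base image. An illustrative sketch of how the three hypothetical models sketched under claims 2 to 4 could be chained; the resize sizes and dictionary interface are assumptions, not the patent's actual pipeline:

```python
# Chain the sketched models: identify ROIs in the base image, delineate
# each ROI, then classify it. All interfaces here are assumptions.
import torch
import torch.nn.functional as F

def analyze(base_image, detector, segmenter, classifier):
    """base_image: (1, H, W) float tensor; models assumed in eval mode."""
    results = []
    with torch.no_grad():
        # Identification: the detector expects a 3-channel image.
        dets = detector([base_image.expand(3, -1, -1)])[0]
        for box, score in zip(dets["boxes"], dets["scores"]):
            x0, y0, x1, y1 = box.round().int().tolist()
            roi = base_image[:, y0:y1, x0:x1].unsqueeze(0)
            # Delineation: per-pixel mask of the lesion inside the ROI.
            mask = torch.sigmoid(segmenter(F.interpolate(roi, size=(256, 256))))
            # Classification: resize + flatten to match the FC head above.
            flat = F.interpolate(roi, size=(32, 32)).flatten(1)
            probs = torch.softmax(classifier(flat), dim=1)
            results.append({"box": box, "score": score,
                            "mask": mask, "probs": probs})
    return results
```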
7. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement:
the deep learning model construction method according to any one of claims 1 to 4; or
the breast image processing method according to claim 5 or 6.
CN202011230732.1A 2020-11-06 2020-11-06 Deep learning model construction method, image processing method and readable storage medium Active CN112348082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011230732.1A CN112348082B (en) 2020-11-06 2020-11-06 Deep learning model construction method, image processing method and readable storage medium

Publications (2)

Publication Number Publication Date
CN112348082A (en) 2021-02-09
CN112348082B (en) 2021-11-09

Family

ID=74429517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011230732.1A Active CN112348082B (en) 2020-11-06 2020-11-06 Deep learning model construction method, image processing method and readable storage medium

Country Status (1)

Country Link
CN (1) CN112348082B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591852B (en) * 2021-08-09 2022-08-23 数坤(北京)网络科技股份有限公司 Method and device for marking region of interest
CN113658151B (en) * 2021-08-24 2023-11-24 泰安市中心医院 Mammary gland lesion magnetic resonance image classification method, device and readable storage medium
CN114120452A (en) * 2021-09-02 2022-03-01 北京百度网讯科技有限公司 Living body detection model training method and device, electronic equipment and storage medium
CN114782676B (en) * 2022-04-02 2023-01-06 北京广播电视台 Method and system for extracting region of interest of video
CN115132357B (en) * 2022-08-30 2022-12-23 深圳大学总医院 Device for predicting target disease index state based on medical image map
CN116630680B (en) * 2023-04-06 2024-02-06 南方医科大学南方医院 Dual-mode image classification method and system combining X-ray photography and ultrasound

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657984A (en) * 2015-01-28 2015-05-27 复旦大学 Automatic extraction method of three-dimensional breast full-volume image regions of interest
CN109146848A (en) * 2018-07-23 2019-01-04 东北大学 A kind of area of computer aided frame of reference and method merging multi-modal galactophore image
CN109447065A (en) * 2018-10-16 2019-03-08 杭州依图医疗技术有限公司 A kind of method and device of breast image identification
CN111428709A (en) * 2020-03-13 2020-07-17 平安科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN111739033A (en) * 2020-06-22 2020-10-02 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Method for establishing breast molybdenum target and MR image omics model based on machine learning

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592137B (en) * 2011-12-27 2014-07-02 中国科学院深圳先进技术研究院 Multi-modality image registration method and operation navigation method based on multi-modality image registration
GB201615051D0 (en) * 2016-09-05 2016-10-19 Kheiron Medical Tech Ltd Multi-modal medical image procesing
US10751548B2 (en) * 2017-07-28 2020-08-25 Elekta, Inc. Automated image segmentation using DCNN such as for radiation therapy
CN108464840B (en) * 2017-12-26 2021-10-19 安徽科大讯飞医疗信息技术有限公司 Automatic detection method and system for breast lumps
CN108765387A (en) * 2018-05-17 2018-11-06 杭州电子科技大学 Based on Faster RCNN mammary gland DBT image lump automatic testing methods
CN110298345A (en) * 2019-07-05 2019-10-01 福州大学 A kind of area-of-interest automatic marking method of medical images data sets
CN110570419A (en) * 2019-09-12 2019-12-13 杭州依图医疗技术有限公司 Method and device for acquiring characteristic information and storage medium
CN110930367B (en) * 2019-10-31 2022-12-20 上海交通大学 Multi-modal ultrasound image classification method and breast cancer diagnosis device
CN111583320B (en) * 2020-03-17 2023-04-07 哈尔滨医科大学 Breast cancer ultrasonic image typing method and system fusing deep convolutional network and image omics characteristics and storage medium
CN111860116B (en) * 2020-06-03 2022-08-26 南京邮电大学 Scene identification method based on deep learning and privilege information

Also Published As

Publication number Publication date
CN112348082A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
CN112348082B (en) Deep learning model construction method, image processing method and readable storage medium
US10769791B2 (en) Systems and methods for cross-modality image segmentation
US10867384B2 (en) System and method for automatically detecting a target object from a 3D image
CN111445478B (en) Automatic intracranial aneurysm region detection system and detection method for CTA image
Cao et al. A novel attention-guided convolutional network for the detection of abnormal cervical cells in cervical cancer screening
CN109003267B (en) Computer-implemented method and system for automatically detecting target object from 3D image
CN107451615A (en) Thyroid papillary carcinoma Ultrasound Image Recognition Method and system based on Faster RCNN
CN111028206A (en) Prostate cancer automatic detection and classification system based on deep learning
Ranjbarzadeh et al. A deep learning approach for robust, multi-oriented, and curved text detection
CN111784628A (en) End-to-end colorectal polyp image segmentation method based on effective learning
CN109949304B (en) Training and acquiring method of image detection learning network, image detection device and medium
CN110400302B (en) Method and device for determining and displaying focus information in breast image
CN114758137B (en) Ultrasonic image segmentation method and device and computer readable storage medium
CN112215217B (en) Digital image recognition method and device for simulating doctor to read film
CN116188479B (en) Hip joint image segmentation method and system based on deep learning
WO2022150554A1 (en) Quantification of conditions on biomedical images across staining modalities using a multi-task deep learning framework
Tan et al. A lightweight network guided with differential matched filtering for retinal vessel segmentation
CN113989269B (en) Traditional Chinese medicine tongue image tooth trace automatic detection method based on convolutional neural network multi-scale feature fusion
CN113379691B (en) Breast lesion deep learning segmentation method based on prior guidance
CN113379770A (en) Nasopharyngeal carcinoma MR image segmentation network construction method, image segmentation method and device
KR20200041773A (en) Apparatus for compansating cancer region information and method for the same
KR20200041772A (en) Apparatus for evaluating state of cancer region and method for the same
CN113408596B (en) Pathological image processing method and device, electronic equipment and readable storage medium
Tao et al. Anatomical Structure-Aware Pulmonary Nodule Detection via Parallel Multi-task RoI Head
CN117392468B (en) Cancer pathology image classification system, medium and equipment based on multi-example learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220324

Address after: 100080 zone a, 21 / F, block a, No. 8, Haidian Street, Haidian District, Beijing

Patentee after: BEIJING SHENRUI BOLIAN TECHNOLOGY Co.,Ltd.

Patentee after: Hangzhou Shenrui Bolian Technology Co., Ltd

Address before: Unit 06 and 07, 23 / F, 523 Loushanguan Road, Changning District, Shanghai

Patentee before: SHANGHAI YIZHI MEDICAL TECHNOLOGY Co.,Ltd.
