CN111768366A - Ultrasonic imaging system, BI-RADS classification method and model training method - Google Patents

Ultrasonic imaging system, BI-RADS classification method and model training method

Info

Publication number
CN111768366A
CN111768366A (application number CN202010431222.4A)
Authority
CN
China
Prior art keywords
feature map
rads
classification
feature
breast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010431222.4A
Other languages
Chinese (zh)
Inventor
安兴 (An Xing)
丛龙飞 (Cong Longfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority to CN202010431222.4A
Publication of CN111768366A
Legal status: Pending (current)

Classifications

    • G06T 7/0012: Biomedical image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • A61B 8/0825: Detecting organic movements or changes, e.g. tumours, cysts, swellings, for diagnosis of the breast, e.g. mammography (under A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves)
    • A61B 8/5215: Devices using data or image processing specially adapted for diagnosis using ultrasonic waves, involving processing of medical diagnostic data (under A61B 8/52)
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting (under G06F 18/00 Pattern recognition)
    • G06F 18/24: Classification techniques (under G06F 18/00 Pattern recognition)
    • G06F 18/253: Fusion techniques of extracted features (under G06F 18/25 Fusion techniques)
    • G06T 2207/10132: Ultrasound image (image acquisition modality)
    • G06T 2207/20081: Training; learning (special algorithmic details)
    • G06T 2207/20084: Artificial neural networks [ANN] (special algorithmic details)
    • G06T 2207/30068: Mammography; breast (subject of image)
    • G06V 2201/03: Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention provides an ultrasound imaging system, a BI-RADS classification method and a model training method. The ultrasound imaging system comprises: an ultrasonic probe; a transmitting/receiving circuit; a processor configured to perform feature map extraction on a breast ultrasound image based on one or more first feature map extractors to obtain one or more first feature maps relating to breast lesion features, the breast lesion features including BI-RADS features, to perform feature map extraction on the breast ultrasound image based on a second feature map extractor to obtain a second feature map relating to BI-RADS classification, and to classify the first feature maps and the second feature map based on a first classification model to obtain a BI-RADS classification result; and an output device for outputting the BI-RADS classification result. The method eliminates differences between features and improves the accuracy of BI-RADS classification.

Description

Ultrasonic imaging system, BI-RADS classification method and model training method
Technical Field
The invention relates to the technical field of ultrasonic imaging, in particular to an ultrasonic imaging system, a BI-RADS classification method and a model training method.
Background
The incidence of breast cancer rises year by year with socioeconomic development and changing lifestyles, so early diagnosis and early treatment of breast cancer are of great significance. Breast ultrasound examination is non-invasive, fast, highly repeatable and free of radiation, and can clearly display the position, morphology and internal structure of each layer of breast soft tissue, lesions within the soft tissue, and changes in adjacent tissues. It has therefore become an important modality for routine examination and early screening of breast cancer.
On the other hand, the manifestations of breast lesions are complex, and diagnosis is limited by the clinician's experience and is somewhat subjective. BI-RADS (Breast Imaging Reporting and Data System), proposed by the American College of Radiology (ACR), is currently the most widely used assessment standard for breast ultrasound diagnosis. BI-RADS provides standardized diagnostic reporting for all imaging findings, normal and abnormal, of the breast as a whole organ, using uniform terminology, standard diagnostic categories and examination procedures.
The BI-RADS diagnostic rules are numerous and complicated, which makes them difficult to remember for junior physicians or doctors in primary hospitals and affects diagnostic accuracy. With the rapid development of artificial intelligence and image processing technology, applying computer-aided diagnosis to intelligent analysis of breast ultrasound has become feasible, providing an automatic, objective and high-precision auxiliary diagnostic means for clinical diagnosis. Improving the accuracy of computer-aided diagnosis therefore has great clinical value.
Disclosure of Invention
This summary introduces concepts in a simplified form that are further described in the detailed description below. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In view of the deficiencies of the prior art, a first aspect of embodiments of the present invention provides an ultrasound imaging system, comprising:
an ultrasonic probe;
the transmitting/receiving circuit is configured to excite the ultrasonic probe to transmit ultrasonic waves to a breast region of a subject and to receive ultrasonic echoes returned from the breast region, so as to obtain ultrasonic echo data;
a processor configured to:
process the ultrasonic echo data to obtain a breast ultrasound image of the breast region;
perform feature map extraction on the breast ultrasound image based on one or more pre-trained first feature map extractors to obtain one or more first feature maps relating to breast lesion features, the breast lesion features including BI-RADS features;
perform feature map extraction on the breast ultrasound image based on a pre-trained second feature map extractor to obtain a second feature map relating to BI-RADS classification;
classify the obtained one or more first feature maps and the obtained second feature map based on a pre-trained first classification model to obtain a BI-RADS classification result;
and an output device configured to output the BI-RADS classification result.
In one embodiment, classifying the obtained one or more first feature maps and the second feature map based on the pre-trained first classification model to obtain the BI-RADS classification result includes:
fusing the one or more first feature maps with the second feature map to obtain a fused feature map, and classifying the fused feature map based on the pre-trained first classification model to obtain the BI-RADS classification result.
In one embodiment, the processor is further configured to extract a region of interest in the breast ultrasound image, and the performing feature map extraction on the breast ultrasound image includes performing feature map extraction on the region of interest in the breast ultrasound image.
In one embodiment, the processor is further configured to:
classifying the obtained one or more first feature maps and the second feature map based on a pre-trained second classification model to obtain a BI-RADS feature result.
The second aspect of the embodiments of the present invention provides a BI-RADS classification method, including:
obtaining a breast ultrasound image of a breast region to be examined;
performing feature map extraction on the breast ultrasound image based on one or more pre-trained first feature map extractors to obtain one or more first feature maps relating to breast lesion features, the breast lesion features including BI-RADS features;
performing feature map extraction on the breast ultrasound image based on a pre-trained second feature map extractor to obtain a second feature map relating to BI-RADS classification;
classifying the obtained one or more first feature maps and the obtained second feature map based on a pre-trained first classification model to obtain a BI-RADS classification result;
and outputting the BI-RADS classification result.
In one embodiment, classifying the obtained one or more first feature maps and the second feature map based on the pre-trained first classification model to obtain the BI-RADS classification result includes:
fusing the one or more first feature maps with the second feature map to obtain a fused feature map, and classifying the fused feature map based on the pre-trained first classification model to obtain the BI-RADS classification result.
In one embodiment, the method further comprises extracting a region of interest in the breast ultrasound image, and the performing feature map extraction on the breast ultrasound image comprises performing feature map extraction on the region of interest in the breast ultrasound image.
In one embodiment, the method further comprises:
classifying the obtained one or more first feature maps and the second feature map based on a pre-trained second classification model to obtain a BI-RADS feature result.
A third aspect of embodiments of the present invention provides an ultrasound imaging system, including:
an ultrasonic probe;
the transmitting/receiving circuit is configured to excite the ultrasonic probe to transmit ultrasonic waves to a target region of a subject and to receive ultrasonic echoes returned from the target region, so as to obtain ultrasonic echo data;
a processor configured to:
process the ultrasonic echo data to obtain an ultrasound image of the target region;
perform feature map extraction on the ultrasound image based on one or more pre-trained first feature map extractors to obtain one or more first feature maps relating to lesion features;
perform feature map extraction on the ultrasound image based on a pre-trained second feature map extractor to obtain a second feature map relating to lesion grading;
fuse the one or more first feature maps with the second feature map to obtain a fused feature map;
classify the obtained one or more first feature maps and the second feature map based on a pre-trained classification model to obtain a lesion grading result;
and an output device configured to output the lesion grading result.
In one embodiment, the lesion features include at least one of: edge, internal echo, calcification, shape, direction, posterior echo, composition, or blood flow characteristics.
A fourth aspect of the embodiments of the present invention provides a grading method, the method including:
acquiring an ultrasound image of a target region of a subject;
performing feature map extraction on the ultrasound image based on one or more pre-trained first feature map extractors to obtain one or more first feature maps relating to lesion features;
performing feature map extraction on the ultrasound image based on a pre-trained second feature map extractor to obtain a second feature map relating to lesion grading;
classifying the obtained one or more first feature maps and the second feature map based on a pre-trained classification model to obtain a lesion grading result;
and outputting the lesion grading result.
In one embodiment, the lesion features include at least one of: edge, internal echo, calcification, shape, direction, posterior echo, composition, or blood flow characteristics.
A fifth aspect of the embodiments of the present invention provides a model training method, where the method includes:
training a first feature map extractor for one or more breast lesion features, the breast lesion features comprising BI-RADS features;
training a second feature map extractor for the BI-RADS classification;
extracting a first feature map of the sample breast ultrasound image by using the first feature map extractor, and extracting a second feature map of the sample breast ultrasound image by using the second feature map extractor;
training a classification model for outputting BI-RADS classification results based on the first feature map and the second feature map.
In one embodiment, the training of the classification model for outputting the BI-RADS classification result based on the first feature map and the second feature map includes:
fusing the first feature map with the second feature map to obtain a fused feature map, and training the classification model for outputting the BI-RADS classification result based on the fused feature map.
In one embodiment, the sample breast ultrasound image includes one or more of a global sample breast ultrasound image, a global sample breast ultrasound image labeled with a region of interest, or a region of interest extracted from a global sample breast ultrasound image.
In one embodiment, the method further comprises:
training a second classification model for outputting BI-RADS feature results based on the first feature map and the second feature map.
In one embodiment, the training of the first feature map extractor for one or more breast lesion features includes:
inputting sample breast ultrasound images with pre-labeled breast lesion feature classes into a classification model to be trained as training samples, and optimizing the parameters of the classification model so as to reduce the error between the breast lesion feature classification result output by the classification model and the pre-labeled breast lesion feature classes, until the classification model meets a preset condition, thereby obtaining a breast lesion feature classification model;
and retaining the front-end feature map extraction network of the breast lesion feature classification model as the first feature map extractor.
In one embodiment, the training of the second feature map extractor for BI-RADS classification includes:
inputting sample breast ultrasound images with pre-labeled BI-RADS categories into a classification model to be trained as training samples, and optimizing the parameters of the classification model so as to reduce the error between the BI-RADS classification result output by the classification model and the pre-labeled BI-RADS categories, until the classification model meets a preset condition, thereby obtaining a BI-RADS classification model;
and retaining the front-end feature map extraction network of the BI-RADS classification model as the second feature map extractor.
In one embodiment, the classification model comprises a machine learning classifier or a deep learning neural network model.
A sixth aspect of the embodiments of the present invention provides a model training method, where the method includes:
training a respective first feature map extractor for each of one or more lesion features;
training a second feature map extractor for lesion grading;
extracting a first feature map of the sample ultrasonic image by using the first feature map extractor, and extracting a second feature map of the sample ultrasonic image by using the second feature map extractor;
training a classification model for outputting a lesion ranking result based on the first feature map and the second feature map.
A seventh aspect of the embodiments of the present invention provides an electronic device, which includes a memory and a processor, where the memory stores a computer program executable by the processor, and the computer program, when executed by the processor, performs the steps of the above model training method.
An eighth aspect of the embodiments of the present invention provides a computer storage medium, on which a computer program is stored, where the computer program, when executed by a computer or a processor, implements the steps of the above-mentioned model training method.
In the ultrasound imaging system, the BI-RADS classification method and the model training method according to embodiments of the present invention, the feature maps relating to lesion features and the feature map relating to the grading result are used together as the input feature maps of the classification model. This improves the accuracy of lesion grading, eliminates differences between features, and avoids the problem of inconsistent dimensions.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The following drawings show only some embodiments of the present invention; those skilled in the art can obtain other drawings based on them without inventive effort.
In the drawings:
FIG. 1 shows a schematic block diagram of an ultrasound imaging system according to an embodiment of the present invention;
FIG. 2 shows a schematic flow diagram of a BI-RADS classification method according to an embodiment of the present invention;
FIG. 3 shows a schematic block diagram of an ultrasound imaging system according to another embodiment of the invention;
FIG. 4 shows a schematic flow diagram of a grading method according to an embodiment of the invention;
FIG. 5 shows a schematic flow diagram of a model training method according to one embodiment of the invention;
FIG. 6 shows a schematic flow diagram of a model training method according to another embodiment of the invention;
FIG. 7 shows a schematic block diagram of an electronic device according to one embodiment of the invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments of the present invention are described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present invention, and that the invention is not limited to the example embodiments described herein. All other embodiments obtained by a person skilled in the art based on the embodiments described herein without inventive effort shall fall within the scope of protection of the present invention.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to provide a thorough understanding of the present invention, a detailed structure will be set forth in the following description in order to explain the present invention. Alternative embodiments of the invention are described in detail below, however, the invention may be practiced in other embodiments that depart from these specific details.
To improve the accuracy of BI-RADS classification of breast ultrasound images based on machine learning techniques, one solution is to optimize the performance of a classifier by feature fusion, for example by fusing feature vectors of a plurality of BI-RADS features obtained by encoding the breast ultrasound image with a global feature map extracted from the breast ultrasound image, and then using the fused features to optimize the classifier or to predict the BI-RADS category. However, directly concatenating the feature vectors of the BI-RADS features (for example, encoding the calcification feature as 0 = no calcification, 1 = intraductal calcification, 2 = intra-mass calcification, 3 = extra-mass calcification) with the global feature map of the breast ultrasound image suffers from mismatched data types and inconsistent dimensions, which degrades the performance of the algorithm and the accuracy of the result.
In view of this, embodiments of the present invention provide an ultrasonic imaging system, a BI-RADS classification method and a model training method. An ultrasound imaging system according to an embodiment of the present invention is first described below with reference to fig. 1, which shows a schematic block diagram of an ultrasound imaging system 100 according to an embodiment of the present invention.
As shown in fig. 1, the ultrasound imaging system 100 includes an ultrasound probe 110, transmit/receive circuitry 112, a processor 114, and an output device 116. Further, the ultrasound imaging system 100 may further include a beam forming circuit, a transmission/reception selection switch, and the like.
The ultrasound probe 110 typically includes an array of transducer elements. For each transmission, all or some of the elements of the ultrasound probe 110 participate in transmitting the ultrasound wave. Each participating element is excited by a transmit pulse and emits an ultrasound wave, and the waves emitted by the individual elements superpose during propagation to form a synthesized ultrasound beam directed at the breast region of the subject.
The transmit/receive circuit 112 may be connected to the ultrasound probe 110 through a transmit/receive selection switch. The transmit/receive selection switch, which may also be referred to as a transmit/receive controller, may include a transmit controller and a receive controller: the transmit controller excites the ultrasound probe 110 via the transmit circuit to transmit ultrasound waves to the breast region of the subject, and the receive controller receives, via the receive circuit, the ultrasound echoes returned from the breast and picked up by the ultrasound probe 110, thereby obtaining ultrasound echo data. The transmit/receive circuit 112 then sends the electrical echo signals to the beamforming circuit, which applies focusing delay, weighting, channel summation and the like, and sends the processed ultrasound echo data to the processor 114.
Optionally, the processor 114 may be implemented in software, hardware, firmware or any combination thereof, and may use circuits, one or more application specific integrated circuits (ASICs), one or more general purpose integrated circuits, one or more microprocessors, one or more programmable logic devices, or any combination of the foregoing circuits and/or devices, or other suitable circuits or devices, so that the processor 114 can perform the corresponding steps of the methods in the various embodiments of this specification. The processor 114 may also control other components in the ultrasound imaging system 100 to perform desired functions.
The processor 114 processes the received ultrasound echo data to obtain a breast ultrasound image of the breast region of the object. As an example, the processor 114 may perform different processing on the ultrasound echo data acquired by the transmit/receive circuitry 112 according to the imaging mode desired by the user to obtain breast ultrasound images of different modes. The ultrasound image of the breast obtained by the processor 114 may be stored in a memory. Also, the ultrasound image of the breast obtained by the processor 114 can be displayed on a display.
After obtaining the breast ultrasound image, the processor 114 is further configured to classify the breast ultrasound image to obtain a BI-RADS classification result.
In some embodiments, the processor 114 performs classification based on the global breast ultrasound image to obtain the BI-RADS classification result; detection of the region of interest can be performed at the same time as the global image is classified. In other embodiments, the processor 114 may extract a region of interest (ROI) from the breast ultrasound image and perform classification based on the ROI to obtain the BI-RADS classification result. The ROI generally contains the lesion region in the breast ultrasound image and may be, for example, a rectangular region enclosing the lesion. A breast lesion is a diseased area of the breast, for example an inflammatory lesion or a tumor lesion; the type of lesion is not limited by the embodiments of the present invention.
In one example, the processor 114 may automatically determine the location of the ROI on the breast ultrasound image based on a machine recognition algorithm; for example, the ROI may be extracted by the processor 114 based on a deep learning ROI detection model, a machine learning ROI detection model, or a conventional image processing method.
Illustratively, the deep learning ROI detection model may use a neural network such as, but not limited to, RCNN, Fast-RCNN, SSD or YOLO. The neural network may be trained in advance on collected ultrasound image data with lesion regions annotated by senior physicians: at each training iteration the error between the detected ROI and the annotated ROI is computed, and the network weights are updated to minimize this error until the detection result approaches the ground-truth ROI, yielding the trained deep learning ROI detection model. The trained model can then automatically detect and extract the ROI from an input breast ultrasound image.
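As a minimal illustration of such a detector (a generic pretrained model standing in for the ROI detection network described above, not the patent's actual implementation), the sketch below runs a torchvision Faster R-CNN on a breast ultrasound image and takes the highest-scoring box as the ROI; in practice the detector would first be fine-tuned on annotated breast ultrasound data, and the file name is hypothetical.

```python
# Sketch only: a generic pretrained detector as a stand-in for the lesion/ROI
# detection network; a real system would be trained on annotated breast
# ultrasound images rather than using COCO weights.
import torch
import torchvision                                   # requires torchvision >= 0.13 for the weights argument
from torchvision.transforms.functional import to_tensor
from PIL import Image

def detect_roi(image_path, score_threshold=0.5):
    """Return the highest-scoring bounding box [x1, y1, x2, y2] as the ROI, or None."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]               # dict with "boxes", "labels", "scores"
    keep = prediction["scores"] >= score_threshold
    boxes = prediction["boxes"][keep]
    if len(boxes) == 0:
        return None                                  # no ROI found; fall back to the global image
    return boxes[0].tolist()                         # detections are sorted by descending score

# roi = detect_roi("breast_ultrasound.png")          # hypothetical file name
```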
As another implementation, the processor 114 may also extract the ROI automatically by combining conventional image processing with machine learning. Specifically, candidate regions in the breast ultrasound image are first proposed with a conventional image processing method (for example, a selective search algorithm), the candidate regions are then resized to a fixed size, and feature vectors describing gradients, textures and the like are extracted using image processing operators such as SIFT, HoG or the GLCM gray-level co-occurrence matrix. A classification model for the candidate boxes is then trained on these feature vectors using a traditional machine learning algorithm, and finally the bounding box of the ROI is obtained by regression.
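The following sketch illustrates only the classification step of this hand-crafted-feature route under stated assumptions: candidate boxes are assumed to have been proposed already (for example by selective search), the HOG descriptor stands in for the SIFT/HoG/GLCM features mentioned above, and the final bounding-box regression is omitted.

```python
# Sketch only: classify pre-computed candidate regions as lesion / background
# using hand-crafted features (HOG) and a classical SVM, as outlined above.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def region_features(gray_image, box, size=(64, 64)):
    """Crop a candidate box, resize it to a fixed size and extract a HOG vector."""
    x1, y1, x2, y2 = box
    patch = resize(gray_image[y1:y2, x1:x2], size)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_region_classifier(gray_images, boxes, labels):
    """gray_images/boxes/labels are assumed to come from annotated sample images."""
    X = np.stack([region_features(img, b) for img, b in zip(gray_images, boxes)])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, labels)                    # labels: 1 = lesion region, 0 = background
    return clf
```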
In another machine-learning-based automatic ROI detection method, machine learning models such as SVM, K-means or C-means are trained on collected breast ultrasound images and annotated ROI regions. The pre-trained machine learning ROI detection model then performs a binary classification on the gray value or texture value of each pixel in the breast ultrasound image to decide whether the pixel belongs to the ROI, thereby achieving automatic segmentation and extraction of the ROI.
Several exemplary automatic ROI extraction methods are described above. In other implementations, the ROI may be determined manually by the user in the breast ultrasound image, for example by displaying the breast ultrasound image on a display and determining the ROI from a box-selection operation performed by the user. Alternatively, the ROI may be determined semi-automatically, for example by first automatically detecting the location of the ROI on the breast ultrasound image with a machine recognition algorithm and then letting the user modify or correct it to obtain a more accurate location.
For the global breast ultrasound image, or for the ROI extracted from it, the processor 114 performs feature map extraction based on one or more pre-trained first feature map extractors to obtain one or more first feature maps relating to breast lesion features, which include BI-RADS features. The breast lesion features may also include other features related to the benignity or malignancy of the lesion. In addition, the processor 114 performs feature map extraction on the breast ultrasound image based on a pre-trained second feature map extractor to obtain a second feature map relating to BI-RADS classification. The extraction of the first feature maps and of the second feature map may be performed sequentially or in parallel. In one embodiment, one BI-RADS feature corresponds to one first feature map; in other embodiments, one BI-RADS feature may correspond to a plurality of first feature maps.
The BI-RADS features include, but are not limited to, shape type, direction type, edge type, echo type, posterior echo type, calcification type, and other features highly correlated with the malignancy of a breast lesion. A first feature map extractor may be trained for one of the BI-RADS features, or a plurality of first feature map extractors may be trained for a plurality of BI-RADS features.
Specifically, a classification model for a specific BI-RADS feature may first be trained, and the front-end feature extractor of the trained classification model is retained as the first feature map extractor for that BI-RADS feature; the first feature map extracted by this extractor then contains information about the BI-RADS feature. When building classification models for different BI-RADS features, a typical deep learning convolutional classification network such as AlexNet, ResNet or VGG can be trained by minimizing the error between the prediction and the pre-labeled ground truth. The specific training process of the first feature map extractor is described below.
Similarly, when training the second feature map extractor, a classification model for BI-RADS classification of breast ultrasound images is trained first, and after training is completed, the front-end feature extractor of that classification model is retained as the second feature map extractor. The second feature map extracted by the second feature map extractor contains information about the BI-RADS classification. The specific training process of the second feature map extractor is described below.
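A minimal sketch of this train-then-truncate idea follows, assuming a PyTorch setup with a data loader of pre-labeled sample images; ResNet-18 and the four-class calcification labels are illustrative choices, not the patent's actual network or data.

```python
# Sketch only: train a standard CNN classifier for a single BI-RADS feature
# (calcification type here), then keep its convolutional front end as the
# feature map extractor. The same routine works for the BI-RADS classification
# labels, yielding the second feature map extractor.
import torch
import torch.nn as nn
import torchvision

def train_feature_extractor(train_loader, num_classes=4, epochs=10, device="cpu"):
    """train_loader yields (image_batch, label_batch) with pre-labeled classes."""
    model = torchvision.models.resnet18(num_classes=num_classes).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):                          # stopping condition simplified to a fixed epoch count
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), labels.to(device))
            loss.backward()                          # minimize error between prediction and pre-labeled class
            optimizer.step()
    # Retain the front-end feature map extraction network: everything before the
    # global pooling and fully connected classification head.
    extractor = nn.Sequential(*list(model.children())[:-2])
    extractor.eval()
    return extractor                                 # outputs a (N, 512, H/32, W/32) feature map
```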
The first feature map extractors extract first feature maps containing information about the BI-RADS features, and the second feature map extractor extracts a second feature map containing information about the BI-RADS classification. Because both kinds of feature maps are extracted by deep learning networks, the problem that the feature vectors of BI-RADS features and the feature map for BI-RADS classification have inconsistent dimensions is avoided. In some embodiments, the one or more first feature maps and the second feature map have the same spatial size, while their numbers of channels may be the same or different. The first and second feature maps may be one-dimensional or two-dimensional feature maps, which is not limited in the embodiments of the present invention.
After obtaining the one or more first feature maps and the second feature map, the processor 114 classifies the obtained one or more first feature maps and the second feature map based on a pre-trained first classification model to obtain a BI-RADS classification result.
BI-RADS classification generally comprises categories 0-6. Illustratively, the categories are as follows. Category 0: assessment incomplete; the patient needs to be recalled and evaluated after additional examinations, indicating that the information obtained from the current examination may be incomplete. Category 1: no abnormality found, normal breast. Category 2: benign finding; malignancy is essentially excluded and regular follow-up is recommended. Category 3: probably benign, but a shortened follow-up interval is needed; the malignancy rate is less than 2%. Category 4: the possibility of malignancy cannot be excluded and biopsy is required (further divided into 4a, low suspicion of malignancy; 4b, moderate suspicion; and 4c, high suspicion). Category 5: highly suggestive of malignancy (almost certainly malignant). Category 6: malignancy confirmed by biopsy.
In one embodiment, the processor may classify among all BI-RADS categories, or only among a subset of them, for example only categories 2-5, in which case the BI-RADS classification result is one of categories 2-5. In other embodiments, the BI-RADS classification result may also be a category interval, for example categories 2-3.
Illustratively, the first classification model may be a conventional machine learning classifier, such as an SVM, KNN or boosting classifier; alternatively, the first classification model may be a deep learning neural network, for example one that classifies by connecting a fully connected layer after the fused feature map.
Since the input to the first classification model includes not only the second feature map relating to BI-RADS classification but also the one or more first feature maps relating to breast lesion features, the information it contains is more comprehensive, which improves the accuracy of the BI-RADS classification result. In addition, because the first and second feature maps are all generated by deep learning networks, the problem of inconsistent dimensions between them is avoided.
In some embodiments, the processor 114 may fuse the one or more first feature maps with the second feature map to obtain a fused feature map. The feature maps may be fused by directly concatenating the one or more first feature maps with the second feature map. Alternatively, since different breast lesion features contribute differently to the BI-RADS classification, a corresponding contribution vector may be determined for each of the first feature maps and for the second feature map, and the feature maps may be weighted with these contribution vectors and fused to obtain the fused feature map.
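A minimal sketch of the two fusion options, assuming PyTorch tensors whose spatial sizes already match; the channel counts and contribution weights shown are arbitrary examples, not values from the patent.

```python
# Sketch only: fuse the first feature maps with the second feature map, either by
# plain channel-wise concatenation or by weighting each map with a contribution
# value before concatenation.
import torch

def fuse_feature_maps(first_maps, second_map, contributions=None):
    """first_maps: list of (N, C_i, H, W) tensors; second_map: (N, C, H, W).
    All maps share the same spatial size H x W; channel counts may differ."""
    maps = list(first_maps) + [second_map]
    if contributions is not None:                    # weighted fusion
        maps = [w * m for w, m in zip(contributions, maps)]
    return torch.cat(maps, dim=1)                    # (N, sum(C_i) + C, H, W)

# Example: three BI-RADS feature maps plus one classification map, each 7x7 spatially.
f1 = [torch.randn(1, 64, 7, 7) for _ in range(3)]
f2 = torch.randn(1, 128, 7, 7)
fused = fuse_feature_maps(f1, f2, contributions=[1.0, 0.5, 0.8, 1.0])
print(fused.shape)                                   # torch.Size([1, 320, 7, 7])
```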
In some embodiments, a dimensionality reduction operation may be applied to the fused feature map; for example, a high-dimensional fused feature map may be projected onto a low-dimensional subspace based on PCA (principal component analysis) to obtain a reduced feature vector, and the pre-trained classification model then classifies this reduced feature vector. This helps prevent over-fitting, increases the speed of the algorithm and reduces memory usage.
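A minimal sketch of this step, assuming the fused feature maps have been flattened into row vectors; the PCA dimensionality and the SVM classifier are illustrative choices consistent with the classifiers mentioned above, not the patent's specific configuration.

```python
# Sketch only: flatten the fused feature maps, reduce their dimensionality with
# PCA, and train a classical classifier (SVM) as the first classification model.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_grading_classifier(X_fused, y_grade, n_components=128):
    """X_fused: (num_samples, num_features) flattened fused feature maps;
    y_grade: pre-labeled BI-RADS categories (e.g. 2-5)."""
    clf = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
    clf.fit(X_fused, y_grade)
    return clf

# X_fused = fused_maps.reshape(num_samples, -1)      # flatten before PCA (assumed input)
```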
In some embodiments, the processor 114 is further configured to classify the obtained one or more first feature maps and the obtained second feature map based on a pre-trained second classification model to obtain a BI-RADS feature result.
Like the first classification model, the second classification model can be built from a traditional machine learning classifier or from fully connected layers of a neural network. The difference is that the second classification model outputs BI-RADS feature results, for example whether the edge feature is sharp or not sharp, or which calcification type is present: no calcification, intraductal calcification, intra-mass calcification or extra-mass calcification.
Illustratively, each second classification model is trained for one BI-RADS feature and outputs one BI-RADS feature result. In the training stage, the second classification model is trained with the one or more first feature maps and the second feature map extracted from the sample breast ultrasound images as input; for example, the one or more first feature maps and the second feature map are fused into a fused feature map and training is performed on the fused feature map. Because the second classification model classifies based on both the one or more first feature maps and the second feature map, the input contains more comprehensive information, which improves the accuracy of the BI-RADS feature results.
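A minimal sketch of such per-feature heads built on the shared fused feature map, assuming the deep-learning (fully connected) variant of the second classification model; the feature names and class counts follow the examples in the text and are otherwise assumptions.

```python
# Sketch only: one small fully connected head per BI-RADS feature on top of the
# shared fused feature map, acting as the second classification models.
import torch
import torch.nn as nn

class FeatureHeads(nn.Module):
    def __init__(self, fused_channels, class_counts):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                      # (N, C, H, W) -> (N, C, 1, 1)
        self.heads = nn.ModuleDict({
            name: nn.Linear(fused_channels, n) for name, n in class_counts.items()
        })

    def forward(self, fused_map):
        pooled = self.pool(fused_map).flatten(1)                 # (N, C)
        return {name: head(pooled) for name, head in self.heads.items()}

heads = FeatureHeads(320, {"edge": 2, "calcification": 4})       # sharp / not sharp; four calcification types
logits = heads(torch.randn(1, 320, 7, 7))
print(logits["calcification"].shape)                             # torch.Size([1, 4])
```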
The BI-RADS classification result obtained by the processor 114 is output by the output device 116. The output device 116 may include a display, which may be a touch screen, a liquid crystal display or the like communicatively coupled to the processor 114; the display may also be a separate display device, such as a liquid crystal display or a television, independent of the ultrasound imaging system 100; or the display may be the screen of an electronic device such as a smartphone or tablet computer. There may be one or more displays; for example, the displays may include a main screen mainly used to display ultrasound images and a touch screen mainly used for human-computer interaction.
In addition to outputting the BI-RADS classification result from the processor 114, the display may also display ultrasound images from the processor 114. If the processor also obtains BI-RADS feature results based on the pre-trained second classification model, the display may display those feature results as well.
In addition, while displaying the ultrasound image, the display can provide a graphical user interface for human-computer interaction, on which one or more controlled objects are provided; the user can issue operation instructions through the human-computer interaction device to manipulate these controlled objects and perform the corresponding control operations. For example, an icon displayed on the graphical interface can be operated with the human-computer interaction device to perform a particular function, such as selecting an ROI on the breast ultrasound image.
In addition, ultrasound imaging system 100 may include output devices such as speakers, printers, etc., or any other suitable output device.
Optionally, the ultrasound imaging system 100 may further include other human-computer interaction devices connected to the processor 114, for example, the processor 114 may be connected to the human-computer interaction devices through an external input/output port, which may be a wireless communication module, a wired communication module, or a combination thereof. The external input/output port may also be implemented based on USB, bus protocols such as CAN, and/or wired network protocols, etc.
Illustratively, the human-computer interaction means may comprise an input device for detecting input information of the user, which may for example be a selection instruction for a region of interest, or may also comprise other instruction types. The input device may include one or more of a keyboard, mouse, scroll wheel, trackball, mobile input device (such as a mobile device with a touch screen display, cell phone, etc.), multi-function knob, and the like.
The ultrasound imaging system 100 may also include a storage medium. The storage medium may be used to store instructions for execution by the processor, for storing received ultrasound echoes, for storing ultrasound images, and so forth. The storage medium may be a flash memory card, solid state memory, hard disk, or the like. Which may be volatile memory and/or non-volatile memory, removable memory and/or non-removable memory, etc.
It should be understood that the components included in the ultrasound imaging system 100 shown in fig. 1 are merely illustrative and that more or fewer components may be included. This is not limited by the present application.
Hereinafter, a BI-RADS classification method according to an embodiment of the present invention is described with reference to fig. 2. FIG. 2 is a schematic flow diagram of a BI-RADS classification method 200 in accordance with an embodiment of the present invention. The BI-RADS classification method 200 may be implemented by the ultrasound imaging system 100 described above; only its main steps are described below, and details already described above are omitted.
As shown in FIG. 2, a BI-RADS classification method 200 of an embodiment of the present invention includes the steps of:
in step S210, a breast ultrasound image of a breast region to be examined is obtained.
In one embodiment, obtaining the breast ultrasound image of the breast region to be examined may include acquiring the breast ultrasound image in real time. Specifically, with reference to fig. 1, the transmit circuit in the transmit/receive circuit 112 sends appropriately delayed electrical signals to the transducer elements of the ultrasound probe 110, which convert them into ultrasound waves transmitted to the breast region of the subject. The transducer elements then receive the ultrasound echoes returned from the breast region and convert them into electrical signals, which are amplified, converted from analog to digital, and sent to the beamforming circuit for beamforming; the beamformed ultrasound echo data are then sent to the processor 114.
Depending on the imaging mode desired by the user, the processor 114 may process the ultrasound echo signals differently to obtain ultrasound data in different modes, and then form breast ultrasound images in those modes, such as two-dimensional images including B-mode, C-mode and D-mode images, through processing such as logarithmic compression, dynamic range adjustment and digital scan conversion.
In one embodiment, the breast ultrasound image of the breast region to be examined may also be obtained in non-real time, for example by reading a pre-stored breast ultrasound image or by receiving a breast ultrasound image transmitted from another device.
After the breast ultrasound image is acquired, the location of a region of interest (ROI) may also be determined in it. The region of interest generally contains a lesion region of the breast ultrasound image. A breast lesion is a diseased area of the breast, for example an inflammatory lesion or a tumor lesion; the type of lesion is not limited by the embodiments of the present invention.
In one example, the location of the region of interest may be automatically determined on the breast ultrasound image based on a related machine recognition algorithm, e.g., the region of interest may be automatically extracted based on a pre-trained ROI detection model. In other examples, the region of interest may be determined manually in a breast ultrasound image by a user. Alternatively, the region of interest may be determined by way of semi-automatic detection, for example, first automatically detecting the location of the region of interest on the breast ultrasound image based on a machine recognition algorithm, and then further modified or corrected by the user to obtain a more accurate location of the region of interest.
It should be noted that the determination of the region of interest in the breast ultrasound image is an optional step. The subsequent steps may be performed based on the region of interest in the breast ultrasound image, or may be performed based on the global breast ultrasound image, which is not limited in this embodiment of the present invention.
In step S220, feature map extraction is performed on the breast ultrasound image based on one or more pre-trained first feature map extractors to obtain one or more first feature maps about breast lesion features, including BI-RADS features.
Specifically, for each breast lesion feature, a pre-trained first feature map extractor may be respectively used to extract a first feature map from the breast ultrasound image. The BI-RADS features include, but are not limited to, breast shape type, orientation type, edge type, echo type, posterior echo type, calcification type, and other features that are highly correlated with breast lesion malignancy.
In step S230, feature map extraction is performed on the breast ultrasound image based on a pre-trained second feature map extractor to obtain a second feature map for BI-RADS classification.
The first feature map extractor extracts a first feature map containing information about the BI-RADS features, and the second feature map extractor extracts a second feature map containing information about the BI-RADS classification. Because both are in the form of feature maps, the problem that feature vectors of breast lesion features do not match the dimensions of the feature map for BI-RADS classification is avoided. The first feature map and the second feature map may have the same size.
In step S240, the obtained one or more first feature maps and the second feature map are classified based on the pre-trained first classification model to obtain a BI-RADS classification result.
The one or more first feature maps and the second feature map can be fused into a fused feature map, and the first classification model classifies the fused feature map. Since the fused feature map contains information about both the breast lesion features and the BI-RADS classification, the information is more comprehensive and the classification result is more accurate. In addition, because the first and second feature maps are both generated by deep learning networks, inaccurate BI-RADS classification results caused by inconsistent dimensions are avoided.
In some embodiments, the fused feature map may be subjected to dimensionality reduction, and classification may be performed on the reduced fused feature map, so as to increase computation speed and reduce memory usage.
In some embodiments, the BI-RADS ranking method 200 further includes: classifying the obtained one or more first feature maps and the second feature map based on a pre-trained second classification model to obtain a BI-RADS feature result. Similarly, the second classification model integrates one or more first feature maps and the second feature map for classification, so that the accuracy of the BI-RADS feature type prediction is improved.
In step S250, the BI-RADS classification result is output.
The BI-RADS classification result can be output in any suitable form, such as text, graphics or voice. If a second classification model is also used to obtain a BI-RADS feature result, that feature result can likewise be output.
In the ultrasound imaging system 100 and the BI-RADS classification method 200 described above, the first feature maps relating to breast lesion features and the second feature map relating to BI-RADS classification are used together as the input feature maps of the classification model, which eliminates differences between the features and improves the accuracy of BI-RADS classification.
Next, an ultrasound imaging system according to another embodiment of the present invention is described with reference to fig. 3, and fig. 3 shows a schematic block diagram of an ultrasound imaging system 300 according to an embodiment of the present invention.
As shown in fig. 3, the ultrasound imaging system 300 includes an ultrasound probe 310, transmit/receive circuitry 312, a processor 314, and an output device 316. The associated description of the various components of the ultrasound imaging system 300 may be with reference to the ultrasound imaging system 100 above. Only the main functions of the ultrasound imaging system 300 will be described below, and details that have been described above will be omitted.
The transmit/receive circuit 312 is configured to excite the ultrasound probe 310 to transmit ultrasound waves to a target region of a subject and to receive the ultrasound echoes returned from the target region to obtain ultrasound echo data. The processor 314 is configured to: process the ultrasound echo data to obtain an ultrasound image of the target region; perform feature map extraction on the ultrasound image based on one or more pre-trained first feature map extractors to obtain one or more first feature maps relating to lesion features; perform feature map extraction on the ultrasound image based on a pre-trained second feature map extractor to obtain a second feature map relating to lesion grading; and classify the obtained one or more first feature maps and the second feature map based on a pre-trained classification model to obtain a lesion grading result. The output device 316 is used to output the lesion grading result. Illustratively, the lesion features include at least one of edge, internal echo and calcification.
The ultrasound imaging system 300 is similar to the ultrasound imaging system 100 above and differs primarily in that the transmit/receive circuit 312 excites the ultrasound probe 310 to transmit ultrasound waves to a target region of the subject, which is not limited to the breast and may also be the thyroid, the liver or another target region. Accordingly, different target regions correspond to different lesion features and grading criteria.
For example, when the target region is a thyroid region, grading may be performed according to the TI-RADS criteria. In this case, the one or more pre-trained first feature map extractors extract first feature maps relating to TI-RADS features, which include composition, echo, shape, edge, focal hyperechoes and the like, and the second feature map extractor extracts a second feature map relating to TI-RADS grading. The TI-RADS criteria divide the malignancy of thyroid nodules into five grades, TR1-TR5, from benign to highly suspicious; the higher the TI-RADS grade, the greater the likelihood that the thyroid nodule is malignant.
In some embodiments, because different TI-RADS features contribute differently to the TI-RADS grading, after the plurality of first feature maps for the different TI-RADS features are obtained, a corresponding contribution vector may be determined for each TI-RADS feature, and the first feature maps and the second feature map may be weighted with these contribution vectors and fused to obtain the fused feature map.
In addition, the target region may also be a liver region; accordingly, an ultrasound image of the liver region is graded using liver-specific grading criteria and lesion features. Alternatively, the target region may be another tumor region, and the grading criterion may be the PD-L1 expression level, the Ki-67 expression level, or the like of the tumor region.
Next, a grading method according to an embodiment of the present invention is described with reference to fig. 4. FIG. 4 is a schematic flow chart of a grading method 400 according to an embodiment of the present invention. The grading method 400 can be implemented by the ultrasound imaging system 300 described above; only the main steps of the grading method 400 are described below, and details already described are omitted.
As shown in fig. 4, the grading method 400 according to an embodiment of the present invention includes the following steps:
in step S410, an ultrasound image of a target area of the object to be measured is acquired;
in step S420, feature map extraction is performed on the ultrasound image based on one or more pre-trained first feature map extractors to obtain one or more first feature maps related to lesion features;
in step S430, performing feature map extraction on the ultrasound image based on a pre-trained second feature map extractor to obtain a second feature map about lesion grading;
in step S440, classifying the obtained one or more first feature maps and the second feature map based on a pre-trained classification model to obtain a lesion classification result;
in step S450, the lesion grading result is output.
In one embodiment, the lesion features include at least one of margin, internal echo, and calcification.
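Purely by way of illustration and not as part of the claimed subject matter, the flow of steps S410 to S450 may be sketched in Python (PyTorch) as follows; the names first_extractors, second_extractor, and classifier are hypothetical placeholders for the pre-trained components, and the feature maps are assumed to share the same spatial size so that they can be concatenated along the channel dimension.

```python
import torch

def grade_lesion(ultrasound_image: torch.Tensor,
                 first_extractors: list,
                 second_extractor: torch.nn.Module,
                 classifier: torch.nn.Module) -> int:
    """Hypothetical sketch of steps S410-S450 for a single image (batch of 1)."""
    with torch.no_grad():
        # S420: one first feature map per lesion feature (margin, internal echo, calcification, ...)
        first_maps = [extractor(ultrasound_image) for extractor in first_extractors]
        # S430: second feature map related to lesion grading
        second_map = second_extractor(ultrasound_image)
        # S440: fuse along the channel dimension and classify the fused feature map
        fused = torch.cat(first_maps + [second_map], dim=1)
        logits = classifier(fused)
        # S450: the index of the highest score is taken as the grading result to be output
        return int(torch.argmax(logits, dim=1).item())
```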
For other specific details of the grading method 400 according to the embodiment of the present invention, reference may be made to the above description, which is not repeated herein.
According to the ultrasound imaging system 300 and the grading method 400 of the embodiments of the invention, the first feature map related to lesion features and the second feature map related to lesion grading are used together as input feature maps of the classification model for grading, thereby eliminating differences among the features and improving the accuracy of lesion grading.
The embodiment of the invention also provides a model training method. FIG. 5 shows a schematic flow diagram of a model training method 500 according to an embodiment of the invention. As shown in FIG. 5, the model training method 500 includes the following steps:
in step S510, training a first feature map extractor for one or more breast lesion features, including BI-RADS features;
in step S520, training a second feature map extractor for BI-RADS classification;
in step S530, extracting a first feature map of the sample breast ultrasound image by using the first feature map extractor, and extracting a second feature map of the sample breast ultrasound image by using the second feature map extractor;
in step S540, a classification model for outputting a BI-RADS classification result is trained based on the first feature map and the second feature map.
According to the model training method 500 provided by the embodiment of the invention, the first feature map extractor for the breast lesion features and the second feature map extractor for the BI-RADS classification are trained separately, and the classification model for outputting the BI-RADS classification result is trained by combining the first feature map and the second feature map.
For example, before the model training method is executed, sample sets are first constructed for the first feature map extractor and the second feature map extractor, respectively. The two sample sets may use the same batch of breast ultrasound images. In the sample set constructed for the first feature map extractor, the sample breast ultrasound images are annotated with breast lesion features; in the sample set constructed for the second feature map extractor, the sample breast ultrasound images are annotated with BI-RADS categories.
Illustratively, the sample breast ultrasound image includes a global sample breast ultrasound image, a global sample breast ultrasound image labeled with a region of interest, or a region of interest extracted from a global sample breast ultrasound image. The global sample breast ultrasound image is a breast ultrasound image acquired by an ultrasound imaging system, and the region of interest in the global sample breast ultrasound image may include a lesion area in the breast ultrasound image, for example a rectangular region containing the lesion area. A breast lesion is a diseased area of the breast and includes, for example, an inflammatory lesion, a tumor lesion, or the like.
In step S510, different breast lesion features may be treated as different classification problems, and corresponding classification models are trained for the one or more breast lesion features; each classification model may be used to classify a single breast lesion feature or multiple breast lesion features. The breast lesion features include BI-RADS features, specifically including but not limited to shape type, orientation type, margin type, echo type, posterior echo type, calcification type, and other features highly associated with breast lesions.
Specifically, sample breast ultrasound images whose breast lesion feature categories have been annotated in advance are input as training samples into the classification model to be trained; the error between the breast lesion feature classification result output by the classification model and the pre-annotated breast lesion feature category is reduced by optimizing the parameters of the classification model, and the classification model is optimized until it meets a preset condition, thereby obtaining the breast lesion feature classification model. The preset condition includes reaching a preset number of iterations, the error falling to a preset value, or the like. Typical deep learning convolutional neural networks, such as AlexNet, ResNet, and VGG, may be used to build the classification model. When training is performed based on a convolutional neural network, the sample breast ultrasound image is input into the network, which outputs classification results such as the margin attribute, orientation attribute, and echo property of the breast lesion; the error between the prediction and the annotation is computed and minimized by iterative optimization, for example using the back-propagation algorithm. After the trained breast lesion feature classification model is obtained, the front-end feature map extractor of the breast lesion feature classification model is retained as the first feature map extractor for that breast lesion feature.
Specifically, taking the breast lesion feature as the BI-RADS margin feature (clear/unclear) as an example, the constructed breast lesion feature classification model is a two-class network model for distinguishing whether the margin is clear or unclear. During training, sample breast ultrasound images annotated with the BI-RADS margin feature are input into a neural network (for example, a VGG19 network); the convolutional and pooling layers at the front end of the network extract a feature map from the sample breast ultrasound image, and the back-end network (for example, fully connected layers and Softmax) maps the high-dimensional feature map to a probability that the input is clear or unclear. The probability is corrected against the pre-annotated true margin classification through continuous iteration until the classification model reaches a certain accuracy, yielding a trained classification model for the BI-RADS margin feature. The front-end feature map extractor of this classification model is then retained as the first feature map extractor for the BI-RADS margin feature; in this way, the digital encoding of the BI-RADS margin feature (e.g., margin: 0-clear / 1-unclear) is converted into first feature map extraction for the BI-RADS margin feature.
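As a non-limiting sketch of this step, the following Python (PyTorch/torchvision) code trains a two-class VGG19-based margin classifier and then keeps only its convolutional front end as the first feature map extractor. The data loader, the label encoding (0-clear / 1-unclear), and the assumption that the ultrasound images are resized to 3x224x224 tensors are hypothetical choices made for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

def train_margin_feature_extractor(train_loader, epochs=10, device="cpu"):
    """Sketch: train a clear/unclear margin classifier, then keep its
    convolutional front end as the first feature map extractor."""
    model = models.vgg19()                       # randomly initialised VGG19 backbone
    model.classifier[6] = nn.Linear(4096, 2)     # two classes: 0-clear / 1-unclear
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

    for _ in range(epochs):
        for images, labels in train_loader:      # images: (N, 3, 224, 224); labels: margin annotations
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)   # error vs. pre-annotated margin categories
            loss.backward()                           # back-propagation
            optimizer.step()

    # retain only the convolution/pooling front end as the feature map extractor
    return model.features.eval()
```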
Similarly, in step S520, the BI-RADS classification may be treated as a multi-class problem to train a classification model for the BI-RADS classification, and the front-end feature map extractor of this classification model is retained as the second feature map extractor. Specifically, sample breast ultrasound images whose BI-RADS categories have been annotated in advance are input as training samples into the classification model to be trained; the error between the BI-RADS classification result output by the classification model and the pre-annotated BI-RADS category is reduced by optimizing the parameters of the classification model, and the classification model is optimized until it meets a preset condition, thereby obtaining the BI-RADS classification model. The classification model may use any suitable deep learning convolutional neural network. After training is completed, the front-end feature map extractor of the trained BI-RADS classification model is retained as the second feature map extractor for the BI-RADS classification.
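In a sketch such as the one above, the second feature map extractor could be obtained with the same training recipe by changing only the classification head; the number of BI-RADS categories shown here (six) is an assumption made purely for illustration.

```python
import torch.nn as nn
from torchvision import models

# Hypothetical reuse of the training recipe above for the second extractor:
# only the output layer changes to the assumed number of BI-RADS categories.
birads_model = models.vgg19()
birads_model.classifier[6] = nn.Linear(4096, 6)   # e.g. six BI-RADS categories (assumption)
# after training as above, birads_model.features serves as the second feature map extractor
```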
In step S530, for each sample breast ultrasound image, a first feature map of the sample breast ultrasound image is extracted using the first feature map extractor obtained in step S510, and a second feature map of the sample breast ultrasound image is extracted using the second feature map extractor obtained in step S520. A plurality of first feature maps regarding different breast lesion features may be extracted using a plurality of first feature map extractors. The first feature map and the second feature map may have the same spatial size; their numbers of channels may be the same or different.
In step S540, the classification model for outputting the BI-RADS classification result is then trained based on the one or more first feature maps extracted by the one or more first feature map extractors and the second feature map extracted by the second feature map extractor.
Specifically, the one or more first feature maps and the second feature map may be fused to obtain a fused feature map, and the classification model may be trained based on the fused feature map. The fused feature map combines information about the one or more breast lesion features with information about the BI-RADS classification; the information is therefore more comprehensive, and the resulting trained classification model is more accurate. When fusing the feature maps, the one or more first feature maps and the second feature map may be directly concatenated, or, since different breast lesion features contribute to the BI-RADS classification to different degrees, corresponding contribution degree vectors may be determined for the different first feature maps and the second feature map, and the first feature maps and the second feature map may be weighted and fused using these contribution degree vectors to obtain the fused feature map.
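For illustration only, the fusion step might be written as follows in Python; representing each contribution degree vector by a single scalar weight per feature map is a simplifying assumption, and the maps are assumed to share the same spatial size.

```python
import torch

def fuse_feature_maps(first_maps, second_map, contributions=None):
    """Sketch of the fusion step: concatenate the feature maps directly, or
    scale each map by its contribution weight before concatenating."""
    maps = list(first_maps) + [second_map]
    if contributions is not None:
        # one assumed scalar contribution weight per feature map
        maps = [weight * feature_map for weight, feature_map in zip(contributions, maps)]
    return torch.cat(maps, dim=1)   # fuse along the channel dimension

# e.g. weighting margin/echo/calcification maps and the BI-RADS map differently (weights are illustrative):
# fused = fuse_feature_maps([f_margin, f_echo, f_calc], f_birads, contributions=[0.3, 0.2, 0.2, 0.3])
```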
The classification model here may use a traditional machine learning classifier, such as an SVM, KNN, or boosting classifier: the fused feature map is input into the machine learning classifier, and the parameters of the classifier are optimized so that the output BI-RADS classification result approaches the pre-annotated true BI-RADS category. Alternatively, the classification model may use a deep learning neural network, for example by connecting fully connected layers after the fused feature map and training the network.
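A minimal sketch of the traditional-classifier option, assuming the fused feature maps are available as NumPy arrays and using a scikit-learn SVM (the kernel and regularization settings are illustrative), might look like this:

```python
import numpy as np
from sklearn.svm import SVC

def train_fused_svm(fused_maps, birads_labels):
    """Sketch: train an SVM on flattened fused feature maps so that the
    predicted BI-RADS category approaches the pre-annotated one."""
    X = np.stack([fused.reshape(-1) for fused in fused_maps])  # one flattened vector per sample
    y = np.asarray(birads_labels)                              # pre-annotated BI-RADS categories
    classifier = SVC(kernel="rbf", C=1.0)
    classifier.fit(X, y)
    return classifier
```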
In some embodiments, dimensionality reduction may also be performed on the fused feature map. For example, based on PCA (principal component analysis), the high-dimensional fused feature map is projected onto a low-dimensional subspace to obtain a feature vector, and the classification model is trained using the reduced feature vector as the training sample. This can prevent over-fitting, speed up the algorithm, and reduce the memory footprint.
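By way of example only, PCA-based dimensionality reduction could be chained in front of the classifier as in the following sketch; the number of retained components (128) is an assumption.

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_pca_classifier(X, y, n_components=128):
    """Sketch: project the flattened fused feature maps onto a low-dimensional
    subspace with PCA, then train the classifier on the reduced vectors."""
    model = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
    model.fit(X, y)   # X: (n_samples, n_features) flattened fused maps; y: BI-RADS categories
    return model
```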
In some embodiments, the model training method 500 further includes training a second classification model for outputting BI-RADS feature results based on the first feature map and the second feature map. The one or more first feature maps and the second feature map may be fused to obtain a fused feature map, and the second classification model is trained based on the fused feature map. Like the first classification model described above, the second classification model may be constructed using a traditional machine learning classifier or a deep learning neural network. When the second classification model is trained, the fused feature map, or the fused feature map after dimensionality reduction, is used as the training sample and the BI-RADS feature result as the classification target, and the model parameters are optimized by reducing the error between the BI-RADS feature result output by the second classification model and the pre-annotated true BI-RADS feature result. Because this training process integrates the information of the first feature map about breast lesion features and the information of the second feature map about the BI-RADS classification, the accuracy of the BI-RADS feature result output by the second classification model is improved.
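Illustratively, the neural-network variant of such a second classification model could be a small fully connected head applied to the fused (and optionally reduced) feature vector; the input dimension and the two-class margin output below are assumptions made for this sketch.

```python
import torch.nn as nn

class SecondClassifier(nn.Module):
    """Sketch of a second classification model: a fully connected head that maps
    the fused feature vector to a BI-RADS feature result (e.g. margin clear/unclear)."""
    def __init__(self, fused_dim=128, num_feature_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(fused_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_feature_classes),
        )

    def forward(self, fused_vector):   # fused (and optionally PCA-reduced) feature vector
        return self.head(fused_vector)
```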
According to the model training method 500 provided by the embodiment of the invention, the classification model for outputting the BI-RADS classification result is trained by combining the first feature map related to breast lesion features and the second feature map related to the BI-RADS classification, which improves the accuracy of the BI-RADS classification model, eliminates differences among the features, and avoids the problem of inconsistent dimensions.
In the following, a model training method according to another embodiment of the present invention is described with reference to fig. 6, and fig. 6 shows a schematic flow chart of a model training method 600 according to an embodiment of the present invention.
As shown in FIG. 6, the model training method 600 includes the following steps:
in step S610, training a first feature map extractor for one or more lesion features, respectively;
in step S620, a second feature map extractor is trained for lesion grading;
in step S630, a first feature map of the sample ultrasound image is extracted by using the first feature map extractor, and a second feature map of the sample ultrasound image is extracted by using the second feature map extractor;
in step S640, a classification model for outputting a lesion ranking result is trained based on the first feature map and the second feature map.
The model training method 600 is similar to the model training method 500 above, and differs therefrom primarily in that: the sample ultrasound image is not limited to a breast ultrasound image, but also includes an ultrasound image of a thyroid region, a liver region, or other target region. Accordingly, ultrasound images of different regions correspond to different lesion features and grading criteria.
For example, when the target region is a thyroid region, grading may be performed based on the TI-RADS grading criteria; in this case, the first feature map extractor is configured to extract first feature maps related to TI-RADS features, the TI-RADS features including composition, echogenicity, shape, margin, and echogenic foci, and the second feature map extractor is configured to extract a second feature map related to the TI-RADS grade. The TI-RADS criteria classify thyroid nodules into five grades, TR1 to TR5, ranging from benign to highly suspicious for malignancy; the higher the TI-RADS grade, the greater the likelihood that the thyroid nodule is malignant.
In step S640, the first feature map and the second feature map may be fused to obtain a fused feature map, and a classification model for outputting a lesion grading result may then be trained based on the fused feature map. In some embodiments, different TI-RADS features contribute to the TI-RADS grade to different degrees; specifically, corresponding contribution degree vectors may be determined for the different TI-RADS features, and the first feature map and the second feature map may be weighted and fused using these contribution degree vectors to obtain the fused feature map.
In addition, the target region may also be a liver region; accordingly, an ultrasound image of the liver region is graded using liver-specific grading criteria and lesion features. Alternatively, the target region may be another tumor region, and the grading criterion may be the PD-L1 expression level, the Ki-67 expression level, or the like of the tumor region.
Other specific details of the model training method 600 can refer to the above description of the model training method 500, which is not repeated herein.
Referring to fig. 7, an embodiment of the present application further provides an electronic device 700, where the electronic device 700 may be used to implement the model training method 500 or the model training method 600 described above. The electronic device 700 comprises a memory 710 and a processor 720, the memory 710 having stored thereon a computer program for execution by the processor 720.
The processor 720 may be implemented in software, hardware, firmware, or any combination thereof, and may use circuitry, one or more application-specific integrated circuits, one or more general-purpose integrated circuits, one or more microprocessors, one or more programmable logic devices, any combination of the foregoing, or other suitable circuits or devices. The processor 720 may control other components in the electronic device 700 to perform desired functions.
The memory 710 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory and/or cache memory or the like. The non-volatile memory may include, for example, read-only memory, hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 720 to implement the model training method and/or various other desired functions in embodiments of the present invention. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
Furthermore, according to an embodiment of the present application, there is also provided a computer storage medium having stored thereon program instructions for executing the respective steps of the model training method of any of the embodiments of the present application when the program instructions are executed by a computer or a processor. In some embodiments, the computer storage medium is a non-volatile computer-readable storage medium, which may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created during execution of program instructions, and the like. Further, the non-volatile computer-readable storage medium may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the non-transitory computer readable storage medium optionally includes memory that is remotely located from the processor.
Illustratively, the computer storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
In addition, according to an embodiment of the application, a computer program is further provided, which may be stored on a cloud or local storage medium. When executed by a computer or a processor, the computer program performs the corresponding steps of the model training method of the embodiments of the present application.
In summary, according to the ultrasound imaging system, the BI-RADS classification method, and the model training method of the embodiments of the present invention, the feature map related to lesion features and the feature map related to the grading result are used together as input feature maps of the classification model for classification, which improves the accuracy of lesion grading, eliminates differences among the features, and avoids the problem of inconsistent dimensions.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (22)

1. An ultrasound imaging system, characterized in that the ultrasound imaging system comprises:
an ultrasonic probe;
the transmitting/receiving circuit is used for exciting the ultrasonic probe to transmit ultrasonic waves to a mammary gland region of a tested object and receiving ultrasonic echoes returned from the mammary gland region so as to obtain ultrasonic echo data;
a processor to:
processing the ultrasonic echo data to obtain a breast ultrasonic image of the breast region;
performing feature map extraction on the breast ultrasound image based on one or more pre-trained first feature map extractors to obtain one or more first feature maps about breast lesion features, the breast lesion features including BI-RADS features;
performing feature map extraction on the breast ultrasound image based on a pre-trained second feature map extractor to obtain a second feature map about BI-RADS classification;
classifying the obtained one or more first feature maps and the obtained second feature map based on a pre-trained first classification model to obtain a BI-RADS classification result;
and the output device is used for outputting the BI-RADS grading result.
2. The ultrasound imaging system of claim 1, wherein the classifying the obtained one or more first feature maps and the second feature map based on the pre-trained first classification model to obtain the BI-RADS classification result comprises:
and fusing the one or more first feature maps and the second feature map to obtain a fused feature map, and classifying the fused feature map based on a pre-trained first classification model to obtain a BI-RADS classification result.
3. The ultrasound imaging system of claim 1 or 2, wherein the processor is further configured to extract a region of interest in the breast ultrasound image, and wherein the feature map extraction of the breast ultrasound image comprises feature map extraction of the region of interest in the breast ultrasound image.
4. The ultrasound imaging system of claim 1 or 2, wherein the processor is further configured to:
classifying the obtained one or more first feature maps and the second feature map based on a pre-trained second classification model to obtain a BI-RADS feature result.
5. A BI-RADS classification method, the method comprising:
obtaining a mammary gland ultrasonic image of a mammary gland area to be detected;
performing feature map extraction on the breast ultrasound image based on one or more pre-trained first feature map extractors to obtain one or more first feature maps about breast lesion features, the breast lesion features including BI-RADS features;
performing feature map extraction on the breast ultrasound image based on a pre-trained second feature map extractor to obtain a second feature map about BI-RADS classification;
classifying the obtained one or more first feature maps and the obtained second feature map based on a pre-trained first classification model to obtain a BI-RADS classification result;
and outputting the BI-RADS grading result.
6. The classification method according to claim 5, wherein the classifying the obtained one or more first feature maps and the second feature map based on the pre-trained first classification model to obtain the BI-RADS classification result comprises:
and fusing the one or more first feature maps and the second feature map to obtain a fused feature map, and classifying the fused feature map based on a pre-trained first classification model to obtain a BI-RADS classification result.
7. The ranking method according to claim 5 or 6, further comprising extracting a region of interest in the breast ultrasound image, wherein the feature map extraction of the breast ultrasound image comprises feature map extraction of the region of interest in the breast ultrasound image.
8. The classification method according to claim 5 or 6, further comprising:
classifying the obtained one or more first feature maps and the second feature map based on a pre-trained second classification model to obtain a BI-RADS feature result.
9. An ultrasound imaging system, characterized in that the ultrasound imaging system comprises:
an ultrasonic probe;
the transmitting/receiving circuit is used for exciting the ultrasonic probe to transmit ultrasonic waves to a target area of a measured object and receiving ultrasonic echoes returned from the target area so as to obtain ultrasonic echo data;
a processor to:
processing the ultrasonic echo data to obtain an ultrasonic image of the target area;
performing feature map extraction on the ultrasonic image based on one or more pre-trained first feature map extractors to obtain one or more first feature maps about lesion features;
performing feature map extraction on the ultrasonic image based on a pre-trained second feature map extractor to obtain a second feature map about lesion grading;
classifying the obtained one or more first feature maps and the second feature map based on a pre-trained classification model to obtain a lesion grading result;
and the output device is used for outputting the lesion grading result.
10. The ultrasound imaging system of claim 9, wherein the lesion features comprise: at least one of an edge, an internal echo, a calcification, a shape, a direction, a posterior echo, a composition, or a blood flow characteristic.
11. A method of classification, the method comprising:
acquiring an ultrasonic image of a target area of a measured object;
performing feature map extraction on the ultrasonic image based on one or more pre-trained first feature map extractors to obtain one or more first feature maps about lesion features;
performing feature map extraction on the ultrasonic image based on a pre-trained second feature map extractor to obtain a second feature map about lesion grading;
classifying the obtained one or more first feature maps and the second feature map based on a pre-trained classification model to obtain a lesion grading result;
and outputting the lesion grading result.
12. The grading method according to claim 11, wherein the lesion features comprise: at least one of an edge, an internal echo, a calcification, a shape, a direction, a posterior echo, a composition, or a blood flow characteristic.
13. A method of model training, the method comprising:
training a first feature map extractor for one or more breast lesion features, the breast lesion features comprising BI-RADS features;
training a second feature map extractor for the BI-RADS classification;
extracting a first feature map of the sample breast ultrasound image by using the first feature map extractor, and extracting a second feature map of the sample breast ultrasound image by using the second feature map extractor;
training a classification model for outputting BI-RADS classification results based on the first feature map and the second feature map.
14. The model training method of claim 13, wherein the training of the classification model for outputting the BI-RADS classification result based on the first feature map and the second feature map comprises:
and fusing the first feature map and the second feature map to obtain a fused feature map, and training a classification model for outputting a BI-RADS classification result based on the fused feature map.
15. The model training method of claim 13, wherein the sample breast ultrasound image comprises one or more of a global sample breast ultrasound image, a global sample breast ultrasound image labeled with a region of interest, or a region of interest extracted from a global sample breast ultrasound image.
16. The model training method of claim 13, further comprising:
training a second classification model for outputting BI-RADS feature results based on the first feature map and the second feature map.
17. The model training method of any one of claims 13 to 16, wherein the training of the first feature map extractor for one or more breast lesion features comprises:
inputting a sample breast ultrasound image with breast lesion feature classification calibrated in advance into a classification model to be trained as a training sample, reducing an error between a breast lesion feature classification result output by the classification model and the breast lesion feature classification calibrated in advance by optimizing parameters of the classification model, and optimizing the classification model until the classification model meets preset conditions to obtain a breast lesion feature classification model;
and reserving a front-end feature map extraction network of the breast lesion feature classification model to be used as the first feature map extractor.
18. The model training method of any one of claims 13 to 16, wherein the training of the second feature map extractor for the BI-RADS classification comprises:
inputting a sample breast ultrasound image with BI-RADS classification calibrated in advance into a classification model to be trained as a training sample, reducing errors between a BI-RADS classification result output by the classification model and the BI-RADS classification calibrated in advance by optimizing parameters of the classification model, and optimizing the classification model to meet preset conditions to obtain the BI-RADS classification model;
and reserving a front-end feature map extraction network of the BI-RADS hierarchical classification model to be used as the second feature map extractor.
19. The model training method of any one of claims 13 to 18, wherein the classification model comprises a machine learning classifier or a deep learning neural network model.
20. A method of model training, the method comprising:
respectively training a first feature map extractor aiming at one or more focus features;
training a second feature map extractor aiming at lesion grading;
extracting a first feature map of the sample ultrasonic image by using the first feature map extractor, and extracting a second feature map of the sample ultrasonic image by using the second feature map extractor;
training a classification model for outputting a lesion ranking result based on the first feature map and the second feature map.
21. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory having stored thereon a computer program for execution by the processor, the computer program, when executed by the processor, performing the steps of the model training method of any one of claims 13-20.
22. A computer storage medium on which a computer program is stored, which, when being executed by a computer or a processor, carries out the steps of the model training method according to any one of claims 13 to 20.
CN202010431222.4A 2020-05-20 2020-05-20 Ultrasonic imaging system, BI-RADS classification method and model training method Pending CN111768366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010431222.4A CN111768366A (en) 2020-05-20 2020-05-20 Ultrasonic imaging system, BI-RADS classification method and model training method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010431222.4A CN111768366A (en) 2020-05-20 2020-05-20 Ultrasonic imaging system, BI-RADS classification method and model training method

Publications (1)

Publication Number Publication Date
CN111768366A true CN111768366A (en) 2020-10-13

Family

ID=72719487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010431222.4A Pending CN111768366A (en) 2020-05-20 2020-05-20 Ultrasonic imaging system, BI-RADS classification method and model training method

Country Status (1)

Country Link
CN (1) CN111768366A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508942A (en) * 2020-12-25 2021-03-16 上海深博医疗器械有限公司 Method and system for acquiring BI-RADS (bidirectional radio Access data System) grade
CN112735593A (en) * 2021-01-22 2021-04-30 吉林大学 Renal cancer risk grade prediction method of associated immunohistochemical index Ki67
CN112907525A (en) * 2021-02-07 2021-06-04 中国人民解放军总医院 Mammary gland magnetic resonance image intelligent classification method, device, equipment and medium
CN112927808A (en) * 2021-03-01 2021-06-08 北京小白世纪网络科技有限公司 Thyroid ultrasound image-based nodule grading system and method
CN112990267A (en) * 2021-02-07 2021-06-18 哈尔滨医科大学 Breast ultrasonic imaging method and device based on style migration model and storage medium
CN113033667A (en) * 2021-03-26 2021-06-25 九江学院 Ultrasound image two-stage deep learning breast tumor classification method and device
CN113053523A (en) * 2021-04-23 2021-06-29 广州易睿智影科技有限公司 Continuous self-learning multi-model fusion ultrasonic breast tumor precise identification system
CN113393931A (en) * 2021-06-16 2021-09-14 哈尔滨理工大学 Breast cancer diagnosis and treatment method based on system dynamics
CN113592835A (en) * 2021-08-05 2021-11-02 深圳开立生物医疗科技股份有限公司 Mammary gland ultrasonic image grading method and device, electronic equipment and storage medium
CN114219807A (en) * 2022-02-22 2022-03-22 成都爱迦飞诗特科技有限公司 Mammary gland ultrasonic examination image grading method, device, equipment and storage medium
CN114757952A (en) * 2022-06-15 2022-07-15 深圳瀚维智能医疗科技有限公司 Ultrasonic image processing method, device, equipment and storage medium
CN114943717A (en) * 2022-05-31 2022-08-26 北京医准智能科技有限公司 Method and device for detecting breast lesions, electronic equipment and readable storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622757A (en) * 2012-03-16 2012-08-01 中国矿业大学 Indexes for quantitatively evaluating details and contrast of remote-sensing images on basis of IDL (Interface Definition Language) programming
CN104636493A (en) * 2015-03-04 2015-05-20 浪潮电子信息产业股份有限公司 Dynamic data grading method based on multi-classifier fusion
CN107133448A (en) * 2017-04-10 2017-09-05 温州医科大学 A kind of metabolism group data fusion optimized treatment method
CN107958216A (en) * 2017-11-27 2018-04-24 沈阳航空航天大学 Based on semi-supervised multi-modal deep learning sorting technique
CN108665456A (en) * 2018-05-15 2018-10-16 广州尚医网信息技术有限公司 The method and system that breast ultrasound focal area based on artificial intelligence marks in real time
CN109063886A (en) * 2018-06-12 2018-12-21 阿里巴巴集团控股有限公司 A kind of method for detecting abnormality, device and equipment
CN109359166A (en) * 2018-10-10 2019-02-19 广东国地规划科技股份有限公司 A kind of space increasing trends simulate calculation method synchronous with driving force factors contribution degree
CN109660382A (en) * 2018-11-15 2019-04-19 中国科学院计算机网络信息中心 Multi-source data fusion method, device and the storage medium of mobile terminal
CN110009126A (en) * 2019-01-23 2019-07-12 北京化工大学 The online alarm analysis method merged based on PLS model with PCA contribution degree
CN110599476A (en) * 2019-09-12 2019-12-20 腾讯科技(深圳)有限公司 Disease grading method, device, equipment and medium based on machine learning
CN110728674A (en) * 2019-10-21 2020-01-24 清华大学 Image processing method and device, electronic equipment and computer readable storage medium

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508942A (en) * 2020-12-25 2021-03-16 上海深博医疗器械有限公司 Method and system for acquiring BI-RADS (bidirectional radio Access data System) grade
CN112508942B (en) * 2020-12-25 2023-04-28 上海深博医疗器械有限公司 Method and system for acquiring BI-RADS grade
CN112735593A (en) * 2021-01-22 2021-04-30 吉林大学 Renal cancer risk grade prediction method of associated immunohistochemical index Ki67
CN112907525B (en) * 2021-02-07 2021-09-28 中国人民解放军总医院 Mammary gland magnetic resonance image intelligent classification method, device, equipment and medium
CN112907525A (en) * 2021-02-07 2021-06-04 中国人民解放军总医院 Mammary gland magnetic resonance image intelligent classification method, device, equipment and medium
CN112990267A (en) * 2021-02-07 2021-06-18 哈尔滨医科大学 Breast ultrasonic imaging method and device based on style migration model and storage medium
CN112990267B (en) * 2021-02-07 2022-06-28 哈尔滨医科大学 Breast ultrasonic imaging method and device based on style migration model and storage medium
CN112927808A (en) * 2021-03-01 2021-06-08 北京小白世纪网络科技有限公司 Thyroid ultrasound image-based nodule grading system and method
CN113033667A (en) * 2021-03-26 2021-06-25 九江学院 Ultrasound image two-stage deep learning breast tumor classification method and device
CN113053523A (en) * 2021-04-23 2021-06-29 广州易睿智影科技有限公司 Continuous self-learning multi-model fusion ultrasonic breast tumor precise identification system
CN113053523B (en) * 2021-04-23 2024-07-26 广州易睿智影科技有限公司 Continuous self-learning multi-model fusion ultrasonic breast lump accurate identification system
CN113393931A (en) * 2021-06-16 2021-09-14 哈尔滨理工大学 Breast cancer diagnosis and treatment method based on system dynamics
CN113592835A (en) * 2021-08-05 2021-11-02 深圳开立生物医疗科技股份有限公司 Mammary gland ultrasonic image grading method and device, electronic equipment and storage medium
CN114219807A (en) * 2022-02-22 2022-03-22 成都爱迦飞诗特科技有限公司 Mammary gland ultrasonic examination image grading method, device, equipment and storage medium
CN114219807B (en) * 2022-02-22 2022-07-12 成都爱迦飞诗特科技有限公司 Mammary gland ultrasonic examination image grading method, device, equipment and storage medium
CN114943717A (en) * 2022-05-31 2022-08-26 北京医准智能科技有限公司 Method and device for detecting breast lesions, electronic equipment and readable storage medium
CN114943717B (en) * 2022-05-31 2023-04-07 北京医准智能科技有限公司 Method and device for detecting breast lesions, electronic equipment and readable storage medium
CN114757952A (en) * 2022-06-15 2022-07-15 深圳瀚维智能医疗科技有限公司 Ultrasonic image processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111768366A (en) Ultrasonic imaging system, BI-RADS classification method and model training method
US11051790B2 (en) System comprising indicator features in high-resolution micro-ultrasound images
US9277902B2 (en) Method and system for lesion detection in ultrasound images
Zhang et al. Intelligent scanning: Automated standard plane selection and biometric measurement of early gestational sac in routine ultrasound examination
CN110838114B (en) Pulmonary nodule detection method, device and computer storage medium
US10238368B2 (en) Method and system for lesion detection in ultrasound images
CN111553892A (en) Lung nodule segmentation calculation method, device and system based on deep learning
CN103169506A (en) Ultrasonic diagnosis device and method capable of recognizing liver cancer automatically
WO2022110525A1 (en) Comprehensive detection apparatus and method for cancerous region
Kumar et al. A Novel Approach for Breast Cancer Detection by Mammograms
CN111528918B (en) Tumor volume change trend graph generation device after ablation, equipment and storage medium
Galimzianova et al. Quantitative framework for risk stratification of thyroid nodules with ultrasound: a step toward automated triage of thyroid cancer
CN114521914A (en) Ultrasonic parameter measuring method and ultrasonic parameter measuring system
CN112529900A (en) Method, device, terminal and storage medium for matching ROI in mammary gland image
Gomez et al. Feature selection and classifier performance in computer-aided diagnosis for breast ultrasound
Patrício et al. Differentiating malignant thyroid nodule with statistical classifiers based on demographic and ultrasound features
CN114298958A (en) Ultrasonic imaging system and ultrasonic image analysis method
CN111768367B (en) Data processing method, device and storage medium
CN113768544A (en) Ultrasonic imaging method and equipment for mammary gland
CN113229850A (en) Ultrasonic pelvic floor imaging method and ultrasonic imaging system
AlZoubi et al. Explainable DCNN Decision Framework for Breast Lesion Classification from Ultrasound Images Based on Cancer Characteristics
Paul et al. Experimental Analysis for Non-Alcoholic Fatty Liver Disease in Ultrasound Images Based on Efficient Net Classifier
WO2022134028A1 (en) Similar case retrieval method, similar case retrieval system and ultrasonic imaging system
CN113974688B (en) Ultrasonic imaging method and ultrasonic imaging system
Mashak et al. Transfer learning; powerful and fast segmentation and classification prostate cancer from MRI scans, in the development set

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination