CN111462049B - Automatic lesion area form labeling method in mammary gland ultrasonic radiography video - Google Patents

Automatic lesion area form labeling method in mammary gland ultrasonic radiography video

Info

Publication number
CN111462049B
CN111462049B
Authority
CN
China
Prior art keywords
network
video
lesion
module
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010159426.7A
Other languages
Chinese (zh)
Other versions
CN111462049A (en)
Inventor
龚勋
赵绪
杨子奇
邹海鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202010159426.7A
Publication of CN111462049A
Application granted
Publication of CN111462049B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30068 Mammography; Breast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images
    • G06V 2201/032 Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.

Abstract

The invention discloses a method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video. An end-to-end network model is designed: the data to be identified are simply fed into the model, which automatically convolves each video frame and extracts the discriminative features on which classification is based. The lesion area never has to be outlined by hand. Because some morphological features, such as enhancement intensity and enhancement phase, describe contrast changes of the lesion tissue relative to the surrounding normal tissue, the convolutional layers compute feature maps over the whole contrast frame sequence in which both normal tissue and lesion regions are represented, and the network compares them according to its learned rules to produce the result. For morphological features that evolve over time, such as the crab-foot pattern and the enhancement order, the designed network automatically computes features that track the dynamic morphological change across consecutive video frames.

Description

Automatic lesion area form labeling method in mammary gland ultrasonic radiography video
Technical Field
The invention relates to the field of medical ultrasound image data processing, and in particular to a method for automatically labeling lesion morphology in breast contrast-enhanced ultrasound video.
Background
Compared with natural images, medical ultrasound data are noisy, low in resolution and scarce, so ordinary machine-learning and deep-learning methods learn image features from them poorly. The invention designs an intelligent labeling method that automatically annotates lesion morphological features in contrast-enhanced ultrasound; the labeling results can be used for subsequent data analysis, machine learning, data archiving and medical assistance, and therefore have substantial application value.
Applying a contrast agent to conventional ultrasound imaging effectively strengthens the reflected ultrasound signal and yields clearer images. Specialist physicians must visually follow the dynamic change of a lesion in the contrast video and record the feature states and morphological changes of the lesion site, providing auxiliary information for nodule diagnosis and data for monitoring the disease during subsequent treatment. In routine practice, errors of visual judgment made under subjective experience cannot guarantee that the recorded lesion morphology is accurate.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a method for automatically labeling lesion morphology in breast contrast-enhanced ultrasound video, which addresses the problems described in the background art.
To achieve this purpose, the invention provides the following technical scheme: a method for automatically labeling lesion morphology in breast contrast-enhanced ultrasound video, which uses a convolutional neural network architecture to automatically extract morphological feature parameter information of lesions from the contrast video, completes morphology recognition and classification, and labels the lesions in the case data, comprising the following steps:
S1, constructing a breast contrast-enhanced ultrasound multi-label data set: taking the breast ultrasound video data provided by a hospital as units, converting the contrast video of each case into a sequence of still images and storing each case's contrast frame-sequence images in a separate folder;
S2, preprocessing the data organized in step S1: denoising the images with an edge-enhanced anisotropic diffusion algorithm to remove the speckle noise of the ultrasound images in the data set while preserving their detail and edge features;
S3, storing, as one sample in a json file, the storage path of each case's denoised data from step S2 together with the lesion morphology text recorded for that case by the physician; since each ultrasound case video contains several of the morphology types, the annotation must be made multi-label for category marking;
S4, subtracting 127.5 from each of the 3 channels of every pixel of the training data set of step S3 and dividing by 128 to obtain normalized pixel values for the contrast video sequence;
S5, designing, to achieve morphology recognition with an end-to-end classification model, a 3D network that uses the residual network resnet50 and a C3D network as its basic training networks; the network comprises 8 modules, each containing convolution layers with kernel size 3 × 3 and stride 1 × 1; after resnet50 is pre-trained on a transferred natural-image classification task, the samples of step S4 are fed into the network structure to compute the network model weight parameters, with the detailed calculation steps as follows;
S6, inputting into the network 16 consecutive contrast frames of size 224 × 224; depending on whether the current module is the first one, the C3D network extracts spatio-temporal sample features from either the input data or the feature map of the preceding module;
S7, using the pre-trained residual network resnet50 to receive either the input data or the feature map of the preceding module, extracting spatial feature information and averaging the results of all feature maps to form the module's overall spatial-feature residual block; in parallel, depending on whether the module is the first one, extracting time-series features from the input data or the feature map of the preceding module with a 3 × 1 one-dimensional temporal convolution;
S8, within each of the 8 modules of the 3D network, fusing the spatial, time-series and spatio-temporal feature information as follows: the temporal and spatial feature-map residual blocks computed from the features extracted in S7 are added to the features output by the corresponding module of the C3D network, and the sum serves as the input of the next module; the calculation is given by equation (1), where Xt denotes the input of a network module unit, Xt+1 its output, S(Xt) the spatial-feature residual block, T(Xt) the temporal-feature residual block, and ST(Xt) the spatio-temporal features extracted by the C3D network;
Xt+1=S(Xt)+T(Xt)+ST(Xt) (1)
S9, adding at the end of the network a fully connected layer that outputs a 4096-dimensional description feature, applying L2 regularization, judging each label with a sigmoid function, and finally outputting a prediction for every morphological feature of the lesion in each contrast video;
S10, comparing the prediction with the ground truth recorded by the physician, and evaluating each label with equation (2) as the network's recognition accuracy, where TP, TN, FP and FN are the numbers of predicted true positives, true negatives, false positives and false negatives respectively;
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (2)
S11, training under the constraint of BCE loss (binary cross-entropy) as the loss function, repeating S6-S10 until the loss converges, then storing the model;
S12, feeding the verification part of the data set through the trained model weights to obtain the automatic recognition results and accuracy.
Preferably, the lesion morphology text in step S3 comprises the following 6 types: enhancement intensity, enhancement phase, enhancement order, enhancement uniformity, shape regularity after enhancement, and crab-foot pattern, each morphology type having a value-range label. The ground-truth values are all obtained by 2 senior sonographers assessing the contrast-enhanced ultrasound features of the breast lesions. These features are important lesion-morphology description information, and their change before and after treatment can be observed during the course of treatment to assist in monitoring the patient's condition.
Preferably, the category marking in step S3 uses one-hot encoding: within each type the value field that is present is set to 1 and the absent attribute fields are set to 0. The data set is divided 6:2:2 into a training set, a test set and a validation set.
The invention has the following beneficial effects: an end-to-end network model structure is designed in which the data to be identified are simply fed into the model; the model automatically convolves every frame and extracts the discriminative features on which classification is based. The lesion area never has to be outlined by hand. Because some morphological features, such as enhancement intensity and enhancement phase, describe contrast changes of the lesion tissue relative to the surrounding normal tissue, the convolutional layers compute feature maps over the entire contrast frame sequence in which both normal tissue and lesion regions are represented, and the network compares them according to its learned rules to produce the result. For morphological features that evolve over time, such as the crab-foot pattern and the enhancement order, the designed network automatically computes features that track the dynamic morphological change across consecutive video frames.
Drawings
FIG. 1 is a schematic view of a data-sample json file;
FIG. 2 is a diagram of the network architecture combining spatio-temporal features;
FIG. 3 is a block diagram of the temporal-feature and spatial-feature residual module;
FIG. 4 is a diagram illustrating prediction results;
FIG. 5 is a flow chart of the labeling method.
Detailed Description
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only a part of the possible embodiments of the invention, not all of them; all other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the invention.
Referring to FIGS. 1-5, the invention provides the following technical solution: a method for automatically labeling lesion morphology in breast contrast-enhanced ultrasound video, which uses a convolutional neural network architecture to automatically extract morphological feature parameter information of lesions from the contrast video, completes morphology recognition and classification, and labels the lesions in the case data, comprising the following steps:
S1, constructing a breast contrast-enhanced ultrasound multi-label data set: taking the breast ultrasound video data provided by a hospital as units, converting the contrast video of each case into a sequence of still images and storing each case's contrast frame-sequence images in a separate folder; a sketch of this serialization step follows;
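By way of illustration only, the per-case frame extraction could be implemented as below (Python with OpenCV is assumed; the folder layout and file naming are illustrative, not prescribed by the invention):

    import os
    import cv2  # OpenCV, assumed available for video decoding

    def video_to_frames(video_path, out_dir):
        # Split one case's contrast video into numbered still images,
        # stored in a folder of their own (step S1).
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        count = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(os.path.join(out_dir, "%05d.png" % count), frame)
            count += 1
        cap.release()
        return count  # number of frames written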
S2, preprocessing the data organized in step S1: denoising the images with an edge-enhanced anisotropic diffusion algorithm to remove the speckle noise of the ultrasound images in the data set while preserving their detail and edge features; a sketch of the diffusion idea follows;
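The patent names an edge-enhanced anisotropic diffusion algorithm without giving its exact form; the classic Perona-Malik scheme below is an illustrative stand-in that shows the principle, namely that the conductance g suppresses diffusion across strong edges while smoothing speckle elsewhere:

    import numpy as np

    def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.1):
        # Iteratively smooth speckle; the edge-stopping conductance
        # g(d) = exp(-(d/kappa)^2) blocks diffusion across strong edges.
        u = img.astype(np.float32)
        g = lambda d: np.exp(-(d / kappa) ** 2)
        for _ in range(n_iter):
            # finite differences toward the four neighbours
            # (np.roll wraps at the border, a simplification)
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            u = u + gamma * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u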
S3, storing, as one sample in a json file, the storage path of each case's denoised data from step S2 together with the lesion morphology text recorded for that case by the physician. The lesion morphology text comprises the 6 types shown in the table below. The ground-truth values are all obtained by 2 senior sonographers assessing the contrast-enhanced ultrasound features of the breast lesions; these features are important lesion-morphology description information, and their change before and after treatment can be observed during the course of treatment to assist in monitoring the patient's condition. The morphology types and value-range labels are:
type of modality Value range label
Enhanced strength High, equal, low
Enhanced time phase Fast, synchronous and slow advance
Order of enhancement Centripetal and non-centripetal
Enhancing uniformity Uniform and non-uniform
Enhanced morphological rules Yes, no, difficult to distinguish
Crab foot shape Yes, no
Since each ultrasound case video contains several of the above morphology types, the annotation must be made multi-label for category marking. One-hot encoding is used: within each type the value field that is present is set to 1 and the absent attribute fields are set to 0. The data set is divided 6:2:2 into a training set, a test set and a validation set. A data sample is shown in FIG. 1, which gives an example of one video after serialization together with its label attribute values; a sketch of the encoding follows.
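A minimal sketch of the multi-label (concatenated one-hot) encoding; the six attributes and their value ranges come from the table above, while the ordering of values inside each attribute is an assumption:

    # Attribute table from above; the order of values within each
    # attribute is assumed, giving 3+3+2+2+3+2 = 15 label bits.
    ATTRS = [
        ("enhancement intensity", ["high", "equal", "low"]),
        ("enhancement phase", ["fast", "synchronous", "slow"]),
        ("enhancement order", ["centripetal", "non-centripetal"]),
        ("enhancement uniformity", ["uniform", "non-uniform"]),
        ("regular shape after enhancement", ["yes", "no", "difficult to distinguish"]),
        ("crab-foot pattern", ["yes", "no"]),
    ]

    def encode(record):
        # record maps attribute name -> the value the sonographers chose;
        # present value fields become 1, absent fields 0 (one-hot per type).
        bits = []
        for name, values in ATTRS:
            bits += [1 if record.get(name) == v else 0 for v in values]
        return bits

The resulting 15-bit vectors are the targets that the 6:2:2 training/test/validation splits carry.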
S4, subtracting 127.5 from each of the 3 channels of every pixel of the training data set of step S3 and dividing by 128 to obtain normalized pixel values for the contrast video sequence, for example:
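The normalization of step S4 is a one-liner; a sketch assuming the frames are held as an 8-bit NumPy array:

    import numpy as np

    def normalize(frames):
        # (pixel - 127.5) / 128 maps the 8-bit range [0, 255]
        # to roughly [-1, 1] on all 3 channels (step S4).
        return (frames.astype(np.float32) - 127.5) / 128.0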
S5, to achieve morphology recognition with an end-to-end classification model, the invention designs a 3D network whose framework uses a residual network (resnet50) and a C3D network as the basic training networks. The network comprises 8 modules; each module contains convolution layers with kernel size 3 × 3 and stride 1 × 1, and pooling with kernel size 2 × 2 and stride 2 × 2. After the residual network (resnet50) is pre-trained on a transferred natural-image classification task, the samples of step S4 are fed into the network structure to compute the network model weight parameters, with the detailed calculation steps as follows;
S6, the network input is 16 consecutive contrast frames of size 224 × 224; depending on whether the current module is the first one, the C3D network extracts spatio-temporal sample features from either the input data or the feature map of the preceding module.
S7, the pre-trained residual network (resnet50) likewise receives either the input data or the feature map of the preceding module, extracts spatial feature information, and averages the results of all feature maps to serve as the overall spatial-feature residual block of this layer; in parallel, depending on whether the module is the first one, a 3 × 1 one-dimensional temporal convolution extracts time-series features from the input data or the feature map of the preceding module;
S8, within each of the 8 modules of the 3D network, the spatial, time-series and spatio-temporal feature information are fused as follows: the temporal and spatial feature-map residual blocks computed from the features extracted in S7 are added to the features output by the corresponding module of the C3D network, and the sum serves as the input of the next module; the calculation is given by equation (1), where Xt denotes the input of a network module unit, Xt+1 its output, S(Xt) the spatial-feature residual block, T(Xt) the temporal-feature residual block, and ST(Xt) the spatio-temporal features extracted by the C3D network; a sketch of one such fusion module follows equation (1);
Xt+1=S(Xt)+T(Xt)+ST(Xt) (1)
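A sketch of one fusion module implementing equation (1) is given below (PyTorch assumed). The three branch bodies are simplified stand-ins, since the invention uses resnet50 for S, a one-dimensional temporal convolution for T and a C3D module for ST; the point being shown is the additive fusion:

    import torch
    import torch.nn as nn

    class FusionBlock(nn.Module):
        # One module of the 3D network: three parallel branches whose
        # outputs are summed as in equation (1).
        def __init__(self, ch):
            super().__init__()
            # S: spatial branch, 2D-style conv applied frame by frame (1x3x3)
            self.spatial = nn.Conv3d(ch, ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
            # T: temporal branch, 1D conv over time only (3x1x1)
            self.temporal = nn.Conv3d(ch, ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
            # ST: joint spatio-temporal branch, C3D-style 3x3x3 conv
            self.spatiotemporal = nn.Conv3d(ch, ch, kernel_size=3, padding=1)

        def forward(self, x):  # x: (batch, ch, time, height, width)
            return self.spatial(x) + self.temporal(x) + self.spatiotemporal(x)

Because every branch preserves the tensor shape, the three outputs can be summed directly, which is what makes the residual-style fusion of equation (1) possible.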
FIG. 3 shows how the temporal, spatial and spatio-temporal features extracted by one module of the C3D network are combined to form the complete residual module structure.
S9, at the end of the network a fully connected layer outputs a 4096-dimensional description feature, which is L2-regularized; each label is then judged with a sigmoid function, and the network finally outputs a prediction for every morphological feature of the lesion in each contrast video; a sketch of such an output head follows;
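A hedged sketch of the classification head of step S9 (PyTorch assumed; in_dim and the final projection from the 4096-dimensional feature to the 15 label bits of the table above are assumptions):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiLabelHead(nn.Module):
        # Fully connected layer -> 4096-d description feature,
        # L2 normalization of the feature, then per-label sigmoid scores.
        def __init__(self, in_dim, n_labels=15):
            super().__init__()
            self.fc = nn.Linear(in_dim, 4096)
            self.out = nn.Linear(4096, n_labels)

        def forward(self, x):
            feat = F.normalize(self.fc(x), p=2, dim=1)  # L2-normalized feature
            return torch.sigmoid(self.out(feat))        # one score per label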
S10, the predictions are compared with the ground truth recorded by the physician, and each label is evaluated with equation (2) to obtain the network's recognition accuracy, where TP, TN, FP and FN are respectively the numbers of true positives, true negatives, false positives and false negatives;
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (2)
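Computed per label over a set of predictions, equation (2) looks like this (NumPy assumed; pred and truth are 0/1 arrays of shape (n_samples, n_labels)):

    import numpy as np

    def label_accuracy(pred, truth):
        # Equation (2) per label: (TP+TN)/(TP+TN+FP+FN).
        tp = ((pred == 1) & (truth == 1)).sum(axis=0)
        tn = ((pred == 0) & (truth == 0)).sum(axis=0)
        fp = ((pred == 1) & (truth == 0)).sum(axis=0)
        fn = ((pred == 0) & (truth == 1)).sum(axis=0)
        return (tp + tn) / (tp + tn + fp + fn)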
S11, training under the constraint of BCE loss (binary cross-entropy) as the loss function, repeating S6-S10 until the loss converges, then storing the model; a minimal training-loop sketch follows;
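A minimal training-loop sketch for step S11 (PyTorch assumed; the optimizer, learning rate and epoch count are placeholders, not the patented configuration):

    import torch

    def train(model, train_loader, max_epochs=50, lr=1e-4):
        # BCE loss applied to the sigmoid scores produced by the head of S9;
        # labels are the 15-bit multi-label vectors.
        criterion = torch.nn.BCELoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for epoch in range(max_epochs):
            for clips, labels in train_loader:  # clips: (B, 3, 16, 224, 224)
                optimizer.zero_grad()
                loss = criterion(model(clips), labels.float())
                loss.backward()
                optimizer.step()
        torch.save(model.state_dict(), "model.pt")  # store the converged model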
S12, feeding the verification part of the data set through the trained model weights to obtain the automatic recognition result and accuracy. For example, the prediction result 100100101001010 decodes as: enhancement intensity high; enhancement phase fast; enhancement order non-centripetal; enhancement non-uniform; regular shape after enhancement: no; crab-foot pattern: no. A sketch of the decoding follows.
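Decoding such a bit string against the ATTRS table of the earlier sketch; because the value ordering inside each attribute was assumed there, this illustrates the mechanism rather than reproducing the patent's worked example bit for bit:

    def decode(bits):
        # bits: string such as "100100101001010"; each attribute consumes
        # as many bits as it has values, and the set bit names the value.
        out, i = {}, 0
        for name, values in ATTRS:
            chunk = bits[i:i + len(values)]
            out[name] = values[chunk.index("1")] if "1" in chunk else None
            i += len(values)
        return out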
Throughout the identification process the lesion area never has to be outlined by hand; the designed network automatically computes, from the spatio-temporal features of consecutive video frames, the features corresponding to the dynamic change of lesion morphology.
Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art can still modify the technical solutions described above or substitute equivalents for some of their features; any modification, equivalent replacement or improvement made within the spirit and principle of the invention shall fall within its protection scope.

Claims (3)

1. A method for automatically labeling lesion morphology in breast contrast-enhanced ultrasound video, the method using a convolutional neural network architecture to automatically extract morphological feature parameter information of lesions from the contrast video, complete morphology recognition and classification, and perform lesion labeling on the case data, the method comprising the following steps:
S1, constructing a breast contrast-enhanced ultrasound multi-label data set: taking the breast ultrasound video data provided by a hospital as units, converting the contrast video of each case into a sequence of still images and storing each case's contrast frame-sequence images in a separate folder;
S2, preprocessing the data organized in step S1: denoising the images with an edge-enhanced anisotropic diffusion algorithm to remove the speckle noise of the ultrasound images in the data set while preserving their detail and edge features;
S3, storing, as one sample in a json file, the storage path of each case's denoised data from step S2 together with the lesion morphology text recorded for that case by the physician, wherein each ultrasound case video contains several of the morphology types, so the annotation must be made multi-label for category marking;
S4, subtracting 127.5 from each of the 3 channels of every pixel of the training data set of step S3 and dividing by 128 to obtain normalized pixel values for the contrast video sequence;
S5, designing, to achieve morphology recognition with an end-to-end classification model, a 3D network that uses the residual network resnet50 and a C3D network as its basic training networks, the network comprising 8 modules, each containing convolution layers with kernel size 3 × 3 and stride 1 × 1; after resnet50 is pre-trained on a transferred natural-image classification task, feeding the samples of step S4 into the network structure to compute the network model weight parameters, with the detailed calculation steps as follows;
S6, inputting into the network 16 consecutive contrast frames of size 224 × 224, the C3D network extracting spatio-temporal sample features from either the input data or the feature map of the preceding module, depending on whether the current module is the first one;
S7, using the pre-trained residual network resnet50 to receive either the input data or the feature map of the preceding module, extracting spatial feature information and averaging the results of all feature maps to form the module's overall spatial-feature residual block; in parallel, depending on whether the module is the first one, extracting time-series features from the input data or the feature map of the preceding module with a 3 × 1 one-dimensional temporal convolution;
S8, within each of the 8 modules of the 3D network, fusing the spatial, time-series and spatio-temporal feature information as follows: adding the temporal and spatial feature-map residual blocks computed from the features extracted in S7 to the features output by the corresponding module of the C3D network, the sum serving as the input of the next module, the calculation being given by equation (1), where Xt denotes the input of a network module unit, Xt+1 its output, S(Xt) the spatial-feature residual block, T(Xt) the temporal-feature residual block, and ST(Xt) the spatio-temporal features extracted by the C3D network;
Xt+1=S(Xt)+T(Xt)+ST(Xt) (1)
S9, adding at the end of the network a fully connected layer that outputs a 4096-dimensional description feature, applying L2 regularization, judging each label with a sigmoid function, and finally outputting a prediction for every morphological feature of the lesion in each contrast video;
S10, comparing the prediction with the ground truth recorded by the physician, and evaluating each label with equation (2) as the network's recognition accuracy, where TP, TN, FP and FN are the numbers of predicted true positives, true negatives, false positives and false negatives respectively;
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (2)
S11, training under the constraint of BCE loss (binary cross-entropy) as the loss function, repeating S6-S10 until the loss converges, then storing the model;
S12, feeding the verification part of the data set through the trained model weights to obtain the automatic recognition results and accuracy.
2. The method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video according to claim 1, characterized in that: the lesion morphology text in step S3 comprises the following 6 types: enhancement intensity, enhancement phase, enhancement order, enhancement uniformity, shape regularity after enhancement, and crab-foot pattern, each morphology type having a value-range label; the ground-truth values are all obtained by 2 senior sonographers assessing the contrast-enhanced ultrasound features of the breast lesions; these features are important lesion-morphology description information, and their change before and after treatment can be observed during the course of treatment to assist in monitoring the patient's condition.
3. The method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video according to claim 1, characterized in that: the category marking in step S3 uses one-hot encoding, setting the value field that is present within each type to 1 and the absent attribute fields to 0, and the data set is divided 6:2:2 into a training set, a test set and a validation set.
CN202010159426.7A 2020-03-09 2020-03-09 Automatic lesion area form labeling method in mammary gland ultrasonic radiography video Active CN111462049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010159426.7A CN111462049B (en) 2020-03-09 2020-03-09 Automatic lesion area form labeling method in mammary gland ultrasonic radiography video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010159426.7A CN111462049B (en) 2020-03-09 2020-03-09 Automatic lesion area form labeling method in mammary gland ultrasonic radiography video

Publications (2)

Publication Number Publication Date
CN111462049A CN111462049A (en) 2020-07-28
CN111462049B (en) 2022-05-17

Family

ID=71684216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010159426.7A Active CN111462049B (en) 2020-03-09 2020-03-09 Automatic lesion area form labeling method in mammary gland ultrasonic radiography video

Country Status (1)

Country Link
CN (1) CN111462049B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899848B (en) * 2020-08-05 2023-07-07 中国联合网络通信集团有限公司 Image recognition method and device
CN113781440B (en) * 2020-11-25 2022-07-29 北京医准智能科技有限公司 Ultrasonic video focus detection method and device
CN112488937B (en) * 2020-11-27 2022-07-01 河北工业大学 Medical image feature enhancement method for segmentation task
CN112419396B (en) * 2020-12-03 2024-04-26 前线智能科技(南京)有限公司 Automatic thyroid ultrasonic video analysis method and system
CN113239951B (en) * 2021-03-26 2024-01-30 无锡祥生医疗科技股份有限公司 Classification method, device and storage medium for ultrasonic breast lesions
CN113593707B (en) * 2021-09-29 2021-12-14 武汉楚精灵医疗科技有限公司 Stomach early cancer model training method and device, computer equipment and storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991445A (en) * 2017-04-05 2017-07-28 重庆大学 A kind of ultrasonic contrast tumour automatic identification and detection method based on deep learning
CN208274569U (en) * 2017-07-06 2018-12-25 上海长海医院 A kind of resistance anti-detection devices for detecting glioblastoma boundary in body
CN107644419A (en) * 2017-09-30 2018-01-30 百度在线网络技术(北京)有限公司 Method and apparatus for analyzing medical image
CN108596069A (en) * 2018-04-18 2018-09-28 南京邮电大学 Neonatal pain expression recognition method and system based on depth 3D residual error networks
CN108665456A (en) * 2018-05-15 2018-10-16 广州尚医网信息技术有限公司 The method and system that breast ultrasound focal area based on artificial intelligence marks in real time
CN110148113A (en) * 2019-04-02 2019-08-20 成都真实维度科技有限公司 A kind of lesion target area information labeling method based on tomoscan diagram data
CN110349141A (en) * 2019-07-04 2019-10-18 复旦大学附属肿瘤医院 A kind of breast lesion localization method and system
CN110378259A (en) * 2019-07-05 2019-10-25 桂林电子科技大学 A kind of multiple target Activity recognition method and system towards monitor video
CN110378281A (en) * 2019-07-17 2019-10-25 青岛科技大学 Group Activity recognition method based on pseudo- 3D convolutional neural networks
CN110458249A (en) * 2019-10-10 2019-11-15 点内(上海)生物科技有限公司 A kind of lesion categorizing system based on deep learning Yu probability image group
CN110781830A (en) * 2019-10-28 2020-02-11 西安电子科技大学 SAR sequence image classification method based on space-time joint convolution

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"A CNN-based methodology for breast cancer diagnosis using thermal image";Juan Zuluaga-Gomez等;《arxiv》;20191030;第1-19页 *
"一种基于语义模型的乳腺钙化病灶标注方法";赵可心等;《生物医学工程学杂志》;20120229;第29卷(第1期);第160-163页 *
"基于三维残差稠密网络的人体行为识别算法";郭明祥等;《计算机应用》;20191210;第39卷(第12期);第3482-3489页 *
"基于深度学习的癌症计算机辅助分类诊断研究进展";肖焕辉等;《国际医学放射学杂志》;20191231;第42卷(第1期);第22-25页及第58页 *
"基于深度学习的视频行为识别方法综述";赵朵朵等;《电信科学》;20191231;第35卷(第12期);第1-13页 *
"基于级联生成对抗网络的人脸图像修复";陈俊周等;《电子科技大学学报》;20191130;第48卷(第6期);第910-917页 *

Also Published As

Publication number Publication date
CN111462049A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111462049B (en) Automatic lesion area form labeling method in mammary gland ultrasonic radiography video
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
CN107895367B (en) Bone age identification method and system and electronic equipment
CN109886273B (en) CMR image segmentation and classification system
WO2022063199A1 (en) Pulmonary nodule automatic detection method, apparatus and computer system
EP3770850A1 (en) Medical image identifying method, model training method, and computer device
CN108133476B (en) Method and system for automatically detecting pulmonary nodules
Kim et al. Machine-learning-based automatic identification of fetal abdominal circumference from ultrasound images
Ni et al. Standard plane localization in ultrasound by radial component model and selective search
CN109754361A (en) The anisotropic hybrid network of 3D: the convolution feature from 2D image is transmitted to 3D anisotropy volume
Zhang et al. Intelligent scanning: Automated standard plane selection and biometric measurement of early gestational sac in routine ultrasound examination
CN111553892B (en) Lung nodule segmentation calculation method, device and system based on deep learning
Oghli et al. Automatic fetal biometry prediction using a novel deep convolutional network architecture
Sinha et al. Medical image processing
CN112070119A (en) Ultrasonic tangent plane image quality control method and device and computer equipment
CN108052909B (en) Thin fiber cap plaque automatic detection method and device based on cardiovascular OCT image
CN112820399A (en) Method and device for automatically diagnosing benign and malignant thyroid nodules
CN114092450A (en) Real-time image segmentation method, system and device based on gastroscopy video
CN110570425B (en) Pulmonary nodule analysis method and device based on deep reinforcement learning algorithm
CN111754485A (en) Artificial intelligence ultrasonic auxiliary system for liver
Clark et al. Developing and testing an algorithm for automatic segmentation of the fetal face from three-dimensional ultrasound images
US20230115927A1 (en) Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection
CN112862786B (en) CTA image data processing method, device and storage medium
CN112862785B (en) CTA image data identification method, device and storage medium
CN113177953B (en) Liver region segmentation method, liver region segmentation device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant