CN111462049A - Method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video

Method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video

Info

Publication number
CN111462049A
Authority
CN
China
Prior art keywords
network
lesion
video
feature
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010159426.7A
Other languages
Chinese (zh)
Other versions
CN111462049B (en)
Inventor
龚勋
赵绪
杨子奇
邹海鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202010159426.7A
Publication of CN111462049A
Application granted
Publication of CN111462049B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30068 Mammography; Breast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/032 Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video. An end-to-end network model is designed: the data to be identified are simply fed into the model, which convolves every video frame and extracts the discriminative features on which classification is based. At no point in the identification process does the lesion area have to be outlined by hand. Some morphological features, such as enhancement intensity and enhancement timing, describe the difference in contrast behavior between normal tissue and lesion tissue; the convolutional layers therefore run their computation over the whole sequence of contrast frames, the resulting feature values map both the normal tissue and the lesion area, and the network compares the two according to its learned rules to reach a result. For morphological features of dynamic change, such as the crab-claw shape and the enhancement order, the designed network automatically computes the corresponding features from the morphological changes across consecutive video frames.

Description

Method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video
Technical Field
The invention relates to the field of medical ultrasound image data processing, and in particular to a method for automatically labeling lesion morphology in breast contrast-enhanced ultrasound video.
Background
Compared with natural images, medical ultrasound data are noisy, low in resolution, and scarce, so ordinary machine-learning and deep-learning methods learn image features from them poorly. The invention provides an intelligent labeling method that automatically labels lesion morphological features in contrast-enhanced ultrasound; the labeling results can serve subsequent data analysis, machine learning, data archiving, and medical assistance, and therefore have substantial application value.
Applying a contrast agent to conventional ultrasound imaging effectively strengthens the reflection of the ultrasonic waves and yields comparatively clear images. Specialist physicians ordinarily have to watch the dynamic change of a lesion in the contrast-enhanced video and record the features and morphological changes of the lesion site, providing auxiliary information for nodule diagnosis and data for monitoring the condition during subsequent treatment. Such visual judgment rests on subjective experience, so the accuracy of the recorded lesion morphology cannot be guaranteed.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a method for automatically labeling lesion morphology in breast contrast-enhanced ultrasound video, which addresses the problems identified in the background art.
To this end, the invention provides the following technical solution: a method for automatically labeling lesion morphology in breast contrast-enhanced ultrasound video, which uses a convolutional neural network architecture to automatically extract lesion morphological feature parameters from the contrast video, completes morphology recognition and classification, and labels the case data with lesion annotations, comprising the following steps:
S1, construct a breast contrast-enhanced ultrasound multi-label data set: taking the breast ultrasound video data provided by a hospital case by case, convert the contrast video of each case into consecutive images and store each case's sequence of contrast frames in its own folder;
S2, preprocess the data organized in step S1: denoise the images with an edge-enhanced anisotropic diffusion algorithm to remove the speckle noise of the ultrasound images in the data set while preserving their detail and edge features;
S3, for the data denoised in step S2, store each case's file path together with the lesion morphological feature text recorded by the corresponding physician as one sample in a json file; since every ultrasound case video contains several of the morphology types, the annotation must be organized as multiple labels for category marking;
S4, subtract 127.5 from each of the 3 channels of every pixel in the training data set of step S3 and divide by 128 to obtain normalized pixel values for the contrast video sequence;
S5, complete morphology recognition with an end-to-end classification model: the overall framework uses a residual network (ResNet50) and a 3-dimensional convolutional network (the C3D network) as the base training network, and the samples of step S4 are fed into the network to compute the network model weight parameters;
S6, input 16 consecutive contrast frames of size 224 × 224 into the network and use the C3D network to extract spatio-temporal sample features from the feature maps received in the previous step; the 3D network has 8 convolutional layers whose kernels are 3 × 3 with step size 1 × 1;
S7, transfer the weights of a residual network (ResNet50) trained for natural-image classification: this branch receives the feature map of the preceding network to extract spatial feature information and averages the results of all feature maps into the layer's overall spatial-feature residual block, while a 3 × 1 one-dimensional temporal convolution extracts time-series features from the same preceding output;
S8, in each of the 8 modules of the 3D network, add the features output by the module to the temporal- and spatial-feature residual blocks of S7 computed from the previous module; the calculation is given by formula (1), where Xt denotes the input of a network module unit, Xt+1 its output, S(Xt) the spatial-feature residual block, T(Xt) the temporal-feature residual block, and ST(Xt) the spatio-temporal features extracted by the 3D network;
Xt+1 = S(Xt) + T(Xt) + ST(Xt)    (1)
S9, append a fully connected layer at the end of the network to output 4096-dimensional description features, apply L2 regularization, judge each label with a sigmoid function, and finally output a prediction for each morphological feature of the lesion in each contrast-enhanced ultrasound video;
S10, compare the predictions with the ground truth recorded by the physician and evaluate each label with formula (2) to obtain the network's recognition accuracy, where TP, TN, FP and FN are the numbers of true positives, true negatives, false positives and false negatives respectively;
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (2)
S11, train under the constraint of BCELoss as the loss function, repeating S6-S10 until the loss converges, and save the model;
S12, run the validation part of the data set through the trained model weights to obtain the automatic recognition results and their accuracy.
Preferably, the lesion morphological feature text of step S3 covers the following 7 types: enhancement intensity, enhancement phase, enhancement order, enhancement uniformity, post-enhancement shape regularity, and crab-claw shape after enhancement; each morphology type has a value-range label; the ground-truth values were obtained by 2 senior sonographers assessing the contrast-enhanced ultrasound features of the breast lesions; these features are important descriptions of lesion morphology, and observing how they change before and after treatment assists monitoring of the condition.
Preferably, the category marking of step S3 uses one-hot encoding, setting the field of a label attribute value present in each type to 1 and an absent value to 0; and the data set is divided 6 : 2 : 2 into 3 parts: a training set, a test set, and a validation set.
The beneficial effects of the invention are as follows: an end-to-end network model structure is designed in which the data to be identified are simply fed into the model, which convolves every video frame and extracts the discriminative features on which classification is based. At no point in the identification process does the lesion area have to be outlined by hand. Some morphological features, such as enhancement intensity and enhancement timing, describe the difference in contrast behavior between normal tissue and lesion tissue; the convolutional layers therefore run their computation over the whole sequence of contrast frames, the resulting feature values map both the normal tissue and the lesion area, and the network compares the two according to its learned rules to reach a result. For morphological features of dynamic change, such as the crab-claw shape and the enhancement order, the designed network automatically computes the corresponding features from the morphological changes across consecutive video frames.
Drawings
FIG. 1 is a schematic view of a data-sample json file;
FIG. 2 is a diagram of the spatio-temporal feature combination network structure;
FIG. 3 is a structural diagram of the temporal-feature and spatial-feature residual module;
FIG. 4 is a diagram illustrating prediction results;
FIG. 5 is a flow chart of the labeling method.
Detailed Description
The technical solutions in the embodiments of the invention will now be described clearly and completely with reference to the drawings. The described embodiments are obviously only some, not all, of the possible embodiments of the invention; all other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the invention.
Referring to FIGS. 1-5, the invention provides the following technical solution: a method for automatically labeling lesion morphology in breast contrast-enhanced ultrasound video, which uses a convolutional neural network architecture to automatically extract lesion morphological feature parameters from the contrast video, completes morphology recognition and classification, and labels the case data with lesion annotations, comprising the following steps:
S1, construct a breast contrast-enhanced ultrasound multi-label data set: taking the breast ultrasound video data provided by a hospital case by case, convert the contrast video of each case into consecutive images and store each case's sequence of contrast frames in its own folder;
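As an illustration of S1, a minimal sketch of the per-case frame extraction might look as follows; the file layout and naming scheme are assumptions for illustration, not prescribed by the invention:

```python
import os
import cv2  # OpenCV

def video_to_frames(video_path, case_dir):
    """Split one case's contrast-enhanced ultrasound video into numbered frame images."""
    os.makedirs(case_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        cv2.imwrite(os.path.join(case_dir, f"{idx:05d}.png"), frame)
        idx += 1
    cap.release()
```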
S2, preprocess the data organized in step S1: denoise the images with an edge-enhanced anisotropic diffusion algorithm to remove the speckle noise of the ultrasound images in the data set while preserving their detail and edge features;
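The patent names an edge-enhanced anisotropic diffusion algorithm without giving its equations; as a rough stand-in, a classical Perona-Malik diffusion loop (iteration count and conduction parameters are assumptions) conveys the idea, smoothing speckle while the edge-stopping coefficients preserve boundaries:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.15):
    """Perona-Malik-style diffusion: an illustrative stand-in for the
    edge-enhanced algorithm named in S2; parameters are assumptions."""
    img = img.astype(np.float32)
    for _ in range(n_iter):
        # finite-difference gradients toward the four neighbours
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # edge-stopping conduction coefficients: small where gradients are large
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        img += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return img
```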
S3, for the data denoised in step S2, store each case's file path together with the lesion morphological feature text recorded by the corresponding physician as one sample in a json file. The lesion morphological feature text covers the morphology types tabulated below; all ground-truth values were obtained by 2 senior sonographers assessing the contrast-enhanced ultrasound features of the breast lesions; these features are important descriptions of lesion morphology, and observing how they change before and after treatment assists monitoring of the condition. The morphology types and their value-range labels are:
type of modality Value range label
Enhanced strength High, equal, low
Enhanced time phase Fast, synchronous and slow advance
Order of enhancement Centripetal and non-centripetal
Enhancing uniformity Uniform and non-uniform
Enhanced morphological rules Yes, no, difficult to distinguish
Crab foot shape Yes, no
Since each ultrasound case video contains the above morphology types, the annotation must be organized as multiple labels for category marking. Using one-hot encoding, the field of a label attribute value present in each type is set to 1 and an absent value to 0; and the data set is divided 6 : 2 : 2 into 3 parts: a training set, a test set, and a validation set. A data sample is shown in FIG. 1; the label encoding and split are sketched below.
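A sketch of the label encoding and the 6 : 2 : 2 split; the attribute keys and value spellings are illustrative assumptions mirroring the table above, and the concatenated multi-hot vector is what the per-label outputs of S9 are compared against:

```python
import random

# Assumed attribute vocabulary mirroring the table above (names are illustrative).
ATTRS = {
    "enhancement_intensity": ["high", "equal", "low"],
    "enhancement_phase": ["fast", "synchronous", "slow"],
    "enhancement_order": ["centripetal", "non_centripetal"],
    "enhancement_uniformity": ["uniform", "non_uniform"],
    "post_enhancement_shape_regular": ["yes", "no", "hard_to_tell"],
    "crab_claw_shape": ["yes", "no"],
}

def encode_multilabel(record):
    """One-hot encode each attribute's value and concatenate into one multi-hot vector."""
    vec = []
    for attr, values in ATTRS.items():
        vec += [1 if record[attr] == v else 0 for v in values]
    return vec

def split_622(samples, seed=0):
    """Shuffle and split case samples 6:2:2 into train / test / validation."""
    random.Random(seed).shuffle(samples)
    n = len(samples)
    a, b = int(0.6 * n), int(0.8 * n)
    return samples[:a], samples[a:b], samples[b:]
```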
S4, subtract 127.5 from each of the 3 channels of every pixel in the training data set of step S3 and divide by 128 to obtain normalized pixel values for the contrast video sequence;
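In code form S4 is a one-liner; a minimal sketch:

```python
import numpy as np

def normalize(frames):
    """S4: shift each 8-bit channel by 127.5 and scale by 128,
    mapping pixel values into roughly [-1, 1)."""
    return (frames.astype(np.float32) - 127.5) / 128.0
```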
S5, complete morphology recognition with an end-to-end classification model: the overall framework uses a residual network (ResNet50) and a 3-dimensional convolutional network (the C3D network) as the base training network, and the samples of step S4 are fed into the network to compute the network model weight parameters; the complete spatio-temporal feature combination network structure is shown in FIG. 2;
S6, input 16 consecutive contrast frames of size 224 × 224 into the network and use the C3D network to extract spatio-temporal sample features from the feature maps received in the previous step; the 3D network has 8 convolutional layers whose kernels are 3 × 3 with step size 1 × 1;
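A sketch of assembling one 16-frame input clip in the (N, C, T, H, W) layout that 3D convolutions expect, reusing the `normalize` helper sketched under S4; resizing or cropping to 224 × 224 is assumed to happen beforehand:

```python
import numpy as np
import torch

def make_clip(frames, start=0):
    """Stack 16 consecutive 224 x 224 RGB frames into a (1, 3, 16, 224, 224) tensor."""
    clip = np.stack(frames[start:start + 16])     # (16, 224, 224, 3), uint8
    clip = torch.from_numpy(normalize(clip))      # float32, same shape
    return clip.permute(3, 0, 1, 2).unsqueeze(0)  # (1, 3, 16, 224, 224)
```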
S7, transfer the weights of a residual network (ResNet50) trained for natural-image classification: this branch receives the feature map of the preceding network to extract spatial feature information and averages the results of all feature maps into the layer's overall spatial-feature residual block, while a 3 × 1 one-dimensional temporal convolution extracts time-series features from the same preceding output;
S8, in each of the 8 modules of the 3D network, add the features output by the module to the temporal- and spatial-feature residual blocks of S7 computed from the previous module; the calculation is given by formula (1), where Xt denotes the input of a network module unit, Xt+1 its output, S(Xt) the spatial-feature residual block, T(Xt) the temporal-feature residual block, and ST(Xt) the spatio-temporal features extracted by the 3D network;
Xt+1 = S(Xt) + T(Xt) + ST(Xt)    (1)
The structure of the corresponding residual module in the network is shown in FIG. 3.
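A minimal PyTorch sketch of one module implementing formula (1); the channel count, kernel shapes, and activation placement are assumptions for illustration, not the exact patented architecture (input layout: (N, C, T, H, W)):

```python
import torch.nn as nn

class SpatioTemporalResidualBlock(nn.Module):
    """Formula (1): Xt+1 = S(Xt) + T(Xt) + ST(Xt)."""
    def __init__(self, channels):
        super().__init__()
        # S: per-frame spatial convolution (kernel 1 x 3 x 3)
        self.spatial = nn.Conv3d(channels, channels, (1, 3, 3), padding=(0, 1, 1))
        # T: 3 x 1 one-dimensional temporal convolution (kernel 3 x 1 x 1)
        self.temporal = nn.Conv3d(channels, channels, (3, 1, 1), padding=(1, 0, 0))
        # ST: full spatio-temporal convolution (kernel 3 x 3 x 3), as in C3D
        self.spatiotemporal = nn.Conv3d(channels, channels, (3, 3, 3), padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # sum the three residual branches, per formula (1)
        return self.relu(self.spatial(x) + self.temporal(x) + self.spatiotemporal(x))
```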
S9, append a fully connected layer at the end of the network to output 4096-dimensional description features, apply L2 regularization, judge each label with a sigmoid function, and finally output a prediction for each morphological feature of the lesion in each contrast-enhanced ultrasound video;
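A sketch of the S9 head, under the assumption that "L2 regularization" here means L2-normalizing the 4096-dimensional descriptor before classification; the label count of 15 (the total number of attribute values in the table above) is likewise an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLabelHead(nn.Module):
    def __init__(self, in_features, num_labels=15):
        super().__init__()
        self.fc = nn.Linear(in_features, 4096)        # 4096-d description features
        self.classifier = nn.Linear(4096, num_labels)

    def forward(self, x):
        feat = F.normalize(self.fc(x), p=2, dim=1)    # L2-normalize the descriptor
        return torch.sigmoid(self.classifier(feat))   # independent per-label scores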
S10, compare the predictions with the ground truth recorded by the physician and evaluate each label with formula (2) to obtain the network's recognition accuracy, where TP, TN, FP and FN are the numbers of true positives, true negatives, false positives and false negatives respectively;
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (2)
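Formula (2) applied per label, as a sketch; thresholding the sigmoid scores at 0.5 is an assumption:

```python
import numpy as np

def per_label_accuracy(scores, truth, threshold=0.5):
    """scores, truth: (N, L) arrays; returns formula (2) for each of the L labels."""
    pred = (scores >= threshold).astype(int)
    tp = ((pred == 1) & (truth == 1)).sum(axis=0)
    tn = ((pred == 0) & (truth == 0)).sum(axis=0)
    fp = ((pred == 1) & (truth == 0)).sum(axis=0)
    fn = ((pred == 0) & (truth == 1)).sum(axis=0)
    return (tp + tn) / (tp + tn + fp + fn)
```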
S11, train under the constraint of BCELoss as the loss function, repeating S6-S10 until the loss converges, and save the model;
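A minimal training-loop sketch for S11; the optimizer choice, learning rate, epoch budget, and output filename are assumptions, and `model` and `train_loader` stand for the network of S5-S9 and a loader yielding the clips and multi-hot labels sketched earlier:

```python
import torch

criterion = torch.nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimizer and lr assumed

for epoch in range(50):                    # epoch budget assumed; stop on convergence
    for clips, labels in train_loader:     # clips: (N, 3, 16, 224, 224)
        optimizer.zero_grad()
        loss = criterion(model(clips), labels.float())
        loss.backward()
        optimizer.step()
torch.save(model.state_dict(), "morphology_labeler.pt")  # hypothetical filename
```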
S12, run the validation part of the data set through the trained model weights to obtain the automatic recognition results and their accuracy; the prediction results are shown in FIG. 4.
Throughout the identification process the lesion area never has to be outlined by hand; the designed network automatically computes, from the spatio-temporal features of consecutive video frames, the features corresponding to the dynamic change of morphology.
Although the invention has been described in detail with reference to the foregoing embodiments, a person skilled in the art may still modify the described technical solutions or substitute equivalents for some of their features, and such changes and modifications that do not depart from the spirit and scope of the invention all fall within its protection scope.

Claims (3)

1. A method for automatically labeling lesion morphology in breast contrast-enhanced ultrasound video, which uses a convolutional neural network architecture to automatically extract lesion morphological feature parameters from the contrast video, completes morphology recognition and classification, and labels the case data with lesion annotations, comprising the following steps:
S1, construct a breast contrast-enhanced ultrasound multi-label data set: taking the breast ultrasound video data provided by a hospital case by case, convert the contrast video of each case into consecutive images and store each case's sequence of contrast frames in its own folder;
S2, preprocess the data organized in step S1: denoise the images with an edge-enhanced anisotropic diffusion algorithm to remove the speckle noise of the ultrasound images in the data set while preserving their detail and edge features;
S3, for the data denoised in step S2, store each case's file path together with the lesion morphological feature text recorded by the corresponding physician as one sample in a json file; since every ultrasound case video contains several of the morphology types, the annotation must be organized as multiple labels for category marking;
S4, subtract 127.5 from each of the 3 channels of every pixel in the training data set of step S3 and divide by 128 to obtain normalized pixel values for the contrast video sequence;
S5, complete morphology recognition with an end-to-end classification model: the overall framework uses a residual network (ResNet50) and a 3-dimensional convolutional network (the C3D network) as the base training network, and the samples of step S4 are fed into the network to compute the network model weight parameters;
S6, input 16 consecutive contrast frames of size 224 × 224 into the network and use the C3D network to extract spatio-temporal sample features from the feature maps received in the previous step; the 3D network has 8 convolutional layers whose kernels are 3 × 3 with step size 1 × 1;
S7, transfer the weights of a residual network (ResNet50) trained for natural-image classification: this branch receives the feature map of the preceding network to extract spatial feature information and averages the results of all feature maps into the layer's overall spatial-feature residual block, while a 3 × 1 one-dimensional temporal convolution extracts time-series features from the same preceding output;
S8, in each of the 8 modules of the 3D network, add the features output by the module to the temporal- and spatial-feature residual blocks of S7 computed from the previous module; the calculation is given by formula (1), where Xt denotes the input of a network module unit, Xt+1 its output, S(Xt) the spatial-feature residual block, T(Xt) the temporal-feature residual block, and ST(Xt) the spatio-temporal features extracted by the 3D network;
Xt+1 = S(Xt) + T(Xt) + ST(Xt)    (1)
S9, append a fully connected layer at the end of the network to output 4096-dimensional description features, apply L2 regularization, judge each label with a sigmoid function, and finally output a prediction for each morphological feature of the lesion in each contrast-enhanced ultrasound video;
S10, compare the predictions with the ground truth recorded by the physician and evaluate each label with formula (2) to obtain the network's recognition accuracy, where TP, TN, FP and FN are the numbers of true positives, true negatives, false positives and false negatives respectively;
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (2)
S11, train under the constraint of BCELoss as the loss function, repeating S6-S10 until the loss converges, and save the model;
S12, run the validation part of the data set through the trained model weights to obtain the automatic recognition results and their accuracy.
2. The method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video according to claim 1, characterized in that: the lesion morphological feature text of step S3 comprises the following 7 types: enhancement intensity, enhancement phase, enhancement order, enhancement uniformity, post-enhancement shape regularity, and crab-claw shape after enhancement; each morphology type has a value-range label; the ground-truth values were obtained by 2 senior sonographers assessing the contrast-enhanced ultrasound features of the breast lesions; these features are important descriptions of lesion morphology, and observing how they change before and after treatment assists monitoring of the condition.
3. The method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video according to claim 1, characterized in that: the category marking of step S3 uses one-hot encoding, setting the field of a label attribute value present in each type to 1 and an absent value to 0; and the data set is divided 6 : 2 : 2 into 3 parts: a training set, a test set, and a validation set.
CN202010159426.7A 2020-03-09 2020-03-09 Method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video Active CN111462049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010159426.7A CN111462049B (en) Method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010159426.7A CN111462049B (en) Method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video

Publications (2)

Publication Number Publication Date
CN111462049A (en) 2020-07-28
CN111462049B (en) 2022-05-17

Family

ID=71684216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010159426.7A Active CN111462049B (en) Method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video

Country Status (1)

Country Link
CN (1) CN111462049B (en)



Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991445A (en) * 2017-04-05 2017-07-28 重庆大学 A kind of ultrasonic contrast tumour automatic identification and detection method based on deep learning
CN208274569U (en) * 2017-07-06 2018-12-25 上海长海医院 A kind of resistance anti-detection devices for detecting glioblastoma boundary in body
CN107644419A (en) * 2017-09-30 2018-01-30 百度在线网络技术(北京)有限公司 Method and apparatus for analyzing medical image
CN108596069A (en) * 2018-04-18 2018-09-28 南京邮电大学 Neonatal pain expression recognition method and system based on depth 3D residual error networks
CN108665456A (en) * 2018-05-15 2018-10-16 广州尚医网信息技术有限公司 The method and system that breast ultrasound focal area based on artificial intelligence marks in real time
CN110148113A (en) * 2019-04-02 2019-08-20 成都真实维度科技有限公司 A kind of lesion target area information labeling method based on tomoscan diagram data
CN110349141A (en) * 2019-07-04 2019-10-18 复旦大学附属肿瘤医院 A kind of breast lesion localization method and system
CN110378259A (en) * 2019-07-05 2019-10-25 桂林电子科技大学 A kind of multiple target Activity recognition method and system towards monitor video
CN110378281A (en) * 2019-07-17 2019-10-25 青岛科技大学 Group Activity recognition method based on pseudo- 3D convolutional neural networks
CN110458249A (en) * 2019-10-10 2019-11-15 点内(上海)生物科技有限公司 A kind of lesion categorizing system based on deep learning Yu probability image group
CN110781830A (en) * 2019-10-28 2020-02-11 西安电子科技大学 SAR sequence image classification method based on space-time joint convolution

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JUAN ZULUAGA-GOMEZ et al.: "A CNN-based methodology for breast cancer diagnosis using thermal images", 《arXiv》 *
肖焕辉 et al.: "Research progress in deep-learning-based computer-aided classification and diagnosis of cancer", 《国际医学放射学杂志》 (International Journal of Medical Radiology) *
赵可心 et al.: "A semantic-model-based method for labeling breast calcification lesions", 《生物医学工程学杂志》 (Journal of Biomedical Engineering) *
赵朵朵 et al.: "A survey of deep-learning-based video action recognition methods", 《电信科学》 (Telecommunications Science) *
郭明祥 et al.: "Human action recognition algorithm based on 3D residual dense networks", 《计算机应用》 (Journal of Computer Applications) *
陈俊周 et al.: "Face image inpainting based on cascaded generative adversarial networks", 《电子科技大学学报》 (Journal of University of Electronic Science and Technology of China) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899848A (en) * 2020-08-05 2020-11-06 中国联合网络通信集团有限公司 Image recognition method and device
CN111899848B (en) * 2020-08-05 2023-07-07 中国联合网络通信集团有限公司 Image recognition method and device
CN113781440A (en) * 2020-11-25 2021-12-10 北京医准智能科技有限公司 Ultrasonic video focus detection method and device
CN112488937A (en) * 2020-11-27 2021-03-12 河北工业大学 Medical image feature enhancement method for segmentation task
CN112488937B (en) * 2020-11-27 2022-07-01 河北工业大学 Medical image feature enhancement method for segmentation task
CN112419396A (en) * 2020-12-03 2021-02-26 前线智能科技(南京)有限公司 Thyroid ultrasonic video automatic analysis method and system
CN112419396B (en) * 2020-12-03 2024-04-26 前线智能科技(南京)有限公司 Automatic thyroid ultrasonic video analysis method and system
CN113239951A (en) * 2021-03-26 2021-08-10 无锡祥生医疗科技股份有限公司 Ultrasonic breast lesion classification method and device and storage medium
CN113239951B (en) * 2021-03-26 2024-01-30 无锡祥生医疗科技股份有限公司 Classification method, device and storage medium for ultrasonic breast lesions
CN113159195A (en) * 2021-04-26 2021-07-23 深圳市大数据研究院 Ultrasonic image classification method, system, electronic device and storage medium
CN113593707A (en) * 2021-09-29 2021-11-02 武汉楚精灵医疗科技有限公司 Stomach early cancer model training method and device, computer equipment and storage medium
CN113593707B (en) * 2021-09-29 2021-12-14 武汉楚精灵医疗科技有限公司 Stomach early cancer model training method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111462049B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN111462049B (en) Method for automatically labeling lesion-area morphology in breast contrast-enhanced ultrasound video
EP3770850B1 (en) Medical image identifying method, model training method, and computer device
CN107895367B (en) Bone age identification method and system and electronic equipment
WO2022063199A1 (en) Pulmonary nodule automatic detection method, apparatus and computer system
CN108133476B (en) Method and system for automatically detecting pulmonary nodules
CN112070119B (en) Ultrasonic section image quality control method, device and computer equipment
CN111862044B (en) Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium
Oghli et al. Automatic fetal biometry prediction using a novel deep convolutional network architecture
Sinha et al. Medical image processing
CN112132166B (en) Intelligent analysis method, system and device for digital cell pathology image
CN110414607A (en) Classification method, device, equipment and the medium of capsule endoscope image
CN108052909B (en) Thin fiber cap plaque automatic detection method and device based on cardiovascular OCT image
CN104182984B (en) Method and system for rapidly and automatically collecting blood vessel edge forms in dynamic ultrasonic image
CN112820399A (en) Method and device for automatically diagnosing benign and malignant thyroid nodules
CN114092450A (en) Real-time image segmentation method, system and device based on gastroscopy video
CN113298773A (en) Heart view identification and left ventricle detection device and system based on deep learning
Liu et al. Automated classification and measurement of fetal ultrasound images with attention feature pyramid network
Wang et al. Automatic measurement of fetal head circumference using a novel GCN-assisted deep convolutional network
CN110570425A (en) Lung nodule analysis method and device based on deep reinforcement learning algorithm
CN117523350A (en) Oral cavity image recognition method and system based on multi-mode characteristics and electronic equipment
US20230115927A1 (en) Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection
CN116664592A (en) Image-based arteriovenous blood vessel separation method and device, electronic equipment and medium
CN116631584A (en) General medical image report generation method and system, electronic equipment and readable storage medium
CN113643263B (en) Identification method and system for upper limb bone positioning and forearm bone fusion deformity
CN112862786B (en) CTA image data processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant