CN109993732A - Pectoral muscle region image processing method and apparatus for mammography images - Google Patents

Pectoral muscle region image processing method and apparatus for mammography images

Info

Publication number
CN109993732A
CN109993732A (application CN201910223713.7A)
Authority
CN
China
Prior art keywords
pectoral muscle region
mammography
image processing
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910223713.7A
Other languages
Chinese (zh)
Inventor
王润泽
张树
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Original Assignee
Hangzhou Shenrui Bolian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Shenrui Bolian Technology Co Ltd filed Critical Hangzhou Shenrui Bolian Technology Co Ltd
Priority to CN201910223713.7A priority Critical patent/CN109993732A/en
Publication of CN109993732A publication Critical patent/CN109993732A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30068 Mammography; Breast

Abstract

This application discloses a pectoral muscle region image processing method and apparatus for mammography images. The method includes: inputting a mammography image to be processed; obtaining a first feature map of the mammography image through a first preset convolutional neural network; concatenating the first feature map with a second feature map produced by global context information encoding and sampling; and performing feature fusion through a second preset convolutional network, the resulting original-size image serving as the prediction result for the pectoral muscle region. This application solves the technical problem of poor segmentation performance, and accurate segmentation of the pectoral muscle region can be achieved through it.

Description

Pectoral muscle region image processing method and apparatus for mammography images
Technical field
This application relates to the field of medical image processing, and in particular to a pectoral muscle region image processing method and apparatus for mammography images.
Background art
Mammography is the widely acknowledged method of choice for early detection of breast disease, and the most effective and reliable tool for early prevention and diagnosis of breast cancer.
The inventors found that, in computer-aided diagnosis systems based on mammography, the pectoral muscle region in the mediolateral oblique (MLO) view can unnecessarily affect breast density estimation and mass detection. The pectoral muscle region is also an important landmark in multi-view registration; if it cannot be segmented accurately, image processing results are affected.
No effective solution has yet been proposed for the problem of poor segmentation performance in the related art.
Summary of the invention
The main purpose of this application is to provide a pectoral muscle region image processing method and apparatus for mammography images, so as to solve the problem of poor segmentation performance.
To achieve the above goal, according to one aspect of this application, a pectoral muscle region image processing method for mammography images is provided.
The pectoral muscle region image processing method for mammography images according to this application includes: inputting a mammography image to be processed; obtaining a first feature map of the mammography image through a first preset convolutional neural network; concatenating the first feature map with a second feature map produced by global context information encoding and sampling; and performing feature fusion through a second preset convolutional network, the resulting original-size image serving as the prediction result for the pectoral muscle region.
Further, the global context information encoding includes: using global average pooling to encode global context information.
Further, the sampling includes: upsampling.
Further, the first preset convolutional neural network is used to obtain the feature map output by the last convolutional layer of the preset convolutional neural network.
Further, the second preset convolutional network is used to fuse features by convolution and to obtain, by upsampling, a feature map of the original image size.
To achieve the above goal, according to another aspect of this application, a pectoral muscle region image processing apparatus for mammography images is provided.
The pectoral muscle region image processing apparatus for mammography images according to this application includes: an input module for inputting a mammography image to be processed; a feature extraction module for obtaining a first feature map of the mammography image through a first preset convolutional neural network; a concatenation module for concatenating the first feature map with a second feature map produced by global context information encoding and sampling; and a feature fusion module for performing feature fusion through a second preset convolutional network, the resulting original-size image serving as the prediction result for the pectoral muscle region.
Further, the concatenation module is also used to encode global context information using global average pooling.
Further, the concatenation module is also used to perform upsampling.
Further, the feature extraction module is also used to obtain the feature map output by the last convolutional layer of the preset convolutional neural network.
Further, the feature fusion module is also used to fuse features by convolution and to obtain, by upsampling, a feature map of the original image size.
In the embodiments of this application, a mammography image to be processed is input, its first feature map is obtained through a first preset convolutional neural network and concatenated with a second feature map produced by global context information encoding and sampling, and feature fusion is performed through a second preset convolutional network, the resulting original-size image serving as the prediction result for the pectoral muscle region. This solves the technical problem of poor segmentation performance.
Brief description of the drawings
The accompanying drawings, which form a part of this application, are provided to aid understanding of the application, so that its other features, objects, and advantages become more apparent. The illustrative drawings of this application and their descriptions explain the application and do not unduly limit it. In the drawings:
Fig. 1 is a schematic diagram of the pectoral muscle region image processing method for mammography images according to an embodiment of this application;
Fig. 2 is a schematic diagram of the pectoral muscle region image processing apparatus for mammography images according to an embodiment of this application;
Fig. 3 is a schematic diagram of the global context network structure in an embodiment of this application.
Detailed description of the embodiments
To help those skilled in the art better understand the solution of this application, the technical solutions in its embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only a part of the embodiments of this application, not all of them. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on the embodiments in this application, fall within the scope of protection of this application.
It should be noted that the terms "first", "second", and the like in the description, claims, and drawings of this application are used to distinguish similar objects, not to describe a particular order or precedence. Data so labeled may be interchanged where appropriate, so that the embodiments described herein can be practiced. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units explicitly listed, and may include other steps or units not explicitly listed or inherent to such process, method, product, or device.
It should be noted that, in the absence of conflict, the embodiments in this application and the features therein may be combined with one another. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.
As shown in Fig. 1, the method includes the following steps S102 to S108:
Step S102: input the mammography image to be processed;
Mammography images include the mediolateral oblique (MLO) view and the craniocaudal (CC) view; pectoral muscle region segmentation is performed on the MLO view.
Step S104: obtain the first feature map of the mammography image to be processed through the first preset convolutional neural network;
The first preset convolutional neural network is the CNN appearing in the network architecture: after the mammography image to be processed is input into it, the network outputs a feature map.
Step S106: concatenate the first feature map with the second feature map produced by global context information encoding and sampling; and
Existing pectoral muscle segmentation methods do not take the contextual information of the image into account, so they perform poorly on samples with low contrast between the pectoral muscle region and the breast region.
The second feature map is obtained by applying global context information encoding and sampling to the first feature map; the first and second feature maps are then concatenated.
Specifically, global context information encoding is added to the original first feature map through a pooling layer; the new feature map obtained after a preset sampling step serves as the second feature map, which is then concatenated with the original first feature map.
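The pool-encode-concatenate step just described can be sketched in a few lines of NumPy. This is a minimal illustration of the idea only, with array shapes chosen for the example; the actual implementation operates on learned convolutional features:

```python
import numpy as np

def concat_with_global_context(feat):
    """Concatenate a feature map with its globally pooled context.

    feat: (C, H, W) feature map from the backbone CNN.
    Returns a (2C, H, W) map: the original channels plus the
    global-average-pooled context broadcast back to H x W.
    """
    c, h, w = feat.shape
    # Global average pooling: one scalar of context per channel.
    context = feat.mean(axis=(1, 2), keepdims=True)      # (C, 1, 1)
    # "Upsample" the 1x1 context back to the spatial size of feat.
    context_map = np.broadcast_to(context, (c, h, w))
    # Channel-wise concatenation of first and second feature maps.
    return np.concatenate([feat, context_map], axis=0)   # (2C, H, W)

feat = np.random.rand(64, 32, 24)
fused_input = concat_with_global_context(feat)
print(fused_input.shape)  # (128, 32, 24)
```

Each spatial position of the concatenated map thus carries both its local features and a whole-image summary, which is what lets the subsequent fusion convolution exploit global context.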
It should be noted that the global context information includes the location of the pectoral muscle within the whole image, its surrounding environment, and so on.
It should be noted that the global context information encoding can be learned by a supervised convolutional neural network; those skilled in the art may encode it in other ways, as long as the encoding requirement is met.
Step S108: perform feature fusion through the second preset convolutional network, the resulting original-size image serving as the prediction result for the pectoral muscle region.
After feature fusion through the second preset convolutional network, an image of the original size is obtained as the prediction result for the pectoral muscle region; that is, a black-and-white binary image of the same size as the input mammography image is produced as the prediction, in which the pectoral muscle region is white and the non-pectoral-muscle region is black.
To address the problems of existing semantic segmentation networks on mammography pectoral muscle segmentation, a network model that takes global context information into account is used, named the global context network (Global Context Net, GCNet). Specifically, the backbone of GCNet is a ResNet50 pre-trained on the ImageNet database; the deep learning framework is PyTorch and the hardware platform is an Nvidia Titan XP. GCNet is trained with stochastic gradient descent, with a batch size of 16, an initial learning rate of 0.005, momentum of 0.9, and a weight decay of 0.0001. Training runs for 30 epochs, and the model with the best validation result is saved for testing. To alleviate overfitting, the training set is augmented by random rotation, random flipping, and random scaling.
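The optimizer settings above correspond to the classic momentum-SGD update with L2 weight decay; in PyTorch this is `torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=1e-4)`. A one-step NumPy sketch of the update rule, to illustrate the role of each hyperparameter (not the actual training code):

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.005, momentum=0.9, weight_decay=1e-4):
    """One SGD update with momentum and L2 weight decay.

    Common framework formulation:
        v <- momentum * v + (grad + weight_decay * w)
        w <- w - lr * v
    """
    g = grad + weight_decay * w          # weight decay folds into the gradient
    velocity = momentum * velocity + g   # momentum accumulates past gradients
    w = w - lr * velocity                # scaled step in the descent direction
    return w, velocity

w = np.array([1.0, -2.0])
v = np.zeros_like(w)
w, v = sgd_momentum_step(w, grad=np.array([0.5, 0.5]), velocity=v)
print(w)  # each weight steps along its (decayed) gradient direction
```

With momentum 0.9, each parameter's step is roughly a decaying average over the last ten or so gradients, which smooths the noisy per-batch updates at batch size 16.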
Preferably, the global context information encoding includes: using global average pooling to encode global context information.
Preferably, the sampling includes: upsampling.
Preferably, the first preset convolutional neural network is used to obtain the feature map output by the last convolutional layer of the preset convolutional neural network.
Preferably, the second preset convolutional network is used to fuse features by convolution and to obtain, by upsampling, a feature map of the original image size.
It can be seen from the above description that this application achieves the following technical effects:
In the embodiments of this application, a mammography image to be processed is input, its first feature map is obtained through a first preset convolutional neural network and concatenated with a second feature map produced by global context information encoding and sampling, and feature fusion is performed through a second preset convolutional network, the resulting original-size image serving as the prediction result for the pectoral muscle region. This solves the technical problem of poor segmentation performance.
It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that given here.
According to an embodiment of this application, an apparatus for implementing the above pectoral muscle region image processing method for mammography images is also provided. As shown in Fig. 2, the apparatus includes: an input module 10 for inputting the mammography image to be processed; a feature extraction module 20 for obtaining the first feature map of the mammography image through the first preset convolutional neural network; a concatenation module 30 for concatenating the first feature map with the second feature map produced by global context information encoding and sampling; and a feature fusion module 40 for performing feature fusion through the second preset convolutional network, the resulting original-size image serving as the prediction result for the pectoral muscle region.
The mammography images handled by the input module 10 of this embodiment include the mediolateral oblique (MLO) view and the craniocaudal (CC) view; pectoral muscle region segmentation is performed on the MLO view.
The first preset convolutional neural network in the feature extraction module 20 of this embodiment is the CNN appearing in the network architecture: after the mammography image to be processed is input into it, the network outputs a feature map.
In the concatenation module 30 of this embodiment, because existing methods do not take the contextual information of the image into account when segmenting the pectoral muscle, they perform poorly on samples with low contrast between the pectoral muscle region and the breast region.
The second feature map is obtained by applying global context information encoding and sampling to the first feature map; the first and second feature maps are then concatenated.
Specifically, global context information encoding is added to the original first feature map through a pooling layer; the new feature map obtained after a preset sampling step serves as the second feature map, which is then concatenated with the original first feature map.
It should be noted that the global context information includes the location of the pectoral muscle within the whole image, its surrounding environment, and so on.
It should be noted that the global context information encoding can be learned by a supervised convolutional neural network; those skilled in the art may encode it in other ways, as long as the encoding requirement is met.
In the feature fusion module 40 of this embodiment, after feature fusion through the second preset convolutional network, an image of the original size is obtained as the prediction result for the pectoral muscle region; that is, a black-and-white binary image of the same size as the input mammography image is produced as the prediction, in which the pectoral muscle region is white and the non-pectoral-muscle region is black.
To address the problems of existing semantic segmentation networks on mammography pectoral muscle segmentation, a network model that takes global context information into account is used, named the global context network (Global Context Net, GCNet). Specifically, the backbone of GCNet is a ResNet50 pre-trained on the ImageNet database; the deep learning framework is PyTorch and the hardware platform is an Nvidia Titan XP. GCNet is trained with stochastic gradient descent, with a batch size of 16, an initial learning rate of 0.005, momentum of 0.9, and a weight decay of 0.0001. Training runs for 30 epochs, and the model with the best validation result is saved for testing. To alleviate overfitting, the training set is augmented by random rotation, random flipping, and random scaling.
Preferably, the concatenation module is also used to encode global context information using global average pooling.
Preferably, the concatenation module is also used to perform upsampling.
Preferably, the feature extraction module is also used to obtain the feature map output by the last convolutional layer of the preset convolutional neural network.
Preferably, the feature fusion module is also used to fuse features by convolution and to obtain, by upsampling, a feature map of the original image size.
Preferably, the first preset convolutional neural network or the second preset convolutional network may use any one or more of the semantic segmentation sub-networks FCN, SegNet, or U-Net.
The realization principle of this application is as follows:
Existing deep learning semantic segmentation models such as FCN, SegNet, and U-Net have been used for fully automatic pectoral muscle segmentation of mammography images. However, these models classify image pixels directly; because they do not take the contextual information of the image into account, they perform poorly on samples with low contrast between the pectoral muscle region and the breast region. Where the pectoral muscle region and part of the breast region show no notable difference in intensity, U-Net lumps them together and cannot segment them effectively.
In view of the above problems, the implementation in the embodiments of this application is as follows:
First, the mammography image to be processed is input; the first feature map of the image is then obtained through the first preset convolutional neural network; the first feature map is next concatenated with the second feature map produced by global context information encoding and sampling; and finally feature fusion is performed through the second preset convolutional network, the resulting original-size image serving as the prediction result for the pectoral muscle region.
To address the problems of existing semantic segmentation networks on mammography pectoral muscle segmentation, a new network that takes global context information into account is designed, named the global context network (Global Context Net, GCNet); its structure is shown in Fig. 3. Given an input image (a), a convolutional neural network (CNN) first produces the output feature map (b) of its last convolutional layer; the simple and efficient global average pooling designed here then encodes the global context information, which is upsampled and concatenated with feature map (b); finally, a convolution fuses the features and upsampling to the original image size yields the prediction image.
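The flow just described can be sketched end to end as a toy NumPy model. The shapes, the patch-averaging "backbone", and the 1x1 "convolution" below are stand-ins assumed for illustration; the real GCNet uses ResNet50 features and learned convolutions:

```python
import numpy as np

def upsample_nearest(feat, factor):
    """Nearest-neighbour upsampling of a (C, H, W) map by an integer factor."""
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def gcnet_forward_sketch(image, backbone, fuse_conv, stride=16):
    """Toy GCNet forward pass.

    image: (H, W) input; backbone / fuse_conv: stand-in callables for the
    ResNet50 feature extractor and the fusing convolution.
    Returns an (H, W) binary mask (1 = pectoral muscle, rendered white).
    """
    feat = backbone(image)                                  # (C, H/s, W/s)
    c, h, w = feat.shape
    context = feat.mean(axis=(1, 2), keepdims=True)         # global avg pooling
    context = np.broadcast_to(context, (c, h, w))           # upsample 1x1 -> h x w
    fused = fuse_conv(np.concatenate([feat, context], 0))   # (2, H/s, W/s) scores
    scores = upsample_nearest(fused, stride)                # back to original size
    return scores.argmax(axis=0).astype(np.uint8)           # per-pixel class

# Stand-ins: a "backbone" that averages 16x16 patches, a 1x1 "conv" as a matmul.
backbone = lambda im: im.reshape(1, im.shape[0] // 16, 16,
                                 im.shape[1] // 16, 16).mean(axis=(2, 4))
weights = np.random.randn(2, 2)
fuse_conv = lambda f: np.einsum('oc,chw->ohw', weights, f)

mask = gcnet_forward_sketch(np.random.rand(64, 48), backbone, fuse_conv)
print(mask.shape)  # (64, 48)
```

The design choice mirrored here is that the context branch costs almost nothing (one mean per channel) yet gives every pixel access to whole-image statistics before the final classification.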
Specifically, the backbone of GCNet is a ResNet50 pre-trained on the ImageNet database; the deep learning framework is PyTorch and the hardware platform is an Nvidia Titan XP. The network is trained with stochastic gradient descent, with a batch size of 16, an initial learning rate of 0.005, momentum of 0.9, and a weight decay of 0.0001. Training runs for 30 epochs, and the model with the best validation result is saved for testing. To alleviate overfitting, the training set is augmented by random rotation, random flipping, and random scaling.
Experimental analysis
The experimental data in this application comprise 1009 MLO images, split into training, validation, and test sets at a ratio of 8:1:1, with all images of the same patient kept in the same set.
To assess and compare model performance, this application uses the Dice index, Dice Index = 2*TP / (2*TP + FP + FN), as the evaluation metric for pectoral muscle segmentation, where TP is the number of correctly classified pectoral muscle pixels, FP is the number of non-pectoral-muscle pixels misclassified as pectoral muscle pixels, and FN is the number of pectoral muscle pixels misclassified as non-pectoral-muscle pixels. The experimental results are shown in Table 1.
Table 1. U-Net and GCNet mammography pectoral muscle segmentation test results

Model                      Dice Index (%)
U-Net                      70.27
GCNet (no pre-training)    90.22
GCNet (pre-trained)        95.93
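The Dice index reported above follows directly from the TP/FP/FN definitions given with the formula; a straightforward sketch, assuming the binary-mask convention used here (1 = pectoral muscle, 0 = background):

```python
import numpy as np

def dice_index(pred, gt):
    """Dice index (in %) between two binary masks: 2*TP / (2*TP + FP + FN)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # pectoral pixels correctly found
    fp = np.logical_and(pred, ~gt).sum()   # background predicted as pectoral
    fn = np.logical_and(~pred, gt).sum()   # pectoral pixels missed
    return 100.0 * 2 * tp / (2 * tp + fp + fn)

gt   = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])
pred = np.array([[1, 1, 1, 0], [1, 0, 0, 0]])
print(dice_index(pred, gt))  # TP=3, FP=1, FN=1 -> 100 * 6/8 = 75.0
```

Note that, unlike plain pixel accuracy, the Dice index ignores true-negative background pixels, so it is not inflated by the large non-muscle area of a mammogram.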
Comparing U-Net and the GCNet pre-trained on the ImageNet database (with both kept at comparable complexity), it can be seen that the GCNet proposed in this application, which takes global context information into account, greatly improves pectoral muscle segmentation; the Dice index of the pre-trained GCNet reaches 95.93%. Clearly, for images with obvious contrast between the pectoral muscle region and the breast region, the model of this application segments the pectoral muscle well, accurately predicting the pectoral-breast boundary even for strongly curved pectoral muscles; and for images with less obvious contrast between the two regions, the model can still predict the pectoral muscle region accurately by drawing on global context information. This demonstrates the importance of global context information for pectoral muscle segmentation and shows that the proposed GCNet is up to the task of automatic pectoral muscle segmentation in mammography images.
Obviously, those skilled in the art should understand that the modules or steps of this application described above can be implemented with general-purpose computing devices; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by computing devices, so that they can be stored in a storage device and executed by a computing device; or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, this application is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of this application and are not intended to limit it; for those skilled in the art, various modifications and changes to this application are possible. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included within its scope of protection.

Claims (10)

1. A pectoral muscle region image processing method for mammography images, characterized by comprising:
inputting a mammography image to be processed;
obtaining a first feature map of the mammography image to be processed through a first preset convolutional neural network;
concatenating the first feature map with a second feature map produced by global context information encoding and sampling; and
performing feature fusion through a second preset convolutional network, the resulting original-size image serving as the prediction result for the pectoral muscle region.
2. The pectoral muscle region image processing method according to claim 1, characterized in that the global context information encoding comprises: using global average pooling to encode global context information.
3. The pectoral muscle region image processing method according to claim 1, characterized in that the sampling comprises: upsampling.
4. The pectoral muscle region image processing method according to claim 1, characterized in that the first preset convolutional neural network is used to obtain the feature map output by the last convolutional layer of the preset convolutional neural network.
5. The pectoral muscle region image processing method according to claim 1, characterized in that the second preset convolutional network is used to fuse features by convolution and to obtain, by upsampling, a feature map of the original image size.
6. A pectoral muscle region image processing apparatus for mammography images, characterized by comprising:
an input module for inputting a mammography image to be processed;
a feature extraction module for obtaining a first feature map of the mammography image to be processed through a first preset convolutional neural network;
a concatenation module for concatenating the first feature map with a second feature map produced by global context information encoding and sampling; and
a feature fusion module for performing feature fusion through a second preset convolutional network, the resulting original-size image serving as the prediction result for the pectoral muscle region.
7. The pectoral muscle region image processing apparatus according to claim 6, characterized in that the concatenation module is also used to encode global context information using global average pooling.
8. The pectoral muscle region image processing apparatus according to claim 6, characterized in that the concatenation module is also used to perform upsampling.
9. The pectoral muscle region image processing apparatus according to claim 6, characterized in that the feature extraction module is also used to obtain the feature map output by the last convolutional layer of the preset convolutional neural network.
10. The pectoral muscle region image processing apparatus according to claim 6, characterized in that the feature fusion module is also used to fuse features by convolution and to obtain, by upsampling, a feature map of the original image size.
CN201910223713.7A 2019-03-22 2019-03-22 Pectoral muscle region image processing method and apparatus for mammography images Pending CN109993732A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910223713.7A CN109993732A (en) 2019-03-22 2019-03-22 Pectoral muscle region image processing method and apparatus for mammography images


Publications (1)

Publication Number Publication Date
CN109993732A true CN109993732A (en) 2019-07-09

Family

ID=67130831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910223713.7A Pending CN109993732A (en) Pectoral muscle region image processing method and apparatus for mammography images

Country Status (1)

Country Link
CN (1) CN109993732A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085729A (en) * 2020-09-18 2020-12-15 无锡祥生医疗科技股份有限公司 Pleural line region extraction method, storage medium, and ultrasound diagnostic apparatus
CN112185550A (en) * 2020-09-29 2021-01-05 强联智创(北京)科技有限公司 Typing method, device and equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060147101A1 (en) * 2005-01-04 2006-07-06 Zhang Daoxian H Computer aided detection of microcalcification clusters
CN105447879A (en) * 2015-12-15 2016-03-30 上海联影医疗科技有限公司 Method and apparatus for detecting breast muscle in breast image
CN108052977A (en) * 2017-12-15 2018-05-18 福建师范大学 Deep learning classification method for breast molybdenum target images based on a lightweight neural network
CN108229580A (en) * 2018-01-26 2018-06-29 浙江大学 Diabetic retinopathy feature grading apparatus for fundus images based on an attention mechanism and feature fusion
CN108230323A (en) * 2018-01-30 2018-06-29 浙江大学 Pulmonary nodule false-positive screening method based on convolutional neural networks
CN108564561A (en) * 2017-12-29 2018-09-21 广州柏视医疗科技有限公司 Automatic detection method for the pectoralis major region in molybdenum target images
CN109101972A (en) * 2018-07-26 2018-12-28 天津大学 Semantic segmentation convolutional neural network with contextual information encoding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GONGJIE ZHANG et al.: "CAD-Net: A Context-Aware Detection Network for Objects in Remote Sensing Imagery", 《ARXIV:1903.00857V1 [CS.CV]》 *
HENGSHUANG ZHAO et al.: "Pyramid Scene Parsing Network", 《ARXIV:1612.01105V2 [CS.CV]》 *


Similar Documents

Publication Publication Date Title
EP3553742A1 (en) Method and device for identifying pathological picture
CN110136809A (en) A kind of medical image processing method, device, electromedical equipment and storage medium
CN108492271A (en) A kind of automated graphics enhancing system and method for fusion multi-scale information
CN105640577A (en) Method and system automatically detecting local lesion in radiographic image
Polat et al. COVID-19 diagnosis from chest X-ray images using transfer learning: Enhanced performance by debiasing dataloader
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
CN109460717B (en) Digestive tract confocal laser microscopy endoscope lesion image identification method and device
CN110246109A (en) Merge analysis system, method, apparatus and the medium of CT images and customized information
CN109993732A (en) The pectoral region image processing method and device of mammography X
WO2020234349A1 (en) Sampling latent variables to generate multiple segmentations of an image
Souid et al. Xception-ResNet autoencoder for pneumothorax segmentation
CN113129310B (en) Medical image segmentation system based on attention routing
Magpantay et al. A transfer learning-based deep CNN approach for classification and diagnosis of acute lymphocytic leukemia cells
CN115100180A (en) Pneumonia feature identification method and device based on neural network model and electronic equipment
CN111415331B (en) Abnormal detection method and system based on category relation in positive chest radiography
Desai et al. Comparative analysis using transfer learning models vgg16, resnet 50 and xception to predict pneumonia
CN112950552A (en) Rib segmentation marking method and system based on convolutional neural network
CN113393445B (en) Breast cancer image determination method and system
Sowmya et al. Vision transformer based ResNet model for pneumonia prediction
Long et al. A Deep Learning Method for Brain Tumor Classification Based on Image Gradient
KR102566095B1 (en) Deep learning apparatus and method for joint classification and segmentation of histopathology image
CN109711467A (en) Data processing equipment and method, computer system
CN114445421B (en) Identification and segmentation method, device and system for nasopharyngeal carcinoma lymph node region
CN112766333B (en) Medical image processing model training method, medical image processing method and device
CN117476219B (en) Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190722

Address after: Room 705, 8 Building 1818-2 Wenyi West Road, Yuhang District, Hangzhou City, Zhejiang Province, 311100

Applicant after: SHENZHEN DEEPWISE BOLIAN TECHNOLOGY Co.,Ltd.

Applicant after: BEIJING SHENRUI BOLIAN TECHNOLOGY Co.,Ltd.

Address before: 310026 Room 705, 8 Building 1818-2 Wenyi West Road, Yuhang District, Hangzhou City, Zhejiang Province

Applicant before: SHENZHEN DEEPWISE BOLIAN TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20190709
