CN109214388A - Tumor segmentation method and device based on a personalized fusion network - Google Patents

Tumor segmentation method and device based on a personalized fusion network

Info

Publication number
CN109214388A
Authority
CN
China
Prior art keywords
ultrasound image
tumour ultrasound
tumour
network
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811389439.2A
Other languages
Chinese (zh)
Other versions
CN109214388B (en)
Inventor
袭肖明 (Xi Xiaoming)
于治楼 (Yu Zhilou)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Group Co Ltd
Original Assignee
Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority to CN201811389439.2A
Publication of CN109214388A
Application granted
Publication of CN109214388B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B8/0833 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures
    • A61B8/085 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30068 Mammography; Breast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Vascular Medicine (AREA)
  • Geometry (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Multimedia (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a tumor segmentation method based on a personalized fusion network, relating to the technical field of medical image processing. In this method, tumor ultrasound images are first classified through expert observation; according to the classification result, the images receive preliminary processing and are then fed as training samples into a grayscale recognition network or a multi-size fully convolutional network for training, and the optimal segmentation result for tumor ultrasound images is obtained after repeated training. Finally, segmentation is completed by feeding the tumor ultrasound image into the grayscale recognition network or the multi-size fully convolutional network. The segmentation method adapts to tumor ultrasound images of different classes and has the advantages of high segmentation precision and high segmentation efficiency. The invention also discloses a tumor segmentation device based on a personalized fusion network which, combined with the above segmentation method, completes the segmentation of tumor ultrasound images well.

Description

Tumor segmentation method and device based on a personalized fusion network
Technical field
The present invention relates to the technical field of medical image processing, and in particular to a tumor segmentation method and device based on a personalized fusion network.
Background technique
Breast cancer has become the number one killer of women and is one of the diseases with the highest incidence and mortality; the number of cases is rising markedly at an average annual rate of 3%-5%, and the trend is increasingly serious. Studies have shown that cancer detected promptly at an early stage can be cured, with a cure rate of 92% or more. The early detection of breast tumors therefore plays a vital role in curing the disease, and early discovery with early treatment is the key to improving therapeutic efficacy.
Medical imaging has become the main means of clinically assisted disease diagnosis. Compared with molybdenum-target mammography, magnetic resonance imaging and other modalities, ultrasound has the advantages of low radiation, low cost, and sensitivity to dense tissue, and ultrasound imaging has therefore become one of the main tools assisting the early diagnosis of breast cancer. Because the experience of imaging physicians varies, manual diagnosis of breast ultrasound images carries a degree of subjectivity, whereas computer-aided diagnosis technology can analyze breast ultrasound images automatically and so provide clinicians with an objective diagnostic result.
Tumor segmentation is the basis of breast ultrasound analysis. Although traditional methods can achieve a certain segmentation effect, it is very difficult for them to reach satisfactory results in both segmentation precision and efficiency. How to effectively solve the precision and efficiency problems of breast ultrasound image segmentation at the same time and design an accurate tumor segmentation algorithm is therefore of important research significance and application value.
Summary of the invention
Aiming at the demands and shortcomings of current technological development, the present invention provides a tumor segmentation method and device based on a personalized fusion network.
The technical scheme adopted by the tumor segmentation method based on a personalized fusion network of the present invention to solve the above technical problem is as follows:
A tumor segmentation method based on a personalized fusion network, the method comprising the following steps:
1) Training part:
1a) Divide the acquired tumor ultrasound images into two classes: tumor ultrasound images with a uniform gray-value distribution and tumor ultrasound images with a severely non-uniform gray-value distribution;
1b) Binarize the tumor ultrasound images with a uniform gray-value distribution and use them as training samples, taking the pixels of the expert segmentation result maps as the ground-truth labels of the training samples; then train the grayscale recognition network, whose construction is completed after repeated training;
1c) Divide the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes, feed the divided tumor ultrasound images into a fully convolutional network for training, and complete the construction of the multi-size fully convolutional network after repeated training;
2) Segmentation part:
2a) Binarize a tumor ultrasound image with a uniform gray-value distribution and segment it with the constructed grayscale recognition network;
2b) Segment a tumor ultrasound image with a severely non-uniform gray-value distribution with the constructed multi-size fully convolutional network; a minimal routing sketch of this segmentation part follows.
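To make the two-branch routing of the segmentation part concrete, here is a minimal Python sketch. All function names are hypothetical, and the standard-deviation test is only an assumed stand-in for the expert's visual classification, which the patent assigns to a human:

```python
import numpy as np

def is_uniform(gray: np.ndarray, std_threshold: float = 30.0) -> bool:
    """Crude stand-in for the expert's classification: call the image
    'uniform gray-value distribution' when its gray levels vary little."""
    return float(gray.std()) < std_threshold

def segment(gray: np.ndarray, gray_net, multi_size_fcn) -> np.ndarray:
    """Route the image to one of the two trained branches (steps 2a/2b)."""
    if is_uniform(gray):
        # step 2a: binarize, then apply the grayscale recognition network
        binary = (gray > gray.mean()).astype(np.float32)
        return gray_net(binary)
    # step 2b: apply the multi-size fully convolutional network
    return multi_size_fcn(gray)
```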
Further, the training of the grayscale recognition network uses ResNet as the base network and introduces the ground-truth labels calibrated by the expert, thereby constructing the grayscale recognition network.
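The patent names ResNet as the base network but gives no layer-level detail. The PyTorch sketch below is one plausible reading, assuming a resnet18 backbone with a 1x1-convolution and bilinear-upsampling head so the output matches the expert's per-pixel ground-truth labels; the training step at the end uses random tensors purely as placeholders:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class GrayRecognitionNet(nn.Module):
    """ResNet-based grayscale recognition network: resnet18 backbone,
    single-channel input, and a 1x1-conv + upsampling head producing
    a per-pixel mask matching the expert labels."""

    def __init__(self):
        super().__init__()
        base = resnet18(weights=None)
        # accept 1-channel (binarized grayscale) input instead of RGB
        base.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.backbone = nn.Sequential(*list(base.children())[:-2])  # drop avgpool and fc
        self.head = nn.Conv2d(512, 1, kernel_size=1)  # tumor vs. background logits

    def forward(self, x):
        h, w = x.shape[-2:]
        logits = self.head(self.backbone(x))
        return nn.functional.interpolate(logits, size=(h, w),
                                         mode="bilinear", align_corners=False)

# one training step against expert ground-truth masks (random placeholders)
net = GrayRecognitionNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
images = torch.rand(2, 1, 256, 256)                  # binarized training samples
labels = (torch.rand(2, 1, 256, 256) > 0.5).float() # expert pixel labels
loss = nn.functional.binary_cross_entropy_with_logits(net(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```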
Optionally, Matlab is used to perform gray processing on the tumor ultrasound images; the expert visually inspects the gray-processed tumor ultrasound images and classifies them according to experience.
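The patent specifies Matlab for the gray processing. Purely as an illustration, an equivalent preprocessing step in Python with OpenCV might look as follows; Otsu's method is an assumption, since the patent names no binarization rule:

```python
import cv2

def gray_and_binarize(path: str):
    """Gray processing followed by the binarization applied to the
    'uniform gray-value distribution' class (step 1b)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return gray, binary
```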
Further, dividing the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes comprises:
dividing the tumor ultrasound image into at least ten tumor ultrasound image units of identical or different areas, the area of each tumor ultrasound image unit being S = n*n with n any natural number, and the sum of the areas of the at least ten units being equal to the area of the tumor ultrasound image, as sketched below.
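A minimal sketch of one such division, assuming a regular grid of square units with side n (the claim also allows units of differing areas, as long as they tile the whole image):

```python
import numpy as np

def divide_into_units(img: np.ndarray, n: int):
    """Divide the image into square units of area S = n*n.
    The image sides are assumed to be multiples of n for simplicity."""
    h, w = img.shape[:2]
    units = [img[r:r + n, c:c + n]
             for r in range(0, h, n) for c in range(0, w, n)]
    assert len(units) >= 10                        # at least ten units
    assert sum(u.size for u in units) == img.size  # areas sum to the image area
    return units

# e.g. a 200x200 image with n = 50 yields 16 units of area 2500 each
tiles = divide_into_units(np.zeros((200, 200), dtype=np.uint8), n=50)
```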
Further, dividing the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes further comprises:
dividing the same tumor ultrasound image at least twice, with the number of tumor ultrasound image units and/or the unit area differing between divisions; when the same tumor ultrasound image is divided the next time, the number of units and/or the unit area refer to the segmentation result output by the multi-size fully convolutional network module after the previous division, as the sketch after this paragraph illustrates.
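The patent states only that each new division refers to the previous segmentation result. In the sketch below the concrete refinement rule (move to a finer unit size when the predicted tumor region is small) is therefore an assumption, and fcn stands for any callable that segments a list of units:

```python
import numpy as np

def grid(img: np.ndarray, n: int):
    """One layering pass: a regular division into n-by-n units."""
    h, w = img.shape[:2]
    return [img[r:r + n, c:c + n] for r in range(0, h, n) for c in range(0, w, n)]

def multi_pass_segment(img: np.ndarray, fcn, unit_sizes=(50, 40, 25)):
    """Divide the same image at least twice at different sizes; the FCN
    result of each pass feeds the choice of the next division (feedback)."""
    n = unit_sizes[0]
    result = fcn(grid(img, n))
    for n_next in unit_sizes[1:]:
        tumor_fraction = float((np.asarray(result) > 0).mean())
        if tumor_fraction < 0.2:  # small predicted region: divide more finely
            n = n_next
        result = fcn(grid(img, n))
    return result
```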
Based on the above method, the present invention also protects a tumor segmentation device based on a personalized fusion network, the device comprising:
a gray processing module, configured to perform gray processing on the tumor ultrasound images so that the expert can visually inspect the gray-processed images, which are then classified into tumor ultrasound images with a uniform gray-value distribution and tumor ultrasound images with a severely non-uniform gray-value distribution;
a labeling module, configured to label the pixels of the expert segmentation result maps as the ground-truth labels of the training samples;
a binarization module, configured to binarize the tumor ultrasound images with a uniform gray-value distribution;
a first training and construction module, configured to use the binarized tumor ultrasound images as training samples, train with reference to the ground-truth labels provided by the expert, and construct the grayscale recognition network module through repeated training;
a second training and construction module, configured to divide the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes, feed the divided images into a fully convolutional network for training, and construct the multi-size fully convolutional network module through repeated training;
a grayscale recognition network module, configured to segment the tumor ultrasound images with a uniform gray-value distribution;
a multi-size fully convolutional network module, configured to perform multi-size division and segmentation of the tumor ultrasound images with a severely non-uniform gray-value distribution.
Optionally, the grayscale recognition network module uses ResNet as the base network, and the labeling module feeds the ground-truth labels calibrated by the expert into the grayscale recognition network module so that the grayscale recognition network is constructed within the range of the ground-truth labels.
Optionally, the gray processing module uses Matlab.
Optionally, the multi-size fully convolutional network module includes a fully convolutional network unit and at least two layering units; each layering unit divides the tumor ultrasound image, and at each division a layering unit divides the same tumor ultrasound image into at least ten tumor ultrasound image units of identical or different areas, the area of each unit being S = n*n with n any natural number and the sum of the areas of the units being equal to the area of the tumor ultrasound image.
Optionally, the multi-size fully convolutional network module further includes a feedback unit, configured to feed the segmentation result of the fully convolutional network unit into the next layering unit to be executed, so that when the same tumor ultrasound image is divided the next time, the number of tumor ultrasound image units and/or the unit area can be based on the segmentation result output by the fully convolutional network unit after the previous division; the sketch below wires these modules together.
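Read as software, the claimed device is a composition of these modules. The following sketch is only one possible wiring; the class and parameter names are hypothetical and keyed to the reference labels of Figure 2:

```python
class TumorSegmentationDevice:
    """Minimal wiring of the claimed modules: gray processing (10),
    binarization (30), grayscale recognition network (40), layering
    units (51), FCN unit (52) and the feedback path (53)."""

    def __init__(self, gray_module, binarizer, gray_net, layering_units, fcn_unit):
        self.gray_module = gray_module        # 10: gray processing module
        self.binarizer = binarizer            # 30: binarization module
        self.gray_net = gray_net              # 40: grayscale recognition network module
        self.layering_units = layering_units  # 51: e.g. divisions into 50/80/100 units
        self.fcn_unit = fcn_unit              # 52: fully convolutional network unit

    def segment(self, image, is_uniform: bool):
        """is_uniform carries the expert's classification of the image."""
        gray = self.gray_module(image)
        if is_uniform:
            return self.gray_net(self.binarizer(gray))
        result = None
        for layer in self.layering_units:
            # 53: feedback, each layering unit sees the previous FCN result
            units = layer(gray, result)
            result = self.fcn_unit(units)
        return result
```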
Compared with the prior art, the tumor segmentation method and device based on a personalized fusion network of the present invention have the following beneficial effects:
1) The tumor segmentation method based on a personalized fusion network of the present invention first lets an expert classify the tumor ultrasound images and then trains in different ways according to the classification result; after the different training procedures have been repeated, training samples are obtained and fed into the grayscale recognition network or the multi-size fully convolutional network for training, finally yielding a method that can segment tumor ultrasound images. This method can automatically select the grayscale recognition network or the multi-size fully convolutional network for segmentation according to the characteristics of the tumor ultrasound image, improving not only the segmentation precision but also the segmentation efficiency of tumor ultrasound images;
2) The tumor segmentation device based on a personalized fusion network of the present invention is combined with the above segmentation method: an expert first classifies the tumor ultrasound images, training is then carried out in the corresponding way according to the classification result, and the grayscale recognition network module and the multi-size fully convolutional network module are constructed after repeated training, enabling high-precision, high-efficiency automatic segmentation of tumor ultrasound images.
Detailed description of the invention
Figure 1 is a flow block diagram of the training part of the tumor segmentation method of the present invention;
Figure 2 is a structural block diagram of Embodiment three of the present invention.
The reference labels in the drawings denote:
10, gray processing module; 20, labeling module;
30, binarization module; 40, grayscale recognition network module;
50, multi-size fully convolutional network module; 51, layering unit;
52, fully convolutional network unit; 53, feedback unit;
60, first training and construction module; 70, second training and construction module.
Specific embodiments
To make the technical scheme of the present invention, the technical problem it solves and its technical effect clearer, the technical scheme of the present invention is described completely below in combination with specific embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All embodiments obtained by those skilled in the art on the basis of the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Embodiment one:
This embodiment proposes a tumor segmentation method based on a personalized fusion network, the method comprising the following steps:
1) Training part:
1a) Divide the acquired tumor ultrasound images into two classes: tumor ultrasound images with a uniform gray-value distribution and tumor ultrasound images with a severely non-uniform gray-value distribution;
1b) Binarize the tumor ultrasound images with a uniform gray-value distribution and use them as training samples, taking the pixels of the expert segmentation result maps as the ground-truth labels of the training samples; then train the grayscale recognition network, whose construction is completed after repeated training;
1c) Divide the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes, feed the divided tumor ultrasound images into a fully convolutional network for training, and complete the construction of the multi-size fully convolutional network after repeated training;
2) Segmentation part:
2a) Binarize a tumor ultrasound image with a uniform gray-value distribution and segment it with the constructed grayscale recognition network;
2b) Segment a tumor ultrasound image with a severely non-uniform gray-value distribution with the constructed multi-size fully convolutional network.
Dividing the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes comprises:
dividing the tumor ultrasound image into 50 tumor ultrasound image units of identical or different areas, the area of each unit being S = n*n with n any natural number, and the sum of the areas of the 50 units being equal to the area of the tumor ultrasound image.
The tumor segmentation method of this embodiment first lets an expert classify the tumor ultrasound images and then trains in different ways according to the classification result; after the different training procedures have been repeated, training samples are obtained and fed into the grayscale recognition network or the fully convolutional network for training, finally yielding a method that can segment tumor ultrasound images. The method can automatically select the grayscale recognition network or the fully convolutional network for segmentation according to the characteristics of the tumor ultrasound image, improving not only the segmentation precision but also the segmentation efficiency of tumor ultrasound images.
Embodiment two:
This embodiment proposes a tumor segmentation method based on a personalized fusion network, the method comprising the following steps:
1) Training part:
1a) Divide the acquired tumor ultrasound images into two classes: tumor ultrasound images with a uniform gray-value distribution and tumor ultrasound images with a severely non-uniform gray-value distribution;
1b) Binarize the tumor ultrasound images with a uniform gray-value distribution and use them as training samples, taking the pixels of the expert segmentation result maps as the ground-truth labels of the training samples; then train the grayscale recognition network, whose construction is completed after repeated training;
1c) Divide the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes, feed the divided tumor ultrasound images into a fully convolutional network for training, and complete the construction of the multi-size fully convolutional network after repeated training;
2) Segmentation part:
2a) Binarize a tumor ultrasound image with a uniform gray-value distribution and segment it with the constructed grayscale recognition network;
2b) Segment a tumor ultrasound image with a severely non-uniform gray-value distribution with the constructed multi-size fully convolutional network.
Dividing the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes comprises:
dividing the tumor ultrasound image into 50 tumor ultrasound image units of identical or different areas, the area of each unit being S = n*n with n any natural number, and the sum of the areas of the 50 units being equal to the area of the tumor ultrasound image.
Dividing the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes further comprises:
dividing the same tumor ultrasound image at least twice, with the number of tumor ultrasound image units and/or the unit area differing between divisions; when the same tumor ultrasound image is divided the next time, the number of units and/or the unit area refer to the segmentation result output by the multi-size fully convolutional network module 50 after the previous division.
The tumor segmentation method of this embodiment first lets an expert classify the tumor ultrasound images and then trains in different ways according to the classification result; after the different training procedures have been repeated, training samples are obtained and fed into the grayscale recognition network or the fully convolutional network for training, finally yielding a method that can segment tumor ultrasound images. The method can automatically select the grayscale recognition network or the fully convolutional network for segmentation according to the characteristics of the tumor ultrasound image, improving not only the segmentation precision but also the segmentation efficiency of tumor ultrasound images.
Embodiment three:
With reference to Figure 2, this embodiment proposes a tumor segmentation device based on a personalized fusion network, the device comprising:
a gray processing module 10, configured to perform gray processing on the tumor ultrasound images so that the expert can visually inspect the gray-processed images, which are then classified into tumor ultrasound images with a uniform gray-value distribution and tumor ultrasound images with a severely non-uniform gray-value distribution;
a labeling module 20, configured to label the pixels of the expert segmentation result maps as the ground-truth labels of the training samples;
a binarization module 30, configured to binarize the tumor ultrasound images with a uniform gray-value distribution;
a first training and construction module 60, configured to use the binarized tumor ultrasound images as training samples, train with reference to the ground-truth labels provided by the expert, and construct the grayscale recognition network module 40 through repeated training;
a second training and construction module 70, configured to divide the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes, feed the divided images into a fully convolutional network for training, and construct the multi-size fully convolutional network module 50 through repeated training;
a grayscale recognition network module 40, configured to segment the tumor ultrasound images with a uniform gray-value distribution;
a multi-size fully convolutional network module 50, configured to perform multi-size division and segmentation of the tumor ultrasound images with a severely non-uniform gray-value distribution.
The grayscale recognition network module 40 uses ResNet as the base network; the labeling module 20 feeds the ground-truth labels calibrated by the expert into the grayscale recognition network module 40 so that the grayscale recognition network is constructed within the range of the ground-truth labels.
The gray processing module 10 uses Matlab.
The multi-size fully convolutional network module 50 includes a fully convolutional network unit 52 and three layering units 51; each layering unit 51 divides the tumor ultrasound image, the three layering units 51 dividing the same tumor ultrasound image in turn into 50, 80 and 100 tumor ultrasound image units of identical or different areas, the area of each unit being S = n*n with n any natural number and the sum of the areas of the units being equal to the area of the tumor ultrasound image.
The multi-size fully convolutional network module 50 further includes a feedback unit 53, configured to feed the segmentation result of the fully convolutional network unit 52 into the next layering unit 51 to be executed, so that when the same tumor ultrasound image is divided the next time, the number of tumor ultrasound image units and/or the unit area can be based on the segmentation result output by the fully convolutional network unit 52 after the previous division. In other words, the three layering units 51 work in sequence: after one layering unit 51 divides the tumor ultrasound image, the result is fed into the fully convolutional network unit 52 for segmentation, and the next layering unit 51 divides the tumor ultrasound image with reference to the segmentation result of the previous layering unit 51, as the helper sketched below illustrates.
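Embodiment three fixes the pass schedule at 50, 80 and 100 units. A small helper could derive the side n of a square unit from such a target count; the integer rounding here is an assumption, since the claim allows units of unequal area to absorb the remainder:

```python
import math

def unit_side(img_h: int, img_w: int, parts: int) -> int:
    """Side n of a square unit so that roughly `parts` units of area
    n*n tile an img_h x img_w image (remainders go to edge units)."""
    return max(1, int(math.sqrt(img_h * img_w / parts)))

# e.g. for a 400x400 image, the three passes of Embodiment three
sides = [unit_side(400, 400, parts) for parts in (50, 80, 100)]  # [56, 44, 40]
```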
The tumor segmentation device of this embodiment is combined with the segmentation method protected by Embodiments one and two: an expert first classifies the tumor ultrasound images, training is then carried out in the corresponding way according to the classification result, and the grayscale recognition network module 40 and the multi-size fully convolutional network module 50 are constructed after repeated training, enabling high-precision, high-efficiency automatic segmentation of tumor ultrasound images.
Although the present invention has been described in terms of a limited number of embodiments, those skilled in the art, benefiting from the above description, will appreciate that other embodiments can be envisaged within the scope of the invention thus described.
In addition, it should be noted that the language used in this specification has been selected primarily for readability and instructional purposes rather than to explain or define the subject matter of the invention. Accordingly, many modifications and variations will be apparent to those skilled in the art without departing from the scope and spirit of the appended claims. As far as the scope of the present invention is concerned, this disclosure is illustrative rather than restrictive, and the scope of the invention is defined by the appended claims.

Claims (10)

1. A tumor segmentation method based on a personalized fusion network, characterized in that the method comprises the following steps:
1) Training part:
1a) dividing the acquired tumor ultrasound images into two classes: tumor ultrasound images with a uniform gray-value distribution and tumor ultrasound images with a severely non-uniform gray-value distribution;
1b) binarizing the tumor ultrasound images with a uniform gray-value distribution and using them as training samples, with the pixels of the expert segmentation result maps as the ground-truth labels of the training samples; then training the grayscale recognition network, the construction of the grayscale recognition network being completed after repeated training;
1c) dividing the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes, feeding the divided tumor ultrasound images into a fully convolutional network for training, the construction of the multi-size fully convolutional network being completed after repeated training;
2) Segmentation part:
2a) binarizing a tumor ultrasound image with a uniform gray-value distribution and segmenting it with the constructed grayscale recognition network;
2b) segmenting a tumor ultrasound image with a severely non-uniform gray-value distribution with the constructed multi-size fully convolutional network.
2. The tumor segmentation method based on a personalized fusion network according to claim 1, characterized in that the training of the grayscale recognition network uses ResNet as the base network and introduces the ground-truth labels calibrated by the expert, thereby constructing the grayscale recognition network.
3. The tumor segmentation method based on a personalized fusion network according to claim 1, characterized in that Matlab is used to perform gray processing on the tumor ultrasound images, the expert visually inspects the gray-processed tumor ultrasound images, and the tumor ultrasound images are classified according to experience.
4. The tumor segmentation method based on a personalized fusion network according to claim 1, characterized in that dividing the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes comprises:
dividing the tumor ultrasound image into at least ten tumor ultrasound image units of identical or different areas, the area of each tumor ultrasound image unit being S = n*n with n any natural number, and the sum of the areas of the at least ten tumor ultrasound image units being equal to the area of the tumor ultrasound image.
5. The tumor segmentation method based on a personalized fusion network according to claim 4, characterized in that dividing the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes further comprises:
dividing the same tumor ultrasound image at least twice, with the number of tumor ultrasound image units and/or the area of the tumor ultrasound image units differing between divisions; when the same tumor ultrasound image is divided the next time, the number of tumor ultrasound image units and/or the area of the tumor ultrasound image units refer to the segmentation result output by the multi-size fully convolutional network module after the previous division.
6. A tumor segmentation device based on a personalized fusion network, characterized in that the device comprises:
a gray processing module, configured to perform gray processing on the tumor ultrasound images so that the expert can visually inspect the gray-processed images, which are then classified into tumor ultrasound images with a uniform gray-value distribution and tumor ultrasound images with a severely non-uniform gray-value distribution;
a labeling module, configured to label the pixels of the expert segmentation result maps as the ground-truth labels of the training samples;
a binarization module, configured to binarize the tumor ultrasound images with a uniform gray-value distribution;
a first training and construction module, configured to use the binarized tumor ultrasound images as training samples, train with reference to the ground-truth labels provided by the expert, and construct the grayscale recognition network module through repeated training;
a second training and construction module, configured to divide the tumor ultrasound images with a severely non-uniform gray-value distribution at multiple sizes, feed the divided images into a fully convolutional network for training, and construct the multi-size fully convolutional network module through repeated training;
a grayscale recognition network module, configured to segment the tumor ultrasound images with a uniform gray-value distribution;
a multi-size fully convolutional network module, configured to perform multi-size division and segmentation of the tumor ultrasound images with a severely non-uniform gray-value distribution.
7. The tumor segmentation device based on a personalized fusion network according to claim 6, characterized in that the grayscale recognition network module uses ResNet as the base network, and the labeling module feeds the ground-truth labels calibrated by the expert into the grayscale recognition network module so that the grayscale recognition network is constructed within the range of the ground-truth labels.
8. The tumor segmentation device based on a personalized fusion network according to claim 6, characterized in that the gray processing module uses Matlab.
9. The tumor segmentation device based on a personalized fusion network according to claim 6, characterized in that the multi-size fully convolutional network module includes a fully convolutional network unit and at least two layering units, each layering unit divides the tumor ultrasound image, and at each division a layering unit divides the same tumor ultrasound image into at least ten tumor ultrasound image units of identical or different areas, the area of each tumor ultrasound image unit being S = n*n with n any natural number, and the sum of the areas of the at least ten tumor ultrasound image units being equal to the area of the tumor ultrasound image.
10. The tumor segmentation device based on a personalized fusion network according to claim 9, characterized in that the multi-size fully convolutional network module further includes a feedback unit, configured to feed the segmentation result of the fully convolutional network unit into the next layering unit to be executed, so that when the same tumor ultrasound image is divided the next time, the number of tumor ultrasound image units and/or the area of the tumor ultrasound image units can be based on the segmentation result output by the fully convolutional network unit after the previous division.
CN201811389439.2A 2018-11-21 2018-11-21 Tumor segmentation method and device based on personalized fusion network Active CN109214388B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811389439.2A CN109214388B (en) 2018-11-21 2018-11-21 Tumor segmentation method and device based on personalized fusion network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811389439.2A CN109214388B (en) 2018-11-21 2018-11-21 Tumor segmentation method and device based on personalized fusion network

Publications (2)

Publication Number Publication Date
CN109214388A true CN109214388A (en) 2019-01-15
CN109214388B CN109214388B (en) 2021-06-08

Family

ID=64993405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811389439.2A Active CN109214388B (en) 2018-11-21 2018-11-21 Tumor segmentation method and device based on personalized fusion network

Country Status (1)

Country Link
CN (1) CN109214388B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948575A (en) * 2019-03-27 2019-06-28 中国科学技术大学 Eyeball dividing method in ultrasound image
CN111539961A (en) * 2019-12-13 2020-08-14 山东浪潮人工智能研究院有限公司 Target segmentation method, device and equipment
CN111899264A (en) * 2020-06-18 2020-11-06 济南浪潮高新科技投资发展有限公司 Target image segmentation method, device and medium
CN114926482A (en) * 2022-05-31 2022-08-19 泰安市中心医院 DCE-MRI breast tumor segmentation method and device based on full convolution network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447682A (en) * 2016-08-29 2017-02-22 天津大学 Automatic segmentation method for breast MRI focus based on Inter-frame correlation
CN106530298A (en) * 2016-11-14 2017-03-22 同济大学 Three-way-decision-based liver tumor CT image classification method
CN107749061A (en) * 2017-09-11 2018-03-02 天津大学 Based on improved full convolutional neural networks brain tumor image partition method and device
US20180101953A1 (en) * 2016-07-25 2018-04-12 Sony Corporation Automatic 3d brain tumor segmentation and classification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180101953A1 (en) * 2016-07-25 2018-04-12 Sony Corporation Automatic 3d brain tumor segmentation and classification
CN106447682A (en) * 2016-08-29 2017-02-22 天津大学 Automatic segmentation method for breast MRI focus based on Inter-frame correlation
CN106530298A (en) * 2016-11-14 2017-03-22 同济大学 Three-way-decision-based liver tumor CT image classification method
CN107749061A (en) * 2017-09-11 2018-03-02 天津大学 Based on improved full convolutional neural networks brain tumor image partition method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948575A (en) * 2019-03-27 2019-06-28 中国科学技术大学 Eyeball dividing method in ultrasound image
CN109948575B (en) * 2019-03-27 2023-03-24 中国科学技术大学 Eyeball area segmentation method in ultrasonic image
CN111539961A (en) * 2019-12-13 2020-08-14 山东浪潮人工智能研究院有限公司 Target segmentation method, device and equipment
CN111899264A (en) * 2020-06-18 2020-11-06 济南浪潮高新科技投资发展有限公司 Target image segmentation method, device and medium
CN114926482A (en) * 2022-05-31 2022-08-19 泰安市中心医院 DCE-MRI breast tumor segmentation method and device based on full convolution network

Also Published As

Publication number Publication date
CN109214388B (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN109214388A Tumor segmentation method and device based on a personalized fusion network
Guo et al. Classification of thyroid ultrasound standard plane images using ResNet-18 networks
CN108052977B (en) Mammary gland molybdenum target image deep learning classification method based on lightweight neural network
Hou et al. Classification of tongue color based on CNN
CN107368670A (en) Stomach cancer pathology diagnostic support system and method based on big data deep learning
CN109215040A Breast tumor segmentation method based on multi-scale weighted learning
CN107368671A (en) System and method are supported in benign gastritis pathological diagnosis based on big data deep learning
CN109512464A Disease screening and diagnosis system
CN104899926A (en) Medical image segmentation method and device
CN109528230A Breast tumor segmentation method and device based on a multi-stage transformation network
Guan et al. Automatic detection and localization of thighbone fractures in X-ray based on improved deep learning method
CN108922602A System and method for evaluating the efficacy of preoperative concurrent neoadjuvant chemoradiotherapy for rectal cancer based on big-data analysis of MRI images
CN109255354A (en) medical CT-oriented computer image processing method and device
Ashraf et al. An efficient technique for skin cancer classification using deep learning
Nayan et al. A deep learning approach for brain tumor detection using magnetic resonance imaging
Aslam et al. Liver-tumor detection using CNN ResUNet
Velliangiri et al. Investigation of deep learning schemes in medical application
CN109447120A Method, apparatus and computer-readable storage medium for automatic image segmentation
Wang et al. A predictive model of radiation-related fibrosis based on the radiomic features of magnetic resonance imaging and computed tomography
Subashini et al. Brain tumour detection using Pulse coupled neural network (PCNN) and back propagation network
CN101968851A Medical image processing method based on dictionary-learning upsampling
Sobhaninia et al. Brain tumor classification using medial residual encoder layers
Shao et al. Imageological examination of pulmonary nodule detection
Haider et al. Computer-assisted image processing technique for tracking wound progress
Fatima et al. Evaluation of multi-modal mri images for brain tumor segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210513

Address after: No. 1036, Langchao Road, High-tech Zone, Jinan, Shandong

Applicant after: INSPUR GROUP Co.,Ltd.

Address before: 250100 First Floor of R&D Building, 2877 Kehang Road, Suncun Town, Jinan High-tech Zone, Shandong Province

Applicant before: JINAN INSPUR HI-TECH INVESTMENT AND DEVELOPMENT Co.,Ltd.

GR01 Patent grant