CN104484886B - Segmentation method and device for MR images - Google Patents
Segmentation method and device for MR images Download PDF
- Publication number
- CN104484886B CN104484886B CN201410856328.3A CN201410856328A CN104484886B CN 104484886 B CN104484886 B CN 104484886B CN 201410856328 A CN201410856328 A CN 201410856328A CN 104484886 B CN104484886 B CN 104484886B
- Authority
- CN
- China
- Prior art keywords
- images
- modal
- dictionary
- test
- state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/0014—Biomedical image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
Abstract
The present invention belongs to the technical field of medical image processing and provides a segmentation method and device for MR images, comprising: learning a dictionary for each modality from multi-modal sample MR images; establishing a multi-modal joint sparse representation model; jointly sparsely representing a test MR image under the dictionary as a linear combination of a few atoms by means of the multi-modal joint sparse representation model, and obtaining the sparse representation coefficients of the test MR image by sparse coding; and classifying each pixel of the test MR image according to its sparse representation coefficients to obtain the image segmentation result. The multi-modal joint sparse representation model proposed by the present invention can combine the information provided by multi-modal MR images into a multivariate joint sparse representation, greatly improving the accuracy of image segmentation.
Description
Technical field
The invention belongs to the technical field of medical image processing, and more particularly relates to a segmentation method and device for MR images.
Background technology
Magnetic resonance (MR) imaging offers high soft-tissue resolution and is non-invasive. It can produce tomographic images at different anatomical positions, and its image contrast can be weighted with different acquisition parameters, giving it the ability to deliver high tissue resolution, high definition, and a variety of diagnostic information; it is now widely used in the field of brain tumor diagnosis. To quantitatively analyze the local pathology of a brain tumor, the tumor in the brain image must be segmented to determine its volume, size, and position.
Sparse representation is a recently developed machine learning method. By learning from training samples, it trains a dictionary adapted to each class of samples, and sparsely expresses an image as a linear combination of a few atoms in the space of that dictionary. Sparse representation has been applied successfully to various visual tasks, for example the Sparse Representation based Classification (SRC) algorithm. However, SRC was originally designed for face recognition, which need not consider the spatial relationships between faces, whereas in image segmentation no pixel exists in isolation: each pixel is correlated with its spatially adjacent neighbors. Applying SRC directly to image segmentation therefore makes accurate segmentation results hard to obtain.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a segmentation method for MR images, intended to solve the prior-art problem that directly applying sparse-representation-based algorithms to image segmentation yields poor segmentation accuracy.
The embodiments of the present invention are realized as a segmentation method for MR images, comprising:
Learning a dictionary for each modality from multi-modal sample MR images;
Establishing a multi-modal joint sparse representation model;
Jointly sparsely representing a test MR image under the dictionary as a linear combination of a few atoms by means of the multi-modal joint sparse representation model, and obtaining the sparse representation coefficients of the test MR image by sparse coding;
Classifying each pixel of the test MR image according to its sparse representation coefficients to obtain the image segmentation result.
Another object of the embodiments of the present invention is to provide a segmentation device for magnetic resonance (MR) images, comprising:
A training unit for learning a dictionary for each modality from multi-modal sample MR images;
A joint sparse representation unit for establishing a multi-modal joint sparse representation model;
A sparse coding unit for jointly sparsely representing a test MR image under the dictionary as a linear combination of a few atoms by means of the multi-modal joint sparse representation model, and obtaining the sparse representation coefficients of the test MR image by sparse coding;
A segmentation unit for classifying each pixel of the test MR image according to its sparse representation coefficients to obtain the image segmentation result.
The multi-modal joint sparse representation model proposed by the embodiments of the present invention can combine the information provided by multi-modal MR images into a multivariate joint sparse representation, greatly improving the accuracy of image segmentation.
Brief description of the drawings
Fig. 1 is a flow chart of the MR image segmentation method provided by an embodiment of the present invention;
Fig. 2 is a flow chart of the specific implementation of step S101 of the MR image segmentation method provided by an embodiment of the present invention;
Fig. 3 is a schematic flow diagram of the MR image segmentation method provided by an embodiment of the present invention;
Fig. 4 is a structural block diagram of the MR image segmentation device provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the purpose, technical scheme, and advantages of the present invention clearer, the invention is further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present invention and are not intended to limit it.
Fig. 1 shows the implementation flow of the MR image segmentation method provided by an embodiment of the present invention, detailed as follows:
In S101, a dictionary is learned for each modality from multi-modal sample MR images.
Before a test MR image is segmented, the training samples of the multi-modal training images are first used to train the dictionaries of the different states under each modality needed for segmentation. The training of these dictionaries is a joint sparse optimization process: the samples of the multiple modalities are used to jointly train each multi-modal class dictionary, and the class dictionaries are then combined into one large dictionary.
As shown in Fig. 2, S101 specifically comprises:
In S201, the multi-modal MR images of each sample patient are registered.
The multi-modal MR images include the T1-weighted, T2-weighted, contrast-enhanced T1 (T1C), and FLAIR images. Registration maps the different images into the same coordinate system by a spatial transformation, so that the images of corresponding organs occupy consistent spatial positions. In the registered multi-modal MR images, the pixels at the same position in the different modality images correspond to the same brain tissue component.
In S202, training samples of the different states are extracted from the registered multi-modal MR images. The different states comprise the edema state (state E in this embodiment), the tumor state (state T in this embodiment), and the normal brain tissue state (state N in this embodiment).
In this embodiment, the training samples are extracted according to the ground-truth labels provided with the training data. Specifically, for the MR image of the i-th modality, m1, m2, and m3 volumetric image blocks of size n × n × n are extracted at random from the training data of the edema, tumor, and normal brain tissue states respectively, and these image blocks may overlap. Each extracted image block is reshaped into a column vector of length n³. Therefore, for the MR image of the i-th modality, the training samples of the c-th state form a matrix X_c^i with m_c columns, where m_c is the number of training samples of the c-th state of the i-th modality.
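The patch extraction of S202 can be sketched as follows. This is a minimal sketch, not the patent's implementation: `extract_patches` and its signature are assumptions, the patent only specifies random, possibly overlapping n × n × n blocks drawn per ground-truth state, and n is assumed odd so that a block can be centered on a voxel.

```python
import numpy as np

def extract_patches(volume, labels, state, n, m, rng):
    """Randomly extract m (possibly overlapping) n*n*n blocks whose center
    voxel carries the given ground-truth state label, each reshaped into a
    column vector of length n^3 (n assumed odd)."""
    half = n // 2
    # candidate centers far enough from the border to fit a full block
    zs, ys, xs = np.where(labels == state)
    keep = ((zs >= half) & (zs < volume.shape[0] - half) &
            (ys >= half) & (ys < volume.shape[1] - half) &
            (xs >= half) & (xs < volume.shape[2] - half))
    zs, ys, xs = zs[keep], ys[keep], xs[keep]
    idx = rng.choice(len(zs), size=m, replace=True)  # blocks may overlap
    cols = np.empty((n ** 3, m))
    for c, i in enumerate(idx):
        z, y, x = zs[i], ys[i], xs[i]
        block = volume[z - half:z + half + 1,
                       y - half:y + half + 1,
                       x - half:x + half + 1]
        cols[:, c] = block.ravel()  # one column per training sample
    return cols
```

Stacking the columns returned for state c of modality i then yields the matrix X_c^i described above.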
In S203, the extracted training samples of each j-th class state of the i-th modality are learned separately, generating the joint dictionary D^i = [D_N^i, D_T^i, D_E^i] of the i-th modality, where D_j^i denotes the sub-dictionary of the j-th class state of the i-th modality, i = 1, 2, …, 4, j = 1, 2, 3 ∈ {N, T, E}; N denotes the normal brain tissue state, T the tumor state, and E the edema state.
Assume each sub-dictionary has K atoms; the dictionary of the i-th modality is then expressed as D^i = [D_N^i, D_T^i, D_E^i]. Each sub-dictionary D_j^i is obtained by learning from the j-th class samples X_j^i of the i-th modality. The training samples X^i of the i-th modality can then be sparsely expressed as a linear combination of a few atoms, X^i ≈ D^i A^i, where D^i (i = 1, 2, …, 4) is an overcomplete dictionary.
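A sub-dictionary learning step of this kind can be sketched with a plain alternation between sparse coding and a least-squares dictionary update. This is a simplified single-class, l1-penalized stand-in for the joint training the patent describes; `learn_state_dictionary` and its parameter values are assumptions.

```python
import numpy as np

def soft_threshold(Z, t):
    """Elementwise l1 proximal operator."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def learn_state_dictionary(X, K, lam=0.1, iters=30, seed=0):
    """Minimal alternation for min_{D,A} 0.5*||X - D A||_F^2 + lam*||A||_1:
    an ISTA step on the codes A, then a MOD-style least-squares update of D.
    X holds one state's training samples as columns (n^3 x m_c)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    D = rng.normal(size=(n, K))
    D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
    A = np.zeros((K, m))
    for _ in range(iters):
        # ISTA sparse-coding step, step size 1/L with L = ||D||_2^2
        L = np.linalg.norm(D, 2) ** 2
        A = soft_threshold(A + D.T @ (X - D @ A) / L, lam / L)
        # MOD dictionary update: D = X A^T (A A^T + eps I)^-1, renormalized
        D = X @ A.T @ np.linalg.inv(A @ A.T + 1e-8 * np.eye(K))
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, A
```

Running this per state j of modality i and concatenating the results gives the joint dictionary D^i = [D_N^i, D_T^i, D_E^i].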
In S102, the multi-modal joint sparse representation model is established.
The given multi-modal training samples X are formed by the multiple modalities X^1, …, X^4, where each modality X^i consists of three classes of training samples, X^i = [X_N^i, X_T^i, X_E^i]: X_N^i are the training samples of the normal brain tissue state of the i-th modality, X_T^i those of the tumor state, and X_E^i those of the edema state.
In this embodiment, a multi-modal joint sparse representation model is proposed that makes a joint decision over the multi-modal sparse coefficients obtained:
The training samples of each j-th class state are collected into a matrix X_j, j = 1, 2, 3 ∈ {N, T, E}. Assuming X_j can be linearly expressed over the sub-dictionary D_j as the linear combination X_j ≈ D_j A_j, with A_j the sparse coefficient matrix, the multi-modal joint sparse representation model is expressed as

min_{D,A} ||X − DA||_F² + λ ||A||₁   (1)

where D = [D_N, D_T, D_E] is an overcomplete dictionary matrix composed of the three class sub-dictionaries.
As an embodiment of the present invention, formula (1) above is prone to over-fitting. Therefore, in this embodiment a graph regularization method is introduced into the preset multi-modal joint sparse representation model, and the multi-modal sparse representation is computed by the method of l_{1,2} joint sparse optimization, which can eliminate the over-fitting problem; at the same time, the l₁-graph regularization takes the spatial structure relationships between adjacent pixels into account, so that the learned dictionary is more discriminative, further improving the correctness of the segmentation. Formula (1) is therefore revised to

min_{D,A} ||X − DA||_F² + λ ||A||_{1,2} + γ G(A)   (2)

where G(A) is a function embedded in a graph G, and λ and γ are parameters balancing sparsity and graph regularization. A parameter-free l₁-graph construction method is proposed here, which computes the graph adjacency and the edge weights in one integrated step. Since the relationships between samples are already taken into account in the l₁ optimization, the sparse coefficients themselves are used for the adjacency graph structure; the graph function G(A) is therefore defined as

G(A) = (1/2) Σ_{i,j} w_{ij} ||a_i − a_j||₂² = tr(A L Aᵀ)   (3)

where L = S − W is the graph Laplacian matrix, W = (w_{ij}) is the weight matrix, and S is the diagonal degree matrix with s_{ii} = Σ_j w_{ij}. Substituting formula (3) into formula (2), the whole objective function can be expressed as

min_{D,A} ||X − DA||_F² + λ ||A||_{1,2} + γ tr(A L Aᵀ)   (4)
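The construction L = S − W and the identity defining G(A) can be checked numerically with a small sketch; symmetrizing the l₁-graph weights is an assumption here, since sparse coefficients need not be symmetric.

```python
import numpy as np

def graph_laplacian(W):
    """L = S - W, with S the diagonal degree matrix s_ii = sum_j w_ij.
    The raw l1-graph weights are symmetrized and the diagonal zeroed."""
    W = (np.abs(W) + np.abs(W).T) / 2.0
    np.fill_diagonal(W, 0.0)
    S = np.diag(W.sum(axis=1))
    return S - W

def graph_regularizer(A, L):
    """G(A) = tr(A L A^T) = (1/2) sum_ij w_ij ||a_i - a_j||^2,
    where a_i are the columns of A."""
    return np.trace(A @ L @ A.T)
```

The regularizer is small when samples connected by large weights receive similar sparse codes, which is exactly the spatial smoothness the text motivates.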
In S103, the test MR image is jointly sparsely represented under the dictionary as a linear combination of a few atoms by means of the multi-modal joint sparse representation model, and the sparse representation coefficients of the test MR image are obtained by sparse coding.
A given test MR image is processed in the same way as the training images: a window is slid over the image one pixel at a time to express it as a set of small test sample blocks. For each test sample, because the coefficient vectors a_i and a_j share the same sparsity pattern, the joint sparse coefficient matrix of the test MR image can be obtained by solving the l₁/l₂-regularized least squares problem. The sparse representation coefficients in that matrix are the new feature representation of the test MR image.
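Per-patch sparse coding of a test sample can be sketched with ISTA. This is a simplification under stated assumptions: a plain l₁ penalty stands in for the patent's l₁/l₂-regularized joint formulation, and `sparse_code` with its parameter values is illustrative only.

```python
import numpy as np

def sparse_code(D, y, lam=0.1, iters=200):
    """ISTA for min_a 0.5*||y - D a||_2^2 + lam*||a||_1, coding one
    vectorized test block y against the learned dictionary D."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ a - y)              # gradient of the quadratic term
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # shrinkage
    return a
```

Applied to every sliding-window block, the resulting coefficient vectors form the columns of the joint sparse coefficient matrix described above.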
In S104, each pixel of the test MR image is classified to obtain the image segmentation result.
Given the joint sparse coefficient matrix of the test MR image obtained in S103, classification is done using the minimum sparse reconstruction error strategy, i.e.

class(y) = argmin_j ||y − D δ_j(a)||₂

where δ_j(·) is a matrix function that keeps only the rows of the j-th class and sets all other row elements to 0; the segmentation of the image is thereby realized.
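The minimum-reconstruction-error rule of S104 can be sketched as below; `class_of_atom`, an array mapping each dictionary atom to its state, is assumed bookkeeping not specified by the patent.

```python
import numpy as np

def classify_patch(D, class_of_atom, y, a):
    """SRC-style rule: delta_j keeps only the coefficients of class j,
    and y is assigned to the class with the smallest residual."""
    classes = np.unique(class_of_atom)
    residuals = []
    for j in classes:
        a_j = np.where(class_of_atom == j, a, 0.0)   # delta_j(a)
        residuals.append(np.linalg.norm(y - D @ a_j))
    return classes[int(np.argmin(residuals))]
```

Running this on every pixel's coefficient vector labels each pixel as N, T, or E, which is the segmentation result.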
Fig. 3 shows a schematic diagram of the whole image segmentation flow.
The multi-modal joint sparse representation model proposed by the embodiments of the present invention can combine the information provided by multi-modal MR images into a multivariate joint sparse representation, greatly improving the accuracy of image segmentation.
As an embodiment of the present invention, the Alternating Direction Method of Multipliers (ADMM) can be used to optimize the dictionary and the sparse representation coefficients:
First, an auxiliary variable V is introduced to turn formula (5) into a constrained optimization problem:

min_{A,V} ||X − DA||_F² + λ ||V||_{1,2} + γ tr(A L Aᵀ)  subject to  V = A   (7)

For the equality-constrained problem in formula (7), the Augmented Lagrangian Method (ALM) is used. The augmented Lagrangian function can be written as

L_ρ(A, V, B) = ||X − DA||_F² + λ ||V||_{1,2} + γ tr(A L Aᵀ) + ⟨B, A − V⟩ + (ρ/2) ||A − V||_F²   (8)

Because the variables are separable, formula (8) is solved with the ADMM method.
Update of A:
First, L_ρ(A, V, B) is minimized with respect to A; it is a convex function of each column a_i. With d = 4 modalities, setting the gradient with respect to a_i to zero yields the solution for each column vector a_i of A.
Update of V:
Minimizing L_ρ(A, V, B) with respect to V amounts to solving the following optimization problem: dividing both sides of formula (11) by ρ and discarding the constant term C, the optimization problem becomes formula (12), in which only the variable V is minimized. Because V has a separable structure in formula (12), the minimum l₂ norm can be sought independently for each row vector of V. Let a^{k,t+1}, b^{k,t}, and v^{k,t+1} denote the corresponding row vectors of the matrices A^{t+1}, B^t, and V^{t+1}; then for each k = 1, 2, …, 3K the following subproblem is solved:

min_v (1/2) ||v − z||₂² + (λ/ρ) ||v||₂,  where z = a^{k,t+1} + ρ⁻¹ b^{k,t}

Since ||v||₂ is not differentiable at the zero point, the minimizer cannot be obtained by direct differentiation; the soft-thresholding method is used as the proximal approximation, which gives

v^{k,t+1} = (1 − (λ/ρ)/||z||₂)₊ z

where (v)₊ is a vector whose elements take the value max(v_i, 0).
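The soft-thresholding (proximal) step above is, in code, a minimal sketch:

```python
import numpy as np

def row_shrink(z, t):
    """Proximal operator of t*||.||_2 applied to one row vector:
    v = (1 - t/||z||_2)_+ * z, the shrinkage used in the V-update."""
    nz = np.linalg.norm(z)
    if nz == 0.0:
        return np.zeros_like(z)
    return max(1.0 - t / nz, 0.0) * z
```

Rows whose l₂ norm falls below the threshold t are set to zero entirely, which is what produces the row-sparse (joint-sparse) structure across modalities.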
The above optimization algorithm flow can therefore be summarized as iterating:
1. A^{t+1} = argmin_A L_ρ(A, V^t, B^t);
2. V^{t+1} = argmin_V L_ρ(A^{t+1}, V, B^t);
3. B^{t+1} = B^t + ρ(A^{t+1} − V^{t+1}).
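Steps 1-3 can be sketched for the coefficient update alone (dictionary D held fixed). This is a sketch under stated assumptions: the graph term γ tr(A L Aᵀ) is omitted, since including it turns the A-step into a Sylvester equation, and the ρ, λ, and iteration-count values are illustrative.

```python
import numpy as np

def admm_l12(X, D, lam=0.5, rho=1.0, iters=500):
    """ADMM for min_A ||X - D A||_F^2 + lam * sum_k ||A[k, :]||_2
    with the splitting V = A (graph term omitted for brevity):
      1. A-step: solve (2 D^T D + rho I) A = 2 D^T X + rho V - B
      2. V-step: row-wise l2 shrinkage of A + B/rho, threshold lam/rho
      3. dual:   B <- B + rho (A - V)
    """
    K, m = D.shape[1], X.shape[1]
    A = np.zeros((K, m)); V = np.zeros((K, m)); B = np.zeros((K, m))
    G = 2 * D.T @ D + rho * np.eye(K)
    DtX2 = 2 * D.T @ X
    for _ in range(iters):
        A = np.linalg.solve(G, DtX2 + rho * V - B)
        Z = A + B / rho
        norms = np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), 1e-12)
        V = np.maximum(1.0 - (lam / rho) / norms, 0.0) * Z
        B = B + rho * (A - V)
    return V
```

At a fixed point A = V, the dual variable B absorbs the subgradient of the l_{1,2} term, so the returned V satisfies the optimality condition of the unconstrained problem.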
In the embodiments of the present invention, the joint sparse representation framework handles the feature dimensions coming from the different modality images, allowing them to be expressed with sparse coefficients in the same space, and the joint sparse representation can handle high-dimensional feature vectors. By using the joint sparse representation of multiple modalities, the dictionary learning can combine the various features and diagnostic information provided by the multi-modal MR images, so that the dictionary contains richer feature information, thereby improving the accuracy of pixel classification. Graph regularization is introduced into the optimization of the joint sparse representation model, and the ADMM algorithm is used to alternately optimize the multiple variables, so that the learned dictionary is more discriminative.
Compared with the existing SRC method, in the embodiments of the present invention the dictionary no longer uses raw image blocks as dictionary atoms but is obtained by learning; a dictionary produced by learning and training is a good solution for problems with large training sets. Moreover, the embodiments of the present invention take into account the spatial influence relationships between adjacent pixels/voxels, which improves segmentation accuracy. Meanwhile, the embodiments of the present invention propose a fully automatic segmentation model, removing the bottleneck of semi-automatic segmentation, where insufficient speed hampers practical operation; in fully automatic segmentation, speed is no longer the critical factor of the segmentation.
Based on the MR image segmentation method described above, Fig. 4 shows the structural block diagram of the MR image segmentation device provided by an embodiment of the present invention. For convenience of description, only the parts related to this embodiment are shown.
Referring to Fig. 4, the device comprises:
A training unit 41, which learns a dictionary for each modality from multi-modal sample MR images.
A joint sparse representation unit 42, which establishes the multi-modal joint sparse representation model.
A sparse coding unit 43, which jointly sparsely represents a test MR image under the dictionary as a linear combination of a few atoms by means of the multi-modal joint sparse representation model, and obtains the sparse representation coefficients of the test MR image by sparse coding.
A segmentation unit 44, which classifies each pixel of the test MR image according to its sparse representation coefficients to obtain the image segmentation result.
Optionally, the training unit 41 comprises:
A registration subunit, which registers the multi-modal MR images of each sample patient.
A sample extraction subunit, which extracts training samples of the different states from the registered multi-modal MR images, the different states comprising the edema state, the tumor state, and the normal brain tissue state.
A dictionary training subunit, which learns separately from the extracted training samples of each j-th class state of the i-th modality, generating the joint dictionary of the i-th modality, where D_j^i denotes the sub-dictionary of the j-th class state of the i-th modality, i = 1, 2, …, 4, j = 1, 2, 3 ∈ {N, T, E}; N denotes the normal brain tissue state, T the tumor state, and E the edema state.
Optionally, the joint sparse representation unit 42 is specifically configured to:
Introduce a graph regularization method into the preset multi-modal joint sparse representation model, and perform the multi-modal sparse representation by the method of l_{1,2} joint sparse optimization.
Optionally, the device further comprises:
An optimization unit, which optimizes the sparse representation coefficients using the Alternating Direction Method of Multipliers (ADMM).
Optionally, the segmentation unit 44 is specifically configured to:
Classify each pixel of the test MR image using the minimum sparse reconstruction error to obtain the image segmentation result.
The foregoing describes merely preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (10)
- 1. A segmentation method for magnetic resonance (MR) images, characterized in that it comprises: learning a dictionary for each modality from multi-modal sample MR images, wherein the multi-modal class dictionaries are trained jointly using the multi-modal samples and are then combined into one large dictionary; establishing a multi-modal joint sparse representation model; jointly sparsely representing a test MR image under the dictionary as a linear combination of a few atoms by means of the multi-modal joint sparse representation model, and obtaining the sparse representation coefficients of the test MR image by sparse coding; and classifying each pixel of the test MR image according to the sparse representation coefficients to obtain an image segmentation result.
- 2. The method of claim 1, characterized in that learning a dictionary for each modality from multi-modal sample MR images comprises: registering the multi-modal MR images of each sample patient; extracting training samples of different states from the registered multi-modal MR images, the different states comprising the edema state, the tumor state, and the normal brain tissue state; and learning separately from the training samples of each j-th class state of the i-th modality to generate the joint dictionary of the i-th modality, wherein the sub-dictionary of the j-th class state of the i-th modality is denoted D_j^i, i = 1, 2, …, j = 1, 2, 3 ∈ {N, T, E}, N denoting the normal brain tissue state, T the tumor state, and E the edema state.
- 3. The method of claim 1, characterized in that establishing the multi-modal joint sparse representation model comprises: introducing a graph regularization method into the preset multi-modal joint sparse representation model, and performing the multi-modal sparse representation by the method of l_{1,2} joint sparse optimization.
- 4. The method of claim 1, characterized in that the method further comprises: optimizing the sparse representation coefficients using the alternating direction method of multipliers (ADMM).
- 5. The method of claim 1, characterized in that classifying each pixel of the test MR image to obtain the image segmentation result comprises: classifying each pixel of the test MR image using the minimum sparse reconstruction error to obtain the image segmentation result.
- 6. A segmentation device for magnetic resonance (MR) images, characterized in that it comprises: a training unit for learning a dictionary for each modality from multi-modal sample MR images, wherein the multi-modal class dictionaries are trained jointly using the multi-modal samples and are then combined into one large dictionary; a joint sparse representation unit for establishing a multi-modal joint sparse representation model; a sparse coding unit for jointly sparsely representing a test MR image under the dictionary as a linear combination of a few atoms by means of the multi-modal joint sparse representation model, and obtaining the sparse representation coefficients of the test MR image by sparse coding; and a segmentation unit for classifying each pixel of the test MR image according to the sparse representation coefficients to obtain an image segmentation result.
- 7. The device of claim 6, characterized in that the training unit comprises: a registration subunit for registering the multi-modal MR images of each sample patient; a sample extraction subunit for extracting training samples of different states from the registered multi-modal MR images, the different states comprising the edema state, the tumor state, and the normal brain tissue state; and a dictionary training subunit for learning separately from the training samples of each j-th class state of the i-th modality to generate the joint dictionary of the i-th modality, wherein the sub-dictionary of the j-th class state of the i-th modality is denoted D_j^i, i = 1, 2, …, j = 1, 2, 3 ∈ {N, T, E}, N denoting the normal brain tissue state, T the tumor state, and E the edema state.
- 8. The device of claim 6, characterized in that the joint sparse representation unit is specifically configured to: introduce a graph regularization method into the preset multi-modal joint sparse representation model, and perform the multi-modal sparse representation by the method of l_{1,2} joint sparse optimization.
- 9. The device of claim 6, characterized in that the device further comprises: an optimization unit for optimizing the sparse representation coefficients using the alternating direction method of multipliers (ADMM).
- 10. The device of claim 6, characterized in that the segmentation unit is specifically configured to: classify each pixel of the test MR image using the minimum sparse reconstruction error to obtain the image segmentation result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410856328.3A CN104484886B (en) | 2014-12-31 | 2014-12-31 | Segmentation method and device for MR images
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410856328.3A CN104484886B (en) | 2014-12-31 | 2014-12-31 | Segmentation method and device for MR images
Publications (2)
Publication Number | Publication Date |
---|---|
CN104484886A CN104484886A (en) | 2015-04-01 |
CN104484886B true CN104484886B (en) | 2018-02-09 |
Family
ID=52759426
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410856328.3A Active CN104484886B (en) | 2014-12-31 | 2014-12-31 | Segmentation method and device for MR images
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104484886B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106127794A (en) * | 2016-07-29 | 2016-11-16 | 天津大学 | Based on probability FCM algorithm MRI tumor image dividing method and system |
CN106504245A (en) * | 2016-10-28 | 2017-03-15 | 东北大学 | A kind of damaging pathological tissues image partition method of multi-modal brain image |
CN106530321B (en) * | 2016-10-28 | 2019-07-12 | 南方医科大学 | A kind of multichannel chromatogram image partition method based on direction and scale description |
CN106991435A (en) * | 2017-03-09 | 2017-07-28 | 南京邮电大学 | Intrusion detection method based on improved dictionary learning |
CN107464246A (en) * | 2017-07-14 | 2017-12-12 | 浙江大学 | A kind of image partition method based on collection of illustrative plates dictionary learning |
CN107657989B (en) * | 2017-09-11 | 2021-05-28 | 山东第一医科大学(山东省医学科学院) | Multimodal medical image platform based on sparse learning and mutual information |
WO2019104702A1 (en) * | 2017-12-01 | 2019-06-06 | 深圳先进技术研究院 | Adaptive joint sparse coding-based parallel magnetic resonance imaging method and apparatus and computer readable medium |
EP3830793A4 (en) * | 2018-07-30 | 2022-05-11 | Memorial Sloan Kettering Cancer Center | Multi-modal, multi-resolution deep learning neural networks for segmentation, outcomes prediction and longitudinal response monitoring to immunotherapy and radiotherapy |
CN112419247B (en) * | 2020-11-12 | 2022-03-18 | 复旦大学 | MR image brain tumor detection method and system based on machine learning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102129573A (en) * | 2011-03-10 | 2011-07-20 | 西安电子科技大学 | SAR (Synthetic Aperture Radar) image segmentation method based on dictionary learning and sparse representation |
CN102831614A (en) * | 2012-09-10 | 2012-12-19 | 西安电子科技大学 | Sequential medical image quick segmentation method based on interactive dictionary migration |
CN103714536A (en) * | 2013-12-17 | 2014-04-09 | 深圳先进技术研究院 | Sparse-representation-based multi-mode magnetic resonance image segmentation method and device |
- 2014-12-31 CN CN201410856328.3A patent/CN104484886B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102129573A (en) * | 2011-03-10 | 2011-07-20 | 西安电子科技大学 | SAR (Synthetic Aperture Radar) image segmentation method based on dictionary learning and sparse representation |
CN102831614A (en) * | 2012-09-10 | 2012-12-19 | 西安电子科技大学 | Sequential medical image quick segmentation method based on interactive dictionary migration |
CN103714536A (en) * | 2013-12-17 | 2014-04-09 | 深圳先进技术研究院 | Sparse-representation-based multi-mode magnetic resonance image segmentation method and device |
Non-Patent Citations (2)
Title |
---|
Two-level Bregmanized method for image interpolation with graph regularized sparse coding; Liu Qiegen et al.; Journal of Southeast University (English Edition); 2013-12-31; Vol. 29, No. 4; 384-388 *
Hyperspectral image classification based on spatial-correlation-constrained sparse representation; Liu Jianjun et al.; Journal of Electronics & Information Technology; 2012-11-15; Vol. 34, No. 11; 2666-2671 *
Also Published As
Publication number | Publication date |
---|---|
CN104484886A (en) | 2015-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104484886B (en) | Segmentation method and device for MR images | |
WO2020215985A1 (en) | Medical image segmentation method and device, electronic device and storage medium | |
CN110111313B (en) | Medical image detection method based on deep learning and related equipment | |
CN107506761B (en) | Brain image segmentation method and system based on significance learning convolutional neural network | |
WO2019200747A1 (en) | Method and device for segmenting proximal femur, computer apparatus, and storage medium | |
WO2022001623A1 (en) | Image processing method and apparatus based on artificial intelligence, and device and storage medium | |
CN108428229A (en) | It is a kind of that apparent and geometric properties lung's Texture Recognitions are extracted based on deep neural network | |
WO2020024127A1 (en) | Bone age assessment and height prediction model, system thereof and prediction method therefor | |
CN104573742B (en) | Classification method of medical image and system | |
CN111932529B (en) | Image classification and segmentation method, device and system | |
CN111597920B (en) | Full convolution single-stage human body example segmentation method in natural scene | |
Han et al. | Automated pathogenesis-based diagnosis of lumbar neural foraminal stenosis via deep multiscale multitask learning | |
CN110136122B (en) | Brain MR image segmentation method based on attention depth feature reconstruction | |
CN108648179A (en) | A kind of method and device of analysis Lung neoplasm | |
CN107784319A (en) | A kind of pathological image sorting technique based on enhancing convolutional neural networks | |
CN108664986B (en) | Based on lpNorm regularized multi-task learning image classification method and system | |
WO2021164280A1 (en) | Three-dimensional edge detection method and apparatus, storage medium and computer device | |
CN109949304B (en) | Training and acquiring method of image detection learning network, image detection device and medium | |
Abdullah et al. | Multi-sectional views textural based SVM for MS lesion segmentation in multi-channels MRIs | |
Gao et al. | Joint disc and cup segmentation based on recurrent fully convolutional network | |
CN110930378A (en) | Emphysema image processing method and system based on low data demand | |
CN111739037B (en) | Semantic segmentation method for indoor scene RGB-D image | |
CN114600155A (en) | Weakly supervised multitask learning for cell detection and segmentation | |
CN114550169A (en) | Training method, device, equipment and medium for cell classification model | |
CN110246567A (en) | A kind of medical image preprocess method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |