CN109166130A - Image processing method and image processing apparatus - Google Patents
Image processing method and image processing apparatus
- Publication number
- CN109166130A (application number CN201810885716.2A)
- Authority
- CN
- China
- Prior art keywords: image, data, image data, target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
All of the following classifications fall under G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
- G06T7/11 — Image analysis; segmentation; region-based segmentation
- G06T7/136 — Image analysis; segmentation; edge detection involving thresholding
- G06T2207/10088 — Image acquisition modality; tomographic images; magnetic resonance imaging [MRI]
- G06T2207/20081 — Special algorithmic details; training; learning
- G06T2207/20084 — Special algorithmic details; artificial neural networks [ANN]
- G06T2207/20132 — Image segmentation details; image cropping
- G06T2207/20221 — Image combination; image fusion; image merging
- G06T2207/30048 — Subject of image; biomedical image processing; heart; cardiac
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present application disclose an image processing method and an image processing apparatus. The method includes: converting raw image data, which contains a characteristic element, into target image data that meets a target resolution threshold; inputting the target image data into a first segmentation module to perform image segmentation and obtain a first segmented image; cropping the raw image data according to the first segmented image to obtain image region data that meets a target image parameter; inputting the image region data into a second segmentation module to perform image segmentation and obtain a second segmented image; and restoring the second segmented image to the original resolution space of the raw image data to obtain the target segmentation result for the characteristic element. The method can improve the processing efficiency of atrium segmentation in magnetic resonance images and raise the accuracy of atrium segmentation.
Description
Technical field
The present invention relates to the field of image processing, and in particular to an image processing method, an image processing apparatus, and a computer-readable storage medium.
Background technique
Atrial fibrillation is currently one of the most common cardiac arrhythmias. Its prevalence in the general population reaches about 2%, its incidence in the elderly is higher, and it carries a certain mortality risk, so it seriously threatens human health. Effective treatments for atrial fibrillation are still relatively scarce, mainly because the anatomical structure of the atrium is not yet deeply understood. Magnetic resonance techniques can generate three-dimensional images of the different endocardial structures, and in gadolinium-enhanced magnetic resonance images the difference between healthy and fibrotic tissue is more distinct, so such images are often used to help formulate targeted surgical ablation plans for atrial fibrillation. Accurate segmentation of the atrium is therefore key to understanding and analyzing atrial fibrosis, and facilitates both the development of effective treatments and the advance planning of surgery.
However, because the contrast of gadolinium-enhanced magnetic resonance images is relatively low, the distinction between atrial tissue and the surrounding tissue is not obvious enough, and directly segmenting the atrium, especially the left atrium, is quite challenging. Current practice still relies mainly on manual segmentation, which usually consumes a great deal of time and yields low segmentation accuracy.
Summary of the invention
Embodiments of the present application provide an image processing method and an image processing apparatus that can improve the processing efficiency of atrium segmentation in magnetic resonance images and raise the accuracy of atrium segmentation.
A first aspect of the embodiments of the present application provides an image processing method, comprising:
converting raw image data into target image data that meets a target resolution threshold, the raw image data including a characteristic element;
inputting the target image data into a first segmentation module to perform image segmentation and obtain a first segmented image;
cropping the raw image data according to the first segmented image to obtain image region data that meets a target image parameter;
inputting the image region data into a second segmentation module to perform image segmentation and obtain a second segmented image; and
restoring the second segmented image to the original resolution space of the raw image data to obtain the target segmentation result for the characteristic element.
In an optional embodiment, converting the raw image data into the target image data that meets the target resolution threshold includes:
converting the raw image data into first image data that meets a first resolution threshold; and
downsampling the first image data to obtain the target image data that meets the target resolution threshold.
In an optional embodiment, before the raw image data is cropped according to the first segmented image to obtain the image region data that meets the target image parameter, the method further includes:
obtaining the centroid coordinates of the characteristic element in the first segmented image;
and cropping the raw image data according to the first segmented image to obtain the image region data that meets the target image parameter includes:
restoring the first segmented image to the original resolution space centered on the centroid coordinates, and cropping out the region data that meets the target image size.
In an optional embodiment, the target image parameter includes a target image size threshold.
In an optional embodiment, the raw image data includes a gadolinium-enhanced magnetic resonance image.
In an optional embodiment, the first segmentation module includes a first neural network structure, and the second segmentation module includes a second neural network structure.
In an optional embodiment, the training method for the first neural network structure includes:
converting first original training data into first training data that meets the first resolution threshold, the first original training data including a first characteristic element;
downsampling the first training data to obtain target training data that meets a second resolution threshold;
inputting the target training data into a first training module to obtain a first feature map;
generating a first target feature map from the first feature map, the resolution of the first target feature map being the second resolution threshold;
fusing the first feature map with the first target feature map to obtain first probability distribution information; and
updating the network parameters of the first training module according to the first probability distribution information to obtain the trained first neural network structure.
In an optional embodiment, the first probability distribution information includes, for each element of the first feature map, the probability that the element is the left atrium and/or the probability that it is not the left atrium.
In an optional embodiment, the training method for the second neural network structure includes:
obtaining the centroid coordinates of a second characteristic element in original segmentation data;
cropping the original segmentation data centered on the centroid coordinates to obtain training region data;
inputting the training region data into a second training module to obtain a second feature map;
generating a second target feature map from the second feature map, the resolution of the second target feature map being identical to that of the training region data;
fusing the second feature map with the second target feature map to obtain second probability distribution information; and
updating the network parameters of the second training module according to the second probability distribution information to obtain the trained second neural network structure.
In an optional embodiment, the second probability distribution information includes, for each element of the second feature map, the probability that the element is the left atrium and/or the probability that it is not the left atrium.
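Concretely, the "is / is not the left atrium" probabilities in the two embodiments above can be pictured as coming from two per-element network scores. A two-channel softmax is one common way to do this; the application does not name the activation, so this is an illustrative assumption, and all names and scores below are made up:

```python
import math

def softmax_pair(fg_score, bg_score):
    # Turn two raw scores into (p_is_atrium, p_not_atrium); subtracting the
    # max before exp() keeps the computation numerically stable.
    m = max(fg_score, bg_score)
    ef, eb = math.exp(fg_score - m), math.exp(bg_score - m)
    return ef / (ef + eb), eb / (ef + eb)

# Made-up scores for a tiny 4-element feature map.
fg = [2.0, -1.0, 0.0, 3.0]
bg = [0.0, 1.0, 0.0, -3.0]
probs = [softmax_pair(f, b) for f, b in zip(fg, bg)]
mask = [1 if p > 0.5 else 0 for p, _ in probs]   # binary left-atrium mask
```

Thresholding the foreground probability at 0.5 is what turns the probability distribution information into the binary mask used for localization later in the description.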
A second aspect of the embodiments of the present application provides an image processing apparatus, comprising:
an image conversion module, configured to convert raw image data into target image data that meets a target resolution threshold, the raw image data including a characteristic element;
a first segmentation module, configured to perform image segmentation on the target image data to obtain a first segmented image;
a cropping module, configured to crop the raw image data according to the first segmented image to obtain image region data that meets a target image parameter;
a second segmentation module, configured to perform image segmentation on the image region data to obtain a second segmented image; and
a recovery module, configured to restore the second segmented image to the original resolution space to obtain the target segmentation result for the characteristic element.
A third aspect of the embodiments of the present application provides another image processing apparatus, including a processor and a memory, the memory being configured to store one or more programs, the one or more programs being configured to be executed by the processor and including instructions for performing some or all of the steps of any method of the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium that stores a computer program for electronic data interchange, wherein the computer program causes a computer to perform some or all of the steps of any method of the first aspect.
In the embodiments of the present application, raw image data containing a characteristic element is converted into target image data that meets a target resolution threshold; the target image data is input into a first segmentation module for image segmentation to obtain a first segmented image; the raw image data is then cropped according to the first segmented image to obtain image region data that meets a target image parameter; the image region data is input into a second segmentation module for image segmentation to obtain a second segmented image; and finally the second segmented image is restored to the original resolution space of the raw image data to obtain the target segmentation result for the characteristic element. This can improve the processing efficiency of atrium segmentation in magnetic resonance images and raise the accuracy of atrium segmentation.
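The coarse-to-fine flow just summarized can be illustrated with a minimal one-dimensional analogue. This is only a sketch, not the patented implementation: plain lists stand in for 3D volumes, both "segmentation modules" are replaced by thresholding functions, and every name below is illustrative.

```python
def downsample(signal, s):
    # s-fold downsampling by window mean (assumes len(signal) divisible by s).
    return [sum(signal[i:i + s]) / s for i in range(0, len(signal), s)]

def segment(signal, thr=0.5):
    # Stand-in for a trained network: threshold into a binary mask.
    return [1 if v > thr else 0 for v in signal]

def centroid(mask):
    idx = [i for i, v in enumerate(mask) if v]
    return sum(idx) // len(idx) if idx else len(mask) // 2

def crop(signal, center, size):
    # Fixed-size window around `center`, clamped to the signal bounds.
    lo = max(0, min(center - size // 2, len(signal) - size))
    return lo, signal[lo:lo + size]

def restore(mask, offset, full_len):
    out = [0] * full_len
    out[offset:offset + len(mask)] = mask
    return out

raw = [0.0] * 8 + [1.0] * 4 + [0.0] * 4       # feature occupies indices 8..11
low = downsample(raw, 2)                      # step 1: low-resolution copy
rough = segment(low)                          # step 2: coarse segmentation
center = centroid(rough) * 2                  # centroid mapped back to raw scale
offset, region = crop(raw, center, 8)         # step 3: region of interest
fine = segment(region)                        # step 4: fine segmentation
result = restore(fine, offset, len(raw))      # step 5: back to original space
```

Because the second stage only ever sees the small cropped region, the expensive fine segmentation runs on a fraction of the voxels, which is where the memory and time savings described later come from.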
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required by the description of the embodiments or the prior art are briefly introduced below.
Fig. 1 is a schematic flowchart of an image processing method disclosed in an embodiment of the present application;
Fig. 2 is a schematic flowchart of a training method for a neural network structure in an image processing method disclosed in an embodiment of the present application;
Fig. 3 is a schematic flowchart of a training method for a neural network structure in another image processing method disclosed in an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an image processing apparatus disclosed in an embodiment of the present application;
Fig. 5 is a schematic structural diagram of another image processing apparatus disclosed in an embodiment of the present application.
Detailed description of the embodiments
To help those skilled in the art better understand the solution of the present invention, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", and the like in the description, the claims, and the accompanying drawings are used to distinguish different objects, not to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that comprises a series of steps or units is not limited to the listed steps or units, but optionally further includes unlisted steps or units, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present invention. The appearances of this phrase in various places in the description do not necessarily all refer to the same embodiment, nor to independent or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The image processing apparatus involved in the embodiments of the present application may be an electronic device, including a terminal device. In a specific implementation, the terminal device includes, but is not limited to, portable devices with a touch-sensitive surface (for example, a touch-screen display and/or a touchpad) such as mobile phones, laptop computers, or tablet computers. It should also be understood that in some embodiments the device is not a portable communication device but a desktop computer with a touch-sensitive surface (for example, a touch-screen display and/or a touchpad).
The concept of deep learning in the embodiments of the present application is derived from research on artificial neural networks. A multilayer perceptron containing multiple hidden layers is one kind of deep learning structure. Deep learning combines low-level features to form more abstract high-level representations of attribute categories or features, thereby discovering distributed feature representations of the data.
Deep learning is a family of machine learning methods based on representation learning from data. An observation (for example, an image) can be represented in many ways, such as a vector of per-pixel intensity values, or more abstractly as a series of edges, regions of specific shapes, and so on. Certain representations make it easier to learn tasks from examples (for example, face recognition or facial expression recognition). The benefit of deep learning is that efficient unsupervised or semi-supervised algorithms for feature learning and hierarchical feature extraction replace hand-crafted features. Deep learning is a new field of machine learning research; its motivation is to build neural networks that simulate the way the human brain analyzes and learns, imitating the mechanisms of the human brain to interpret data such as images, sound, and text.
In machine learning, a convolutional neural network (CNN) is a kind of deep feed-forward neural network that has been applied successfully to image recognition. Convolutional neural networks are used ever more widely in the image domain; a typical CNN mainly includes convolutional layers, pooling layers, fully connected layers, a loss layer, and so on. Convolutional neural networks include one-dimensional, two-dimensional, and three-dimensional variants: one-dimensional CNNs are commonly applied to sequence data; two-dimensional CNNs are commonly applied to image recognition tasks; three-dimensional CNNs are mainly used for medical image and video data recognition. The neural networks mentioned in the embodiments of the present application may be three-dimensional convolutional neural networks. Many deep learning frameworks (such as MXNet, Caffe, etc.) are now open source, which makes training a model very simple.
The embodiments of the present application are described in detail below.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of an image processing method disclosed in an embodiment of the present application. As shown in Fig. 1, the image processing method may be executed by the above image processing apparatus and includes the following steps.
101: Convert raw image data into target image data that meets a target resolution threshold, the raw image data including a characteristic element.
The raw image data mentioned in the embodiments of the present application may be a three-dimensional image of the heart acquired by various medical imaging devices, for example a three-dimensional image of the different endocardial structures generated by magnetic resonance techniques. Magnetic resonance techniques can produce magnetic resonance imaging (MRI), which can determine the chemical structure of a substance and the density distribution of certain components without destroying the sample; their application has rapidly spread beyond physics and chemistry to medicine, bioengineering, and other fields, and they have become one of the most powerful methods for analyzing the complex structure of biological macromolecules and diagnosing disease.
In gadolinium-enhanced magnetic resonance images the difference between healthy and fibrotic tissue is more distinct, so such images are often used to help formulate targeted surgical ablation plans for atrial fibrillation. The raw image data in the present application may be a gadolinium-enhanced magnetic resonance image.
The characteristic element can be understood as the segmentation target of the image processing. For example, if the raw image data is a gadolinium-enhanced magnetic resonance image containing the heart, the characteristic element may be the left atrium, i.e. the method performs left atrium segmentation.
Before image processing is performed by the deep learning model, the raw image data may first be preprocessed and converted into target image data that meets the target resolution threshold, after which step 102 is executed.
Unifying the input raw image data to the same resolution can improve the efficiency of image processing and facilitates the subsequent convolution processing.
Specifically, the above step 101 may include:
(1) converting the raw image data into first image data that meets a first resolution threshold;
(2) downsampling the first image data to obtain the target image data that meets the target resolution threshold.
Specifically, by cropping and/or padding the images, the input raw image data can be unified to the first resolution threshold. For example, a pre-stored first resolution threshold may be 576*576*96; this step converts the resolution of the raw image data to 576*576*96 to obtain the first image data, which is convenient for subsequent unified processing.
Downsampling (also called subsampling) in the embodiments of the present application can be understood as a way of shrinking an image. It has two main purposes: 1. making the image fit the size of the display area; 2. generating a thumbnail of the corresponding image. For an image I of size M*N, s-fold downsampling yields an image of resolution (M/s)*(N/s), where s is a common divisor of M and N. If the image is considered in matrix form, each s*s window of the original image becomes one pixel whose value is the mean of all pixels in the window; in this case the number of pixels is reduced to 1/s² of the original.
The target resolution threshold may be stored in advance in the image processing apparatus, for example a resolution of 144*144*48. The first image data can be downsampled to obtain target image data of resolution 144*144*48, after which step 102 is executed; this reduces video memory consumption and facilitates the subsequent convolution processing.
Video memory, also called the frame buffer, stores the rendering data that has been or is about to be processed by the graphics chip. Like the main memory of a computer, video memory is the component that stores the graphics information to be processed.
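The two preprocessing sub-steps of step 101 — unify the resolution by cropping/padding, then window-mean downsample to the target resolution — can be sketched in two dimensions. The real data is three-dimensional and the real shapes are 576*576*96 and 144*144*48; the toy sizes and all function names below are illustrative only:

```python
def pad_or_crop_1d(seq, target, fill=0):
    # Center-crop an axis that is too long, symmetrically pad one too short.
    n = len(seq)
    if n > target:
        start = (n - target) // 2
        return list(seq[start:start + target])
    left = (target - n) // 2
    return [fill] * left + list(seq) + [fill] * (target - n - left)

def downsample_mean(img, s):
    # s-fold downsampling: every s*s window becomes one pixel holding the
    # window mean, so the pixel count shrinks by a factor of s*s.
    out = []
    for i in range(0, len(img), s):
        row = []
        for j in range(0, len(img[0]), s):
            window = [img[i + di][j + dj] for di in range(s) for dj in range(s)]
            row.append(sum(window) / (s * s))
        out.append(row)
    return out

raw = [[4, 4, 8, 8],
       [4, 4, 8, 8]]
# Sub-step (1): unify the resolution (here: width to 4, height padded to 4).
first = pad_or_crop_1d([pad_or_crop_1d(r, 4) for r in raw], 4, fill=[0] * 4)
# Sub-step (2): 2-fold window-mean downsampling to the target resolution.
target = downsample_mean(first, 2)
```

The same per-axis pad-or-crop applied along all three axes reproduces the unification to 576*576*96, and the same window mean with s chosen per axis reproduces the reduction to 144*144*48.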
102: Input the target image data into the first segmentation module to perform image segmentation and obtain a first segmented image.
The first segmentation module can perform a rough segmentation of the downsampled low-resolution target image data and estimate the position of the characteristic element from the segmentation result. The first segmentation module may include a fully convolutional neural network structure. The convolutional neural networks in the embodiments of the present application are feed-forward neural networks whose artificial neurons can respond to surrounding units; they can carry out large-scale image processing and include convolutional layers and pooling layers. A three-dimensional convolutional neural network may be used in the embodiments of the present application, because such networks are mainly applied to medical image and video data recognition.
103: Crop the raw image data according to the first segmented image to obtain image region data that meets a target image parameter.
The raw image data is three-dimensional image data, and the resolution space in which it exists is called the original resolution space. Specifically, the center of the first segmented image can be restored to the original resolution space of the raw image data, after which the raw image data is cropped to obtain the image region data that meets the target image parameter. The center of the first segmented image obtained in step 102 can be restored to the original resolution space; the target image parameter includes a target image size threshold, and the raw image data is cropped around the center of the first segmented image according to the target image parameter to obtain image region data containing the characteristic element.
For example, with a target image size threshold of 240*160*96, a 240*160*96 three-dimensional image region around the characteristic element can be obtained. Step 104 is executed after the image has been cropped further; this can improve the accuracy of image processing.
Optionally, before the raw image data is cropped according to the first segmented image to obtain the image region data that meets the target image parameter, the method further includes:
obtaining the centroid coordinates of the characteristic element in the first segmented image;
and cropping the raw image data according to the first segmented image to obtain the image region data that meets the target image parameter includes:
restoring the first segmented image to the original resolution space centered on the centroid coordinates, cropping the raw image data, and obtaining the image region data that meets the target image size threshold.
Specifically, the first segmentation module may use the above neural network to output probability maps of the background and the foreground. A simple binary test is applied to these two per-voxel probability distributions: each voxel is assigned to foreground or background according to which probability is higher, and the resulting binary mask can be used for coarse localization. For example, the center of gravity of the predicted mask (which can be understood as the left atrium in the first segmented image) can be computed and taken as the center of the left atrium; this center is then restored to the original resolution space of the raw image data, a region of size 240*160*96 is cropped around it, and the region is fed into the second segmentation module, i.e. step 104 is executed.
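The coarse localization just described — centroid of the binary mask, then a fixed-size window around it — might be sketched as follows (a 2D stand-in for the 3D case, with the 240*160*96 crop shrunk to toy sizes; the names are illustrative):

```python
def centroid_2d(mask):
    # Center of gravity of the foreground voxels of a binary mask.
    pts = [(i, j) for i, row in enumerate(mask) for j, v in enumerate(row) if v]
    return (round(sum(i for i, _ in pts) / len(pts)),
            round(sum(j for _, j in pts) / len(pts)))

def crop_around(img, center, h, w):
    # Fixed-size window around `center`, clamped to stay inside the image;
    # the offset is returned so the result can later be pasted back.
    ci, cj = center
    i0 = max(0, min(ci - h // 2, len(img) - h))
    j0 = max(0, min(cj - w // 2, len(img[0]) - w))
    return [row[j0:j0 + w] for row in img[i0:i0 + h]], (i0, j0)

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
center = centroid_2d(mask)                        # center of gravity
region, offset = crop_around(mask, center, 2, 2)  # region of interest
```

Clamping the window to the image bounds matters in practice: when the atrium sits near the edge of the volume, an unclamped 240*160*96 window would reach outside the data.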
104: Input the image region data into the second segmentation module to perform image segmentation and obtain a second segmented image.
The second segmentation module further segments the image region data to obtain the second segmented image. In this step, the trained convolutional neural network performs accurate segmentation on the cropped image region data, and a segmentation result with higher accuracy can be obtained.
105: Restore the second segmented image to the original resolution space to obtain the target segmentation result for the characteristic element.
Restoring the accurately segmented second segmented image to the original resolution space yields the target segmentation result for the characteristic element.
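Restoring to the original resolution space amounts to pasting the fine segmentation back at the offset recorded when the region was cropped. A 2D sketch under the same toy-size assumption as above (illustrative names):

```python
def restore_to_original(seg, offset, full_shape):
    # Paste a cropped segmentation back at `offset` inside a zero canvas
    # of the original resolution.
    h, w = full_shape
    out = [[0] * w for _ in range(h)]
    i0, j0 = offset
    for di, row in enumerate(seg):
        for dj, v in enumerate(row):
            out[i0 + di][j0 + dj] = v
    return out

fine_seg = [[1, 1],
            [1, 0]]   # fine segmentation of the cropped region
full = restore_to_original(fine_seg, (1, 1), (4, 4))
```

Every voxel outside the cropped region is background by construction, which is consistent with the coarse stage having localized the characteristic element inside that region.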
A common processing approach takes the three-dimensional magnetic resonance data directly as input; it consumes a large amount of video memory, takes a long time to compute, and places high demands on the computing equipment. In the embodiments of the present application, segmentation is divided into two steps, coarse localization and fine segmentation, realized by training two similar neural networks, which reduces video memory consumption and computation time. In a specific implementation, completing one segmentation takes only 2 seconds and 2.6 GB of video memory, and the method can be deployed on an ordinary computer and is simple to implement.
The embodiments of the present application can use fully convolutional neural networks to perform fully automatic segmentation of the left atrium in gadolinium-enhanced magnetic resonance images, which substantially improves segmentation accuracy compared with conventional methods.
In the embodiments of the present application, raw image data containing a characteristic element is converted into target image data that meets a target resolution threshold; the target image data is input into a first segmentation module for image segmentation to obtain a first segmented image; the raw image data is then cropped according to the first segmented image to obtain image region data that meets a target image parameter; the image region data is input into a second segmentation module for image segmentation to obtain a second segmented image; and finally the second segmented image is restored to the original resolution space to obtain the target segmentation result for the characteristic element. This can improve the processing efficiency of atrium segmentation in magnetic resonance images and raise the accuracy of atrium segmentation.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a training method for a neural network structure disclosed in an embodiment of the present application. The first neural network structure can be obtained by training with this method and used by the first segmentation module to realize its functions. The subject executing the steps of this embodiment may be an image processing apparatus for medical image processing. As shown in Fig. 2, the training method for the neural network structure includes the following steps.
201, the first original training data is converted to the first training data for meeting first resolution threshold value, above-mentioned first
Original training data includes fisrt feature element.
Before executing the image processing method shown in Fig. 1, the neural network structure can be trained with existing segmentation data pairs (image + mask). The first characteristic element can be understood as the segmentation target of the image processing; for example, in this embodiment the first original training data is a gadolinium-enhanced magnetic resonance image containing the heart, and the first characteristic element is the left atrium, i.e. the training method is used to realize left-atrium segmentation.
Before the neural network structure is trained, the first original training data can be preprocessed and converted into the first training data meeting the first resolution threshold, after which step 202 is executed.
Specifically, the input raw image data can be unified to the first resolution threshold by image cropping and/or image padding, for example converting the resolution to 576*576*96, thereby obtaining the first training data. Unifying all input data to the same resolution improves the efficiency of image processing and facilitates the subsequent convolution processing.
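The crop-and/or-pad unification step described above can be sketched as follows. This is a minimal NumPy illustration, not the patent's actual implementation; the target shape 576*576*96 comes from the text, while the centering and zero-padding policy are assumptions.

```python
import numpy as np

def crop_or_pad(volume, target_shape=(576, 576, 96)):
    """Center-crop and/or zero-pad a 3D volume to a fixed target shape."""
    out = np.zeros(target_shape, dtype=volume.dtype)
    src, dst = [], []
    for in_dim, out_dim in zip(volume.shape, target_shape):
        if in_dim >= out_dim:            # crop: take a centered window
            start = (in_dim - out_dim) // 2
            src.append(slice(start, start + out_dim))
            dst.append(slice(0, out_dim))
        else:                            # pad: place the input in the center
            start = (out_dim - in_dim) // 2
            src.append(slice(0, in_dim))
            dst.append(slice(start, start + in_dim))
    out[tuple(dst)] = volume[tuple(src)]
    return out

vol = np.random.rand(640, 640, 88).astype(np.float32)  # one of the input sizes named later
unified = crop_or_pad(vol)
print(unified.shape)  # (576, 576, 96)
```

Here the 640*640*88 input is cropped in-plane and zero-padded along the slice axis, so every training volume ends up at the same resolution.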
202. Downsample the first training data to obtain target training data that meets a second resolution threshold.
Specifically, the second resolution threshold, for example 144*144*48, can be stored in the image processing apparatus in advance; downsampling the first training data then yields target training data at 144*144*48 resolution, after which step 203 is executed. This reduces video memory consumption and facilitates the subsequent convolution processing.
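One simple way to realize this downsampling is block averaging; the factors (4, 4, 2) take a 576*576*96 volume to the 144*144*48 resolution named above. The averaging scheme itself is an illustrative assumption, since any standard downsampling (striding, trilinear interpolation, etc.) would serve the same purpose.

```python
import numpy as np

def block_downsample(volume, factors=(4, 4, 2)):
    """Downsample a 3D volume by averaging non-overlapping blocks.

    Each axis must be divisible by its factor; (576, 576, 96) with
    factors (4, 4, 2) gives (144, 144, 48).
    """
    d, h, w = volume.shape
    fd, fh, fw = factors
    return volume.reshape(d // fd, fd, h // fh, fh, w // fw, fw).mean(axis=(1, 3, 5))

vol = np.ones((576, 576, 96), dtype=np.float32)
small = block_downsample(vol)
print(small.shape)  # (144, 144, 48)
```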
203. Input the target training data into a first training module to obtain first feature maps.
In this embodiment, for three-dimensional image segmentation, a three-dimensional fully convolutional neural network structure based on V-Net or 3D U-Net can be used. A convolutional network has two main operations. One is convolution (Convolution), used for feature extraction; usually multiple convolutional layers are used to obtain deeper feature maps. The other is pooling (Pooling), which compresses the input feature maps: on the one hand it makes the feature maps smaller and simplifies the computational complexity of the network, and on the other hand it compresses the features and extracts the main ones. A pooling layer does not affect the interaction between feature channels (channels for short); it operates within each feature channel separately, whereas a convolutional layer does interact across channels and generates new feature channels for the next layer.
V-Net is a fully convolutional neural network that uses convolution operations to extract features at different scales from the data and reduces the resolution by applying appropriate strides. The left-hand part of the network is the encoding structure of a standard convolutional network; it captures contextual information at local to global scales. The right-hand part decodes the signal back to its original size and outputs two channels, representing the probabilities of foreground and background respectively.
The left side of the network is divided into several stages operating at different resolutions. Each stage consists of one or two convolutional layers and learns a residual function, i.e. the input of each stage is added to the output of the stage's last convolutional layer. The convolutions executed in each layer use volumetric kernels of size 5 x 5 x 5, and the pooling operation is realized by a convolution with kernel size 2 x 2 x 2 and stride 2. In addition, at each stage of the encoding path the number of feature channels doubles and the resolution halves. At the end of each stage, batch normalization (Batch Normalization) and the parametric rectified linear unit (PReLU) are applied.
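The PReLU activation named above is the identity for positive inputs and a learned slope for negative ones. A minimal sketch, where the scalar slope 0.25 is only an illustrative default (in the network it is a learnable parameter, often one per channel):

```python
import numpy as np

def prelu(x, alpha=0.25):
    """Parametric ReLU: x for x > 0, alpha * x otherwise."""
    return np.where(x > 0, x, alpha * x)

print(prelu(np.array([-2.0, 0.0, 3.0])))  # [-0.5  0.   3. ]
```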
The right side of the network is the counterpart of the symmetric left side: it extracts features and expands the spatial support in order to output the two-channel foreground/background probability distribution. Similar to the left-hand part, each stage of the right-hand part also contains one or two convolutional layers and likewise learns a residual function. The convolutions executed in each layer also use kernels of size 5 x 5 x 5, and the inverse of the pooling operation is realized by deconvolution (transposed convolution).
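The decoder's deconvolution doubles the spatial resolution at each stage. A learned transposed convolution does this with trainable weights; the parameter-free nearest-neighbor version below is only a sketch of the shape behavior, not of the learned operation itself.

```python
import numpy as np

def upsample_nearest(volume, factors=(2, 2, 2)):
    """Double each spatial dimension of a 3D volume by repetition,
    mirroring the resolution doubling of a stride-2 deconvolution."""
    for axis, f in enumerate(factors):
        volume = np.repeat(volume, f, axis=axis)
    return volume

vol = np.ones((9, 9, 3), dtype=np.float32)  # deepest encoder resolution in the text
print(upsample_nearest(vol).shape)  # (18, 18, 6)
```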
The downsampled target training data goes through multiple rounds of convolution, pooling, Batch Normalization and PReLU. Batch Normalization is an algorithm commonly used in deep networks to accelerate neural network training and improve convergence speed and stability. PReLU (Parametric Rectified Linear Unit) is a rectified linear unit (Rectified Linear Unit, ReLU) augmented with a learnable parameter; the ReLU, also called a rectifier, is an activation function commonly used in artificial neural networks, generally referring to nonlinear functions represented by the ramp function and its variants, and simplifies the computation. Multiple feature maps can be generated by the first training module. Specifically, continuing the example of step 202 where the second resolution threshold is 144*144*48, the first feature maps obtained can have resolutions of 144*144*48, 72*72*24, 36*36*12, 18*18*6 and 9*9*3 respectively, with the number of feature channels increasing from 8 to 128.
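The resolution/channel progression just listed follows directly from the halve-resolution, double-channels rule of the encoder; the starting values (144*144*48 resolution, 8 channels, 5 stages) are those given in the text.

```python
def encoder_pyramid(shape=(144, 144, 48), channels=8, stages=5):
    """List (resolution, channels) for each encoder stage: spatial
    resolution halves and channel count doubles per stage."""
    out = []
    for _ in range(stages):
        out.append((shape, channels))
        shape = tuple(s // 2 for s in shape)
        channels *= 2
    return out

for res, ch in encoder_pyramid():
    print(res, ch)
# (144, 144, 48) 8 ... down to (9, 9, 3) 128
```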
204. Generate a first target feature map from the first feature maps, where the resolution of the first target feature map is the second resolution threshold.
Specifically, deconvolution operations can be used to gradually restore the first feature maps and obtain the first target feature map, whose resolution is the second resolution threshold; that is, the first feature maps can be restored to the same resolution as the downsampled target training data, e.g. the aforementioned 144*144*48, after which step 205 is executed.
205. Fuse the first feature maps with the first target feature map to obtain first probability distribution information.
Fusing the obtained first target feature map with the corresponding first feature maps obtained earlier allows local and global information of the image to be combined; the first probability distribution information is then obtained through a final softmax layer, and may include two output channels at the second resolution threshold. The first probability distribution information may include, for each element of the first feature map, the probability that it is the left atrium and/or the probability that it is not the left atrium.
In a concrete implementation where the second resolution threshold is 144*144*48, two output channels of resolution 144*144*48 can be obtained here, each representing the probability distribution of whether an element in the feature map belongs to the left atrium.
The softmax can be understood as a normalization; under normal conditions it is used in the last layer of the network for the final classification and normalization. The input and output of the softmax layer have the same dimension. For example, if an image classification task has 100 classes, the softmax layer outputs a 100-dimensional vector: the first value in the vector is the probability that the current image belongs to the first class, the second value is the probability that it belongs to the second class, and so on, with the 100 values of the vector summing to 1.
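The standard softmax can be written in a few lines; the two-element example mirrors the two-channel left-atrium/background output described above, with illustrative logit values.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis: exponentiate after
    subtracting the max, then normalize so the outputs sum to 1."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

logits = np.array([2.0, 0.5])   # per-voxel scores: left atrium vs. background
probs = softmax(logits)
print(probs, probs.sum())       # two probabilities summing to 1
```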
Step 206 can be executed after the first probability distribution information is obtained.
206. Update the network parameters in the first training module according to the first probability distribution information, to obtain the trained first neural network structure.
Specifically, a DICE, IoU or other loss function can be used, and the network parameters are updated with the back-propagation algorithm until the model converges; this yields the first neural network structure used for coarse segmentation. In statistics, statistical decision theory and economics, a loss function is a function that maps an event (an element of a sample space) onto a real number expressing the economic cost or opportunity cost associated with that event. More generally, a loss function in statistics is a function measuring the degree of loss and error (where the loss relates to estimating "wrongly", e.g. a loss of expense or equipment).
The learning process of the back-propagation (BP) algorithm consists of a forward-propagation pass and a back-propagation pass. During forward propagation, the input information passes from the input layer through the hidden layers, is processed layer by layer and transmitted to the output layer. If the desired output value is not obtained at the output layer, the sum of squared errors between the output and the desired output is taken as the objective function and propagated backwards; the partial derivatives of the objective function with respect to each neuron's weights are computed layer by layer, forming the gradient of the objective function with respect to the weight vector, which serves as the basis for modifying the weights. The learning of the network is completed in the course of these weight modifications; when the error reaches the desired value, network learning ends. Through the learning rules of the back-propagation algorithm, the present application can obtain a more accurately trained first neural network structure.
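The forward/backward loop described above can be illustrated at its smallest possible scale: one weight, a squared-error objective, and repeated gradient steps. All numeric values here are illustrative, not taken from the patent.

```python
# Single-weight sketch of the BP loop: forward pass computes the output,
# the squared error is the objective, and the weight moves against its gradient.
w, x, target, lr = 0.0, 2.0, 1.0, 0.1
for _ in range(100):
    y = w * x                      # forward propagation
    grad = 2.0 * (y - target) * x  # d/dw of (y - target)**2
    w -= lr * grad                 # weight modification
print(round(w, 4))  # converges to 0.5, since 0.5 * 2.0 reproduces the target
```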
The first neural network structure described above can be used in the embodiment shown in Fig. 1; that is, the first segmentation processing module may include this first neural network structure to realize its segmentation function.
In this embodiment of the present application, the first original training data containing a first characteristic element is converted into first training data that meets a first resolution threshold; the first training data is downsampled to obtain target training data meeting a second resolution threshold; the target training data is then input into the first training module to obtain first feature maps; a first target feature map whose resolution is the second resolution threshold is generated from the first feature maps; the first feature maps are then fused with the first target feature map to obtain first probability distribution information; and the network parameters in the first training module can be updated according to the first probability distribution information to obtain the trained first neural network structure. This improves the accuracy of the neural network and yields a neural network structure particularly suitable for left-atrium segmentation.
Referring to Fig. 3, Fig. 3 is a flow diagram of another training method for a neural network structure disclosed in an embodiment of the present application. The second neural network structure described above can be obtained by training with this method, and can be used by the second segmentation processing module to realize its function. The subject executing the steps of this embodiment may be an image processing apparatus for medical image processing. As shown in Fig. 3, the training method of the neural network structure includes the following steps:
301. Obtain the barycentric coordinates of a second characteristic element in original segmentation data.
The original segmentation data is segmentation data used for training the neural network; existing segmentation data pairs comprise images and masks, i.e. image data that has already been segmented, and can come from a training dataset. The second characteristic element can be understood as the segmentation target of the image processing; for example, in this embodiment the original segmentation data is a gadolinium-enhanced magnetic resonance image containing the heart, and the second characteristic element is the left atrium, i.e. the training method is used to realize left-atrium segmentation.
A mask here can be understood as a special grayscale image with the same resolution as the image data: the intensity value of each of its pixels indicates the class of the pixel at the corresponding position of the image data, and masks are frequently used to indicate different segmentation targets in image data. In particular, in the present application a mask value of 1 indicates that the corresponding pixel is the left atrium, and a mask value of 0 indicates that the corresponding pixel is background. From the mask given in the existing segmentation data pairs, the barycentric coordinates of the left atrium can be calculated, after which step 302 is executed.
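Computing the barycentric coordinates from such a mask is a one-line center-of-mass calculation over the foreground voxels; the small mask below is illustrative.

```python
import numpy as np

def mask_centroid(mask):
    """Barycentric (center-of-mass) coordinates of the foreground voxels,
    where mask value 1 marks the left atrium and 0 marks background."""
    coords = np.argwhere(mask == 1)
    return coords.mean(axis=0)

mask = np.zeros((8, 8, 8))
mask[2:4, 2:4, 2:4] = 1      # a small 2x2x2 foreground block
print(mask_centroid(mask))   # [2.5 2.5 2.5]
```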
302. Crop the original segmentation data around the barycentric coordinates to obtain training region data.
After the barycentric coordinates are obtained, a fixed-size region that fully covers the left atrium can be cropped out of the original segmentation data centered on these coordinates. The resolution of the training region data can be a preset third resolution threshold, for example 240*160*96, giving training region data of 240*160*96 resolution; this reduces video memory consumption and facilitates the subsequent convolution processing.
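The fixed-size crop around the centroid can be sketched as follows. The window size is the 240*160*96 threshold from the text; clamping the window so it stays inside the volume is an assumption about how boundary cases are handled.

```python
import numpy as np

def crop_around(volume, center, size=(240, 160, 96)):
    """Crop a fixed-size region centered on the given coordinates,
    clamping the window so it remains inside the volume."""
    starts = []
    for c, s, dim in zip(center, size, volume.shape):
        start = int(round(c)) - s // 2
        start = max(0, min(start, dim - s))  # keep the window in bounds
        starts.append(start)
    slices = tuple(slice(st, st + s) for st, s in zip(starts, size))
    return volume[slices]

vol = np.random.rand(576, 576, 96).astype(np.float32)
region = crop_around(vol, center=(300.0, 280.0, 48.0))  # illustrative centroid
print(region.shape)  # (240, 160, 96)
```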
303. Input the training region data into a second training module to obtain second feature maps.
A three-dimensional fully convolutional neural network structure based on V-Net or 3D U-Net is used again, and the training region data (the cropped three-dimensional magnetic resonance image) goes through multiple rounds of convolution, pooling, Batch Normalization and PReLU processing to generate feature maps, for example at resolutions 240*160*96, 120*80*48, 60*40*24, 30*20*12 and 15*10*6.
For details, the above step 303 can refer to the specific description of step 203 of the embodiment shown in Fig. 2, which is not repeated here.
304. Generate a second target feature map from the second feature maps, where the resolution of the second target feature map is the same as the resolution of the training region data.
Specifically, deconvolution operations can be used to gradually restore the second feature maps to the resolution of the training region data, e.g. the aforementioned third resolution threshold of 240*160*96, after which step 305 is executed.
The above step 304 can refer to the specific description of step 204 of the embodiment shown in Fig. 2, which is not repeated here.
305. Fuse the second feature maps with the second target feature map to obtain second probability distribution information.
Fusing the obtained second target feature map with the corresponding second feature maps obtained earlier allows local and global information of the image to be combined; the second probability distribution information is then obtained through a final softmax layer, and may include two output channels at the third resolution threshold. The second probability distribution information may include, for each element of the second feature map, the probability that it is the left atrium and/or the probability that it is not the left atrium.
In a concrete implementation where the third resolution threshold is 240*160*96, two output channels of resolution 240*160*96 can be obtained here, each representing the probability distribution of whether an element in the feature map belongs to the left atrium. Step 306 can be executed after the second probability distribution information is obtained.
The above step 305 can refer to the specific description of step 205 of the embodiment shown in Fig. 2, which is not repeated here.
306. Update the network parameters in the second training module according to the second probability distribution information, to obtain the trained second neural network structure.
Specifically, a DICE, IoU or other loss function can be used, and the network parameters are updated with the back-propagation algorithm until the model converges; this yields the second neural network structure used for fine segmentation.
The second neural network structure described above can be used in the embodiment shown in Fig. 1; that is, the second segmentation processing module may include this second neural network structure to realize its segmentation function.
In this embodiment of the present application, the barycentric coordinates of the second characteristic element in the original segmentation data are obtained; the original segmentation data is cropped around these barycentric coordinates to obtain training region data; the training region data is input into the second training module to obtain second feature maps; a second target feature map is generated from the second feature maps; the second feature maps are fused with the second target feature map to obtain second probability distribution information; and the network parameters in the second training module are updated according to the second probability distribution information to obtain the trained second neural network structure. This improves the accuracy of the neural network and yields a neural network structure particularly suitable for left-atrium segmentation.
Through the steps described in Fig. 2 and Fig. 3 above, a first neural network structure for coarse left-atrium segmentation (network 1) and a second neural network structure for fine left-atrium segmentation (network 2) can be obtained, and the segmentation of the left atrium of a completely new gadolinium-enhanced magnetic resonance image can be based on network 1 of Fig. 2 and network 2 of Fig. 3. To apply these two neural networks in the image processing method shown in Fig. 1, the training methods of Fig. 2 and Fig. 3 can be executed before the steps of the embodiment shown in Fig. 1, so as to obtain network 1 and network 2, apply them to the first segmentation module and the second segmentation module respectively, and complete the image processing process of the embodiment shown in Fig. 1.
In a specific implementation, the training data includes 100 groups of magnetic resonance data, where each group is acquired with a clinical whole-body magnetic resonance scanner and contains the original magnetic resonance image data and the corresponding label of the left-atrium cavity. The original resolution of these data can be 0.625 x 0.625 x 0.625 cubic millimeters, with 47 volumes of 576 x 576 x 88 voxels and 53 volumes of 640 x 640 x 88 voxels; due to memory limits, it is difficult for a neural network to segment directly on images of such high resolution. In fact, the left-atrium cavity, and even the entire heart, occupies only a small fraction of a magnetic resonance image; most other positions in the image are unrelated tissue or even empty. The segmentation can therefore be divided into two steps: the first locates the atrium, and the second segments the atrium cavity from a much smaller cropped volume, so that the network can be trained on an ordinary home computer. In order for the input data to have the same size and be suitable for the V-Net architecture, the embodiments of the present application uniformly crop and pad all images to a fixed resolution, for example 576 x 576 x 96.
It can be seen that using a fully convolutional neural network to perform fully automatic segmentation of the left atrium in gadolinium-enhanced magnetic resonance images substantially improves segmentation accuracy compared with conventional methods. Moreover, if three-dimensional magnetic resonance data were input directly, it would consume a large amount of video memory, take a very long time to compute, and place very high demands on the computing equipment. The embodiments of the present application divide the segmentation into two steps, coarse localization and fine segmentation, and train two similar networks, which reduces video memory consumption and computation time; for example, segmenting one case takes only 2 seconds and 2.6 GB of video memory, can be deployed on an ordinary home computer, and is simple to realize.
The embodiments of the present application are applicable to the medical field. For example, after a cardiologist obtains a gadolinium-enhanced magnetic resonance image of a patient, the above method can be used to quickly and automatically segment the patient's left atrium; the doctor can then make a preliminary judgment from the three-dimensional structure of the patient's left atrium as to whether an abnormality has occurred, and can further use the three-dimensional structure of the left atrium to plan an operation, such as the intervention path of a catheter.
After obtaining the three-dimensional structure of the left atrium, cardiologists can also combine it with the segmentation of fibrotic tissue to understand and study the causes of cardiac tissue fibrosis, and thereby fundamentally prevent or delay cardiac tissue fibrosis, reducing patient morbidity and mortality.
The above mainly describes the scheme of the embodiments of the present application from the perspective of the method execution process. It can be understood that, in order to realize the above functions, the image processing apparatus comprises corresponding hardware structures and/or software modules for executing each function. Those skilled in the art will readily appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be realized by hardware or by a combination of hardware and computer software. Whether a function is actually executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to realize the described functions for each specific application, but such realization should not be considered beyond the scope of the present invention.
The embodiments of the present application can divide the image processing apparatus into functional modules according to the above method examples; for example, each functional module can correspond to one function, or two or more functions can be integrated into one processing module. The above integrated module can be realized either in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is only a division by logical function; other division manners are possible in actual implementation.
Referring to Fig. 4, Fig. 4 is a structural schematic diagram of an image processing apparatus disclosed in an embodiment of the present application. As shown in Fig. 4, the image processing apparatus 400 includes:
an image conversion module 410, configured to convert raw image data into target image data that meets a target resolution threshold, the raw image data containing a characteristic element;
a first segmentation processing module 420, configured to perform image segmentation on the target image data to obtain a first segmented image;
a cropping module 430, configured to crop the raw image data according to the first segmented image to obtain image region data that meets target image parameters;
a second segmentation processing module 440, configured to perform image segmentation on the image region data to obtain a second segmented image;
a restoration module 450, configured to restore the second segmented image to the original resolution space of the raw image data to obtain the target segmentation result of the characteristic element.
Optionally, the image conversion module 410 includes a first conversion unit 411 and a second conversion unit 412:
the first conversion unit 411 is configured to convert the raw image data into first image data that meets a first resolution threshold;
the second conversion unit 412 is configured to downsample the first image data to obtain the target image data that meets the target resolution threshold.
Optionally, the restoration module 450 is further configured to obtain the barycentric coordinates of the characteristic element in the first segmented image, before the cropping module 430 crops the raw image data according to the first segmented image to obtain the image region data meeting the target image parameters;
the cropping module 430 is further configured to restore the first segmented image to the original resolution space centered on the barycentric coordinates and crop out region data that meets the target image size.
Optionally, the target image parameters include a target image size threshold.
Optionally, the raw image data includes a gadolinium-enhanced magnetic resonance image.
Optionally, the first segmentation processing module 420 includes the first neural network structure, and the second segmentation processing module 440 includes the second neural network structure.
In an optional embodiment, the modules of the above image processing apparatus 400 can also be used for training the first neural network structure:
the first conversion unit 411 is further configured to convert first original training data into first training data that meets a first resolution threshold, the first original training data containing a first characteristic element;
the second conversion unit 412 is further configured to downsample the first training data to obtain target training data that meets a second resolution threshold;
the image processing apparatus 400 further includes a first training module 460, configured to:
obtain first feature maps from the target training data;
generate a first target feature map from the feature maps, the resolution of the first target feature map being the second resolution threshold;
fuse the first feature maps with the first target feature map to obtain first probability distribution information;
update the network parameters in the first training module according to the first probability distribution information, to obtain the trained first neural network structure.
Optionally, the first probability distribution information includes, for each element of the first feature map, the probability that it is the left atrium and/or the probability that it is not the left atrium.
In an optional embodiment, the modules of the above image processing apparatus 400 can also be used for training the second neural network structure:
the restoration module 450 is further configured to:
obtain the barycentric coordinates of a second characteristic element in original segmentation data;
crop the original segmentation data around the barycentric coordinates to obtain training region data;
the image processing apparatus 400 further includes a second training module 470, configured to:
obtain second feature maps from the training region data;
generate a second target feature map from the second feature maps, the resolution of the second target feature map being the same as the resolution of the training region data;
fuse the second feature maps with the second target feature map to obtain second probability distribution information;
update the network parameters in the second training module according to the second probability distribution information, to obtain the trained second neural network structure.
Optionally, the second probability distribution information includes, for each element of the second feature map, the probability that it is the left atrium and/or the probability that it is not the left atrium.
By implementing the image processing apparatus 400 shown in Fig. 4, the apparatus can convert raw image data containing a characteristic element into target image data that meets a target resolution threshold, input the target image data into the first segmentation processing module to perform image segmentation and obtain a first segmented image, crop the raw image data according to the first segmented image to obtain image region data meeting target image parameters, input the image region data into the second segmentation processing module to perform image segmentation and obtain a second segmented image, and finally restore the second segmented image to the original resolution space of the raw image data to obtain the target segmentation result of the characteristic element. This improves both the processing efficiency and the accuracy of atrium segmentation in magnetic resonance images.
Referring to Fig. 5, Fig. 5 is a structural schematic diagram of another image processing apparatus disclosed in an embodiment of the present application. As shown in Fig. 5, the image processing apparatus 500 includes a processor 501 and a memory 502, and may further include a bus 503 through which the processor 501 and the memory 502 are interconnected. The bus 503 can be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus or the like, and can be divided into an address bus, a data bus, a control bus, etc.; for ease of representation it is indicated by only one thick line in Fig. 5, which does not mean there is only one bus or one type of bus. The image processing apparatus 500 can also include an input-output device 504, which may include a display screen, such as a liquid crystal display. The memory 502 is used to store one or more programs containing instructions; the processor 501 is used to call the instructions stored in the memory 502 and execute some or all of the method steps of the embodiments of Fig. 1, Fig. 2 and Fig. 3 above. The processor 501 can correspondingly realize the functions of each module of the image processing apparatus described above.
By implementing the image processing apparatus 500 shown in Fig. 5, the apparatus can convert raw image data containing a characteristic element into target image data that meets a target resolution threshold, input the target image data into the first segmentation processing module to perform image segmentation and obtain a first segmented image, crop the raw image data according to the first segmented image to obtain image region data meeting target image parameters, input the image region data into the second segmentation processing module to perform image segmentation and obtain a second segmented image, and finally restore the second segmented image to the original resolution space of the raw image data to obtain the target segmentation result of the characteristic element. This improves both the processing efficiency and the accuracy of atrium segmentation in magnetic resonance images.
An embodiment of the present application further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any of the image processing methods described in the above method embodiments.
It should be noted that, for the sake of brevity, each of the foregoing method embodiments is described as a series of action combinations. However, those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a given embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into modules (or units) is only a division by logical function, and other divisions are possible in actual implementation. For instance, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or modules, and may be electrical or take other forms.
The modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules; they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned memory includes various media that can store program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware. The program may be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory, a random access memory, a magnetic disk, an optical disc, or the like.
The embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there may be changes in the specific implementations and application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (10)
1. An image processing method, characterized in that the method comprises:
converting raw image data into target image data that meets a target resolution threshold, the raw image data comprising a feature element;
inputting the target image data into a first segmentation processing module for image segmentation to obtain a first segmented image;
cropping the raw image data according to the first segmented image to obtain image region data that meets a target image parameter;
inputting the image region data into a second segmentation processing module for image segmentation to obtain a second segmented image;
restoring the second segmented image to an original resolution space of the raw image data to obtain a target segmentation result of the feature element.
2. The image processing method according to claim 1, characterized in that the converting raw image data into target image data that meets a target resolution threshold comprises:
converting the raw image data into first image data that meets a first resolution threshold;
down-sampling the first image data to obtain the target image data that meets the target resolution threshold.
3. The image processing method according to claim 1 or 2, characterized in that, before the cropping the raw image data according to the first segmented image to obtain image region data that meets a target image parameter, the method further comprises:
obtaining barycentric coordinates of the feature element in the first segmented image;
and the cropping the raw image data according to the first segmented image to obtain image region data that meets a target image parameter comprises:
restoring the first segmented image to the original resolution space centered on the barycentric coordinates, and cropping out region data that meets a target image size.
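The barycentre-centred cropping of claim 3 can be illustrated as below. The clamping of the crop window to the volume border is an assumption of this sketch, since the claim does not specify how boundary cases are handled:

```python
import numpy as np

def centroid_crop(volume, mask, crop_shape):
    """Crop `volume` to `crop_shape` centred on the barycentric
    coordinates (centre of mass) of the foreground in `mask`, with the
    window clamped to stay inside the volume.  Assumes `mask` has at
    least one foreground voxel."""
    coords = np.nonzero(mask)
    centre = np.array([c.mean() for c in coords]).round().astype(int)
    lo = centre - np.array(crop_shape) // 2
    lo = np.clip(lo, 0, np.array(volume.shape) - crop_shape)
    return volume[tuple(slice(l, l + s) for l, s in zip(lo, crop_shape))]
```

The same routine applies unchanged to 3D volumes, since every step is written per axis.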
4. The image processing method according to any one of claims 1-3, characterized in that the target image parameter comprises a target image size threshold.
5. The image processing method according to any one of claims 1-4, characterized in that the first segmentation processing module comprises a first neural network structure, and the second segmentation processing module comprises a second neural network structure.
6. The image processing method according to claim 5, characterized in that a training method of the first neural network structure comprises:
converting first original training data into first training data that meets the first resolution threshold, the first original training data comprising a first feature element;
down-sampling the first training data to obtain target training data that meets a second resolution threshold;
inputting the target training data into a first training module to obtain a first feature map;
generating a first target feature map according to the feature map, a resolution of the first target feature map being the second resolution threshold;
fusing the first feature map with the first target feature map to obtain first probability distribution information;
updating network parameters in the first training module according to the first probability distribution information to obtain the trained first neural network structure.
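The parameter update of claim 6 can be illustrated, very loosely, by substituting a per-voxel logistic model for the first neural network structure: a predicted probability map (playing the role of the first probability distribution information) is compared with the target map, and the parameters are moved down the cross-entropy gradient. The function name, toy model, and learning rate are all hypothetical, not from the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(w, b, image, label, lr=0.1):
    """One update of a per-voxel logistic model (a stand-in for the first
    neural network structure): compare the predicted probability map with
    the target map and step down the binary cross-entropy gradient."""
    prob = sigmoid(w * image + b)   # per-voxel foreground probability map
    grad = prob - label             # d(BCE)/d(logit), per voxel
    w -= lr * np.mean(grad * image)
    b -= lr * np.mean(grad)
    return w, b, prob
```

Iterating `train_step` drives the probability map toward the labels; a real implementation would replace the logistic model with the claimed network and perform the multi-resolution feature-map fusion the claim describes.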
7. The image processing method according to claim 6, characterized in that a training method of the second neural network structure comprises:
obtaining barycentric coordinates of a second feature element in original segmentation data;
cropping the original segmentation data centered on the barycentric coordinates to obtain training region data;
inputting the training region data into a second training module to obtain a second feature map;
generating a second target feature map according to the second feature map, a resolution of the second target feature map being identical to a resolution of the training region data;
fusing the second feature map with the second target feature map to obtain second probability distribution information;
updating network parameters in the second training module according to the second probability distribution information to obtain the trained second neural network structure.
8. An image processing apparatus, characterized by comprising:
an image conversion module, configured to convert raw image data into target image data that meets a target resolution threshold, the raw image data comprising a feature element;
a first segmentation processing module, configured to perform image segmentation on the target image data to obtain a first segmented image;
a cropping module, configured to crop the raw image data according to the first segmented image to obtain image region data that meets a target image parameter;
a second segmentation processing module, configured to perform image segmentation on the image region data to obtain a second segmented image;
a restoration module, configured to restore the second segmented image to an original resolution space of the raw image data to obtain a target segmentation result of the feature element.
9. An image processing apparatus, characterized by comprising a processor and a memory, the memory being configured to store one or more programs, the one or more programs being configured to be executed by the processor, and the programs comprising instructions for executing the method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium is configured to store a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810885716.2A CN109166130B (en) | 2018-08-06 | 2018-08-06 | Image processing method and image processing device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109166130A true CN109166130A (en) | 2019-01-08 |
CN109166130B CN109166130B (en) | 2021-06-22 |
Family
ID=64895116
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810885716.2A Active CN109166130B (en) | 2018-08-06 | 2018-08-06 | Image processing method and image processing device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109166130B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106960195A (en) * | 2017-03-27 | 2017-07-18 | 深圳市丰巨泰科电子有限公司 | A kind of people counting method and device based on deep learning |
CN107016681A (en) * | 2017-03-29 | 2017-08-04 | 浙江师范大学 | Brain MRI lesion segmentation approach based on full convolutional network |
CN107808132A (en) * | 2017-10-23 | 2018-03-16 | 重庆邮电大学 | A kind of scene image classification method for merging topic model |
WO2018050207A1 (en) * | 2016-09-13 | 2018-03-22 | Brainlab Ag | Optimized semi-robotic alignment workflow |
CN108268870A (en) * | 2018-01-29 | 2018-07-10 | 重庆理工大学 | Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109166130B (en) * | 2018-08-06 | 2021-06-22 | 北京市商汤科技开发有限公司 | Image processing method and image processing device |
US12016717B2 (en) | 2019-01-30 | 2024-06-25 | Tencent Technology (Shenzhen) Company Limited | CT image generation method and apparatus, computer device, and computer-readable storage medium |
CN109829920B (en) * | 2019-02-25 | 2021-06-15 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN109829920A (en) * | 2019-02-25 | 2019-05-31 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN109919932A (en) * | 2019-03-08 | 2019-06-21 | 广州视源电子科技股份有限公司 | The recognition methods of target object and device |
TWI758234B (en) * | 2019-04-01 | 2022-03-11 | 大陸商北京市商湯科技開發有限公司 | Image processing method and image processing device, electronic device and computer-readable storage medium |
CN109978886B (en) * | 2019-04-01 | 2021-11-09 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN109978886A (en) * | 2019-04-01 | 2019-07-05 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
TWI758233B (en) * | 2019-04-01 | 2022-03-11 | 大陸商北京市商湯科技開發有限公司 | Image processing method and image processing device, electronic device and computer-readable storage medium |
CN110033005A (en) * | 2019-04-08 | 2019-07-19 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN110136135B (en) * | 2019-05-17 | 2021-07-06 | 深圳大学 | Segmentation method, device, equipment and storage medium |
CN110136135A (en) * | 2019-05-17 | 2019-08-16 | 深圳大学 | Dividing method, device, equipment and storage medium |
CN111684488A (en) * | 2019-05-22 | 2020-09-18 | 深圳市大疆创新科技有限公司 | Image cropping method and device and shooting device |
CN110363774A (en) * | 2019-06-17 | 2019-10-22 | 上海联影智能医疗科技有限公司 | Image partition method, device, computer equipment and storage medium |
CN110363774B (en) * | 2019-06-17 | 2021-12-21 | 上海联影智能医疗科技有限公司 | Image segmentation method and device, computer equipment and storage medium |
CN110427946B (en) * | 2019-07-04 | 2021-09-03 | 天津车之家数据信息技术有限公司 | Document image binarization method and device and computing equipment |
CN110427946A (en) * | 2019-07-04 | 2019-11-08 | 天津车之家数据信息技术有限公司 | A kind of file and picture binary coding method, device and calculate equipment |
CN112444784B (en) * | 2019-08-29 | 2023-11-28 | 北京市商汤科技开发有限公司 | Three-dimensional target detection and neural network training method, device and equipment |
CN112444784A (en) * | 2019-08-29 | 2021-03-05 | 北京市商汤科技开发有限公司 | Three-dimensional target detection and neural network training method, device and equipment |
CN110738635A (en) * | 2019-09-11 | 2020-01-31 | 深圳先进技术研究院 | feature tracking method and device |
CN110874860B (en) * | 2019-11-21 | 2023-04-25 | 哈尔滨工业大学 | Target extraction method of symmetrical supervision model based on mixed loss function |
CN110874860A (en) * | 2019-11-21 | 2020-03-10 | 哈尔滨工业大学 | Target extraction method of symmetric supervision model based on mixed loss function |
CN111091541A (en) * | 2019-12-12 | 2020-05-01 | 哈尔滨市科佳通用机电股份有限公司 | Method for identifying fault of missing nut in cross beam assembly of railway wagon |
CN111445478B (en) * | 2020-03-18 | 2023-09-08 | 吉林大学 | Automatic intracranial aneurysm region detection system and detection method for CTA image |
CN111445478A (en) * | 2020-03-18 | 2020-07-24 | 吉林大学 | Intracranial aneurysm region automatic detection system and detection method for CTA image |
CN111407245A (en) * | 2020-03-19 | 2020-07-14 | 南京昊眼晶睛智能科技有限公司 | Non-contact heart rate and body temperature measuring method based on camera |
CN111407245B (en) * | 2020-03-19 | 2021-11-02 | 南京昊眼晶睛智能科技有限公司 | Non-contact heart rate and body temperature measuring method based on camera |
CN111833325A (en) * | 2020-07-09 | 2020-10-27 | 合肥多彩谱色科技有限公司 | Colloidal gold reagent strip detection method and system based on deep learning |
CN111914698A (en) * | 2020-07-16 | 2020-11-10 | 北京紫光展锐通信技术有限公司 | Method and system for segmenting human body in image, electronic device and storage medium |
CN111832493A (en) * | 2020-07-17 | 2020-10-27 | 平安科技(深圳)有限公司 | Image traffic signal lamp detection method and device, electronic equipment and storage medium |
CN111862223B (en) * | 2020-08-05 | 2022-03-22 | 西安交通大学 | Visual counting and positioning method for electronic element |
CN111862223A (en) * | 2020-08-05 | 2020-10-30 | 西安交通大学 | Visual counting and positioning method for electronic element |
CN113808147A (en) * | 2021-09-14 | 2021-12-17 | 北京航星永志科技有限公司 | Image processing method, device and system and computer equipment |
CN114241505A (en) * | 2021-12-20 | 2022-03-25 | 苏州阿尔脉生物科技有限公司 | Method and device for extracting chemical structure image, storage medium and electronic equipment |
CN116071375A (en) * | 2023-03-10 | 2023-05-05 | 福建自贸试验区厦门片区Manteia数据科技有限公司 | Image segmentation method and device, storage medium and electronic equipment |
CN116071375B (en) * | 2023-03-10 | 2023-09-26 | 福建自贸试验区厦门片区Manteia数据科技有限公司 | Image segmentation method and device, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN109166130B (en) | 2021-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109166130A (en) | A kind of image processing method and image processing apparatus | |
Liu et al. | A survey on U-shaped networks in medical image segmentations | |
Zhou et al. | nnformer: Volumetric medical image segmentation via a 3d transformer | |
WO2020133636A1 (en) | Method and system for intelligent envelope detection and warning in prostate surgery | |
EP3923238A1 (en) | Medical image segmentation method and device, computer device and readable storage medium | |
Tang et al. | An augmentation strategy for medical image processing based on statistical shape model and 3D thin plate spline for deep learning | |
CN110475505A (en) | Utilize the automatic segmentation of full convolutional network | |
CN109598722B (en) | Image analysis method based on recurrent neural network | |
JP2022518446A (en) | Medical image detection methods and devices based on deep learning, electronic devices and computer programs | |
CN109872306A (en) | Medical image cutting method, device and storage medium | |
Men et al. | Cascaded atrous convolution and spatial pyramid pooling for more accurate tumor target segmentation for rectal cancer radiotherapy | |
Zhang et al. | Automatic skin lesion segmentation by coupling deep fully convolutional networks and shallow network with textons | |
CN110211165B (en) | Image multi-mode registration method based on asynchronous depth reinforcement learning | |
CN110084863A (en) | A kind of multiple domain image conversion method and system based on generation confrontation network | |
CN108603922A (en) | Automatic cardiac volume is divided | |
CN108596833A (en) | Super-resolution image reconstruction method, device, equipment and readable storage medium storing program for executing | |
CN109285157A (en) | Myocardium of left ventricle dividing method, device and computer readable storage medium | |
CN109508787A (en) | Neural network model training method and system for ultrasound displacement estimation | |
CN106127783A (en) | A kind of medical imaging identification system based on degree of depth study | |
Liu et al. | Automatic segmentation algorithm of ultrasound heart image based on convolutional neural network and image saliency | |
Zamzmi et al. | Trilateral attention network for real-time cardiac region segmentation | |
Qiu et al. | Deep bv: A fully automated system for brain ventricle localization and segmentation in 3d ultrasound images of embryonic mice | |
CN109637629A (en) | A kind of BI-RADS hierarchy model method for building up | |
Lu et al. | Fine-grained calibrated double-attention convolutional network for left ventricular segmentation | |
CN111724395A (en) | Heart image four-dimensional context segmentation method, device, storage medium and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |