CN109978888A - Image segmentation method and apparatus, and computer-readable storage medium - Google Patents

Image segmentation method and apparatus, and computer-readable storage medium

Info

Publication number
CN109978888A
CN109978888A (application CN201910124587.XA; granted as CN109978888B)
Authority
CN
China
Prior art keywords
image
three-dimensional
three-dimensional grid
training sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910124587.XA
Other languages
Chinese (zh)
Other versions
CN109978888B (en)
Inventor
马进 (Ma Jin)
王健宗 (Wang Jianzong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910124587.XA priority Critical patent/CN109978888B/en
Priority to PCT/CN2019/088975 priority patent/WO2020168648A1/en
Publication of CN109978888A publication Critical patent/CN109978888A/en
Application granted
Publication of CN109978888B publication Critical patent/CN109978888B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

Embodiments of the present application disclose an image segmentation method, an apparatus, and a computer-readable storage medium in the field of image processing. The method includes: cutting a three-dimensional image to be segmented into multiple three-dimensional grid images; performing two-dimensional image conversion on the multiple three-dimensional grid images to obtain the two-dimensional image group corresponding to each three-dimensional grid image; inputting the two-dimensional image groups corresponding to the multiple three-dimensional grid images into a trained image classification model to obtain classification results for the multiple three-dimensional grids; and generating a segmented image of the three-dimensional image according to the classification results of the multiple three-dimensional grid images. With the embodiments of the present application, three-dimensional image segmentation can be completed by simpler two-dimensional networks with lower resource requirements, while achieving an effect close to that of a three-dimensional network.

Description

Image segmentation method and apparatus, and computer-readable storage medium
Technical field
The present application relates to the technical field of image segmentation, and in particular to an image segmentation method, an apparatus, and a computer-readable storage medium.
Background technique
With the development of image segmentation technology, image segmentation has been widely applied in the medical domain. For example, cartilage degradation usually indicates osteoarthritis and has become a leading cause of work disability. On the examination side, segmentation of cartilage images after a knee Magnetic Resonance Imaging (MRI) scan has become one of the important options for quantitative assessment of cartilage degradation. Under normal circumstances, cartilage images are read slice by slice by radiologists, which unquestionably consumes considerable time and effort. In addition, the variability within a single observer and between different observers is large and strongly affects the reading results. It can be seen that, for the purposes of reducing labor cost while improving recognition accuracy and efficiency, automatic image segmentation programs have great potential in both research and production.
Convolutional neural networks have by now been extended widely enough to be applied to three-dimensional image segmentation. However, truly three-dimensional convolutional neural networks require huge amounts of memory and massive training time, and this property limits their application in the field of image segmentation.
Summary of the invention
Embodiments of the present application provide an image segmentation method that can complete three-dimensional image segmentation using simpler two-dimensional networks with lower resource requirements, while achieving an effect close to that of a three-dimensional network.
In a first aspect, an embodiment of the present application provides an image segmentation method, the method comprising:
cutting a three-dimensional image to be segmented into multiple three-dimensional grid images;
performing two-dimensional image conversion on the multiple three-dimensional grid images to obtain a two-dimensional image group corresponding to each three-dimensional grid image in the multiple three-dimensional grid images;
inputting the two-dimensional image groups corresponding to the multiple three-dimensional grid images into a trained image classification model to obtain classification results for the multiple three-dimensional grids;
generating a segmented image of the three-dimensional image according to the classification results of the multiple three-dimensional grid images.
In a second aspect, an embodiment of the present application provides an image segmentation apparatus comprising units for performing the method of the first aspect above, the image segmentation apparatus including:
a cutting unit, configured to cut a three-dimensional image to be segmented into multiple three-dimensional grid images;
a conversion unit, configured to perform two-dimensional image conversion on the multiple three-dimensional grid images to obtain a two-dimensional image group corresponding to each three-dimensional grid image in the multiple three-dimensional grid images;
a classification unit, configured to input the two-dimensional image groups corresponding to the multiple three-dimensional grid images into a trained image classification model to obtain classification results for the multiple three-dimensional grids;
a generation unit, configured to generate a segmented image of the three-dimensional image according to the classification results of the multiple three-dimensional grid images.
In a third aspect, an embodiment of the present application provides an image segmentation apparatus including a processor, a memory, and a communication module, wherein the memory is configured to store program code, and the processor is configured to call the program code to execute the method of the first aspect above and any optional implementation thereof.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, the computer program including program instructions which, when executed by a processor, cause the method of the first aspect above to be performed.
In the embodiments of the present application, a three-dimensional image to be segmented is cut into multiple three-dimensional grid images, and two-dimensional image conversion is performed on the multiple three-dimensional grid images to obtain the two-dimensional image group corresponding to each three-dimensional grid image. The two-dimensional image groups corresponding to the multiple three-dimensional grid images are then input into a trained image classification model to obtain the classification results of the multiple three-dimensional grids. Finally, a segmented image of the three-dimensional image is generated according to the classification results of the multiple three-dimensional grid images. By preprocessing the three-dimensional image data to be processed into two-dimensional image data, so that the image segmentation model takes two-dimensional image data as input when segmenting the three-dimensional image to be segmented, the memory and computation required by the image classification model are reduced, thereby reducing the training time and difficulty of the image segmentation model while achieving an effect close to that of a three-dimensional network.
Detailed description of the invention
To illustrate the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below.
Fig. 1 is a schematic flow diagram of an image segmentation method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of converting a three-dimensional image to be segmented into two-dimensional image groups, provided by an embodiment of the present application;
Fig. 3 is a schematic block diagram of an image segmentation apparatus provided by an embodiment of the present application;
Fig. 4 is a structural schematic diagram of an image segmentation apparatus provided by an embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application.
It should be understood that when used in this specification and the appended claims, the terms "include" and "comprise" indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should further be understood that the term "and/or" used in this specification and the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
Referring to Fig. 1, Fig. 1 is a schematic flow diagram of an image segmentation method provided by an embodiment of the present application. As shown in the figure, the method may include the following steps.
101: The image segmentation apparatus cuts a three-dimensional image to be segmented into multiple three-dimensional grid images; performs two-dimensional image conversion on the multiple three-dimensional grid images; and obtains the two-dimensional image group corresponding to each three-dimensional grid image in the multiple three-dimensional grid images.
In the embodiments of the present application, the three-dimensional image to be segmented may be a three-dimensional ultrasound image, a Magnetic Resonance Imaging (MRI) image, a Computed Tomography (CT) image, or the like. It is worth noting that the three-dimensional image to be segmented should belong to the same type as the three-dimensional training sample images used when training the image classification model.
In the embodiments of the present application, the image classification model used for segmenting the three-dimensional image is built from three two-dimensional convolutional neural networks, so the input of the image classification model should be two-dimensional images. Since the image to be processed is three-dimensional, the original three-dimensional image to be segmented needs to be preprocessed into two-dimensional images that the image classification model can handle before segmentation is performed.
Specifically, the acquired three-dimensional image to be segmented is cut into several three-dimensional grid images according to a preset size. Then, from the first of these three-dimensional grid images, three two-dimensional slice images are extracted that pass through the center of the grid and are respectively parallel to the grid's three mutually orthogonal faces. These three two-dimensional slice images form the two-dimensional image group corresponding to that grid. The two-dimensional image groups corresponding to the other three-dimensional grid images are obtained in the same way.
As shown in Fig. 2, Fig. 2 is a schematic diagram of converting a three-dimensional image to be segmented into two-dimensional image groups, provided by an embodiment of the present application. Suppose the original three-dimensional image to be segmented is the cube shown in Fig. 2(a). After the three-dimensional image to be segmented is acquired, it is cut according to the preset size to obtain the three-dimensional grid image shown in Fig. 2(b). Then, from the three-dimensional grid shown in Fig. 2(b), the slices passing through the origin O parallel to the xOy, yOz and xOz planes are extracted, yielding a group of two-dimensional images as in Fig. 2(c). This process amounts to cutting the image to be segmented into "pixels" of a preset size and then representing each "pixel" by its three slice images; each such "pixel" actually contains many real pixel points.
It will be appreciated that the three-dimensional image to be segmented may be an irregular stereo image; when cutting an irregular three-dimensional image to be segmented, positions in the boundary grids that carry no pixels can be padded with zeros.
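The cutting-and-slicing procedure above can be sketched as follows; this is a minimal illustration assuming NumPy arrays, and the grid size, volume shape and function names are hypothetical rather than taken from the patent:

```python
import numpy as np

def cut_into_grids(volume, grid=28):
    """Zero-pad an arbitrary 3-D volume, then cut it into grid x grid x grid blocks."""
    pad = [(0, (-s) % grid) for s in volume.shape]   # pad each axis up to a multiple of grid
    v = np.pad(volume, pad, mode="constant")         # boundary grids are zero-filled
    D, H, W = (s // grid for s in v.shape)
    blocks = v.reshape(D, grid, H, grid, W, grid).transpose(0, 2, 4, 1, 3, 5)
    return blocks.reshape(-1, grid, grid, grid)      # (num_grids, grid, grid, grid)

def center_slices(block):
    """Three orthogonal slices through the block center, parallel to xOy, yOz, xOz."""
    c = block.shape[0] // 2
    return block[c, :, :], block[:, c, :], block[:, :, c]

volume = np.random.rand(60, 60, 60)                  # toy stand-in for a 3-D medical volume
grids = cut_into_grids(volume, grid=28)              # 60 is padded to 84, giving 3x3x3 grids
s1, s2, s3 = center_slices(grids[0])                 # the two-dimensional image group
```

Each grid thus yields one group of three 28 x 28 slices, matching the one-to-one correspondence between grids and two-dimensional image groups described in the text.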
102: The image segmentation apparatus inputs the two-dimensional image groups corresponding to the multiple three-dimensional grid images into the trained image classification model to obtain the classification results of the multiple three-dimensional grids.
In the embodiments of the present application, since each two-dimensional image group consists of slice images extracted from a uniquely determined three-dimensional grid, two-dimensional image groups and three-dimensional grid images are in one-to-one correspondence. The classification result obtained by inputting the two-dimensional image group corresponding to each three-dimensional grid into the trained image classification model can therefore be taken as the classification result of that three-dimensional grid image.
In the embodiments of the present application, since the image classification model is used to classify each group of two-dimensional images, the network structure of the image classification model needs to be constructed before it can be used for classification. The training sample set of the image classification model is then obtained, and the network structure of the image classification model is trained on the training sample set to obtain the trained image classification model.
As an optional implementation, constructing the network structure of the image classification model may specifically include: constructing three two-dimensional convolutional neural networks, each of which includes three convolutional layers, a sampling layer and an output layer; and connecting a softmax classifier after the three output layers of the three two-dimensional convolutional neural networks, thereby obtaining the network structure of the image classification model.
In the three convolutional neural networks of the image classification model, the output of a convolutional layer is:

x_j^l = f\left(\sum_i x_i^{l-1} * k_{ij}^l + b_j^l \mathbf{1}\right)   (1)

where l denotes the l-th layer, x_j^l denotes the j-th feature map output by layer l, x_i^{l-1} denotes the i-th feature map input from the previous layer, * denotes the convolution operation, k_{ij}^l denotes the weights, b_j^l denotes the bias, and \mathbf{1} denotes the all-ones matrix whose size matches the current layer's output;
The output of the sampling layer of the convolutional neural network is:

x_j^l(x, y) = \max_{0 \le m, n < S} x_j^{l-1}(x \cdot S + m,\; y \cdot S + n) + \beta_j^l   (2)

where x_j^l(x, y) denotes the value at position (x, y) in the j-th feature map of layer l, S denotes the down-sampling factor (set to 2 in this embodiment), m and n index positions within the sampling window, and \beta_j^l is the offset parameter;
The output of the output layer is:

x_j^l = f\left(\sum_i \sum_{x, y = 1}^{F_{l-1}} w_{ij}^l(x, y)\, x_i^{l-1}(x, y) + b_j^l\right)   (3)

where F_{l-1} is the size of the feature maps of the previous layer;
The input of the classifier is obtained by concatenating the outputs of the output layers of the three convolutional neural networks;
The probability output by the classifier that the k-th training case belongs to the u-th class is:

P\left(y^{(k)} = u \mid z^{(k)}; \theta\right) = \frac{e^{\theta_u^\top z^{(k)}}}{\sum_{v=1}^{K} e^{\theta_v^\top z^{(k)}}}   (4)

where θ is the parameter matrix of the softmax layer, of size K \times F_l, K denotes the number of classes, F_l is the number of feature maps output by layer l, and z^{(k)} is the concatenated output-layer feature vector of the k-th case.
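The softmax probability assignment of formula (4) can be sketched numerically as follows; this is a minimal NumPy illustration under assumed dimensions (K = 5 classes, a 64-dimensional concatenated feature vector), with function names that are hypothetical, not from the patent:

```python
import numpy as np

def softmax_probs(z, theta):
    """P(y = u | z) for each class u, given a parameter matrix theta of shape (K, F)."""
    logits = theta @ z
    logits -= logits.max()       # subtract the max to stabilize the exponentials
    e = np.exp(logits)
    return e / e.sum()           # formula (4): exp(theta_u . z) / sum_v exp(theta_v . z)

rng = np.random.default_rng(0)
z = rng.normal(size=64)          # stand-in for the concatenated output-layer features
theta = rng.normal(size=(5, 64)) # K = 5 classes
p = softmax_probs(z, theta)      # a valid probability distribution over the classes
```

Subtracting the maximum logit before exponentiating does not change the result but avoids numerical overflow, a standard trick when implementing softmax.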
Preferably, the input two-dimensional images of the two-dimensional convolutional neural networks in the image classification model have size 28 × 28. The first convolutional layer uses 5 × 5 kernels, N of them (N = 28 in this embodiment), with stride 1, extracting N feature maps of size (28 - 5 + 1) × (28 - 5 + 1) after the first convolutional layer. The following first pooling (sampling) layer uses a 2 × 2 pooling window, after which the N feature maps all become 12 × 12. The second convolutional layer then uses 5 × 5 kernels, 2N of them, with stride 1, extracting 2N feature maps of size 8 × 8. The third convolutional layer uses 5 × 5 kernels, 4N of them, with stride 1, extracting 4N feature maps of size 4 × 4. Finally, after the fully connected layer (i.e., the output layer), 64N feature maps of size 1 × 1 are obtained.
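The layer-size arithmetic above can be checked with a few lines of Python; the helper names are illustrative only:

```python
def conv_out(size, kernel=5, stride=1):
    """Output side length of a valid convolution: (size - kernel) / stride + 1."""
    return (size - kernel) // stride + 1

def pool_out(size, window=2):
    """Output side length of non-overlapping pooling."""
    return size // window

s = 28
s = conv_out(s)        # first convolutional layer: 28 -> 24
s = pool_out(s)        # 2x2 pooling: 24 -> 12
s = conv_out(s)        # second convolutional layer: 12 -> 8
s = conv_out(s)        # third convolutional layer: 8 -> 4

N = 28
flat = 4 * N * s * s   # 4N feature maps of 4x4 flatten to 64N values of size 1x1
```

The final count matches the 64N features of the fully connected layer, since 4N maps of 4 × 4 contain 4N · 16 = 64N values.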
In the embodiments of the present application, the convolutional neural networks apply softmax as the regression model to the output-layer results to assign probabilities; the probability that the k-th training case belongs to the u-th class can be computed with formula (4) above.
In the embodiments of the present application, formula (5) is used as the loss function of the convolutional neural networks:

J(\Omega_T) = -\frac{1}{N} \sum_{k=1}^{N} \sum_{u=1}^{K} 1\{t^{(k)} = u\} \log P\left(y^{(k)} = u \mid z^{(k)}; \theta\right) + \frac{\lambda}{2} \lVert W \rVert_2^2   (5)

where t^{(k)} denotes the ground-truth label, z^{(k)} denotes the concatenated fully connected outputs of the three planes for the three original input images of a two-dimensional image group, \Omega_T denotes all parameter and bias settings, W denotes the weights, and the weight decay parameter λ is set to 10^{-2}.
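A minimal NumPy sketch of a cross-entropy loss with weight decay of the kind described by formula (5); the batch size, dimensions and function names are assumptions for illustration, not values from the patent:

```python
import numpy as np

def softmax_loss(Z, T, theta, weights, lam=1e-2):
    """Mean cross-entropy over N cases plus an L2 weight-decay penalty."""
    logits = Z @ theta.T                          # (N, K) class scores
    logits -= logits.max(axis=1, keepdims=True)   # numerical stabilization
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)             # per-case softmax probabilities
    nll = -np.log(P[np.arange(len(T)), T]).mean() # negative log-likelihood of true labels
    decay = 0.5 * lam * sum((w ** 2).sum() for w in weights)
    return nll + decay

rng = np.random.default_rng(1)
Z = rng.normal(size=(10, 8))                      # 10 cases, 8 concatenated features
T = rng.integers(0, 3, size=10)                   # ground-truth labels, K = 3 classes
theta = rng.normal(size=(3, 8))
loss = softmax_loss(Z, T, theta, [theta])
</```

The decay term penalizes large weights with strength λ = 10^{-2}, the value the text assigns to the weight attenuation parameter.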
Further, after the network structure of the three-dimensional image segmentation model is built, a training sample set needs to be obtained so that the image classification model can be trained on it.
In the embodiments of the present application, obtaining the training sample set may specifically include: first, obtaining a three-dimensional training sample image; then, sampling the three-dimensional training sample image to obtain multiple three-dimensional grid training sample images; next, applying the two-dimensional image conversion described above to the multiple three-dimensional grid training sample images to obtain the two-dimensional training sample image group corresponding to each three-dimensional grid training sample image; and finally, labeling each group of two-dimensional training sample images to obtain the training sample set.
Here, the three-dimensional training sample images belong to the same category as the three-dimensional image to be segmented. For example, if the three-dimensional image to be segmented is a brain MRI image, the three-dimensional training sample images should also be brain MRI images.
Since every group of two-dimensional training images in the training sample set needs a class label, before the three-dimensional training sample image is sampled, an existing image segmentation technique can be applied to segment the differently classified regions of the three-dimensional training sample image, yielding a sample segmentation image of the three-dimensional training sample image. After the two-dimensional training sample image group corresponding to each of the multiple three-dimensional grid training sample images has been obtained, each group of two-dimensional training sample images can be labeled according to the sample segmentation image, thereby producing the training sample set.
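The patent does not spell out how a grid's label is derived from the voxel-level sample segmentation image; one plausible scheme, shown here purely as an assumption, is a majority vote over the voxels inside the grid:

```python
import numpy as np

def label_grid(seg_block, n_classes):
    """Assign a grid the most frequent voxel class in its sample-segmentation block."""
    counts = np.bincount(seg_block.ravel(), minlength=n_classes)
    return int(counts.argmax())

seg = np.zeros((8, 8, 8), dtype=int)   # toy segmentation block, mostly class 0
seg[:, :, :3] = 2                      # a minority of voxels belong to class 2
label = label_grid(seg, n_classes=3)   # majority class is 0
```

Any rule mapping a block of voxel labels to a single grid label would fit the described pipeline; majority voting is simply a common choice.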
After the training sample set is obtained, the image classification model is trained on it. Since the image classification model is composed of three two-dimensional convolutional neural networks, it can be trained using standard methods for training convolutional neural networks.
The training process of a convolutional neural network has two stages. The first is the stage in which data propagates from lower levels to higher levels, i.e., the forward-propagation stage. The other is the stage in which, when the result obtained by forward propagation does not match expectations, the error is propagated from higher levels back to lower levels for training, i.e., the back-propagation stage.
In this embodiment, a convolutional neural network model requires a large number of weights and bias terms. When the weights are initialized, a small amount of noise needs to be added to break symmetry and avoid the problem of zero gradients; normally distributed values can be used for this initialization. The bias terms are initialized with a small positive number to avoid the problem of neuron outputs being permanently zero.
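The initialization scheme just described can be sketched as follows; the standard deviation and bias constant are illustrative choices, not values stated in the patent:

```python
import numpy as np

rng = np.random.default_rng(42)

def init_weights(shape, std=0.1):
    """Small Gaussian noise breaks symmetry between otherwise identical kernels."""
    return rng.normal(0.0, std, size=shape)

def init_bias(shape, value=0.1):
    """A small positive constant keeps units from starting permanently at zero output."""
    return np.full(shape, value)

W = init_weights((28, 1, 5, 5))   # N = 28 kernels of 5x5 on a single-channel input
b = init_bias((28,))
```

If all weights started at the same value, every kernel would compute the same feature and receive the same gradient; the noise is what lets kernels differentiate during training.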
After weight initialization, the three two-dimensional slice images of a two-dimensional training sample image group from the training sample set are input separately into the three two-dimensional convolutional neural networks of the image classification model; this is the propagation process, i.e., forward propagation. During forward propagation, the input two-dimensional image data passes through the convolution and pooling processing of the multiple convolutional layers of the convolutional neural networks, feature vectors are extracted and passed into the fully connected layers, and a classification result is obtained. When the output result matches the expected value, the result is output. Since the image classification model contains three convolutional neural networks, three fully connected (output-layer) results are obtained; in the image classification model, these three output results are concatenated as the classification feature, and the concatenated feature is then input into the classifier to obtain the class probability result of this training pass.
When the class probability result output by the image classification model does not match the expected value, back-propagation is carried out. The error between the result and the expected value is computed and passed back through the layers, the error of each layer is calculated, and the weights are then updated. The main purpose of this process is to adjust the network weights using the training samples and expected values. The transfer of error can be understood as follows: data travels from the input layer to the output layer through convolutional layers, down-sampling layers and fully connected layers, and some loss inevitably occurs as data passes between layers, which in turn causes error. Since the error contributed by each layer differs, after the total error of the network is computed it must be passed back into the network to determine what share of the total error each layer should bear. When the error is larger than the expected value, it is passed back into the network, and the errors of the fully connected layers, down-sampling layers and convolutional layers are computed in turn. Finally, an optimization algorithm updates the weights according to the errors and propagated parameters of each layer. This training process is repeated until the loss function reaches its optimum, at which point training ends.
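For the softmax output stage, the back-propagated error takes a particularly simple form: the predicted probabilities minus the one-hot true label. A minimal sketch under assumed dimensions (the function names are hypothetical):

```python
import numpy as np

def softmax_grad(z, theta, t):
    """Gradient of the cross-entropy w.r.t. theta for one training case with label t."""
    logits = theta @ z
    p = np.exp(logits - logits.max())
    p /= p.sum()                       # predicted class probabilities
    delta = p.copy()
    delta[t] -= 1.0                    # output-layer error: p - one_hot(t)
    return np.outer(delta, z)          # gradient passed back to update theta

rng = np.random.default_rng(3)
z = rng.normal(size=6)                 # concatenated feature vector of one case
theta = np.zeros((4, 6))               # K = 4 classes
t = 2                                  # true class index
g = softmax_grad(z, theta, t)
theta -= 0.1 * g                       # one gradient-descent weight update
```

With theta all zeros the predicted distribution is uniform (0.25 each), so the error vector is 0.25 everywhere except -0.75 at the true class, which makes the gradient easy to verify by hand.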
In the embodiments of the present application, the L-BFGS algorithm is used to optimize the parameters of the convolutional neural networks (such as the weight parameters and offset parameters) so that the loss function reaches a minimum. L-BFGS is an improvement on quasi-Newton algorithms; its basic idea is that the algorithm stores only the curvature information of the most recent m iterations and uses it to construct an approximation of the Hessian matrix. L-BFGS executes quickly, and since each iteration step guarantees the positive definiteness of the approximate matrix, the algorithm is robust.
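As a toy demonstration of L-BFGS-style optimization, the sketch below minimizes a simple quadratic standing in for the network loss, using SciPy's limited-memory implementation; the objective and its minimizer are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def loss(w):
    """A toy convex objective standing in for the network loss function."""
    return (w[0] - 3.0) ** 2 + 10.0 * (w[1] + 1.0) ** 2

def grad(w):
    """Analytic gradient of the toy objective, supplied to the optimizer."""
    return np.array([2.0 * (w[0] - 3.0), 20.0 * (w[1] + 1.0)])

res = minimize(loss, x0=np.zeros(2), jac=grad, method="L-BFGS-B")
w_opt = res.x   # converges to the minimizer (3, -1)
```

In a real training loop the parameter vector would pack all network weights and biases, and `grad` would come from back-propagation; only the optimizer call would look the same.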
103: The image segmentation apparatus generates the segmented image of the three-dimensional image according to the classification results of the multiple three-dimensional grid images.
In the embodiments of the present application, the multiple three-dimensional grid images cut out of the three-dimensional image to be segmented can be regarded as larger "pixels" of the image to be segmented. Therefore, once the classification result of each three-dimensional grid has been obtained from the image classification model, that is, once the class label of each "pixel" of the three-dimensional image to be segmented is available, the segmented image of the three-dimensional image to be segmented can be generated according to the coordinate position of each three-dimensional grid image within the three-dimensional image to be segmented and its class label.
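Reassembling the per-grid labels into a voxel-level segmentation can be sketched as follows; a minimal NumPy illustration with an assumed 2 x 2 x 2 grid arrangement and hypothetical function names:

```python
import numpy as np

def assemble_segmentation(grid_labels, grids_per_axis, grid=28):
    """Expand per-grid class labels back into a voxel-level label volume."""
    lab = np.asarray(grid_labels).reshape(grids_per_axis)
    for axis in range(3):
        lab = np.repeat(lab, grid, axis=axis)   # each grid label fills its whole block
    return lab                                  # shape: grids_per_axis * grid per axis

labels = [0, 1, 1, 0, 2, 0, 1, 0]               # classes of a 2 x 2 x 2 arrangement of grids
seg = assemble_segmentation(labels, (2, 2, 2), grid=4)
```

Every voxel inside a grid inherits that grid's class, which is exactly the "large pixel" interpretation above; the resolution of the final segmentation is therefore set by the grid size.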
It can be seen that, in the embodiments of the present application, a three-dimensional image to be segmented is cut into multiple three-dimensional grid images; two-dimensional image conversion is performed on the multiple three-dimensional grid images to obtain the two-dimensional image group corresponding to each three-dimensional grid image; the two-dimensional image groups corresponding to the multiple three-dimensional grid images are input into the trained image classification model to obtain the classification results of the multiple three-dimensional grids; and finally a segmented image of the three-dimensional image is generated according to the classification results of the multiple three-dimensional grid images. By preprocessing the three-dimensional image data to be processed into two-dimensional image data, so that the image segmentation model takes two-dimensional image data as input when segmenting the three-dimensional image to be segmented, the memory and computation required by the image classification model are reduced, thereby reducing the training time and difficulty of the image segmentation model while achieving an effect close to that of a three-dimensional network.
An embodiment of the present application further provides an image segmentation apparatus comprising units for performing any of the methods above. Specifically, referring to Fig. 3, Fig. 3 is a schematic block diagram of an image segmentation apparatus provided by an embodiment of the present application. The image segmentation apparatus 300 of this embodiment includes: a cutting unit 310, a first conversion unit 320, a classification unit 330 and a generation unit 340.
Cutting unit 310, for being multiple three-dimensional grid images by 3-D image cutting to be split;
First converting unit 320 is obtained for carrying out two dimensional image conversion process to above-mentioned multiple three-dimensional grid images State the corresponding two dimensional image group of each three-dimensional grid image in multiple three-dimensional grid images;
Taxon 330 has been trained for the corresponding two dimensional image group of above-mentioned multiple three-dimensional grid images to be input to Image classification model in, obtain the classification results of above-mentioned multiple three-dimensional grids;
Generation unit 340, for generating above-mentioned 3-D image according to the classification results of above-mentioned multiple three-dimensional grid images Segmented image.
Optionally, above-mentioned image segmentation device further include:
Acquiring unit, for obtaining the training sample set of above-mentioned image classification model;
Training unit, it is above-mentioned to obtain for being trained using above-mentioned training sample set to above-mentioned image classification model Trained image classification model.
Further, the image classification model includes three two-dimensional convolutional neural networks, each of which includes three convolutional layers, a sampling layer, and an output layer;
a softmax classifier is connected after the three output layers of the three two-dimensional convolutional neural networks.
Further, the convolutional layer output of each convolutional neural network is:

$x_j^l = \sum_i x_i^{l-1} * w_{ij}^l + b_j^l \cdot \mathbf{1}^l$

where $l$ denotes the $l$-th layer, $x_j^l$ denotes the $j$-th feature map output by the $l$-th layer, $x_i^{l-1}$ denotes the $i$-th feature map input from the previous layer, $*$ denotes the convolution operation, $w_{ij}^l$ denotes the weights, $b_j^l$ denotes the bias, and $\mathbf{1}^l$ denotes an all-ones matrix whose size matches the output of the current layer;
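The convolutional-layer computation, in which each output feature map is the sum of the previous layer's input feature maps convolved with their respective kernels plus a per-map bias, can be sketched in NumPy. Function names and tensor shapes are assumptions for illustration, not from the patent.

```python
import numpy as np

def conv2d_valid(x, k):
    """'valid'-mode 2-D convolution of one feature map with one kernel."""
    kh, kw = k.shape
    H, W = x.shape
    kf = k[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(x[r:r+kh, c:c+kw] * kf)
    return out

def conv_layer(x_prev, w, b):
    """x_prev: (C_in, H, W); w: (C_out, C_in, kh, kw); b: (C_out,).
    Output feature map j = sum_i conv(x_i, w_ji) + b_j."""
    C_out, C_in = w.shape[0], w.shape[1]
    maps = [sum(conv2d_valid(x_prev[i], w[j, i]) for i in range(C_in)) + b[j]
            for j in range(C_out)]
    return np.stack(maps)
```

With two 5x5 input maps, three output channels, and 3x3 kernels, the output has shape (3, 3, 3), matching the valid-convolution size reduction.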
The sampling layer of the convolutional neural network outputs:

$x_j^l(x, y) = \max_{0 \le m, n < S} x_j^{l-1}(x \cdot S + m,\ y \cdot S + n) + b_j^l$

where $x_j^l(x, y)$ denotes the value at position $(x, y)$ in the $j$-th feature map of the $l$-th layer, $S$ denotes the downsampling factor (set to 2 in this scheme), $m$ and $n$ index positions within the sampling window, and $b_j^l$ is the bias parameter;
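With downsampling factor S = 2, the sampling layer can be read as non-overlapping pooling over S x S windows. The following is a hedged NumPy sketch assuming max pooling and omitting the bias; the patent's exact pooling operation is not fully recoverable from the text.

```python
import numpy as np

def downsample(fmap, S=2):
    """Non-overlapping S x S max pooling of one feature map."""
    H, W = fmap.shape
    out = np.zeros((H // S, W // S))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            # take the maximum over each S x S window
            out[x, y] = fmap[x*S:(x+1)*S, y*S:(y+1)*S].max()
    return out
```

Pooling halves each spatial dimension when S = 2, which is what reduces the feature maps between the three convolutional layers.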
The output of the output layer is:

$y^l = w^l \cdot \mathrm{vec}(x^{l-1}) + b^l$

where $\mathrm{vec}(\cdot)$ flattens the previous layer's feature maps into a vector, and $W^{l-1} \times H^{l-1}$ is the size of each feature map of the previous layer;
The input of the classifier is obtained by concatenating the outputs of the output layers of the three convolutional neural networks;
The probability output by the classifier that the $k$-th training case belongs to the $u$-th class is:

$p(y_k = u \mid z_k; \theta) = \dfrac{\exp(\theta_u^{\mathsf T} z_k)}{\sum_{v=1}^{K} \exp(\theta_v^{\mathsf T} z_k)}$

where $z_k$ is the concatenated input vector of the classifier, $\theta$ is the parameter matrix of the softmax layer, of size $K \times 3N^l$, $K$ denotes the number of classes, and $N^l$ is the number of feature maps output by the $l$-th layer.
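The classifier step, concatenating the three networks' output vectors and applying a softmax parameterized by θ, can be sketched as follows. This is a minimal illustration; the vector dimensions and names are assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

def classify(z1, z2, z3, theta):
    """Concatenate the three networks' output vectors, then apply softmax.
    theta: (K, 3*d) parameter matrix, K = number of classes."""
    z = np.concatenate([z1, z2, z3])
    return softmax(theta @ z)
```

The returned vector has one entry per class and sums to 1, so the grid image's label is simply the argmax of this vector.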
Further, the acquiring unit is configured to acquire three-dimensional image training samples;
the acquiring unit further includes:
a sampling unit, configured to sample the three-dimensional image training samples to obtain a plurality of three-dimensional grid training sample images;
a second converting unit, configured to perform the two-dimensional image conversion on the plurality of three-dimensional grid training sample images to obtain a two-dimensional training sample image group corresponding to each three-dimensional grid training sample image in the plurality of three-dimensional grid training sample images; and
a marking unit, configured to label each group of training sample images to obtain the training sample set.
Further, the training unit includes:
an input unit, configured to input the training sample set into the image classification model for forward propagation to obtain classification results of the training sample set; and
an updating unit, configured to perform backpropagation according to the classification results of the training sample set and a loss function, so as to update the weight parameters of the image classification model.
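For the softmax layer alone, the backpropagation update described here reduces to a cross-entropy gradient step: the gradient of the loss with respect to θ is the outer product of (predicted probabilities minus the one-hot target) and the input vector. A minimal sketch; the learning rate, shapes, and function name are illustrative assumptions.

```python
import numpy as np

def train_step(theta, z, u, lr=0.01):
    """One forward/backward pass of the softmax classifier.
    theta: (K, d) parameters; z: (d,) input; u: target class index."""
    s = theta @ z
    e = np.exp(s - s.max())
    p = e / e.sum()                                 # forward: class probabilities
    grad = np.outer(p - np.eye(theta.shape[0])[u], z)  # backward: dL/dtheta
    return theta - lr * grad                        # gradient-descent update

theta = np.zeros((3, 4))
z = np.array([1.0, 0.0, -1.0, 0.5])
theta = train_step(theta, z, u=2)
```

A full training loop would backpropagate this gradient further into the three convolutional networks; the sketch shows only the classifier's update.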
Further, the first converting unit is configured to: obtain three two-dimensional images from a first three-dimensional grid image, where the first three-dimensional grid image is any one of the plurality of three-dimensional grid images, and the three two-dimensional images are slice images that pass through the center of the three-dimensional grid image and are respectively parallel to three mutually orthogonal faces of the three-dimensional grid image; take the three two-dimensional images as the two-dimensional image group corresponding to the first three-dimensional grid image; and obtain the two-dimensional image groups of the other three-dimensional grid images in the plurality of three-dimensional grid images in the same way as for the first three-dimensional grid image.
As can be seen that the embodiment of the present application is by being multiple three-dimensional grid images by 3-D image cutting to be split;And it is right Above-mentioned multiple three-dimensional grid images carry out two dimensional image conversion process, obtain each three-dimensional side in above-mentioned multiple three-dimensional grid images The corresponding two dimensional image group of table images.Then, the corresponding two dimensional image group of above-mentioned multiple three-dimensional grid images is input to and has been instructed In the image classification model perfected, the classification results of above-mentioned multiple three-dimensional grids are obtained.Finally, according to above-mentioned multiple three-dimensional grids The classification results of image generate the segmented image of above-mentioned 3-D image.In the embodiment of the present application, by three-dimensional to be processed Image data obtains two-dimensional image data after being pre-processed, so that the two-dimensional image data of the input of Image Segmentation Model is come pair 3-D image to be split carries out image dividing processing, reduces memory required for image classification model, calculation amount, to subtract The training time of small Image Segmentation Model and difficulty, while meeting and the approximate effect of three-dimensional network.
Referring to Fig. 4, Fig. 4 is a structural schematic diagram of an image segmentation apparatus 400 provided by an embodiment of the present application. As shown in Fig. 4, the image segmentation apparatus 400 includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are different from the one or more application programs, are stored in the memory, and are configured to be executed by the processor.
The programs include instructions for executing the following steps: cutting a three-dimensional image to be segmented into a plurality of three-dimensional grid images; performing two-dimensional image conversion on the plurality of three-dimensional grid images to obtain a two-dimensional image group corresponding to each three-dimensional grid image; inputting the two-dimensional image groups corresponding to the plurality of three-dimensional grid images into a trained image classification model to obtain classification results of the plurality of three-dimensional grid images; and generating a segmented image of the three-dimensional image according to the classification results of the plurality of three-dimensional grid images.
It should be appreciated that, in the embodiment of the present application, the processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Another embodiment of the present application provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the following is implemented: cutting a three-dimensional image to be segmented into a plurality of three-dimensional grid images; performing two-dimensional image conversion on the plurality of three-dimensional grid images to obtain a two-dimensional image group corresponding to each three-dimensional grid image; inputting the two-dimensional image groups corresponding to the plurality of three-dimensional grid images into a trained image classification model to obtain classification results of the plurality of three-dimensional grid images; and generating a segmented image of the three-dimensional image according to the classification results of the plurality of three-dimensional grid images.
The computer-readable storage medium may be an internal storage unit of the terminal of any of the foregoing embodiments, such as a hard disk or memory of the terminal. The computer-readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal. Further, the computer-readable storage medium may include both the internal storage unit of the terminal and an external storage device. The computer-readable storage medium is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
In the several embodiments provided in the present application, it should be understood that the disclosed system, server, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. The division into units is only a division by logical function; in actual implementation there may be other division manners, e.g. multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods of the embodiments of the present application. The foregoing storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed in the present application, and these modifications or substitutions shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image segmentation method, comprising:
cutting a three-dimensional image to be segmented into a plurality of three-dimensional grid images;
performing two-dimensional image conversion on the plurality of three-dimensional grid images to obtain a two-dimensional image group corresponding to each three-dimensional grid image in the plurality of three-dimensional grid images;
inputting the two-dimensional image groups corresponding to the plurality of three-dimensional grid images into a trained image classification model to obtain classification results of the plurality of three-dimensional grid images; and
generating a segmented image of the three-dimensional image according to the classification results of the plurality of three-dimensional grid images.
2. The method according to claim 1, wherein before the inputting of the two-dimensional image groups corresponding to the plurality of three-dimensional grid images into the trained image classification model, the method further comprises:
acquiring a training sample set of the image classification model; and
training the image classification model using the training sample set to obtain the trained image classification model.
3. The method according to claim 2, wherein the image classification model comprises three two-dimensional convolutional neural networks, each of the three two-dimensional convolutional neural networks comprising three convolutional layers, a sampling layer, and an output layer;
a softmax classifier is connected after the three output layers of the three two-dimensional convolutional neural networks.
4. The method according to claim 3, wherein the convolutional layer output of each convolutional neural network is:

$x_j^l = \sum_i x_i^{l-1} * w_{ij}^l + b_j^l \cdot \mathbf{1}^l$

where $l$ denotes the $l$-th layer, $x_j^l$ denotes the $j$-th feature map output by the $l$-th layer, $x_i^{l-1}$ denotes the $i$-th feature map input from the previous layer, $*$ denotes the convolution operation, $w_{ij}^l$ denotes the weights, $b_j^l$ denotes the bias, and $\mathbf{1}^l$ denotes an all-ones matrix whose size matches the output of the current layer;
the sampling layer of the convolutional neural network outputs:

$x_j^l(x, y) = \max_{0 \le m, n < S} x_j^{l-1}(x \cdot S + m,\ y \cdot S + n) + b_j^l$

where $x_j^l(x, y)$ denotes the value at position $(x, y)$ in the $j$-th feature map of the $l$-th layer, $S$ denotes the downsampling factor (set to 2 in this scheme), $m$ and $n$ index positions within the sampling window, and $b_j^l$ is the bias parameter;
the output of the output layer is:

$y^l = w^l \cdot \mathrm{vec}(x^{l-1}) + b^l$

where $\mathrm{vec}(\cdot)$ flattens the previous layer's feature maps into a vector, and $W^{l-1} \times H^{l-1}$ is the size of each feature map of the previous layer;
the input of the classifier is obtained by concatenating the outputs of the output layers of the three convolutional neural networks;
the probability output by the classifier that the $k$-th training case belongs to the $u$-th class is:

$p(y_k = u \mid z_k; \theta) = \dfrac{\exp(\theta_u^{\mathsf T} z_k)}{\sum_{v=1}^{K} \exp(\theta_v^{\mathsf T} z_k)}$

where $z_k$ is the concatenated input vector of the classifier, $\theta$ is the parameter matrix of the softmax layer, of size $K \times 3N^l$, $K$ denotes the number of classes, and $N^l$ is the number of feature maps output by the $l$-th layer.
5. The method according to claim 4, wherein the acquiring of the training sample set of the image classification model comprises:
acquiring three-dimensional image training samples;
sampling the three-dimensional image training samples to obtain a plurality of three-dimensional grid training sample images;
performing the two-dimensional image conversion on the plurality of three-dimensional grid training sample images to obtain a two-dimensional training sample image group corresponding to each three-dimensional grid training sample image in the plurality of three-dimensional grid training sample images; and
labeling each group of training sample images to obtain the training sample set.
6. The method according to claim 5, wherein the training of the image classification model using the training sample set comprises:
inputting the training sample set into the image classification model for forward propagation to obtain classification results of the training sample set; and
performing backpropagation according to the classification results of the training sample set and a loss function to update weight parameters of the image classification model.
7. The method according to any one of claims 1 to 6, wherein the performing of the two-dimensional image conversion on the plurality of three-dimensional grid images to obtain the two-dimensional image group corresponding to each three-dimensional grid image comprises:
obtaining three two-dimensional images from a first three-dimensional grid image, the first three-dimensional grid image being any one of the plurality of three-dimensional grid images, and the three two-dimensional images being slice images that pass through the center of the three-dimensional grid image and are respectively parallel to three mutually orthogonal faces of the three-dimensional grid image;
taking the three two-dimensional images as the two-dimensional image group corresponding to the first three-dimensional grid image; and
obtaining the two-dimensional image groups of the other three-dimensional grid images in the plurality of three-dimensional grid images in the same way as for the first three-dimensional grid image.
8. An image segmentation apparatus, comprising units for executing the method according to any one of claims 1 to 7.
9. An image segmentation apparatus, comprising a processor, a memory, and a communication module, wherein the memory is configured to store program code, and the processor is configured to call the program code to execute the method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, cause the processor to execute the method according to any one of claims 1 to 7.
CN201910124587.XA 2019-02-18 2019-02-18 Image segmentation method, device and computer readable storage medium Active CN109978888B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910124587.XA CN109978888B (en) 2019-02-18 2019-02-18 Image segmentation method, device and computer readable storage medium
PCT/CN2019/088975 WO2020168648A1 (en) 2019-02-18 2019-05-29 Image segmentation method and device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910124587.XA CN109978888B (en) 2019-02-18 2019-02-18 Image segmentation method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109978888A true CN109978888A (en) 2019-07-05
CN109978888B CN109978888B (en) 2023-07-28

Family

ID=67077046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910124587.XA Active CN109978888B (en) 2019-02-18 2019-02-18 Image segmentation method, device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN109978888B (en)
WO (1) WO2020168648A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443839A (en) * 2019-07-22 2019-11-12 艾瑞迈迪科技石家庄有限公司 A kind of skeleton model spatial registration method and device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102284A (en) * 2020-09-14 2020-12-18 推想医疗科技股份有限公司 Marking method, training method and device of training sample of image segmentation model
CN111915609B (en) * 2020-09-22 2023-07-14 平安科技(深圳)有限公司 Focus detection analysis method, apparatus, electronic device and computer storage medium
CN115937229B (en) * 2022-12-29 2023-08-04 深圳优立全息科技有限公司 Three-dimensional automatic segmentation method and device based on super-voxel and graph cutting algorithm

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102317972A (en) * 2009-02-13 2012-01-11 哈里公司 Registration of 3d point cloud data to 2d electro-optical image data
US20120148162A1 (en) * 2010-12-09 2012-06-14 The Hong Kong University Of Science And Technology Joint semantic segmentation of images and scan data
CN107563983A (en) * 2017-09-28 2018-01-09 上海联影医疗科技有限公司 Image processing method and medical imaging devices
CN108573491A (en) * 2017-03-10 2018-09-25 南京大学 A kind of three-dimensional ultrasound pattern dividing method based on machine learning
CN108717568A (en) * 2018-05-16 2018-10-30 陕西师范大学 A kind of image characteristics extraction and training method based on Three dimensional convolution neural network
US20190030371A1 (en) * 2017-07-28 2019-01-31 Elekta, Inc. Automated image segmentation using dcnn such as for radiation therapy

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6901277B2 (en) * 2001-07-17 2005-05-31 Accuimage Diagnostics Corp. Methods for generating a lung report
US10871536B2 (en) * 2015-11-29 2020-12-22 Arterys Inc. Automated cardiac volume segmentation
CN106355194A (en) * 2016-08-22 2017-01-25 广东华中科技大学工业技术研究院 Treatment method for surface target of unmanned ship based on laser imaging radar
CN106803251B (en) * 2017-01-12 2019-10-08 西安电子科技大学 The apparatus and method of aortic coaractation pressure difference are determined by CT images
CN108664848B (en) * 2017-03-30 2020-12-25 杭州海康威视数字技术股份有限公司 Image target identification method and device
CN107424145A (en) * 2017-06-08 2017-12-01 广州中国科学院软件应用技术研究所 The dividing method of nuclear magnetic resonance image based on three-dimensional full convolutional neural networks
CN109118564B (en) * 2018-08-01 2023-09-19 山东佳音信息科技有限公司 Three-dimensional point cloud marking method and device based on fusion voxels

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102317972A (en) * 2009-02-13 2012-01-11 哈里公司 Registration of 3d point cloud data to 2d electro-optical image data
US20120148162A1 (en) * 2010-12-09 2012-06-14 The Hong Kong University Of Science And Technology Joint semantic segmentation of images and scan data
CN108573491A (en) * 2017-03-10 2018-09-25 南京大学 A kind of three-dimensional ultrasound pattern dividing method based on machine learning
US20190030371A1 (en) * 2017-07-28 2019-01-31 Elekta, Inc. Automated image segmentation using dcnn such as for radiation therapy
CN107563983A (en) * 2017-09-28 2018-01-09 上海联影医疗科技有限公司 Image processing method and medical imaging devices
CN108717568A (en) * 2018-05-16 2018-10-30 陕西师范大学 A kind of image characteristics extraction and training method based on Three dimensional convolution neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHU Ting et al.: "Fully automatic multimodal MRI brain tumor segmentation based on WRN-PPNet", Computer Engineering, no. 12 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443839A (en) * 2019-07-22 2019-11-12 艾瑞迈迪科技石家庄有限公司 A kind of skeleton model spatial registration method and device

Also Published As

Publication number Publication date
CN109978888B (en) 2023-07-28
WO2020168648A1 (en) 2020-08-27

Similar Documents

Publication Publication Date Title
EP3961484A1 (en) Medical image segmentation method and device, electronic device and storage medium
CN109978888A (en) A kind of image partition method, device and computer readable storage medium
Wang et al. SaliencyGAN: Deep learning semisupervised salient object detection in the fog of IoT
CN106355151B (en) A kind of three-dimensional S AR images steganalysis method based on depth confidence network
CN111832655B (en) Multi-scale three-dimensional target detection method based on characteristic pyramid network
CN110084318A (en) A kind of image-recognizing method of combination convolutional neural networks and gradient boosted tree
CN110310287A (en) It is neural network based to jeopardize the automatic delineation method of organ, equipment and storage medium
Wu et al. Classification of defects with ensemble methods in the automated visual inspection of sewer pipes
CN110288586A (en) A kind of multiple dimensioned transmission line of electricity defect inspection method based on visible images data
JP2013210207A (en) Target identification device for radar image, target identification method, and target identification program
Shu et al. LVC-Net: Medical image segmentation with noisy label based on local visual cues
CN109961446A (en) CT/MR three-dimensional image segmentation processing method, device, equipment and medium
Bhutto et al. CT and MRI medical image fusion using noise-removal and contrast enhancement scheme with convolutional neural network
CN108009557A (en) A kind of threedimensional model method for describing local characteristic based on shared weight convolutional network
CN112669249A (en) Infrared and visible light image fusion method combining improved NSCT (non-subsampled Contourlet transform) transformation and deep learning
Luo et al. Research on Digital Image Processing Technology and Its Application
Gadasin et al. Application of Convolutional Neural Networks for Three-Dimensional Reconstruction of the Geometry of Objects in the Image
CN113724185A (en) Model processing method and device for image classification and storage medium
CN103985111A (en) 4D-MRI super-resolution reconstruction method based on double-dictionary learning
CN116310194A (en) Three-dimensional model reconstruction method, system, equipment and storage medium for power distribution station room
CN116128820A (en) Pin state identification method based on improved YOLO model
CN109345545A (en) A kind of method, apparatus and computer readable storage medium of segmented image generation
CN111461091B (en) Universal fingerprint generation method and device, storage medium and electronic device
CN110533663A (en) A kind of image parallactic determines method, apparatus, equipment and system
CN108470176A (en) A kind of notable extracting method of stereo-picture vision indicated based on frequency-domain sparse

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant