CN109829920A - Image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN109829920A
CN201910138465.6A (application) · CN109829920A (publication) · CN109829920B (granted publication)
Authority
CN
China
Prior art keywords
target
feature map
image to be processed
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910138465.6A
Other languages
Chinese (zh)
Other versions
CN109829920B (en)
Inventor
高云河
黄锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN201910138465.6A
Publication of CN109829920A
Application granted
Publication of CN109829920B
Legal status: Active
Anticipated expiration: not listed

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: performing feature extraction on an image to be processed to obtain a feature map of the image to be processed; performing first localization and segmentation processing on the feature map to determine a first segmentation result for a first target; performing second localization and segmentation processing on the feature map to determine a second segmentation result for a second target; and determining a segmentation result for the image to be processed according to the first segmentation result and the second segmentation result. The embodiments of the present disclosure can handle targets of different sizes in different regions of the image to be processed differently, improving the precision of image processing.

Description

Image processing method and device, electronic equipment and storage medium
Technical field
The present disclosure relates to the field of image processing technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background technique
In the field of image technology, segmenting regions of interest or target areas is the basis of image analysis and target recognition. For example, in medical images, segmentation clearly identifies the boundaries between one or more organs or tissues. Accurate medical image segmentation is vital for many clinical applications.
Summary of the invention
The present disclosure proposes an image processing technical solution.
According to one aspect of the present disclosure, an image processing method is provided, including: performing feature extraction on an image to be processed to obtain a feature map of the image to be processed; performing first localization and segmentation processing on the feature map to determine a first segmentation result for a first target; performing second localization and segmentation processing on the feature map to determine a second segmentation result for a second target; and determining a segmentation result for the image to be processed according to the first segmentation result and the second segmentation result.
In one possible implementation, performing second localization and segmentation processing on the feature map to determine the second segmentation result for the second target includes: performing second localization processing and cropping on the feature map to obtain, respectively, location information of the second target and a target feature map; and determining the second segmentation result for the second target according to the target feature map, the location information of the second target, the image to be processed, and the feature map.
In one possible implementation, performing second localization processing and cropping on the feature map to obtain the location information of the second target and the target feature map includes: performing second localization processing on the feature map to determine the location information of the second target; and cropping the feature map according to the location information of the second target to obtain the target feature map of the second target.
In one possible implementation, determining the second segmentation result for the second target according to the target feature map, the location information of the second target, the image to be processed, and the feature map includes: performing image fusion on the target feature map, the location information of the second target, the image to be processed, and the feature map to obtain a fusion result; and performing second segmentation on the fusion result to determine the second segmentation result for the second target.
In one possible implementation, the feature map includes N layers of feature maps, N being an integer greater than 1, and performing second localization processing on the feature map to determine the location information of the second target includes: performing second localization processing on the N-th layer feature map to determine a location probability map of the second target.
In one possible implementation, cropping the feature map according to the location information of the second target to obtain the target feature map of the second target includes: cropping the N-th layer feature map according to the location information of the second target to obtain the target feature map of the second target.
In one possible implementation, performing image fusion on the target feature map, the location information of the second target, the image to be processed, and the feature map to obtain the fusion result includes: performing third segmentation on the image to be processed and on the first-layer feature map, respectively, according to the location information of the second target, to obtain a segmented image to be processed and a segmented first-layer feature map; and performing image fusion on the target feature map, the location information of the second target, the segmented image to be processed, and the segmented first-layer feature map to obtain the fusion result.
In one possible implementation, performing feature extraction on the image to be processed to obtain the feature map of the image to be processed includes: performing convolution processing on the image to be processed to obtain a convolution result; performing residual and squeeze-and-excitation processing on the convolution result to obtain an activation result; and performing multi-scale feature extraction and deconvolution processing on the activation result to obtain the feature map of the image to be processed.
In one possible implementation, the method is implemented by a neural network. The neural network includes a main segmentation network, a first localization network, and a first segmentation network; the main segmentation network includes a feature extraction network and a second localization-and-segmentation network.
The feature extraction network performs feature extraction on the image to be processed; the second localization-and-segmentation network is used to perform the first localization and segmentation processing on the feature map; the first localization network is used to perform the second localization processing on the feature map; and the first segmentation network is used to determine the second segmentation result for the second target.
In one possible implementation, the method further includes: training the neural network according to a preset training set.
In one possible implementation, training the neural network according to the preset training set includes: training the main segmentation network according to the training set; training the first localization network according to the training set and the trained main segmentation network; and training the first segmentation network according to the training set, the trained main segmentation network, and the trained first localization network.
In one possible implementation, training the neural network according to the preset training set includes: determining the network loss of the neural network according to a focal loss function and a generalized Dice loss function; and adjusting the network parameters of the neural network according to the network loss.
In one possible implementation, the image to be processed is a medical image containing organs at risk (OAR).
According to one aspect of the present disclosure, an image processing apparatus is provided, including:
a feature extraction module, configured to perform feature extraction on an image to be processed to obtain a feature map of the image to be processed;
a first determination module, configured to perform first localization and segmentation processing on the feature map to determine a first segmentation result for a first target;
a second determination module, configured to perform second localization and segmentation processing on the feature map to determine a second segmentation result for a second target;
a segmentation result determination module, configured to determine a segmentation result for the image to be processed according to the first segmentation result and the second segmentation result.
In one possible implementation, the second determination module includes: a localization submodule, configured to perform second localization processing and cropping on the feature map to obtain, respectively, location information of the second target and a target feature map; and a determination submodule, configured to determine the second segmentation result for the second target according to the target feature map, the location information of the second target, the image to be processed, and the feature map.
In one possible implementation, the localization submodule is further configured to: perform second localization processing on the feature map to determine the location information of the second target; and crop the feature map according to the location information of the second target to obtain the target feature map of the second target.
In one possible implementation, the determination submodule is further configured to: perform image fusion on the target feature map, the location information of the second target, the image to be processed, and the feature map to obtain a fusion result; and perform second segmentation on the fusion result to determine the second segmentation result for the second target.
In one possible implementation, the feature map includes N layers of feature maps, N being an integer greater than 1, and the localization submodule is further configured to: perform second localization processing on the N-th layer feature map to determine a location probability map of the second target.
In one possible implementation, the localization submodule is further configured to: crop the N-th layer feature map according to the location information of the second target to obtain the target feature map of the second target.
In one possible implementation, the determination submodule is further configured to: perform third segmentation on the image to be processed and on the first-layer feature map, respectively, according to the location information of the second target, to obtain a segmented image to be processed and a segmented first-layer feature map; and perform image fusion on the target feature map, the location information of the second target, the segmented image to be processed, and the segmented first-layer feature map to obtain a fusion result.
In one possible implementation, the feature extraction module includes: a convolution submodule, configured to perform convolution processing on the image to be processed to obtain a convolution result; an activation submodule, configured to perform residual and squeeze-and-excitation processing on the convolution result to obtain an activation result; and an extraction submodule, configured to perform multi-scale feature extraction and deconvolution processing on the activation result to obtain the feature map of the image to be processed.
In one possible implementation, the apparatus is implemented by a neural network. The neural network includes a main segmentation network, a first localization network, and a first segmentation network; the main segmentation network includes a feature extraction network and a second localization-and-segmentation network.
The feature extraction network performs feature extraction on the image to be processed; the second localization-and-segmentation network is used to perform the first localization and segmentation processing on the feature map; the first localization network is used to perform the second localization processing on the feature map; and the first segmentation network is used to determine the second segmentation result for the second target.
In one possible implementation, the apparatus further includes: a training module, configured to train the neural network according to a preset training set.
In one possible implementation, the training module includes: a first training submodule, configured to train the main segmentation network according to the training set; a second training submodule, configured to train the first localization network according to the training set and the trained main segmentation network; and a third training submodule, configured to train the first segmentation network according to the training set, the trained main segmentation network, and the trained first localization network.
In one possible implementation, the training module includes: a loss determination submodule, configured to determine the network loss of the neural network according to a focal loss function and a generalized Dice loss function; and an adjustment submodule, configured to adjust the network parameters of the neural network according to the network loss.
In one possible implementation, the image to be processed is a medical image containing organs at risk (OAR).
According to one aspect of the present disclosure, an electronic device is provided, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the above image processing method.
According to one aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored; the computer program instructions, when executed by a processor, implement the above image processing method.
In the embodiments of the present disclosure, for a first target and a second target, the first segmentation result for the first target is obtained by performing first localization and segmentation processing on the feature map extracted from the image to be processed, the segmentation result for the second target is obtained by performing second localization and segmentation processing on the feature map, and the segmentation result for the image to be processed is obtained according to the segmentation results for the first target and the second target. Through this process, targets of different sizes in different regions of the image to be processed can be handled differently, improving the precision of image processing.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The drawings here are incorporated into and constitute part of this specification; they show embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the disclosure.
Fig. 1 shows a flow chart of the image processing method according to an embodiment of the present disclosure.
Fig. 2a shows a flow chart of the method of step S10 according to an embodiment of the present disclosure.
Fig. 2b shows a flow chart of the method of step S12 according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of the neural network according to an embodiment of the present disclosure.
Fig. 4 shows a flow chart of the method of step S122 according to an embodiment of the present disclosure.
Fig. 5 shows a flow chart of the image processing method according to an embodiment of the present disclosure.
Fig. 6 shows a flow chart of the method of step S14 according to an embodiment of the present disclosure.
Fig. 7 shows a block diagram of the image processing apparatus according to an embodiment of the present disclosure.
Fig. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 9 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed description of embodiments
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings indicate elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings need not be drawn to scale unless specifically noted.
The word "exemplary" here means "serving as an example, embodiment, or illustration". Any embodiment described here as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" in this document merely describes an association between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, or B alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, "including at least one of A, B, and C" may indicate including any one or more elements selected from the set consisting of A, B, and C.
In addition, numerous specific details are given in the following detailed description in order to better explain the present disclosure. Those skilled in the art should understand that the present disclosure can likewise be implemented without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the present disclosure.
Fig. 1 shows a flow chart of the image processing method according to an embodiment of the present disclosure. The method can be applied to an image processing apparatus, and the image processing apparatus can be a terminal device, a server, or other processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and so on.
In some possible implementations, the image processing method can be implemented by a processor invoking computer-readable instructions stored in a memory.
As shown in Fig. 1, the image processing method may include:
Step S10: performing feature extraction on an image to be processed to obtain a feature map of the image to be processed;
Step S11: performing first localization and segmentation processing on the feature map to determine a first segmentation result for a first target;
Step S12: performing second localization and segmentation processing on the feature map to determine a second segmentation result for a second target;
Step S13: determining a segmentation result for the image to be processed according to the first segmentation result and the second segmentation result.
The size of the first target may be greater than the size of the second target; alternatively, the size of the first target may be smaller than the size of the second target. The present disclosure does not limit this. A minimal code sketch of this four-step flow is given below.
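By way of illustration only, the following is a minimal, runnable sketch of steps S10-S13 as a single PyTorch module, under the assumption that each stage can be stood in for by one learned layer; the module names, channel counts, and shapes are assumptions made for this sketch, not the patent's actual architecture.

    import torch
    import torch.nn as nn

    class TwoBranchSegmenter(nn.Module):
        def __init__(self, in_ch=1, feat_ch=16, big_classes=2, small_classes=2):
            super().__init__()
            # S10: feature extraction (stand-in for the encoder-decoder network)
            self.features = nn.Conv3d(in_ch, feat_ch, 3, padding=1)
            # S11: first localization-and-segmentation branch (larger targets)
            self.big_head = nn.Conv3d(feat_ch, big_classes, 1)
            # S12: second localization (a probability map) plus fine segmentation
            self.locator = nn.Conv3d(feat_ch, 1, 1)
            self.small_head = nn.Conv3d(feat_ch + 1 + in_ch, small_classes, 1)

        def forward(self, x):
            f = self.features(x)                       # S10: feature map
            big_seg = self.big_head(f)                 # S11: first segmentation result
            heatmap = torch.sigmoid(self.locator(f))   # S12: location probability map
            fused = torch.cat([f, heatmap, x], dim=1)  # fuse features, location, image
            small_seg = self.small_head(fused)         # S12: second segmentation result
            return torch.cat([big_seg, small_seg], dim=1)  # S13: unified result

    out = TwoBranchSegmenter()(torch.randn(1, 1, 8, 32, 32))

In the embodiments described below, each stand-in layer corresponds to a sub-network: the main segmentation network, the first localization network, and the first segmentation network.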
In the image processing method of the embodiments of the present disclosure, for a first target and a second target of different sizes, the first segmentation result for the first target is obtained by performing first localization and segmentation processing on the feature map extracted from the image to be processed, the segmentation result for the second target is obtained by performing second localization and segmentation processing on the feature map, and the segmentation result for the image to be processed is obtained according to the segmentation results for the first target and the second target. Through this process, targets of different sizes in different regions of the image to be processed can be handled differently, improving the precision of image processing.
The image processing method of the embodiments of the present disclosure can automatically and efficiently segment the target objects in an image, and, by segmenting targets of different sizes in the image to be processed through different segmentation procedures, more accurate segmentation results can be obtained.
The image processing method of the embodiments of the present disclosure can be applied to the processing of medical images, for example, to identify a target region in a medical image; the target region may be a lesion, a diseased organ, an organ at risk, and so on. In one possible implementation, the image to be processed can be a medical image containing organs at risk (OAR). That is, the image processing method of the embodiments of the present disclosure can be applied to clinical radiotherapy planning to identify organs at risk: by accurately identifying the positions of the organs at risk, the side effects of radiotherapy on normal organs are reduced and the efficacy of radiotherapy is improved.
It should be noted that the image processing method of the embodiments of the present disclosure is not limited to medical image processing and can be applied to arbitrary image processing; the present disclosure does not limit this.
In one possible implementation, the image to be processed may include multiple pictures, from which one or more three-dimensional organs can be identified.
For step S10, the feature map of the image to be processed can be extracted using relevant feature extraction techniques. For example, the feature map of the image to be processed can be extracted based on hand-crafted features such as local image brightness and organ shape.
In one possible implementation, a 3D U-Net fully convolutional neural network based on an encoder-decoder architecture can be used: one or more convolution operations are performed on the image to be processed to obtain a convolution result, and then a corresponding number of deconvolution operations are performed to obtain the feature map of the image to be processed. The present disclosure does not restrict the specific manner of feature extraction.
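As an illustrative sketch of this encoder-decoder idea, assuming a single strided convolution matched by a single transposed convolution (the channel counts are not the patent's configuration):

    import torch
    import torch.nn as nn

    encoder = nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1)   # downsample
    decoder = nn.ConvTranspose3d(8, 8, kernel_size=2, stride=2)     # upsample back

    volume = torch.randn(1, 1, 16, 64, 64)   # a 3D image to be processed
    feature_map = decoder(encoder(volume))   # feature map at the input resolution
    print(feature_map.shape)                 # torch.Size([1, 8, 16, 64, 64])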
In another possible implementation, Fig. 2a shows a flow chart of the method of step S10 according to an embodiment of the present disclosure. As shown in Fig. 2a, step S10 may include:
Step S101: performing convolution processing on the image to be processed to obtain a convolution result;
Step S102: performing residual and squeeze-and-excitation processing on the convolution result to obtain an activation result;
Step S103: performing multi-scale feature extraction and deconvolution processing on the activation result to obtain the feature map of the image to be processed.
In one possible implementation, the image processing method of the embodiments of the present disclosure can be implemented by a neural network. Fig. 3 shows a schematic diagram of the neural network according to an embodiment of the present disclosure. As shown in Fig. 3, the neural network may include a main segmentation network 1, a first localization network 2, and a first segmentation network 3; the main segmentation network includes a feature extraction network and a second localization-and-segmentation network 13.
The feature extraction network performs feature extraction on the image to be processed to obtain the feature map 12 of the image to be processed; the second localization-and-segmentation network 13 is used to perform the first localization and segmentation processing on the feature map; the first localization network 2 is used to perform the second localization processing on the feature map 12; and the first segmentation network 3 is used to determine the second segmentation result 31 for the second target.
In the above image processing method implemented by the neural network of the embodiments of the present disclosure, parameters (for example, the feature map) are shared among the main segmentation network 1, the first localization network 2, and the first segmentation network 3, avoiding redundant computation and improving segmentation efficiency and accuracy.
In one possible implementation, the main segmentation network 1 can be a convolutional neural network modified from the 3D U-Net based on the encoder-decoder architecture, and may include the feature extraction network and the second localization-and-segmentation network 13. The feature extraction network may include squeeze-and-excitation residual blocks (SEResBlock) and a densely connected atrous spatial pyramid pooling module (DenseASPP).
The feature extraction network can reduce the number of downsampling operations performed on the image to be processed, reducing the loss of high-resolution information. Meanwhile, to enhance the feature representation capability of the network, the feature extraction network uses residual blocks (each comprising convolutional layers, rectified linear units, and batch normalization layers) as its basic structure, and further adds squeeze-and-excitation modules (SE modules) as a feature-level attention mechanism. DenseASPP is used to capture and learn features of different scales and to fuse multi-scale features; by setting the dilation rates of the convolutions, a sufficiently large receptive field of the convolution kernels is ensured and features of different scales can be learned.
For the above steps S101-S103, the feature extraction network can perform convolution processing on the image to be processed to obtain a convolution result, where the feature extraction network may include N convolutional layers, N being an integer greater than 1. Residual and squeeze-and-excitation processing is then performed on the convolution result by the squeeze-and-excitation residual blocks to obtain an activation result. Then, multi-scale feature extraction is performed on the activation result by DenseASPP, followed by deconvolution processing, to obtain the feature map of the image to be processed.
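The following is a sketch of one squeeze-and-excitation residual block for 3D features, assuming the common SE design (global pooling, a bottleneck of two 1x1x1 convolutions, channel-wise re-scaling); the exact layer configuration used in the patent may differ.

    import torch
    import torch.nn as nn

    class SEResBlock3D(nn.Module):
        def __init__(self, channels, reduction=4):
            super().__init__()
            # residual path: convolution, batch normalization, rectified linear unit
            self.body = nn.Sequential(
                nn.Conv3d(channels, channels, 3, padding=1),
                nn.BatchNorm3d(channels),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels, channels, 3, padding=1),
                nn.BatchNorm3d(channels),
            )
            # squeeze-and-excitation: channel attention from globally pooled features
            self.se = nn.Sequential(
                nn.AdaptiveAvgPool3d(1),
                nn.Conv3d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv3d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            y = self.body(x)
            y = y * self.se(y)        # re-weight channels (attention)
            return self.relu(x + y)   # residual connection

    f = SEResBlock3D(16)(torch.randn(1, 16, 8, 32, 32))

Blocks of this kind can be stacked to form the first localization network and the first segmentation network described below.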
It should be noted that the above implementations are merely some examples of the present disclosure and do not limit it in any way. Those skilled in the art can understand that feature extraction on the image to be processed can also be implemented in other ways, as long as the feature map of the image to be processed can be obtained.
The first target can be a human organ that is large relative to the second target; for example, the first target can be the parotid gland and the second target can be the lens of the eye. For step S11, first localization and segmentation processing is further performed on the feature map by the above main segmentation network 1, and the first segmentation result for the first target can be determined; for example, the large organs in the image to be processed can be determined. The embodiments of the present disclosure do not limit the specific process or technique of the first localization and segmentation processing, which is also not limited to being performed by a neural network.
For step S12, second localization and segmentation processing on the feature map can be performed by the above first localization network 2 and first segmentation network 3 to determine the second segmentation result for the second target; for example, the small organs in the image to be processed can be determined. The specific process and technique of the second localization and segmentation processing are not limited, nor is it limited to being performed by a neural network.
Fig. 2b shows a flow chart of the method of step S12 according to an embodiment of the present disclosure. As shown in Fig. 2b, in one possible implementation, step S12 may include:
Step S121: performing second localization processing and cropping on the feature map to obtain, respectively, location information of the second target and a target feature map;
Step S122: determining the second segmentation result for the second target according to the target feature map, the location information of the second target, the image to be processed, and the feature map.
For step S121, second localization processing can be performed on the feature map to determine the location information of the second target, and the feature map can be cropped according to the location information of the second target to obtain the target feature map of the second target. Then, in step S122, the target feature map of the second target, the location information of the second target, the image to be processed, and the feature map are combined to further and accurately segment the second target.
As described above, the feature extraction network may include N convolutional layers, N being an integer greater than 1, so the feature map may include N layers of feature maps. The above "performing second localization processing on the feature map to determine the location information of the second target" may include: performing second localization processing on the N-th layer feature map to determine a location probability map of the second target.
It should be noted that second localization processing can also be performed on the feature maps of other layers at the same time (for example, on the first-layer and last-layer feature maps simultaneously) to determine the location probability map of the second target and thereby locate the second target more accurately; the embodiments of the present disclosure do not limit this.
In one possible implementation, as shown in Fig. 3, the above neural network can also include the first localization network 2, and the first localization network 2 may include two SEResBlocks. The N-th layer feature map obtained by the feature extraction network performing feature extraction on the image to be processed (the feature map of the last layer of the feature extraction network's decoder) is input to the first localization network 2; the first localization network 2 can perform second localization processing on this last-layer feature map to determine the location probability map of the second target. Specifically, the first localization network 2 can first locate the center point of the second target and then create a 3D Gaussian distribution map around that center point as the location probability map of the second target.
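A sketch of building such a location probability map around a predicted center point follows, assuming a fixed isotropic standard deviation; the value of sigma and the volume size are illustrative.

    import numpy as np

    def gaussian_heatmap_3d(shape, center, sigma=3.0):
        """Location probability map peaking at `center`, one value per voxel."""
        zz, yy, xx = np.meshgrid(*(np.arange(s) for s in shape), indexing="ij")
        d2 = (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
        return np.exp(-d2 / (2.0 * sigma ** 2))

    heatmap = gaussian_heatmap_3d((16, 64, 64), center=(8, 30, 40))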
In one possible implementation, for second targets of different sizes or shapes, a separate first localization network 2 corresponding to each second target can be provided to locate that second target. That is, the neural network of the embodiments of the present disclosure may include multiple first localization networks 2.
It should be noted that the manner of locating the second target is not limited to the above examples. Those skilled in the art can understand that the second target can also be located by other techniques to obtain its location information, for example, by performing image registration based on an atlas method to obtain the position of the second target, or by obtaining a detection box containing the second target through an object detection method.
The above "cropping the feature map according to the location information of the second target to obtain the target feature map of the second target" may include: cropping the N-th layer feature map according to the location information of the second target to obtain the target feature map of the second target.
The N-th layer feature map here may be the feature map of the last layer of the feature extraction network's decoder as described above, and this layer's feature map contains multi-scale features. After the location information of the second target is obtained, the N-th layer feature map can be cropped according to that location information to obtain the target feature map of the second target. In other words, the part of the feature map corresponding to the features of the second target is cropped out; for example, the part of the feature map composed of the pixels at the corresponding positions, found according to the location information of the second target, serves as the target feature map of the second target.
In one possible implementation, the N-th layer feature map can be cropped according to the location probability map of the second target to obtain the segmentation feature map of the second target.
It should be noted that the feature maps of other layers can also be cropped at the same time to obtain the target feature map of the second target and thereby obtain a more accurate second target; the embodiments of the present disclosure do not limit this.
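The following sketches such a crop around the located center of the second target; the fixed crop size and the use of the heatmap's maximum as the center are assumptions made for illustration.

    import numpy as np

    def crop_around_center(feature_map, center, size=(8, 24, 24)):
        """feature_map: (C, D, H, W); center: (z, y, x) voxel coordinates."""
        slices = []
        for c, s, dim in zip(center, size, feature_map.shape[1:]):
            lo = min(max(c - s // 2, 0), dim - s)  # clip so the window stays inside
            slices.append(slice(lo, lo + s))
        return feature_map[(slice(None), *slices)]

    fmap = np.random.rand(16, 16, 64, 64)                  # N-th layer feature map
    heatmap = np.random.rand(16, 64, 64)                   # location probability map
    center = np.unravel_index(heatmap.argmax(), heatmap.shape)
    target_fmap = crop_around_center(fmap, center)         # shape (16, 8, 24, 24)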
For step S122, the target feature map contains multi-scale features and the image to be processed contains high-resolution features; combining the target feature map, the location information of the second target, the image to be processed, and the feature map to determine the second segmentation result for the second target enables accurate segmentation of the smaller second target.
Fig. 4 shows a flow chart of the method of step S122 according to an embodiment of the present disclosure. As shown in Fig. 4, in step S122, determining the second segmentation result for the second target according to the target feature map, the location information of the second target, the image to be processed, and the feature map may include:
Step S1221: performing image fusion on the target feature map, the location information of the second target, the image to be processed, and the feature map to obtain a fusion result;
Step S1222: performing second segmentation on the fusion result to determine the second segmentation result for the second target.
By fusing the various kinds of information (the target feature map, the location information of the second target, the image to be processed, and the feature map), more image information (for example, scale, features, and resolution) can be obtained, improving the precision of the subsequent image segmentation and facilitating accurate segmentation of smaller targets. The image fusion process can be implemented with a relevant image fusion scheme; the embodiments of the present disclosure do not limit the specific fusion process.
In one possible implementation, before fusion, ROI (region of interest) pooling can first be performed on the feature map, the image to be processed, and the location information of the second target to reduce the data dimensionality and improve processing efficiency.
In one possible implementation, as shown in Fig. 3, the above neural network can also include the first segmentation network 3, which can likewise be built from SEResBlocks; for example, in one example, the first segmentation network 3 may include 5 SEResBlocks. Second segmentation processing is performed on the fusion result by the first segmentation network, and an accurate second segmentation result for the second target can be obtained.
In one possible implementation, for second targets of different sizes or shapes, a separate first segmentation network 3 corresponding to each second target can be provided to segment that second target. That is, the neural network of the embodiments of the present disclosure may include multiple first segmentation networks 3.
In one possible implementation, step S1221 may include: performing third segmentation on the image to be processed and on the first-layer feature map, respectively, according to the location information of the second target, to obtain a segmented image to be processed and a segmented first-layer feature map; and performing image fusion on the target feature map, the location information of the second target, the segmented image to be processed, and the segmented first-layer feature map to obtain the fusion result.
The image to be processed and the first-layer feature map contain high-resolution features, and the location information of the second target, obtained by performing the second localization processing on the feature map extracted by the feature extraction network, together with the target feature map, already encodes the segmentation result of the second target. Further segmenting the image to be processed and the first-layer feature map according to the location information of the second target yields the segmented image to be processed and the segmented first-layer feature map, and performing image fusion based on them can further improve the accuracy of segmentation.
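The following sketches the fusion of step S1221 and the second segmentation of step S1222, assuming the four inputs are cropped to the same located region and concatenated along the channel axis; the region, the shapes, and the 1x1x1 segmentation head are illustrative assumptions.

    import torch
    import torch.nn as nn

    roi = (slice(None), slice(4, 12), slice(18, 42), slice(28, 52))  # located region
    image    = torch.randn(1, 16, 64, 64)[roi]    # segmented image to be processed
    layer1   = torch.randn(8, 16, 64, 64)[roi]    # segmented first-layer feature map
    target_f = torch.randn(16, 8, 24, 24)         # target feature map (pre-cropped)
    heatmap  = torch.randn(1, 16, 64, 64)[roi]    # location probability map

    fused = torch.cat([image, layer1, target_f, heatmap], dim=0)  # channel concat
    head = nn.Conv3d(fused.shape[0], 2, kernel_size=1)            # second segmentation
    second_result = head(fused.unsqueeze(0))                      # (1, 2, 8, 24, 24)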
For step S13, as shown in Fig. 3, in order to output a unified segmentation result for all targets, the first segmentation result for the first target and the second segmentation result for the second target can be fused to obtain the segmentation result for the image to be processed and output the final segmentation map 4.
Fig. 5 shows a flow chart of the image processing method according to an embodiment of the present disclosure. In one possible implementation, as shown in Fig. 5, the method of the embodiments of the present disclosure may further include:
Step S14: training the neural network according to a preset training set.
The preset training set can be obtained by manually preprocessing (for example, cropping) the sample pictures and splitting them into multiple groups of pictures, where two adjacent groups may contain some identical pictures. Taking medical images as an example, multiple samples can be collected from hospitals; the sample pictures contained in one sample may be continuously acquired pictures of a certain human organ, from which the three-dimensional structure of the organ can be obtained. The sample can be split along one direction: the first group may include frames 1-30, the second group may include frames 16-45, and so on, so that two adjacent groups share 15 identical frames. This overlapping splitting can improve the accuracy of segmentation.
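A sketch of this overlapping split (0-based indices; a 30-frame window with a 15-frame stride, so adjacent chunks share 15 frames; assumes the volume has at least one full window):

    def overlapping_chunks(n_frames, size=30, stride=15):
        starts = range(0, max(n_frames - size, 0) + 1, stride)
        return [(s, s + size) for s in starts]

    print(overlapping_chunks(90))
    # [(0, 30), (15, 45), (30, 60), (45, 75), (60, 90)]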
Fig. 6 shows a flow chart of the method of step S14 according to an embodiment of the present disclosure. As shown in Fig. 6, step S14 may include:
Step S141: training the main segmentation network according to the training set;
Step S142: training the first localization network according to the training set and the trained main segmentation network;
Step S143: training the first segmentation network according to the training set, the trained main segmentation network, and the trained first localization network.
As shown in Fig. 3, the main segmentation network is trained first; then, with the parameters of the main segmentation network fixed, the first localization network is trained according to the training set and the main segmentation network. That is, the training set is input into the trained main segmentation network, and the feature map obtained by the trained main segmentation network performing feature extraction on the training set is input into the first localization network to train the first localization network.
Finally, the first segmentation network is trained according to the trained main segmentation network, the trained first localization network, and the training set. Specifically, the training set is input into the trained main segmentation network, which performs feature extraction on it to obtain the feature map; the trained first localization network performs localization processing on the feature map to obtain the location information of the target; the feature map is cropped according to the location information of the target to obtain the target feature map; and the target feature map, the location information of the target, the training set, and the feature map are input into the first segmentation network to train the first segmentation network.
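The following sketches this three-stage schedule with stand-in single-layer modules (illustrative names and layers, not the patent's architectures); each stage optimizes only the part being trained, while the previously trained parts are frozen.

    import torch
    import torch.nn as nn

    main_net = nn.Conv3d(1, 8, 3, padding=1)   # stand-in: main segmentation network
    locator  = nn.Conv3d(8, 1, 1)              # stand-in: first localization network
    fine_net = nn.Conv3d(10, 2, 1)             # stand-in: first segmentation network

    def freeze(module):
        for p in module.parameters():
            p.requires_grad_(False)

    # Stage 1: train main_net with focal + generalized Dice loss (formula (1) below).
    opt1 = torch.optim.Adam(main_net.parameters())
    # Stage 2: freeze main_net; train locator on its feature maps with an MSE loss.
    freeze(main_net)
    opt2 = torch.optim.Adam(locator.parameters())
    # Stage 3: freeze locator; train fine_net on the fused inputs, again with
    # focal + generalized Dice loss.
    freeze(locator)
    opt3 = torch.optim.Adam(fine_net.parameters())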
In one possible implementation, during training, the network loss of the neural network can be determined according to a focal loss function and a generalized Dice loss function, and the network parameters of the neural network can be adjusted according to the network loss. That is, training the neural network according to the preset training set may also include the above process of adjusting the network parameters according to the network loss.
For example, as shown in Fig. 3, the main segmentation network can first be trained according to the training set. During training of the main segmentation network, the network loss of the main segmentation network can be determined according to the focal loss function and the generalized Dice loss function, and the network parameters of the main segmentation network adjusted according to the network loss until the network loss satisfies a preset condition, for example, until the network loss no longer decreases.
In one possible implementation, the network loss of the main segmentation network can be determined according to the following formula (1):
L_total = L_Focal + λ · L_Dice    (1)
where L_total is the total network loss, L_Focal is the focal loss, L_Dice is the generalized Dice loss, and λ is a weight balancing the proportions of the two losses in the total loss; in one example, λ can be 1.
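A sketch of the combined loss of formula (1), assuming the standard definitions of the focal loss and the generalized Dice loss over per-voxel class probabilities; gamma = 2 and lam = 1 are illustrative values.

    import torch

    def focal_loss(probs, target, gamma=2.0, eps=1e-6):
        """probs, target: one-hot (N, C, D, H, W); probs sum to 1 over C."""
        pt = (probs * target).sum(dim=1).clamp_min(eps)  # probability of true class
        return (-(1.0 - pt) ** gamma * pt.log()).mean()

    def generalized_dice_loss(probs, target, eps=1e-6):
        dims = (0, 2, 3, 4)                            # sum over batch and voxels
        w = 1.0 / (target.sum(dim=dims) ** 2 + eps)    # inverse squared class volume
        inter = (w * (probs * target).sum(dim=dims)).sum()
        union = (w * (probs + target).sum(dim=dims)).sum()
        return 1.0 - 2.0 * inter / (union + eps)

    def total_loss(probs, target, lam=1.0):            # formula (1): L_total
        return focal_loss(probs, target) + lam * generalized_dice_loss(probs, target)

The inverse squared class-volume weights in the generalized Dice term give small classes a weight comparable to large ones, which is what makes this loss suitable for imbalanced samples.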
After the training of the main segmentation network is completed, the first localization network can be trained according to the training set and the trained main segmentation network. During training of the first localization network, the network loss of the first localization network can be determined according to an MSE (mean square error) loss function, and the network parameters of the first localization network adjusted according to the network loss until the network loss satisfies a preset condition.
After the training of the first localization network is completed, the first segmentation network can be trained according to the training set, the trained main segmentation network, and the trained first localization network. During training of the first segmentation network, the network loss of the first segmentation network can still be determined according to the focal loss function and the generalized Dice loss function, and the network parameters of the first segmentation network adjusted according to the network loss until the network loss satisfies a preset condition.
In one possible implementation, the network loss of the first segmentation network can also be determined according to the above formula (1).
It should be noted that, during training of the main segmentation network and the first segmentation network, the parameters of the focal loss function and the generalized Dice loss function used can be the same or different; the parameters of the focal loss function and/or the generalized Dice loss function can be selected according to the characteristics of each network. The embodiments of the present disclosure do not limit this.
By training the neural network according to the embodiments of the present disclosure, since the main segmentation network, the first localization network, and the first segmentation network are trained separately, even if the samples of the training set are imbalanced, the trained neural network can still identify targets of different sizes, improving the segmentation precision for targets of different sizes in an image.
Moreover, the image processing method of the embodiments of the present disclosure uses the focal loss function and the generalized Dice loss function to evaluate the network loss of the neural network, and adjusts the parameters of the neural network according to the network loss during training. This reduces the influence of sample imbalance on the determined network loss, further alleviates the problems brought by sample imbalance, and improves the training effect, so that the trained neural network is better suited to identifying targets of different sizes and the segmentation precision for such targets is improved.
Application scenario example
In clinical radiotherapy planning, more than twenty organs at risk (OAR) need to be taken into consideration; they usually need to be delineated by doctors on three-dimensional computed tomography (CT) images, but annotation on three-dimensional CT images is usually time-consuming and laborious.
For example, due to the complex anatomical structure of the head and neck and the insensitivity of CT to soft tissue, many organs have low contrast with the surrounding tissue and blurred boundaries, which further increases the difficulty of delineation and places very high demands on the doctor's expertise. Delineating the organs of one patient usually takes a medical practitioner more than 2.5 hours; in addition, influenced by subjective factors, different doctors' delineations of the organs of the same patient may not be entirely consistent.
Therefore, a fast, efficient, high-performance, and robust computer-aided segmentation method can greatly reduce the workload of doctors, improve the speed and quality of radiotherapy planning, and improve the efficacy of radiotherapy.
The organs at risk include many organs of different volumes, so a great sample imbalance exists. A large organ such as the parotid gland has a volume more than 250 times that of the smallest organ, the lens. How to balance large and small organs so that good segmentation precision can be achieved for the different organs is an urgent problem to be solved.
Processing medical images containing organs of many different volumes with the image processing method of the present disclosure can achieve accurate segmentation of organs of different volumes, in particular accurate segmentation of small organs, improving the speed and quality of radiotherapy planning and the efficacy of radiotherapy.
It should be noted that the image processing method of the embodiments of the present disclosure is not limited to medical image processing and can be applied to arbitrary image processing; the present disclosure does not limit this.
It can be understood that the method embodiments mentioned above in the present disclosure can be combined with one another to form combined embodiments without departing from their principles and logic; owing to space limitations, the details are not repeated here.
Those skilled in the art can understand that, in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Fig. 7 shows a block diagram of the image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus can be a terminal device, a server, or other processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and so on.
In some possible implementations, the image processing apparatus can be implemented by a processor invoking computer-readable instructions stored in a memory.
As shown in Fig. 7, the image processing apparatus may include:
a feature extraction module 71, configured to perform feature extraction on an image to be processed to obtain a feature map of the image to be processed;
a first determination module 72, configured to perform first localization and segmentation processing on the feature map to determine a first segmentation result for a first target;
a second determination module 73, configured to perform second localization and segmentation processing on the feature map to determine a second segmentation result for a second target;
a segmentation result determination module 74, configured to determine a segmentation result for the image to be processed according to the first segmentation result and the second segmentation result.
For the first target and the second target, the first segmentation result for the first target is obtained by performing first localization and segmentation processing on the feature map extracted from the image to be processed, the segmentation result for the second target is obtained by performing second localization and segmentation processing on the feature map, and the segmentation result for the image to be processed is obtained according to the segmentation results for the first target and the second target. Through this process, targets of different sizes in different regions of the image to be processed can be handled differently, improving the precision of image processing.
In one possible implementation, second determining module 73 includes:
Positioning submodule respectively obtains the second mesh for carrying out the second localization process to the characteristic pattern and cutting processing Target location information and target signature;
Determine submodule, for according to the target signature, the location information of second target, image to be processed with And the characteristic pattern, determine the second segmentation result of second target.
In one possible implementation, the positioning submodule is also used to:
Second localization process is carried out to the characteristic pattern, determines the location information of the second target;
The characteristic pattern is carried out to cut processing according to the location information of second target, obtains second target Target signature.
In one possible implementation, the determining submodule is also used to:
The location information, image to be processed and the characteristic pattern of the target signature, second target are carried out Image co-registration obtains fusion results;
Second segmentation is carried out to the fusion results, determines the second segmentation result of second target.
In one possible implementation, the characteristic pattern includes N layers of characteristic pattern, and N is the integer greater than 1, wherein institute It states positioning submodule to be also used to: the second localization process being carried out to n-th layer characteristic pattern, determines the location probability figure of the second target.
In one possible implementation, the positioning submodule is also used to:
N-th layer characteristic pattern is carried out to cut processing according to the location information of second target, obtains second target Target signature.
In one possible implementation, the determining submodule is also used to:
According to the location information of second target, third is carried out to the image to be processed and first layer characteristic pattern respectively Segmentation, the first layer characteristic pattern after image to be processed and segmentation after being divided;
To after the location information of the target signature, second target, segmentation image to be processed and segmentation after First layer characteristic pattern carries out image co-registration, obtains fusion results.
In one possible implementation, the characteristic extracting module 71 includes:
Convolution submodule obtains convolution results for carrying out process of convolution to image to be processed;
Submodule is activated, for carrying out residual error and compression activation processing to the convolution results, obtains activation result;
Extracting sub-module obtains described for carrying out Multi resolution feature extraction and deconvolution processing to the activation result The characteristic pattern of image to be processed.
In one possible implementation, for described device by neural fusion, the neural network includes main point Cut network, first positioning network and first segmentation network, the main segmentation network include feature extraction network and second positioning and Divide network,
Wherein, the feature extraction network handles processing image carries out feature extraction, second positioning and segmentation network For carrying out the first positioning and dividing processing to the characteristic pattern, the first positioning network is used to carry out the to the characteristic pattern Two localization process, the first segmentation network are used to determine the second segmentation result of second target.
In one possible implementation, described device further include:
Training module 75, for according to preset training set, the training neural network.
In one possible implementation, the training module 75 includes:
First training submodule, for according to the training set, the training main segmentation network;
Second training submodule, for according to the training set and the main segmentation network trained, training described first Position network;
Third trains submodule, for according to the training set, the main segmentation network trained and trained first Position network, training the first segmentation network.
In one possible implementation, the training module 75 includes:
a loss determining submodule, configured to determine the network loss of the neural network according to a focal loss function and a generalized Dice loss function;
an adjusting submodule, configured to adjust the network parameters of the neural network according to the network loss.
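The two terms might be combined as below; the equal weighting alpha and the inverse-square class weights follow common practice and are assumptions of this sketch, since the disclosure only names the two loss functions. A Dice-based term is naturally insensitive to class imbalance, which suits targets of very different sizes:

```python
import torch

def focal_loss(probs, target, gamma=2.0, eps=1e-6):
    """Focal loss; probs are post-softmax, target is one-hot, same shape."""
    probs = probs.clamp(eps, 1.0 - eps)
    return -(target * (1.0 - probs) ** gamma * probs.log()).mean()

def generalized_dice_loss(probs, target, eps=1e-6):
    """Generalized Dice loss with inverse-square per-class weighting."""
    dims = (0,) + tuple(range(2, probs.dim()))  # sum over batch and spatial dims
    w = 1.0 / (target.sum(dims) ** 2 + eps)     # per-class weights
    inter = (w * (probs * target).sum(dims)).sum()
    union = (w * (probs + target).sum(dims)).sum()
    return 1.0 - 2.0 * inter / (union + eps)

def network_loss(probs, target, alpha=0.5):
    """Combined loss; the weighting alpha is an assumption, not from the patent."""
    return alpha * focal_loss(probs, target) + (1 - alpha) * generalized_dice_loss(probs, target)
```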
In one possible implementation, the image to be processed is a medical image including an organ at risk (OAR).
The embodiment of the present disclosure also proposes a computer-readable storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the above method. The computer-readable storage medium may be a non-volatile computer-readable storage medium.
The embodiment of the present disclosure also proposes an electronic device, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to perform the above method.
The electronic device may be provided as a terminal, a server, or a device in another form.
Fig. 8 is a block diagram of an electronic device 800 according to an embodiment of the present disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to Fig. 8, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions, to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation on the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, video, etc. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power supply component 806 provides power for the various components of the electronic device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor component 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor component 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example, the memory 804 including computer program instructions, where the computer program instructions may be executed by the processor 820 of the electronic device 800 to complete the above method.
Fig. 9 is a block diagram of an electronic device 1900 according to an embodiment of the present disclosure. For example, the electronic device 1900 may be provided as a server. Referring to Fig. 9, the electronic device 1900 includes a processing component 1922 that further includes one or more processors, and a memory resource represented by a memory 1932 for storing instructions executable by the processing component 1922, such as an application program. The application program stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. In addition, the processing component 1922 is configured to execute instructions, to perform the above method.
The electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or similar.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example, the memory 1932 including computer program instructions, where the computer program instructions may be executed by the processing component 1922 of the electronic device 1900 to complete the above method.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions thereon for causing a processor to implement aspects of the present disclosure.
A computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (for example, light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions, to implement aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It should be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/actions specified in one or more blocks of the flowchart and/or block diagram. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having instructions stored therein includes an article of manufacture including instructions which implement aspects of the functions/actions specified in one or more blocks of the flowchart and/or block diagram.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/actions specified in one or more blocks of the flowchart and/or block diagram.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, a program segment, or a portion of instructions, which includes one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or actions, or by combinations of special-purpose hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. An image processing method, characterized by comprising:
performing feature extraction on an image to be processed, to obtain a characteristic pattern of the image to be processed;
performing first positioning and dividing processing on the characteristic pattern, to determine a first segmentation result of a first object;
performing second positioning and dividing processing on the characteristic pattern, to determine a second segmentation result of a second target;
determining a segmentation result of the image to be processed according to the first segmentation result and the second segmentation result.
2. The method according to claim 1, characterized in that performing the second positioning and dividing processing on the characteristic pattern, to determine the second segmentation result of the second target, comprises:
performing second localization processing and cropping processing on the characteristic pattern, to respectively obtain location information and a target signature of the second target;
determining the second segmentation result of the second target according to the target signature, the location information of the second target, the image to be processed and the characteristic pattern.
3. The method according to claim 2, characterized in that performing the second localization processing and cropping processing on the characteristic pattern, to respectively obtain the location information and the target signature of the second target, comprises:
performing second localization processing on the characteristic pattern, to determine the location information of the second target;
cropping the characteristic pattern according to the location information of the second target, to obtain the target signature of the second target.
4. The method according to claim 2, characterized in that determining the second segmentation result of the second target according to the target signature, the location information of the second target, the image to be processed and the characteristic pattern comprises:
performing image fusion on the target signature, the location information of the second target, the image to be processed and the characteristic pattern, to obtain a fusion result;
performing second segmentation on the fusion result, to determine the second segmentation result of the second target.
5. The method according to claim 3, characterized in that the characteristic pattern includes N layers of characteristic patterns, N being an integer greater than 1,
wherein performing the second localization processing on the characteristic pattern, to determine the location information of the second target, comprises:
performing second localization processing on the N-th layer characteristic pattern, to determine a location probability map of the second target.
6. The method according to claim 5, characterized in that cropping the characteristic pattern according to the location information of the second target, to obtain the target signature of the second target, comprises:
cropping the N-th layer characteristic pattern according to the location information of the second target, to obtain the target signature of the second target.
7. The method according to claim 5 or 6, characterized in that performing image fusion on the target signature, the location information of the second target, the image to be processed and the characteristic pattern, to obtain a fusion result, comprises:
performing third segmentation on the image to be processed and the first-layer characteristic pattern respectively according to the location information of the second target, to obtain a segmented image to be processed and a segmented first-layer characteristic pattern;
performing image fusion on the target signature, the location information of the second target, the segmented image to be processed and the segmented first-layer characteristic pattern, to obtain a fusion result.
8. An image processing apparatus, characterized by comprising:
a characteristic extracting module, configured to perform feature extraction on an image to be processed, to obtain a characteristic pattern of the image to be processed;
a first determining module, configured to perform first positioning and dividing processing on the characteristic pattern, to determine a first segmentation result of a first object;
a second determining module, configured to perform second positioning and dividing processing on the characteristic pattern, to determine a second segmentation result of a second target;
a segmentation result determining module, configured to determine a segmentation result of the image to be processed according to the first segmentation result and the second segmentation result.
9. An electronic device, characterized by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to: perform the method according to any one of claims 1 to 7.
10. A computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 7.
CN201910138465.6A 2019-02-25 2019-02-25 Image processing method and device, electronic equipment and storage medium Active CN109829920B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910138465.6A CN109829920B (en) 2019-02-25 2019-02-25 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109829920A true CN109829920A (en) 2019-05-31
CN109829920B CN109829920B (en) 2021-06-15

Family

ID=66864288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910138465.6A Active CN109829920B (en) 2019-02-25 2019-02-25 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109829920B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106611413A (en) * 2016-11-30 2017-05-03 上海联影医疗科技有限公司 Image segmentation method and system
CN109166107A (en) * 2018-04-28 2019-01-08 北京市商汤科技开发有限公司 A kind of medical image cutting method and device, electronic equipment and storage medium
CN109166130A (en) * 2018-08-06 2019-01-08 北京市商汤科技开发有限公司 A kind of image processing method and image processing apparatus
CN109360633A (en) * 2018-09-04 2019-02-19 北京市商汤科技开发有限公司 Medical imaging processing method and processing device, processing equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIE HU ET AL.: "Squeeze-and-Excitation Networks", arXiv *
WENTAO ZHU ET AL.: "AnatomyNet: Deep Learning for Fast and Fully Automated Whole-volume Segmentation of Head and Neck Anatomy", arXiv *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033460A (en) * 2019-04-03 2019-07-19 中国科学院地理科学与资源研究所 It is a kind of based on scale space transformation satellite image in mariculture area extracting method
CN110335199A (en) * 2019-07-17 2019-10-15 上海骏聿数码科技有限公司 A kind of image processing method, device, electronic equipment and storage medium
CN110348537A (en) * 2019-07-18 2019-10-18 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN110348537B (en) * 2019-07-18 2022-11-29 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN110415258A (en) * 2019-07-29 2019-11-05 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110458218A (en) * 2019-07-31 2019-11-15 北京市商汤科技开发有限公司 Image classification method and device, sorter network training method and device
CN110458218B (en) * 2019-07-31 2022-09-27 北京市商汤科技开发有限公司 Image classification method and device and classification network training method and device
CN110490891A (en) * 2019-08-23 2019-11-22 杭州依图医疗技术有限公司 The method, equipment and computer readable storage medium of perpetual object in segmented image
CN110569854A (en) * 2019-09-12 2019-12-13 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110569854B (en) * 2019-09-12 2022-03-29 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
WO2021051965A1 (en) * 2019-09-20 2021-03-25 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device, storage medium, and computer program
JP2022533404A (en) * 2019-09-20 2022-07-22 上▲海▼商▲湯▼智能科技有限公司 Image processing method and apparatus, electronic device, storage medium, and computer program
CN110782468A (en) * 2019-10-25 2020-02-11 北京达佳互联信息技术有限公司 Training method and device of image segmentation model and image segmentation method and device
CN110782468B (en) * 2019-10-25 2023-04-07 北京达佳互联信息技术有限公司 Training method and device of image segmentation model and image segmentation method and device
CN111079761A (en) * 2019-11-05 2020-04-28 北京航空航天大学青岛研究院 Image processing method, image processing device and computer storage medium
CN111079761B (en) * 2019-11-05 2023-07-18 北京航空航天大学青岛研究院 Image processing method, device and computer storage medium
CN111310509A (en) * 2020-03-12 2020-06-19 北京大学 Real-time bar code detection system and method based on logistics waybill
CN111768352B (en) * 2020-06-30 2024-05-07 Oppo广东移动通信有限公司 Image processing method and device
CN111768352A (en) * 2020-06-30 2020-10-13 Oppo广东移动通信有限公司 Image processing method and device
CN112070158A (en) * 2020-09-08 2020-12-11 哈尔滨工业大学(威海) Facial flaw detection method based on convolutional neural network and bilateral filtering
CN112233194A (en) * 2020-10-15 2021-01-15 平安科技(深圳)有限公司 Medical picture optimization method, device and equipment and computer-readable storage medium
CN112233194B (en) * 2020-10-15 2023-06-02 平安科技(深圳)有限公司 Medical picture optimization method, device, equipment and computer readable storage medium
CN113012166A (en) * 2021-03-19 2021-06-22 北京安德医智科技有限公司 Intracranial aneurysm segmentation method and device, electronic device, and storage medium
CN113537350A (en) * 2021-07-16 2021-10-22 商汤集团有限公司 Image processing method and device, electronic equipment and storage medium
CN113537350B (en) * 2021-07-16 2023-12-22 商汤集团有限公司 Image processing method and device, electronic equipment and storage medium
CN113688748B (en) * 2021-08-27 2023-08-18 武汉大千信息技术有限公司 Fire detection model and method
CN113688748A (en) * 2021-08-27 2021-11-23 武汉大千信息技术有限公司 Fire detection model and method
CN113762476B (en) * 2021-09-08 2023-12-19 中科院成都信息技术股份有限公司 Neural network model for text detection and text detection method thereof
CN113762476A (en) * 2021-09-08 2021-12-07 中科院成都信息技术股份有限公司 Neural network model for character detection and character detection method thereof

Also Published As

Publication number Publication date
CN109829920B (en) 2021-06-15

Similar Documents

Publication Publication Date Title
CN109829920A (en) Image processing method and device, electronic equipment and storage medium
WO2021051965A1 (en) Image processing method and apparatus, electronic device, storage medium, and computer program
JP7085062B2 (en) Image segmentation methods, equipment, computer equipment and computer programs
TWI770754B (en) Neural network training method electronic equipment and storage medium
CN110047078B (en) Image processing method and device, electronic equipment and storage medium
TWI743931B (en) Network training, image processing method, electronic device and storage medium
CN109978886A (en) Image processing method and device, electronic equipment and storage medium
WO2022151755A1 (en) Target detection method and apparatus, and electronic device, storage medium, computer program product and computer program
CN109360210B (en) Image partition method, device, computer equipment and storage medium
Tania et al. Advances in automated tongue diagnosis techniques
WO2020211293A1 (en) Image segmentation method and apparatus, electronic device and storage medium
CN112767329B (en) Image processing method and device and electronic equipment
CN111091576A (en) Image segmentation method, device, equipment and storage medium
WO2020224479A1 (en) Method and apparatus for acquiring positions of target, and computer device and storage medium
CN109166107A (en) A kind of medical image cutting method and device, electronic equipment and storage medium
WO2023020198A1 (en) Image processing method and apparatus for medical image, and device and storage medium
CN114820584B (en) Lung focus positioner
CN110033005A (en) Image processing method and device, electronic equipment and storage medium
CN110321768A (en) For generating the arrangement of head related transfer function filter
CN113674269B (en) Tumor brain area positioning method and device based on consistency loss
CN113610750A (en) Object identification method and device, computer equipment and storage medium
WO2022032998A1 (en) Image processing method and apparatus, electronic device, storage medium, and program product
TWI765386B (en) Neural network training and image segmentation method, electronic device and computer storage medium
US11721023B1 (en) Distinguishing a disease state from a non-disease state in an image
CN113349810B (en) Cerebral hemorrhage focus identification and hematoma expansion prediction system and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant