CN109377496A - System, method, and medium for Medical Image Segmentation - Google Patents


Info

Publication number
CN109377496A
Authority
CN
China
Prior art keywords
feature, mapping, block, image, level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811268094.5A
Other languages
Chinese (zh)
Other versions
CN109377496B (en)
Inventor
宋麒
陈翰博
孙善辉
尹游兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Keya Medical Technology Co Ltd
Original Assignee
Kunlun Beijing Medical Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US application 16/159,573 (US10783640B2)
Application filed by Kunlun Beijing Medical Cloud Technology Co Ltd filed Critical Kunlun Beijing Medical Cloud Technology Co Ltd
Publication of CN109377496A
Application granted
Publication of CN109377496B
Legal status: Active (Current)
Anticipated expiration


Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis › G06T7/10 Segmentation; Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/10 Image acquisition modality › G06T2207/10072 Tomographic images
        • G06T2207/10081 Computed x-ray tomography [CT]
        • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/10116 X-ray image
    • G06T2207/20 Special algorithmic details
        • G06T2207/20081 Training; Learning
        • G06T2207/20084 Artificial neural networks [ANN]

Abstract

Embodiments of the disclosure provide systems, methods, and media for medical image segmentation. An exemplary system includes a communication interface configured to receive a medical image acquired by an image acquisition device. The system further includes a memory configured to store a multi-level learning network that includes at least a first convolution block and a second convolution block, the second convolution block having at least one convolutional layer. The system also includes a processor. The processor is configured to determine a first feature map by applying the first convolution block to the medical image, and to determine a second feature map by applying the second convolution block to the first feature map. The processor is further configured to determine a first-level feature map by combining the first feature map and the second feature map, and to obtain a first-level segmented image based on the first-level feature map.

Description

System, method, and medium for Medical Image Segmentation
Cross-reference to related applications
This application is based on and claims priority to U.S. Provisional Application No. 62/578,907, filed October 30, 2017, which is incorporated herein by reference in its entirety.
Technical field
This disclosure relates to systems and methods for medical image segmentation, and more particularly, to systems and methods for medical image segmentation using a multi-level learning network that includes a convolutional ladder.
Background technique
The accurate segmentation of medical image is the committed step delineated in radiotherapy treatment planning.Image segmentation is by digital picture It is divided into the process of multiple portions.The target of segmentation is the expression of simplified image and/or is changed into more meaningful and be easier to In the content of analysis.Image segmentation is usually to position object in image and boundary (line curve etc.).More precisely, figure As segmentation is process label distributed to each pixel in image, the pixel with same label is made to share certain features. Image segmentation has been used to various applications, including positioning tumor and other pathology, the diagnosis for measuring tissue volume, anatomical structure It navigates in research, surgical planning, virtual surgery simulation and art.
Image segmentation can be solved as a classification problem. Learning networks with powerful hierarchical architectures, such as convolutional neural networks (CNNs), have been applied to image segmentation to improve accuracy. For example, automatic classification using a CNN can significantly outperform traditional image segmentation methods, such as atlas-based segmentation and shape-based segmentation.
CNNs were originally developed to classify images into different categories (for example, digits in scanned postal codes, or cats and dogs in social media photos). A CNN is typically formed by cascaded convolutional layers, followed by pooling layers and fully connected layers. For example, Fig. 1(a) shows a CNN 110. Using CNN 110, image segmentation can be performed by classifying pixels/voxels over the entire image with a sliding-window approach. However, the computational cost of this approach is prohibitive, especially for large images. Most of the cost arises because multiple image patches must be explicitly generated and a forward prediction pass must be executed for each patch.
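To illustrate the cost described above, here is a back-of-the-envelope patch count for a hypothetical sliding-window classifier; the volume size, patch size, and stride are illustrative assumptions, not values from the patent:

```python
def num_patches(image_size, patch_size, stride=1):
    """Number of patches a sliding-window classifier must evaluate
    along one dimension of the image."""
    return (image_size - patch_size) // stride + 1

# a hypothetical 512x512x512 CT volume scanned with 32^3 patches at stride 1:
per_axis = num_patches(512, 32)
total = per_axis ** 3
print(per_axis, total)  # -> 481 111284641 forward passes
```

Over a hundred million forward passes for one volume makes clear why explicit patch-wise classification does not scale to large 3D images.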
Fully convolutional networks (FCNs) were introduced to address this expensive computation. In an FCN, the decision layers (multi-layer perceptron) also use convolution operations, so the algorithm can slide convolution kernels over the entire image to produce the final segmentation. For example, Fig. 1(b) shows an FCN 120 for semantic segmentation. Notably, these methods usually need pooling layers to obtain contextual information by enlarging the receptive field, which simultaneously reduces the spatial resolution of the subsequent layers. To ensure that the output segmentation has the same resolution as the input, upsampling is usually performed, either by an interpolation algorithm or by transposed convolution. However, over-smoothed boundaries are inevitable due to the loss of spatial information.
To exploit the speed of FCNs while avoiding the loss of boundary precision, low-spatial-resolution feature maps can be continuously upsampled, as in the decoder of a convolutional auto-encoder, and concatenated with previously generated feature maps of equal resolution (herein, "joining" feature maps refers to the concatenate operation performed on feature maps in the machine learning field). The upsampled global features help ensure the overall accuracy of the segmentation, and the concatenated local features help refine the segmentation and maintain sharp boundaries. Because the network architecture forms a "U" shape, it can be called U-Net. For example, Fig. 1(c) shows a supervised U-Net 130. To make the network easier to train for segmenting sparsely distributed objects, several loss functions can be incorporated into a deeply supervised network, allowing the network to generate segmentations based on features at different spatial resolutions. Although U-Net and its related methods provide high accuracy, they are still time-consuming for whole-image segmentation when large 3D medical images, such as CT lung scans, are involved.
Embodiments of the disclosure address the above problems with systems and methods for medical image segmentation using a multi-level learning network that includes a convolutional ladder.
Summary of the invention
Embodiments of the disclosure provide a system for medical image segmentation. The system includes a communication interface configured to receive a medical image acquired by an image acquisition device. The system further includes a memory configured to store a multi-level learning network that includes at least a first convolution block and a second convolution block, the second convolution block having at least one convolutional layer. The system also includes a processor configured to determine a first feature map by applying the first convolution block to the medical image, and to determine a second feature map by applying the second convolution block to the first feature map. The processor is further configured to determine a first-level feature map by combining the first feature map and the second feature map, and to obtain a first-level segmented image based on the first-level feature map.
Embodiments of the disclosure also provide a method for medical image segmentation. The method includes receiving, through a communication interface, a medical image acquired by an image acquisition device. The method further includes obtaining a multi-level learning network that includes at least a first convolution block and a second convolution block, the second convolution block having at least one convolutional layer. The method also includes determining, by a processor, a first feature map by applying the first convolution block to the medical image, and determining, by the processor, a second feature map by applying the second convolution block to the first feature map. In addition, the method includes determining, by the processor, a first-level feature map by combining the first feature map and the second feature map, and obtaining, by the processor, a first-level segmented image based on the first-level feature map.
Embodiments of the disclosure further provide a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform a method for medical image segmentation. The method includes receiving a medical image acquired by an image acquisition device, and obtaining a multi-level learning network that includes at least a first convolution block and a second convolution block, the second convolution block having at least one convolutional layer. The method also includes determining a first feature map by applying the first convolution block to the medical image, and determining a second feature map by applying the second convolution block to the first feature map. In addition, the method includes determining a first-level feature map by combining the first feature map and the second feature map, and obtaining a first-level segmented image based on the first-level feature map.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only, and are not restrictive of the claimed invention.
Detailed description of the invention
Fig. 1 shows exemplary prior-art learning networks.
Fig. 2 shows a schematic diagram of an exemplary image segmentation system, according to an embodiment of the disclosure.
Fig. 3 shows an exemplary multi-level learning network for medical image segmentation, according to an embodiment of the disclosure.
Fig. 4 shows an exemplary multi-level learning network that includes a pooling layer in each convolution block, according to an embodiment of the disclosure.
Fig. 5 shows an exemplary multi-level learning network that includes a dilated convolutional layer in each convolution block, according to an embodiment of the disclosure.
Fig. 6 shows a block diagram of an exemplary image processing device, according to an embodiment of the disclosure.
Fig. 7 shows a flowchart of an exemplary method for medical image segmentation, according to an embodiment of the disclosure.
Specific embodiment
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or similar parts.
Fig. 2 shows an exemplary image segmentation system 200, according to some embodiments of the disclosure. Consistent with the disclosure, image segmentation system 200 is configured to segment a medical image acquired by an image acquisition device 205. Image segmentation system 200 may receive the medical image from image acquisition device 205. Alternatively, the medical image may be initially stored in an image database, such as medical image database 204, and image segmentation system 200 may receive the medical image from the image database. In some embodiments, the medical image may be a two-dimensional (2D) or three-dimensional (3D) image. A 3D image may include several 2D image slices.
In some embodiments, image acquisition device 205 may acquire medical images using any suitable imaging modality, including, for example, functional MRI (e.g., fMRI, DCE-MRI, and diffusion MRI), cone-beam CT (CBCT), spiral CT, positron emission tomography (PET), single-photon emission computed tomography (SPECT), X-ray imaging, optical tomography, fluorescence imaging, ultrasound imaging, radiotherapy portal imaging, etc.
For example, image acquisition device 205 may be an MRI scanner. The MRI scanner includes a magnet that surrounds a patient tube with a magnetic field. A patient is positioned on a padded table that can move into the patient tube. The MRI scanner further includes gradient coils in several directions (e.g., the x, y, and z directions) to create a spatially varying magnetic field on top of the uniform magnetic field generated by the magnet. The strength of the uniform magnetic field used by MRI scanners is typically between 0.2 T and 7 T, e.g., around 1.5 T or 3 T. The MRI scanner also includes RF coils to excite the tissues inside the patient's body, and a transceiver to receive the electromagnetic signals generated by the tissues while returning to an equilibrium state.
As another example, image acquisition device 205 may be a CT scanner. The CT scanner includes an X-ray source that emits X-rays against body tissues, and a receiver that receives the residual X-rays after attenuation by the body tissues. The CT scanner also includes a rotating mechanism to capture X-ray images at different view angles. The rotating mechanism can be a rotating table that rotates the patient, or a rotating structure that rotates the X-ray source and the receiver around the patient. The X-ray images at different angles are then processed by a computer system to construct a two-dimensional (2D) cross-sectional image or a three-dimensional (3D) image.
As shown in Fig. 2, image segmentation system 200 may include components for performing two phases, a training phase and a segmentation phase. To perform the training phase, image segmentation system 200 may include a training database 201 and a model training device 202. To perform the segmentation phase, image segmentation system 200 may include an image processing device 203 and/or a medical image database 204. In some embodiments, image segmentation system 200 may include more or fewer of the components shown in Fig. 2. For example, when the segmentation network for medical image segmentation is pre-trained and provided, image segmentation system 200 may omit training database 201 and model training device 202, e.g., including only image processing device 203 and medical image database 204. As another example, when medical image database 204 is a third-party database or is located remotely from image processing device 203, image segmentation system 200 may include only image processing device 203.
Image segmentation system 200 may optionally include a network 206 to facilitate communication among the various components of image segmentation system 200, such as databases 201 and 204 and devices 202, 203, and 205. For example, network 206 may be a local area network (LAN), a wireless network, a cloud computing environment (e.g., software as a service, platform as a service, infrastructure as a service), a client-server environment, a wide area network (WAN), the Internet, etc. In some embodiments, network 206 may be replaced by wired data communication systems or devices.
In some embodiments, as shown in Fig. 2, the various components of image segmentation system 200 may be remote from each other or in different locations, connected through network 206. In some alternative embodiments, certain components of image segmentation system 200 may be located on the same site or inside one integrated device. For example, training database 201 may be located on-site with model training device 202, or be part of model training device 202. As another example, model training device 202 and image processing device 203 may be inside the same computer or processing device.
As shown in Fig. 2, model training device 202 may communicate with training database 201 to receive one or more sets of training data. Each set of training data may include a medical image and its corresponding ground-truth label map, which provides the segmentation result for each pixel of the image. The training images stored in training database 201 may be obtained from a medical image database containing previously acquired medical images. The training images can be 2-D or 3-D images. The training images can be segmented by classifying each pixel/voxel and setting a value as its label, e.g., a value of 1 if the pixel/voxel corresponds to an object of interest (e.g., cancer), or a value of 0 if the pixel/voxel corresponds to background (e.g., non-cancer).
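A ground-truth label map of the kind described above can be sketched in a few lines; the intensity-based rule below is a toy stand-in for a real annotation process, not the patent's method:

```python
def label_map(image, is_object):
    """Build a ground-truth label map: 1 for pixels of the object of
    interest, 0 for background, as described in the text."""
    return [[1 if is_object(p) else 0 for p in row] for row in image]

# toy example: label intensities above 100 as the object of interest
seg = label_map([[30, 150], [120, 10]], lambda p: p > 100)
print(seg)  # -> [[0, 1], [1, 0]]
```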
Model training device 202 may use the training data received from training database 201 to train a segmentation network for segmenting medical images received from, e.g., image acquisition device 205 or medical image database 204. Model training device 202 may be implemented with hardware specially programmed by software that performs the training process. For example, model training device 202 may include a processor and a non-transitory computer-readable medium. The processor may conduct the training by performing instructions of a training process stored in the computer-readable medium. Model training device 202 may additionally include input and output interfaces to communicate with training database 201, network 206, and/or a user interface (not shown). The user interface may be used for selecting sets of training data, adjusting one or more parameters of the training process, selecting or modifying the framework of the learning network, and/or manually or semi-automatically providing detection results associated with an image for training.
As used herein, "training" a learning network refers to determining one or more parameters of at least one layer in the learning network. For example, a convolutional layer of a CNN model may include at least one filter or kernel. One or more parameters of the at least one filter, such as kernel weights, size, shape, and structure, may be determined by, e.g., a backpropagation-based training process. Consistent with the disclosure, the multi-level learning network may be trained by model training device 202 using the training data.
Consistent with the disclosure, the segmentation network for medical image segmentation can be a machine learning network, such as the multi-level learning network. The segmentation network may be trained using supervised learning. The architecture of the segmentation network includes a stack of distinct blocks and layers, each block or layer transforming one or more inputs into one or more outputs. Examples of the different layers may include one or more convolutional layers or fully convolutional layers, non-linear operator layers, pooling or subsampling layers, fully connected layers, and/or final loss layers. Each layer may connect one upstream layer and one downstream layer.
Consistent with the disclosure, the segmentation network may include a convolutional ladder, which contains several cascaded convolution blocks to generate feature maps of different levels (resolutions). The disclosed convolutional-ladder-based segmentation network is compact and efficient in that: 1) it simplifies the decoder path through multi-resolution feature fusion; 2) it reduces the number of parameters used in the network; and 3) it maintains spatial resolution during convolution. In some embodiments, the convolutional-ladder-based network architecture is also scalable. In some embodiments, because segmentation results can be generated at several resolutions, the user can control the depth of the convolutional ladder by stopping early when a desired segmentation result is achieved. As a result, the disclosed segmentation network can significantly reduce running time without sacrificing accuracy.
For example, Fig. 3 shows an exemplary multi-level learning network 300 for medical image segmentation, according to an embodiment of the disclosure. In some embodiments, multi-level learning network 300 includes cascaded convolution blocks at different levels. For example, multi-level learning network 300 has an initial convolution block 310 at level 0, connected with a parallel convolution block 320 at level 1, which is further connected with a series of parallel convolution blocks at level 2, level 3, ..., level n.
In some embodiments, multi-level learning network 300 uses multi-resolution feature fusion. For example, the feature map of each level is combined with the feature map of the previous level to generate the segmentation result of that level. In conventional networks such as U-Net, half of the computation is dedicated to the decoding network, which continuously fuses features of different resolutions to recover the spatial resolution while making predictions for the output segmented image. In common segmentation tasks, such as segmenting a cat from a camera scene, high-level global features with larger receptive fields are more critical than local features for making correct predictions. Hence, it may be important and inevitable for such a decoding network to make correct predictions while recovering the spatial resolution. For medical image segmentation tasks, however, local image features can be as important as global features. For example, in CT images, the intensity of each local voxel is defined on the Hounsfield unit (HU) scale, such that the radiodensity of distilled water is 0 HU and the radiodensity of air is -1000 HU. To roughly mask the pure air in a CT image, a value slightly above -1000 HU can be used as an image threshold. Accordingly, the disclosed multi-level learning network fuses features of different scales and resolutions to save computation cost.
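The air-thresholding remark above can be illustrated with a minimal sketch; the -950 HU threshold and the toy 2x2 "slice" are illustrative assumptions, not values from the patent:

```python
def air_mask(ct_slice, threshold=-950):
    """Mask voxels that are (roughly) pure air in a CT slice.
    -950 HU is an illustrative cutoff slightly above air's -1000 HU."""
    return [[hu < threshold for hu in row] for row in ct_slice]

# toy 2x2 slice: air, fat-like tissue, near-air, soft tissue (in HU)
ct = [[-1000, -60], [-990, 40]]
print(air_mask(ct))  # -> [[True, False], [True, False]]
```

The point the paragraph makes is that such a purely local intensity rule already carries real information in CT, which is why local features matter as much as global ones.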
In some embodiments, feature maps of different levels can be extracted successively in a segmentation network (e.g., a CNN). These features can be combined directly pixel-wise, and the final decision can be made by fusing them with an additional convolution block. In some embodiments, the joining can usually be performed at the original spatial resolution, and the subsequent convolution blocks can maintain the spatial resolution, so that the output segmented image has the same resolution as the input image. If the spatial resolution of a feature map is lower than that of the original image due to pooling or other processes, the feature map can be upsampled accordingly before the joining. The upsampling can be performed, for example, by a simple interpolation algorithm, such as nearest neighbor, linear interpolation, or b-spline interpolation, or by a trained deconvolution layer. For example, as shown in Fig. 3, a level-0 feature map 312 can be generated from an original image 302 (e.g., a medical image acquired by image acquisition device 205) by initial convolution block 310. A feature map 332 can be generated based on the level-0 feature map 312 by parallel convolution block 320 (as feature map 322) and then upsampled by an upsampling block 330. The level-0 feature map 312 and feature map 332 are combined to generate a level-1 feature map 334.
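The upsample-then-join step described above can be sketched in plain Python; lists of lists stand in for single-channel feature maps, and `join` is hypothetical shorthand for channel-wise concatenation, not an API from the patent:

```python
def upsample_nearest(fmap, factor=2):
    """Nearest-neighbour upsampling of a single-channel 2-D feature map."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out

def join(*channels):
    """'Joint' (concatenate) feature maps along the channel axis; all
    maps must already share the same spatial resolution."""
    h, w = len(channels[0]), len(channels[0][0])
    assert all(len(c) == h and len(c[0]) == w for c in channels)
    return list(channels)

level0 = [[5, 6, 7, 8]] * 4                  # a 4x4 level-0 feature map
level1 = upsample_nearest([[1, 2], [3, 4]])  # 2x2 map brought back to 4x4
fused = join(level0, level1)                 # 2-channel map for the next block
print(len(fused), fused[1][0])  # -> 2 [1, 1, 2, 2]
```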
In some embodiments, each parallel convolution block (e.g., parallel convolution block 320) may include several convolutional layers arranged in parallel with each other. For example, Fig. 3 shows k convolutional layers in each parallel convolution block. In some embodiments, the input feature map (e.g., level-0 feature map 312) can be distributed to the different convolutional layers to generate several intermediate feature maps concurrently. In some embodiments, the output intermediate feature maps of these layers can be combined to generate a new convolved feature map, such as feature map 322. It is also contemplated that the convolutional layers may have different configurations, making it possible to extract and fuse image features at different levels.
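The distribute-and-combine behaviour of a parallel convolution block can be sketched abstractly; the branch functions below are toy stand-ins for the k parallel convolutional layers, not the patent's actual layers:

```python
def parallel_block(feature_map, branch_ops, combine):
    """Sketch of a parallel convolution block: the input feature map is
    distributed to k branches, and their intermediate feature maps are
    combined into one output."""
    intermediates = [op(feature_map) for op in branch_ops]
    return combine(intermediates)

# toy branches standing in for k = 3 parallel convolutional layers
branches = [lambda m: [v + 1 for v in m],
            lambda m: [v * 2 for v in m],
            lambda m: [v - 1 for v in m]]

# 'joint' the intermediates, i.e. concatenate them channel-wise
fused = parallel_block([1, 2], branches, combine=lambda ms: ms)
print(fused)  # -> [[2, 3], [2, 4], [0, 1]]
```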
In a traditional CNN, the number of feature map filters in the segmentation network is increased continuously, because each level needs additional units to "memorize" useful low-level features and pass the information on to non-adjacent layers. The increased filter count may significantly increase the number of parameters in the network, and therefore the computational complexity. For example, for a convolutional layer that takes 512 feature maps as input and outputs 1024 feature maps, the number of parameters needed is 512 x 1024 x K, where K is the size of the kernel. This is 512 times the number of parameters of a convolutional layer that takes 32 feature maps as input and outputs 32 feature maps. Because the disclosed segmentation network combines all the feature maps when making predictions, there is no need for additional units to pass low-level features to non-adjacent layers. In some embodiments, the high-level image features in some segmentation tasks (such as medical image segmentation) are not overly complex, and for those tasks the same number of feature maps can be used for each convolution block.
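The parameter arithmetic in the paragraph above can be checked directly; bias terms are ignored, and K = 27 (a 3x3x3 kernel) is an illustrative choice:

```python
def conv_params(in_ch, out_ch, kernel_size):
    """Weight count of a convolutional layer (biases ignored), following
    the in_ch x out_ch x K formula in the text."""
    return in_ch * out_ch * kernel_size

K = 27  # a 3x3x3 kernel has 27 weights
wide = conv_params(512, 1024, K)    # 512 maps in, 1024 maps out
narrow = conv_params(32, 32, K)     # 32 maps in, 32 maps out
print(wide // narrow)  # -> 512, matching the ratio stated in the text
```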
In some embodiments, pooling layers can be introduced into the convolutional neural network and placed between convolutional layers to downsample the image. Using pooling layers this way enlarges the receptive field of the subsequent convolutional layers, removes redundant spatial features, and drives the network to learn hierarchical information (from local to global). For example, Fig. 4 shows an exemplary multi-level learning network 400, according to an embodiment of the disclosure, that includes a pooling layer in each convolution block. For example, the initial convolution block may include a convolutional layer 412 and a max pooling layer 414. Each subsequent parallel convolution block may include convolutional layers 422 and max pooling layers 424. In the example shown in Fig. 4, convolutional layers 412 and 422 can use 32 filters with a 3 x 3 x 3 convolution kernel. Max pooling layers 414 and 424 can have a stride of 2 in each dimension. As a result, the receptive fields of the layers at successive levels are 3 x 3 x 3 (convolutional layer 412), 6 x 6 x 6 (max pooling layer 414), 8 x 8 x 8 (convolutional layer 422), 16 x 16 x 16 (max pooling layer 424), 18 x 18 x 18 (level-2 convolutional layer), and 36 x 36 x 36 (level-2 max pooling layer), increasing continuously across consecutive layers.
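A small helper reproduces the receptive-field progression stated above, under the rule the text implies: each 3x3x3 convolution adds 2 to the field, and each stride-2 max pool is treated as doubling it. Note this doubling convention is how the patent's numbers come out; it is not the only convention for computing receptive fields:

```python
def receptive_fields(levels):
    """Per-layer receptive-field sizes for `levels` repetitions of
    (3x3x3 conv, stride-2 max pool), per the progression in the text."""
    r = 1
    sizes = []
    for _ in range(levels):
        r += 2          # 3x3x3 convolutional layer
        sizes.append(r)
        r *= 2          # 2x2x2 max pooling layer, stride 2
        sizes.append(r)
    return sizes

print(receptive_fields(3))  # -> [3, 6, 8, 16, 18, 36]
```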
In some other embodiments, atrous convolution, rather than pooling layers, can be used to enlarge the receptive field. Consistent with the disclosure, atrous convolution may be convolution with holes, i.e., dilated convolution. This operation enlarges the receptive field of the convolution without introducing additional parameters. If the parameters are selected properly, the size of the receptive field can grow exponentially with the number of sequentially cascaded convolutional layers. For example, Fig. 5 shows an exemplary multi-level learning network 500, according to an embodiment of the disclosure, that includes a dilated convolutional layer in each convolution block. For example, the initial convolution block may include a dilated convolutional layer 510 that uses 32 filters with a 3 x 3 kernel size and 1 x 1 dilation. Each subsequent parallel convolution block includes a dilated convolutional layer 520 that uses 32 filters with a 3 x 3 kernel size and 2^i x 2^i dilation. For example, the dilation is 2 x 2 at level 1. Accordingly, the receptive field of the layer is 3 x 3 at level 0 and (2^(i+2) - 1) x (2^(i+2) - 1) at subsequent level i, i.e., 7 x 7 at level 1, 15 x 15 at level 2, etc. The receptive field of the layers at each level likewise continues to grow.
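The exponential growth claimed above follows from the standard rule that stacking a 3x3 convolution with dilation d adds 2d to the receptive field; a quick sketch, assuming dilation 2**i at level i as in the text:

```python
def dilated_receptive_field(levels):
    """Receptive field after stacking 3x3 convs with dilation 2**i
    at level i, for i = 0 .. levels-1."""
    rf = 1
    out = []
    for i in range(levels):
        rf += 2 * (2 ** i)   # a 3x3 conv with dilation d adds 2*d
        out.append(rf)
    return out

print(dilated_receptive_field(3))  # -> [3, 7, 15]
```

Each value equals 2^(i+2) - 1, matching the 7 x 7 and 15 x 15 figures in the text while keeping the parameter count constant.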
Referring back to Fig. 2, image processing device 203 may receive the segmentation network, e.g., multi-level learning network 300/400/500, from model training device 202. Image processing device 203 may include a processor and a non-transitory computer-readable medium (discussed in detail in connection with Fig. 6). The processor may perform instructions of an image segmentation process stored in the medium. Image processing device 203 may additionally include input and output interfaces (discussed in detail in connection with Fig. 6) to communicate with medical image database 204, network 206, and/or a user interface (not shown). The user interface may be used for selecting a medical image to be segmented, initiating the segmentation process, and displaying the medical image and/or the segmentation results.
Image processing device 203 may communicate with medical image database 204 to receive one or more medical images. In some embodiments, the medical images stored in medical image database 204 may include medical images of one or more imaging modalities. The medical images may be acquired by image acquisition devices 205, such as an MRI scanner or a CT scanner. Image processing device 203 may use the trained segmentation network received from model training device 202 to predict whether each pixel (if 2-D) or voxel (if 3-D) of a medical image corresponds to an object of interest, and to output the segmented image.
In some embodiments, image processing device 203 may apply multi-level learning network 300 to original image 302. At level 0, image processing device 203 may determine the level-0 feature map 312 by applying initial convolutional block 310. At level 1, image processing device 203 may determine feature map 322 by applying parallel convolutional block 320 to the level-0 feature map 312. If feature map 322 has a lower spatial resolution than original image 302, image processing device 203 may use upsampling block 330 to upsample feature map 322 to obtain feature map 332 with the same spatial resolution as original image 302. Image processing device 203 may combine feature map 332 with the level-0 feature map 312 to generate the level-1 feature map 334. Image processing device 203 may apply another convolutional block 340 to the level-1 feature map 334 to obtain the level-1 segmented image 342. In some embodiments, image processing device 203 may continue down the "convolutional ladder" of images to apply successive parallel convolutional blocks and obtain segmented images at different levels in a manner similar to obtaining the level-1 segmented image 342 described above.
In some embodiments, the segmentation network may be scalable when applied by image processing device 203 to obtain segmented images. In some embodiments, because the segmentation network returns segmented images at the different levels successively, image processing device 203 may decide to stop the network early when the segmented image at a certain level is good enough. In some embodiments, the decision may be based on the calculation of a predetermined parameter associated with the segmented image. For example, image processing device 203 may determine that the difference between the level-i segmented image and the level-(i+1) segmented image is smaller than a threshold. In some embodiments, the segmented images at different levels may be displayed to the user, and the user may manually stop further application of the segmentation network.
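One concrete way such a stopping parameter could be computed is sketched below; this is only an illustration, assuming the segmented images are equally sized label maps, and the function name and the 1% default threshold are not taken from the disclosure:

```python
import numpy as np

def should_stop(seg_prev, seg_curr, threshold=0.01):
    # Stop once the fraction of pixels whose label changed between two
    # successive levels falls below the threshold.
    changed = float(np.mean(seg_prev != seg_curr))
    return changed < threshold
```

Any other scalar measure of the difference between successive segmented images (e.g., mean absolute difference of probability maps) could serve the same role.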
In some embodiments, the number of levels in the segmentation network may be predetermined and set by model training device 202. For example, model training device 202 may determine the size of the network based on tests before providing the segmentation network to image processing device 203. For example, if the segmentation output at a certain level is good enough and cannot be further improved by subsequent levels, the subsequent levels may be discarded from the segmentation network. As another example, if the segmented image at a lower level does not provide reasonable performance, the relevant convolutional block may likewise be eliminated from the segmentation network.
FIG. 6 illustrates an exemplary image processing device 203 according to some embodiments of the present disclosure. In some embodiments, image processing device 203 may be a special-purpose computer or a general-purpose computer. For example, image processing device 203 may be a computer customized for a hospital to perform image acquisition and image processing tasks. As shown in FIG. 6, image processing device 203 may include a communication interface 602, a processor 604, a memory 606, a storage 608, and a display 610.
Communication interface 602 may include a network adapter, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adapter (such as fiber, USB 3.0, or Thunderbolt), a wireless network adapter (such as a WiFi adapter), a telecommunication (3G, 4G/LTE, etc.) adapter, or the like. Image processing device 203 may be connected to other components of image segmentation system 200 and network 206 through communication interface 602. In some embodiments, communication interface 602 receives medical images from image acquisition device 205. For example, image acquisition device 205 is an MRI scanner or a CT scanner. In some embodiments, communication interface 602 also receives the segmentation network, e.g., multi-level learning network 300/400/500, from model training device 202.
Processor 604 may be a processing device including one or more general-purpose processing devices, such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), etc. More specifically, the processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets. The processor may also be one or more dedicated processing devices, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), a system-on-chip (SoC), etc. Processor 604 may be communicatively coupled to memory 606 and configured to execute the computer-executable instructions stored thereon to perform an exemplary image segmentation process, such as the one described in connection with FIG. 7.
Memory 606/storage 608 may be a non-transitory computer-readable medium, such as a read-only memory (ROM), a random access memory (RAM), a phase-change random access memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), an electrically erasable programmable read-only memory (EEPROM), another type of random access memory (RAM), a flash disk or another form of flash memory, a cache, a register, a static memory, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical storage, a cassette tape or other magnetic storage device, or any other possible non-transitory medium used to store information or instructions accessible by a computing device, etc.
In some embodiments, storage 608 may store the trained network (e.g., multi-level learning network 300/400/500) and data, such as original medical images and extracted image features (e.g., the level-i feature maps and intermediate feature maps), received, used, or generated while executing the computer programs, etc. In some embodiments, memory 606 may store computer-executable instructions, such as one or more image processing programs.
In some embodiments, processor 604 may present a visualization of the segmented image and/or other data on display 610. Display 610 may include a liquid crystal display (LCD), a light-emitting diode (LED) display, a plasma display, or any other type of display, and provide a graphical user interface (GUI) presented on the display for user input and image/data display. The display may include a number of different types of materials (such as plastic or glass), and may be touch-sensitive to receive commands from the user. For example, the display may include a substantially rigid touch-sensitive material (such as Gorilla Glass™) or a substantially flexible touch-sensitive material (such as Willow Glass™).
According to the present disclosure, model training device 202 may have the same or a similar structure as image processing device 203. In some embodiments, model training device 202 includes a processor and other components configured to train the segmentation network using training images.
FIG. 7 shows a flowchart of an exemplary method 700 for medical image segmentation according to an embodiment of the present disclosure. For example, method 700 may be implemented by image processing device 203 in FIG. 1. However, method 700 is not limited to this exemplary embodiment. Method 700 may include steps S702-S724 as described below. It should be appreciated that some of the steps provided herein for performing the disclosure may be optional. In addition, some steps may be performed simultaneously, or in an order different from that shown in FIG. 7.
In step S702, image processing device 203 receives, e.g., from medical image database 204, a medical image acquired by image acquisition device 205. The medical image may be of any imaging modality, such as MRI or CT. In step S704, image processing device 203 receives a segmentation network, e.g., multi-level learning network 300/400/500. For example, the segmentation network may be trained by model training device 202.
In step S706, image processing device 203 determines the level-0 feature map by applying the initial convolutional block to the medical image. For example, in the embodiment shown in FIG. 3, image processing device 203 may apply multi-level learning network 300 to original image 302. At level 0, image processing device 203 may determine the level-0 feature map 312 by applying initial convolutional block 310. In some embodiments, initial convolutional block 310 may include convolutional layer 412 and max pooling layer 414. In some other embodiments, initial convolutional block 310 may include atrous convolutional layer 510.
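The downsampling effect of the max pooling layer mentioned above can be sketched as follows (a minimal stand-in for illustration; the function name and window size are assumptions, not the trained network's implementation):

```python
import numpy as np

def max_pool_2x2(fmap):
    # 2x2 max pooling with stride 2: keep the strongest response in each
    # 2x2 window, halving the spatial resolution of the feature map.
    h = fmap.shape[0] // 2 * 2
    w = fmap.shape[1] // 2 * 2
    f = fmap[:h, :w]
    return f.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

pooled = max_pool_2x2(np.arange(16.0).reshape(4, 4))
print(pooled)  # [[ 5.  7.] [13. 15.]]
```

Each pooling layer halves the feature map's resolution, which is why later levels may need upsampling before their feature maps can be combined with the level-0 map.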
In step S708, image processing device 203 sets the level index i = 1. In step S710, image processing device 203 may determine a feature map by applying a parallel convolutional block to the feature map of the previous level. For example, as shown in FIG. 3, at level 1, image processing device 203 may determine feature map 322 by applying parallel convolutional block 320 to the level-0 feature map 312. In some embodiments, parallel convolutional block 320 may include convolutional layer 422 and max pooling layer 424. In some other embodiments, parallel convolutional block 320 may include atrous convolutional layer 520.
In some embodiments, a parallel convolutional block (e.g., parallel convolutional block 320) may include several convolutional layers arranged in parallel. For example, as shown in FIG. 3, parallel convolutional block 320 includes k convolutional layers arranged in parallel, and the level-0 feature map 312 may be distributed to the k convolutional layers to generate several intermediate feature maps, which may be combined to generate feature map 322.
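A minimal sketch of this parallel arrangement is shown below; the naive single-channel convolution and the choice of averaging as the combination are illustrative assumptions only, since the disclosure leaves the exact layer parameters and combination open:

```python
import numpy as np

def conv2d_same(image, kernel):
    # Naive stride-1 "same" cross-correlation, for illustration only.
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * kernel)
    return out

def parallel_block(feature_map, kernels):
    # Distribute the input to k parallel layers, then combine the k
    # intermediate feature maps (here, by averaging) into one output map.
    intermediates = [conv2d_same(feature_map, k) for k in kernels]
    return np.mean(intermediates, axis=0)
```

In a real network each parallel branch would be a trained convolutional layer with many filters; the sketch only shows how the k intermediate feature maps are produced and merged.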
In step S712, image processing device 203 determines whether the spatial resolution of the feature map matches the spatial resolution of the medical image being segmented. If the feature map has a spatial resolution lower than that of the medical image (S712: No), method 700 proceeds to step S714, where image processing device 203 may upsample the feature map, e.g., using upsampling block 330, to obtain a feature map with the same spatial resolution as the medical image. Otherwise (S712: Yes), method 700 proceeds directly to step S716.
In step S716, the image processing device may combine the upsampled feature map with the level-(i−1) feature map to generate the level-i feature map. For example, as shown in FIG. 3, image processing device 203 may combine feature map 332 with the level-0 feature map 312 to generate the level-1 feature map 334. In some embodiments, the pixel values of the feature maps may be combined pixel by pixel. For example, the corresponding pixel values in the feature maps may be added, averaged, or otherwise aggregated to generate the pixel values of the combined feature map.
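The upsampling and pixel-by-pixel combination might be sketched as follows; nearest-neighbour upsampling and averaging are assumptions for illustration, since the disclosure equally allows other interpolation and aggregation choices:

```python
import numpy as np

def upsample_nn(fmap, factor):
    # Nearest-neighbour upsampling, standing in for an upsampling block.
    return np.repeat(np.repeat(fmap, factor, axis=0), factor, axis=1)

def combine_maps(fmap_a, fmap_b):
    # Pixel-by-pixel combination: here, corresponding values are averaged.
    return (fmap_a + fmap_b) / 2.0

low = np.array([[1.0, 2.0], [3.0, 4.0]])
up = upsample_nn(low, 2)                 # 2x2 -> 4x4
fused = combine_maps(up, np.ones((4, 4)))
```

Because the combination is element-wise, both feature maps must have the same spatial resolution first, which is exactly what the check in step S712 guarantees.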
In step S718, image processing device 203 may obtain the level-i segmented image by applying another convolutional block to the level-i feature map obtained in step S716. For example, as shown in FIG. 3, image processing device 203 may apply another convolutional block 340 to the level-1 feature map 334 to obtain the level-1 segmented image 342.
In step S720, image processing device 203 may determine whether the segmentation result obtained in step S718 is satisfactory. In some embodiments, image processing device 203 may calculate certain predetermined parameters associated with the segmented image. For example, image processing device 203 may determine that the difference between the level-i segmented image and the level-(i−1) segmented image is smaller than a threshold, indicating that the improvement obtained by advancing one level is sufficiently small that subsequent refinement may be unnecessary. In such a case, the segmentation result may be considered satisfactory. If the segmentation result is satisfactory (S720: Yes), image processing device 203 may decide to stop applying additional levels of the segmentation network, and provide the level-i segmented image as the final segmentation result in step S724. Otherwise (S720: No), method 700 proceeds to S722 to increment the level index i, and returns to step S710, where image processing device 203 continues down the "convolutional ladder" by repeating steps S710-S720 to apply subsequent parallel convolutional blocks and obtain the segmented images at subsequent levels.
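Putting steps S706-S724 together, the scalable inference loop can be sketched as follows. Every block argument is a hypothetical callable standing in for a trained network component, and the averaging combination and the form of the stopping test are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def segment(image, initial_block, parallel_blocks, output_blocks,
            upsample, diff, threshold=0.01):
    # S706: level-0 feature map from the initial convolutional block.
    feature = initial_block(image)
    seg = prev_seg = None
    # S708/S722: walk down the "convolutional ladder" level by level.
    for parallel, head in zip(parallel_blocks, output_blocks):
        fmap = upsample(parallel(feature), image)   # S710-S714
        feature = (feature + fmap) / 2.0            # S716: pixel-wise combine
        seg = head(feature)                         # S718: level-i segmentation
        # S720: stop early once one more level barely changes the result.
        if prev_seg is not None and diff(prev_seg, seg) < threshold:
            break
        prev_seg = seg
    return seg                                      # S724: final result
```

With identity stand-ins for the blocks, the loop exits after the second level because the segmentation stops changing, which is exactly the scalability property described above.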
Another aspect of the present disclosure is directed to a non-transitory computer-readable medium storing instructions that, when executed, cause one or more processors to perform the method described above. The computer-readable medium may include a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of computer-readable medium or computer-readable storage device. For example, as disclosed, the computer-readable medium may be a storage device or memory module on which the computer instructions are stored. In some embodiments, the computer-readable medium may be a disk or flash drive on which the computer instructions are stored.
It will be apparent to those skilled in the art that various modifications and variations may be made to the disclosed system and associated methods. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed system and associated methods.
It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.

Claims (20)

1. A system for medical image segmentation, the system comprising:
a communication interface configured to receive a medical image acquired by an image acquisition device;
a memory configured to store a multi-level learning network including at least a first convolutional block and a second convolutional block, wherein the second convolutional block has at least one convolutional layer; and
a processor configured to:
determine a first feature map by applying the first convolutional block to the medical image;
determine a second feature map by applying the second convolutional block to the first feature map;
determine a first-level feature map by combining the first feature map and the second feature map; and
obtain a first-level segmented image based on the first-level feature map.
2. The system of claim 1, wherein the second convolutional block has a plurality of convolutional layers in parallel, each convolutional layer configured to determine an intermediate feature map, and wherein the processor is further configured to determine the second feature map by combining the intermediate feature maps.
3. The system of claim 1, wherein the at least one convolutional layer includes an atrous convolutional layer that expands the receptive field of the convolutional layer.
4. The system of claim 1, wherein the second convolutional block further comprises a max pooling layer.
5. The system of claim 1, wherein the processor is further configured to determine the first-level segmented image by applying a serial convolutional block to the first-level feature map.
6. The system of claim 1, wherein the processor is further configured to:
determine whether the spatial resolution of the second feature map is lower than the spatial resolution of the medical image; and
in response to a determination that the spatial resolution of the second feature map is lower than the spatial resolution of the medical image, upsample the second feature map before combining the first feature map and the second feature map.
7. The system of claim 1, wherein the multi-level learning network further comprises a third convolutional block, and wherein the processor is further configured to:
determine a third feature map by applying the third convolutional block to the second feature map;
combine the second feature map and the third feature map to obtain a second-level feature map; and
obtain a second-level segmented image based on the second-level feature map.
8. The system of claim 7, wherein the processor is further configured to:
determine an improvement of the second-level segmented image relative to the first-level segmented image; and
in response to a determination that the improvement is smaller than a threshold, stop applying any additional convolutional blocks in the multi-level learning network.
9. The system of claim 1, wherein combining the first feature map and the second feature map includes combining the values of the first feature map and the values of the second feature map pixel by pixel.
10. A method for medical image segmentation, the method comprising:
receiving, through a communication interface, a medical image acquired by an image acquisition device;
obtaining a multi-level learning network including at least a first convolutional block and a second convolutional block, wherein the second convolutional block has at least one convolutional layer;
determining, by a processor, a first feature map by applying the first convolutional block to the medical image;
determining, by the processor, a second feature map by applying the second convolutional block to the first feature map;
determining, by the processor, a first-level feature map by combining the first feature map and the second feature map; and
obtaining, by the processor, a first-level segmented image based on the first-level feature map.
11. The method of claim 10, wherein the second convolutional block has a plurality of convolutional layers in parallel, each convolutional layer determining an intermediate feature map, and wherein determining the second feature map further comprises combining the intermediate feature maps.
12. The method of claim 10, wherein the at least one convolutional layer includes an atrous convolutional layer that expands the receptive field of the convolutional layer.
13. The method of claim 10, wherein the second convolutional block further comprises a max pooling layer.
14. The method of claim 10, wherein obtaining the first-level segmented image includes applying a serial convolutional block to the first-level feature map.
15. The method of claim 10, further comprising:
determining whether the spatial resolution of the second feature map is lower than the spatial resolution of the medical image; and
in response to a determination that the spatial resolution of the second feature map is lower than the spatial resolution of the medical image, upsampling the second feature map before combining the first feature map and the second feature map.
16. The method of claim 10, wherein the multi-level learning network further comprises a third convolutional block, and wherein the method further comprises:
determining a third feature map by applying the third convolutional block to the second feature map;
determining a second-level feature map by combining the second feature map and the third feature map; and
obtaining a second-level segmented image based on the second-level feature map.
17. The method of claim 16, wherein the method further comprises:
determining an improvement of the second-level segmented image relative to the first-level segmented image; and
in response to a determination that the improvement is smaller than a threshold, stopping the application of any additional convolutional blocks in the multi-level learning network.
18. The method of claim 10, wherein combining the first feature map and the second feature map includes combining the values of the first feature map and the values of the second feature map pixel by pixel.
19. A non-transitory computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by at least one processor, performs a method for medical image segmentation, the method comprising:
receiving a medical image acquired by an image acquisition device;
obtaining a multi-level learning network including at least a first convolutional block and a second convolutional block, wherein the second convolutional block has at least one convolutional layer;
determining a first feature map by applying the first convolutional block to the medical image;
determining a second feature map by applying the second convolutional block to the first feature map;
determining a first-level feature map by combining the first feature map and the second feature map; and
obtaining a first-level segmented image based on the first-level feature map.
20. The non-transitory computer-readable medium of claim 19, wherein the multi-level learning network further comprises a third convolutional block, and wherein the method further comprises:
determining a third feature map by applying the third convolutional block to the second feature map;
determining a second-level feature map by combining the second feature map and the third feature map; and
obtaining a second-level segmented image based on the second-level feature map.
CN201811268094.5A 2017-10-30 2018-10-29 System and method for segmenting medical images and medium Active CN109377496B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762578907P 2017-10-30 2017-10-30
US62/578,907 2017-10-30
US16/159,573 2018-10-12
US16/159,573 US10783640B2 (en) 2017-10-30 2018-10-12 Systems and methods for image segmentation using a scalable and compact convolutional neural network

Publications (2)

Publication Number Publication Date
CN109377496A true CN109377496A (en) 2019-02-22
CN109377496B CN109377496B (en) 2020-10-02

Family

ID=65390405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811268094.5A Active CN109377496B (en) 2017-10-30 2018-10-29 System and method for segmenting medical images and medium

Country Status (1)

Country Link
CN (1) CN109377496B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139023A (en) * 2015-07-24 2015-12-09 福州大学 Seed identification method based on multi-scale feature fusion and extreme learning machine
CN106897573A (en) * 2016-08-01 2017-06-27 12西格玛控股有限公司 Use the computer-aided diagnosis system for medical image of depth convolutional neural networks
CN107085842A (en) * 2017-04-01 2017-08-22 上海讯陌通讯技术有限公司 The real-time antidote and system of self study multiway images fusion
CN107133569A (en) * 2017-04-06 2017-09-05 同济大学 The many granularity mask methods of monitor video based on extensive Multi-label learning

Non-Patent Citations (3)

Title
Liang-Chieh Chen et al., "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs", arXiv:1606.00915v2 [cs.CV] *
Olaf Ronneberger et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation", arXiv:1505.04597v1 [cs.CV] *
Tsung-Yi Lin et al., "Feature Pyramid Networks for Object Detection", arXiv:1612.03144v2 [cs.CV] *

Cited By (18)

Publication number Priority date Publication date Assignee Title
CN109978838A (en) * 2019-03-08 2019-07-05 腾讯科技(深圳)有限公司 Image-region localization method, device and Medical Image Processing equipment
CN109978838B (en) * 2019-03-08 2021-11-30 腾讯科技(深圳)有限公司 Image area positioning method and device and medical image processing equipment
CN109961442A (en) * 2019-03-25 2019-07-02 腾讯科技(深圳)有限公司 Training method, device and the electronic equipment of neural network model
CN109961442B (en) * 2019-03-25 2022-11-18 腾讯科技(深圳)有限公司 Training method and device of neural network model and electronic equipment
CN110110617B (en) * 2019-04-22 2021-04-20 腾讯科技(深圳)有限公司 Medical image segmentation method and device, electronic equipment and storage medium
CN110110617A (en) * 2019-04-22 2019-08-09 腾讯科技(深圳)有限公司 Medical image dividing method, device, electronic equipment and storage medium
US11887311B2 (en) * 2019-04-22 2024-01-30 Tencent Technology (Shenzhen) Company Limited Method and apparatus for segmenting a medical image, and storage medium
US20210365717A1 (en) * 2019-04-22 2021-11-25 Tencent Technology (Shenzhen) Company Limited Method and apparatus for segmenting a medical image, and storage medium
CN110610480A (en) * 2019-08-02 2019-12-24 成都上工医信科技有限公司 MCASPP neural network eyeground image optic cup optic disc segmentation model based on Attention mechanism
CN112348838A (en) * 2019-08-08 2021-02-09 西门子医疗有限公司 Method and system for image analysis
CN110827963A (en) * 2019-11-06 2020-02-21 杭州迪英加科技有限公司 Semantic segmentation method for pathological image and electronic equipment
CN111369562B (en) * 2020-05-28 2020-08-28 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111369562A (en) * 2020-05-28 2020-07-03 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111429460B (en) * 2020-06-12 2020-09-22 腾讯科技(深圳)有限公司 Image segmentation method, image segmentation model training method, device and storage medium
CN111429460A (en) * 2020-06-12 2020-07-17 腾讯科技(深圳)有限公司 Image segmentation method, image segmentation model training method, device and storage medium
CN113034507A (en) * 2021-05-26 2021-06-25 四川大学 CCTA image-based coronary artery three-dimensional segmentation method
CN113793699A (en) * 2021-11-16 2021-12-14 四川省肿瘤医院 Lung tumor delineation method based on 5G cloud radiotherapy private network
CN113793699B (en) * 2021-11-16 2022-03-01 四川省肿瘤医院 Lung tumor delineation method based on 5G cloud radiotherapy private network

Also Published As

Publication number Publication date
CN109377496B (en) 2020-10-02

Similar Documents

Publication Publication Date Title
US11574406B2 (en) Systems and methods for image segmentation using a scalable and compact convolutional neural network
CN109377496A (en) System and method and medium for Medical Image Segmentation
CN109906470B (en) Image segmentation using neural network approach
AU2017315684B2 (en) Systems and methods for image segmentation using convolutional neural network
Zhang et al. ME‐Net: multi‐encoder net framework for brain tumor segmentation
Usman et al. Volumetric lung nodule segmentation using adaptive roi with multi-view residual learning
CN108022238A Method, computer-readable storage medium, and system for detecting objects in a 3D image
CN102753962B (en) System and method for multimode three dimensional optical tomography based on specificity
CN107077736A System and method for feature-based segmentation of medical images based on anatomical landmarks
CN109410188A System and method for segmenting medical images
CN109060849A Method, system, and device for determining radiation dose modulation lines
CN109124666A Method, system, and device for determining radiation dose modulation lines
CN111210444A (en) Method, apparatus and medium for segmenting multi-modal magnetic resonance image
CN102132322A (en) Apparatus for determining modification of size of object
CN109410187B (en) Systems, methods, and media for detecting cancer metastasis in a full image
CN114514558A (en) Segmenting tubular features
CN111612762B (en) MRI brain tumor image generation method and system
US20230237647A1 (en) Ai driven longitudinal liver focal lesion analysis
CN117333692A (en) Method for identifying the type of an organ in a volumetric medical image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Block B, Mingyang International Center, 46 xizongbu Hutong, Dongcheng District, Beijing, 100005

Patentee after: Beijing Keya ark Medical Technology Co.,Ltd.

Address before: Block B, Mingyang International Center, 46 xizongbu Hutong, Dongcheng District, Beijing, 100005

Patentee before: BEIJING CURACLOUD TECHNOLOGY Co.,Ltd.

CP03 Change of name, title or address

Address after: 3f301, East Tower, hadmen square, 8 Chongwenmenwai Street, Dongcheng District, Beijing 100062

Patentee after: Beijing Keya ark Medical Technology Co.,Ltd.

Address before: Block B, Mingyang International Center, 46 xizongbu Hutong, Dongcheng District, Beijing, 100005

Patentee before: Beijing Keya ark Medical Technology Co.,Ltd.

CP01 Change in the name or title of a patent holder

Address after: 3f301, East Tower, hadmen square, 8 Chongwenmenwai Street, Dongcheng District, Beijing 100062

Patentee after: Keya Medical Technology Co.,Ltd.

Address before: 3f301, East Tower, hadmen square, 8 Chongwenmenwai Street, Dongcheng District, Beijing 100062

Patentee before: Beijing Keya ark Medical Technology Co.,Ltd.