CN105389813A - Medical image organ recognition method and segmentation method - Google Patents


Info

Publication number
CN105389813A
CN105389813A
Authority
CN
China
Prior art keywords
organ
medical image
image
target organ
sample
Prior art date
Legal status
Granted
Application number
CN201510729150.0A
Other languages
Chinese (zh)
Other versions
CN105389813B (en)
Inventor
田野
李强
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN201510729150.0A priority Critical patent/CN105389813B/en
Priority claimed from CN201510729150.0A external-priority patent/CN105389813B/en
Publication of CN105389813A publication Critical patent/CN105389813A/en
Application granted granted Critical
Publication of CN105389813B publication Critical patent/CN105389813B/en
Legal status: Active

Classifications

    • G06T7/0012 Biomedical image inspection (under G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06T2207/10072 Image acquisition modality: tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/10104 Positron emission tomography [PET]
    • G06T2207/10108 Single photon emission computed tomography [SPECT]
    • G06T2207/30004 Subject of image: biomedical image processing
    • G06T2207/30008 Bone

Abstract

The invention discloses a medical image organ recognition method. The method comprises: acquiring a medical image to be processed, splitting the medical image into a plurality of two-dimensional images along the X-, Y- and Z-axis directions, and setting a detection window according to the size of the target organ; traversing the two-dimensional images with the detection window at a set detection step to obtain detection results in the X-, Y- and Z-axis directions; and fusing the detection results, retaining the pixels detected as positive in all three of the X-, Y- and Z-axis directions, and determining the target organ boundary. The method recognizes the target organ region quickly and accurately, determines the target organ boundary, and adapts well to different organs. The invention also provides a medical image organ segmentation method.

Description

Medical image organ recognition method and segmentation method
[Technical Field]
The present invention relates to the field of medical image processing, and in particular to a method for recognizing organs in medical images and a method for segmenting them.
[Background Art]
As medical imaging technology has matured and imaging devices have come into widespread hospital use, images of internal human tissue can be obtained conveniently and non-destructively, and processing this information effectively with image processing techniques to assist physicians in diagnosis and even surgical planning has great social benefit and broad application prospects. A computed tomography (CT) image, for example, is composed of pixels of different gray levels from black to white; each pixel reflects the X-ray absorption coefficient of the corresponding voxel, and the different gray levels reflect how strongly the organs or tissues absorb X-rays. In the later image processing stage, correct and reasonable segmentation of the CT image allows the organs, tissues or lesions of interest to be extracted and then visualized in three dimensions, serving treatment assistance and surgical planning. For a complex body region such as the abdomen (which contains the liver, gallbladder, spleen, stomach, inferior vena cava, aorta and so on), however, segmenting every organ takes more than 3 minutes, and if the body part (head, chest, abdomen or pelvis) shown in the CT image being processed is not known, blindly calling the segmentation algorithms of all organs wastes a great deal of time. In addition, because of the limitations of the imaging device and the peristalsis of the tissues, abdominal CT images contain artifacts and noise that blur some organs and leave lesion boundaries indistinct, all of which makes segmentation considerably harder. The body part shown in the CT image being processed therefore needs to be determined before the segmentation algorithm is called.
Having physicians manually identify and label the body part shown in the current CT image requires a great deal of repetitive work and is inefficient. Existing methods for automatically identifying the body part in a CT image fall mainly into three categories. (1) Identification based on the header information of the Digital Imaging and Communications in Medicine (DICOM) file [1]: the DICOM header usually contains labels describing the CT scan, but because the labels are recorded in different languages, parsing the header accurately is difficult, and erroneous header information leads directly to identification errors. (2) Methods based on gray-value features: since X-rays are attenuated to different degrees by different tissue components, the gray-value distributions of these components in a CT image differ, and these methods divide the body into parts using prior knowledge of the gray-value distributions of the main tissue components; their recognition rates for the head and the pelvis, however, are low [2]. (3) Methods based on machine learning, which consist of a training stage and a testing stage: in the training stage, the Haar image features of the key organ of each body part are extracted, a large number of positive and negative samples are built, and an AdaBoost classifier is trained to select the effective Haar feature sequence of each organ and its weights; in the testing stage, the Haar feature values of the input image are computed and compared with the training result to judge whether the image is a positive sample [3]. This last approach requires repeated up-sampling or down-sampling of the feature window when extracting the Haar features, uses a large number of Haar features, and therefore suffers from redundant computation, a heavy computational load and low running efficiency. For these reasons the existing methods for recognizing organs in medical images need to be improved.
[1] Gueld MO, Kohnen M, Keysers D, et al. Quality of DICOM header information for image categorization[C]. Medical Imaging. International Society for Optics and Photonics, 2002: 280-287.
[2] Dicken V, Lindow B, Bornemann L, et al. Rapid image recognition of body parts scanned in computed tomography datasets[J]. International Journal of Computer Assisted Radiology and Surgery, 2010, 5(5): 527-535.
[3] Nakamura K, Li Y, Ito W, et al. A machine learning approach for body part recognition based on CT images[C]. Medical Imaging. International Society for Optics and Photonics, 2008: 69141U-69141U-9.
[Summary of the Invention]
The technical problem to be solved by the present invention is to provide a method for recognizing organs in medical images that adapts well and recognizes accurately.
The technical solution adopted by the present invention to solve the above problem is a method for recognizing organs in a medical image, comprising the following steps:
acquiring a medical image to be processed, splitting the medical image into a number of two-dimensional images along the X-, Y- and Z-axis directions, and setting a detection window according to the size of the target organ;
traversing the two-dimensional images with the detection window at a set detection step to obtain detection results in the X-, Y- and Z-axis directions;
fusing the detection results and retaining the pixels detected as positive in all three of the X-, Y- and Z-axis directions, thereby determining the boundary of the target organ in the medical image.
Further, the method also comprises a training process that uses the AdaBoost algorithm to generate an AdaBoost cascade classifier, specifically:
a) building training data and selecting positive sample regions and negative sample regions from the training data, a positive sample region being a sample window that contains the target organ and a negative sample region being a sample window that contains no part of the target organ;
b) computing the Haar feature values of the positive and negative samples and using the AdaBoost algorithm to select effective Haar features from them, each effective Haar feature forming one weak classifier;
c) combining several of the weak classifiers into one strong classifier, and cascading several strong classifiers into the AdaBoost cascade classifier.
Further, the Haar features are computed from an integral image, the value of the integral image at a point being the sum of the pixel values of all points above and to the left of that point in the image.
Further, Haar features smaller than a set size are deleted, as are Haar features of the same size at adjacent positions.
Further, traversing the two-dimensional images with the detection window at the set detection step to obtain the detection results in the X-, Y- and Z-axis directions specifically comprises:
within the detection window, using the AdaBoost cascade classifier to detect, at the set detection step, the two-dimensional images split along the X-, Y- and Z-axis directions, and saving separately, for the three directions, the detection results that pass the AdaBoost cascade classifier;
judging whether the traversal of the two-dimensional images is complete; if not, continuing the above detection; if so, ending the detection.
Further, the detection step is the distance between three adjacent pixels.
Further, the method also comprises splitting the fused image again into two-dimensional images along the X-, Y- and Z-axis directions, counting on each two-dimensional image the number of pixels whose detection result is positive, and further determining the boundary of the target organ from the distribution of these pixels.
Further, determining the boundary of the target organ from the distribution of the pixels specifically comprises: determining the maximum and minimum boundary values of the image along the X-, Y- and Z-axis directions by Gaussian distribution fitting, the region enclosed by the maximum and minimum boundary values being the extent of the target organ.
The present invention also provides a method for segmenting organs in a medical image, comprising the following steps:
providing an image acquisition module that acquires the medical image to be processed;
providing a target organ identification module that processes the medical image to obtain the target organ boundary;
providing a target organ segmentation module that segments the target organ within the target organ boundary;
wherein the processing is:
splitting the medical image, in a three-dimensional coordinate system, into a number of two-dimensional images along at least two reference directions, and setting a detection window according to the size of the target organ;
traversing the two-dimensional images with the detection window at a set detection step to obtain detection results along the corresponding reference directions;
fusing the detection results in the three-dimensional coordinate system and retaining the pixels detected as positive on all the two-dimensional planes split along the different reference directions, thereby determining the target organ boundary.
Further, the reference directions are the X-axis, the Y-axis, the Z-axis, or any combination of two of them.
Compared with the prior art, the present invention has the following beneficial effects: the three-dimensional medical image is first split along the X-, Y- and Z-axis directions into multiple two-dimensional sagittal, coronal and transverse images, and the images detected by traversal are then fused, which not only solves the problem of insufficient training samples but also applies to the recognition of different target organs, giving strong adaptability; the image resolution is fixed according to the actual physical size of the organ, avoiding repeated up-sampling and down-sampling of the image, and when selecting Haar features the poorly representative small, highly random features and the features of the same size at adjacent positions are ignored, reducing the computational load while keeping the Haar features representative and improving recognition efficiency; only pixels whose detection result is positive in all three directions are retained, excluding false positive points, and the boundary of the target image is determined from the peak of the Gaussian fit of the pixel distribution in each direction, which effectively removes boundary noise and gives high recognition accuracy.
[Brief Description of the Drawings]
Fig. 1 is a flowchart of the method for recognizing organs in a medical image according to the present invention;
Fig. 2 is a schematic diagram of splitting a three-dimensional medical image into two-dimensional images according to the present invention;
Fig. 3 is a schematic diagram of the original rectangular features used for image detection;
Fig. 4 is an integral image of an image region;
Fig. 5 is a schematic diagram of image detection with the AdaBoost cascade classifier;
Fig. 6 is a transverse-plane detection result of the left kidney in an embodiment of the present invention;
Fig. 7a is a front view of the kidney detection result in an embodiment of the present invention;
Fig. 7b is a left view of the kidney detection result in an embodiment of the present invention;
Fig. 7c is a top view of the kidney detection result in an embodiment of the present invention;
Fig. 8 is a schematic diagram of the Gaussian fit of the transverse-plane pixel distribution of the left kidney;
Fig. 9 is a schematic diagram of the kidney boundary determined by the method of the present invention;
Fig. 10 is a flowchart of the method for segmenting organs in a medical image according to the present invention.
[Detailed Description]
To make the above objects, features and advantages of the present invention clearer and easier to understand, specific embodiments of the present invention are described in detail below with reference to the drawings and examples.
Medical images play a key role in clinical diagnosis. Medical image segmentation is the first stage of medical image data analysis and visualization and is a prerequisite and key step for many applications such as computer-aided diagnosis, three-dimensional visualization of medical images, image-guided surgery and virtual endoscopy. Accurately judging which body part a medical image shows before segmenting it plays an important role in improving segmentation accuracy. Fig. 1 is a flowchart of the method for recognizing organs in a medical image according to the present invention, which mainly comprises the following steps:
S10, acquire the medical image to be processed, split the medical image into a number of two-dimensional images along the X-, Y- and Z-axis directions, and set a detection window according to the size of the target organ. For adults the actual physical sizes of the main body organs differ little, so the present invention resamples the medical image to be processed so that its resolution is fixed at a set value: based on the actual physical size, the resolution of the medical image is unified by resampling to X resolution = Y resolution = Z resolution = 3 mm. The medical image is then split into two-dimensional images along the X-, Y- and Z-axis directions, and the size of the detection window is set according to the size of the target organ. CT images are used in the present invention. The images produced by current CT equipment are mostly three-dimensional, and applying two-dimensional Haar features directly to a three-dimensional image is relatively complicated; as shown in Fig. 2, the present invention therefore first splits the three-dimensional CT image along the X, Y and Z axes into two-dimensional images of the sagittal, coronal and transverse planes and processes those. In this embodiment the three-dimensional medical image measures 117 × 117 × 107: there are 117 two-dimensional slices when the X-axis direction is the reference, each of 117 × 107 pixels; 117 slices when the Y-axis direction is the reference, each of 107 × 117 pixels; and 107 slices when the Z-axis direction is the reference, each of 117 × 117 pixels. This operation solves the subsequent problem of insufficient training samples and the problem of organ variability across three-dimensional planes, applies to organs of different shapes, and improves adaptability. The location of an organ is determined by the two usable corner coordinates P_min(x_min, y_min, z_min) and P_max(x_max, y_max, z_max). The present invention splits the three-dimensional CT image into two-dimensional images along the three coordinate axes and trains on the Haar image features of the three directions with a machine learning method: recognizing the sagittal slices split along the X-axis direction determines x_min and x_max, recognizing the coronal slices split along the Y-axis direction determines y_min and y_max, and recognizing the transverse slices split along the Z-axis direction determines z_min and z_max, so that the three-dimensional problem is converted into two-dimensional problems.
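As an illustration of this resampling-and-splitting step, the following sketch uses NumPy and SciPy; the volume, its voxel spacing and all function names are assumptions for the example, not part of the patent.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(volume, spacing, target=3.0):
    """Resample a 3-D volume (Z, Y, X order assumed) to a fixed voxel size in mm."""
    factors = [s / target for s in spacing]   # e.g. a 1.5 mm spacing gives a factor of 0.5 for 3 mm
    return zoom(volume, factors, order=1)     # linear interpolation

def split_into_slices(volume):
    """Return the 2-D slices of the volume along the X, Y and Z axes."""
    z_slices = [volume[k, :, :] for k in range(volume.shape[0])]   # transverse planes
    y_slices = [volume[:, k, :] for k in range(volume.shape[1])]   # coronal planes
    x_slices = [volume[:, :, k] for k in range(volume.shape[2])]   # sagittal planes
    return x_slices, y_slices, z_slices

# Example: a hypothetical CT volume with 1.5 mm slices and 0.8 mm in-plane pixels.
ct = np.random.rand(160, 256, 256).astype(np.float32)
iso = resample_to_isotropic(ct, spacing=(1.5, 0.8, 0.8))
x_slices, y_slices, z_slices = split_into_slices(iso)
print(iso.shape, len(x_slices), len(y_slices), len(z_slices))
```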
According to the subsequent AdaBoost training procedure, the size of the detection window must be determined. The images produced by CT scanners of different models often differ in resolution, which would make the pixel size of the organ window vary; in the preceding step, however, the resolution of the image to be processed has already been unified, and the detection window is chosen on the principle that the whole target organ can be contained in the window. Note that different target organs require detection windows of different sizes. The present invention divides the body mainly into head, chest, abdomen and pelvis; the head has obvious features and a small area, so it is easy to identify and does not need a complex algorithm. The other body parts are located by locating key organs that are characteristic in shape and gray value (learned and trained with Haar features): in the embodiments of the present invention below, the key organ sought for the chest is the heart, for the abdomen the kidney, and for the pelvis the femoral head. In one embodiment the target organ is the kidney (abdomen), and the size of the detection window is set to 24 × 36 pixels in the X-axis direction (Y-Z plane), 36 × 24 in the Y-axis direction (Z-X plane) and 24 × 24 in the Z-axis direction (X-Y plane). In another embodiment the target is in the chest, and the size of the detection window is set to 42 × 42 in the X-, Y- and Z-axis directions. In yet another embodiment the target is in the pelvis, and the size of the detection window is set to 30 × 30 in the X-, Y- and Z-axis directions.
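The window sizes quoted in these embodiments can be collected in a small lookup table; the sketch below merely restates those numbers, and the dictionary layout and names are illustrative choices rather than anything defined by the patent.

```python
# Detection window sizes (height, width) in pixels per target organ and slicing direction,
# taken from the embodiments above; the dictionary structure is illustrative.
DETECTION_WINDOWS = {
    "kidney":       {"X": (24, 36), "Y": (36, 24), "Z": (24, 24)},   # abdomen
    "heart":        {"X": (42, 42), "Y": (42, 42), "Z": (42, 42)},   # chest
    "femoral_head": {"X": (30, 30), "Y": (30, 30), "Z": (30, 30)},   # pelvis
}

def window_for(organ, axis):
    return DETECTION_WINDOWS[organ][axis]

print(window_for("kidney", "X"))   # (24, 36)
```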
S20, use the set detection window and the set detection step to traverse the two-dimensional images split along the X-, Y- and Z-axis directions and obtain the detection results in the X-, Y- and Z-axis directions. The detection step is the stride with which the detection window moves over the image to be detected, and it may be a distance of 2 to 4 pixels. In a specific embodiment the traversal proceeds as follows: the detection window moves from the upper left of the image to be detected toward the upper right, shifting 2 pixels at a time from left to right, and at the end of a row it moves down 2 pixels; following this movement rule and the set detection window, the AdaBoost cascade classifier detects the two-dimensional images split along the X-, Y- and Z-axis directions, and the detection results that pass the AdaBoost cascade classifier are saved separately for the three directions; whether the traversal of the two-dimensional images is complete is then checked; if not, the above detection continues; if so, the traversal ends.
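A minimal sketch of this sliding-window traversal, assuming a `classify(patch)` callable stands in for the trained cascade; all names and the toy classifier are illustrative.

```python
import numpy as np

def traverse_slice(image, window, step, classify):
    """Slide a (h, w) window over a 2-D slice with the given step and
    mark every pixel of each window that the classifier accepts."""
    h, w = window
    mask = np.zeros(image.shape, dtype=np.uint8)
    for top in range(0, image.shape[0] - h + 1, step):
        for left in range(0, image.shape[1] - w + 1, step):
            patch = image[top:top + h, left:left + w]
            if classify(patch):                      # cascade says "positive"
                mask[top:top + h, left:left + w] = 1
    return mask

# Toy usage: a dummy classifier that accepts bright patches.
slice_2d = np.random.rand(117, 107)
mask = traverse_slice(slice_2d, window=(24, 36), step=2,
                      classify=lambda p: p.mean() > 0.6)
print(int(mask.sum()))
```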
The AdaBoost cascade classifier used in the above process is generated by training with the AdaBoost algorithm. The learning process of the AdaBoost algorithm is essentially this: when the classifier classifies certain samples correctly, the weights of those samples are decreased; when it misclassifies them, their weights are increased. The subsequent learning rounds therefore concentrate on the training samples that are harder to classify, and the process finally yields a series of weak features (each corresponding to a weak classifier) that effectively discriminate the whole sample image; the weighted combination of a large number of weak classifiers forms a strong classifier. Specifically:
a) Build the training data and select positive sample regions and negative sample regions from the training samples. A positive sample region is a sample window that contains the target organ; more specifically, the target organ occupies more than 50% of the area of the positive sample window. A negative sample region is a sample window that contains no part of the target organ. The training sample data are {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_l, y_l)}, where x_i is the input training sample vector, y_i is the class label with y_i ∈ {0, 1}, y_i = 0 meaning the training sample has been manually pre-labeled as a negative sample and y_i = 1 meaning it has been pre-labeled as a positive sample, and l is the total number of training samples. In addition, the number of rounds T must be set to fix the number of weak classifiers in a strong classifier, and the weights of the positive and negative samples are initialized: each positive sample receives weight 1/(2m) and each negative sample weight 1/(2n), where m is the number of positive samples and n the number of negative samples, so that the total weight of all positive and negative samples is 1. Building sufficient positive and negative samples is the most important condition for training. Since machine learning must learn not only the feature information of the target organ but also the feature information around it, the positive sample window may slide around the center point of the target organ to enlarge the positive sample set, whereas a negative sample window must not overlap the target organ. To ensure that the negative samples are comprehensive, the present invention samples negatives at a fixed detection step: the parts outside the skin and the target organ region are removed, and the remaining image is sampled at a fixed step, which guarantees a certain number of negative samples at every position of the body.
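A short sketch of the sample bookkeeping in step a), with the 1/(2m) and 1/(2n) weight initialisation described above; the array and function names are illustrative.

```python
import numpy as np

def init_sample_weights(labels):
    """labels: array of 0/1 pre-assigned sample labels.
    Positives start at 1/(2m), negatives at 1/(2n), so all weights sum to 1."""
    labels = np.asarray(labels)
    m = int((labels == 1).sum())     # number of positive sample windows
    n = int((labels == 0).sum())     # number of negative sample windows
    return np.where(labels == 1, 1.0 / (2 * m), 1.0 / (2 * n))

labels = np.array([1] * 4608 + [0] * 41563)   # kidney, X-axis-direction counts from the embodiment
weights = init_sample_weights(labels)
print(round(float(weights.sum()), 6))          # ~1.0
```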
b) Compute the Haar feature values of the positive and negative samples and use the AdaBoost algorithm to select effective Haar features from them, each effective Haar feature forming one weak classifier. There are many ways to learn the distribution pattern of an image, mainly pixel-by-pixel learning and learning based on image features. Pixel-by-pixel learning must grasp the pixel distribution pattern of every organ, so its computational load is large and it is inefficient; for the same recognition performance, learning based on image features needs less computation and runs faster. The simple image feature used in the present invention is the Haar feature; Fig. 3 shows the original rectangular features used for image detection. As can be seen from the figure, Haar features take several forms: the small black and white rectangles are adjacent to each other vertically or horizontally, and the feature value is the difference between the sum of the pixels in the white region and the sum of the pixels in the black region. In the specific embodiment of the present invention the Haar feature values are computed with an integral image, defined as follows: for a coordinate point (x, y) in the image, the integral value is the sum of the pixel values of all points in its upper-left region (above and to the left), i.e. ii(x, y) = Σ_{x'≤x, y'≤y} i(x', y'), where i(x', y') is the pixel value of the image point with coordinates (x', y') in the upper-left region. Fig. 4 shows the integral image of an image region: for the four rectangles in the figure, the integral value at point 2 is I+II and the integral value at point 3 is I+III, so the sum of all pixels in region IV equals the integral value at point 4 plus the integral value at point 1 minus the integral values at points 2 and 3.
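A sketch of the integral image and of the four-corner rectangle sum it enables, plus one two-rectangle Haar feature value; the function names and the choice of which half is treated as white are illustrative.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y, :x]; the zero row/column avoids border special cases."""
    return np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))

def rect_sum(ii, top, left, height, width):
    """Sum of the pixels inside the rectangle via the four-corner rule (region IV in Fig. 4)."""
    bottom, right = top + height, left + width
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

def haar_left_right(ii, top, left, height, width):
    """Two-rectangle feature: sum of the left (white) half minus sum of the right (black) half."""
    half = width // 2
    return rect_sum(ii, top, left, height, half) - rect_sum(ii, top, left + half, height, half)

img = np.arange(25, dtype=np.float64).reshape(5, 5)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 2) == img[1:4, 1:3].sum()
print(haar_left_right(ii, 0, 0, 4, 4))
```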
The present invention mainly uses seven kinds of Haar features: left-right, up-down, left-middle-right, up-middle-down, left 45° diagonal, right 45° diagonal and center features. For a typical 30 × 30 image window, scaling these seven kinds of Haar features proportionally yields a total of 394,395 Haar features for the detection window, far too many for the later testing process. Without weakening the representativeness of the Haar features, the present invention on the one hand ignores small Haar features, such as sample windows of 1 × 1 or 2 × 2, because small features are too random, poorly representative and very numerous; on the other hand, since the whole sample window is slid during detection and many Haar features would be computed repeatedly, it ignores Haar features of the same size at adjacent positions. In this embodiment the minimum Haar feature size chosen for the kidney is 3 × 3; the number of Haar feature values is 6,591 in the X-axis direction (Y-Z plane) with 4,608 positive samples and 41,563 negative samples, 6,591 in the Y-axis direction (Z-X plane) with 3,204 positive and 63,144 negative samples, and 8,493 in the Z-axis direction (X-Y plane) with 3,012 positive and 50,211 negative samples. When the body part to detect is the chest, the target organ is the heart, the minimum Haar feature size is 4 × 4, and the number of Haar feature values is 7,132 in the X-axis direction (Y-Z plane) with 5,636 positive and 43,780 negative samples, 7,132 in the Y-axis direction (Z-X plane) with 5,320 positive and 41,971 negative samples, and 7,132 in the Z-axis direction (X-Y plane) with 4,761 positive and 56,752 negative samples. After removing the poorly representative small Haar features and ignoring adjacent features of the same size, the number of Haar feature values is about two orders of magnitude smaller than that obtained by existing methods, which greatly increases the computation speed while keeping the sample window representative.
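The pruning idea can be illustrated for a single feature type as below: features below the minimum size are skipped, and same-size features at adjacent positions are thinned with a position stride (the stride of 2 used here is an assumed value, not specified by the patent).

```python
def enumerate_left_right_features(win_h, win_w, min_size=3, pos_stride=2):
    """Enumerate (top, left, height, width) for the left-right Haar feature inside a
    win_h x win_w detection window, skipping features smaller than min_size and
    thinning same-size features at adjacent positions with pos_stride."""
    feats = []
    for height in range(min_size, win_h + 1):
        for width in range(2 * min_size, win_w + 1, 2):      # width splits into two equal halves
            for top in range(0, win_h - height + 1, pos_stride):
                for left in range(0, win_w - width + 1, pos_stride):
                    feats.append((top, left, height, width))
    return feats

feats = enumerate_left_right_features(24, 36)
print(len(feats))   # far fewer than enumerating every size at every position
```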
The definition of a single weak classifier h is as follows: for an image x with k Haar features in total, choose one of the Haar features t, compute the value f_t(x) of image x for this feature, and choose a threshold θ_t associated with the feature t; the weak classifier is then computed as
h_t(x) = 1 if p_t·f_t(x) > p_t·θ_t, and h_t(x) = 0 otherwise,
where 1 means the sample is detected as a positive sample and 0 means it is detected as a negative sample, p_t is a polarity sign (when p_t = +1 a sample is positive when its feature value is greater than the threshold, and when p_t = -1 a sample is positive when its feature value is less than the threshold), and x denotes the detection window. Using the positive and negative samples pre-divided in step a) and the weights of all m+n samples, the feature values of all samples for a Haar feature t are computed and sorted in ascending order, written f(t_1), f(t_2), ..., f(t_{m+n}), and a threshold θ_t is chosen among them as the dividing line of the classification. The choice of θ_t depends on the misclassification rate e_t, computed as
e_t = min(S_t^+ + (T_t^- - S_t^-), S_t^- + (T_t^+ - S_t^+))
where min is the minimum function, T_t^+ is the sum of the weights of all positive samples, T_t^- is the sum of the weights of all negative samples, S_t^+ is the sum of the weights of the positive samples whose feature values are less than θ_t, and S_t^- is the sum of the weights of the negative samples whose feature values are less than θ_t. In the initial state all samples are given equal weight values. Note that as learning proceeds, data judged to be negative samples are continually discarded and the number of samples judged positive keeps changing, so the weights of all positive and negative samples must be normalized, w_{T,i} ← w_{T,i} / Σ_j w_{T,j}, so that they again sum to 1. All features are then traversed, and the weak classifier with the smallest misclassification rate is added to the strong classifier; denoting the minimal misclassification rate ε_t and setting β_t = ε_t / (1 - ε_t), the weights of the next round of training samples are updated as w_{T+1,i} = w_{T,i}·β_t^(1-e_i), where e_i = 0 if sample i is classified correctly and e_i = 1 otherwise. Following this AdaBoost iteration, T rounds select T Haar features; in the present invention T = 50. Table 1 lists the number of effective Haar features used at each level of the X-axis-direction training result in the embodiment of the present invention.
Table 1  Number of effective Haar features used at each level of the X-axis-direction training result
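A compact sketch of one boosting round as reconstructed above: the weights are normalised, a decision-stump threshold with the smallest weighted error is found per feature, the best stump is kept, and the weights are updated with β = ε/(1 - ε). Feature values are assumed precomputed in a samples × features matrix; all names and the toy data are illustrative.

```python
import numpy as np

def best_stump(f, y, w):
    """Best threshold and polarity for one feature column using the S/T weight sums above."""
    order = np.argsort(f)
    f, y, w = f[order], y[order], w[order]
    t_pos, t_neg = w[y == 1].sum(), w[y == 0].sum()
    s_pos = np.cumsum(w * (y == 1))            # positive weight at or below each candidate threshold
    s_neg = np.cumsum(w * (y == 0))            # negative weight at or below each candidate threshold
    err_gt = s_pos + (t_neg - s_neg)           # rule "value > theta is positive"
    err_lt = s_neg + (t_pos - s_pos)           # rule "value < theta is positive"
    i_gt, i_lt = int(err_gt.argmin()), int(err_lt.argmin())
    if err_gt[i_gt] <= err_lt[i_lt]:
        return f[i_gt], +1, err_gt[i_gt]       # polarity +1: positive when value > threshold
    return f[i_lt], -1, err_lt[i_lt]           # polarity -1: positive when value < threshold

def adaboost_round(F, y, w):
    """One boosting round: pick the best (feature, threshold, polarity) stump and update the weights."""
    w = w / w.sum()                                            # normalise so the weights sum to 1
    stumps = [best_stump(F[:, j], y, w) for j in range(F.shape[1])]
    j = int(np.argmin([s[2] for s in stumps]))
    theta, polarity, eps = stumps[j]
    pred = (polarity * F[:, j] > polarity * theta).astype(int)
    beta = eps / (1.0 - eps + 1e-12)
    w_next = w * beta ** (pred == y)                           # correctly classified samples shrink (e_i = 0)
    alpha = np.log(1.0 / (beta + 1e-12))
    return (j, theta, polarity, alpha), w_next

# Toy run: 6 samples, 3 feature columns, no column perfectly separable.
F = np.array([[0.90, 0.10, 0.4], [0.80, 0.35, 0.5], [0.15, 0.20, 0.6],
              [0.20, 0.90, 0.4], [0.10, 0.30, 0.5], [0.30, 0.70, 0.6]])
y = np.array([1, 1, 1, 0, 0, 0])
stump, w = adaboost_round(F, y, np.full(6, 1.0 / 6))
print(stump)
```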
c) Several of the weak classifiers form one strong classifier, and several strong classifiers in cascade form the AdaBoost cascade classifier. After the T rounds of training the weak classifiers are combined into a strong classifier whose expression is C(x) = 1 if Σ_{t=1}^{T} α_t·h_t(x) ≥ (1/2)·Σ_{t=1}^{T} α_t, and C(x) = 0 otherwise, where α_t = log(1/β_t). Several such strong classifiers in cascade form the AdaBoost cascade classifier.
Fig. 5 shows the cascade classifier of the embodiment of the present invention. Every level of strong classifier is constructed with the AdaBoost algorithm, and the strong classifiers of the successive levels go from simple to complex. When a new image is input, it is considered a positive sample only if it passes the tests of all the strong classifiers; if it fails the test of any group of strong classifiers during the process, it is immediately judged a negative sample and is not processed further. In this specific embodiment, for the abdominal kidney the number of effective Haar features obtained by training is 310 in the X-axis direction (Y-Z plane), 193 in the Y-axis direction (Z-X plane) and 146 in the Z-axis direction (X-Y plane); for the chest heart it is 146 in the X-axis direction (Y-Z plane), 133 in the Y-axis direction (Z-X plane) and 58 in the Z-axis direction (X-Y plane). Each effective Haar feature forms one weak classifier, and multiple weak classifiers compose a strong classifier. As shown in Fig. 5, the AdaBoost cascade classifier detects, with the set detection window and detection step, the two-dimensional images split along the X-, Y- and Z-axis directions, and the detection results that pass the cascade classifier are saved separately for the three directions; whether the traversal of the two-dimensional images is complete is then checked; if not, the detection continues; if so, the detection ends. Note that in the present invention the detection window is sampled every 3 pixels rather than point by point, to increase the test speed. For each two-dimensional test image, the cascade classifier obtained by AdaBoost training is used for testing, and the position in the original three-dimensional image of every group that passes the test is recorded. If a sample passes all the strong classifiers in the cascade, the value of every pixel in that window is recorded as 1, i.e. positive (a positive sample); if the sample fails any strong classifier in the cascade, it is marked 0, i.e. negative (a negative sample). Fig. 6 shows the transverse-plane detection result of the left kidney obtained in one embodiment of the present invention, in which the inner shaded boxes are the regions detected as kidney. The three directions are processed separately, and the intermediate result of each direction is saved.
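A sketch of the cascade's early-rejection behaviour, with each stage represented as a list of (feature index, threshold, polarity, alpha) stumps as selected above; the stage structure and the toy numbers are illustrative.

```python
import numpy as np

def strong_classifier(feature_values, stage):
    """One strong classifier: weighted vote of its weak classifiers against half the alpha sum."""
    votes = sum(alpha * (polarity * feature_values[j] > polarity * theta)
                for j, theta, polarity, alpha in stage)
    return votes >= 0.5 * sum(alpha for _, _, _, alpha in stage)

def cascade_detect(feature_values, cascade):
    """A window is positive only if it passes every stage; the first failed stage rejects it."""
    return all(strong_classifier(feature_values, stage) for stage in cascade)

# Toy usage: two stages over three hypothetical feature values.
cascade = [
    [(0, 0.2, +1, 1.0), (1, 0.5, -1, 0.7)],    # simple first stage
    [(2, 0.1, +1, 1.2), (0, 0.4, +1, 0.9)],    # more selective second stage
]
print(cascade_detect(np.array([0.3, 0.2, 0.25]), cascade))
```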
S30, fuse the detection results and retain the pixels that are positive in all three of the X-, Y- and Z-axis directions, thereby determining the target organ boundary. Because the preceding process traverses the sagittal, coronal and transverse planes corresponding to the X, Y and Z axes of the three-dimensional image, a pixel in the traversed three-dimensional volume may be marked positive 0 to 3 times. A positive test mark in a single direction is unreliable and prone to false positive judgements, so to strengthen the robustness of the final result the present invention fuses the test results of the three directions and retains the pixels that test positive in all three. Figs. 7a-7c show the front, left and top views of the detection result for the target organ kidney: the main detected region contains the kidney, but some pixels outside the kidney region are still included; these pixels test positive in the sagittal, coronal and transverse directions and are false positive points.
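A sketch of this fusion step, assuming three binary volumes of identical shape holding the per-direction detection marks; names and the toy data are illustrative.

```python
import numpy as np

def fuse_detections(mask_x, mask_y, mask_z):
    """Keep only voxels marked positive in all three slicing directions (removes most false positives)."""
    return (mask_x.astype(bool) & mask_y.astype(bool) & mask_z.astype(bool)).astype(np.uint8)

# Toy volumes with a common positive block plus direction-specific false positives.
mx = np.zeros((107, 117, 117), np.uint8); mx[40:60, 30:55, 20:45] = 1; mx[5, 5, 5] = 1
my = np.zeros_like(mx);                   my[40:60, 30:55, 20:45] = 1; my[90, 90, 90] = 1
mz = np.zeros_like(mx);                   mz[40:60, 30:55, 20:45] = 1
fused = fuse_detections(mx, my, mz)
print(int(fused.sum()))   # only the common block survives
```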
In the present invention the pixels of the fused image are split again along the X-, Y- and Z-axis directions and examined: the number of pixels that test positive is counted, slice by slice, in the direction perpendicular to each splitting direction, the counts are fitted with a Gaussian distribution, the boundary of the target organ is determined further from the pixel distribution, and boundary noise points are removed. Fig. 8 shows the Gaussian fit of the pixel distribution of the left kidney in the transverse plane: the abscissa is the distance along the body Z-axis from bottom to top, and the ordinate is the number of pixels testing positive in each slice along the longitudinal direction of the body. The distribution has one obvious peak, but scattered false positive pixels are also distributed around it. Taking the left kidney of this embodiment as an example, selecting the upper and lower boundaries of the reliable peak determines Pz_min and Pz_max of the left kidney. Fitting the counts with a Gaussian, G(x) = A·exp(-(x-μ)²/(2σ²)), where x denotes the coordinate position and G(x) the number of positive pixels, yields the expectation μ and the standard deviation σ. According to the property of the Gaussian distribution that P(μ-σ < X ≤ μ+σ) = 68.3%, different Gaussian intervals contain different proportions of the samples. To determine the bounds of the kidney the present invention uses the fitting interval (μ-σ, μ+σ); with μ = 99.92 and σ = 9.20, the lower boundary of the kidney in the Z-axis direction is Pz_min = μ-σ and the upper boundary is Pz_max = μ+σ. By the same operation, Px_min, Px_max, Py_min and Py_max are determined from the fitting results of the other two directions, which determines P_min(x_min, y_min, z_min) and P_max(x_max, y_max, z_max) of the whole organ and hence the cuboid boundary of the whole kidney, as shown in Fig. 9.
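A sketch of the boundary step under the reconstruction above: the per-slice positive-pixel counts are fitted with a Gaussian and the (μ - σ, μ + σ) interval is taken as the organ extent along that axis; the SciPy-based fit and the synthetic counts are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def bounds_from_counts(counts):
    """Fit a Gaussian to per-slice positive-pixel counts and return the
    (mu - sigma, mu + sigma) interval used as the organ extent along that axis."""
    x = np.arange(counts.size, dtype=float)
    p0 = (counts.max(), float(np.argmax(counts)), 5.0)     # rough initial guess
    (a, mu, sigma), _ = curve_fit(gaussian, x, counts, p0=p0)
    return mu - abs(sigma), mu + abs(sigma)

# Toy data shaped like Fig. 8: one clear peak plus a few scattered false positives.
rng = np.random.default_rng(0)
x = np.arange(107)
counts = gaussian(x, 400.0, 80.0, 9.2) + rng.poisson(1.5, size=x.size)
z_min, z_max = bounds_from_counts(counts)
print(round(z_min, 2), round(z_max, 2))   # roughly mu - sigma and mu + sigma
```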
In one embodiment the heart is chosen as the target organ. Data from the chest (103 sample groups) and from multiple non-chest positions (head and abdomen, 211 groups in total) are tested, with the manual judgement as the gold standard; the test results are shown in Table 2. A true positive means that the actual sample pixels and the test result are both positive; a true negative means that both are negative; a false negative means that the actual pixels are positive but the test result is negative; and a false positive means that the actual pixels are negative but the test result is positive. For positions that contain the target organ the most important measure is the recognition sensitivity, and for positions that do not contain the target organ the most important measure is the specificity, where sensitivity = true positives / (true positives + false negatives) and specificity = true negatives / (true negatives + false positives). For the chest data the sensitivity for the heart is as high as 97.9%, and the specificity for organs at non-chest positions is 100%.
Table 2  Test results with the heart as the target organ
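A small helper for the two measures defined above; the confusion counts in the example are hypothetical and are not the figures of Table 2.

```python
def sensitivity(tp, fn):
    """True positive rate: tp / (tp + fn)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: tn / (tn + fp)."""
    return tn / (tn + fp)

# Hypothetical confusion counts, for illustration only.
print(round(sensitivity(tp=90, fn=10), 3), round(specificity(tn=200, fp=5), 3))
```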
In another embodiment the left kidney is chosen as the target organ. Data from the pelvis (71 sample groups) and from multiple non-pelvic positions (head, chest and abdomen, 156 groups in total) are tested; the results are shown in Table 3. The sensitivity for the kidney in the pelvic data is 85.9%, and the specificity for organs at non-pelvic positions is 98.8%.
Table 3  Test results with the left kidney as the target organ
In addition, in yet another embodiment the femoral head of the pelvis is selected as the target organ. Considering that the head contains structures partly similar in shape to the femoral head, 55 groups of data are selected in this embodiment, of which 21 groups of head data serve as negative samples and the other 34 groups are abdominal and pelvic data; the target organ window size chosen in the X-, Y- and Z-axis directions is 30 × 30, the minimum Haar feature size is 5 × 5, and the number of Haar features finally determined is 7,716. Note that the left and right femoral heads are somewhat similar, so tests on the right femoral head often also identify the left femoral head; the left femoral head is therefore mirrored to the right side about the body axis and added to the training data of the right femoral head, which both enlarges the training sample set and improves the recognition rate for the femoral head. In the method for recognizing organs in a medical image according to the present invention, the computations of the Haar feature values do not depend on one another, so parallel computation can increase the processing speed; the target organ is finally identified quickly by the Gaussian fitting method, the recognition accuracy is high, and one test sample usually takes between 1 and 5 minutes. The method also adapts well: it is applicable to organ differences between individuals and obtains good recognition results even for samples not learned during training. Note that the present invention can be applied not only to the recognition of target organs in CT images but also to the image processing of devices such as magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT) and positron emission tomography (PET).
On the basis of the above method for recognizing organs in a medical image, the present invention also provides a method for segmenting organs in a medical image. The system used comprises an image acquisition module, a target organ identification module and a target organ segmentation module, and the specific steps are shown in Fig. 10:
The image acquisition module acquires the medical image to be processed; the medical image may be acquired by devices such as MRI, CT, SPECT or PET and may contain several body parts such as the head, chest, abdomen and pelvis.
The target organ identification module traverses the medical image to obtain detection results and obtains the target organ boundary from them. Specifically, the selected effective Haar features are used to detect the medical image and obtain the boundary of the target organ: the medical image is split into a number of two-dimensional images along at least two reference directions, and the detection window is set according to the size of the target organ; with a machine learning method based on two-dimensional images, effective Haar features are selected and composed into an AdaBoost cascade classifier, the detection window traverses the two-dimensional images at the set detection step, and the detection results along the corresponding reference directions are obtained; the detection results are fused in the three-dimensional coordinate system, and the pixels that test positive on all the two-dimensional planes split along the different reference directions are retained, which determines that the medical image contains the target organ and gives the target organ boundary, so that the region containing the target organ can be addressed specifically and the region on which the subsequent organ segmentation operates is reduced. Note that a reference direction may be the X-axis, the Y-axis, the Z-axis, or any combination of two of them, i.e. any direction in the X-Y, Y-Z or Z-X plane. In the specific embodiment of the present invention the reference directions are chosen as the three splitting directions of the X-, Y- and Z-axes, giving two-dimensional images in three directions.
Through the above target organ identification process, the region containing the target organ is obtained, and the target organ segmentation module then segments the target organ within the target organ boundary obtained by the target organ identification module. The target organ may be segmented by, for example, clustering segmentation of the texture features of the image with a Hopfield network, image segmentation based on the Bayesian method, fuzzy connectedness segmentation based on image atlas registration, or automatic organ segmentation with 3D region growing. The method of the present invention calls the segmentation algorithm specifically on the region containing the target organ, avoiding the drawback of traditional image segmentation methods of blindly calling the segmentation algorithms of all organs when the corresponding body part is not known, which saves processing time and improves processing efficiency.
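A sketch of how the three modules chain together, with the identifier returning a bounding box and the segmenter running only inside it; every function here is an illustrative stand-in rather than an API defined by the patent.

```python
import numpy as np

def identify_target_organ(volume):
    """Stand-in for the identification module: returns (p_min, p_max) corner indices.
    In the real method this would run the cascade detection, fusion and Gaussian fitting above."""
    return (30, 20, 25), (60, 55, 50)

def segment_within_box(volume, p_min, p_max):
    """Stand-in for the segmentation module: segment only inside the bounding box."""
    z0, y0, x0 = p_min
    z1, y1, x1 = p_max
    roi = volume[z0:z1, y0:y1, x0:x1]
    return roi > roi.mean()                     # placeholder for the actual segmentation algorithm

volume = np.random.rand(107, 117, 117).astype(np.float32)   # image acquisition module output
p_min, p_max = identify_target_organ(volume)                 # target organ identification module
organ_mask = segment_within_box(volume, p_min, p_max)        # target organ segmentation module
print(organ_mask.shape)
```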
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A method for recognizing organs in a medical image, characterized by comprising the following steps:
acquiring a medical image to be processed, splitting the medical image into a number of two-dimensional images along the X-, Y- and Z-axis directions, and setting a detection window according to the size of a target organ;
traversing the two-dimensional images with the detection window at a set detection step to obtain detection results in the X-, Y- and Z-axis directions;
fusing the detection results and retaining the pixels detected as positive in all three of the X-, Y- and Z-axis directions, thereby determining the boundary of the target organ in the medical image.
2. The method for recognizing organs in a medical image according to claim 1, characterized by further comprising a training process that uses the AdaBoost algorithm to generate an AdaBoost cascade classifier, specifically:
a) building training data and selecting positive sample regions and negative sample regions from the training data, a positive sample region being a sample window that contains the target organ and a negative sample region being a sample window that contains no part of the target organ;
b) computing the Haar feature values of the positive and negative samples and using the AdaBoost algorithm to select effective Haar features from them, each effective Haar feature forming one weak classifier;
c) combining several of the weak classifiers into one strong classifier, and cascading several strong classifiers into the AdaBoost cascade classifier.
3. The method for recognizing organs in a medical image according to claim 2, characterized in that the Haar features are computed from an integral image, the value of the integral image at a point being the sum of the pixel values of all points above and to the left of that point in the image.
4. The method for recognizing organs in a medical image according to claim 3, characterized in that Haar features smaller than a set size are deleted, as are Haar features of the same size at adjacent positions.
5. The method for recognizing organs in a medical image according to claim 3, characterized in that traversing the two-dimensional images with the detection window at the set detection step to obtain the detection results in the X-, Y- and Z-axis directions specifically comprises:
within the detection window, using the AdaBoost cascade classifier to detect, at the set detection step, the two-dimensional images split along the X-, Y- and Z-axis directions, and saving separately, for the three directions, the detection results that pass the AdaBoost cascade classifier;
judging whether the traversal of the two-dimensional images is complete; if not, continuing the above detection; if so, ending the detection.
6. The method for recognizing organs in a medical image according to claim 5, characterized in that the detection step is the distance between three adjacent pixels.
7. The method for recognizing organs in a medical image according to claim 1, characterized by further comprising splitting the fused image again into two-dimensional images along the X-, Y- and Z-axis directions, counting on each two-dimensional image the number of pixels whose detection result is positive, and further determining the boundary of the target organ from the distribution of these pixels.
8. The method for recognizing organs in a medical image according to claim 7, characterized in that determining the boundary of the target organ from the distribution of the pixels specifically comprises: determining the maximum and minimum boundary values of the image along the X-, Y- and Z-axis directions by Gaussian distribution fitting, the region enclosed by the maximum and minimum boundary values being the extent of the target organ.
9. A method for segmenting organs in a medical image, characterized by comprising the following steps:
providing an image acquisition module that acquires a medical image to be processed;
providing a target organ identification module that processes the medical image to obtain a target organ boundary;
providing a target organ segmentation module that segments the target organ within the target organ boundary;
wherein the processing is:
splitting the medical image, in a three-dimensional coordinate system, into a number of two-dimensional images along at least two reference directions, and setting a detection window according to the size of the target organ;
traversing the two-dimensional images with the detection window at a set detection step to obtain detection results along the corresponding reference directions;
fusing the detection results in the three-dimensional coordinate system and retaining the pixels detected as positive on all the two-dimensional planes split along the different reference directions, thereby determining the target organ boundary.
10. The method for segmenting organs in a medical image according to claim 9, characterized in that the reference directions are the X-axis, the Y-axis, the Z-axis, or any combination of two of them.
CN201510729150.0A 2015-10-30 Medical image organ recognition method and segmentation method Active CN105389813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510729150.0A CN105389813B (en) 2015-10-30 Medical image organ recognition method and segmentation method


Publications (2)

Publication Number Publication Date
CN105389813A 2016-03-09
CN105389813B (en) 2018-08-31


Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424152A (en) * 2017-08-11 2017-12-01 联想(北京)有限公司 The detection method and electronic equipment of organ lesion and the method and electronic equipment for training neuroid
CN107563998A (en) * 2017-08-30 2018-01-09 上海联影医疗科技有限公司 Medical image cardiac image processing method
CN108055454A (en) * 2017-12-08 2018-05-18 合肥工业大学 The architectural framework and image processing method of medical endoscope artificial intelligence chip
CN108701370A (en) * 2016-03-10 2018-10-23 西门子保健有限责任公司 The medical imaging based on content based on machine learning renders
CN108764355A (en) * 2018-05-31 2018-11-06 清华大学 Image processing apparatus and method based on textural characteristics classification
CN109035261A (en) * 2018-08-09 2018-12-18 北京市商汤科技开发有限公司 Medical imaging processing method and processing device, electronic equipment and storage medium
CN109087357A (en) * 2018-07-26 2018-12-25 上海联影智能医疗科技有限公司 Scan orientation method, apparatus, computer equipment and computer readable storage medium
CN109300088A (en) * 2018-09-17 2019-02-01 青岛海信医疗设备股份有限公司 A kind of method and apparatus of determining organ and tumor contact area
CN109345629A (en) * 2018-08-08 2019-02-15 安徽慧软科技有限公司 A kind of 3 d medical images are fuzzy to highlight display methods
CN109658419A (en) * 2018-11-15 2019-04-19 浙江大学 The dividing method of organella in a kind of medical image
WO2019085985A1 (en) * 2017-11-02 2019-05-09 Shenzhen United Imaging Healthcare Co., Ltd. Systems and methods for generating semantic information for scanning image
CN109934796A (en) * 2018-12-26 2019-06-25 苏州雷泰医疗科技有限公司 A kind of automatic delineation method of organ based on Deep integrating study
CN109978037A (en) * 2019-03-18 2019-07-05 腾讯科技(深圳)有限公司 Image processing method, model training method, device and storage medium
CN110235172A (en) * 2018-06-07 2019-09-13 深圳迈瑞生物医疗电子股份有限公司 Image analysis method and ultrasonic image equipment based on ultrasonic image equipment
CN110334736A (en) * 2019-06-03 2019-10-15 北京大米科技有限公司 Image-recognizing method, device, electronic equipment and medium
CN110348318A (en) * 2019-06-18 2019-10-18 北京大米科技有限公司 Image-recognizing method, device, electronic equipment and medium
CN110399913A (en) * 2019-07-12 2019-11-01 杭州依图医疗技术有限公司 The classification method and device at position are shot in a kind of medical image
CN110533637A (en) * 2019-08-02 2019-12-03 杭州依图医疗技术有限公司 A kind of method and device of test object
CN110533638A (en) * 2019-08-02 2019-12-03 杭州依图医疗技术有限公司 A kind of method and device of measurement object size
CN110689947A (en) * 2018-07-04 2020-01-14 天津天堰科技股份有限公司 Display device and display method
CN110867233A (en) * 2019-11-19 2020-03-06 西安邮电大学 System and method for generating electronic laryngoscope medical test reports
CN112102333A (en) * 2020-09-02 2020-12-18 合肥工业大学 Ultrasonic region segmentation method and system for B-ultrasonic DICOM (digital imaging and communications in medicine) image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101526997A (en) * 2009-04-22 2009-09-09 Wuxi Mingying Technology Development Co., Ltd. Embedded infrared face image recognition method and device
CN103295256A (en) * 2012-01-24 2013-09-11 Toshiba Corporation Medical image processing apparatus and medical image processing program
CN104798107A (en) * 2012-11-23 2015-07-22 Koninklijke Philips N.V. Generating a key-image from a medical image
CN103914697A (en) * 2012-12-29 2014-07-09 Shanghai United Imaging Healthcare Co., Ltd. Method for extracting a region of interest from a three-dimensional breast image
CN104751434A (en) * 2013-12-25 2015-07-01 Beijing Samsung Telecommunications Technology Research Co., Ltd. Method and apparatus for segmenting an object from an image
CN104637056A (en) * 2015-02-02 2015-05-20 Fudan University Method for segmenting adrenal tumors in medical CT (computed tomography) images based on sparse representation

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108701370A (en) * 2016-03-10 2018-10-23 Siemens Healthcare GmbH Content-based medical image rendering based on machine learning
CN108701370B (en) * 2016-03-10 2020-01-21 Siemens Healthcare GmbH Content-based medical image rendering based on machine learning
US10339695B2 2016-03-10 2019-07-02 Siemens Healthcare GmbH Content-based medical image rendering based on machine learning
CN107424152A (en) * 2017-08-11 2017-12-01 Lenovo (Beijing) Co., Ltd. Organ lesion detection method and electronic device, and neural network training method and electronic device
CN107563998B (en) * 2017-08-30 2020-02-11 Shanghai United Imaging Healthcare Co., Ltd. Method for processing cardiac images in medical images
CN107563998A (en) * 2017-08-30 2018-01-09 Shanghai United Imaging Healthcare Co., Ltd. Method for processing cardiac images in medical images
WO2019085985A1 (en) * 2017-11-02 2019-05-09 Shenzhen United Imaging Healthcare Co., Ltd. Systems and methods for generating semantic information for scanning image
US11348247B2 2017-11-02 2022-05-31 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for generating semantic information for scanning image
CN108055454A (en) * 2017-12-08 2018-05-18 Hefei University of Technology System architecture and image processing method of a medical endoscope artificial intelligence chip
CN108055454B (en) * 2017-12-08 2020-07-28 Hefei University of Technology System architecture and image processing method of a medical endoscope artificial intelligence chip
CN108764355A (en) * 2018-05-31 2018-11-06 Tsinghua University Image processing apparatus and method based on texture feature classification
CN113222966A (en) * 2018-06-07 2021-08-06 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Image analysis method based on ultrasound imaging device, and ultrasound imaging device
CN110235172B (en) * 2018-06-07 2021-07-20 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Image analysis method based on ultrasound imaging device, and ultrasound imaging device
CN110235172A (en) * 2018-06-07 2019-09-13 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Image analysis method based on ultrasound imaging device, and ultrasound imaging device
CN113222966B (en) * 2018-06-07 2023-01-10 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Image analysis method based on ultrasound imaging device, and ultrasound imaging device
CN110689947A (en) * 2018-07-04 2020-01-14 Tianjin Tianyan Technology Co., Ltd. Display device and display method
CN109087357A (en) * 2018-07-26 2018-12-25 Shanghai United Imaging Intelligence Co., Ltd. Scan positioning method and apparatus, computer device, and computer-readable storage medium
CN109345629A (en) * 2018-08-08 2019-02-15 Anhui Huiruan Technology Co., Ltd. Fuzzy highlight display method for three-dimensional medical images
CN109035261A (en) * 2018-08-09 2018-12-18 Beijing SenseTime Technology Development Co., Ltd. Medical image processing method and apparatus, electronic device, and storage medium
CN109300088B (en) * 2018-09-17 2022-12-20 Qingdao Hisense Medical Equipment Co., Ltd. Method and apparatus for determining the contact area between an organ and a tumor
CN109300088A (en) * 2018-09-17 2019-02-01 Qingdao Hisense Medical Equipment Co., Ltd. Method and apparatus for determining the contact area between an organ and a tumor
CN109658419A (en) * 2018-11-15 2019-04-19 Zhejiang University Method for segmenting small organs in medical images
CN109934796A (en) * 2018-12-26 2019-06-25 Suzhou Leitai Medical Technology Co., Ltd. Automatic organ delineation method based on deep ensemble learning
CN109978037A (en) * 2019-03-18 2019-07-05 Tencent Technology (Shenzhen) Co., Ltd. Image processing method, model training method, apparatus, and storage medium
CN110334736A (en) * 2019-06-03 2019-10-15 Beijing Dami Technology Co., Ltd. Image recognition method and apparatus, electronic device, and medium
CN110348318A (en) * 2019-06-18 2019-10-18 Beijing Dami Technology Co., Ltd. Image recognition method and apparatus, electronic device, and medium
CN110399913A (en) * 2019-07-12 2019-11-01 Hangzhou Yitu Healthcare Technology Co., Ltd. Method and apparatus for classifying the imaged body part in a medical image
CN110533637A (en) * 2019-08-02 2019-12-03 Hangzhou Yitu Healthcare Technology Co., Ltd. Method and apparatus for detecting an object
CN110533637B (en) * 2019-08-02 2022-02-11 Hangzhou Yitu Healthcare Technology Co., Ltd. Method and apparatus for detecting an object
CN110533638A (en) * 2019-08-02 2019-12-03 Hangzhou Yitu Healthcare Technology Co., Ltd. Method and apparatus for measuring object size
CN110867233A (en) * 2019-11-19 2020-03-06 Xi'an University of Posts and Telecommunications System and method for generating electronic laryngoscope medical test reports
CN112102333A (en) * 2020-09-02 2020-12-18 Hefei University of Technology Ultrasound region segmentation method and system for B-mode ultrasound DICOM (Digital Imaging and Communications in Medicine) images
CN112102333B (en) * 2020-09-02 2022-11-04 Hefei University of Technology Ultrasound region segmentation method and system for B-mode ultrasound DICOM (Digital Imaging and Communications in Medicine) images

Similar Documents

Publication Publication Date Title
Yang et al. Research on feature extraction of tumor image based on convolutional neural network
Kermi et al. Fully automated brain tumour segmentation system in 3D‐MRI using symmetry analysis of brain and level sets
CN105957066B (en) CT image liver segmentation method and system based on automatic context model
CN106296653B (en) Brain CT image hemorrhagic areas dividing method and system based on semi-supervised learning
US7876938B2 (en) System and method for whole body landmark detection, segmentation and change quantification in digital images
CN108010021A (en) A kind of magic magiscan and method
Qadri et al. OP-convNet: a patch classification-based framework for CT vertebrae segmentation
Zhang et al. Review of breast cancer pathologigcal image processing
CN108038513A (en) A kind of tagsort method of liver ultrasonic
Wu et al. AAR-RT–a system for auto-contouring organs at risk on CT images for radiation therapy planning: principles, design, and large-scale evaluation on head-and-neck and thoracic cancer cases
CN104616289A (en) Removal method and system for bone tissue in 3D CT (Three Dimensional Computed Tomography) image
Javaid et al. Multi-organ segmentation of chest CT images in radiation oncology: comparison of standard and dilated UNet
CN106846330A (en) Human liver&#39;s feature modeling and vascular pattern space normalizing method
CN103955912A (en) Adaptive-window stomach CT image lymph node tracking detection system and method
Merkow et al. Structural edge detection for cardiovascular modeling
Ramasamy et al. Machine learning in cyber physical systems for healthcare: brain tumor classification from MRI using transfer learning framework
Liu et al. Automated classification and measurement of fetal ultrasound images with attention feature pyramid network
Ramana Alzheimer disease detection and classification on magnetic resonance imaging (MRI) brain images using improved expectation maximization (IEM) and convolutional neural network (CNN)
Banerjee et al. A CADe system for gliomas in brain MRI using convolutional neural networks
Umadevi et al. Enhanced Segmentation Method for bone structure and diaphysis extraction from x-ray images
WO2021183765A1 (en) Automated detection of tumors based on image processing
Chanchlani et al. Tumor detection in brain MRI using clustering and segmentation algorithm
Sharma et al. Importance of deep learning models to perform segmentation on medical imaging modalities
CN111986216B (en) RSG liver CT image interactive segmentation algorithm based on neural network improvement
Yan et al. Segmentation of pulmonary parenchyma from pulmonary CT based on ResU-Net++ model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 201807 No. 2258 Chengbei Road, Jiading Industrial Zone, Jiading District, Shanghai.

Patentee after: Shanghai Lianying Medical Technology Co., Ltd.

Address before: 201807 No. 2258 Chengbei Road, Jiading Industrial Zone, Jiading District, Shanghai.

Patentee before: Shanghai United Imaging Healthcare Co., Ltd.

CP01 Change in the name or title of a patent holder