US20070047790A1 - Method of Segmenting Anatomic Entities in Digital Medical Images
 Publication number
 US20070047790A1 (application US 11/465,724)
 Authority
 United States
 Legal status
 Granted
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
 G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
 G06K9/36—Image preprocessing, i.e. processing the image information without deciding about the identity of the image
 G06K9/46—Extraction of features or characteristics of the image
 G06K9/4671—Extracting features based on salient regional features, e.g. Scale Invariant Feature Transform [SIFT] keypoints

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
 G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
 G06K9/62—Methods or arrangements for recognition using electronic means
 G06K9/6201—Matching; Proximity measures
 G06K9/6202—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
 G06K9/6203—Shifting or otherwise transforming the patterns to accommodate for positional errors
 G06K9/6206—Shifting or otherwise transforming the patterns to accommodate for positional errors involving a deformation of the sample or reference pattern; Elastic matching
 G06K9/6207—Shifting or otherwise transforming the patterns to accommodate for positional errors involving a deformation of the sample or reference pattern; Elastic matching based on a local optimisation criterion, e.g. "snakes", i.e. active contour models of the pattern to be recognised

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
 G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
 G06K9/62—Methods or arrangements for recognition using electronic means
 G06K9/6201—Matching; Proximity measures
 G06K9/6202—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
 G06K9/6203—Shifting or otherwise transforming the patterns to accommodate for positional errors
 G06K9/6206—Shifting or otherwise transforming the patterns to accommodate for positional errors involving a deformation of the sample or reference pattern; Elastic matching
 G06K9/6209—Shifting or otherwise transforming the patterns to accommodate for positional errors involving a deformation of the sample or reference pattern; Elastic matching based on shape statistics, e.g. active shape models of the pattern to be recognised

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T7/00—Image analysis
 G06T7/10—Segmentation; Edge detection
 G06T7/12—Edge-based segmentation

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T7/00—Image analysis
 G06T7/10—Segmentation; Edge detection
 G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T2207/00—Indexing scheme for image analysis or image enhancement
 G06T2207/30—Subject of image; Context of image processing
 G06T2207/30004—Biomedical image processing
 G06T2207/30061—Lung
Abstract
For each of a number of landmarks in an image, an initial position of the landmark is defined. Next, a neighborhood around the initial position, comprising a number of candidate locations of the landmark, is sampled, and a cost is associated with each of the candidate locations. A cost function expressing a weighted sum of overall gray-level cost and overall shape cost over all candidate locations is optimized. A segmented anatomic entity is defined as a path through a selected combination of candidate locations, namely the combination for which the cost function is optimized.
Description
 This application claims priority to European Application Nos. EP 05107903.6, filed Aug. 30, 2005, and EP 05107907.7, filed Aug. 30, 2005, and also claims the benefit of U.S. Provisional Application No. 60/715,878, filed on Sep. 9, 2005, all three of which are incorporated herein by reference in their entirety.
 The present invention relates to a method of segmenting an entity in a digital image, more specifically an anatomic entity in a digital medical image. The segmentation process according to the present invention is based on the use of geometric and photometric models of an image.
 The method of the present invention can inter alia be used as a process of performing geometric measurements on digital images, more specifically on digital medical images.
 In radiological practice, geometric measurements are frequently used to aid diagnosis of abnormalities. In order to perform these measurements, key user points must be placed in an image, for example in an image displayed on a display device, on their corresponding anatomical landmark position. Measurements such as the distance between two points, or the angulation between lines are based on the position of these key user points. Furthermore, the geometry as a whole may be assessed for normality or abnormality, involving an analysis of the complete shape. Hence there is a need to automate and objectify the extraction of quantitative information that is embedded in a radiological image.
 An example of such a frequently performed measurement is the computation of the cardiothoracic ratio (CTR) in thorax RX images (FIG. 6). This ratio is defined as the ratio of the transverse diameter of the heart, at the level of the apex, to the internal diameter of the thorax (ID), i.e. CTR=(MLD+MRD)/ID. The transverse diameter of the heart is composed of the maximum transverse diameter on the left side of the heart (MLD) and the maximum transverse diameter on the right side of the heart (MRD). Clearly, this definition entails that the radiologist searches along the inner border of the left and right ribcage boundary to locate the point pair needed to compute the internal diameter ID. This point pair must lie at the greatest internal diameter of the thorax. Likewise, the left and right heart shadow border must be searched to locate the points needed to compute the sum of MLD and MRD. More specifically, these are the points situated farthest from the midline of the spine. The process of border search requires that the radiologist perform anatomy segmentation and locate the points (a total of four in this example) on the segmented anatomy. The segmentation step, in the case of CTR computation, amounts to delineating the lung fields.
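The CTR formula above can be sketched in a few lines of code. The four landmark x-coordinates are assumed to have already been located on the segmented lung fields and heart shadow; the pixel values below are hypothetical placeholders, not data from the patent.

```python
# Sketch of the cardiothoracic ratio (CTR) computation described above.
# CTR = (MLD + MRD) / ID, with all diameters measured horizontally.

def cardiothoracic_ratio(left_heart_x, right_heart_x,
                         left_thorax_x, right_thorax_x, midline_x):
    """Compute CTR from four landmark x-coordinates and the spine midline."""
    mld = abs(left_heart_x - midline_x)    # max transverse diameter, left heart
    mrd = abs(right_heart_x - midline_x)   # max transverse diameter, right heart
    internal_diameter = abs(left_thorax_x - right_thorax_x)  # ID
    return (mld + mrd) / internal_diameter

# Hypothetical pixel coordinates on a chest radiograph:
ctr = cardiothoracic_ratio(left_heart_x=620, right_heart_x=380,
                           left_thorax_x=760, right_thorax_x=240,
                           midline_x=500)
print(round(ctr, 3))  # 0.462; a CTR above about 0.5 commonly suggests cardiomegaly
```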
 Many other measurements in digital images follow a similar approach, involving the segmentation (224, 418) of the anatomic organ or entity, on which segmented geometry the characteristic points (510) and measurement objects (512) are determined, and finally measurements (514) are performed (FIG. 5). Referring to the example of cardiothoracic index calculation, to automatically position the required points, a method is needed to automatically segment the lung fields on a chest radiographic image.
 The segmentation problem can be approached in several ways, depending on the application. Segmentation strategies evolved from low-level strategies in the early years of computer vision to the more recent model-based strategies.
 Low-level methods rely on local image operators that separate pixels with different photometric characteristics and group pixels with similar local photometric characteristics. Examples of both classes are edge detection and region growing. Despite their poor performance, these low-level approaches are very popular in most commercial image analysis tools, mainly because they are simple to understand and to implement. For complex image data, however, such as that present in medical images and exemplified by the content of a thorax image as described above, their usefulness is limited.
 More successful methods incorporate a priori knowledge about the shape to be segmented and about the photometric or gray-level appearance of the object in the image. These methods, referred to as model-based methods, are often based on template matching. A template is matched for instance by correlation or with generalized Hough transform techniques. Unfortunately, template matching is likely to fail for medical images, owing to the large variability in shape and gray-level appearance that the anatomic object may exhibit.
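Template matching by correlation, as mentioned above, can be sketched as follows. This is a minimal NumPy illustration of normalized cross-correlation, not the patent's own method; image sizes and data are arbitrary.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide `template` over `image`; return the NCC score map (scores in [-1, 1])."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            scores[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return scores

# The best match is the location of the maximum score.
np.random.seed(0)
img = np.random.rand(32, 32)
tpl = img[10:18, 12:20].copy()   # template cut from the image itself
scores = normalized_cross_correlation(img, tpl)
y, x = np.unravel_index(np.argmax(scores), scores.shape)
print(y, x)  # 10 12: the template matches its own origin exactly (score 1.0)
```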
 Methods based on active contours, introduced by Kass et al. (M. Kass, A. Witkin, and D. Terzopoulos, Snakes: active contour models, Int. J. Computer Vision, 1(4):321-331, 1988), and level sets (J. A. Sethian, Level set methods and fast marching methods, Cambridge Univ. Press, Cambridge, U.K., 1999) are able to cope with a larger shape variability, but are still unsuited for many medical segmentation tasks because little a priori knowledge about the object to be segmented can be incorporated. Handcrafted parametric models overcome this problem, but are limited to a single application.
 In view of these shortcomings, there is clearly a need for a generic segmentation scheme that can be trained with examples so as to acquire knowledge about the shape of the object to be segmented and the gray-level appearance of the object in the image. Active shape models (ASMs), introduced by Cootes and Taylor (T. F. Cootes, C. J. Taylor, D. Cooper, J. Graham, Active Shape Models—their training and applications, Computer Vision and Image Understanding, 61(1):38-59, 1995), satisfy this definition of segmentation schemes. The shape model is given by the principal components of vectors of landmark points. The gray-level appearance model describes the statistics of the normalized first derivative of profiles centered at each landmark that run perpendicular to the object contour. The location of a landmark in a new image is found by minimizing the Mahalanobis distance between the first-derivative profile and the distribution of the profile. This algorithm starts from an initial estimate and performs a fitting procedure, which is an alternation of landmark displacements and shape model fitting. Similar approaches have been devised, all employing a three-step procedure. First, they use a shape model that ensures that plausible results are generated. Second, they use a gray-level appearance model to place the object at a location where the gray-level pattern around the border or within the object is similar to what is expected from the training examples. Finally, the algorithm fits the model by minimizing some cost function.
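The ASM gray-level search described above can be sketched as follows. The mean derivative profile and its covariance are assumed to have been estimated from training images; the arrays below are hypothetical toy values chosen only to make the example run.

```python
import numpy as np

def mahalanobis_cost(profile, mean_profile, inv_covariance):
    """Mahalanobis distance (g - g_mean)^T S^-1 (g - g_mean) of a sampled profile."""
    d = profile - mean_profile
    return float(d @ inv_covariance @ d)

def best_candidate(candidate_profiles, mean_profile, covariance):
    """Pick the candidate landmark position whose profile cost is lowest."""
    inv_cov = np.linalg.inv(covariance)
    costs = [mahalanobis_cost(p, mean_profile, inv_cov)
             for p in candidate_profiles]
    return int(np.argmin(costs))

# Hypothetical training statistics for a 5-sample normalized-derivative profile:
mean = np.zeros(5)
cov = np.eye(5)
candidates = [np.array([3.0, 3, 3, 3, 3]),     # far from the trained model
              np.array([0.1, 0, 0, 0, 0.1]),   # close to the trained model
              np.array([1.0, 1, 1, 1, 1])]
print(best_candidate(candidates, mean, cov))   # 1: the closest profile wins
```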
 The approach presented by active shape models is still faced with several limitations.
 A first limitation is the need for an initial estimate, known as the initial positioning problem. In some cases, an extensive search for a proper initial contour is needed in this framework.
 A second limitation lies in the alternate use of the shape model and the gray-level appearance model. First, the segmentation estimate is updated using the gray-level appearance model. Second, the shape model is fitted to the updated contour. Unfortunately, the shape model is misled if the gray-level appearance model wrongly locates a landmark.
 Some experiments with active shape model segmentation schemes exhibited another problem. If the gray-level appearance model for a specific landmark is used to find a better location for that landmark, the true landmark location must lie inside the region the algorithm is exploring. In some cases the algorithm may search for a landmark in the region of another landmark. This means that the wrong gray-level appearance model is used in the wrong region, resulting in a poor segmentation.
 It is an object of the present invention to provide a method of segmenting an entity in a digital image, more specifically an anatomic entity in a digital medical image, that overcomes the problems of the prior art.
 The above-mentioned aspects are realized by a method as set out in claim 1.
 Specific features for preferred embodiments of the invention are set out in the dependent claims.
 Another aspect of this invention relates to a method of measuring an anatomic entity in a digital medical image as set out in the appended claims.
 In the following the terms gray value model, gray value appearance model, gray level model, gray level appearance model and photometric model are used as synonyms.
 Likewise the terms shape model and geometric model are used as synonyms.
 The terms feature image and feature are also used as synonyms.
The embodiments of the methods of the present invention are generally implemented in the form of a computer program product adapted to carry out the method steps of the present invention when run on a computer. The computer program product is commonly stored in a computer readable carrier medium such as a DVD or a CD-ROM. Alternatively, the computer program product takes the form of an electric signal and can be communicated to a user through electronic communication.
 In EP A 1349098, which is incorporated herein by this reference, a method is disclosed to automate the measurements in digitally acquired medical images by grouping measurement objects and entities into a computerized measurement scheme including a bidirectionally linked external graphical model and an internal informatics model. In a measurement session according to EP A 1349098, a measurement scheme is retrieved from the computer and activated. Measurements are subsequently performed on the displayed image under guidance of the activated measurement scheme.
 In this computerized method, a multitude of geometric objects are mapped in the digital image onto which other measurement objects and finally measurement entities (such as distances and angles) are based. The basic geometric objects are typically key user points, which define other geometric objects onto which geometric measurements are based. The above identified patent application does not disclose however how the mapping may be effectuated automatically without the need of user positioning and manipulation of key measurement points. The method according to the present invention can be used to effectuate this mapping.
 According to the present invention, a segmentation method is provided which comprises the construction of models of anatomic entities such as an organ or a structure in a medical image, which models can be trained from examples so as to obtain knowledge of the gray value appearance and knowledge of the shape of the anatomic entity to be segmented.
 The gray value appearance model comprises the probability distribution of the gray value and a number of multiscale derivatives of gray values at a set of landmarks distributed over the anatomic outline.
 The shape model comprises the probability distribution of each connection vector between successive landmarks along the outline of an anatomic entity.
 A discrete pointbased object representation is introduced to describe the anatomic outline in a medical image.
 Two strategies and associated systems for generating models and application of the models to segment actual images are disclosed.
These strategies, although decomposing into building blocks with similar goals, construct and employ different photometric and geometric model knowledge.
 First, a new gray level appearance model is constructed from training images, encoding photometric knowledge at landmark positions. This step exploits intensity correlation in neighborhoods sampled around each landmark.
 Secondly, a shape model is constructed, encoding geometric knowledge between landmarks. This step exploits spatial correlation between landmarks of the segmented objects.
 In a segmentation step, photometric and geometric knowledge are jointly used to segment one or more anatomic contours on a new image.
 The resulting segmentations may be used to derive the position of desired measurement points in the course of performing geometric measurements on an image.
 Measurement points can be defined in either a direct or an indirect way.
 In the direct way, the positions of the landmarks are used as individual measurement points.
 In the indirect way, the landmarks resulting from the segmentation process are interconnected by a curve and characteristic points on this curve are defined as measurement points.
 Gray Level Appearance Models
The first step in each modeling system constrains the geometric position of the landmark to lie on a linear path perpendicular to the present contour or on a sparse grid centered on the landmark. The photometric pattern at a landmark point in an image is captured by a profile vector sampled in feature images based on image derivatives extracted at a number of scales. The profile sampling may be along a linear path in the images or on a circular path around the landmark point.
 Feature Images
When the photometric pattern is computed for all image points in the image, feature images are obtained, the size of which is equal to that of the original image. Neighborhood operations are typically employed to compute the feature images. The values of the features in the feature image for a specific point can be arranged in a feature vector. The computation of feature images typically comprises two steps: (a) a linear filtering step using a bank of convolution kernels and (b) a postprocessing step involving the substeps of a nonlinear point operation combined with the computation of local image statistics. In step (b) one of the substeps may be performed alone.
Different filters and different types of postprocessing lead to different types of feature images. For example, Laws constructed 25 two-dimensional convolution kernels as the outer product of pairs of vectors resembling ideal feature profiles such as Level, Edge, Spot, Wave and Ripple. The postprocessing step comprises nonlinear windowing in which the absolute filtered values are averaged in larger windows to obtain image texture energy measures. Unser used filter banks constructed as the outer product of vectors corresponding to well-known transforms such as the discrete sine (DST), discrete cosine (DCT), discrete even sine (DEST), discrete real even Fourier (DREFT) and discrete real odd Fourier (DROFT) transforms, with the computation of the channel variances as postprocessing. Finally, Gabor filter banks are constructed using symmetric or antisymmetric Gabor filters tuned to combinations of different spatial frequencies and different orientations, such that all outputs have equal frequency bandwidth. The nonlinear point operation step may include thresholding in this case.
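By way of illustration, the Laws construction described above, (a) linear filtering with the 25 outer-product kernels and (b) nonlinear windowing of the absolute responses into texture energy measures, may be sketched in Python as follows. This is an expository sketch only; the function name and the window size are illustrative choices, not part of the disclosed method.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Laws' 1-D vectors: Level, Edge, Spot, Wave and Ripple
LAWS_1D = {
    "L5": np.array([1, 4, 6, 4, 1], float),
    "E5": np.array([-1, -2, 0, 2, 1], float),
    "S5": np.array([-1, 0, 2, 0, -1], float),
    "W5": np.array([-1, 2, 0, -2, 1], float),
    "R5": np.array([1, -4, 6, -4, 1], float),
}

def laws_energy_images(image, window=15):
    """Compute the 25 Laws texture energy images:
    (a) linear filtering with each 5x5 outer-product kernel,
    (b) postprocessing: averaging absolute responses in a larger window."""
    features = {}
    for na, va in LAWS_1D.items():
        for nb, vb in LAWS_1D.items():
            kernel = np.outer(va, vb)               # 5x5 separable kernel
            response = convolve(image, kernel, mode="reflect")
            features[na + nb] = uniform_filter(np.abs(response), window)
    return features
```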
 Other image characteristics may be measured using feature images based on multiresolution wavelets filters, rankvalue filters or adaptive filters.
Feature Images Based on Locally Orderless Images (LOI's)

The image structure around an anatomical landmark in a medical image exhibits a typical gray-level pattern. In the present invention, this structure is captured in a number N of mathematical features as outlined in the sequel. The concept of using feature images to build a gray-level appearance model was introduced earlier in B. van Ginneken et al., Active shape model segmentation with optimal features, IEEE Trans. on Medical Imaging, 21(8):924–933, 2002, based on so-called locally orderless images (LOI's) presented by J. J. Koenderink and A. J. van Doorn, The structure of locally orderless images, Int. J. of Computer Vision, 31(2):159–168, 1999.
A Taylor expansion can be used to approximate a gray-level function ƒ around a point of interest, i.e. a landmark, at position x_{0} by a polynomial of order K. The coefficients of each term are given by the derivatives ƒ^{(i)} at x_{0}:

$f(x)\approx \sum_{i=0}^{K}\frac{1}{i!}f^{(i)}(x_{0})(x-x_{0})^{i}$

All information in this function approximation is contained in the derivatives ƒ^{(i)}. Similarly, the image can be described by an image filter bank containing the derivatives of the original image (L,L_{x},L_{y},L_{xx},L_{yy},L_{xy}, . . . ). Each derivative image may further be computed at a number of blurring or inner scales σ. The feature images are finally obtained by computing the postprocessing step as local image statistics of each derivative image. The local image statistics comprise a number of moments of the local histogram in a window with width α around each location x_{0}. In a specific embodiment the first and second histogram moments, all derivatives up to second order (L,L_{x},L_{y},L_{xx},L_{yy},L_{xy}) and 5 inner scales (σ=0.5,1,2,4,8 pixels) are computed, amounting to a total of N=2×6×5=60 feature images. The window scale used to compute the histogram is tied to the inner scale, i.e. α=2σ.
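The specific embodiment described above (first and second histogram moments of six Gaussian derivative images at five inner scales, giving N=60 feature images) may be sketched in Python as follows. In this illustrative sketch, SciPy's Gaussian filtering stands in for the derivative operators and a uniform window approximates the local histogram moments; function and parameter names are chosen for exposition.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def loi_feature_images(image, scales=(0.5, 1, 2, 4, 8)):
    """Feature images as the first two moments of the locally orderless
    histograms of the derivative images L, Lx, Ly, Lxx, Lyy, Lxy.
    The window width is tied to the inner scale: alpha = 2*sigma."""
    orders = [(0, 0), (0, 1), (1, 0), (0, 2), (2, 0), (1, 1)]  # (dy, dx)
    features = []
    for sigma in scales:
        alpha = max(int(round(2 * sigma)), 1)
        for order in orders:
            deriv = gaussian_filter(image, sigma, order=order)
            mean = uniform_filter(deriv, alpha)               # first moment
            var = uniform_filter(deriv**2, alpha) - mean**2   # second moment
            features.extend([mean, var])
    return np.stack(features)   # N = 2 x 6 x 5 = 60 feature images
```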
The limitation of using the derivatives in the row and column directions (i.e. x resp. y) in the feature vector is the lack of invariance with respect to translation and rotation (i.e. Euclidean transformations). Unless the images are anti-rotated into a standard orientation, these operators can only be trained on examples that have the same orientation. Cartesian differential invariants (CDI's) describe the differential structure of an image independently of the chosen Cartesian coordinate system; hence the anti-rotation step can be omitted. The construction of CDI's from Gaussian differential operators was described in L. M. J. Florack et al., Scale and the differential structure of images, Image and Vision Computing, 10(6):376–388, 1992. The following are CDI's using derivatives up to second order: (L, L_{xx}+L_{yy}, L_{x}^{2}+L_{y}^{2}, L_{x}^{2}L_{xx}+2L_{xy}L_{x}L_{y}+L_{y}^{2}L_{yy}, L_{xx}^{2}+2L_{xy}^{2}+L_{yy}^{2}). Again, at each pixel location, feature images are constructed using these operators at a number of inner scales σ and computing the first two moments of the locally orderless histograms in windows of extent α, with α linked to the inner scale, e.g. α=2σ. These operators are invariant under Euclidean transformations of the image geometry, but still depend on the choice of the scale parameter σ.
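The five listed invariants can be computed directly from the Gaussian derivative images, for example as in the following illustrative sketch (the function name is chosen for exposition):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cartesian_differential_invariants(image, sigma):
    """The five rotation-invariant combinations of Gaussian derivatives
    up to second order: L, Lxx+Lyy, Lx^2+Ly^2,
    Lx^2*Lxx + 2*Lxy*Lx*Ly + Ly^2*Lyy, and Lxx^2 + 2*Lxy^2 + Lyy^2."""
    d = lambda dy, dx: gaussian_filter(image, sigma, order=(dy, dx))
    L, Lx, Ly = d(0, 0), d(0, 1), d(1, 0)
    Lxx, Lyy, Lxy = d(0, 2), d(2, 0), d(1, 1)
    return np.stack([
        L,
        Lxx + Lyy,                                   # Laplacian
        Lx**2 + Ly**2,                               # squared gradient magnitude
        Lx**2 * Lxx + 2 * Lxy * Lx * Ly + Ly**2 * Lyy,
        Lxx**2 + 2 * Lxy**2 + Lyy**2,
    ])
```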
 Profile and Feature Similarity Measurement
 The gray level appearance model is built on the basis of a mean profile and variancecovariance matrix, and exploits intensity correlation between points of the profile, as explained next. The intensity at a profile is characterized by the intensity feature vector as outlined above.
In order to compare a profile with another profile, it is necessary to know how each point in the profile relates to any other point, and to measure the extent by which the value of a certain point of the profile varies from the mean with respect to any of the other points. This statistical relationship is captured by the covariance measure, which is always computed between two points (or the features at those points). If the covariance is measured between the feature value of a point and itself, the variance is obtained. When the profile includes k points on either side of the contour, for a total of 2k+1 points, (2k+1)k such pairwise relationships exist. The covariance between a pair of points from the profile expresses how the first value of the pair varies with respect to the second. When the covariance is large and positive, it indicates that when the first point's feature value increases, so does the second in a proportional way. When the covariance is large and negative, it indicates that when the first point's feature value increases, the second decreases in a proportional way. When the covariance is zero, there is no systematic coupling between the feature values, i.e. they are independent of each other. All variance-covariance pairs can be arranged in a covariance matrix S, which is symmetric about the main diagonal. The covariance matrix thus captures all structural dependencies between feature value pairs in the profile. In practice, the covariance matrix is built from learning samples, in this case a collection of profiles at a number of landmarks and a number of intensity features in a set of training images.
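The estimation of the mean profile and the covariance matrix S from learning samples may be sketched as follows (an illustrative sketch; one row per training profile):

```python
import numpy as np

def profile_statistics(profiles):
    """Estimate the mean profile and the (2k+1)x(2k+1) covariance matrix S
    from a stack of training profiles (one row per training sample)."""
    profiles = np.asarray(profiles, float)
    mean = profiles.mean(axis=0)
    centered = profiles - mean
    S = centered.T @ centered / (len(profiles) - 1)   # unbiased estimate
    return mean, S
```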
The covariance determines how distance is calculated in the higher-dimensional space of the 2k+1 points of a profile when computing the distance between a current profile and a model profile. If the profiles comprised only a single point with value x, the current and model profile points could be compared for similarity simply by taking the absolute difference of their values, i.e. d=|x−x̄|. If the profiles comprised two points, the distance between the 2-vector x and the model 2-vector x̄ could be computed as the Euclidean distance in the plane, i.e.

$d=\sqrt{(x-\bar{x})^{\prime}(x-\bar{x})}=\sqrt{(x_{1}-\bar{x}_{1})^{2}+(x_{2}-\bar{x}_{2})^{2}}.$
Euclidean distance weighs all directions equally. However, when the variables x_{1} and x_{2} have unequal standard deviations σ_{1} resp. σ_{2}, the contours of iso-distance lines are ellipses with major axes parallel to the coordinate axes. The differences may be weighted with the inverse of the standard deviation, resulting in the weighted Euclidean distance

${d}_{w}=\sqrt{(x-\bar{x})^{\prime}W(x-\bar{x})}=\sqrt{\left(\frac{x_{1}-\bar{x}_{1}}{\sigma_{1}}\right)^{2}+\left(\frac{x_{2}-\bar{x}_{2}}{\sigma_{2}}\right)^{2}}.$

The matrix W contains the inverse of the variances of the variables, i.e.

$W=\left[\begin{array}{cc}1/\sigma_{1}^{2}&0\\ 0&1/\sigma_{2}^{2}\end{array}\right].$
Intensity levels of pixels close to each other, as is the case for linear or circular profiles, will be heavily correlated when they belong to an identical anatomical region with smoothly varying intensity levels. When the pixels of a pair belong to different tissues, the correlation may be negative. In either case, imaging noise and unimportant anatomical variation introduce a nonzero variance for the intensity level of the pixel itself. Hence, if the variables are correlated and have different variances, the inverse of the covariance matrix must be inserted, yielding the so-called Mahalanobis distance

$d_{m}=\sqrt{(x-\bar{x})^{\prime}S^{-1}(x-\bar{x})}.$

The Mahalanobis distance weights the distance of a multidimensional data point x from its mean x̄ such that observations that are on the same multivariate normal density contour have the same distance d_m. These Mahalanobis iso-distance contours take the form of ellipses in two-dimensional space and ellipsoids in three-dimensional space. Since the number of points in the profile, 2k+1, is normally much higher than three, the iso-distance loci take the form of hyperellipsoids in multidimensional space.

The gray level appearance model is obtained from a training image set and includes, for each landmark, a mean profile and a covariance matrix for each of the features. The total number of stored models is thus proportional to the number of landmarks and the number of features.
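A minimal sketch of the Mahalanobis distance computation, with the matrix inverse applied implicitly by solving a linear system rather than forming S^{-1} explicitly, is:

```python
import numpy as np

def mahalanobis(x, mean, S):
    """Mahalanobis distance d_m = sqrt((x-mean)' S^{-1} (x-mean)).
    Solving S d = (x-mean) avoids forming the explicit inverse."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    return float(np.sqrt(d @ np.linalg.solve(S, d)))
```

With S equal to the identity this reduces to the Euclidean distance; with a diagonal S it reduces to the weighted Euclidean distance d_w above.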
 Shape Models
The positional correlation between landmarks along the contour is taken into account in a shape modelling step. Because the aim is segmentation of anatomic structures with relatively deterministic shape, the shape knowledge should preferably encode the deterministic shape component along with relevant anatomical variation, and preferably ignore irrelevant shape details.
 Two embodiments are outlined to take into account positional correlation. The first is based on a global approach involving the use of all landmark positions simultaneously; the second employs a local approach, modelling the spatial disposition of successive landmarks.
 PCA Analysis and Fitting
In the first embodiment, a variance-covariance matrix of landmark coordinate positions is constructed from manually placed landmarks in a set of aligned training images. Each diagonal entry in the variance-covariance matrix represents the variance of the coordinate values of the landmarks. The non-diagonal entries represent the covariance of pairs of landmark coordinate values, i.e. they express how the coordinates of two landmarks vary with respect to each other. The variation may systematically be in equal 'positive directions' with respect to their means, yielding high positive covariance, or systematically in equal 'negative directions' with respect to their respective means, yielding high negative covariance, or randomly in opposite directions with respect to their means, yielding zero covariance. An eigenvector/eigenvalue analysis, known in the prior art as Principal Components Analysis (PCA), will detect any global systematic variation modes of the landmarks with respect to the mean landmark positions. PCA aims to reduce the dimensionality of a data set by keeping only the components of the coordinate set with the largest variation; it therefore statistically decorrelates the original coordinate positions. By projecting onto these highly varying subspaces, the relevant statistics can be approximated by a smaller dimensional system that retains important shape features. A sufficient number of these principal modes are retained and used as the shape model, the other modes being attributed to noisy variations and ignored.
 Landmark Connection Vector Model
The second embodiment aims at modeling the disposition of successive landmarks at the local level. However, this restriction does not imply that the global shape is not captured. The shape outline can be completely reconstructed given the start point coordinates and the connection vectors, by iteratively adding the connection vector to the current point, starting from the start point. This property is exploited in shape description at the pixel level by chain codes, for example. The connection vector is a discrete approximation of the local tangent vector to the curve. Taking into account the direction vector between successive landmarks, i.e. first-order geometric derivatives, allows describing the shape of the object irrespective of translations of the object. By taking into account the curvature, i.e. second-order geometric differentiation, between successive landmarks, for example expressed by the angle subtended by two successive connection vectors, the shape of the object can be described irrespective of translations and rotations of the shape (so-called rigid body transformations). In that case the curve can be completely reconstructed when the start position and the first geometric derivative (i.e. the start connection vector) are known, by iteratively adding the difference vector of successive tangent vectors. Curvature based on a chain code at the pixel level is usually highly noise dependent because the scale is too fine. This drawback is overcome in the present disclosure because the landmarks are sufficiently spaced apart to enable computation of noise-independent yet accurate first or second order derivative information. Because the magnitude of the tangent or curvature vector depends on scale, normalizing the size of each curve of the training set to unity, for example, further allows describing the curve up to a Euclidean similarity transformation. The mean and covariance statistics of displacement vectors (i.e. first derivative or tangent) or of vector differences of displacement vectors (i.e. second derivative or curvature) between manually placed landmarks in a set of aligned training images constitute the shape model of this embodiment.
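The mean and covariance statistics of the connection vectors may be estimated as in the following illustrative sketch (the array layout is chosen for exposition; the contour is treated as closed, so the last connection vector links p_n back to p_1):

```python
import numpy as np

def connection_vector_model(shapes):
    """shapes: array (s, n, 2) of n landmarks in s aligned training images.
    Returns the mean and 2x2 covariance of each connection vector
    v_i = p_{i+1} - p_i (closed contour)."""
    shapes = np.asarray(shapes, float)
    vectors = np.roll(shapes, -1, axis=1) - shapes        # (s, n, 2)
    means = vectors.mean(axis=0)                          # (n, 2)
    covs = np.stack([np.cov(vectors[:, i, :], rowvar=False)
                     for i in range(shapes.shape[1])])    # (n, 2, 2)
    return means, covs
```

Note that around a closed contour the connection vectors sum to zero, so the mean vectors do as well; this is a convenient sanity check on the model.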
 Segmentation Systems
 Segmentation systems can also be organised along two tracks each having building blocks with related objectives.
 A first building block aims at confining the admissible positions of instantiated model landmark positions to a number of most probable locations in a target image.
In the case where landmarks are represented by linear profiles in feature images, a best point voting approach can be adopted that ranks a set of candidate positions on a perpendicular line through an instantiated landmark.
In the circular profile case, a rectangular search grid can be considered at each instantiated landmark, and grid crossing positions are ranked according to the combined Mahalanobis distance of circular profiles through all feature images to the model circular profiles.

The second and third building blocks jointly constitute an optimization step. A cost matrix may be constructed using the cost measures associated with the retained admissible positions for each landmark. The cost comprises a term associated with agreement in gray-level appearance. Agreement with the shape model may be verified at the global level using weighted PCA contour fitting. Alternatively, the shape agreement may be verified by incorporating a shape cost associated with the candidate position of the landmark. A dynamic programming algorithm can be used to construct a minimum cost path through the cost matrix, yielding the final segmentation as the path with lowest overall cost.
 Finally, characteristic points on the final segmentation may indirectly define the geometrical position of the key measurement points, and all depending measurement objects and measurement quantities may be generated on the basis of these key measurement points.
 Alternatively, the landmark points used in the segmentation may be used to directly define the key measurement points and a search grid for each measurement point may be provided to locate the admissible positions of the measurement point.
 Further specific embodiments and associated advantages of the present invention will become apparent from the following description and drawings.

FIG. 1 is a flow chart illustrating a first modeling system (Modeling system I) according to the present invention, 
FIG. 2 is a flow chart illustrating a first segmentation system (Segmentation System I) according to the present invention, 
FIG. 3 is a flow chart illustrating a second modeling system (Modeling System II) according to the present invention, 
FIG. 4 is a flow chart illustrating a second segmentation system (Segmentation System II) according to the present invention, 
FIG. 5 is a flow chart illustrating measurement computation on the basis of the characteristic points on the anatomic objects segmented according to the present invention, 
FIG. 6 is a measurement scheme for measurement of the cardiothoracic ratio from a chest radiograph, 
FIG. 7 (a) illustrates manual delineation of the left lung in a chest radiograph; in FIG. 7 (b) the left lung is represented as a set of landmarks {p_i}_{i=1}^{n}, in this example with n=14.
FIG. 8 illustrates a discrete representation of the lungs fields by a set of n landmarks p_{1}=(x_{1},y_{1}), . . . , p_{n}=(x_{n},y_{n}). The image sizes are normalized between 0 . . . 1. The continuous curve is interpolated through the discrete landmarks. 
FIG. 9 illustrates a linear and a circular graylevel profile as a sampling of gray values at a number of points on a line resp. a circle with radius r_{c }around the landmark, 
FIG. 10 shows experimentally estimated distributions for some connection vectors on the left lung shape having 14 landmarks: (a) v_1, (b) v_2, (c) v_3.
FIG. 11 shows first moment feature images of a chest radiograph, corresponding to the derivatives up to second order (L,L_{x},L_{y},L_{xx},L_{yy},L_{xy}) and the scales σ equal to 0.5, 2 and 8 pixels, 
FIG. 12 shows second moment feature images of a chest radiograph, corresponding to the derivatives up to second order (L,L_{x},L_{y},L_{xx},L_{yy},L_{xy}) and the scales σ equal to 0.5, 2 and 8 pixels, 
FIG. 13 shows examples of application of the present invention to the segmentation of lung fields on a chest radiograph: (a) manual segmentation of left and right lung field; (b) automatic segmentation of left and right lung field. 
FIG. 14 illustrates the computation of the Cardiothoracic Ratio using (a) the manual lung field segmentations, resulting in CTR_man=0.47; (b) the automatically derived lung field segmentations according to the present invention, resulting in CTR_automatic=0.45.

The present invention will be explained in detail with regard to a specific application, namely the segmentation of the lung field in a medical image.
 Object Representation
 In the specific embodiments of the method of the present invention that are described below, an anatomical object in an image is represented mathematically as a fixed number of discrete, labeled points lying on the contour that encloses the object, i.e. p_{1}=(x_{1},y_{1}), . . . , p_{n}=(x_{n},y_{n}).
The contour {p_i}_{i=1}^{n} runs from p_1 to p_n and back to p_1. Hence, the object may be captured by a discrete shape vector x=(x_1,y_1, . . . ,x_n,y_n)^T. The coordinate system is chosen such that all points inside the image area lie in the domain [0,1]×[0,1] (FIG. 7).

The segmentation scheme described below needs a number of training images, in which the shape vectors x are manually determined. Once the algorithm is trained on the data set, it is able to produce the shape vector in a new image.
 The continuous contour of an anatomical object can be reconstructed by curve interpolation from the points {p_{i}}^{n} _{i=1}. A first order interpolation links successive points with a line segment, resulting in a polygonal approximation. Higher order interpolations may be used to achieve a smoother outline. Irrespective of the order, the continuous representation can be used to derive spatial properties of the curve, such as the tangent direction along the curve, or the normal direction at a certain point on the curve.
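A first-order (polygonal) interpolation through the landmarks may be sketched as follows (illustrative only; a higher-order spline interpolation would replace the linear segments to obtain a smoother outline):

```python
import numpy as np

def interpolate_contour(landmarks, points_per_segment=10):
    """First-order (polygonal) interpolation through a closed set of
    landmarks p_1..p_n, returning a densely sampled outline."""
    landmarks = np.asarray(landmarks, float)
    closed = np.vstack([landmarks, landmarks[:1]])       # close the contour
    t = np.linspace(0, 1, points_per_segment, endpoint=False)
    segments = [a + t[:, None] * (b - a)                 # points along each edge
                for a, b in zip(closed[:-1], closed[1:])]
    return np.vstack(segments)
```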
Modeling System I (FIG. 1)

Gray Level Appearance Model
The gray level appearance model (118) (also called photometric model) captures the spatial intensity distribution in the neighborhood of so-called landmarks (also called landmark points). The spatial component of the gray level appearance model is built around the use of gray-level profiles. As will be apparent from their construction, other sampling schemes can be employed.
The number n of landmarks (110) that is for example used to accurately describe a lung field outline may be as low as 14, e.g. landmark 1 denotes the uppermost point of each lung field, landmark 7 is positioned at the cross section of either the left or right heart shadow with their respective hemidiaphragm, and landmark 9 denotes the costophrenic angle. The other landmarks can be equidistantly positioned between these major landmarks. Obviously, a higher number of landmarks may be used.
Linear GrayLevel Profiles (FIG. 9)

At each landmark, the gray level appearance model describing the typical image structure around the landmark is sampled perpendicular to the contour. On either side of the landmark, k pixels are sampled using a certain step size, which gives profile vectors (112) of length 2k+1. This sampling scheme has the advantage that it is able to model linear intensity gradients at the landmark. The profiles may be normalized so that the sum of absolute values of elements in the profile is 1. The direction of sampling can be computed as the mean normal direction between the normals on the line segments resulting from connecting the landmark with its previous or its next neighbour, or as the normal on the line segment connecting the previous and next landmark.
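The linear profile sampling described above may be sketched as follows (an illustrative sketch using bilinear interpolation; coordinates are taken in (row, column) order, and the function name is chosen for exposition):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_profile(image, landmark, normal, k=5, step=1.0):
    """Sample 2k+1 gray values along the unit normal through a landmark
    and normalize so that the absolute values sum to 1."""
    normal = np.asarray(normal, float) / np.linalg.norm(normal)
    offsets = np.arange(-k, k + 1)[:, None] * step * normal
    coords = np.asarray(landmark, float) + offsets       # (2k+1, 2) as (row, col)
    values = map_coordinates(image, coords.T, order=1)   # bilinear interpolation
    total = np.abs(values).sum()
    return values / total if total > 0 else values
```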
 GrayLevel Feature Images
The image structure around a landmark is captured in a number N of mathematical features as explained above. The feature images (114) are obtained by computing a number of moments of the local histogram of derivative images in a window with width α around each location x_0. In a preferred embodiment the first and second histogram moments, all derivatives up to second order (L,L_{x},L_{y},L_{xx},L_{yy},L_{xy}) and 5 inner scales (σ=0.5,1,2,4,8 pixels) are computed, amounting to a total of N=2×6×5=60 feature images. The window scale used to compute the histogram is tied to the inner scale, i.e. α=2σ. The set of feature images (114) can be regarded as a gray-level decomposition of the gray-level function in the neighbourhood of a landmark.
An example of the computed feature images in a thorax image is given in FIG. 11 (first histogram moments of the original image and 5 derivative images at three scales) and FIG. 12 (second histogram moments of the original image and 5 derivative images at three scales). Thorax images normally appear in the upright position in the image; anatomic objects pertaining to other radiographic examinations may exhibit non-standardized position and rotation, in which case the construction of feature images may employ Cartesian Differential Invariants.

GrayLevel Appearance Model
All images of a training set are used to build a statistical model for each feature and each landmark. Denoting the normalized profile sampled at a landmark i (e.g. 14 in total for the 2D case of a thorax lung field) in feature j (e.g. 60 in total for 2D images) as a vector g_{i,j}, the probability distribution of g_{i,j} is estimated from the training images, resulting in a mean ḡ_{i,j} and a covariance S_{i,j} (116). For a profile vector of length 2k+1, the size of this covariance matrix (116) is (2k+1)×(2k+1). The gray level appearance model for a certain landmark i consists of the mean profiles ḡ_{i,j}, j=1 . . . N and covariance matrices S_{i,j}, j=1 . . . N. The gray level appearance model for the total lung field consists of the collection of all mean profiles ḡ_{i,j}, i=1 . . . n, j=1 . . . N and covariance matrices S_{i,j}, i=1 . . . n, j=1 . . . N.

GrayLevel Cost
The Mahalanobis distance between a new profile g_{i,j} sampled at a point p in a new feature image j for a certain landmark i is computed as

$h_{i,j}(p)=\left(g_{i,j}(p)-\bar{g}_{i,j}\right)^{T}S_{i,j}^{-1}\left(g_{i,j}(p)-\bar{g}_{i,j}\right)$

A smaller Mahalanobis distance means a larger probability that the profile g_{i,j}(p) originates from the Gaussian distribution with mean ḡ_{i,j} and covariance S_{i,j}. Hence, the Mahalanobis distance may be used as a gray-level cost measure, denoted h_{i,j}(p). This cost measure is a function of the location p in a given image. The location that is most likely the true location of landmark i, in accordance with feature j, is the point p that minimizes h_{i,j}(p). This gray-level cost can thus be defined for each feature j.

Shape Model
The curve outline, in the case of lung fields one for each lung field, is described by n landmark points (110) (FIG. 8). These landmark points are manually determined in a set of s training images, yielding a sequence of points (x_1,y_1) . . . (x_n,y_n). These coordinate tuples are subsequently positioned in a vector x=(x_1,y_1, . . . ,x_n,y_n)^T, representing the curve. Next, Principal Components Analysis (PCA) (120) is applied to the shape vectors x of the training images. The PCA projects the curve into a lower dimensional space, covering the most important modes of geometric variation of the lung fields. Each curve x ∈ ℝ^{2n} can be approximated by b ∈ ℝ^{t} with t<<2n:

$x\approx \bar{x}+\Phi b$

with x̄ the mean shape (122) and Φ ∈ ℝ^{2n×t} the eigenvectors of the covariance matrix of x corresponding to the t largest eigenvalues. Each eigenvalue determines the variance of the shape for that shape mode (124). This curve approximation, represented by the vector b, constitutes the shape model (126) and fits the curve x into the shape model. The eigenshapes can be regarded as a zero-order (positional) geometric decomposition of the shape represented by the set of landmarks. The role of the shape model is to constrain the deformation of the object between the bounds of variation imposed by the training set, for example three standard deviations with respect to the mean shape.
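The construction of the shape model x ≈ x̄ + Φb and the fitting of a curve into it may be sketched as follows (illustrative only; the eigenvectors are taken from the sample covariance matrix of the training shape vectors):

```python
import numpy as np

def train_shape_model(shapes, t):
    """shapes: (s, 2n) array of training shape vectors.
    Returns the mean shape, the t eigenvectors with largest eigenvalues,
    and those eigenvalues (the per-mode shape variances)."""
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:t]         # keep the t largest modes
    return mean, eigvecs[:, order], eigvals[order]

def fit_shape(x, mean, phi):
    """Project a shape vector x into the model, b = phi^T (x - mean),
    and reconstruct the approximation x ~ mean + phi b."""
    b = phi.T @ (x - mean)
    return b, mean + phi @ b
```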
Segmentation System I (FIG. 2)

The segmentation algorithm initially positions a curve, representing a lung field outline, in the image and finds a segmentation solution (224) by iteratively updating the positions of the curve's constituent landmarks until an optimization criterion reaches an optimal value. Each iteration includes the following steps.
 Step 1. Best Point Voting
 Assuming a current position of a particular landmark, a better location for this landmark is searched for on a line perpendicular to the contour. On each side of the current position of the landmark, n_{s} positions are evaluated, and one of the 2n_{s}+1 points will be voted as the best next point. Each feature of the set of N features contributes one vote to one of the 2n_{s}+1 points: the point of the 2n_{s}+1 points that has minimum cost according to the Mahalanobis distance criterion (214). For each of the 2n_{s}+1 points, the profile having 2k+1 points sampled in feature image j is put in the equation of h_{i,j}(p) with the proper mean ḡ_{i,j} and covariance S_{i,j} filled in. The point with the lowest Mahalanobis distance receives the vote of that feature (216). Each feature chooses one point, resulting in N votes divided over the 2n_{s}+1 points as sets N_{1}, . . . , N_{2n_{s}+1} with $\sum_{k=1}^{2n_{s}+1}N_{k}=N.$
Obviously the number N of features must be substantially larger than the number 2n_{s}+1 of selectable points, for otherwise too few votes could be assigned to the points of the set of selectable points.  The number of votes N_{i,j} that each point of the set of considered 2n_{s}+1 points for a certain landmark i receives may be regarded as a confidence rate that the particular point is the searched-for landmark.
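The voting step can be illustrated with the following sketch. The helper below is hypothetical (the name and array layout are ours) and assumes the per-feature mean profiles ḡ_{i,j} and inverse covariance matrices S_{i,j}^{-1} have already been estimated for the landmark under consideration:

```python
import numpy as np

def vote_best_points(profiles, means, inv_covs):
    """Best-point voting for one landmark.

    profiles: (N, 2ns+1, 2k+1) profiles sampled in each of the N
              feature images at each of the 2ns+1 candidate points.
    means:    (N, 2k+1) mean profile per feature (g_bar_{i,j}).
    inv_covs: (N, 2k+1, 2k+1) inverse covariance per feature.
    Returns vote counts, one per candidate point (they sum to N).
    """
    n_feat, n_cand, _ = profiles.shape
    votes = np.zeros(n_cand, dtype=int)
    for j in range(n_feat):
        d = profiles[j] - means[j]                       # (n_cand, 2k+1)
        h = np.einsum('ck,kl,cl->c', d, inv_covs[j], d)  # Mahalanobis h_{i,j}
        votes[np.argmin(h)] += 1                         # one vote per feature
    return votes
```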
 Step 2. Minimal Cost Path Optimization
 According to the first step, each landmark may separately be displaced over 2n_{s}+1 possible locations. The confidence rates N_{i,j }can be put in a matrix (218) as follows
$R=\begin{bmatrix}N_{1,1}&\cdots&N_{1,2n_{s}+1}\\ \vdots&N_{i,j}&\vdots\\ N_{n,1}&\cdots&N_{n,2n_{s}+1}\end{bmatrix}$ with $\sum_{j=1}^{2n_{s}+1}N_{i,j}=N\quad\forall i=1,\dots,n.$  Updating the contour involves choosing one element in each row of this matrix; the concatenation of the chosen elements forms a path through the matrix, and finding the path that optimizes (i.e. minimizes) a cost criterion is a minimal cost path (MCP) search.
 A logical choice would be to take the element with highest confidence in each row; this approach has the drawback that it does not take the presence of outliers into account and may therefore include abrupt changes in the path. This drawback is overcome by imposing a smoothness constraint that excludes such abrupt changes.
 To this purpose, first a cost matrix is constructed as follows:
$C=\begin{bmatrix}1/N_{1,1}^{m}&\cdots&1/N_{1,2n_{s}+1}^{m}\\ \vdots&1/N_{i,j}^{m}&\vdots\\ 1/N_{n,1}^{m}&\cdots&1/N_{n,2n_{s}+1}^{m}\end{bmatrix},$  with the power m in the denominator a positive number that influences the computed path (a larger m yields a smaller cost contribution for a point with many votes and hence steers the path towards this point). Second, a cost function is constructed as follows:
$J(j_{1},j_{2},\dots,j_{n})=\sum_{i=1}^{n}C_{i,j_{i}}+\alpha\sum_{i=2}^{n}\left|\delta_{i,j_{i}}-\delta_{i-1,j_{i-1}}\right|,$  with δ_{i,j_{i}} the displacement of point i towards its target point j_{i} along the normal to the shape and α a weight factor. The second term in this equation is the smoothness constraint that penalizes target point configurations for which the displacement of one target point varies abruptly along the contour w.r.t. the displacement of the previous target point. The cost function J is optimized (220), using e.g. a technique known in the prior art as dynamic programming, described in e.g. D. H. Ballard, C. M. Brown, Computer Vision, Englewood Cliffs, Prentice Hall Inc., 1982, pp. 137-143, resulting in the target points (j^{*}_{1}, . . . , j^{*}_{n}) that minimize J as a minimal cost path (MCP).
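A minimal dynamic-programming sketch for minimizing a cost function of this form over an open path is given below. The displacement δ_{i,j} is taken as the signed column offset from the current landmark position, and the routine name and interface are ours, not the patent's:

```python
import numpy as np

def minimal_cost_path(C, alpha=1.0):
    """Dynamic programming for the open minimal cost path.

    C: (n, 2ns+1) cost matrix (e.g. entries 1/N_{i,j}^m).
    The displacement of column j is taken as the signed offset
    j - ns along the profile normal.
    Returns the column index chosen in each row.
    """
    n, w = C.shape
    delta = np.arange(w) - w // 2
    # smoothness penalty between any pair of successive displacements
    pen = alpha * np.abs(delta[None, :] - delta[:, None])
    acc = C[0].copy()                    # accumulated cost per column
    back = np.zeros((n, w), dtype=int)   # backtracking pointers
    for i in range(1, n):
        total = acc[:, None] + pen       # (previous column, current column)
        back[i] = np.argmin(total, axis=0)
        acc = total[back[i], np.arange(w)] + C[i]
    path = [int(np.argmin(acc))]
    for i in range(n - 1, 0, -1):        # backtrack from the cheapest end
        path.append(int(back[i][path[-1]]))
    return path[::-1]
```

With α = 0 the path simply picks the row minima; a large α suppresses the outlier jumps mentioned above.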
 Alternatively to the matrix data structure, a layered graph may be used also. Each layer represents the array of perpendicularly sampled image points at a landmark. The graph is augmented with a set of arcs from the last layer back to the first. The dynamic programming technique extracts the boundary as an optimal closed path through the graph.
 Step 3. Unweighted and Weighted PCA Contour Fitting (223)
 In the previous step, each landmark i is given a new location j^{*}_{i}. The sequence (j^{*}_{1}, . . . , j^{*}_{n}) represents a contour that may be fitted in the shape model x ≈ x̄ + Φb by substituting it for x and deriving b by solving Φb = x − x̄. Fitting x in the shape model means finding a b such that x̄ + Φb approximates x with small errors in the high-confidence landmarks.  Unweighted PCA Contour Fitting
 Since Φ ∈ ℝ^{2n×t}, there are more equations than unknowns, and the system is overdetermined, having no exact solution. Therefore, ∥Φb − (x − x̄)∥_{p} must be minimized for some suitable choice of p. Different norms render different optimum solutions. In particular the case p=2, the least squares problem, is more tractable, and is solved in the prior art by methods based on the normal equations and the QR factorization. Denoting the difference between the actual point sequence and the fitted point sequence as the fit error e = x − (x̄ + Φb), the problem is to seek the b that minimizes e^{T}e. Substituting for e, this optimization problem becomes $\min_{b}J(b)=\min_{b}e^{T}e=\min_{b}\left((x-\bar{x})-\Phi b\right)^{T}\left((x-\bar{x})-\Phi b\right).$  J(b) is minimized if ∇J(b)=0. The gradient ∇J(b) of J(b) can be written
∇J(b) = −2Φ^{T}(x−x̄) + 2Φ^{T}Φb.  Hence the optimization problem leads to the equations Φ^{T}Φb = Φ^{T}(x−x̄), known as the normal equations. These are solved for b as
b=(Φ^{T}Φ)^{−1}Φ^{T}(x−x̄)  This equation projects the data vector x from image space into the PCA model space and returns the approximation coefficients b. The approximation result b may be projected back into image space by evaluating the shape model x = x̄ + Φb.  Weighted PCA Contour Fitting (222)
 The corresponding number of votes N_{i,j^{*}_{i}} may be used as weights when fitting the contour into the shape model. As a consequence, the PCA model will tend to give a high priority to the high-confidence points, while points with a low confidence rate will have less influence on the contour that the model generates. The optimization problem now consists of minimizing the weighted quadratic error (We)^{T}(We), with W a diagonal matrix of the weights. Using a similar approach with normal equations as in the unweighted case, this problem leads to the solution for b expressed as
b=(Φ^{T}W^{2}Φ)^{−1}Φ^{T}W^{2}(x−x̄).  This equation is evaluated to project x into the PCA space; the result may be projected back using the shape model x = x̄ + Φb.  Steps 1 to 3 are iterated until a stop criterion is fulfilled, e.g. when the positions of the landmarks do not change substantially between successive iterations.
 Both the unweighted and the weighted PCA fitting yield the final segmentation result (224) as a set of landmark points x in the image. A continuous spline curve may be interpolated through these points to represent the segmentation analytically.
 Modeling System II (
FIG. 3 )  Gray Level Appearance Model
 The gray level appearance model (118′) captures the spatial intensity distribution in the neighborhood of landmarks. In this embodiment, the spatial model departs from circular profiles sampled in the neighborhood of a landmark. The number n of landmarks used to accurately describe a lung field outline may be as low as 14, e.g. landmark 1 denotes the uppermost point of each lung field, landmark 7 is positioned at the cross section of either the left or right heart shadow with its respective hemidiaphragm, and landmark 9 denotes the costophrenic angle. The other landmarks are equidistantly positioned between these major landmarks. Obviously, a higher number of landmarks may also be used in the computational scheme according to the present invention.
 Circular Profiles
 An alternative way for selecting the points in the neighborhood of a landmark is to sample points on a circle with center at the landmark and at least one radius (312). An example of a circular sampling is given in
FIG. 9 . If the landmark is located at (x_{0},y_{0}), the gray value function of the image is sampled at radius r_{c} at the points $\left(x_{0}+r_{c}\cos\frac{2\pi}{n_{c}}(i-1),\;y_{0}+r_{c}\sin\frac{2\pi}{n_{c}}(i-1)\right),\quad i=1,\dots,n_{c}.$  The samples are put in a profile vector of length n_{c}. Suitable choices of n_{c} are 12, 8, 6, 4 and 3 (corresponding to 30, 45, 60, 90 and 120 degree angular spacing). Multiple circular subsamplings at a set of radii may be considered simultaneously. Suitable radii r_{c}, expressed dimensionless as a fraction of the image size, are 0.05 and 0.025.
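Circular profile sampling can be sketched as follows. Reading gray values with bilinear interpolation is an implementation choice on our part (the text does not prescribe the interpolation scheme), and the normalisation by the sum of absolute values follows the description further below:

```python
import numpy as np

def circular_profile(image, x0, y0, radius, n_c=12):
    """Sample a circular profile of n_c points around (x0, y0).

    Uses bilinear interpolation of the gray value function; the
    profile is normalised so its absolute values sum to 1.
    """
    angles = 2.0 * np.pi * np.arange(n_c) / n_c
    xs = x0 + radius * np.cos(angles)
    ys = y0 + radius * np.sin(angles)
    xi, yi = np.floor(xs).astype(int), np.floor(ys).astype(int)
    fx, fy = xs - xi, ys - yi
    # bilinear interpolation between the four surrounding pixels
    p = (image[yi, xi] * (1 - fx) * (1 - fy)
         + image[yi, xi + 1] * fx * (1 - fy)
         + image[yi + 1, xi] * (1 - fx) * fy
         + image[yi + 1, xi + 1] * fx * fy)
    s = np.abs(p).sum()
    return p / s if s > 0 else p
```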
 This scheme has the advantage over linear profiles that it captures the 2D structure around the neighborhood. Specific anatomical landmarks such as the costophrenic corner point or the intersection of the left heart shadow with the left diaphragm are better modeled by circular than by linear profiles, because the lung field contour has a discontinuous first derivative at these points. Landmarks along the lung field contour in the vicinity of these specific anatomical landmarks may also be represented ambiguously by a linear profile if its length is too long, for the profile may then cross the juxtaposed lung field border.
 The circular profile may be normalized such that the sum of absolute values of the elements in the profile is 1. The degree of subsampling may range from no subsampling (yielding a resampling at original resolution of the image) to very subsampled, e.g. only 4 points on the main axes in the case of a circular profile.
 GrayLevel Feature Images
 The image structure around a landmark is captured in a number N of mathematical features as outlined in the summary of the invention. The feature images (314) are obtained by computing a number of moments of the local histogram of derivative images in a window with width α around each location x_{0}. In a preferred embodiment the first and second histogram moments, all derivatives up to second order (L,L_{x},L_{y},L_{xx},L_{yy},L_{xy}) and 5 inner scales (σ=0.5,1,2,4,8 pixels) are computed, amounting to a total of N=2×6×5=60 feature images. The window scale used to compute the histogram is in accordance with the inner scale, i.e. α=2σ. The set of feature images can be regarded as a gray-level decomposition of the gray-level function in the neighborhood of a landmark.
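A reduced sketch of the feature-image computation is given below. It uses only the derivatives L, L_x and L_y for brevity (the embodiment above uses all derivatives up to second order), and the first and second moments of the local histogram are computed as the windowed mean and windowed mean of squares with a box window of width α = 2σ. All names are ours:

```python
import numpy as np

def gaussian_kernel(sigma, order=0):
    """1-D Gaussian (order 0) or its first derivative (order 1)."""
    r = int(np.ceil(3 * sigma))
    t = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    return g if order == 0 else -t / sigma ** 2 * g

def sep_conv(img, kx, ky):
    """Separable 2-D convolution (rows then columns), 'same' size."""
    out = np.apply_along_axis(lambda v: np.convolve(v, kx, 'same'), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, ky, 'same'), 0, out)

def feature_images(img, sigmas=(0.5, 1, 2, 4, 8)):
    """First and second local histogram moments of the derivative
    images L, Lx, Ly per inner scale, with a box window alpha=2*sigma."""
    feats = []
    for s in sigmas:
        g, dg = gaussian_kernel(s, 0), gaussian_kernel(s, 1)
        for deriv in (sep_conv(img, g, g),     # L
                      sep_conv(img, dg, g),    # Lx
                      sep_conv(img, g, dg)):   # Ly
            w = max(1, int(round(2 * s)))
            box = np.ones(2 * w + 1) / (2 * w + 1)
            feats.append(sep_conv(deriv, box, box))        # first moment
            feats.append(sep_conv(deriv ** 2, box, box))   # second moment
    return feats
```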
 GrayLevel Cost
 The Mahalanobis distance between a new profile g_{i,j}(p) sampled in feature image j for a certain landmark i and the mean profile ḡ_{i,j} is computed as

h_{i,j}(p)=(g_{i,j}(p)−ḡ_{i,j})^{T}S^{−1}_{i,j}(g_{i,j}(p)−ḡ_{i,j})

A smaller Mahalanobis distance means a larger probability that the profile g_{i,j}(p) originates from the Gaussian distribution with mean ḡ_{i,j} and covariance S_{i,j}. Hence, the Mahalanobis distance may be used as a gray-level cost measure, denoted h_{i,j}(p). This cost measure is a function of the location p in a given image. The location that most likely is the true location of landmark i, in accordance with feature j, is the point p that minimizes h_{i,j}(p). This gray-level cost can thus be defined for each feature j.  A total gray-level cost is obtained by combining the gray-level costs of all features j in a sum
${h}_{i}(p)=\sum_{j=1}^{N}\left(g_{i,j}(p)-\bar{g}_{i,j}\right)^{T}S_{i,j}^{-1}\left(g_{i,j}(p)-\bar{g}_{i,j}\right)$  reflecting the similarity between the gray-level pattern at p and the expected gray-level pattern at landmark i. The location most likely to coincide with landmark i, in accordance with the gray-level appearance, is the solution of the optimization problem
${p}_{o}=\arg\min_{p}h_{i}(p).$  To construct the gray-level appearance model (118′), the distributions (ḡ_{i,j},S_{i,j}) of the feature profiles g_{i,j} (for all landmarks i=1, . . . , n and all feature images j=1, . . . , N) have to be estimated from the training images.  Shape Model (126′)
 Whereas a graylevel cost validates a graylevel pattern or a pattern of features, a shape cost is defined to validate a connection between two successive landmarks.
 The curve outline, one for each of the lung fields, is described by n points (landmarks). These landmarks are manually determined in a set of s training images, yielding a sequence of points (x_{1},y_{1}) . . . (x_{n},y_{n})=(p_{1}, . . . , p_{n}). These coordinate tuples are subsequently positioned in a vector x=(x_{1},y_{1}, . . . , x_{n},y_{n})^{T} (320), representing the shape. Consider a pair (p_{i},p_{i+1}) of estimated positions of successive landmarks. A shape cost is assigned to the vector v_{i}=p_{i+1}−p_{i}, reflecting the plausibility of v_{i} w.r.t. its probability distribution. The distributions of v_{1},v_{2}, . . . , v_{n}, assumed to be normal around their mean, are estimated from the training shapes. The mean (322) and covariance (324) of v_{i} are denoted v̄_{i} and S_{v_{i}} respectively. An example of these vector distributions is given in FIG. 10 . The vector distributions can be regarded as a first-order (tangential) geometric decomposition of the shape represented by the set of landmarks.  A novel shape cost validating a connection vector v_{i} between two successive landmarks p_{i},p_{i+1} (a vector in the plane being fully characterized by its orientation and its length) is the Mahalanobis distance between v_{i} and its mean v̄_{i}:
ƒ_{i}(v_{i})=(v_{i}−v̄_{i})^{T}S^{−1}_{v_{i}}(v_{i}−v̄_{i}).  A connection that is unlikely, because it has a large Mahalanobis distance, will get a high shape cost. On the other hand, a zero cost will be assigned to a connection that equals the expected value.
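A sketch of the shape-cost model is given below, assuming the training shapes are given as (s, n, 2) landmark arrays and a closed contour so that v_n connects p_n back to p_1 (our reading, consistent with the closed-path cost function used later); the names are ours:

```python
import numpy as np

def shape_cost_model(train_shapes):
    """Estimate (v_bar_i, S_v_i) for each connection vector from
    training shapes given as an (s, n, 2) array of landmarks.

    v_i = p_{i+1} - p_i, with v_n closing the contour back to p_1.
    Returns the (n, 2) means and (n, 2, 2) covariances.
    """
    v = np.roll(train_shapes, -1, axis=1) - train_shapes
    means = v.mean(axis=0)
    covs = np.array([np.cov(v[:, i, :], rowvar=False)
                     for i in range(v.shape[1])])
    return means, covs

def shape_cost(v_i, v_mean, S_v):
    """Mahalanobis shape cost f_i(v_i) of a connection vector."""
    d = v_i - v_mean
    return float(d @ np.linalg.inv(S_v) @ d)
```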
 Segmentation System II (
FIG. 4 )  Knowledge about the graylevel appearance of the object in the image and about the shape of the object to be segmented is acquired during the training phase. This knowledge is subsequently used to segment the object in a new image (418) according to the following steps.
 Step 1. Rectangular Search Grids Construction (410).
 A rectangular search grid for each landmark i is constructed to constrain the possible locations of the landmark. Each rectangular grid is characterized by the parameters of geometric extent and grid spacing.
 The grid spacing r_{g }should be in accordance with the radius r_{c }of the circular profiles (312) of the graylevel appearance model. A suitable relationship is r_{g }=ƒ_{g}r_{c }with r_{c }the radius and ƒ_{g }a fixed fraction. A typical fraction for the grid spacing is ƒ_{g}=0.5.
 The grid extent and grid position is landmark specific and is represented by x_{min}, x_{max}, y_{min }and y_{max}. The true (unknown) landmark location (x^{*},y^{*}) is deemed to lie between the lower and upper borders:
x_{min}<x^{*}<x_{max}
y_{min}<y^{*}<y_{max}  These conditions can only be guaranteed if the grid spans the whole image area (i.e. x_{min}=y_{min}=0 and x_{max}=y_{max}=1). A more efficient approach is to constrain the search to a relevant part of the image. Assume that the probability distribution p_{x}(x) of the x coordinate of the landmark is normal with mean x̄ and standard deviation σ_{x} (estimated from the training shapes). The grid borders x_{min} and x_{max}, jointly defining the grid extent in the x direction, are chosen such that $\int_{x_{\mathrm{min}}}^{x_{\mathrm{max}}}p_{x}(x)\,dx=f_{c}$  with ƒ_{c} some fraction representing the probability that the condition x_{min}<x^{*}<x_{max} is true. A suitable value for the fraction ƒ_{c} is 0.995. If the requirement is imposed that the extent is symmetric around the mean x̄, i.e. x_{max}−x̄=x̄−x_{min}, the upper border x_{max} is the value that satisfies $\frac{1}{\sigma_{x}\sqrt{2\pi}}\int_{\bar{x}}^{x_{\mathrm{max}}}\exp\left(-\frac{(x-\bar{x})^{2}}{2\sigma_{x}^{2}}\right)dx=\frac{1}{2}f_{c}.$  The lower border is then x_{min}=2x̄−x_{max}. The borders for the y coordinate are obtained analogously.  Step 2. Gray-Level Cost Matrix Construction
 The gray-level appearance model (118′) is used to find proper locations for each landmark. The m best locations of landmark i, in accordance with the gray-level appearance model, are selected as follows.
 First, a rectangular grid covering a relevant part of the image area around the expected location of each landmark i is defined, according to the previous step.
 Secondly, for each grid point, the total graylevel cost h_{i}(p) is computed. The points p_{i,1},p_{i,2 }, . . . p_{i,m }corresponding to the m lowest total graylevel costs are selected.
 Thirdly, a graylevel cost matrix C (414) is constructed so that each row i contains the costs of the m most likely locations of landmark i:
$C=\begin{bmatrix}h_{1}(p_{1,1})&\cdots&h_{1}(p_{1,k})&\cdots&h_{1}(p_{1,m})\\ \vdots&&\vdots&&\vdots\\ h_{i}(p_{i,1})&\cdots&h_{i}(p_{i,k})&\cdots&h_{i}(p_{i,m})\\ \vdots&&\vdots&&\vdots\\ h_{n}(p_{n,1})&\cdots&h_{n}(p_{n,k})&\cdots&h_{n}(p_{n,m})\end{bmatrix}.$  A typical number m of best points per landmark ranges from 10 to 40.
 As the m best points per landmark are selected independently of the set of m best points for a neighboring landmark, the situation may arise that one or more of these points lie nearer to a non-neighboring landmark. Such points will likely be rejected in the final contour segmentation by accounting for the shape cost in the next step.
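Steps 1 and 2 can be sketched as follows. The helper `search_grid` derives a 1-D grid extent from the normal quantile (Python's stdlib NormalDist stands in for the explicit Gaussian integral above), and `cost_matrix_rows` keeps the m cheapest grid points per landmark; names and interfaces are ours:

```python
import numpy as np
from statistics import NormalDist

def search_grid(mean, sigma, spacing, f_c=0.995):
    """1-D grid positions covering the central f_c probability mass
    of the (normal) landmark coordinate distribution."""
    half = NormalDist(0.0, sigma).inv_cdf(0.5 + f_c / 2.0)
    return np.arange(mean - half, mean + half + spacing, spacing)

def cost_matrix_rows(grid_costs, m):
    """Keep, for one landmark, the m grid points with lowest total
    gray-level cost; returns (indices, costs) sorted ascending."""
    idx = np.argsort(grid_costs)[:m]
    return idx, grid_costs[idx]
```

Stacking the per-landmark cost rows then yields the n x m gray-level cost matrix C used in the next step.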
 Step 3. Minimal Cost Path Construction (416)
 Determining the contour that segments the object is reduced to finding a path from top to bottom through the matrix C by selecting one element per row. Denoting the index of the selected element in row i as k_{i}, the curve becomes the point sequence {p_{1,k_{1}},p_{2,k_{2}}, . . . , p_{n,k_{n}}}. The optimal path (k^{*}_{1},k^{*}_{2}, . . . , k^{*}_{n}) is the path that minimizes a cost function J(k_{1}, . . . , k_{n}):
$(k_{1}^{*},k_{2}^{*},\dots,k_{n}^{*})=\arg\min_{k_{1},\dots,k_{n}}J(k_{1},\dots,k_{n}).$  The models introduced above admit a number of cost measures. Considering two successive landmarks p_{i,k_{i}} and p_{i+1,k_{i+1}}, a cost component (412) according to the gray-level appearance model and a cost component according to the shape model are:
 The gray-level costs h_{i}(p_{i,k_{i}}) and h_{i+1}(p_{i+1,k_{i+1}}) corresponding to the landmarks p_{i,k_{i}} and p_{i+1,k_{i+1}};
 The shape cost ƒ_{i}(p_{i+1,k_{i+1}}−p_{i,k_{i}}), which validates the plausibility of the connection from p_{i,k_{i}} to p_{i+1,k_{i+1}}.
 Both cost types can be incorporated in the cost function J(k_{1}, . . . ,k_{n}) by means of a weighted sum of an overall gray-level cost and an overall shape cost. The overall gray-level cost with weight factor γ_{1} is the sum of the individual landmark gray-level costs h_{i}(p_{i,k_{i}}), i=1, . . . , n. The overall shape cost with weight factor γ_{2} is the sum of the shape costs ƒ_{i}(p_{i+1,k_{i+1}}−p_{i,k_{i}}), i=1, . . . , n. Hence the cost function J(k_{1}, . . . , k_{n}) becomes
$J(k_{1},\dots,k_{n})=\gamma_{1}\sum_{i=1}^{n}h_{i}(p_{i,k_{i}})+\gamma_{2}\sum_{i=1}^{n-1}f_{i}(p_{i+1,k_{i+1}}-p_{i,k_{i}})+\gamma_{2}f_{n}(p_{1,k_{1}}-p_{n,k_{n}}).$  The optimal path (k^{*}_{1},k^{*}_{2}, . . . , k^{*}_{n}) that minimizes J(k_{1}, . . . , k_{n}) is called a Minimal Cost Path. The optimal path (416) is computed using dynamic programming techniques known in the prior art, such as introduced in G. Behiels et al., Evaluation of image features and search strategies for segmentation of bone structures using active shape models, Medical Image Analysis, 6(1):47-62, 2002. A typical weight factor for the gray-level cost is γ_{1}=1, and for the shape cost γ_{2}=0.25.
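The closed minimal cost path can be sketched with a dynamic program that fixes the candidate chosen for the first landmark and tries every such start, keeping the cheapest closed path (a standard way of closing the cycle; the routine and its interface are ours, not the patent's):

```python
import numpy as np

def cyclic_minimal_cost_path(points, h, shape_cost, g1=1.0, g2=0.25):
    """Minimal cost closed path through the candidate point matrix.

    points: (n, m, 2) candidate locations p_{i,k};  h: (n, m) total
    gray-level costs;  shape_cost(i, v): shape cost f_i of connection
    vector v.  Returns the chosen column per row and the path cost.
    """
    n, m, _ = points.shape
    best_cost, best_path = np.inf, None
    for k0 in range(m):                      # fix the first row's choice
        acc = np.full(m, np.inf)
        acc[k0] = g1 * h[0, k0]
        back = np.zeros((n, m), dtype=int)
        for i in range(1, n):
            new = np.empty(m)
            for k in range(m):
                trans = acc + g2 * np.array(
                    [shape_cost(i - 1, points[i, k] - points[i - 1, kp])
                     for kp in range(m)])
                back[i, k] = int(np.argmin(trans))
                new[k] = trans[back[i, k]] + g1 * h[i, k]
            acc = new
        # close the contour: connection from the last landmark back to p_1
        close = acc + g2 * np.array(
            [shape_cost(n - 1, points[0, k0] - points[n - 1, kp])
             for kp in range(m)])
        kn = int(np.argmin(close))
        if close[kn] < best_cost:
            path = [kn]
            for i in range(n - 1, 0, -1):
                path.append(int(back[i, path[-1]]))
            best_cost, best_path = float(close[kn]), path[::-1]
    return best_path, best_cost
```

This is O(n m^3) due to the outer loop over start candidates; the layered-graph formulation mentioned earlier achieves the same closed path.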

FIG. 13 shows the automatic segmentation of three thorax images according to the system outlined before, and compares it to the manual segmentation performed by an experienced radiologist. The 14 autoplaced landmarks on each lung field are shown, along with a continuous spline interpolation through them.  Computational Speedup Refinements
 The segmentation method comprising steps 1-3 can be applied to segment only part of the outline of one or more anatomic entities. To this purpose the rectangular search grid is only constructed for a subset of consecutive landmarks. The gray-level cost matrix will comprise a number of rows equal to the number of retained landmarks. The minimal cost path construction will minimize a cost function that only comprises a gray-level term and a shape term for the retained landmarks. The resulting segmentation traces only the outline part that is covered by the retained landmarks. The obvious advantage of partial segmentation is computational speed when one is only interested in part of the anatomic entity outline, or when only some specific landmarks need to be located to obtain specific measurements.
 Another algorithmic refinement to speed up the computation of the segmentation is to use a coarsetofine multiresolution approach. The graylevel appearance and shape models are constructed on a multiresolution representation for each landmark, introducing the spatial scale as a new dimension. Two alternatives are implemented to this purpose, both applying the segmentation concept over successively finer resolution levels.
 In one alternative, during segmentation, the number of points in a profile, the number of points in the search grid and the number of landmark points along the contour are decreased in the original image. The positional precision of the landmarks is then increased by successively increasing the resolution of the search grid in the vicinity of the previous solution (the total number of considered points in the profile and search grid may thus be held constant at each iteration), and applying the finer resolution graylevel and shape models.
 In another alternative, the multi-resolution approach may be implemented using a number of levels of a Gaussian and Laplacian pyramid and their associated derivative feature images. The initial position of the landmarks is determined on a coarse level, and is tracked to its final position on the finest level, using the intermediate positions of one level as the starting positions for the next level. Because the coarse resolution images contain fewer details, the search space will likely contain fewer false minima, which enables faster optimization at a given level and faster tracking of the landmarks towards their final position.
 Training the Segmentation Models
 A number of thoracic images is collected from an image repository, and presented sequentially in a Graphical User Interface. The following steps are performed to build the segmentation models:
 An experienced user manually segments the lung fields in each displayed image, by indicating, using left mouse clicks, a number of points along the contour that delineates the lung field. These points need not be spaced equidistantly along the lung field outline; the only requirement is that the point set collectively approximates the lung field to a sufficiently high degree. To assess the anatomic fit, a spline curve is continually interpolated through the manually positioned points determined so far, until the curve is closed from the last point to the first point by a right mouse click. Manual segmentation adjustments can be made by dragging an individual point towards a new position. The resulting outline is again assessed with respect to anatomical correctness.
 Next, a number of landmarks will be auto-placed on the manually segmented lung field contour. In order to achieve that identical landmarks on all images of the training set map onto each other, the user positions a small number of easily discernible landmarks; the other landmarks are obtained by equidistantly placing points on the lung field contour. In the case of lung field outlines, easily discernible landmarks are the topmost lung field point, the point denoting the costophrenic angle and the junction of the heart shadow and hemidiaphragm, a subtotal of three. Next the total number of landmarks is chosen, a suitable choice ranging for example from 14 to 40. In the 14-landmark case, the points p_{1}, p_{7} and p_{9} represent the three fixed landmarks; five points are evenly distributed between p_{1} and p_{7}, one point between p_{7} and p_{9}, and another five points between p_{9} and p_{1}. This step is performed separately on left and right lung fields.
 Next, parameters pertaining to the training phase are asked: (a) image size for training (and segmentation), a typical value is 256 or 512; (b) fraction of the shape variance to be explained by the Principal Components Analysis (PCA), a typical value being 0.9999; (c) number of points in the profile, either linear or circular, a typical value for the circular profile being 12, 8, 6, 4 or 3; (d) radius of the circular profile as a fraction of the image dimension, typical values being 0.04, 0.03, 0.02.
 Training of the graylevel appearance model: (a) decomposition of the graylevel function around the landmarks, i.e. computation of N feature images e.g. the local histogram moments of grayvalue derivatives as outlined before, and (b) collection of the gray value distribution of the linear or circular profiles at the landmarks p_{i}, i.e. computation of
g _{i,j},i=1 . . . n,j=1 . . . N and the covariance matrices S_{i,j},i=1 . . . n,j=1 . . . N (i.e. for each landmark i in feature image j). This step generates the statistical graylevel appearance knowledge.  Training of the shape model: (a) computation of geometric decomposition of the zeroth order (positional) information contained in the landmarks, i.e. computation of the mean shape
x and the t principal modes of variation (eigenshapes) arranged columnwise in a matrix b; (b) computation of the vector distributions (first order tangential information) contained in the connection sequence of landmarks, i.e. computation of the meanv _{i }and covariance S_{v} ^{ i }of v_{i}. This step generates the statistical shape knowledge.  Storage of the statistical graylevel appearance and shape knowledge, e.g. to be used to segment the lung fields in a new image according to the modelbased segmentation subsystems given above.
Application of the Modelbased Segmentation in 2D Images to the Computation of the CardioThoracic Ratio (CTR)  An application of the automated lung field segmentation is the computation of the Cardiothoracic Ratio (CTR). The CTR (
FIG. 6 ) is defined as the ratio of the transverse diameter of the heart to the internal diameter of the thorax (ID):$\mathrm{CTR}=\frac{\mathrm{MLD}+\mathrm{MRD}}{\mathrm{ID}}$
with MLD the maximum transverse diameter on the left side of the heart and MRD the maximum transverse diameter on the right side of the heart. This index is an important clinical parameter, which varies for an adult between 39% and 50% with an average of about 45%. A cardiothoracic index higher than 50% is considered abnormal. Possible causes are cardiac failure, pericardial effusion and left or right ventricular hypertrophy. It is possible to compute the cardiothoracic ratio automatically, using the automatic lung field segmentation as disclosed in the present invention.  Referring to
FIG. 8 , the characteristic point defining MRD is obtained by selecting the Cartesian leftmost point on the fitted contour segment between landmarks p1 and p9 of the right lung segmentation; the characteristic point defining MLD is obtained by selecting the Cartesian rightmost point on the fitted contour segment between landmarks p1 and p7 of the left lung segmentation. The sum MLD+MRD is obtained by subtracting the column coordinates of these characteristic points and taking the absolute value. Similarly, the ID is obtained by selecting the Cartesian leftmost point on the fitted contour segment between landmarks p1 and p7 of the right lung segmentation and the Cartesian rightmost point on the fitted contour segment between landmarks p1 and p9 of the left lung segmentation, subtracting the column coordinates of these characteristic points and taking the absolute value. 
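The characteristic-point computation reduces to taking extreme column coordinates on the relevant fitted contour segments; a sketch (our interface, taking the four contour segments as (k, 2) point arrays in (column, row) order, per the landmark ranges given above) is:

```python
import numpy as np

def cardiothoracic_ratio(seg_mrd, seg_mld, seg_id_right, seg_id_left):
    """CTR = (MLD + MRD) / ID from fitted contour segments.

    seg_mrd:      segment on which the MRD characteristic point is the
                  Cartesian leftmost point (right lung, p1-p9).
    seg_mld:      segment whose rightmost point defines MLD (left lung, p1-p7).
    seg_id_right: segment whose leftmost point bounds ID (right lung, p1-p7).
    seg_id_left:  segment whose rightmost point bounds ID (left lung, p1-p9).
    Distances are absolute differences of column coordinates.
    """
    mld_plus_mrd = abs(seg_mld[:, 0].max() - seg_mrd[:, 0].min())
    inner_diameter = abs(seg_id_left[:, 0].max() - seg_id_right[:, 0].min())
    return mld_plus_mrd / inner_diameter
```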
FIG. 14 shows the computation of the characteristic points on the segmented lung fields, and the computation of the cardiothoracic ratio using (a) the manual lung field segmentations resulting in a CTR_{man}=0.47; (b) using the automatically derived lung field segmentations according to the present invention resulting in a CTR_{automatic}=0.45.  Application of the Modelbased Segmentation and Measurement System to Other Body Parts
 The spatial extent of the search grid for each of the landmarks in the lung field segmentation is derived on the basis of the positions of all similar landmarks in a set of training thorax images. The concept of using a search grid for constraining the candidate positions of a given landmark can be extended to other body parts that may have wider positional, rotational and size variation in the image than a lung field in a thorax image has, and that, as opposed to lung fields in a thorax image, do not occupy the full image area.
 To allow for such positional, rotational and size variation of the body part, the concept of anchor point mapping of the search grids, using methods disclosed in European patent application 04076454, entitled “Method for automatically mapping of geometric objects in digital medical images”, may be applied in conjunction with the present invention. The refined method then becomes particularly interesting for segmenting bony body parts in orthopedic radiographs, because these are characterized by a relatively constant shape in the image and can be anchored to well-manifested bony landmarks in the image.
 For example, in a pelvis, hip or full leg examination, the femoral outlines can be anchored to well known landmarks such as the tip of the greater trochanter, the knee center and the femoral head center. These anchor points are used to establish an affine transformation between model anchor points and the associated anchor points selected in the actual image. Because a search grid has a collection of candidate points arranged in a rectangular lattice around a landmark point on the segmentation model contour in the model image, each of the constituent candidate points is mapped in turn in the actual image by applying the transformation to the model points' coordinates. In this way, the search grid for a given landmark point is reconstructed around the most likely position of the point. The optimization process then proceeds in the manner disclosed above by optimizing a combined gray value and shape cost for a path through a selected combination of candidate locations, one on each search grid. The path with most optimal cost is the final segmentation of the body part. Examples of other bones are hand and upper extremity bones, foot and lower extremity bones, pelvis and spinal vertebrae and structures.
 Other body parts that are amenable to this type of landmark-based segmentation are soft-tissue organs in 3D CT or MR images. On the one hand, a slice-by-slice approach may be taken, which determines the segmented contour of the anatomy on each slice and combines the results of all slices into a 3D segmented surface. On the other hand, a fully 3D approach, extending the current concept to a 3D volume, may be adopted, which builds and applies models for 3D landmarks confined within 3D search grids. Kidneys, lungs, heart, liver, stomach, spleen, prostate and brain structures are examples of organs to which the method of the present invention is applicable.
 In all application examples mentioned, measurement points may be based on the position of the landmarks on the segmentation or on a combination of several landmarks (such as a midpoint of two juxtaposed points on either side of the cortex of an elongated bone, a pair of which determines the bone's anatomical axis), and the landmark points may be subsequently fed into a measurement system such as disclosed in EP A 1349098.
Claims (22)
1. A method of segmenting an anatomic entity in a digital medical image comprising the following steps:
defining, for each of a number of landmarks in said image, an initial position of said landmark,
sampling a neighborhood around said initial position, said neighborhood comprising a number of candidate locations of said landmark,
associating a cost with each of said candidate locations,
optimizing a cost function expressing a weighted sum of overall gray level cost and overall shape cost for all candidate locations,
defining a segmented anatomic entity as a path through a selected combination of said candidate locations for which combination said cost function is optimized,
wherein said cost associated with a candidate location is the total gray level cost obtained by summing gray level costs of said candidate location in every feature image of a set of feature images calculated for said medical image,
said gray level costs being expressed as the Mahalanobis distance, defined as the distance between the gray value profile at said candidate location and a corresponding mean profile, weighted by the inverse of a corresponding covariance matrix, said mean profile and covariance matrix being retrieved from a gray value model associated with said anatomic entity.
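The gray-level cost of claim 1 can be sketched as follows, assuming NumPy; the function and variable names are illustrative, not taken from the patent text:

```python
import numpy as np

def mahalanobis_cost(profile, mean_profile, cov):
    """Gray-level cost of one candidate location in one feature image:
    the squared Mahalanobis distance between the sampled gray value
    profile and the model's mean profile, weighted by the inverse of
    the model's covariance matrix."""
    d = profile - mean_profile
    return float(d @ np.linalg.inv(cov) @ d)

def total_gray_cost(profiles, model):
    """Total gray-level cost of a candidate location: the sum of its
    per-feature-image Mahalanobis costs. `profiles` is a list with one
    sampled profile per feature image; `model` holds the matching
    (mean_profile, covariance) pair for each feature image."""
    return sum(mahalanobis_cost(p, m, c) for p, (m, c) in zip(profiles, model))
```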
2. A method according to claim 1 wherein said gray value model is obtained by the steps of
sampling a manually segmented outline of said anatomic entity at a number of landmark points;
sampling a number of points in a neighborhood of each of said landmark points;
for each landmark point arranging sampled points in a neighborhood around a landmark point in a profile;
computing at least one feature image;
computing a mean profile for each landmark point and for each feature image;
computing a covariance matrix of the profiles for each landmark point and each feature image;
identifying said mean profile and covariance matrix as gray value model of said anatomic entity.
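The model-construction steps of claim 2 can be illustrated for a single landmark in a single feature image as follows; this is a minimal sketch assuming NumPy, with hypothetical names:

```python
import numpy as np

def build_gray_value_model(training_profiles):
    """Gray value model for one landmark in one feature image: the mean
    profile and covariance matrix of the profiles sampled around that
    landmark across a set of training images.
    training_profiles: (num_images, profile_length) array."""
    mean_profile = training_profiles.mean(axis=0)
    # rowvar=False: rows are observations (training images), columns
    # are sample positions along the profile.
    cov = np.cov(training_profiles, rowvar=False)
    return mean_profile, cov
```

In practice one such (mean, covariance) pair is stored per landmark and per feature image, and later retrieved to evaluate the Mahalanobis cost of candidate locations.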
3. A method according to claim 1 wherein a feature image is a derivative image at a predefined scale σ or an image representation comprising mathematical moments of the local histogram of said image or derivative image in a window with width α around the location of a landmark.
4. A method according to claim 1 wherein said overall gray level cost is a combination of all individual gray level costs of a profile in a number of feature images whereby said individual cost is the Mahalanobis distance defined as the distance between said profile and the mean profile of that feature weighted with the inverse of the covariance matrix, said mean profile and covariance matrix being retrieved from a gray value model of said anatomic entity.
5. A method according to claim 4 wherein said gray value model is obtained by the steps of
sampling a manually segmented outline of said anatomic entity at a number of landmark points;
sampling a number of points in a neighborhood of each of said landmark points;
for each landmark point arranging sampled points in a neighborhood around a landmark point in a profile;
computing at least one feature image;
computing a mean profile for each landmark point and for each feature image;
computing a covariance matrix of the profiles for each landmark point and each feature image;
identifying said mean profile and covariance matrix as gray value model of said anatomic entity.
6. A method according to claim 1 wherein said overall shape cost is a combination of all individual costs of connection vectors, said connection vectors connecting successive landmarks, for all landmarks whereby said individual shape cost is expressed by the Mahalanobis distance defined as the distance of a connection vector between two successive landmarks to the mean connection vector weighted with the inverse of the covariance matrix, said mean connection vector and covariance matrix being retrieved from a shape model of said anatomic entity.
7. A method according to claim 6 wherein said shape model is obtained by the steps of
sampling a manually segmented outline of said anatomic entity at a number of landmark points;
computing connection vectors or connection vector differences between successive landmark points;
computing a mean connection vector or mean connection vector difference for successive pairs of landmark points;
computing a covariance matrix of connection vectors or connection vector differences;
identifying said mean connection vector and covariance matrix as a geometric model of said anatomic entity.
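The shape-model construction of claim 7 can be sketched for connection vectors on a closed contour as follows, assuming NumPy; names and array layout are illustrative:

```python
import numpy as np

def build_shape_model(training_shapes):
    """Shape model: for each pair of successive landmarks, the mean
    connection vector and the covariance matrix of the connection
    vectors over the training shapes.
    training_shapes: (num_images, num_landmarks, 2) landmark coords."""
    # Connection vectors between successive landmarks; np.roll closes
    # the contour by connecting the last landmark back to the first.
    vecs = np.roll(training_shapes, -1, axis=1) - training_shapes
    mean_vecs = vecs.mean(axis=0)                  # (num_landmarks, 2)
    covs = [np.cov(vecs[:, j, :], rowvar=False) for j in range(vecs.shape[1])]
    return mean_vecs, covs
```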
8. A method according to claim 1 wherein said cost function is optimized by dynamic programming.
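The dynamic-programming optimization of claim 8 can be sketched as follows. This open-contour variant with a user-supplied shape-cost callback is a simplified illustration, not the patent's implementation; a closed contour would add a wrap-around term between the last and first landmarks:

```python
import numpy as np

def dp_optimal_path(gray_costs, shape_cost):
    """Select one candidate per landmark minimizing the summed gray
    level and shape cost by dynamic programming. gray_costs[i][k] is
    the gray level cost of candidate k for landmark i; shape_cost(i,
    k, l) is the shape cost of connecting candidate k of landmark i
    to candidate l of landmark i + 1 (all landmarks are assumed to
    have the same number of candidates here)."""
    n, m = len(gray_costs), len(gray_costs[0])
    cost = np.asarray(gray_costs[0], dtype=float)  # best cost ending at each candidate
    back = np.zeros((n, m), dtype=int)             # backpointers for path recovery
    for i in range(1, n):
        trans = np.array([[cost[k] + shape_cost(i - 1, k, l)
                           for k in range(m)] for l in range(m)])
        back[i] = trans.argmin(axis=1)
        cost = trans.min(axis=1) + np.asarray(gray_costs[i], dtype=float)
    path = [int(cost.argmin())]
    for i in range(n - 1, 0, -1):  # backtrack from the last landmark
        path.append(int(back[i][path[-1]]))
    return path[::-1], float(cost.min())
```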
9. A method according to claim 1 wherein said number of candidate locations is reduced by selecting those with minimal total gray value cost, said total gray value cost being the sum over all feature images of all gray value costs for the candidate position in said neighborhood, said gray value cost expressed as a Mahalanobis distance defined as the distance of a gray value profile at the candidate location to a corresponding mean profile weighted with the inverse of a covariance matrix, said mean profile and covariance matrix retrieved from a gray value model associated with said anatomic entity.
10. A method according to claim 1 wherein said neighborhood comprises a rectangular grid of sampling points.
11. A method according to claim 1 wherein said neighborhood comprises a circular profile.
12. A method according to claim 1 wherein said gray value model and said geometric model are constructed from and applied to a multiresolution representation of the image.
13. A method of measuring an anatomic entity in a digital medical image comprising the steps of
calculating a segmentation of said anatomic entity using a method according to claim 1,
calculating or selecting characteristic points based on the segmentation of said anatomic entity,
calculating measurement objects based on said characteristic points,
deriving measurements of said anatomic entity on the basis of said measurement objects.
14. A method of measuring an anatomic entity in a digital medical image comprising the steps of
calculating a segmentation of said anatomic entity using a method according to claim 1,
identifying said landmarks as measurement points,
calculating measurement objects based on said measurement points,
deriving measurements of said anatomic entity on the basis of said measurement objects.
15. A computer program product embodied in a computer readable medium for performing the steps of the method of claim 1 .
16. A computer readable medium comprising computer executable program code for performing the steps of the method of claim 1 .
17. A computer program product embodied in a computer readable medium for performing the steps of the method of claim 13 .
18. A computer readable medium comprising computer executable program code for performing the steps of the method of claim 13 .
19. A computer program product embodied in a computer readable medium for performing the steps of the method of claim 14 .
20. A computer readable medium comprising computer executable program code for performing the steps of the method of claim 14 .
21. A method of segmenting an anatomic entity in a digital medical image comprising the following steps:
defining, for each of a number of landmarks in said image, an initial position of said landmark,
sampling a neighborhood around said initial position, said neighborhood comprising a number of candidate locations of said landmark,
associating a cost with each of said candidate locations,
optimizing a cost function expressing a weighted sum of overall gray level cost and overall shape cost for all candidate locations,
defining a segmented anatomic entity as a path through a selected combination of said candidate locations for which combination said cost function is optimized,
wherein said cost associated with a candidate location is the total gray level cost obtained by summing gray level costs of said candidate location in every feature image of a set of feature images calculated for said medical image,
said gray level costs being expressed as the Mahalanobis distance, defined as the distance between the gray value profile at said candidate location and a corresponding mean profile, weighted by the inverse of a corresponding covariance matrix, said mean profile and covariance matrix being retrieved from a gray value model associated with said anatomic entity,
and wherein said neighborhood comprises a circular profile.
22. A method of segmenting an anatomic entity in a digital medical image comprising the following steps:
defining, for each of a number of landmarks in said image, an initial position of said landmark,
sampling a neighborhood around said initial position, said neighborhood comprising a number of candidate locations of said landmark,
associating a cost with each of said candidate locations,
optimizing a cost function expressing a weighted sum of overall gray level cost and overall shape cost for all candidate locations,
defining a segmented anatomic entity as a path through a selected combination of said candidate locations for which combination said cost function is optimized, and wherein
said overall shape cost is a combination of all individual costs of connection vectors, said connection vectors connecting successive landmarks, for all landmarks whereby said individual shape cost is expressed by the Mahalanobis distance defined as the distance of a connection vector between two successive landmarks to the mean connection vector weighted with the inverse of the covariance matrix, said mean connection vector and covariance matrix being retrieved from a shape model of said anatomic entity.
Priority Applications (6)
Application Number  Priority Date  Filing Date  Title 

EP05107907A EP1760659B1 (en)  20050830  20050830  Method of segmenting anatomic entities in digital medical images 
EP05107903A EP1760658B1 (en)  20050830  20050830  Method of constructing a gray value model of an anatomic entity in a digital medical image. 
EP05107907.7  20050830  
EP05107903.6  20050830  
US71587805P true  20050909  20050909  
US11/465,724 US20070047790A1 (en)  20050830  20060818  Method of Segmenting Anatomic Entities in Digital Medical Images 
Applications Claiming Priority (3)
Application Number  Priority Date  Filing Date  Title 

US11/465,724 US20070047790A1 (en)  20050830  20060818  Method of Segmenting Anatomic Entities in Digital Medical Images 
US12/689,329 US7916917B2 (en)  20050830  20100119  Method of segmenting anatomic entities in digital medical images 
US12/716,771 US8023708B2 (en)  20050830  20100303  Method of segmenting anatomic entities in digital medical images 
Related Child Applications (2)
Application Number  Title  Priority Date  Filing Date 

US12/689,329 Division US7916917B2 (en)  20050830  20100119  Method of segmenting anatomic entities in digital medical images 
US12/716,771 Continuation US8023708B2 (en)  20050830  20100303  Method of segmenting anatomic entities in digital medical images 
Publications (1)
Publication Number  Publication Date 

US20070047790A1 true US20070047790A1 (en)  20070301 
Family
ID=37804139
Family Applications (3)
Application Number  Title  Priority Date  Filing Date 

US11/465,724 Granted US20070047790A1 (en)  20050830  20060818  Method of Segmenting Anatomic Entities in Digital Medical Images 
US12/689,329 Active US7916917B2 (en)  20050830  20100119  Method of segmenting anatomic entities in digital medical images 
US12/716,771 Active US8023708B2 (en)  20050830  20100303  Method of segmenting anatomic entities in digital medical images 
Family Applications After (2)
Application Number  Title  Priority Date  Filing Date 

US12/689,329 Active US7916917B2 (en)  20050830  20100119  Method of segmenting anatomic entities in digital medical images 
US12/716,771 Active US8023708B2 (en)  20050830  20100303  Method of segmenting anatomic entities in digital medical images 
Country Status (1)
Country  Link 

US (3)  US20070047790A1 (en) 
Families Citing this family (18)
Publication number  Priority date  Publication date  Assignee  Title 

EP2038839A2 (en) *  20060628  20090325  Philips Electronics N.V.  Variable resolution model based image segmentation 
EP2006803A1 (en) *  20070619  20081224  Agfa HealthCare NV  Method of segmenting anatomic entities in 3D digital medical images 
EP2176799B1 (en) *  20070801  20190911  Koninklijke Philips N.V.  Accessing medical image databases using medically relevant terms 
US8253725B2 (en) *  20071228  20120828  St. Jude Medical, Atrial Fibrillation Division, Inc.  Method and system for generating surface models of geometric structures 
WO2009112893A1 (en) *  20080313  20090917  Biospace Lab  Method and installation for obtaining an image of a sample emitting a light signal from within its inside 
US10210179B2 (en) *  20081118  20190219  Excalibur Ip, Llc  Dynamic feature weighting 
US8594398B2 (en) *  20090626  20131126  International Business Machines Corporation  Systems and methods for cardiac view recognition and disease recognition 
US8437521B2 (en) *  20090910  20130507  Siemens Medical Solutions Usa, Inc.  Systems and methods for automatic vertebra edge detection, segmentation and identification in 3D imaging 
CN102113897B (en) *  20091231  20141015  深圳迈瑞生物医疗电子股份有限公司  Method and device for extracting targetofinterest from image and method and device for measuring targetofinterest in image 
WO2012109658A2 (en) *  20110211  20120816  Emory University  Systems, methods and computer readable storage mediums storing instructions for segmentation of medical images 
DE102011078405B4 (en) *  20110630  20130321  Siemens Aktiengesellschaft  Method for endoscopy with magnetically guided endoscope capsule and device therefor 
JP5395868B2 (en) *  20110926  20140122  富士フイルム株式会社  Image processing apparatus and method, and program 
CN102663762B (en) *  20120425  20151209  天津大学  The dividing method of symmetrical organ in medical image 
US8744165B2 (en) *  20120719  20140603  Sony Corporation  Method and apparatus for stain separation in digital pathology images 
JP6426608B2 (en) *  20120830  20181121  ザ リージェンツ オブ ユニバーシティー オブ ミシガン  Analytical Morphomics: A Fast Medical Image Analysis Method 
JP6639123B2 (en) *  20150706  20200205  キヤノン株式会社  Image processing apparatus, image processing method, and program 
EP3136290A1 (en)  20150828  20170301  Thomson Licensing  Method and device for determining the shape of an object represented in an image, corresponding computer program product and computer readable medium 
CN108846399A (en) *  20180410  20181120  阿里巴巴集团控股有限公司  A kind of method and device of image procossing 
Family Cites Families (1)
Publication number  Priority date  Publication date  Assignee  Title 

EP1923809A3 (en)  20020327  20101229  Agfa HealthCare NV  Method of performing geometric measurements on digital radiological images 

2006
 20060818 US US11/465,724 patent/US20070047790A1/en active Granted

2010
 20100119 US US12/689,329 patent/US7916917B2/en active Active
 20100303 US US12/716,771 patent/US8023708B2/en active Active
Cited By (27)
Publication number  Priority date  Publication date  Assignee  Title 

US7650023B2 (en) *  20040224  20100119  Siemens Aktiengesellschaft  Method for filtering tomographic 3D images after completed reconstruction of volume data 
US20050190984A1 (en) *  20040224  20050901  Daniel Fischer  Method for filtering tomographic 3D images after completed reconstruction of volume data 
US20070047789A1 (en) *  20050830  20070301  AgfaGevaert N.V.  Method of Constructing Gray Value or Geometric Models of Anatomic Entity in Medical Image 
US8165359B2 (en) *  20050830  20120424  Agfa Healthcare N.V.  Method of constructing gray value or geometric models of anatomic entity in medical image 
US20070173744A1 (en) *  20050913  20070726  Siemens Corporate Research Inc  System and method for detecting intervertebral disc alignment using vertebrae segmentation 
US7804986B2 (en) *  20050913  20100928  Siemens Medical Solutions Usa, Inc.  System and method for detecting intervertebral disc alignment using vertebrae segmentation 
US20070223795A1 (en) *  20051019  20070927  Siemens Corporate Research, Inc.  System and Method For Tracing Rib Posterior In Chest CT Volumes 
US7949171B2 (en) *  20051019  20110524  Siemens Corporation  System and method for tracing rib posterior in chest CT volumes 
US7408553B1 (en) *  20051215  20080805  Nvidia Corporation  Inside testing for paths 
US20120134568A1 (en) *  20061219  20120531  Daniel Russakoff  Method and apparatus of using probabilistic atlas for feature removal/positioning 
US8547395B1 (en)  20061220  20131001  Nvidia Corporation  Writing coverage information to a framebuffer in a computer graphics system 
US20100177946A1 (en) *  20070518  20100715  Marleen De Bruijne  Semiautomatic contour detection 
EP2006802A1 (en) *  20070619  20081224  Agfa HealthCare NV  Method of constructing a grey value model and/or a geometric model of an anatomic entity in a 3D digital medical image 
WO2008155255A1 (en) *  20070619  20081224  Agfa Healthcare  Method of constructing a gray value model and/or a geometric model of an anatomic entity in a 3d digital medical image 
US8004522B1 (en)  20070807  20110823  Nvidia Corporation  Using coverage information in computer graphics 
US8325203B1 (en)  20070815  20121204  Nvidia Corporation  Optimal caching for virtual coverage antialiasing 
US20090080773A1 (en) *  20070920  20090326  Mark Shaw  Image segmentation using dynamic color gradient threshold, texture, and multimodal merging 
US20100056919A1 (en) *  20080829  20100304  Yasuhiko Abe  Ultrasonic diagnosis apparatus, image processing apparatus, and image processing method 
US9138200B2 (en) *  20080829  20150922  Kabushiki Kaisha Toshiba  Ultrasonic diagnosis method and apparatus image processing for calculating rotational angles in a space by three-dimensional position tracking 
US20100098313A1 (en) *  20081016  20100422  Riverain Medical Group, Llc  Ribcage segmentation 
US8340378B2 (en) *  20081016  20121225  Riverain Medical Group, Llc  Ribcage segmentation 
US20160364880A1 (en) *  20140304  20161215  Ucl Business Plc  Apparatus and method for generating and using a subjectspecific statistical motion model 
US10115201B2 (en) *  20140304  20181030  Ucl Business Plc  Apparatus and method for generating and using a subjectspecific statistical motion model 
US20150363963A1 (en) *  20140612  20151217  Siemens Medical Solutions Usa, Inc.  Visualization With Anatomical Intelligence 
US10460508B2 (en) *  20140612  20191029  Siemens Healthcare Gmbh  Visualization with anatomical intelligence 
US11003898B2 (en) *  20160725  20210511  BGR Technologies Pty Limited  Creating videos with facial expressions 
US10984585B2 (en) *  20171213  20210420  Covidien Lp  Systems, methods, and computerreadable media for automatic computed tomography to computed tomography registration 
Also Published As
Publication number  Publication date 

US20110026785A1 (en)  20110203 
US20100119134A1 (en)  20100513 
US8023708B2 (en)  20110920 
US7916917B2 (en)  20110329 
Legal Events
Date  Code  Title  Description 

AS  Assignment 
Owner name: AGFAGEVAERT N.V., BELGIUM Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DEWAELE, PIET;REEL/FRAME:018300/0755 Effective date: 20060920 

STCB  Information on status: application discontinuation 
Free format text: EXPRESSLY ABANDONED  DURING PUBLICATION PROCESS 