US20120229445A1 — System and method of reducing transmission bandwidth required for visibility-event streaming of interactive and non-interactive content (Google Patents)
 Publication number
 US20120229445A1 (application Ser. No. 13/420,436)
 Authority
 US
 United States
 Prior art keywords
 mesh
 viewcell
 polygon
 edge
 silhouette
 Prior art date
 Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
 Abandoned
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T15/00—3D [Three Dimensional] image rendering
 G06T15/10—Geometric effects
 G06T15/20—Perspective computation

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T15/00—3D [Three Dimensional] image rendering
 G06T15/005—General purpose rendering architectures

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T15/00—3D [Three Dimensional] image rendering
 G06T15/04—Texture mapping

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T15/00—3D [Three Dimensional] image rendering
 G06T15/10—Geometric effects
 G06T15/40—Hidden part removal

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T15/00—3D [Three Dimensional] image rendering
 G06T15/10—Geometric effects
 G06T15/40—Hidden part removal
 G06T15/405—Hidden part removal using Z-buffer

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T15/00—3D [Three Dimensional] image rendering
 G06T15/50—Lighting effects
 G06T15/60—Shadow generation

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
 G06T17/20—Finite element generation, e.g. wireframe surface description, tesselation

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T19/00—Manipulating 3D models or images for computer graphics
 G06T19/003—Navigation within 3D models or images

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T2200/00—Indexing scheme for image data processing or generation, in general
 G06T2200/16—Indexing scheme for image data processing or generation, in general involving adaptation to the client's capabilities

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T2200/00—Indexing scheme for image data processing or generation, in general
 G06T2200/28—Indexing scheme for image data processing or generation, in general involving image processing hardware

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T2200/00—Indexing scheme for image data processing or generation, in general
 G06T2200/36—Review paper; Tutorial; Survey
Abstract
In an exemplary embodiment, a computer-implemented method determines a set of mesh polygons, or fragments of the mesh polygons, visible from a navigation cell. The method includes determining a composite view frustum containing predetermined view frusta and determining mesh polygons contained in the composite view frustum. The method includes determining at least one supporting polygon between the navigation cell and the contained mesh polygons. The method further includes constructing at least one wedge from the at least one supporting polygon, the at least one wedge extending away from the navigation cell beyond at least the contained mesh polygons. The method includes determining one or more intersections of the at least one wedge with the contained mesh polygons. The method also includes determining the set of the contained mesh polygons or fragments of the contained mesh polygons visible from the navigation cell using the determined one or more intersections.
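The pipeline in the abstract (supporting polygons, wedges, wedge–mesh intersections, visible set) can be illustrated with a deliberately simplified 2-D sketch: the navigation cell and the occluder are line segments, supporting polygons collapse to supporting lines, and the umbra wedge is the region behind the occluder bounded by those lines. All names and the geometry below are illustrative assumptions, not taken from the patent's own implementation:

```python
def cross(ox, oy, ax, ay, bx, by):
    """2-D cross product of vectors (A - O) and (B - O); its sign tells
    which side of the directed line O->A the point B lies on."""
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def in_umbra(va, vb, oa, ob, p):
    """Conservative test: is point p hidden by occluder segment oa-ob
    from *every* viewpoint on the navigation-cell segment va-vb?

    Assumes the occluder lies wholly on one side of the cell.  The two
    supporting lines (the 2-D analogue of supporting polygons) each join
    a cell endpoint to the opposite occluder endpoint; the umbra is the
    region beyond the occluder and inside both supporting wedges."""
    mid = ((va[0] + vb[0]) / 2.0, (va[1] + vb[1]) / 2.0)
    # p must lie on the far side of the occluder's line from the cell ...
    behind = cross(*oa, *ob, *p) * cross(*oa, *ob, *mid) < 0
    # ... and on the occluder side of both supporting lines.
    s1 = cross(*va, *ob, *p) * cross(*va, *ob, *oa) > 0
    s2 = cross(*vb, *oa, *p) * cross(*vb, *oa, *ob) > 0
    return behind and s1 and s2

def visible_set(va, vb, occluder, targets):
    """Toy PVS: keep only the target points not inside the umbra."""
    oa, ob = occluder
    return [t for t in targets if not in_umbra(va, vb, oa, ob, t)]
```

For a cell from (0, 0) to (2, 0) and an occluder from (0, 5) to (2, 5), a point at (1, 8) directly behind the occluder is in the umbra, while (4, 8) is reachable from part of the cell and stays in the visible set.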
Description
 This application claims the benefit of the earlier filing date of PCT patent application number PCT/US2011/042309 entitled “System and Method of From-Region Visibility Determination and Delta-PVS Based Content Streaming Using Conservative Linearized Umbral Event Surfaces” and filed on Jun. 29, 2011, which claims the benefit of the earlier filing date of U.S. Provisional Application 61/360,283, filed on Jun. 30, 2010, the entirety of each of which is incorporated herein by reference. This application claims the benefit of the earlier filing date of PCT patent application number PCT/US2011/051403 entitled “System and Method of Delivering and Controlling Streaming Interactive Media Comprising Predetermined Packets of Geometric, Texture, Lighting and Other Data Which are Rendered on a Receiving Device” and filed on Sep. 13, 2011, which claims the benefit of the earlier filing date of U.S. Provisional Application 61/382,056 entitled “System and Method of Delivering and Controlling Streaming Interactive Media Comprising Predetermined Packets of Geometric, Texture, Lighting and Other Data Which are Rendered on a Receiving Device” and filed on Sep. 13, 2010, the entirety of which is incorporated herein by reference. PCT patent application number PCT/US2011/051403 further claims the benefit of the earlier filing date of U.S. Provisional Application 61/384,284 entitled “System and Method of Recording and Using Clickable Advertisements Delivered as Streaming Interactive Media” and filed on Sep. 19, 2010, the entirety of which is incorporated herein by reference. This application further claims the benefit of the earlier filing date of U.S. Provisional Application 61/452,330 entitled “System and Method of Controlling Visibility-Based Geometry and Texture Streaming for Interactive Content Delivery” and filed on Mar. 14, 2011, the entirety of which is incorporated herein by reference. This application further claims the benefit of the earlier filing date of U.S. Provisional Application 61/474,491 entitled “System and Method of Protecting Game Engine Data Formats and Visibility Event Codec Formats Employing an Application Programming Interface Between the Game Engine and the Codec” and filed on Apr. 12, 2011, the entirety of which is incorporated herein by reference. This application also claims the benefit of the earlier filing date of U.S. Provisional Application 61/476,819 entitled “System and Method of Delivering Targeted, Clickable, Opt-Out or Opt-In Advertising as a Unique, Visibility Event Stream for Games and Streaming Interactive Media” and filed on Apr. 19, 2011, the entirety of which is incorporated herein by reference.
 1. Field of the Invention
 This invention relates to a method and system for delivering interactive content as a visibility event stream comprising renderable 3D graphics information.
 2. Description of Background
 The method of controlling a visibility-event data stream delivering interactive content, which may deliver a fully interactive game experience or, alternatively, a video-like experience in which interactivity is not required but available to the user, is described in PCT patent application number PCT/US2011/051403 entitled “System and Method of Delivering and Controlling Streaming Interactive Media Comprising Predetermined Packets of Geometric, Texture, Lighting and Other Data Which are Rendered on a Receiving Device”.
 The present embodiments specify methods of reducing the bandwidth required to deliver a visibility event data stream. In one technique, the bandwidth requirement is reduced by constructing and employing non-omnidirectional visibility event packets. These non-omnidirectional visibility event packets are based on a from-region visibility determination that encodes surfaces visible from a set of view frusta allowed within each navigation cell. This directional restriction of visibility can be used to encode visibility event packets corresponding to a prescripted camera motion path through a modeled environment. Alternatively, non-omnidirectional visibility event packets can be used to encode and deliver a visibility event data stream supporting fully interactive control of viewpoint position and limited interactive control of the view direction vector during navigation within a modeled environment.
 In another technique, the bandwidth required to support a visibility event data stream is reduced by employing a method of encoding the PVS of child viewcells using “encounter numbers” which define the visible limits of a deterministic traversal of a polygon mesh. These numbers can be efficiently transmitted to a client using run-length encoding and can be used by the client to generate multiple child potentially visible sets from a single parent PVS.
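As a rough illustration of how run-length encoding keeps such integer labels compact for transmission, the sketch below is a generic RLE codec, not the specific encounter-number encoder of the embodiments:

```python
def rle_encode(values):
    """Run-length encode a sequence of integers as (value, count) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([v, 1])     # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original sequence."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

Because encounter numbers along a deterministic mesh traversal tend to repeat over contiguous runs of polygons, the encoded run list can be far shorter than the raw sequence.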
 In another technique, the bandwidth required to send and receive the graphical description of moving objects as part of a visibility event data stream is decreased by sending only those moving objects that become potentially newly visible to a client-user.
 Real-time 3D graphics display hardware has become increasingly powerful and affordable. The availability of this hardware has enabled computer and game-console applications to routinely display scenes containing tens of thousands of graphic primitives in each frame. With few exceptions, these hardware display systems employ a Z-buffer based hidden surface removal algorithm.
 The Z-buffer hidden-surface removal algorithm solves visibility per-pixel by computing the Z (depth) value of each rasterized pixel (or fragment) of every primitive in the view frustum. During rasterization, the Z value of the current fragment is compared to the existing Z value in the frame buffer, and the color of the fragment is written to the frame buffer only if it has a lower Z value than the existing value in the Z-buffer.
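The per-fragment depth test just described can be sketched as follows (a minimal illustration; the buffer layout and color handling are assumptions, not any particular hardware's design):

```python
def resolve_fragment(zbuffer, framebuffer, x, y, z, color):
    """Classic Z-buffer hidden-surface removal for one rasterized fragment:
    the fragment's color is written only if its depth is nearer than the
    depth already stored at that pixel."""
    if z < zbuffer[y][x]:
        zbuffer[y][x] = z
        framebuffer[y][x] = color
        return True   # fragment is the nearest seen so far at this pixel
    return False      # fragment is hidden at this pixel
```

Initializing every Z-buffer entry to infinity guarantees the first fragment at each pixel is accepted; all later, farther fragments at that pixel are rejected.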
 While this approach provides acceptable performance for relatively simple scenes, it can fail to provide adequate real-time performance for complex, realistic scenes. Such scenes tend to have high depth complexity, which typically forces each element of the Z-buffer to be compared multiple times during the rendering of a single frame. Essentially all hidden surface samples that lie within the view frustum must be Z-rasterized and compared to the Z-buffer values to find the closest visible samples.
 In some Z-buffer implementations the rasterizer often performs not only the Z determination and Z-buffer compare for all hidden fragments but also computes the complete rendering of hidden fragments, writing the resulting color to the frame buffer only if the corresponding Z value was closer than the existing Z-buffer value. For scenes of even modest depth complexity, this can result in wasted computation and diminished performance.
 Other Z-buffer implementations include some type of “early-Z” rejection in which the color value of the fragment is not computed if its Z value is greater than the corresponding Z-buffer value. This can reduce rendering of hidden fragments but is only maximally effective if the graphic primitives are rendered in a front-to-back order.
 Another improvement to the hardware Z-buffer is the integration of certain elements of the “Hierarchical Z-Buffer” algorithm (Greene et al. 1993) (Greene, N., Kass, M., Miller, G., “Hierarchical Z-Buffer Visibility,” Proceedings of ACM SIGGRAPH 1993, pp. 231-238, the entirety of which is incorporated herein by reference). This algorithm employs a hierarchical representation of the Z-buffer to perform rapid visibility rejection tests. The complete hierarchical Z-buffer algorithm has proven difficult to implement in hardware, although basic versions of the hierarchical Z-buffer pyramid itself have been implemented in some systems (e.g., Nvidia, ATI). In these implementations a low-resolution version of the Z-buffer is maintained in memory that is local to the individual rasterizer units. These local representations are used in the previously described “early-Z” rejection test. If an individual fragment can be rejected by comparing it to the low-resolution, locally stored Z-buffer element, then a slower access of the high-resolution (non-local) Z-buffer is avoided.
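A minimal software sketch of this coarse-grid early-Z test is given below. The brute-force recomputation of each tile's maximum depth is an assumption standing in for the hardware's incremental pyramid maintenance:

```python
class CoarseZ:
    """A low-resolution max-depth grid used for early-Z rejection, in the
    spirit of the basic hierarchical Z-buffer variants described above."""

    def __init__(self, w, h, tile):
        self.tile = tile
        self.full = [[float('inf')] * w for _ in range(h)]
        self.coarse = [[float('inf')] * ((w + tile - 1) // tile)
                       for _ in range((h + tile - 1) // tile)]

    def test_and_write(self, x, y, z):
        tx, ty = x // self.tile, y // self.tile
        # Coarse reject: the fragment is farther than the farthest depth in
        # the whole tile, so the full-resolution buffer need not be read.
        if z >= self.coarse[ty][tx]:
            return False
        if z >= self.full[y][x]:
            return False
        self.full[y][x] = z
        # Conservatively refresh the tile's max depth.
        x0, y0 = tx * self.tile, ty * self.tile
        self.coarse[ty][tx] = max(
            self.full[yy][xx]
            for yy in range(y0, min(y0 + self.tile, len(self.full)))
            for xx in range(x0, min(x0 + self.tile, len(self.full[0]))))
        return True
```

Note the coarse entry stays at infinity until every pixel in its tile has been written, so the coarse test can never wrongly reject a fragment; it only short-circuits accesses when the tile is already fully covered by nearer geometry.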
 In these accelerated hardware Z-buffer systems, “early-Z” rejection can sometimes prevent rendering of hidden fragments, and the hierarchical-Z pyramid can speed the “early-Z” rejection test. Nevertheless, such accelerated systems still require that all primitives within the view frustum be processed through the geometry phase of the graphics pipeline and that all fragments in the view frustum, including those of occluded surfaces, be processed through at least the Z generation/rejection test phase. Consequently, these systems can still perform poorly when rendering scenes of high depth complexity.
 Given the relatively poor performance of Z-buffer systems for scenes of high depth complexity, algorithms have been developed which identify occluded geometry and exclude such geometry from both the geometry and rasterization stages of the hardware graphics pipeline. These occlusion culling techniques can be performed either at runtime or in a preprocessing stage. A review of visibility culling techniques is published in Cohen-Or et al. (2003) (Cohen-Or, Daniel, et al. “A Survey of Visibility for Walkthrough Applications.” IEEE Transactions on Visualization and Computer Graphics 9.3 (2003): 412-31. Print., the entirety of which is incorporated herein by reference). Visibility culling refers to any method which identifies and rejects invisible geometry before actual hidden surface removal (i.e., by Z-buffer) is performed. The well-established methods of backface culling and view frustum culling using hierarchical techniques are routinely employed by applications to cull graphics primitives from the hardware pipeline. Occlusion culling is a type of visibility culling approach which avoids rendering primitives that are occluded in the scene. Occlusion culling involves complex interrelationships between graphic primitives in the model and is typically far more difficult to perform than view frustum culling.
 In general, runtime occlusion culling techniques determine what geometry is visible from a single viewpoint. These are called “from-point” culling techniques. In contrast, preprocessing approaches to occlusion culling determine the subset of geometry that is visible from any viewpoint in a specified region. The latter methods are referred to as “from-region” visibility techniques.
 The survey of Cohen-Or et al. (2003) focuses on “walkthrough” type applications, which are characterized by a relatively large amount of static geometry and high potential depth complexity. Many computer games, simulators and other interactive applications fall into this category. These applications tend to benefit substantially when “from-region” occlusion culling techniques are applied to the geometric database in a preprocessing step. These techniques partition the model into regions or cells. These viewcells are navigable regions of the model which may contain the viewpoint. During preprocessing, the subset of graphics primitives that are potentially visible from anywhere within a viewcell (the potentially visible set, or PVS) is determined. The principal advantage of from-region visibility techniques is that the considerable computational cost of occlusion culling is paid in a preprocessing step rather than at runtime.
 In general, from-region visibility preprocessing techniques aim to compute a conservative overestimate of the exact PVS for a view cell. The first from-region visibility methods were developed for interactive viewing of architectural models. Architectural models are naturally subdivided into cells (e.g., rooms, halls), and the visibility between cells occurs through connecting openings (doorways, windows) called portals. Airey (1990) exploited this structure in simple, axially aligned models. He demonstrated a method of identifying polygons visible through portals using an approximate, but conservative, shadow umbra calculation.
 Teller (1992) (Teller, Seth, Visibility Computations in Densely Occluded Polyhedral Environments. Diss. U of California at Berkeley, 1992. Berkeley: U of California at Berkeley, 1992. GAX93-30757. ACM Portal, the entirety of which is incorporated herein by reference) and Sequin extended the method of cell-and-portal from-region visibility to non-axis-aligned polygonal models which do not require user-defined walls and portals. Teller employed a BSP tree defined by the polygons of the model (an autopartition). The leaves of the BSP tree are necessarily convex polyhedra which may not be completely closed. These convex polyhedra are the visibility cells (or viewcells) of the model. Using cell adjacency information available in the BSP graph, the open regions on the boundary between adjacent cells are identified and enumerated as portals between visibility cells.
 Thus, Teller exploited the structure of the BSP autopartition to reduce the from-region visibility problem to a more restricted and simplified problem of visibility through a sequence of polygonal portals. Teller showed that even for this relatively restricted visibility problem, the visibility event surfaces separating from-cell visible volumes and from-cell occluded volumes are usually quadric surfaces.
 Teller determined cell-to-cell visibility by employing a test for the existence of extremal stabbing lines between cells through a portal or sequence of portals. In this method, cell-to-cell visibility is determined by establishing the existence of at least one ray that originates in the source cell and penetrates a sequence of portals to connecting cells. For example, the existence of such a ray through four portals is given by an extremal stabbing ray which is incident on any four edges of the relevant portals. Such a ray is identified using a Plücker mapping in which lines in 3-space are mapped to planes in 5-space. The intersection of these four planes forms a line in 5-space which is intersected with the Plücker quadric to produce at most two non-imaginary results. Each of these intersections corresponds to a line in 3-space which intersects the four portal edges, i.e., an extremal stabbing line. The cost of locating an extremal stabbing ray is O(n²) in the number of edges in the portal sequence. Because the stabbing is performed incrementally, the overall cost is O(n³). The method employs singular value matrix decomposition, which can exhibit numerical instability as a consequence of geometric degeneracies encountered in the stabbing sequence.
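The Plücker machinery underlying the stabbing-line test can be illustrated with the standard line coordinates and the permuted inner product (“side”) operator, which vanishes exactly when two lines are incident. The helper names below are illustrative, not Teller's implementation:

```python
def plucker(p, q):
    """Plücker coordinates (d, m) of the oriented line through 3D points
    p and q: direction d = q - p and moment m = p x q."""
    d = tuple(q[i] - p[i] for i in range(3))
    m = (p[1]*q[2] - p[2]*q[1],
         p[2]*q[0] - p[0]*q[2],
         p[0]*q[1] - p[1]*q[0])
    return d, m

def side(l1, l2):
    """Permuted inner product of two Plücker lines: zero exactly when the
    lines are incident (they meet, possibly at infinity). An extremal
    stabbing line incident on four portal edges satisfies side == 0 with
    each of the four edge lines."""
    (d1, m1), (d2, m2) = l1, l2
    return sum(d1[i]*m2[i] + d2[i]*m1[i] for i in range(3))
```

Interpreting the six coordinates as a point in projective 5-space, `side(l, .) == 0` is the hyperplane dual to line `l`, and the Plücker quadric is the locus `side(l, l) == 0` of points that actually correspond to real lines.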
 Teller also developed a method of computing the exact visible volume through a portal sequence: the antipenumbra volume. As previously noted, this volume is, in general, bounded by both planar and quadric surfaces. In this method the edges of the portals are once again dualized to Plücker coordinates, with each line in 3-space representing the coordinates of a plane in 5-space. The planes corresponding to all edges in a portal sequence are intersected with each other, using a higher-dimensional convex hull computation, to form a polyhedron in 5-space. The intersections of the faces of this polyhedron with the Plücker quadric correspond to the extremal swaths, or visibility event surfaces, between the portal edges. The intersections of the 5D faces with the Plücker quadric are not computed directly. Instead, the intersections of the 5D edges with the Plücker quadric are computed. The intersections of the edges of the 5D polyhedron with the Plücker quadric correspond to extremal stabbing lines which bound the swaths. The intersections of these 5D edges with the Plücker quadric are identified by finding the roots of a quadratic equation. The swaths are identified indirectly by computing the intersections of the 5D edges with the Plücker quadric and examining the faces of the 5D polytope (edges in 3D) that share the 5D edge.
 Each swath may be a component of the boundary of the antipenumbra or, alternatively may be entirely within the antipenumbra volume. A containment test is used to identify boundary swaths.
 Teller found that the antipenumbra computation is difficult to implement robustly. This method requires high-dimensional linear programming computations and root finding methods which, together, are not sufficiently robust to be used for complex models.
 Teller (1992) and Teller and Hanrahan (1993) (Teller, Seth J., and Pat Hanrahan. “Global Visibility Algorithms for Illumination Computations.” Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM, 1993, the entirety of which is incorporated herein by reference) also developed a simpler technique to determine cell-to-cell visibility and cell-to-object visibility through a portal sequence. In this implementation the antipenumbra is conservatively approximated by a convex polyhedron. This “linearized” antipenumbra is bounded by separating planes of the portal sequence, effectively forming a convex hull of the antipenumbra. The planes defining the boundary of the linearized antipenumbra are intersected with each other and with the bounding planes of the BSP leaf cell to determine visibility through the portal sequence.
 Although the linearized antipenumbra method overestimates the cell-to-cell visibility through a portal sequence, it is amenable to a robust implementation.
 In 1996, John Carmack employed a method of precomputing cell-to-cell visibility for the computer game Quake. Carmack's method of visibility precomputation in Quake is somewhat similar to the linearized antipenumbra method described by Teller. In both Teller's and Carmack's methods, the geometric database is subdivided by a BSP tree in which large occluders (e.g., walls, floors) act as splitting planes. The terminal leaves of such a subdivision are convex polyhedra which may have one or more non-closed boundaries, or portals. In both methods the portals between leaf cells are identified and cell-to-cell visibility is established using a linearized overestimate of the antipenumbra between the portals.
 In Teller's method the linearized antipenumbra is constructed by pivoting from each portal edge to two specific extremal or “separating” vertices in the portal sequence: one in each halfspace of the portal. (An extremal vertex of a portal is a vertex that, together with the original portal edge, forms a separating plane between the two portals.) The extremal vertices chosen result in planes which have the portal and all other extremal vertices in the same halfspace.
 In Carmack's implementation this pairwise, sequential intersection of linearized antipenumbrae is used to establish the existence of cell-to-cell visibility in a portal chain. The actual intersection of the antipenumbra with objects in each cell is not performed. The results are stored as a cell-to-cell PVS for each leaf cell.
 Carmack's 1996 implementation of Teller's algorithms established BSP spatial subdivision with through-portal cell-to-cell visibility as the preferred method of visibility precomputation for computer games. Subsequent 3D computer game systems, whether derived directly from Carmack's Quake code (e.g., Quake II, Quake III, and Valve Software's “Source” game engine) or unrelated to it (e.g., Epic Games' “Unreal” game engine), have adopted this method of precomputed occlusion culling for densely occluded polyhedral environments.
 In all of these systems the modeled environments of the game are constructed using “level editing” tools to create the geometry of the walls, floors, ceilings and other stationary, potentially occluding elements of the environments. This geometry is then submitted to a preprocess that constructs a BSP tree from the geometry using conventional BSP algorithms. Typically a second preprocess is then invoked to calculate the cell-to-cell PVS for each leaf cell of the BSP tree using the previously described through-portal visibility method. The PVS for a particular leaf cell is typically stored as an efficient compressed bit vector which indicates the other BSP leaf cells that are visible from the source cell.
 During runtime display, the specific leaf cell containing the current viewpoint, the viewpoint leaf cell, is established using a simple BSP algorithm. The PVS for the viewpoint leaf cell is read, and the corresponding (potentially visible) leaf cells are then hierarchically culled with respect to the current view frustum using standard hierarchical view frustum culling methods. Those graphic primitives from PVS leaf cells that are within the view frustum are then sent to the hardware graphics pipeline. During runtime display, various from-point occlusion culling methods such as from-point portal and antiportal culling may also be employed to further limit which primitives are sent to the hardware graphics pipeline. Nevertheless, the precomputed PVS is typically the working set of primitives on which runtime from-point culling is performed. Consequently, the precomputed PVS is central to runtime performance, not only because its own occlusion-culling costs have already been paid in a preprocess but also because an accurate PVS can lower the cost of runtime from-point occlusion culling methods by limiting the amount of geometry on which they must operate.
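The runtime PVS lookup described above amounts to a bit test against the stored vector. A simple uncompressed bit-vector sketch (function names hypothetical; real engines add run-length compression on top) might look like:

```python
def pack_pvs(visible_leaves, num_leaves):
    """Build the packed per-leaf bit vector for one viewpoint leaf cell:
    one bit per BSP leaf, set when that leaf is potentially visible."""
    bits = bytearray((num_leaves + 7) // 8)
    for leaf in visible_leaves:
        byte, bit = divmod(leaf, 8)
        bits[byte] |= 1 << bit
    return bytes(bits)

def pvs_visible(pvs_bits, leaf_index):
    """Test whether `leaf_index` is marked potentially visible."""
    byte, bit = divmod(leaf_index, 8)
    return bool(pvs_bits[byte] >> bit & 1)
```

At frame time the renderer would iterate only over leaves whose bit is set, then apply hierarchical view frustum culling to those leaves before submitting their primitives.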
 Although the BSP/portal-sequence method of PVS precomputation is widely employed to enhance the performance of computer games and similar applications, current implementations of the method have a number of shortcomings. As previously discussed, the use of a linearized approximation of the portal sequence antipenumbra can cause the method to significantly overestimate the size of the PVS.
 Another limitation of the method is that it requires construction of a BSP tree from the potentially occluding geometry of the model (an autopartition). Spatial subdivision using a BSP tree which is well-balanced and space-efficient is known to be an inherently difficult problem (see p. 96, Teller (1992)). The best bound on time complexity for tree construction is O(n³) for a tree of worst-case size O(n²). With well-chosen splitting heuristics, BSPs of reasonable size can be produced for models of moderate complexity. However, for larger models these time and space cost functions can make practical BSP construction and storage prohibitive. Consequently, when employing the method users must often limit the number of primitives used to construct the BSP. Complex objects which contain large numbers of non-coplanar primitives are typically deliberately excluded as potential occluders because they would increase the time and space cost of BSP construction. Such objects are typically managed separately by the method, which requires that the user (i.e., the level designer) designate the objects as “detail” objects which do not contribute BSP planes and do not function as occluders during the PVS precomputation. These detail objects can still function as potential occludees in the method. If a detail object is completely contained within a PVS leaf cell, and the leaf cell is determined not to be part of the cell-to-cell PVS for a given viewpoint leaf cell, then the detail object can be excluded from the PVS of the viewpoint leaf cell. Nevertheless, by eliminating objects from consideration as potential occluders based on their geometric complexity instead of their occluding potential, the method can significantly overestimate the actual from-region PVS.
 A related weakness of the BSP/portal-sequence method is that it can perform poorly for modeled environments other than architectural interiors. When applied to architectural interior models, the method tends to naturally construct BSP leaf cells that correspond to rooms having portals which correspond to doors or windows. In contrast, for open, outdoor scenes as well as many complex interior scenes, visibility is less clearly governed by a closed-cell, open-portal relationship. In such scenes visibility is often limited primarily by free-standing occluders not associated with a relatively closed cell, or by the aggregation or fusion of multiple smaller occluders. The BSP/portal-sequence method does not effectively account for the fusion of individual free-standing occluders when culling occluded geometry. Applying the BSP/portal-sequence method to such scenes can produce a very large BSP tree and very long portal sequences. Under these conditions the method tends to take a very long time to compute PVSs that are highly overestimated and inefficient at runtime. Applications that employ the BSP/portal-sequence method will typically avoid PVS precomputation for such scenes and may instead rely on from-point occlusion culling methods, such as the dynamic antiportal method used by Valve Software's Source® game engine, which must be computed during runtime.
 Teller's initial description of the portal sequence method included a technique of computing a cell-to-primitive PVS by intersecting the linearized antipenumbra with individual primitives in BSP leaf cells. In practice this technique has not been adopted by Carmack or other existing systems, in part because the storage costs of a cell-to-primitive PVS would be much higher than those of a cell-to-cell PVS.
 Despite the variety of approximations that have been employed to simplify and expedite BSP/portal-sequence visibility preprocessing, it remains a computationally expensive process. Because the BSP/portal-sequence method overestimates the PVS, completely occluded graphic primitives may undergo expensive runtime processing despite being invisible in the scene. The computational cost of processing occluded primitives during runtime may be paid by the CPU, the GPU, or both. CPU processing may include view frustum culling, from-point portal culling, and from-point antiportal culling, as well as the CPU cost of batch primitive submission to the GPU. On the GPU side, occluded primitives may undergo both the vertex processing and rasterization phases of the hardware graphics pipeline. One measure of the efficiency of precomputed occlusion culling is the degree of overdraw that occurs during runtime. Overdraw may occur during rasterization whenever a rasterized fragment must be compared to a non-empty entry in the Z-buffer. This non-empty entry in the Z-buffer resulted from earlier rasterization of a fragment at the same image-space coordinates. The earlier entry may be in front of or behind (occluded by) the current fragment. The situation must be resolved by a Z-buffer read and compare operation. The earlier entry is overwritten if its Z value is more distant than that of the current fragment. As previously described, modern hardware Z-buffer systems can sometimes prevent actual shading of occluded fragments using an “early-Z” rejection test, which may include a hierarchical Z compare mechanism. Nevertheless, completely occluded primitives that make it to the rasterization stage of the graphics pipeline will, at a minimum, have each of their rasterized fragments compared to a corresponding Z-buffer and/or its hierarchical equivalent. We adopt the convention that overdraw includes any “overlap” of fragments in image-space which will at least require a Z-compare operation.
 When the BSP/portal-sequence method was applied to the architectural interiors of the game Quake, an average overdraw of 50% was found, ranging up to 150% in worst cases (Abrash 1997, pg. 1189; Abrash, Michael, “Michael Abrash's Graphics Programming Black Book Special Edition,” 1997, The Coriolis Group, the entirety of which is incorporated herein by reference). This level of overdraw was encountered for relatively simple models which have a maximum depth complexity on the order of 10 and in which the visible depth complexity is often intentionally minimized by carefully selecting the position of occluding walls and portals.
 A later implementation of Carmack's visibility precomputation method is employed in id Software's Quake III computer game. In this game the simulated environments have significantly more geometric detail than the original Quake game (approximately 40,000 polygons per level). As in the original game, levels are carefully designed to contain a variety of obstacles including right-angled hallways, walls behind doorways, stairways with U-turns, and other visibility barriers. These obstacles are intentionally arranged to limit visibility within the model and thereby reduce the size of the PVS for the model's visibility cells. Even with these visibility barriers, the approximate cell-to-cell portal visibility calculation results in considerable overdraw during runtime display. When applied to Quake III levels, the BSP/portal-sequence precomputation method generally results in typical overdraws of 80%, with worst cases exceeding 300%. These results are obtained by measuring depth complexity during runtime walkthrough of typical Quake III levels using the dc command line option. During these measurements care must be taken to control for the effect of multipass shading.
 Thus, even when the BSP/portal-sequence method is applied to modeled environments for which it is best suited, it is a computationally expensive and relatively ineffective method of from-region occlusion culling. Consequently, more recent work has focused on from-region occlusion culling methods which can be applied to general scenes and which produce a more precise PVS at a reasonable computational cost.
 Early conservative methods of general from-region occlusion culling were described in Cohen-Or et al. (1998) (Chrysanthou, Yiorgos, Daniel Cohen-Or, and Dani Lischinski. “Fast Approximate Quantitative Visibility for Complex Scenes.” Proceedings of the Computer Graphics International 1998. Washington, D.C.: IEEE Computer Society, 1998. 220, the entirety of which is incorporated herein by reference). In these methods, objects are culled only if they are occluded by a single, large, convex occluder. Unfortunately, large, convex occluders are rarely encountered in actual applications.
 More recently, methods of from-region visibility precomputation have been developed which attempt to account for the combined occlusion of a collection of smaller occluders (occluder fusion).
 Durand et al. (2000) (Durand, Fredo, et al. “Conservative Visibility Preprocessing using Extended Projections.” Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. Proc. of International Conference on Computer Graphics and Interactive Techniques. New York: ACM Press/Addison-Wesley Publishing Co., 2000. 239-48, the entirety of which is incorporated herein by reference) proposed a method of from-region visibility precomputation that employs a conservative, image-space representation of occluders and occludees called the extended projection. In this method a conservative, pixel-based representation of a convex occluder is constructed by rasterizing the occluder primitives from eight different viewpoints corresponding to the vertices of the viewcell. The extended projection of the convex occluder is the intersection of its projections from these views. This intersection can be computed by rasterizing the occluder into a hardware Z-buffer and stencil buffer data structure, which together form the “extended depth buffer”. Occludees are conservatively represented as the union of the projections of their bounding boxes from the same viewcell vertices. Occludees are culled as invisible from the region if they are completely covered by an occluder in the extended depth buffer. The extended projections of multiple occluders aggregate on the extended depth buffer, which accounts for occluder fusion.
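The intersection/union asymmetry at the heart of the extended projection test can be mimicked with explicit pixel sets in place of rasterization. This is a toy reconstruction of the idea under stated assumptions, not Durand et al.'s implementation:

```python
def extended_projection_cull(occluder_views, occludee_views):
    """Set-based sketch of the extended projection test. Each *_views entry
    is the set of projection-plane pixels covered from one viewcell vertex.
    An occluder's extended projection is the INTERSECTION of its per-vertex
    projections (pixels covered from everywhere in the cell), while an
    occludee's is the UNION (pixels it might cover from anywhere in the
    cell). The occludee is safely culled only when its union lies entirely
    inside the aggregated occluder intersection."""
    occluder_ext = set.intersection(*[set(v) for v in occluder_views])
    occludee_ext = set.union(*[set(v) for v in occludee_views])
    return occludee_ext <= occluder_ext
```

Using the intersection for occluders and the union for occludees is what makes the test conservative: shrinking occluders and inflating occludees can only cause fewer objects to be culled, never a visible object to be discarded.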
 The method may use extended depth buffers corresponding to a single set of six planes which surround the entire environment. Alternatively, consecutive sets of surrounding planes at increasing distances from the viewpoint cell can be employed. In this case, aggregated occluders on a near plane can be reprojected, using a conservative convolution operator, to subsequent planes. This “occlusion sweep” reprojection approach is more effective in capturing the fusion of multiple small occluders at varying distances from the viewpoint cell. This arrangement was used, for example, to account for occluder aggregation in a forest scene of high depth complexity.
 The extended projection method employs a number of approximations which result in overestimation of the PVS. First, the size of potential occludees is always overestimated, since the method does not use the projection of the occludee itself. Instead, the bounding box of the occludee is projected. In addition, a second approximation, the bounding rectangle of this projection, is used to compute the extended projection of the occludee. These consecutive approximations result in an overestimate of the size of the occludee and consequently reduce the precision of the PVS. Moreover, the requirement to use occludee bounding boxes effectively limits the method to producing a cell-to-object (rather than cell-to-primitive) PVS.
 The extended projection method can directly rasterize only convex occluders into the extended depth buffer. Concave occluders must first be converted to a convex representation by intersecting the concave occluder surface with the projection plane. This is an additional step requiring an object-space calculation that, depending on the characteristics of the occluder surface, may be computationally expensive. In addition, if the location of the projection plane is not ideal, the intersection calculation can significantly underestimate the actual occluding effect of the concave occluder.
 Another approximation employed by the extended projection method is the technique for reprojecting an occluder from one projection plane to a more distant one. The goal of this reprojection is effectively to identify the umbra of a planar occluder (with respect to a light source represented by the viewcell) and find the intersection of this umbra with a more distant plane. The extended projection method conservatively estimates this intersection by convolving the image of the occluder with an inverse image of a rectangle that functions as an overestimate of the light source formed by the viewpoint cell. This technique can significantly underestimate the umbra of occluders which are similar in size to the viewpoint cell. By significantly underestimating the size of reprojected occluders, the method will tend to overestimate the PVS.
 A principal motivation of the extended projection method is to detect occlusion caused by the combined effects of multiple small occluders. Durand et al. (2000) acknowledge that the method only detects fusion between occluders where the umbrae (occluded regions) of the occluders intersect and when this intersection volume itself intersects one of the arbitrarily chosen parallel projection planes. Since relatively few projection planes are used in the occlusion sweep implementation, the method can frequently fail to detect occluder fusion caused by umbrae which intersect outside the vicinity of a projection plane.
 Schaufler et al. (2000) (Schaufler, Gernot, et al. "Conservative Volumetric Visibility with Occluder Fusion." Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM Press/Addison-Wesley Publishing Co., 2000. 229-38, the entirety of which is incorporated herein by reference) developed a method of precomputing a conservative, from-region PVS that requires a volumetric representation of the modeled environment. In this method modeled objects must be bounded by closed surfaces. The closed bounding surface must produce a well-defined interior volume for each object. The interior volume of an object is assumed to be opaque and is represented with convex voxels that are generated by a volumetric decomposition of the interior. The voxels act as occluders for the method. Occlusion is computed by finding a shaft which connects the viewpoint cell and a voxel. The extension of this shaft is an umbra within which all geometry is occluded. The method accounts for occluder fusion by combining adjacent voxels and by combining voxels and adjacent regions of occluded space. The implementation presented calculates a cell-to-cell PVS for 2D and 2.5D environments (e.g. cities modeled as heightfields). While the extension to full 3D environments is discussed by the authors, the computational and storage costs of a detailed volumetric representation of a 3D model are a real limitation of the method. While the volumetric visibility method of Schaufler et al. does not require occluders to be convex, it does require them to be well-formed manifolds with identifiable solid (watertight) interior volumes. This allows an individual occluder to be conservatively approximated by a box-shaped structure that is completely within the interior of the original occluder.
 This approximate occluder is generated by decomposing the interior into voxels and recombining the voxels in a process of blocker extension which attempts to maximize the size of the contained box-shaped approximate occluder. The method requires that the approximate occluders retain a box shape to facilitate the construction of the shaft used to determine occlusion. A principal limitation of this approach is that many occluders are poorly approximated by a contained box-shaped structure. In particular, concave objects or objects with topological holes (manifolds with genus greater than zero) present an ambiguous case to the blocker extension algorithm and cause it to significantly underestimate the occlusion caused by the object. A 2.5D implementation of the method described by Schaufler et al. to compute a PVS for viewcells in a city model was tested using primarily convex objects of genus zero. These objects tend to be reasonably well approximated using a box-shaped interior occluder. For more realistic models containing concave elements and holes (e.g. doors and windows) the method would be less effective in approximating occluders and consequently less efficient in culling occluded geometry.
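The blocker extension described above can be illustrated with a simplified 2D analogue: searching an object's interior occupancy grid for the largest fully solid axis-aligned box. This is an illustrative sketch under that simplification, not the Schaufler et al. implementation; the brute-force search and all names are assumptions.

```python
# Sketch: a 2D analogue of "blocker extension" -- find the largest axis-aligned
# box of solid cells fully inside an object's interior grid. Brute force for
# clarity; illustrative only.

def largest_solid_box(grid):
    rows, cols = len(grid), len(grid[0])
    best = (0, None)  # (area, (r0, c0, r1, c1)) with inclusive bounds
    for r0 in range(rows):
        for c0 in range(cols):
            for r1 in range(r0, rows):
                for c1 in range(c0, cols):
                    # candidate box is usable only if every cell is solid
                    if all(grid[r][c] for r in range(r0, r1 + 1)
                           for c in range(c0, c1 + 1)):
                        area = (r1 - r0 + 1) * (c1 - c0 + 1)
                        if area > best[0]:
                            best = (area, (r0, c0, r1, c1))
    return best

# An L-shaped (concave) interior: 7 solid cells, but the best contained box
# covers only 6 of them, so part of the real occlusion is lost.
grid = [[1, 1, 0],
        [1, 1, 0],
        [1, 1, 1]]
area, box = largest_solid_box(grid)
```

The gap between the 7 solid cells and the 6-cell box illustrates why concave objects, and especially objects with holes, are poorly served by a single contained box.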
 The volumetric visibility method detects occluder fusion in cases where the linearized umbrae of the occluders intersect. However, as with individual occluders, the blocker extension algorithm ultimately produces a simplified box-shaped approximation to the aggregate region of occlusion that can significantly underestimate the effect of occluder fusion.
 Both the extended projection method and the volumetric visibility method effectively treat the viewcell as an area light source and respectively employ image-space and object-space techniques to compute a conservative, linearized approximation to the umbrae of polygon meshes. Algorithms for computing the shadow boundaries (umbra and penumbra) of a polygonal area light source by Nishita and Nakamae (1985) (Nishita, Tomoyuki, Isao Okamura, and Eihachiro Nakamae. "Shading Models for Point and Linear Sources." ACM Transactions on Graphics (TOG) 4.2 (1985): 124-46, the entirety of which is incorporated herein by reference) and Chin and Feiner (1992) (Chin, Norman, and Steven Feiner. "Fast Object-Precision Shadow Generation for Area Light Sources Using BSP Trees." Proceedings of the 1992 Symposium on Interactive 3D Graphics. Proc. of Symposium on Interactive 3D Graphics, 1992, Cambridge, Mass. New York: Association for Computing Machinery, 1992, the entirety of which is incorporated herein by reference) have also employed conservative, linearized umbral boundaries.
 These shadow boundary methods employ only the linear umbral event surfaces that form between a single convex polygonal light source and a single convex polygon. The use of these methods on non-convex polygon meshes, for instance, would result in a discontinuous umbral event surface that would not accurately represent an umbral volume. Consequently, their utility is practically limited to very simple models.
 In 1992 Heckbert (Heckbert, P. "Discontinuity Meshing for Radiosity." Third Eurographics Workshop on Rendering, Bristol, UK, May 1992, pp. 203-216, the entirety of which is incorporated herein by reference) used a different approach called incomplete discontinuity meshing to construct the exact linear visibility event surfaces (umbral and penumbral) cast by simple polygon models from an area light source. In this technique the linear event surfaces, or wedges, are formed between the edges of the light source and the vertices of the occluder and between the vertices of the light source and the edges of the occluders. The wedges are intersected with all of the model polygons and the segments of the polygons that are actually visible on the wedge are subsequently determined using a 2D version of the Weiler-Atherton object-space from-point visibility algorithm (Weiler, Kevin, and Peter Atherton. "Hidden Surface Removal using Polygon Area Sorting." Proceedings of the 4th Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM, 1977. 214-22, the entirety of which is incorporated herein by reference).
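The wedge construction just described pairs each light-source edge with each occluder vertex and each light-source vertex with each occluder edge. The enumeration can be sketched as follows; the combinatorics are the point here, and the plane construction for each wedge is omitted. All names are illustrative assumptions.

```python
# Sketch: enumerating the linear (vertex-edge) event wedges of incomplete
# discontinuity meshing. Each (vertex, edge) pair defines one planar wedge
# through the vertex containing the edge. Illustrative only.

def edges(poly):
    """Consecutive vertex pairs of a closed polygon."""
    return [(poly[i], poly[(i + 1) % len(poly)]) for i in range(len(poly))]

def linear_wedges(source, occluder):
    wedges = []
    for v in source:                 # source-vertex / occluder-edge wedges
        for e in edges(occluder):
            wedges.append(('VE', v, e))
    for e in edges(source):          # source-edge / occluder-vertex wedges
        for v in occluder:
            wedges.append(('EV', v, e))
    return wedges

square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]   # area light source
tri = [(0, 0, 5), (2, 0, 5), (1, 2, 5)]                  # occluder triangle
w = linear_wedges(square, tri)
```

A quadrilateral source and triangular occluder yield 4 × 3 + 4 × 3 = 24 candidate wedges; in practice only wedges on silhouette features survive.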
 The primary motivation of the discontinuity meshing method is to identify discontinuity boundaries within the penumbra. These boundaries can be used to increase the precision of illumination calculations within the penumbra. Unfortunately, because the incomplete discontinuity meshing method constructs only the exact linear umbral event wedges, it generally fails to produce the complete, continuous umbral event surface. This is because for all but the simplest models, the continuous umbral event surface (for example incident on the silhouette contour of a polygon mesh) is formed by both planar and quadric visibility event surfaces. Consequently the method of incomplete discontinuity meshing is unsuited to identifying mesh polygons or mesh polygon fragments that are visible or occluded from an area light source (or viewcell).
 In the prior-art method of incomplete discontinuity meshing, all of the visibility event surfaces are formed by a vertex and an edge.
FIG. 53 is from the prior-art method of incomplete discontinuity meshing by Heckbert. The figure shows an exact linear visibility event surface, or wedge, as the shaded triangular structure WEDGE R. The wedge labeled WEDGE R is incident on an edge e of a polygon and also incident on a vertex v, which may be a vertex of a light source. In the method of incomplete discontinuity meshing, the linear event surfaces are not defined over segments of an edge which are not visible from the vertex. In the case of FIG. 53, WEDGE R is not defined over the segment of edge e labeled GAP E, because polygon O occludes vertex v from GAP E. Because the wedge is not defined over this segment, the wedge's intersection with polygon P causes a corresponding gap between SEG1 and SEG2. If WEDGE R were an umbral wedge, its intersection with P would produce an incomplete umbral boundary. As a result of these gaps, the linear visibility event wedges constructed by the method of incomplete discontinuity meshing cannot be used alone to define umbral boundaries (or from-region visibility boundaries).

 Drettakis and Fiume (1994) (Drettakis, George, and Eugene Fiume. "A Fast Shadow Algorithm for Area Light Sources Using Backprojection." Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM, 1994. 223-30, the entirety of which is incorporated herein by reference) completely characterized the visibility event surfaces that arise between a polygonal light source and objects in a polyhedral environment. In the method, called complete discontinuity meshing, both umbral and penumbral event surfaces are identified and intersected with model polygons. These intersections partition the model geometry into a "complete discontinuity mesh" such that in each face the view of the light source is topologically equivalent. The discontinuity mesh is shown to be a useful data structure for computing global illumination within the penumbra.
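The gap construction of FIG. 53 amounts to interval arithmetic on the wedge's generating edge: the wedge exists only over the sub-intervals of edge e that remain after subtracting the portion occluded from vertex v. A minimal sketch, with illustrative names and parameter values:

```python
# Sketch: edge e parameterized over [0, 1]; an intervening polygon hides the
# middle interval from vertex v, so the wedge (and its intersection with
# polygon P) is split into two segments with a gap between them.

def subtract_interval(segments, lo, hi):
    """Remove [lo, hi] from a list of disjoint [a, b] intervals."""
    out = []
    for a, b in segments:
        if hi <= a or lo >= b:          # no overlap: keep as-is
            out.append((a, b))
        else:
            if a < lo:
                out.append((a, lo))     # visible part before the gap
            if hi < b:
                out.append((hi, b))     # visible part after the gap
    return out

# Hypothetical occluded interval [0.4, 0.6], analogous to GAP E:
visible = subtract_interval([(0.0, 1.0)], 0.4, 0.6)
```

The two surviving intervals correspond to the segments of edge e supporting the wedge, and hence to SEG1 and SEG2 on polygon P.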
 In the complete discontinuity meshing method four types of event surfaces between an area light source (called the “emitter”) and polyhedral mesh objects are identified. Two of these event surfaces are planar and two are quadrics.
 The first type of visibility event surface identified is formed between a vertex or edge of the emitter and specific edges or vertices of the polyhedral model. These planar surfaces are called emitter-VE (or EEV) wedges. The authors emphasize that not all edges of a polyhedral mesh support an EEV wedge. Only those mesh edges which are from-point silhouette edges (which they call "shadow edges") for at least one point on the emitter surface will support a wedge. By defining "from-region silhouette edge" in this way, all mesh edges which support an umbral or penumbral EEV wedge are identified.
 The other type of planar visibility event surface employed in complete discontinuity meshing is the Non-emitter-EV (Non-EEV) wedge. This type of wedge is potentially formed between any edge of the polyhedral mesh and any other edge such that the formed wedge intersects the emitter. For any edge of the polyhedral mesh the supported Non-EEV wedges occur only in a shaft formed between the edge and the emitter. This fact is used to identify the Non-EEV wedges.
 A third type of visibility event surface is a quadric formed from an edge of the emitter and two edges of the polyhedral meshes. This is called an Emitter-EEE event or E_{e}EE surface. Such a surface is identified wherever two nonadjacent skew edges of the discontinuity mesh intersect. [This intersection actually corresponds to the intersection of a planar wedge with a from-region silhouette edge to form a compound silhouette contour.] The continuous visibility event surface at this point is a quadric surface.
 The fourth and final type of visibility event surface formed between an area emitter and polyhedral mesh objects is the Non-Emitter-EEE. This is a quadric event surface formed between three skew edges of the polyhedral mesh such that the resulting quadric intersects the viewcell.
 In the present specification the classification of from-region visibility event surfaces based on Drettakis and Fiume (1994) is adopted with some modification of the nomenclature to accommodate further subclassification. Table Ia includes the four types of visibility event surfaces originally proposed by Drettakis and Fiume (1994), renamed for clarity.

TABLE Ia
Prior Art Nomenclature of From-Region Visibility Event Surfaces

Visibility Event Surface                                 Drettakis et al. Naming
Planar Event Surface Containing a Feature of             EEV (Emitter-Edge Vertex)
the Emitter/Viewcell/Source
Planar Event Surface Not Containing a Feature of         Non-EEV
the Emitter/Viewcell/Source
Quadric Event Surface Containing a Feature of            Emitter-EEE, E_{e}EE
the Emitter/Viewcell/Source
Quadric Event Surface Not Containing a Feature of        Non-Emitter-EEE
the Emitter/Viewcell/Source

 Any of the four types of visibility event surfaces may ultimately contribute to the actual from-emitter (from-region) umbral boundary which separates the volume of space that is occluded from all points on the emitter from the volume of space visible from any point on the emitter. Unfortunately, using existing discontinuity mesh methods there is no a priori way to determine which event surfaces will contribute to this umbral boundary that defines from-region visibility. Consequently, in order to use discontinuity meshing methods to identify the conservative, from-region umbral visibility event boundaries, all visibility event surfaces would first have to be generated and the resulting discontinuity mesh would have to be post-processed to determine which of the event surface-mesh polygon intersections represent true from-region umbral boundaries.
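The four-way classification of Table Ia can be captured as a small lookup. The naming strings follow the Drettakis et al. nomenclature as tabulated above; the function itself is merely illustrative.

```python
# Sketch: Table Ia as a two-bit classification -- planar vs. quadric, and
# whether the surface contains a feature of the emitter/viewcell/source.

def classify_event_surface(planar, contains_emitter_feature):
    if planar:
        return 'EEV' if contains_emitter_feature else 'Non-EEV'
    return 'Emitter-EEE' if contains_emitter_feature else 'Non-Emitter-EEE'
```

All four combinations map onto exactly the four rows of the table.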
 Several other problems limit the use of discontinuity meshing methods to compute conservative from-region visibility. The quadric event surfaces make a robust implementation of event surface casting difficult. Event surface casting is required to find the quadratic curve segments visible from the emitter edge (in the case of an Emitter-EEE wedge). This on-wedge visibility is typically solved using a 2D implementation of the Weiler-Atherton visibility algorithm, which is difficult to implement robustly when using quadric surfaces.
 As previously discussed, if the quadric surfaces are simply omitted (as in the method of incomplete discontinuity meshing) then continuous from-region umbral surfaces are not guaranteed, making determination of the from-region visible mesh polygons impossible.
 Another important limitation of conventional discontinuity meshing methods is that they do not exhibit output-sensitive performance. This is because existing discontinuity meshing algorithms begin by generating all visibility event surfaces on all (from-region) silhouette edges of the polyhedral meshes. This includes silhouette edges that are actually occluded from the emitter/source. These event surfaces are then intersected with potentially each polygon of the polyhedral meshes, and the on-wedge visible segments are subsequently identified, using 2D Weiler-Atherton visibility, as a postprocess. Since there is no depth prioritization at any stage of these algorithms, they perform very poorly in densely occluded environments, where the majority of the boundaries generated would be inside the conservative from-region umbral boundary and therefore not contribute to the from-region visibility solution.
 As shown later in this specification, the present method of visibility map construction using conservative linearized umbral event surfaces generated using an output-sensitive algorithm addresses many of the limitations of existing discontinuity meshing methods when applied to the problem of conservative from-region visibility.
 Using the classification of from-region visibility event surfaces described by Drettakis and Fiume (1994) it is clear that the volumetric visibility method (Schaufler 2000) employs only EEV surfaces to represent umbral boundaries. The extended projection method (as well as other projective methods) also implicitly uses EEV umbral boundaries.
 A number of image-space techniques of conservative from-region visibility precomputation employ "shrunk occluders" to conservatively approximate visibility from a region using visibility from a single point in the region. The method of Wonka et al. (2000) (Wonka, Peter, Michael Wimmer, and Dieter Schmalstieg. "Visibility Preprocessing with Occluder Fusion for Urban Walkthroughs." Proceedings of the Eurographics Workshop on Rendering Techniques 2000. London: Springer-Verlag, 2000. 71-82, the entirety of which is incorporated herein by reference) uses this approach to conservatively compute visibility from a region surrounding a viewpoint placed on the surface of a viewcell. Using multiple viewpoints placed on the surface of the viewcell, the visibility from the viewcell is computed as the combined visibility from the points. The distance between the viewpoints determines the magnitude of occluder shrinkage that must be applied to ensure a conservative result. Since this method samples visibility at multiple locations on the viewcell, it does not a priori assume that all unoccluded elements are completely visible from the entire viewcell.
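The sampling scheme described above can be reduced to set operations: the from-region PVS is conservatively approximated as the union of from-point visible sets taken at sample viewpoints on the viewcell surface (the compensating occluder shrinkage is omitted here). A toy sketch; the object names and the visibility oracle are hypothetical.

```python
# Sketch: combined from-region visibility as the union of from-point
# visible sets sampled on the viewcell surface. Illustrative only.

def from_region_pvs(sample_points, visible_from):
    pvs = set()
    for p in sample_points:
        pvs |= visible_from(p)   # visible from ANY sample point -> in the PVS
    return pvs

# Toy visibility oracle: each corner of a square viewcell sees different walls.
sight = {(0, 0): {'wall_w', 'wall_s'}, (1, 0): {'wall_e', 'wall_s'},
         (0, 1): {'wall_w', 'wall_n'}, (1, 1): {'wall_e', 'wall_n'}}
pvs = from_region_pvs(sight.keys(), lambda p: sight[p])
```

No single sample sees all four walls, but their union does, which is why the sampling must be paired with occluder shrinkage to remain conservative between samples.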
 In contrast to many of the previously described methods (including volumetric visibility and extended projection), the Wonka et al. method does not assume that all unoccluded elements are completely visible from everywhere on the viewcell surface. Since it samples visibility from multiple locations on the viewcell, it can approximate a backprojection which accounts for the partial occlusion of the viewcell from the unoccluded elements. The authors refer to these as penumbra effects, since elements in the penumbra of the viewcell/light source may give rise to planar (Non-EEV) umbral boundaries as well as quadric umbral boundaries (Emitter-EEE and Non-Emitter-EEE surfaces) that are more precise than the EEV boundaries generated by assuming that the entire viewcell is visible from unoccluded elements. An implementation of the method is presented for 2.5D models in which the viewcells are rectangles. This greatly reduces the complexity of the occluder shrinkage process and substantially reduces the number of viewpoint samples required compared to a full 3D implementation. Unfortunately, because the implementation is limited to 2.5D models it cannot be employed in most walkthrough applications.
 Another method of visibility precomputation which employs "shrunk occluders" to approximate from-viewcell visibility using the visibility from a single point within the viewcell is described by Chhugani et al. (2005) (Chhugani, Jatin, et al. "vLOD: High-Fidelity Walkthrough of Large Virtual Environments." IEEE Transactions on Visualization and Computer Graphics 11.1 (2005): 35-47, the entirety of which is incorporated herein by reference). This method employs a combination of object-space and image-space approaches. In object-space the "supporting planes tangential to the viewcell and an object" are constructed. A viewpoint contained within these supporting planes is selected and, for each supporting plane, an offset plane passing through the viewpoint and parallel to the original plane is constructed. According to the authors, the intersection of the positive half-spaces of these offset planes comprises a frustum that is within the actual umbra of the original object. For each object polygon that generated a supporting plane, the shrinkage of the polygon is determined by the offset of the corresponding plane to the chosen viewpoint. Occlusion behind an occluder object is determined by rendering the shrunk version from the viewpoint and then drawing the occludees using the occlusion query extension of the depth buffer. The query returns zero for occludees that are not visible. The method performs limited occluder fusion by rendering the shrunk occluders prior to occludees. The same viewpoint must be used to generate and render all shrunk occluders. This viewpoint must lie in the frusta of all the occluders. The location of the viewpoint is selected to maximize the sum of the volumes of the shrunk frusta using a convex quadratic optimization to achieve a local minimum solution.
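The offset-plane construction attributed to Chhugani et al. can be sketched as follows, under an assumed plane representation n·x + d = 0 with the positive half-space denoting occluded space; the plane orientations and all helper names are illustrative assumptions, not the vLOD implementation.

```python
# Sketch: replace each supporting plane with a parallel plane through the
# chosen viewpoint; the shrunk frustum is the intersection of the resulting
# positive half-spaces. Illustrative only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def offset_plane(normal, viewpoint):
    """Plane through `viewpoint` parallel to the original: n.x + d = 0."""
    return (normal, -dot(normal, viewpoint))

def inside_frustum(planes, p):
    """A point is treated as occluded if it lies in the positive
    half-space of every offset plane."""
    return all(dot(n, p) + d >= 0 for n, d in planes)

viewpoint = (0.0, 0.0, 0.0)
# Two supporting-plane normals tilted about the z axis, oriented so that +z
# (behind the hypothetical occluder) is the occluded side:
normals = [(1.0, 0.0, 1.0), (-1.0, 0.0, 1.0)]
planes = [offset_plane(n, viewpoint) for n in normals]
```

A point deep behind the occluder falls inside the shrunk frustum, while a point far off to the side does not, matching the conservative containment the authors claim.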
 The precision of the shrunk occluders is largely determined by the size and distribution of occluders being considered. Consequently the precision is not easily controlled in this method.
 While the method admits non-convex occluders, including individual polygons and connected polygon meshes, it does not accommodate occluders that have holes. This is because the method depends upon each occluder having a single polyline "boundary" which is actually a type of from-region silhouette contour. This is a significant limitation since some large polygon meshes (e.g. buildings) which generally produce significant from-region occlusion also have multiple topological holes (e.g. doors and windows).
 From the preceding analysis it is clear that many existing methods of PVS precomputation employ conservative, linearized approximations to umbral boundaries based on simple EEV event surfaces (e.g. extended projection, volumetric visibility) which assume that unoccluded elements are visible from everywhere on the viewcell (i.e. that the entire viewcell is visible from the unoccluded element).
 Although existing primal-space methods of from-region visibility precomputation do not employ exact, quadric visibility event boundaries, other visibility applications do compute quadric visibility event surfaces in the primal space. One of these applications, the Visibility Skeleton (Durand et al. 1997), is a data structure for answering global visibility queries. The other application, discontinuity meshing, is a method of computing illumination in the presence of area light sources. The discontinuity meshing method will be examined first.
 As previously described, in the complete discontinuity meshing method of Drettakis and Fiume (1994), all of the visibility event surfaces arising between a polygonal light source and a polyhedral model are identified and intersected with the model's polygons. These intersections comprise the "complete discontinuity mesh" of the model with respect to the source. The discontinuity mesh partitions the model geometry into a mesh of faces, such that in each face the view of the source is topologically equivalent. The complete discontinuity mesh is a useful data structure for computing global illumination near umbra and penumbra boundaries.
 In the complete discontinuity meshing method four types of event surfaces are identified (see Tables Ia and Ib). Two of these event surfaces are planar and two are quadric. The two planar event surface types discussed previously, EEV and Non-EEV, are used by the conservative from-region visibility methods to conservatively contain the from-region umbral boundary surfaces. In some cases these planar surfaces are actually components of the exact umbral boundary formed by a silhouette edge and a viewcell-as-lightsource.
 The two types of quadric surfaces, Emitter-Edge-Edge-Edge (Emitter-EEE or E_{e}EE) and Non-Emitter-Edge-Edge-Edge (Non-Emitter-EEE), are components of certain visibility event surfaces between the area light source and model polygons. For example, in some cases these quadric surfaces may be components of the exact umbral boundary formed by a silhouette edge and a viewcell-as-lightsource. In most cases these event surfaces are components of the penumbra. The discontinuity mesh methods describe techniques for identifying all of the quadric event surfaces that arise between the area light source and the model polygons.
 For example, in Drettakis and Fiume (1994) both Emitter-EEE and Non-Emitter-EEE event surfaces can be identified by forming a shaft between a generator edge and the convex hull of the emitter polygon. Emitter-EEE event surfaces are formed by the original edge, an edge of the emitter, and other edges in this shaft. Non-Emitter-EEE event surfaces are formed by the original edge and pairs of non-parallel edges in the shaft. Non-Emitter-EEE surfaces are those that intersect the emitter polygon. In both cases the ruled quadric event surface is identified using the parametric equation of the first generator edge:

P_{t} = a_{1} + t(b_{1} − a_{1})

where a_{1} and b_{1} are the endpoints of e_{1}.
 The value of t for a point P_{t} on the ruled quadric is found by forming the two planes containing P_{t} and e_{2} and containing P_{t} and e_{3}. The intersection of these two planes forms a line that is intersected with e_{1}.
 The valid interval of the ruled quadric on the generator edge is found by computing t for the endpoints a_{2} and b_{2} of edge e_{2} and for the endpoints a_{3} and b_{3} of edge e_{3}. The intersection of the intervals is the valid region on the first generator edge. (This parametric representation of the ruled quadric was also suggested by Teller to represent the surfaces in 3D. However, in Teller's method the ruled quadric visibility event surfaces are not actually identified in primal space. Instead their delimiters, the extremal stabbing lines, are identified in 5D line space.)
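The parametric identification described above can be sketched in plain vector math: for a point P on the ruled quadric, the plane containing P and a second generator edge cuts the first generator edge at P's parameter t. The configuration below places P on a known transversal so that the expected t is 0.5; all helper names and the specific edges are illustrative assumptions.

```python
# Sketch: parameter t on generator edge e1 = a1 + t(b1 - a1) recovered by
# intersecting e1 with the plane through P and another generator edge.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def param_on_e1(P, e1, e_other):
    """t where the plane through P and e_other cuts e1 = a1 + t(b1 - a1)."""
    a, b = e_other
    n = cross(sub(a, P), sub(b, P))      # normal of the plane through P, a, b
    a1, b1 = e1
    u = sub(b1, a1)
    return -dot(n, sub(a1, P)) / dot(n, u)

e1 = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
e2 = ((0.0, 0.0, 1.0), (0.0, 1.0, 1.0))   # skew relative to e1
# P lies on the transversal joining e1 at t = 0.5 to the point (0, 0.5, 1) on e2:
P = (0.25, 0.25, 0.5)
t = param_on_e1(P, e1, e2)
```

Running the same computation against e_{3} and intersecting the resulting parameter intervals yields the valid region described above.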
 In the discontinuity meshing method once a quadric surface is identified by finding the valid intervals of its generator edges, the coefficients of the corresponding quadric equation:

Ax^{2} + By^{2} + Cz^{2} + Dyz + Exz + Fxy + Gx + Hy + Iz + J = 0

are determined. The intersection of this quadric surface with a model polygon is a quadratic curve. It is determined by transforming the three generating edges such that the polygon is embedded in the plane z=0. The quadratic curve is defined by the coefficients of the corresponding quadric equation minus all terms containing z. To generate the discontinuity mesh elements the quadratic curve is intersected with the edges of the model polygons and checked for visibility using a line sweep visibility processing algorithm.
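Dropping every term containing z, as described above, reduces the ten quadric coefficients A through J to the six coefficients of a conic in the plane z = 0. A minimal sketch; the helper name is illustrative.

```python
# Sketch: restrict the quadric Ax^2 + By^2 + Cz^2 + Dyz + Exz + Fxy
#                              + Gx + Hy + Iz + J = 0 to the plane z = 0.

def quadric_to_conic(A, B, C, D, E, F, G, H, I, J):
    """Return (a, b, c, d, e, f) with a*x^2 + b*y^2 + c*xy + d*x + e*y + f = 0."""
    # The terms C*z^2, D*yz, E*xz, and I*z all vanish on z = 0.
    return (A, B, F, G, H, J)
```

For example, the unit sphere x^2 + y^2 + z^2 − 1 = 0 restricts to the unit circle x^2 + y^2 − 1 = 0 in the plane of the polygon.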
 In the discontinuity meshing method all visibility event surfaces involving an area light source and model polygons are identified. These visibility event surfaces include not only the umbral and extremal penumbral boundaries but many other event surfaces across which the topological view or "aspect" of the light source from the model geometry changes. In discontinuity meshing the visibility event surfaces are identified and intersected with the model polygons, but specific bounding volumes of these surfaces, such as the umbra volume, are not computed. These intersections in general produce fourth-degree space curves which can be difficult to solve robustly. Fortunately, the illumination calculations in which the discontinuity mesh is employed do not require the umbra volume to be represented.
 The construction of the complete discontinuity mesh does require the event surfaces to be intersected with the model polygons, forming lines or quadratic curves on the surfaces of the polygons. These intersections are performed by casting the surfaces through the model. A regular grid-based spatial subdivision data structure is used to limit the number of intersections performed. After all of the intersections are calculated, a visibility step determines the visible subsegments on each wedge. Consequently, the construction of the discontinuity mesh is not output-sensitive, and the cost of EEV processing is expected O(n^{2}) in the number of polygons. Quadric surfaces are processed by first finding all of the quadratic curves formed by intersections of the quadric with model polygons; visibility on the quadric is then resolved by a line sweep algorithm applied later. The cost of quadric processing is O(n^{6}) in the number of polygons.
 Like the complete discontinuity mesh, the visibility skeleton (Durand et al. 1997) (Durand, Fredo, George Drettakis, and Claude Puech. "The Visibility Skeleton: A Powerful and Efficient Multi-Purpose Global Visibility Tool." SIGGRAPH 1997 Proceedings, the entirety of which is incorporated herein by reference) is a data structure that accounts for quadric visibility event boundaries using primal space methods. The visibility skeleton is a complete catalog of visibility events that arise between edges in a polyhedral environment. In the visibility skeleton the visibility information of a model is organized as a graph structure in which the extremal stabbing lines are the nodes of the graph and the visibility event surfaces are the arcs of the graph. The visibility skeleton can be used to answer visibility queries in the scene, such as those that arise during global illumination calculations.
 Unlike complete discontinuity meshing, the visibility skeleton avoids direct treatment of the line swaths that comprise the quadric visibility event surfaces. Instead the skeleton is constructed by directly computing only the extremal stabbing lines which bound the event surfaces themselves and which correspond to the nodes of the visibility skeleton graph structure.
 In the general case of an extremal stabbing line incident on four edges (EEEE nodes), the nodes are identified using the sets of tetrahedral wedges formed between the four edges. In this method an extended tetrahedron is formed between two of the edges as shown in FIGS. 8a, 8b, and 8c of Durand et al. (1997). The figures mentioned in this paragraph refer to the Durand et al. (1997) paper. In FIG. 8a the extended tetrahedron formed by edges ei and ej is shown. In FIG. 8b a third edge ek is shown along with the segment of ek inside the extended tetrahedron formed by ei and ej. [The component of ek inside this extended tetrahedron will form a quadric visibility event (EEE) surface with ei and ej.] In FIG. 8c a fourth edge el is shown. This fourth edge el is similarly restricted by the three other extended tetrahedra which it may intersect: ek-ej, ek-ei, and ej-ei. The segment of el that is within all of these tetrahedral wedges could form three quadric surfaces with the other three edges. Only one line will actually intersect el and the other three edges. This is the extremal EEEE stabbing line, or node of the visibility skeleton, involving these four edges. It is found by a simple binary search on the restricted segment of el. In this binary search an initial estimate of the intersection point P on el is chosen. The plane formed by this point and ei is then intersected with ej and ek, giving two lines originating at P. The estimated intersection point P on el is refined by binary search until the angle between the two lines originating at P approaches zero. This occurs when the lines are congruent and therefore intersect ei, ej, ek, and el. The extremal lines so identified are intersected with model polygons using ray casting to determine if any scene polygons occlude the extremal line between its generating edges. If the line is so occluded, no extremal stabbing line is recorded.

 Other nodes of the visibility skeleton such as EVE, VEE, and EEV nodes form the limits of planar visibility event surfaces (e.g. VE) and are also found by intersecting the relevant edges with corresponding extended tetrahedra.
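The binary search just described can be sketched on a deliberately simple configuration: four edges arranged to cross the z axis at different heights, so the extremal stabbing line is the z axis itself and its intersection with edge el is known to lie at parameter t = 0.5. A signed cross-product component stands in for the angle test; the whole construction is an illustrative reconstruction under these assumptions, not the Durand et al. implementation.

```python
# Sketch: bisection for the extremal stabbing line through four edges.
# g(t) measures (with sign) how far the two lines through P fail to coincide.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def lerp(a, b, t): return tuple(x + t * (y - x) for x, y in zip(a, b))

# Four edges, each crossing the z axis, so the z axis is the stabbing line:
ei = ((-1, 0, 1), (1, 0, 1))
ej = ((0, -1, 2), (0, 1, 2))
ek = ((-1, -1, 3), (1, 1, 3))
el = ((-1, 1, 4), (1, -1, 4))

def plane_hit(n, P, edge):
    """Intersect `edge` with the plane through P having normal n."""
    a, b = edge
    u = sub(b, a)
    s = -dot(n, sub(a, P)) / dot(n, u)
    return lerp(a, b, s)

def g(t):
    P = lerp(el[0], el[1], t)
    n = cross(sub(ei[0], P), sub(ei[1], P))   # plane through P and ei
    d1 = sub(plane_hit(n, P, ej), P)          # line toward ej
    d2 = sub(plane_hit(n, P, ek), P)          # line toward ek
    return cross(d1, d2)[1]                   # zero when the lines coincide

lo, hi = 0.25, 0.75                            # g changes sign on this bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
t_star = 0.5 * (lo + hi)
```

The search converges to t = 0.5, the point where edge el meets the z axis and the two lines through P become congruent.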
 The extremal stabbing lines so identified are stored explicitly as the nodes of the visibility skeleton. The visibility event surfaces (polygons or quadrics) that are bounded by these lines are not directly computed but instead stored implicitly as arcs in the graph. The component edges of the event surface are inferred from the nodes connected to the corresponding arc. Later use of the visibility skeleton for global visibility queries, such as discontinuity meshing in the presence of an area light source, may require the quadric surfaces to be generated directly using, for example, the parametric form of the quadric as described by Teller (1992).
 From the preceding analysis it is clear that both the discontinuity meshing and visibility skeleton methods include primal space techniques for identifying planar and quadric visibility event surfaces produced by area light sources. Both effectively employ the extended tetrahedral wedge test to identify quadric surfaces and the segments of the edge triples that support them. Both methods produce all of the visibility event surfaces between the relevant edges. Neither method is structured to efficiently generate only the from-region umbral boundary surfaces that are relevant in computing from-region visibility.
 Another approach to computing from-region visibility is to transform the problem to line space and compute the umbral boundary surfaces using Plucker coordinates.
 As previously described, the method of Teller (1992) developed the computational machinery necessary to compute the exact planar and quadric elements of an antipenumbra boundary of a portal sequence. This method transformed the problem to 5D line space.
 The portal sequence is a significantly more restricted visibility problem than the general problem of visibility from an area light source (or equivalently a viewcell) in the absence of distinct portals. Moreover, to identify the quadric elements of the antipenumbra boundary Teller had to transform the problem to line space using Plücker coordinates and perform hyperplane intersections in 5D. This transformation increases the algorithmic complexity of the process and introduces potential robustness issues that are not present when working in the primal space.
 Beginning in 2001 two groups of investigators, Bittner (2001) (J. Bittner and J. Přikryl. Exact Regional Visibility Using Line Space Partitioning. Tech. Rep. TR-186-2-01-06, Institute of Computer Graphics and Algorithms, Vienna University of Technology, March 2001.) and Nirenstein (2002) (Nirenstein, S., E. Blake, and J. Gain. "Exact From-Region Visibility Culling." Proceedings of the 13th Eurographics Workshop on Rendering. Proc. of ACM International Conference Proceeding Series, Pisa, Italy, 2002. Vol. 28. Aire-la-Ville: Eurographics Association, 2002. 191-202, the entirety of which is incorporated herein by reference), developed methods to compute the exact viewcell-to-polygon PVS. Like Teller's exact antipenumbra calculation, these methods require a transformation of the problem to Plücker coordinates and depend upon a combination of numerical techniques including singular value decomposition, robust root finding, and high-dimensional convex hull computations. Unlike Teller's approach, these methods do not require an autopartition of the model into a BSP tree with enumerated portals.
 In general, both of these exact methods, Nirenstein (2002) and Bittner (2001), are structured as a visibility query which determines whether an unoccluded sightline exists between two convex graphic primitives (i.e., polygons). One of the tested polygons is a face of the viewcell; the other is a mesh polygon of the modeled environment. The query determines whether other polygons in the model, alone or in combination, occlude all the sightlines between the tested polygons. This occlusion query represents the line space between the polygons by a 5D Euclidean space derived from Plücker space. This mapping requires singular value matrix decomposition. In a subsequent step the method employs constructive solid geometry operations performed in 5-dimensional space. These processes, which form the basis of the visibility query, have a high computational cost. Moreover, because the fundamental organization of the method uses a polygon-to-polygon query, the cost of a naive implementation is O(n^{2.15}) in the number of polygons (Nirenstein 2002).
 The scalability of the method is improved over this worst case by employing trivial acceptance and trivial rejection tests. Trivial acceptance of polygon-to-polygon visibility is established using a polygon-to-polygon ray casting query. If a ray originating at one test polygon reaches the other test polygon without intersecting any intervening polygons in the database then the visibility query can be trivially accepted. While this query has a lower computational cost than the exact Plücker space visibility query, it is itself a relatively expensive test for trivial acceptance. Trivial rejection of clusters of polygons can be accelerated by using a hierarchically organized database. If a query determines that the bounding box of an object is occluded with respect to a viewpoint cell then all of the polygons contained by the bounding box are also occluded. Furthermore, the method treats the occluded bounding box itself as a simple "virtual occluder" (Koltun et al. 2000) (Koltun, Vladlen, Yiorgos Chrysanthou, and Daniel Cohen-Or. "Virtual Occluders: An Efficient Intermediate PVS Representation." Proceedings of the Eurographics Workshop on Rendering Techniques 2000. London: Springer-Verlag, 2000. 59-70, the entirety of which is incorporated herein by reference). As defined by Koltun et al. (2000), a virtual occluder is not part of the original model geometry, but still represents a set of blocked lines. If the bounding box of an object is occluded then it can be used as an occluder for any geometry behind it. None of the polygons within the occluded bounding box need be considered as occluder candidates, as the bounding box itself is more than sufficient to test for occlusion of objects behind it. By employing these virtual occluders in conjunction with a front-to-back processing of scene objects, Nirenstein et al. (2002) significantly improved the scalability of the method from O(n^{2.15}) to O(n^{1.15}) for some tested scenes.
Nevertheless, the method was shown to have a large constant computational overhead. For a densely occluded forest scene consisting of 7.8 million triangles, preprocessing required 2 days 22 hours on a dual Pentium IV 1.7 GHz multiprocessor. This compared to only 59 minutes to preprocess the same database using the extended projection method of Durand et al. implemented on a 200 MHz MIPS R10000 uniprocessor with SGI Onyx2 graphics hardware. The exact method culled an average of 99.12% of the geometry, compared to 95.5% culling achieved with the conservative extended projection method.
 One reason for the exact method's high computational cost is that the polygon-to-polygon occlusion query treats the occlusion caused by each polygon separately and does not explicitly consider the connectivity relationships between polygons to compute an aggregate occlusion. The exact method accounts for the combined occlusion of connected polygons only by the expensive 5D constructive solid geometry process in which each polygon is processed separately. For this exact method the combined occlusion of connected polygons is determined only by the separate subtraction of individual 5D polyhedra (representing the candidate occluding polygons) from a 5D polytope (representing the cell-to-polygon sightlines). In the case of a connected mesh, the shared edges represent a trivial case of occluder fusion, but for the exact method the fusion of these occluders must be explicitly computed and represents a degenerate case for the algorithm, since the resulting polyhedra intersect exactly along the shared edges. In this sense the Nirenstein et al. (2002) method completely neglects the important problem of identifying those specific edges of the polygon model which potentially support from-region visibility event surfaces (the potential from-region silhouette edges) and instead conducts the visibility query using all polygon edges.
 In a later implementation, Nirenstein et al. (2005) (Nirenstein, S., Haumont, D., Makinen, O., A Low Dimensional Framework for Exact Polygon-to-Polygon Occlusion Queries, Eurographics Symposium on Rendering 2005, the entirety of which is incorporated herein by reference) addressed this shortcoming of the method by identifying potential from-viewcell silhouette boundaries and constructing blocker polyhedra in 5D only along these boundaries. The definition of from-region silhouette edges employed in this method is essentially the same as that used in the earlier complete discontinuity meshing method of Drettakis et al. (1994). Although one testbed implementation using this improvement accelerated the method by a factor of 30, the method still has a high constant computational overhead.
 Besides being computationally expensive, the exact method is difficult to implement robustly. The singular value decompositions, robust root finding, and higher dimensional constructive solid geometry computations of the method tend to be very sensitive to numerical tolerances and geometric degeneracies.
 Another shortcoming of the exact from-region method is that current implementations generally do not identify and remove occluded parts of partially occluded polygons. Current implementations of the method employ a polygon-to-polygon visibility query between the faces of the viewcell and the model polygons. The query is specifically structured to identify unoccluded regions between the tested polygons and to terminate early if any such regions are detected. Such implementations include an entire polygon in the PVS even if only a small part of it is visible from the viewcell. Consequently, although the PVS computed by these implementations may be the "exact" set of polygons visible from the region, the PVS may considerably overestimate the exposed surface area visible from the viewcell for large polygons. This can result in considerable overdraw at runtime. Modifying the exact from-region implementations to determine unoccluded fragments would substantially increase the computational cost and complexity of the implementation because: 1) the benefit of early termination would be lost, and 2) the boundaries between unoccluded and occluded fragments are, in general, quadric.
 Because these line-space methods compute the quadric umbra boundaries between the source and target polygons, they can provide an exact solution to this visibility query. In contrast, conservative methods of visibility precomputation employ less precise linearized umbra boundaries either explicitly (volumetric visibility) or implicitly (projective methods). However, since these conservative methods operate in the primal space, they are amenable to simpler, more robust implementations than the line-space methods, which require robust root finding and higher-dimensional constructive solid geometry.
 In both the extended projection method and the volumetric visibility method, as well as the exact from-region methods, a PVS is computed for parallelepiped viewcells that comprise a subdivision of navigable space. The use of parallelepiped viewcells has several advantages over the general convex polyhedral viewcells used by the BSP/portal sequence methods. The spatial subdivision defining the parallelepiped viewcells can easily be arranged as a spatial hierarchy (e.g., a k-d tree), which facilitates a hierarchical approach to PVS determination. In this approach, used by both the extended projection and volumetric visibility methods, the PVS is determined for a viewcell at a high level in the hierarchy and is used as a working set to recursively determine the PVS of child viewcells lower in the hierarchy.
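The hierarchical refinement can be sketched with a toy one-dimensional visibility oracle (an assumption for illustration; real implementations use the from-region visibility tests described above). The key invariant is that a child viewcell can see at most what its parent sees, so the parent's PVS is the child's candidate working set.

```python
from collections import namedtuple

# A viewcell in a simple 1D hierarchy: an interval [lo, hi] with children.
Cell = namedtuple("Cell", "id lo hi children")

def visible_from(cell, p, reach=5.0):
    """Toy visibility oracle: point-object p is visible from the interval
    viewcell if it lies within `reach` units of the cell (an assumption)."""
    return cell.lo - reach <= p <= cell.hi + reach

def hierarchical_pvs(cell, candidates, results):
    """Compute the PVS for a cell, then recurse: each child tests only
    the polygons its parent could see, not the whole model."""
    pvs = {p for p in candidates if visible_from(cell, p)}
    results[cell.id] = pvs
    for child in cell.children:
        hierarchical_pvs(child, pvs, results)
    return results

# Hypothetical hierarchy: root [0, 10] split into two child cells.
leaf1 = Cell("L1", 0, 5, ())
leaf2 = Cell("L2", 5, 10, ())
root = Cell("R", 0, 10, (leaf1, leaf2))
res = hierarchical_pvs(root, {-7, -3, 3, 12, 17}, {})
```

Because each child's candidate set shrinks as the recursion descends, the expensive visibility test is applied to progressively smaller working sets.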
 Another advantage of parallelepiped cells is that they have a simple cell adjacency relationship to neighboring cells. This relationship was exploited in the extended projection implementation of Durand et al. (2000) to implement a delta-PVS storage scheme. In this scheme the entire PVS is stored for a number of key viewcells. For most other viewcells, sets representing the differences of the PVS of adjacent viewcells are stored. This storage scheme substantially reduces the storage requirements for PVS data.
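A minimal sketch of such a delta-PVS scheme, with hypothetical names: full PVS sets are stored only for key viewcells, and each adjacent-cell transition stores only the newly visible and newly invisible object sets, from which the destination cell's PVS can be reconstructed.

```python
def build_delta_pvs(pvs, key_cells, transitions):
    """Store the full PVS only for key viewcells; for each adjacent-cell
    transition (a, b) store only the two difference sets."""
    full = {c: set(pvs[c]) for c in key_cells}
    deltas = {(a, b): (pvs[b] - pvs[a], pvs[a] - pvs[b])
              for a, b in transitions}
    return full, deltas

def apply_delta(current_pvs, delta):
    """Reconstruct the destination cell's PVS from the source cell's PVS."""
    newly_visible, newly_invisible = delta
    return (current_pvs | newly_visible) - newly_invisible

# Hypothetical per-cell PVS data for three adjacent viewcells.
pvs = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {4, 5}}
full, deltas = build_delta_pvs(pvs, ["A"], [("A", "B"), ("B", "C")])
```

Since adjacent viewcells typically see nearly the same geometry, the difference sets are far smaller than the full PVS sets they replace, which is the source of the storage savings described above.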
 In the extended projection implementation the computed PVS encodes conservative viewcell-to-scene-graph-cell visibility at a coarse level of granularity. For this approximate solution the delta-PVS storage for 12,166 viewcells (representing 1/12th of the street area of a city model comprising 6 million polygons) required 60 MB of storage. Extrapolated, the storage of the delta-PVS data for the viewcells comprising all of the streets would be 720 MB. In the runtime portion all geometry is stored in main memory, but the delta-PVS data is fetched from disk.
 Another from-region visibility method which employs a delta-PVS storage scheme is the vLOD method of Chhugani et al. (2005). In this implementation the from-region visibility solution provides a conservative viewcell-to-object PVS using a variation of the "shrunk occluder" method.
 The delta-PVS is a list of object IDs referring to newly visible or newly invisible objects for a viewcell transition. In contrast to the extended projection method, the vLOD implementation does not require all model geometry to be stored in main memory. Instead, geometry is stored on disk and the current and predicted viewpoint locations are used to guide a speculative prefetch process which dynamically loads delta-PVS data and model geometry data. The model geometry is stored on disk using an object reordering scheme that reduces the number of disk accesses by storing together objects that tend to be fetched together. The delta-PVS data is also stored on disk. For a powerplant model of 13 million triangles and 500,000 viewcells, 7 GB is required to store the delta-PVS object IDs. At runtime the vLOD implementation allows real-time rendering of models that are too large to be stored in main memory. Since the models rendered in the vLOD implementation are not textured, the method does not address the storage and dynamic prefetch of texture information. In most modern walkthrough applications, such as games, the amount of texture information for a model is typically much greater than the amount of geometry information.
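The speculative prefetch step can be sketched as follows (hypothetical names; the real vLOD system additionally reorders objects on disk and derives the predicted path from viewpoint motion): walk the predicted sequence of viewcell transitions and schedule loads only for objects that become newly visible and are not already resident in core memory.

```python
def prefetch_plan(current_cell, predicted_path, deltas, in_core):
    """Return the ordered list of object IDs to load from disk along a
    predicted sequence of viewcell transitions."""
    to_load = []
    cell = current_cell
    for nxt in predicted_path:
        newly_visible, _newly_invisible = deltas[(cell, nxt)]
        # Schedule only objects not already resident in core memory.
        to_load.extend(sorted(o for o in newly_visible if o not in in_core))
        in_core = in_core | newly_visible
        cell = nxt
    return to_load

# Hypothetical delta-PVS data: (newly visible, newly invisible) per transition.
deltas = {("A", "B"): ({4, 5}, {1}), ("B", "C"): ({6}, {2})}
plan = prefetch_plan("A", ["B", "C"], deltas, {1, 2, 3, 5})
```

The size of each newly-visible set, divided by the available disk transfer rate, bounds how far ahead of the viewer the prefetch must run to avoid visibility errors.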
 The vLOD system is an example of an out-of-core, real-time rendering system that uses geometry prefetch based on precomputed from-region visibility. An earlier example of this approach, by Funkhouser (Database Management for Interactive Display of Large Architectural Models, Proceedings of the Conference on Graphics Interface '96, Toronto, Ontario, Canada, Pages 1-8, 1996, ISBN: 0969533853, the entirety of which is incorporated herein by reference), used geometry PVS data computed using the portal sequence method. This implementation also used untextured geometry and, like vLOD, does not address the prefetch of texture information.
 Other out-of-core methods use geometry prefetch based on a runtime, conservative, from-point visibility method (e.g., prioritized layered projection, or PLP) which is used to determine a conservative subset of the model visible from the viewpoint (IWALK, MMR). In one variation of this approach the process of primitive reprojection is used to directly identify model geometry that becomes newly exposed as a result of viewpoint motion (U.S. Pat. No. 6,111,582, Jenkins). These from-point visibility methods must be computed at runtime and therefore contribute to the overall runtime computational cost.
 The goal of out-of-core rendering systems is to allow uninterrupted exploration of very large, detailed environments that cannot fit in core memory. Implemented effectively, this streaming approach can eliminate the frequent interruptions caused by traditional loading schemes in which entire sections (e.g., levels) of the environment are loaded before the user proceeds to the next level. Subdividing a complex 3D model into distinct "levels" drastically simplifies the loading and display of the graphics information, but it forces the user to experience a series of disjoint locations, separated by load times that often disrupt the coherence of the experience.
 The available data transfer rate between secondary storage and core memory is a significant limiting factor for streaming implementations (Brad Bulkley, "The Edge of the World," Game Developer Magazine, June/July 2006, pg. 19, the entirety of which is incorporated herein by reference). A delta-PVS storage scheme can substantially reduce the transfer rate required to stream prefetched data. Current delta-PVS implementations do not provide methods to manage texture information. Moreover, they employ coarse-grained cell-to-object or cell-to-scene-graph-cell PVS data that is computed using imprecise from-region visibility computations, which results in overestimated PVS/delta-PVS data. If the size of the delta-PVS data causes the prefetch process to exceed the available transfer rate between secondary storage and core memory then visibility errors can result.
 A from-region visibility precomputation method capable of determining occluded polygon fragments and textures could produce a more precise cell-to-polygon PVS/delta-PVS than existing methods. This would reduce the transfer rate required to support streaming prefetch and also enhance the performance of the display hardware by reducing overdraw.
 From the preceding analysis of the prior art it is clear that existing methods of from-region visibility precomputation use either (a) imprecise visibility event boundaries, which produce imprecise PVS solutions, or (b) exact visibility event surfaces, which must be computed in five-dimensional line space. Such line space computations incur high computational cost and algorithmic complexity and are difficult to implement robustly. Moreover, for a single collection of polyhedral objects, some exact from-region visibility event surfaces are well approximated by simpler, linearized extremal umbra boundaries, while others are not. This makes exact approaches overly sensitive to detailed input, in the sense that in some regions of a typical polyhedral model much computation can be expended to compute a very small amount of occlusion.
 Consequently, a general method of PVS determination that identifies conservative linearized umbral event surfaces in the primal space, estimates the deviation of these surfaces from the exact event surfaces, and adaptively refines these surfaces to more precisely approximate the exact surfaces would enable from-region visibility precomputation with improved precision and reduced computational cost compared to existing methods.
 Such a practical method of precision-controlled PVS determination could be used in conjunction with delta-PVS and intermediate representation schemes which reduce storage costs and facilitate visibility-based streaming prefetch. This visibility-based streaming prefetch method would allow the user to quickly begin interacting with a massive textured 3D model because initially only the geometry, texture, and other graphic elements visible in the vicinity of the user's initial location would be delivered. This initial data is typically a small fraction of the entire graphical database for the modeled environment. This method would significantly decrease the waiting time for interactivity when compared to existing methods, such as MPEG-4 part 11 (VRML or X3D), which do not specify an efficient, visibility-based prefetch streaming approach. Such existing methods typically either require the entire database to be downloaded before interactivity begins or, alternatively, are subject to visibility errors (e.g., the sudden appearance of objects) during user navigation.
 In exemplary embodiments, a computer-implemented method determines a set of mesh polygons or fragments of said mesh polygons visible from a navigation cell, said mesh polygons forming polygon meshes. The method includes determining a composite view frustum containing predetermined view frusta in said navigation cell. The method further includes determining mesh polygons contained in said composite view frustum. The method further includes determining at least one supporting polygon between said navigation cell and said contained mesh polygons. The method further includes constructing at least one wedge from said at least one supporting polygon, said at least one wedge extending away from said navigation cell beyond at least said contained mesh polygons. The method further includes determining one or more intersections of said at least one wedge with said contained mesh polygons. The method also includes determining said set of said contained mesh polygons or fragments of said contained mesh polygons visible from said navigation cell using said determined one or more intersections of said at least one wedge with said polygon meshes.
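The supporting-polygon step described above can be illustrated with a simplified geometric sketch (assumptions for illustration only: a convex viewcell given by its vertex list, a single mesh edge, and orientation bookkeeping ignored). A supporting plane through the edge leaves the entire viewcell on one side; the supporting polygon is the triangle formed by the edge and the supporting viewcell vertex, and the wedge is that polygon extruded away from the viewcell.

```python
import numpy as np

def supporting_wedge(edge, viewcell_verts, far=100.0, eps=1e-9):
    """Find a viewcell vertex whose plane through `edge` has the whole
    viewcell on one side (a supporting plane), then extrude the supporting
    polygon away from the viewcell into a wedge (a finite quad here)."""
    a, b = edge
    for v in viewcell_verts:
        n = np.cross(b - a, v - a)          # plane through the edge and v
        if np.linalg.norm(n) < eps:
            continue                        # degenerate: v collinear with edge
        sides = [np.dot(n, w - a) for w in viewcell_verts]
        if all(s <= eps for s in sides) or all(s >= -eps for s in sides):
            # Supporting polygon is triangle (a, b, v); the wedge extends
            # from the edge along the directions away from vertex v.
            da = (a - v) / np.linalg.norm(a - v)
            db = (b - v) / np.linalg.norm(b - v)
            wedge = np.array([a, b, b + far * db, a + far * da])
            return v, wedge
    return None, None

# Hypothetical setup: a unit-cube viewcell and a mesh edge outside it,
# lying in the plane z = 0, so the supporting plane is z = 0.
cube = [np.array(p, dtype=float)
        for p in [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]]
edge = (np.array([3.0, 0.0, 0.0]), np.array([3.0, 1.0, 0.0]))
v, wedge = supporting_wedge(edge, cube)
```

A full implementation would choose between the two supporting planes of each edge based on occluder orientation and extend the wedge semi-infinitely, then intersect it with the mesh polygons as the embodiments describe.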
 In exemplary embodiments, a computer-implemented method is conducted on a server. The method includes storing graphics information including a first set of graphics information visible from a second navigation cell and not visible from a first navigation cell, said first set of graphics information visible from any direction in said second navigation cell, and said graphics information including a second set of graphics information visible from said second navigation cell and not visible from said first navigation cell, said second set of graphics information included within predetermined view frusta associated with said second navigation cell. The method further includes determining a period during which said first set of graphics information arrives after a client computing device is scheduled to access said first set of graphics information on said client computing device. The method also includes sending said second set of graphics information during said determined period to said client computing device.
 In exemplary embodiments, a server determines a set of mesh polygons or fragments of said mesh polygons visible from a navigation cell, said mesh polygons forming polygon meshes. The server includes a processor configured to determine a composite view frustum containing predetermined view frusta in said navigation cell. The processor is further configured to determine mesh polygons contained in said composite view frustum and determine at least one supporting polygon between said navigation cell and said contained mesh polygons. The processor is further configured to construct at least one wedge from said at least one supporting polygon, said at least one wedge extending away from said navigation cell beyond at least said contained mesh polygons. The processor is further configured to determine one or more intersections of said at least one wedge with said contained mesh polygons, and determine said set of said contained mesh polygons or fragments of said contained mesh polygons visible from said navigation cell using said determined one or more intersections of said at least one wedge with said polygon meshes.
 In exemplary embodiments, a system determines a set of mesh polygons or fragments of said mesh polygons visible from a navigation cell, said mesh polygons forming polygon meshes. The system includes a server having a processor configured to determine a composite view frustum containing predetermined view frusta in said navigation cell. The processor is further configured to determine mesh polygons contained in said composite view frustum and determine at least one supporting polygon between said navigation cell and said contained mesh polygons. The processor is further configured to construct at least one wedge from said at least one supporting polygon, said at least one wedge extending away from said navigation cell beyond at least said contained mesh polygons. The processor is further configured to determine one or more intersections of said at least one wedge with said contained mesh polygons, and determine said set of said contained mesh polygons or fragments of said contained mesh polygons visible from said navigation cell using said determined one or more intersections of said at least one wedge with said polygon meshes. The system further includes a client computing device configured to receive and display said determined set of said contained mesh polygons or fragments of said contained mesh polygons visible from said navigation cell.
 In exemplary embodiments, a non-transitory computer readable storage medium has executable instructions stored thereon, which when executed by a processor in a server causes the processor to execute a method for determining a set of mesh polygons or fragments of said mesh polygons visible from a navigation cell, said mesh polygons forming polygon meshes.
 The method includes determining a composite view frustum containing predetermined view frusta in said navigation cell. The method further includes determining mesh polygons contained in said composite view frustum. The method further includes determining at least one supporting polygon between said navigation cell and said contained mesh polygons. The method further includes constructing at least one wedge from said at least one supporting polygon, said at least one wedge extending away from said navigation cell beyond at least said contained mesh polygons. The method further includes determining one or more intersections of said at least one wedge with said contained mesh polygons. The method also includes determining said set of said contained mesh polygons or fragments of said contained mesh polygons visible from said navigation cell using said determined one or more intersections of said at least one wedge with said polygon meshes.
 In exemplary embodiments, a computer-implemented method is conducted on a server. The method includes storing information indicating a set of renderable graphics information visible from a first navigation cell, said first navigation cell contained in a second navigation cell, said renderable graphics information including mesh polygons forming at least one polygon mesh. The method further includes sending, to a client computing device, information indicating at least one seed polygon, said at least one seed polygon being a polygon visible from said second navigation cell. The method also includes sending, to a client computing device, information indicating at least one encounter number, said encounter number being a number of iterations of a deterministic mesh traversal required to encounter at least one transitional edge, said deterministic mesh traversal starting at said seed polygon and traversing said at least one polygon mesh, and said transitional edge being an edge of said at least one polygon mesh and said transitional edge having at least one polygon sharing said transitional edge that is occluded from said first navigation cell.
 In exemplary embodiments, a computer-implemented method is conducted on a client computing device. The method includes receiving information from a server, said information indicating a set of renderable graphics information visible from a first navigation cell, said first navigation cell contained in a second navigation cell, said renderable graphics information including mesh polygons, said mesh polygons forming at least one polygon mesh. The method further includes receiving information indicating at least one seed polygon, said at least one seed polygon being a polygon visible from said first navigation cell. The method further includes receiving information indicating at least one encounter number, said encounter number being a number of edge-iterations of a deterministic mesh traversal required to encounter at least one transitional edge, said deterministic mesh traversal starting at said at least one seed polygon and traversing said at least one polygon mesh, and said transitional edge being an edge of said at least one polygon mesh and said transitional edge having at least one polygon sharing said transitional edge that is occluded from said first navigation cell. The method further includes conducting said deterministic mesh traversal, beginning at said at least one seed polygon. The method also includes interrupting said deterministic mesh traversal at said at least one encounter number corresponding to said transitional edge.
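The seed-polygon/encounter-number scheme described in these embodiments can be sketched as follows (illustrative data layout: each polygon maps to an ordered list of (edge, neighbor) pairs agreed upon by server and client). Because the traversal order is deterministic, the server need not transmit the transitional edges themselves; it sends only the iteration counts at which the client's identical traversal will encounter them.

```python
from collections import deque

def traverse_visible(mesh, seed, encounter_numbers):
    """Deterministic breadth-first traversal from a seed polygon.
    `mesh` maps polygon id -> ordered list of (edge_id, neighbor) pairs;
    the same fixed order must be used by server and client. The traversal
    counts edge-iterations and refuses to cross an edge whose count is in
    `encounter_numbers` (a transitional edge), so the visible submesh is
    recovered without an explicit edge or polygon list."""
    visible = {seed}
    queue = deque([seed])
    step = 0
    while queue:
        poly = queue.popleft()
        for edge_id, neighbor in mesh[poly]:   # fixed, agreed-upon order
            step += 1
            if step in encounter_numbers:      # transitional: beyond is occluded
                continue
            if neighbor is not None and neighbor not in visible:
                visible.add(neighbor)
                queue.append(neighbor)
    return visible

# Hypothetical strip of five polygons 0-1-2-3-4 joined edge to edge.
mesh = {
    0: [("e01", 1)],
    1: [("e01", 0), ("e12", 2)],
    2: [("e12", 1), ("e23", 3)],
    3: [("e23", 2), ("e34", 4)],
    4: [("e34", 3)],
}
```

With encounter number 5 the client stops before crossing edge e23, so polygons 3 and 4 are excluded; transmitting the single integer 5 replaces transmitting the occluded-boundary edge list.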
 In exemplary embodiments, a server includes a memory to store information indicating a set of renderable graphics information visible from a first navigation cell, said first navigation cell contained in a second navigation cell, said renderable graphics information including mesh polygons forming at least one polygon mesh. The server further includes a processor configured to send, to a client computing device, information indicating at least one seed polygon, said at least one seed polygon being a polygon visible from said second navigation cell. The processor is further configured to send, to a client computing device, information indicating at least one encounter number, said encounter number being a number of iterations of a deterministic mesh traversal required to encounter at least one transitional edge, said deterministic mesh traversal starting at said seed polygon and traversing said at least one polygon mesh, and said transitional edge being an edge of said at least one polygon mesh and said transitional edge having at least one polygon sharing said transitional edge that is occluded from said first navigation cell.
 In exemplary embodiments, a client computing device includes a processor configured to receive information from a server, said information indicating a set of renderable graphics information visible from a first navigation cell, said first navigation cell contained in a second navigation cell, said renderable graphics information including mesh polygons, said mesh polygons forming at least one polygon mesh. The processor is further configured to receive information indicating at least one seed polygon, said at least one seed polygon being a polygon visible from said first navigation cell. The processor is further configured to receive information indicating at least one encounter number, said encounter number being a number of edge-iterations of a deterministic mesh traversal required to encounter at least one transitional edge, said deterministic mesh traversal starting at said at least one seed polygon and traversing said at least one polygon mesh, and said transitional edge being an edge of said at least one polygon mesh and said transitional edge having at least one polygon sharing said transitional edge that is occluded from said first navigation cell. The processor is further configured to conduct said deterministic mesh traversal, beginning at said at least one seed polygon. The processor is also configured to interrupt said deterministic mesh traversal at said at least one encounter number corresponding to said transitional edge.
 In exemplary embodiments, a non-transitory computer readable storage medium has executable instructions stored thereon, which when executed by a processor in a server causes the processor to execute a method. The method includes storing information indicating a set of renderable graphics information visible from a first navigation cell, said first navigation cell contained in a second navigation cell, said renderable graphics information including mesh polygons forming at least one polygon mesh. The method further includes sending, to a client computing device, information indicating at least one seed polygon, said at least one seed polygon being a polygon visible from said second navigation cell. The method also includes sending, to a client computing device, information indicating at least one encounter number, said encounter number being a number of iterations of a deterministic mesh traversal required to encounter at least one transitional edge, said deterministic mesh traversal starting at said seed polygon and traversing said at least one polygon mesh, and said transitional edge being an edge of said at least one polygon mesh and said transitional edge having at least one polygon sharing said transitional edge that is occluded from said first navigation cell.
 In exemplary embodiments, a non-transitory computer readable storage medium has executable instructions stored thereon, which when executed by a processor in a client computing device causes the processor to execute a method. The method includes receiving information from a server, said information indicating a set of renderable graphics information visible from a first navigation cell, said first navigation cell contained in a second navigation cell, said renderable graphics information including mesh polygons, said mesh polygons forming at least one polygon mesh. The method further includes receiving information indicating at least one seed polygon, said at least one seed polygon being a polygon visible from said first navigation cell. The method further includes receiving information indicating at least one encounter number, said encounter number being a number of edge-iterations of a deterministic mesh traversal required to encounter at least one transitional edge, said deterministic mesh traversal starting at said at least one seed polygon and traversing said at least one polygon mesh, and said transitional edge being an edge of said at least one polygon mesh and said transitional edge having at least one polygon sharing said transitional edge that is occluded from said first navigation cell. The method further includes conducting said deterministic mesh traversal, beginning at said at least one seed polygon. The method also includes interrupting said deterministic mesh traversal at said at least one encounter number corresponding to said transitional edge.
 In exemplary embodiments, a system includes a server having a memory to store information indicating a set of renderable graphics information visible from a first navigation cell, said first navigation cell contained in a second navigation cell, said renderable graphics information including mesh polygons forming at least one polygon mesh. The server further includes a processor configured to send, to a client computing device, information indicating at least one seed polygon, said at least one seed polygon being a polygon visible from said second navigation cell, and send, to a client computing device, information indicating at least one encounter number, said encounter number being a number of iterations of a deterministic mesh traversal required to encounter at least one transitional edge, said deterministic mesh traversal starting at said seed polygon and traversing said at least one polygon mesh, and said transitional edge being an edge of said at least one polygon mesh and said transitional edge having at least one polygon sharing said transitional edge that is occluded from said first navigation cell. The system further includes a client computing device having a processor configured to receive said information indicating said set of renderable graphics information visible from said first navigation cell, receive said information indicating said at least one seed polygon, said at least one seed polygon being a polygon visible from said first navigation cell, receive information indicating said at least one encounter number, conduct said deterministic mesh traversal, beginning at said at least one seed polygon, and interrupt said deterministic mesh traversal at said at least one encounter number corresponding to said transitional edge.
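The seed-polygon and encounter-number scheme of the preceding embodiments can be sketched in code. The following is a minimal illustration, assuming a deterministic breadth-first traversal over a mesh stored as an adjacency map in which each polygon lists its neighbors in a fixed order (so server and client traverse identically); the function name, the adjacency-map representation, and the per-edge iteration count are hypothetical, not details taken from the specification:

```python
from collections import deque

def traverse_until_encounter(mesh, seed_polygon, encounter_numbers):
    """Deterministic breadth-first traversal of a polygon mesh.

    `mesh` maps a polygon id to its neighbor ids in a fixed order, so the
    traversal order is reproducible on server and client alike. Each edge
    crossing increments an iteration counter; crossings whose count is in
    `encounter_numbers` are transitional edges and are not traversed, which
    interrupts the traversal at the occlusion boundary.
    """
    visited = {seed_polygon}
    queue = deque([seed_polygon])
    iteration = 0
    kept = []                                # polygons delivered to the renderer
    while queue:
        poly = queue.popleft()
        kept.append(poly)
        for neighbor in mesh[poly]:          # fixed neighbor order => deterministic
            iteration += 1
            if iteration in encounter_numbers:
                continue                     # transitional edge: do not cross
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return kept
```

With the encounter number 4 supplied, the traversal below stops short of polygon 3, which is occluded from the smaller navigation cell; without it, the whole mesh is visited.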
 In exemplary embodiments, a computer-implemented method is conducted on a server. The method includes sending information to a client computing device, said information indicating a set of renderable graphics information representing at least one moving object included in a computer generated modeled environment. The method further includes determining a set of navigation cells including regions of the computer generated modeled environment in which the moving objects are permitted to traverse and determining if said at least one moving object enters at least one of said navigation cells. The method also includes sending said graphics information representing said at least one moving object entering said at least one of said navigation cells.
 In exemplary embodiments, a system includes a server having a processor configured to send information to a client computing device, said information indicating a set of renderable graphics information representing at least one moving object included in a computer generated modeled environment. The processor is further configured to determine a set of navigation cells including regions of the computer generated modeled environment in which the moving objects are permitted to traverse and determine if said at least one moving object enters at least one of said navigation cells. The processor is also configured to send said graphics information representing said at least one moving object entering said at least one of said navigation cells. The system further includes a client computing device to receive and display said graphics information.
 In exemplary embodiments, a non-transitory computer readable storage medium has executable instructions stored thereon, which when executed by a processor in a server cause the processor to execute a method. The method includes sending information to a client computing device, said information indicating a set of renderable graphics information representing at least one moving object included in a computer generated modeled environment. The method further includes determining a set of navigation cells including regions of the computer generated modeled environment in which the moving objects are permitted to traverse and determining if said at least one moving object enters at least one of said navigation cells. The method also includes sending said graphics information representing said at least one moving object entering said at least one of said navigation cells.
 In exemplary embodiments, a computer-implemented method is conducted on a client computing device. The method includes receiving information from a server, said information indicating a set of renderable graphics information representing at least one moving object in a computer generated modeled environment. The method further includes receiving said graphics information representing said at least one moving object entering a navigation cell including a subset of a region of the computer generated modeled environment in which the at least one moving object is permitted to traverse.
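The server-side decision in the moving-object embodiments above — deliver an object's renderable graphics only once the object enters a navigation cell and only if it has not already been sent — can be sketched as follows. The axis-aligned box cells and every function name are illustrative assumptions, not details from the specification:

```python
def cell_contains(cell, point):
    """Axis-aligned containment test; `cell` is a (min_corner, max_corner) pair."""
    lo, hi = cell
    return all(l <= p <= h for l, p, h in zip(lo, point, hi))

def objects_to_send(objects, cells, sent):
    """Return ids of moving objects that are inside some navigation cell and
    whose graphics information has not yet been delivered to the client.

    `objects` maps object id -> current position; `cells` is the set of
    navigation cells the objects are permitted to traverse; `sent` is the
    set of ids already streamed.
    """
    out = []
    for oid, pos in objects.items():
        if oid not in sent and any(cell_contains(c, pos) for c in cells):
            out.append(oid)
    return out
```

An object outside every cell, or one whose geometry was already streamed, consumes no transmission bandwidth on subsequent updates.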
 In exemplary embodiments, a computer-implemented method is conducted on a client computing device. The method includes receiving a first set of information from a server, said first set of information including parameters determining movement of at least one autonomous moving object. The method further includes receiving a second set of information from said server, said second set of information including information representing at least one navigation cell including a subset of a region of a computer generated modeled environment in which said at least one autonomous moving object is permitted to traverse. The method further includes determining if said at least one autonomous moving object enters said at least one navigation cell and receiving, if said at least one autonomous moving object enters said at least one navigation cell, a third set of information from said server, said third set of information indicating a set of renderable graphics information representing said at least one autonomous moving object.
 In exemplary embodiments, a client computing device includes a processor configured to receive a first set of information from a server, said first set of information including parameters determining movement of at least one autonomous moving object. The processor is further configured to receive a second set of information from said server, said second set of information including information representing at least one navigation cell including a subset of a region of a computer generated modeled environment in which said at least one autonomous moving object is permitted to traverse. The processor is further configured to determine if said at least one autonomous moving object enters said at least one navigation cell, and receive, if said at least one autonomous moving object enters said at least one navigation cell, a third set of information from said server, said third set of information indicating a set of renderable graphics information representing said at least one autonomous moving object.
 In exemplary embodiments, a non-transitory computer readable storage medium has executable instructions stored thereon, which when executed by a processor in a client computing device, cause the processor to execute a method. The method includes receiving a first set of information from a server, said first set of information including parameters determining movement of at least one autonomous moving object. The method further includes receiving a second set of information from said server, said second set of information including information representing at least one navigation cell including a subset of a region of a computer generated modeled environment in which said at least one autonomous moving object is permitted to traverse. The method further includes determining if said at least one autonomous moving object enters said at least one navigation cell. The method also includes receiving, if said at least one autonomous moving object enters said at least one navigation cell, a third set of information from said server, said third set of information indicating a set of renderable graphics information representing said at least one autonomous moving object.
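Because the client receives the movement parameters, it can advance the autonomous object locally and defer requesting the object's graphics until the object enters a navigation cell. A toy sketch, assuming constant-velocity motion and an axis-aligned cell — the specification leaves both the movement model and the cell geometry open, and all names here are hypothetical:

```python
def in_cell(cell, point):
    # `cell` is a (min_corner, max_corner) pair of coordinate tuples.
    lo, hi = cell
    return all(l <= p <= h for l, p, h in zip(lo, point, hi))

def steps_until_entry(start, velocity, cell, dt=0.5, max_steps=1000):
    """Step the object along its parametric path and return the first step
    index at which it lies inside the navigation cell, or None if it never
    enters within `max_steps`. At that step a client would issue the request
    for the object's renderable graphics information."""
    pos = list(start)
    for step in range(max_steps):
        if in_cell(cell, pos):
            return step
        pos = [p + v * dt for p, v in zip(pos, velocity)]
    return None
```

Only the small parameter packet crosses the network up front; the larger graphics payload is requested on cell entry.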
 In exemplary embodiments, a computer-implemented method is conducted on a server. The method includes storing information indicating at least one navigation cell that represents part of a navigable space of a computer generated modeled environment. The method further includes sending said information representing said navigation cell to a client computing device upon determination that said at least one navigation cell is reachable via the navigable space from a predicted client viewpoint location.
 In exemplary embodiments, a system includes a server having a memory to store information indicating at least one navigation cell that represents part of a navigable space of a computer generated modeled environment. The server is further configured to send said information representing said navigation cell to a client computing device upon determination that said at least one navigation cell is reachable via the navigable space from a predicted client viewpoint location. The system further includes a client computing device having a processor configured to determine a location in the navigable space using said information.
 In exemplary embodiments, a non-transitory computer readable storage medium has executable instructions stored thereon, which when executed by a processor in a server cause the processor to execute a method. The method includes storing information indicating at least one navigation cell that represents part of a navigable space of a computer generated modeled environment. The method further includes sending said information representing said navigation cell to a client computing device upon determination that said at least one navigation cell is reachable via the navigable space from a predicted client viewpoint location.
 In exemplary embodiments, a computer-implemented method is conducted on a client computing device. The method includes receiving information from a server, said information indicating at least one navigation cell that is reachable from a client viewpoint location, said navigation cell representing part of the navigable space of a computer generated modeled environment. The method further includes determining a location in said navigable space using said received information.
 In exemplary embodiments, a non-transitory computer readable storage medium has executable instructions stored thereon, which when executed by a processor in a client computing device cause the processor to execute a method. The method includes receiving information from a server, said information indicating at least one navigation cell that is reachable from a client viewpoint location, said navigation cell representing part of the navigable space of a computer generated modeled environment. The method further includes determining a location in said navigable space using said received information.
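Reachability over navigation cells is naturally expressed as a graph search. A minimal sketch, assuming the navigable space is encoded as a cell-adjacency map and using a hop limit as the prefetch horizon — the hop-limit horizon is an illustrative addition for bounding the search, not a detail of the specification:

```python
from collections import deque

def reachable_cells(adjacency, start, max_hops):
    """Breadth-first search over the navigation-cell adjacency graph.

    Cells reachable within `max_hops` moves of the cell containing the
    predicted viewpoint location are candidates to stream to the client;
    cells beyond the horizon (or disconnected from it) are withheld,
    reducing transmission bandwidth.
    """
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        cell, hops = frontier.popleft()
        if hops == max_hops:
            continue                      # horizon reached: do not expand further
        for nxt in adjacency.get(cell, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return seen
```

Widening the horizon trades bandwidth for tolerance to viewpoint-prediction error.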
 In exemplary embodiments, a computer-implemented method is conducted on a client computing device. The method includes receiving, from a server, information representing a second navigation cell and a first navigation cell, each representing part of a navigable space of a computer generated modeled environment. The method further includes predicting a likelihood that a client-user viewpoint moves from said first navigation cell to said second navigation cell. The method also includes receiving, from the server upon determination that the likelihood that said client-user viewpoint moves from said first navigation cell to said second navigation cell is greater than a predetermined threshold, a set of renderable graphics information visible from said second navigation cell and not visible from said first navigation cell.
 In exemplary embodiments, a client computing device includes a processor configured to receive, from a server, information representing a second navigation cell and a first navigation cell, each representing part of a navigable space of a computer generated modeled environment. The processor is further configured to predict a likelihood that a client-user viewpoint moves from said first navigation cell to said second navigation cell. The processor is also configured to receive, from the server upon determination that the likelihood that said client-user viewpoint moves from said first navigation cell to said second navigation cell is greater than a predetermined threshold, a set of renderable graphics information visible from said second navigation cell and not visible from said first navigation cell.
 In exemplary embodiments, a non-transitory computer readable storage medium has executable instructions stored thereon, which when executed by a processor in a client computing device cause the processor to execute a method. The method includes receiving, from a server, information representing a second navigation cell and a first navigation cell, each representing part of a navigable space of a computer generated modeled environment. The method further includes predicting a likelihood that a client-user viewpoint moves from said first navigation cell to said second navigation cell. The method also includes receiving, from the server upon determination that the likelihood that said client-user viewpoint moves from said first navigation cell to said second navigation cell is greater than a predetermined threshold, a set of renderable graphics information visible from said second navigation cell and not visible from said first navigation cell.
 In exemplary embodiments, a computer-implemented method is conducted on a server. The method includes sending navigation cell information to a client computing device, said navigation cell information including information representing a second navigation cell and a first navigation cell, each representing part of a navigable space of a computer generated modeled environment. The method further includes receiving, from said client computing device, a request for a set of renderable graphics information including information visible from said second navigation cell and not visible from said first navigation cell, said request issued by said client computing device upon determination that a likelihood that a client-user viewpoint moves from said first navigation cell to said second navigation cell is greater than a predetermined threshold. The method also includes sending, upon receiving said request from said client, said set of renderable graphics information.
 In exemplary embodiments, a system includes a server having a processor configured to send navigation cell information to a client computing device, said navigation cell information including information representing a second navigation cell and a first navigation cell, each representing part of a navigable space of a computer generated modeled environment. The system further includes a client computing device having a processor configured to determine whether a likelihood that a client-user viewpoint moves from said first navigation cell to said second navigation cell is greater than a predetermined threshold. Further, the server receives, from said client computing device, a request for a set of renderable graphics information including information visible from said second navigation cell and not visible from said first navigation cell upon determination that the likelihood is greater than the predetermined threshold. Additionally, the server sends, upon receiving said request from said client, said set of renderable graphics information.
 In exemplary embodiments, a non-transitory computer readable storage medium has executable instructions stored thereon, which when executed by a processor in a server cause the processor to execute a method. The method includes sending navigation cell information to a client computing device, said navigation cell information including information representing a second navigation cell and a first navigation cell, each representing part of a navigable space of a computer generated modeled environment. The method further includes receiving, from said client computing device, a request for a set of renderable graphics information including information visible from said second navigation cell and not visible from said first navigation cell, said request issued by said client computing device upon determination that a likelihood that a client-user viewpoint moves from said first navigation cell to said second navigation cell is greater than a predetermined threshold. The method also includes sending, upon receiving said request from said client, said set of renderable graphics information.
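One simple way to obtain the likelihood used in these embodiments is from the alignment of the viewpoint's motion with the direction to the candidate cell. The cosine heuristic and the 0.5 default threshold below are assumptions for illustration only; the specification does not prescribe a particular prediction model, and all identifiers are hypothetical:

```python
import math

def transition_likelihood(viewpoint, velocity, cell_center):
    """Heuristic likelihood that the viewpoint will move into the cell
    centered at `cell_center`: cosine of the angle between the current
    velocity and the direction to the cell, clamped to [0, 1]."""
    d = [c - p for c, p in zip(cell_center, viewpoint)]
    nd = math.sqrt(sum(x * x for x in d)) or 1.0      # avoid division by zero
    nv = math.sqrt(sum(x * x for x in velocity)) or 1.0
    cos = sum(a * b for a, b in zip(velocity, d)) / (nd * nv)
    return max(0.0, cos)                               # moving away => zero

def should_request(viewpoint, velocity, cell_center, threshold=0.5):
    # Client-side gate: request the delta packet (geometry visible from the
    # second cell but not the first) only when the likelihood exceeds the
    # predetermined threshold.
    return transition_likelihood(viewpoint, velocity, cell_center) > threshold
```

Gating the request this way keeps low-probability cell transitions from consuming bandwidth.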
 In exemplary embodiments, a computer-implemented method is conducted on a server. The method includes storing graphics information including a first set of graphics information visible from a second navigation cell and not visible from a first navigation cell, and a second set of graphics information visible from said second navigation cell and not visible from said first navigation cell, said second set of graphics information having a lower level-of-detail than said first set of graphics information, each of said first and second navigation cells representing part of a navigable space of a computer generated modeled environment. The method further includes determining a first period during which said first set of graphics information is determined to arrive after a client computing device is scheduled to access said first set of graphics information on said client computing device.
 The method further includes determining a visual salience of said first set and said second set of graphics information, said visual salience representing a likelihood that the client computing device is tracking an object moving in said navigable space, said visual salience being a function of a current client viewpoint and one or more view direction vectors extending from said current client viewpoint. The method further includes sending said second set of graphics information during said first period upon determination that said visual salience of said first set and said second set of graphics information is below a predetermined value. The method also includes sending said first set of graphics information upon determination that said visual salience is greater than or equal to said predetermined value.
 In exemplary embodiments, a server includes a memory to store graphics information including a first set of graphics information visible from a second navigation cell and not visible from a first navigation cell, and a second set of graphics information visible from said second navigation cell and not visible from said first navigation cell, said second set of graphics information having a lower level-of-detail than said first set of graphics information, each of said first and second navigation cells representing part of a navigable space of a computer generated modeled environment. The server further includes a processor configured to determine a first period during which said first set of graphics information is determined to arrive after a client computing device is scheduled to access said first set of graphics information on said client computing device and determine a visual salience of said first set and said second set of graphics information, said visual salience representing a likelihood that the client computing device is tracking an object moving in said navigable space, said visual salience being a function of a current client viewpoint and one or more view direction vectors extending from said current client viewpoint. The processor is further configured to send said second set of graphics information during said first period upon determination that said visual salience of said first set and said second set of graphics information is below a predetermined value, and send said first set of graphics information upon determination that said visual salience is greater than or equal to said predetermined value.
 In exemplary embodiments, a non-transitory computer readable storage medium has executable instructions stored thereon, which when executed by a processor in a server cause the processor to execute a method. The method includes storing graphics information including a first set of graphics information visible from a second navigation cell and not visible from a first navigation cell, and a second set of graphics information visible from said second navigation cell and not visible from said first navigation cell, said second set of graphics information having a lower level-of-detail than said first set of graphics information, each of said first and second navigation cells representing part of a navigable space of a computer generated modeled environment. The method further includes determining a first period during which said first set of graphics information is determined to arrive after a client computing device is scheduled to access said first set of graphics information on said client computing device. The method further includes determining a visual salience of said first set and said second set of graphics information, said visual salience representing a likelihood that the client computing device is tracking an object moving in said navigable space, said visual salience being a function of a current client viewpoint and one or more view direction vectors extending from said current client viewpoint. The method further includes sending said second set of graphics information during said first period upon determination that said visual salience of said first set and said second set of graphics information is below a predetermined value. The method also includes sending said first set of graphics information upon determination that said visual salience is greater than or equal to said predetermined value.
 In exemplary embodiments, a computer-implemented method is conducted on a client computing device. The method includes determining a first period during which a first set of graphics information is determined to arrive after said client computing device is scheduled to access said first set of graphics information on said client computing device, said first set of graphics information visible from a second navigation cell and not visible from a first navigation cell, each of the first and second navigation cells representing part of a navigable space of a computer generated modeled environment. The method further includes receiving a second set of graphics information during said first period if a visual salience of said first set and said second set of graphics information is below a predetermined value, said visual salience being a function of current client viewpoint and view direction vectors, said visual salience representing a likelihood that said client computing device is tracking an object moving in said navigable space, said second set of graphics information visible from said second navigation cell and not visible from said first navigation cell, said second set of graphics information having a lower level-of-detail than said first set of graphics information. The method also includes receiving said first set of graphics information upon determination that said visual salience is greater than or equal to said predetermined value.
 In exemplary embodiments, a client computing device includes a processor configured to determine a first period during which a first set of graphics information is determined to arrive after said client computing device is scheduled to access said first set of graphics information on said client computing device, said first set of graphics information visible from a second navigation cell and not visible from a first navigation cell, each of the first and second navigation cells representing part of a navigable space of a computer generated modeled environment. The processor is further configured to receive a second set of graphics information during said first period if a visual salience of said first set and said second set of graphics information is below a predetermined value, said visual salience being a function of current client viewpoint and view direction vectors, said visual salience representing a likelihood that said client computing device is tracking an object moving in said navigable space, said second set of graphics information visible from said second navigation cell and not visible from said first navigation cell, said second set of graphics information having a lower level-of-detail than said first set of graphics information. The processor is also configured to receive said first set of graphics information upon determination that said visual salience is greater than or equal to said predetermined value.
 In exemplary embodiments, a non-transitory computer readable storage medium has executable instructions stored thereon, which when executed by a processor in a client computing device cause the processor to execute a method. The method includes determining a first period during which a first set of graphics information is determined to arrive after said client computing device is scheduled to access said first set of graphics information on said client computing device, said first set of graphics information visible from a second navigation cell and not visible from a first navigation cell, each of the first and second navigation cells representing part of a navigable space of a computer generated modeled environment. The method further includes receiving a second set of graphics information during said first period if a visual salience of said first set and said second set of graphics information is below a predetermined value, said visual salience being a function of current client viewpoint and view direction vectors, said visual salience representing a likelihood that said client computing device is tracking an object moving in said navigable space, said second set of graphics information visible from said second navigation cell and not visible from said first navigation cell, said second set of graphics information having a lower level-of-detail than said first set of graphics information. The method also includes receiving said first set of graphics information upon determination that said visual salience is greater than or equal to said predetermined value.
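The timing-and-salience decision in these level-of-detail embodiments reduces to a small policy. The sketch below uses scalar stand-ins for the arrival estimate, the access deadline, and the salience test; all of these names, the cosine-based salience, and the threshold semantics are illustrative assumptions rather than details from the specification:

```python
def view_salience(view_dir, to_object):
    """Toy salience: alignment of a (unit) view direction vector with the
    (unit) direction from the current client viewpoint to the streamed
    geometry. High alignment suggests the user is tracking the object."""
    return max(0.0, sum(a * b for a, b in zip(view_dir, to_object)))

def choose_packet(arrival_time, deadline, salience, salience_threshold=0.5):
    """If the full-detail packet would arrive before the client is scheduled
    to access it, send it. If it would be late, substitute the lower
    level-of-detail packet only when the geometry is unlikely to be
    scrutinized (salience below threshold); otherwise send full detail
    and accept the delay."""
    if arrival_time <= deadline:
        return "full"
    return "low" if salience < salience_threshold else "full"
```

During the late-arrival period the low-detail substitution keeps the stream timely where the degradation is least likely to be noticed.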

 FIG. 1 is an exemplary flowchart showing a top-down organization of constructing conservative linearized umbral event surfaces, or wedges, at first-order mesh silhouette edges or vertices using the pivot and sweep method. This flowchart shows the degenerate case of a parallel supporting viewcell edge and silhouette edge being explicitly identified and managed by constructing the corresponding SEME wedge. 
 FIG. 2A is an exemplary diagram showing a viewcell and two polygon meshes with first-order wedges incident on two first-order silhouette edges. 
 FIG. 2B is an exemplary diagram showing a viewcell and two polygon meshes with a from-silhouette-edge (back-projection) first-order wedge and the corresponding higher-order from-viewcell (front-projection) wedge. 
 FIG. 3 is an exemplary flowchart showing the method of identifying first-order from-region (in this case from-viewcell) silhouette edges. FIG. 3 shows details of step 110 in FIG. 1. 
 FIG. 4A is an exemplary flowchart showing the method of constructing a SVME supporting polygon incident on a mesh silhouette edge. FIG. 4A gives additional detail of the process shown in step 116 of FIG. 1. 
 FIG. 4B shows a mesh object M1, a viewcell, and two candidate supporting polygons with their respective pivot angles. 
 FIG. 4C is an exemplary flow diagram showing a test for determining if a polygon formed between a first-order silhouette edge and a viewcell vertex is a supporting polygon. 
 FIG. 4D1 is an exemplary diagram showing two mesh polygons having a consistent vertex ordering.
 FIG. 4D2 is an exemplary diagram showing two mesh polygons having an inconsistent vertex ordering.

 FIG. 5A is an exemplary flowchart showing the method of constructing a SEMV swept triangle incident on an inside corner mesh silhouette vertex. 
 FIG. 5B is a continuation of FIG. 5A. 
 FIG. 5C is an exemplary flow diagram showing a test for determining if a polygon formed between an inside-corner first-order silhouette vertex and a viewcell edge is a supporting polygon. 
FIG. 6A is an exemplary flowchart showing a method of constructing SVME and SEME wedges from the corresponding SVME and SEME supporting polygons. 
FIG. 6B is an exemplary flowchart showing a method of constructing SEMV wedges from the corresponding SEMV supporting polygons. 
 FIG. 7A is an exemplary diagram showing a convex viewcell and a non-convex polygon mesh. First-order, from-viewcell silhouette edges of the mesh are shown in heavy lines; perspective view looking in a general direction from the viewcell toward the polygon mesh. 
 FIG. 7B1 is an exemplary diagram showing the same objects as FIG. 7A, but from a perspective view looking in a general direction from the polygon mesh toward the viewcell.
 FIG. 7B2 shows a different polygon mesh than the one shown in FIG. 7B1 and shows an inside-corner edge of the mesh which is not a first-order silhouette edge.
 FIG. 7C1 is an exemplary diagram showing the supporting polygons for first-order silhouette edges A and B, perspective view looking in a general direction from viewcell toward mesh object.
 FIG. 7C2 is an exemplary diagram showing the supporting polygons for the first-order silhouette edges A and B and the corresponding source-vertex mesh-edge (SVME) wedges, perspective view looking in a general direction from viewcell toward mesh object.
 FIG. 7C3 is an exemplary diagram showing only the SVME wedges formed from the extension of the edges of the corresponding supporting polygons.
 FIG. 7D1 is an exemplary diagram showing the same objects as FIG. 7C, but from a perspective view looking in a general direction from mesh object toward viewcell. 
 FIG. 7D2 is an exemplary diagram showing the same objects as FIG. 7C1, but from a perspective view looking in a general direction from mesh object toward viewcell.
 FIG. 7D3 is a diagram showing the same objects as FIG. 7C2, but from a perspective view looking in a general direction from mesh object toward viewcell.
 FIG. 7D4 is a hidden-line diagram which shows the same polygon mesh and viewcell as FIG. 7D3 and shows two pivoted wedges intersecting at an outside corner vertex of a first-order silhouette contour.
 FIG. 7D5 is a hidden-line diagram which shows the same polygon mesh, viewcell, and restricted pivoted wedge as FIG. 7D4, but from a different perspective.
 FIG. 8A1 is an exemplary diagram showing a swept triangle (a SEMV supporting polygon) on the inside corner vertex shared by first-order silhouette edges labeled A and B. Perspective view looking in the general direction from the viewcell to the polygon mesh object.
 FIG. 8A2 is an exemplary diagram showing a swept triangle (a SEMV supporting polygon) on the inside corner vertex shared by first-order silhouette edges labeled A and B and the corresponding SEMV wedge. Perspective view looking in the general direction from the viewcell to the polygon mesh object.
 FIG. 8A3 is an exemplary diagram showing the inside corner vertex shared by first-order silhouette edges labeled A and B and the corresponding SEMV wedge. Perspective view looking in the general direction from the viewcell to the polygon mesh object.
 FIG. 8A4 is an exemplary diagram showing the first-order wedges incident on silhouette edges A and B, including two SVME wedges and a single SEMV wedge, all intersecting at the inside corner silhouette vertex labeled ICSV. Perspective view looking in the general direction from the viewcell to the polygon mesh object.
 FIG. 8B1 is an exemplary diagram showing the same objects as FIG. 8A1 but from a perspective view looking in a general direction from the mesh object toward the viewcell.
 FIG. 8B2 is an exemplary diagram showing the same objects as FIG. 8A2 but from a perspective view looking in a general direction from the mesh object toward the viewcell.
 FIG. 8B3 is an exemplary diagram showing the same objects as FIG. 8A3 but from a perspective view looking in a general direction from the mesh object toward the viewcell.
 FIG. 8B4 is an exemplary diagram showing the first-order wedges incident on silhouette edges A and B, including two SVME wedges and a single SEMV wedge, all intersecting at the inside corner silhouette vertex labeled ICSV. Perspective view looking in the general direction from the polygon mesh object toward the viewcell.

FIG. 8C is an exemplary diagram showing the first-order umbra boundary incident on the silhouette edges A and B, perspective view looking in a general direction from the viewcell toward the mesh object. 
FIG. 9A is an exemplary diagram showing the first-order umbra boundary incident on silhouette edges A and B constructed by the prior-art method of Teller (1992), perspective view looking in a general direction from the viewcell toward the mesh object. 
FIG. 9B is an exemplary diagram showing the same objects as FIG. 9A but from a perspective view looking in a general direction from the mesh object toward the viewcell. 
FIG. 9C is an exemplary diagram showing the more precise umbra boundary produced by the present method as compared to the umbra boundary produced by the prior-art method of Teller, perspective view looking in a general direction from the viewcell toward the mesh object. 
FIG. 9D is an exemplary diagram showing the same objects as FIG. 9C but from a perspective view looking in a general direction from the mesh object toward the viewcell. 
FIG. 10A is an exemplary diagram showing some additional UBPs of the umbra boundary surface formed by the intersection of UBPs for several adjacent first-order silhouette edges, perspective view looking in a general direction from the viewcell toward the mesh object. 
FIG. 10B is a view of the same polygon mesh as FIG. 10A and the same viewcell, but showing a set of UBPs forming a PAU. 
FIG. 11A is an exemplary diagram showing first-order visibility event surfaces (wedges) generated by the present pivot-and-sweep method in the case of a compound silhouette contour. 
FIG. 11B is a different view of the same structures shown in FIG. 11A. 
FIG. 11C shows a portion of a continuous linearized umbral event surface formed at a compound silhouette vertex using at least one higher-order pivoted wedge. Same view as FIG. 2B and FIG. 11A. 
FIG. 12 is an exemplary flowchart showing a method of constructing a conservative, first-order, linearized umbral discontinuity mesh using pivot-and-sweep construction of wedges. 
FIG. 13 is an exemplary flowchart showing the process of identifying and resolving overlap cycles during 3D mesh traversal. 
FIG. 14 is an exemplary flowchart showing the control process for a method of constructing an on-wedge, from-viewcell-element 2D visibility map using 2D mesh traversal. 
FIG. 15 is an exemplary flowchart showing the main traversal process for a method of constructing an on-wedge, from-viewcell-element 2D visibility map using 2D mesh traversal. 
FIG. 16 is an exemplary flowchart showing a process for determining if a 2D discontinuity mesh point is otherwise conservatively occluded from the wedge's corresponding viewcell element (VCE). 
FIG. 17 is an exemplary flowchart showing the control process for a method of constructing higher-order wedge lines for determining an on-viewcell-edge visibility map by backprojection. 
FIG. 18 is an exemplary flowchart showing the main process for backprojection, from-vertex, 2D mesh traversal for constructing higher-order wedge lines. 
FIG. 19 is an exemplary flowchart showing a controlling process for an output-sensitive method of constructing a from-region visibility map using 3D polygon mesh traversal.  FIG. 20A1 is an exemplary flowchart showing the main process for an output-sensitive method of constructing a conservative, linearized, from-region visibility map using 3D mesh traversal.
 FIG. 20A2 is an exemplary flow diagram showing a method of determining a from-region PVS which reflects not only containment of a viewpoint in a specific navigation cell but also the maximum extents of a view frustum imposed on an interactive or scripted viewpoint while the viewpoint is within the corresponding viewcell.
 FIG. 20A3 is an exemplary flow diagram showing a method of constructing a conservative linearized umbral discontinuity mesh that is very similar to the method shown in the exemplary flow diagram of
FIG. 12.  FIG. 20A4 shows a viewcell and two from-point view frusta (FRUSTUM 1 and FRUSTUM 2) corresponding to the maximal directional extents of a camera during movement along a camera path that intersects the viewcell.
 FIG. 20A5 shows a viewport (equivalent conservative from-point viewport) that, if used to construct a view frustum, would conservatively bound the two extremal from-point view frusta FRUSTUM 1 and FRUSTUM 2. FIG. 20A5 also shows a from-viewcell frustum (equivalent conservative from-viewcell frustum or ECFVF) that is constructed by pivoting from the edges of the equivalent conservative from-point viewport to the viewcell.
 FIG. 20A6 shows the same frusta as FIG. 20A4 but from the reverse angle.
 FIG. 20A7 shows the same frusta as FIG. 20A5 but from the reverse angle.
 FIG. 20A8 shows the same frusta as FIG. 20A7 but from a side view.
 FIG. 20A9 shows the same frusta as FIG. 20A7 but from a top view.
 FIG. 20A10 shows exemplary pseudocode for an exemplary method of determining an equivalent conservative from-viewcell frustum (ECFVF).
 FIG. 20A11 is an exemplary flow diagram showing a method of predicting late packet arrival of VE packets sent from a server unit to a client unit and decreasing the packet size of the VE packets to prevent late packet arrival.

FIG. 20B is an exemplary flowchart showing a method of using an estimate of the difference in umbral volumes produced by the pivot-and-sweep method and the intersecting-planes method, estimated at an inside-corner vertex; the difference is used to determine the method of constructing the continuous umbral event surface at the inside-corner vertex. 
FIGS. 20C-20J illustrate steps of a 3D mesh traversal of polygon meshes. 
FIG. 20K is a diagram showing a surrounding polygon mesh which contains other polygon meshes. 
FIG. 20L shows a view of the same viewcell and mesh objects as FIG. 20G, and from a similar perspective. FIG. 20L also shows an ECFVF (equivalent conservative from-viewcell frustum), which reflects directional constraints on a view direction vector while the viewpoint is within the viewcell. 
FIG. 20M shows a view of the same viewcell and mesh objects as FIG. 20H, and from a similar perspective. FIG. 20M also shows an ECFVF. 
FIG. 21A is an exemplary flowchart of a method to determine if a discontinuity mesh segment is otherwise occluded from the viewcell (i.e., whether the discontinuity mesh segment is a from-region occlusion boundary). 
FIG. 21B is a continuation of FIG. 21A. 
FIG. 21C is an exemplary flowchart showing a method of classifying PVS polygons as strongly visible, non-occluding, and always-frontfacing. 
FIG. 22 is an exemplary flowchart showing the controlling process for a method of 3D mesh traversal to construct a backprojection, from-silhouette-edge visibility map for determining the from-silhouette-edge visible supporting viewcell vertex (VSVV) and visible supporting viewcell silhouette contour (VSVSC). 
FIG. 23 is an exemplary flowchart showing the main process for a method of 3D mesh traversal to construct a backprojection, from-silhouette-edge visibility map for determining the from-silhouette-edge visible supporting viewcell vertex (VSVV) and visible supporting viewcell silhouette contour (VSVSC). 
FIG. 24A is an exemplary flowchart showing a process to determine if a dm_segment is otherwise occluded from a silhouette edge source, used in construction of a from-silhouette-edge visibility map backprojection employing 3D mesh traversal. 
FIG. 24B is an exemplary continuation of FIG. 24A. 
FIG. 24C is an exemplary flowchart showing a method of using the from-silhouette-edge backprojection visibility map to construct a conservative visible supporting viewcell silhouette contour (VSVSC) that contains the VSVVs corresponding to adjacent silhouette edges. 
FIG. 25 is an exemplary flowchart showing a method of point-occlusion testing using first-order wedges and higher-order wedges. 
FIG. 26 is an exemplary flowchart showing an alternate embodiment of a method of constructing polyhedral aggregate umbrae (PAU) from umbral boundary polygons (UBPs) using 3D mesh traversal. 
FIG. 27A is an exemplary diagram showing a viewcell and two polygon mesh objects, MESH E and MESH D. FIG. 27A illustrates that a first-order, from-region, SVME umbral wedge may be inexact on segments where the corresponding supporting polygon intersects geometry between the viewcell and the supporting first-order silhouette edge. 
FIG. 27B is an exemplary diagram showing the same view as FIG. 27A except that the inexact portion of the first-order wedge is refined by subdividing the corresponding segment of the first-order silhouette edge and conducting first-order backprojection using the subsegments as a linear light source. The result is that the inexact portion of the wedge is replaced by two SVME wedges connected by a single SEMV wedge, which together form a continuous umbral surface that more precisely approximates the actual quadric umbral event surface incident on the inexact segment of the first-order silhouette edge. 
FIG. 27C is an exemplary diagram showing the same view as FIG. 27B except that the subdivision of the inexact portion of the original first-order wedge is now refined by subdividing the corresponding segment of the first-order silhouette into four subsegments instead of two, producing an even more precise approximation to the actual umbral event surface (a quadric) in this region. 
FIG. 27D is an exemplary diagram of the same structures as FIG. 27A from a different view (from slightly behind the viewcell) showing that the first-order silhouette edge having segments SE1U and SE10 is first-order visible from the viewcell. 
FIG. 28 is an exemplary flowchart showing a method of controlling the from-edge backprojection process by examining the maximal possible deviation between the first-order and exact wedge, and by identifying segments of the silhouette edge for which the first-order wedge is inexact. 
FIG. 29 is an exemplary flowchart showing control of the from-edge backprojection process by examining the maximal possible deviation between the first-order and exact wedge, and by identifying simple and compound inside-corner silhouette vertices for which the first-order SEMV wedge(s) are inexact. 
FIG. 30A is an exemplary flowchart showing a method of identifying from-viewcell-occluded regions of a visibility map having high effective static occlusion (ESO) and the process of conservatively simplifying both the occluded region boundary and the corresponding mesh silhouette contour. 
FIG. 30B is a continuation of FIG. 30A. 
FIG. 30C is a continuation of FIG. 30B. 
FIG. 30D is a 3D hidden-line diagram showing a viewcell and two polygon meshes. 
FIG. 30E is a 3D hidden-line diagram showing the same perspective view as FIG. 30D, and including an occlusion region and corresponding occlusion boundary. 
FIG. 31A shows exemplary data structures employed by the method of labeled silhouette edges. 
FIG. 31B is a continuation of FIG. 31A. 
FIG. 31C is a continuation of FIG. 31B. 
FIG. 31D is a diagram showing data structures for an exemplary embodiment employing deltaG+ data. 
FIG. 32A is an exemplary flowchart showing a method of identifying edges and vertices of a silhouette contour using data structures for labeled silhouette contours. 
FIG. 32B is a continuation of FIG. 32A. 
FIG. 33A is an exemplary flowchart showing the method of identifying delta regions of visibility difference for a transition from viewcell A to viewcell B. 
FIG. 33B is an exemplary continuation of FIG. 33A. 
FIG. 33C is a continuation of the exemplary flow diagram of FIG. 33B. 
FIG. 34A is an exemplary flowchart showing a method of rapid runtime construction of visibility map occlusion boundary segments using labeled silhouette contour information for a single contour. 
FIG. 34B is a continuation of FIG. 34A. 
FIG. 35A is an exemplary flowchart showing a method of constructing visibility map occlusion boundary segments derived from a single silhouette edge of a labeled silhouette contour. 
FIG. 35B is a continuation of FIG. 35A. 
FIG. 36 is an exemplary flowchart showing a process controlling the runtime process of constructing visibility map ROI using ROI boundaries constructed from prestored labeled silhouette contours, wherein the ROI boundaries delimit a simplified, hinted, runtime 3D mesh traversal process which traverses the ROI. 
FIG. 37A is an exemplary flowchart showing the main process of using a simplified, hinted, runtime 3D mesh traversal process to construct the ROI from prestored labeled silhouette contour information and a list of seed triangles for the connected components of the ROI. 
FIG. 37B is an exemplary flow diagram showing a method of identifying and storing significant viewcell-viewcell occlusion and silhouette boundaries using mesh traversal. 
FIG. 37C is an exemplary flow diagram showing a method of constructing connected components of the VM/PVS corresponding to a viewcell transition using traversal employing precomputed significant occlusion boundaries and/or silhouette contours stored as run-length encoded encounter numbers (ENs).  FIG. 37D1 is an exemplary diagram showing a triangle mesh, a starting triangle T0, and 12 other labeled triangles encountered in a depth-first traversal starting from triangle T0.
 FIG. 37D2 is an exemplary diagram showing a triangle mesh, a starting triangle T0, and 12 other labeled triangles encountered in a breadth-first traversal starting from triangle T0.
 FIG. 37E1 shows the subset of the triangles of the triangle mesh that are traversed during 12 steps of a depth-first traversal starting from triangle T0.
 FIG. 37E2 shows the subset of the triangles of the triangle mesh that are traversed during 12 steps of a breadth-first traversal starting from triangle T0.
 FIG. 37F1 shows the subset of the triangles of the triangle mesh that are traversed during 12 steps of a depth-first traversal starting from triangle T0, and the order of the edges encountered during this traversal.
 FIG. 37F2 shows the subset of the triangles of the triangle mesh that are traversed during 12 steps of a breadth-first traversal starting from triangle T0, and the order of the edges encountered during this traversal.

FIG. 38 is an exemplary flowchart showing a method of attaching a deltaG+ submesh corresponding to newly exposed mesh elements for a specific viewcell transition to the corresponding labeled silhouette contour's starting boundary. 
FIG. 39A shows an exemplary simple occluder. 
FIG. 39B shows exemplary delta regions (DR) of occlusion formed by the simple occluder (of FIG. 39A) when viewed from connected viewcells A and B. 
FIG. 40 shows the same unified from-region visibility map as FIG. 39B except that the portions of the OCCLUSION REGION VIEWCELL A that are outside the OCCLUSION REGION VIEWCELL B are labeled as DR_{O}BA (delta region of occlusion from B to A) and DR_{E}AB (delta region of exposure from A to B). 
FIG. 41A is an exemplary diagram showing the use of the on-wedge visibility method (FIG. 14, FIG. 15, and FIG. 16) to identify CSVs and construct wedge lines for a SVME wedge. FIG. 41A shows the case of a simple CSV with no cusp. 
FIG. 41B is an exemplary diagram showing the use of the on-wedge visibility method (FIG. 14, FIG. 15, and FIG. 16) to identify CSVs and construct wedge lines for a SVME wedge. FIG. 41B shows the case of a degenerate CSV forming a cusp of the first-order silhouette contour. 
FIG. 41C is an exemplary drawing showing a SEME wedge incident on a first-order silhouette edge intersecting three polygon mesh objects; the first-order from-viewcell-edge wedge lines (WLs) and their intersections with mesh polygons are shown. The figure is used to illustrate the operation of the 2D mesh traversal process for constructing an on-wedge visibility map (FIG. 15 and related figures). 
FIG. 41D is a perspective view diagram showing a polygon mesh, a viewcell, and a portion of a first-order silhouette contour including a cusp and a compound silhouette vertex. 
FIG. 42A is an exemplary flowchart showing the method using hierarchical viewcells. 
FIG. 42B is an exemplary flowchart showing the method using hierarchical viewcells. 
FIG. 43A is an exemplary diagram, and FIG. 43B shows data structures, for incremental VM/PVS maintenance using delta VM/PVS data. 
FIG. 43B is a continuation of FIG. 43A. 
FIG. 44A is an exemplary flowchart showing a method of data storage and transmission supporting incremental VM/PVS maintenance using delta VM/PVS (deltaG+ submesh) data sent from a remote server. 
FIG. 44B is a continuation of FIG. 44A. 
FIG. 45A is an exemplary flow diagram showing a method, conducted on a server unit, of identifying potentially newly reachable navigation cells and sending data representing potentially newly reachable navigation cells to a client unit if that data is not already present on the client. 
FIG. 45B shows exemplary data structures used by the processes of FIG. 45A and FIGS. 46-48. 
FIG. 46A is an exemplary flow diagram showing a method, conducted on a server unit, of identifying potentially newly visible navigation cells and sending data representing potentially newly reachable navigation cells to a client unit if that data is not already present on the client. 
FIGS. 46B-46C are exemplary diagrams of a modeled environment. 
FIG. 47 is an exemplary flow diagram showing a method, conducted on a server unit, of identifying moving objects that have not been sent to a client unit but which have entered a navigation cell that is potentially visible to the client unit. 
FIG. 48 is an exemplary flow diagram showing a method, conducted on a client unit, of identifying moving objects for which the graphical information has not been sent to a client unit but which have entered a navigation cell that is potentially visible to the client unit. 
FIG. 49 is a block diagram/flowchart showing a server process sending navigation cell data based on a navigation-prediction process performed on the server, and a client unit requesting visibility event data based on a navigation-prediction process performed on the client using navigation cell data previously sent by the server. 
FIG. 50 is an exemplary flow diagram showing a method of using salience to select the level-of-detail of VE packets that are sent by a VE server to a VE client. 
FIG. 51 is an exemplary schematic illustration of a uniprocessor computer system for implementing the System and Method of From-Region Visibility Determination and Delta-PVS Based Content Streaming Using Conservative Linearized Umbral Event Surfaces according to the present invention. 
FIG. 52 is an exemplary diagram of a processor. 
FIG. 53A is exemplary ANSI C source code for a function which identifies all of the first-order silhouette edges of a manifold mesh, using the algorithm of the flow diagram of FIG. 3. 
FIG. 53B is a continuation of FIG. 53A. 
FIG. 53C is a continuation of FIG. 53B. 
FIG. 54A is a perspective view of a manifold mesh and a viewcell showing all edges of the mesh. 
FIG. 54B is the same perspective view of the same manifold mesh and viewcell shown in FIG. 54A, but showing only the first-order silhouette edges. Exemplary first-order silhouette contours are labeled as FOSCI and FOSCO. An exemplary cusp of the contour is labeled as CUSP1. 
FIG. 54C shows a different perspective view of the same manifold mesh and viewcell as shown in FIG. 54A and FIG. 54B; the view is looking toward the viewcell. An exemplary cusp of a first-order silhouette contour is labeled as CUSP2. 
FIG. 54D shows a different perspective view of the same manifold mesh and viewcell as shown in FIG. 54A, FIG. 54B, and FIG. 54C; the view is looking from beneath the manifold mesh toward the viewcell. An exemplary cusp of a first-order silhouette contour is labeled as CUSP3.  In exemplary embodiments, the terminology ESO (Effective Static Occlusion) refers to a metric that is in some direct proportion to the number of (original mesh) polygons and/or the surface area of these polygons inside an occluded region of a visibility map. The ESO is also in some inverse proportion to the number of new polygons introduced in the visible region surrounding the occluded region as a result of retriangulation caused by the edges of the occlusion boundary. The metric is used in conservative simplification of a VM or unified VM.
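As an illustration of the stated proportionality, an ESO-like score might be computed as a simple ratio. The struct fields and the particular weighting below are hypothetical choices for illustration, not the formula of any specific embodiment:

```c
#include <assert.h>

/* Hypothetical inputs describing one occluded region of a visibility map. */
typedef struct {
    int    occluded_polygons;     /* original mesh polygons inside the occluded region */
    double occluded_surface_area; /* their total surface area                          */
    int    new_boundary_polygons; /* polygons added by retriangulation at the boundary */
} OcclusionRegion;

/* The score grows with the amount occluded (direct proportion) and shrinks
 * with the retriangulation overhead (inverse proportion).  This ratio is
 * only one illustrative way to combine the two terms.                      */
double effective_static_occlusion(const OcclusionRegion *r)
{
    double gain = (double)r->occluded_polygons + r->occluded_surface_area;
    double cost = 1.0 + (double)r->new_boundary_polygons;
    return gain / cost;
}
```

A region that hides the same geometry but requires much more boundary retriangulation scores lower, and so is a better candidate for conservative simplification.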
 In exemplary embodiments, the terminology EDO (Effective Dynamic Occlusion) refers to a metric that is in some direct proportion to the number of polygons and/or the surface area of polygons occluded in a delta region (DR) of occlusion, wherein the DR represents the region of occlusion produced during a specific viewcell transition. The EDO is also in some inverse proportion to the number of new polygons introduced in the visible region surrounding the DR as a result of retriangulation caused by the edges of the occlusion boundary.
 In exemplary embodiments, the terminology EDV (Effective Dynamic Visibility) refers to a measure of the effectiveness of a delta region (DR) of a unified visibility map. If the DR is a DR_{O} (delta region of occlusion) for the specific viewcell transition, then the EDV corresponds to the EDO of the DR.
 If the DR is a DR_{E} (delta region of exposure), then the EDV is determined by examining the ESO of the surrounding occlusion regions. Simplification of the DR_{E} proceeds by simplifying the surrounding OR and extending the polygons of the DR_{E} into the OR or DR_{O}.
 In exemplary embodiments, the terminology Unified Visibility Map refers to a visibility map including from-viewcell occlusion boundaries generated from two viewcells (e.g., A and B) wherein the viewcells are related in one of two ways: 1) one viewcell is completely contained in the other, or 2) the viewcells completely share a common face. The unified visibility map is an arrangement of VM regions such that some regions contain newly occluded mesh triangles/fragments and other regions contain newly exposed mesh triangles/fragments for the transition from viewcell A to viewcell B. The unified visibility map is used to construct deltaPVS data for direct storage. Alternatively, the unified visibility map can be used to identify significantly occluding contours or significant silhouette contours which can be labeled and used to generate the deltaG/deltaPVS data later.
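The deltaPVS data for the transition from viewcell A to viewcell B can be thought of as two set differences: deltaG+ (polygons newly visible from B) and deltaG- (polygons newly occluded). A minimal sketch, assuming a PVS is represented as a sorted array of polygon ids — an illustrative representation, not the storage format of the embodiments:

```c
#include <stddef.h>

/* Writes to 'out' the ids present in sorted array 'a' but absent from
 * sorted array 'b'; returns the count.  With a = PVS(B), b = PVS(A) this
 * yields deltaG+ for the A-to-B transition; swapping yields deltaG-.     */
size_t pvs_difference(const int *a, size_t na,
                      const int *b, size_t nb, int *out)
{
    size_t i = 0, j = 0, n = 0;
    while (i < na) {
        while (j < nb && b[j] < a[i]) j++;       /* advance to candidate match */
        if (j >= nb || b[j] != a[i]) out[n++] = a[i];
        i++;
    }
    return n;
}
```

Because both inputs are sorted, the merge-style scan runs in O(na + nb) time, which matters when PVSs hold many polygon ids.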
 In exemplary embodiments, the terminology wedge (see also CLUES) refers to a visibility event surface formed by a feature (vertex or edge) of a viewcell and vertices or edges of the mesh polygons. In general, a wedge defines the visibility from the viewcell's feature, across the mesh polygon's vertex or edge.
 The wedges employed in the prior-art method of discontinuity meshing are exact. These wedges may be planar or quadric surfaces. The planar wedges described in the discontinuity mesh literature are of two types, renamed here as:
 1) SVME wedge—Formed by a vertex of the viewcell (or “source”) and an edge of the mesh. Also called a pivoted wedge or a supporting vertex wedge.
 2) SEMV wedge—Formed by an edge of the viewcell and a vertex of the polygon mesh. Also called a swept wedge or supporting edge wedge.
 3) SEME wedge—Formed in the special case where the mesh silhouette edge is parallel to a supporting viewcell silhouette edge.
 These definitions assume front-projection (i.e., using the viewcell as the light source). In the backprojection method a silhouette edge or segment of a silhouette edge is used as the “source”, and various silhouette edges in the shaft between the source edge and the viewcell support the backprojection event surfaces. The definitions are otherwise identical for the backprojection case.
 Since the wedges employed in discontinuity meshing are typically used to identify components of the source's penumbra, they are constructed on a relatively large number of edges of the polygon meshes, called from-viewcell silhouette edges.
 Since the planar wedges used in discontinuity meshing are exact event surfaces, they are not defined on regions for which the wedge's viewcell feature (vertex or edge) is occluded from the wedge's polygon mesh feature. This definition of a wedge creates “gaps” in the planar event surfaces that cause the surfaces to be discontinuous. In the method of complete discontinuity meshing these gaps are filled with higher-order visibility event surfaces which may be quadric wedges. The gaps are filled by these higher-order event surfaces, and the resulting visibility event surfaces are, in general, continuous.
 See Tables Ia and Ib for wedge nomenclature.
 Embodiments also employ planar from-feature event surfaces, the conservative linearized umbral event surfaces (CLUES), which are similar to the planar wedges employed in discontinuity meshing but differ from these wedges in important respects.
 One difference between the planar wedges used in discontinuity meshing and the CLUES (also called first-order wedges, or simply wedges in the present specification) is that the wedges employed in the present method are only those wedges that could form a from-viewcell umbral event surface; penumbral events per se are not considered in from-viewcell visibility. The wedges of the present method are constructed on fewer polygon mesh edges (called the first-order silhouette edges), and they are constructed using a pivot-and-sweep technique which generates only potential umbral event wedges. This means that the number of wedges constructed in the present method is far less than the number of wedges generated in discontinuity meshing.
 Another difference between discontinuity meshing wedges and the wedges of the present method is that the wedges of the present method are defined and constructed using only the wedge's viewcell feature and the wedge's polygon mesh feature. Any intervening geometry between these two features is ignored.
 This method of wedge construction is based on the first-order model of visibility propagation in polyhedral environments, which ensures that conservative, continuous umbral boundaries are constructed.
 In actuality, intervening geometry may cause regions for which the viewcell feature is occluded from the polygon mesh feature. These are regions of the wedge in which the corresponding discontinuity mesh wedge would not be defined (thereby producing a gap or discontinuity in the event surface which is normally filled by a higher-order wedge or quadric). By ignoring this intervening geometry the present method constructs wedges which define a continuous event surface without gaps. Since the wedges of the present method are constructed by ignoring this type of higher-order occlusion, they conservatively represent the actual from-feature umbral event surface. For regions of the wedge in which there is no intervening geometry, the wedges constructed by the present method are exact.
 In regions where the wedge is inexact, the wedge may optionally be replaced by other wedges constructed using a modified method of wedge construction which accounts for higher-order occlusion caused by the intervening geometry.
 The present method includes three types of (first-order) wedges:
 1) SVME wedge—formed by extending the edges of a corresponding pivoted supporting polygon. The corresponding pivoted supporting polygon is formed by a supporting vertex of the viewcell (SVV) and a first-order silhouette edge of the polygon mesh by the process of pivoting from the edge to the viewcell. The pivoted supporting polygon is also called a SVME supporting polygon or a vertex supporting polygon. This type of visibility event surface reflects containment at a point on the viewcell and occlusion by a (silhouette) edge of the mesh. Also called a pivoted wedge. The pivoting process is described as a process that identifies the supporting plane between the first-order silhouette edge and a viewcell. While the process may appear to a human being to be an actual continuous rotation of a plane about the silhouette edge until it touches the viewcell, in fact embodiments can measure specific discrete angles formed by each candidate supporting plane (formed with the corresponding viewcell vertex) and another polygon. Comparing these angle measurements in one embodiment allows determination of the actual supporting polygon from a number of candidate supporting polygons.
 2) SEMV wedge—formed by extending the edges of a corresponding swept supporting polygon (also simply called a swept polygon or an edge supporting polygon), which is a supporting polygon formed by a supporting edge of the viewcell and an inside-corner mesh silhouette vertex by the process of sweeping along the supporting viewcell silhouette contour (SVSC) between the SVVs supporting the adjacent SVME wedges. This type of visibility event surface reflects containment on a (boundary) edge of the viewcell restricted at an (inside-corner) mesh silhouette vertex. An SEMV wedge is also called a swept wedge.
 3) SEME wedge—formed only where the supporting viewcell edge and the supported mesh silhouette edge are parallel. Formed by extending the edges of the corresponding SEME supporting polygon formed between the parallel supporting viewcell edge and the supported mesh silhouette edge. Unlike the other types of planar wedges, the determination of on-wedge visibility for an SEME wedge is a from-region, not a from-point, visibility problem. This type of visibility event surface reflects containment on a (boundary) edge of the viewcell and occlusion by a (silhouette) edge of the mesh.
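The discrete pivot step described for the SVME wedge above can be sketched as a search over candidate viewcell vertices for one whose plane through the silhouette edge leaves all viewcell vertices on a single side (a supporting plane). This is an illustrative sketch under simplifying assumptions: it omits the orientation test against the silhouette edge's backfacing triangle that selects between the two tangent planes, and all names are hypothetical:

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    return r;
}
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns the index of a viewcell vertex whose plane through silhouette
 * edge (e0,e1) has no viewcell vertices strictly on both sides, i.e. a
 * candidate supporting viewcell vertex (SVV); -1 if the input is
 * degenerate.  The first qualifying candidate is returned.              */
int supporting_viewcell_vertex(Vec3 e0, Vec3 e1, const Vec3 *vc, int nvc)
{
    for (int c = 0; c < nvc; c++) {
        Vec3 n = cross(sub(e1, e0), sub(vc[c], e0)); /* candidate plane normal */
        int pos = 0, neg = 0;
        for (int k = 0; k < nvc; k++) {
            double d = dot(n, sub(vc[k], e0));       /* signed side test       */
            if (d >  1e-9) pos = 1;
            if (d < -1e-9) neg = 1;
        }
        if (!(pos && neg)) return c;  /* no straddling: a supporting plane */
    }
    return -1;
}
```

In practice an embodiment would compare the discrete angles of the candidate planes rather than re-testing every vertex per candidate, but the supporting-plane condition being checked is the same.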
 Another important difference between the wedges used in prior-art discontinuity meshing and those used in the present invention is that in the present method on-wedge visibility is determined using a conservative method in which on-wedge silhouette vertices are constrained to occur on first-order, from-viewcell silhouette edges. This ensures that each on-wedge silhouette vertex is a compound silhouette vertex (CSV), a point of intersection of two wedges (one corresponding to the current wedge). In contrast, in prior-art discontinuity meshing methods, on-wedge visibility is determined exactly, typically using from-point object space visibility methods like the Weiler-Atherton algorithm.
 In exemplary embodiments, the terminology pivoted wedge refers to an SVME wedge formed by extending the edges of a pivoted supporting polygon.
 In exemplary embodiments, the terminology CLUES (Conservative Linearized Umbral Event Surface) (see Wedge) refers to another name for the first-order umbral wedges constructed using the pivot-and-sweep method of the present invention. These wedges may be refined to reflect higher-order visibility interactions using the backprojection method of the present invention.
 In exemplary embodiments, the terminology Umbra Boundary Polygon (UBP) refers to a polygon that is part of the surface of the from-viewcell umbral volume. In the present method the from-viewcell umbral volumes (called the polyhedral aggregate umbrae, or PAU) may be constructed using conservative UBPs that are derived from the corresponding (first-order) wedges.
 The wedges employed by the present method are from-viewcell-feature umbral event surfaces that are guaranteed to be from-viewcell umbral event surfaces (from the entire viewcell) only in the immediate vicinity of the mesh silhouette edge that supports the wedge. This is because the wedge may intersect another wedge beyond the supporting silhouette edge in a way that restricts the from-viewcell umbral boundary on the wedges. That is to say, the wedge itself, which is tangentially visible from the supported viewcell feature, may become visible from other parts of the viewcell.
 Higher-order UBPs may be constructed from the corresponding higher-order wedges.
 In exemplary embodiments, the terminology polygon mesh refers to a finite collection of connected vertices, edges, and faces (also called polygons) formed from the vertices and edges. If two polygons of a mesh intersect, the edge or vertex of intersection must be a component of the mesh. No interpenetration of faces is allowed. Also called a polygon mesh object, triangle mesh or simply mesh. If each edge of the mesh is shared by at most two polygons it is a manifold polygon mesh. If each edge is shared by exactly two faces then the mesh is a closed manifold polygon mesh. Polygon meshes in this specification are assumed to be closed manifold meshes unless otherwise indicated.
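The manifold conditions above can be checked mechanically by counting how many faces share each undirected edge. The following is a minimal illustrative sketch (not part of the specification), assuming faces are given as tuples of vertex indices:

```python
# Sketch: verify the manifold / closed-manifold properties defined above.
# An edge shared by at most two faces -> manifold; exactly two -> closed.
from collections import Counter

def edge_use_counts(faces):
    """Count how many faces share each undirected edge of a polygon mesh."""
    counts = Counter()
    for face in faces:
        n = len(face)
        for i in range(n):
            a, b = face[i], face[(i + 1) % n]
            counts[(min(a, b), max(a, b))] += 1
    return counts

def is_manifold(faces):
    return all(c <= 2 for c in edge_use_counts(faces).values())

def is_closed_manifold(faces):
    return all(c == 2 for c in edge_use_counts(faces).values())

# A tetrahedron (four triangles) is a closed manifold mesh:
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
# Removing one face leaves a manifold mesh with boundary edges:
open_tet = tet[:3]
```

For the tetrahedron every edge is used exactly twice, so it passes the closed-manifold test; the open mesh still passes the (at most two) manifold test.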
 In exemplary embodiments, the terminology viewcell or view region refers to a polyhedron, which may be represented as a polygon mesh, which describes a region to which the viewpoint is restricted. Viewcells and view regions in this specification are assumed to be convex unless otherwise indicated. A viewcell may be constrained to be a parallelepiped or box, while a view region may not necessarily be so constrained.
 In exemplary embodiments, the terminology PVS (potentially visible set) refers to a set of polygons or fragments of polygons that are visible from a viewcell. Generally a PVS is computed to be conservative, including all polygons or polygon fragments that are visible as well as some that are not.
 In exemplary embodiments, the terminology Polyhedral Aggregate Umbrae (PAU) refers to the volume of space occluded by a mesh object from a viewcell, assuming the first-order model of visibility propagation; this volume is called the first-order polyhedral umbra volume. Since individual umbral volumes may intersect to aggregate the occlusion, these volumes are called the first-order polyhedral aggregate umbrae (PAU).
 First-order PAU, also simply called PAU, are bounded by polygons called umbra boundary polygons, or UBPs. These polygons are formed by the intersection of the first-order wedges with triangle mesh polygons and with other first-order wedges. The PAU are also bounded by the first-order visible mesh polygon fragments (the fragments comprising the from-viewcell visibility map). Together the UBPs and the visible mesh polygon fragments form continuous (though not necessarily closed) umbral surfaces that define the boundaries of the PAU.
 As described in detail in conjunction with the 3D 2-manifold traversal method (FIG. 20 and related figures), the construction of the visibility map involves a step in which it is determined if a point on an on-wedge visible polyline segment is actually within a PAU volume, and therefore occluded from the entire viewcell. The method includes a modified point-in-polyhedron test which can answer this query for first-order PAU without explicitly constructing the entire PAU.
 In exemplary embodiments, the terminology Discontinuity Mesh (DM) refers to a mesh formed by the intersection of visibility event surfaces with mesh polygons. A discontinuity mesh formed from visibility event surfaces incident on a viewcell partitions the mesh polygons into partitions (called regions) of uniform qualitative visibility or “aspect” with respect to the viewcell.
 In the prior-art method of complete discontinuity meshing, all event surfaces, umbral and penumbral, incident on the light source are constructed.
 In some embodiments, from-viewcell discontinuity meshes are constructed from first-order, from-viewcell umbral visibility event surfaces, or from first-order umbral visibility event surfaces which have been refined, by a backprojection technique, to account for higher-order visibility interactions.
 Despite the fact that only umbral event surfaces are employed, not all regions of the umbral DM bordered by the occluded side of oriented DM polylines are actually occluded from the entire viewcell. This is because the from-viewcell status of a region (its actual inclusion as part of a PAU) is determined by wedge-wedge intersections in R3 that may not be reflected in the corresponding wedge-polygon mesh intersection.
 In exemplary embodiments, the terminology Visibility Map (VM) refers to a partitioning of mesh polygons into regions that are occluded from the entire viewcell and other regions that are visible from some point on the viewcell. In prior-art methods of exact from-region visibility (Nirenstein et al. 2000, 2005) these partitions are constructed using exact visibility event surfaces which are, in general, quadrics.
 Embodiments construct conservative, linearized, umbral discontinuity meshes using the corresponding CLUES. The resulting DM is a conservative partitioning of mesh polygons into regions that are occluded from the entire viewcell and other regions that are visible from some point on the viewcell. The boundaries of the VM are a subset of the boundaries of the corresponding DM, since not all regions of the umbral DM bordered by the occluded side of oriented DM polylines are actually occluded from the entire viewcell. In contrast, the corresponding VM contains only regions that are guaranteed to be occluded from the entire viewcell (umbral regions of the VM) and other regions that are visible from some point on the viewcell, wherein the occlusion may be conservatively underestimated and the visibility consequently overestimated.
 In exemplary embodiments, the terminology silhouette edge refers to an edge of a polygon mesh which has one component polygon that is front-facing from a particular location and another component polygon that is backfacing from the same location.
 In exemplary embodiments, the terminology From-Point Silhouette Edge refers to an edge of a polygon mesh which has one component polygon that is front-facing from a particular point and another component polygon that is backfacing from the same point.
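The from-point silhouette condition can be tested directly from the signed side of each component polygon's plane. The following sketch is illustrative only; the function names and sample geometry are hypothetical, not from the specification:

```python
# Sketch: classify a shared edge as a from-point silhouette edge, i.e.
# one adjacent triangle front-facing and the other backfacing from a point.
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def front_facing(tri, point):
    """True if 'point' lies on the front (normal) side of triangle 'tri'."""
    n = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]))
    return dot(n, sub(point, tri[0])) > 0.0

def is_from_point_silhouette(tri_a, tri_b, point):
    # Silhouette exactly when the two facing classifications disagree.
    return front_facing(tri_a, point) != front_facing(tri_b, point)

# Two triangles folded along the shared edge (0,0,0)-(1,0,0):
tri_a = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))    # normal +z
tri_b = ((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.5, -1.0, -1.0))  # folded down
```

From a point that sees one triangle's front and the other's back, the shared edge classifies as a from-point silhouette edge; from a point seeing both fronts, it does not.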
 In exemplary embodiments, the terminology From-Region Silhouette Edge (also called general from-region silhouette edge) is defined with respect to a region such as a viewcell (or a polygon mesh edge in the case of backprojection) acting as a light source. If the region is a viewcell, the from-region silhouette edge may be called a from-viewcell silhouette edge. If the region is an edge, the from-region silhouette edge may be called a from-edge silhouette edge. In the present specification any type of silhouette edge (from-point, from-viewcell, from-edge) may simply be called a silhouette edge, with the type of silhouette edge being implied by the context.
 A from-viewcell general silhouette edge is any edge of a polygon mesh that is a from-point silhouette edge for any point on a viewcell (or area light source). This is the definition of from-viewcell silhouette edge employed by Nirenstein et al. 2005 and in the complete discontinuity meshing method of Drettakis et al. 1994.
 In general such edges support from-region penumbral event surfaces, but a subset actually support from-region umbral event surfaces, which are typically quadric surfaces.
 From-region silhouette edges may be defined exactly, when higher-order visibility interactions of edge triples are considered. Alternatively, from-region silhouette edges may be defined, as in the present method, conservatively by considering only visibility event surfaces that arise as a result of interactions between edge pairs, as in the first-order model of visibility propagation.
 In exemplary embodiments, the terminology First-Order Silhouette Edge refers to a first-order, from-viewcell silhouette edge (also called simply a first-order silhouette edge): an edge of a polygon mesh that has one component polygon that is backfacing for the entire viewcell and another component polygon that is front-facing for at least one vertex of the viewcell, wherein the component polygons are backfacing with respect to each other.
 This definition is based on a simple, conservative model of visibility propagation in polyhedral environments called first-order visibility, which considers only the visibility event surfaces that arise as a result of interactions between edge pairs.
 One embodiment of the present invention employs polygon meshes that are manifold triangle meshes. In a manifold triangle mesh, each edge is completely shared by exactly two triangles. The specification of first-order silhouette edges is simplified by using manifold triangle meshes.
 A first-order silhouette edge of a polygon mesh with respect to a viewcell is a locally supporting edge of the polygon mesh with respect to the viewcell. A locally supporting edge supports a polygon between the viewcell and the edge if only the viewcell and the two component polygons (triangles) sharing the edge are considered in the test for support. (See definition of test for support.)
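The three-part first-order test (one component triangle backfacing for the entire viewcell, the other front-facing for at least one viewcell vertex, and the two backfacing with respect to each other) can be sketched as follows. This is an illustrative sketch under the stated definition; the helper names, the interpretation of "backfacing with respect to each other" via each triangle's unshared vertex, and the sample geometry are assumptions, not from the specification:

```python
# Sketch: first-order, from-viewcell silhouette edge test for a manifold
# triangle mesh, applied to the two triangles sharing the candidate edge.
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def front_facing(tri, p):
    n = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]))
    return dot(n, sub(p, tri[0])) > 0.0

def unshared_vertex(tri, other):
    return next(v for v in tri if v not in other)

def is_first_order_silhouette(tri_a, tri_b, viewcell_verts):
    for first, second in ((tri_a, tri_b), (tri_b, tri_a)):
        back_all = all(not front_facing(first, v) for v in viewcell_verts)
        front_some = any(front_facing(second, v) for v in viewcell_verts)
        # "backfacing with respect to each other": each triangle's unshared
        # vertex lies on the non-front side of the other triangle's plane.
        mutual = (not front_facing(first, unshared_vertex(second, first)) and
                  not front_facing(second, unshared_vertex(first, second)))
        if back_all and front_some and mutual:
            return True
    return False

# Two triangles folded along the shared edge (0,0,0)-(1,0,0):
tri_a = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
tri_b = ((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.5, -1.0, -1.0))
# Box viewcell positioned so tri_b is backfacing from every viewcell vertex:
viewcell = [(x, y, z) for x in (0.0, 1.0)
                      for y in (1.5, 2.5)
                      for z in (0.5, 1.5)]
```

Moving the viewcell so both triangles are front-facing from all of its vertices makes the edge fail the test.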
 Generally, first-order from-region silhouette edges are a small subset of the exact from-region silhouette edges of any polygon mesh.
 In the present specification, any type of first-order silhouette edge (from-viewcell, from-edge) may simply be called a first-order silhouette edge, or simply a silhouette edge, with the type of silhouette edge being implied by the context.
 The present invention includes a method of identifying (by adaptive refinement and backprojection) where a first-order silhouette edge is inexact and “retracting” the silhouette edge to a closer edge that belongs to the set of exact from-region silhouette edges of the polygon mesh.
 In exemplary embodiments, for the terminology Locally Supporting Edge, see First-Order Silhouette Edge.
 In exemplary embodiments, the terminology supporting polygon refers to a polygon that is “supported” by two structures. In the present method, a supporting polygon between a first-order silhouette edge of a polygon mesh and a viewcell is, in one case, formed by the first-order silhouette edge and a vertex of the viewcell (SV-ME supporting polygon). The vertex of the viewcell supporting this polygon is called the supporting viewcell vertex (SVV). It can be identified by pivoting the plane of the backfacing component polygon of the silhouette edge, wherein the pivoting occurs about the silhouette edge and in a direction of the normal of the backfacing component polygon of the edge toward the viewcell, until the plane of the supporting polygon intersects the viewcell. This intersection will, in the general case, occur at the supporting viewcell vertex, which, together with the first-order silhouette edge, forms a supporting polygon that is a triangle. If the supporting viewcell vertex is a vertex of an edge of the viewcell that is parallel to the silhouette edge of the mesh, then the pivoting plane will intersect the edge of the viewcell, not just a single vertex, and the supporting polygon will be a quadrangle formed by the mesh silhouette edge and the intersected viewcell edge. This second type of supporting polygon is called an SE-ME supporting polygon.
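For a convex viewcell, the pivoting operation described above can be emulated by searching for the tangent plane through the silhouette edge that leaves the entire viewcell and the backfacing triangle's third vertex on the same side (a supporting rather than a separating plane). This is an illustrative sketch under that assumption; the function names and sample geometry are hypothetical:

```python
# Sketch: find a supporting viewcell vertex (SVV) for a silhouette edge
# by testing tangent planes through the edge and each viewcell vertex.
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def plane_side(n, p0, p, eps=1e-9):
    s = dot(n, sub(p, p0))
    return (s > eps) - (s < -eps)  # +1 front, -1 back, 0 on plane

def supporting_viewcell_vertex(edge, occluder_vertex, viewcell_verts):
    """Return a viewcell vertex whose plane through the silhouette edge has
    the whole (convex) viewcell on one side and the backfacing triangle's
    third vertex (occluder_vertex) on the SAME side."""
    p0, p1 = edge
    e = sub(p1, p0)
    for v in viewcell_verts:
        n = cross(e, sub(v, p0))
        if n == (0.0, 0.0, 0.0):
            continue  # v collinear with the edge; no unique plane
        sides = {plane_side(n, p0, w) for w in viewcell_verts} - {0}
        if len(sides) != 1:
            continue  # plane through v is not tangent to the viewcell
        if plane_side(n, p0, occluder_vertex) == sides.pop():
            return v  # supporting (not separating) tangent plane
    return None

# Silhouette edge along the x-axis; occluder triangle hangs below (z < 0);
# box viewcell at y in [2, 3], z in [-0.5, 0.5]:
edge = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
occluder_vertex = (0.5, 0.0, -1.0)
viewcell = [(x, y, z) for x in (0.0, 1.0)
                      for y in (2.0, 3.0)
                      for z in (-0.5, 0.5)]
```

If the silhouette edge is parallel to a viewcell edge, two vertices of that edge qualify (the SE-ME case noted above); this sketch simply returns the first one found.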
 In another case of the present method, a different type of supporting polygon is formed between an inside corner vertex of a first-order silhouette edge and an edge of the viewcell (SE-MV supporting polygon, also called a supporting triangle).
 In the context of the present invention, supporting polygons are conservatively defined as being supported by a first-order silhouette edge (also called a locally supporting edge), or vertex thereof, and the corresponding viewcell, neglecting any occlusion or interference between the first-order silhouette edge and the viewcell. If a supporting polygon, as defined by the present invention, intersects geometry between the first-order edge and the viewcell, then the supporting polygon is not a supporting polygon as defined in the prior art (which does not generally allow a supporting polygon to be defined if such interference exists).
 As defined in the prior art, a polygon would pass a “test for support” (i.e., be a supporting polygon) between two structures if the polygon is supported by a vertex or edge of one structure and a vertex or edge of the other structure without intersecting anything else. The test for support also requires that the extension of the supporting polygon (e.g., this extension is the “wedge”) in the direction away from the first supported object (e.g., the viewcell) also does not intersect the other supported structures (e.g., the polygon meshes) in a way that causes it to be “inside” the other supported structure (e.g., on the topological “inside” of a manifold mesh). This test for support effectively requires a supporting edge to be an “outside” edge of the structure (e.g., a polygon mesh), which will support a supporting polygon tangentially to the structure, as opposed to an “inside” or reflex edge of a structure such as a polygon mesh, which will not.
 In the present method this test for support is used in a more limited way, by including only the polygons sharing an edge of a mesh in the determination of whether the edge supports a conservative supporting polygon between the viewcell and the mesh (i.e., whether the edge is a “locally supporting” or first-order silhouette edge; see the definitions of first-order silhouette edge and locally supporting edge).
 In cases where the difference between the present, conservative definition of the supporting polygon and the prior-art definition of the supporting polygon is to be emphasized, a supporting polygon as defined by the present invention may be called a conservative supporting polygon. Otherwise a conservative supporting polygon as defined in the present invention is simply called a supporting polygon.
 As defined in the present invention, wedges derived from (conservative) supporting polygons always form continuous conservative linearized umbral event surfaces that can be intersected with mesh polygons to conservatively determine the set of mesh polygons (or fragments thereof) that are visible from a viewcell, without the need for the quadric surfaces that usually dominate (and complicate) exact solutions.
 In exemplary embodiments, for the terminology Conservative Supporting Polygon, see the above terminology for supporting polygon.
 In exemplary embodiments, the terminology Test for Support refers to the following: a polygon would pass a “test for support” (i.e., be a supporting polygon) between two polygonal structures if the polygon is supported by a vertex or edge of one structure and a vertex or edge of the other structure without intersecting anything else. The test for support also requires that the extension of the supporting polygon (e.g., this extension is the “wedge”) in the direction away from the first supported object (e.g., the viewcell) also does not intersect the other supported structures (e.g., the polygon meshes) in a way that causes it to be “inside” the other supported structure (e.g., on the topological “inside” of a manifold mesh). This test for support effectively requires a supporting edge to be an “outside” edge of the structure (e.g., a polygon mesh), which will support a supporting polygon tangentially to the structure, as opposed to an “inside” or reflex edge of a structure such as a polygon mesh, which will not.
 In the present method this test for support is used in a more limited way, by including only the polygons sharing an edge of a mesh in the determination of whether the edge supports a conservative supporting polygon between the viewcell and the mesh (i.e., whether the edge is a “locally supporting” or first-order silhouette edge; see the definitions of first-order silhouette edge and locally supporting edge).
 In cases where the difference between the present, conservative definition of the supporting polygon and the prior-art definition of the supporting polygon is to be emphasized, a supporting polygon as defined by the present invention may be called a conservative supporting polygon. Otherwise a conservative supporting polygon as defined in the present invention is simply called a supporting polygon.
 In exemplary embodiments, the terminology Conservative Supporting Hull refers to a polygonal structure formed by the conservative supporting polygons between one polyhedron (e.g., a viewcell) and one or more other polyhedra (e.g., polygon mesh objects). The pivot-and-sweep method is a method of constructing a specific subset of the conservative supporting hull polygons between a viewcell and non-convex polygon mesh objects.
 The supporting hull is a generalization of the “convex hull,” which is important prior art in computational geometry and linear programming. The convex hull between two convex polyhedra is a polygonal structure that contains all of the “sightlines” of visibility between the two convex polyhedra. Prior-art methods of forming the convex hull between one convex polyhedron (e.g., a viewcell) and another convex polyhedron (e.g., a convex polygon mesh) are well known and important. These prior-art methods employ the construction of supporting polygons between the two convex objects. (See O'Rourke, Computational Geometry in C, Second Edition, Cambridge University Press, 1998.)
 There is no apparent prior-art description of forming the supporting hull between a convex polyhedron and one or more non-convex polyhedra (e.g., the polygon mesh objects used in the present invention, which are ubiquitous in computer graphics). An exact supporting hull would include not only polygons but also quadric surfaces incident on compound silhouette vertices.
 In contrast, the set of conservative supporting polygons that can be constructed using the pivot-and-sweep method of the present invention can be easily supplemented (by adding swept wedges incident on outside-corner vertices of the polygon meshes) to form a continuous, conservative approximation to the exact supporting hull between a convex polyhedron (e.g., the viewcell) and one or more non-convex polyhedra.
 The pivot-and-sweep method as specified in one embodiment of the present invention constructs the subset of the conservative supporting hull polygons that, when extended, form wedges that, in combination, form conservative continuous umbral event surfaces which can be used to determine the set of polygons visible from a viewcell without the need for quadric surfaces.
 Some polygons that would be included in the complete conservative supporting hull are not constructed in the pivot-and-sweep method in one embodiment, because the corresponding wedges (e.g., swept, or SE-MV, wedges incident on outside-corner vertices of the polygon meshes) do not contribute to the continuous umbral boundary separating what is visible from the viewcell from what is occluded from the viewcell.
 In the pivot-and-sweep method these supporting polygons are not identified. Consequently their corresponding wedges are not constructed.
 Alternate embodiments employing conservative supporting polygons to construct the continuous umbral event surfaces, other than the specified pivot-and-sweep method, are possible. For example, alternate embodiments can construct the entire complete conservative supporting hull between a viewcell and polygon mesh objects and then extend the edges of all of the supporting hull polygons to form wedges. The wedges so formed include wedges (e.g., wedges formed by extending supporting polygons supported by an edge of the viewcell and an outside-corner vertex of the polygon mesh) that do not contribute to a continuous umbral event surface. In such an alternate embodiment these superfluous wedges can be ignored or removed.
 In exemplary embodiments, the terminology SVV (supporting viewcell vertex) refers to, for a given mesh silhouette edge, the first viewcell vertex that is encountered when pivoting a plane through the mesh silhouette edge in the direction of the normal of the backfacing component polygon of the silhouette edge. (See also supporting polygon.)
 In exemplary embodiments, the terminology Supporting Viewcell Silhouette Contour (SVSC) refers to that portion of the viewcell silhouette contour, as viewed from an inside corner vertex of a mesh silhouette edge, that produces the most extreme umbra boundary. This is the portion of the viewcell silhouette contour which produces the least occlusion when looking through the inside corner mesh silhouette vertex from the viewcell silhouette. It is also the contour that, when subjected to the sweep operation, produces SE-MV wedges that have a consistent orientation with the connected SV-ME wedges and form a continuous surface. The supporting viewcell silhouette contour extends between the two SVVs that correspond to the mesh silhouette edges which produce the inside corner vertex.
 SE-MV wedges are oriented visibility event surfaces that reflect the restriction of visibility at a mesh silhouette vertex by virtue of containment on the viewcell surface.
 In contrast, SV-ME wedges are oriented visibility event surfaces that reflect the restriction of visibility at a mesh silhouette edge by virtue of the (from-viewcell) occlusion caused by the mesh polygon at the silhouette edge.
 The SVSC is the set of (from-mesh-silhouette-edge) viewcell silhouette edges that produces corresponding SE-MV wedges having an orientation that is consistent with the orientation of adjacent SV-ME wedges, thus producing a continuous, conservative, consistently oriented umbral event surface at the mesh silhouette vertex.
 In exemplary embodiments, for the terminology swept triangle, see swept polygon.
 In exemplary embodiments, the terminology swept polygon (also called a swept supporting polygon or a swept triangle) refers to the following: the visibility event boundary at a non-convex (or “inside”) corner of a first-order silhouette edge of a polygon mesh is formed not only by extending those supporting polygons supported by the silhouette edges forming the inside corner, but possibly also by one or more swept polygons, which are a different type of supporting polygon formed between the inside corner vertex of the mesh silhouette and certain edges of the viewcell that are from-point silhouette edges from the perspective of the inside corner silhouette vertex of the mesh object. These from-point silhouette edges of the viewcell form a contour chain (the extremal or supporting viewcell silhouette contour) between the SVVs corresponding to the inside corner edges of the mesh object. Polygons (triangles) are “swept” out for each edge of this chain, forming the swept polygons. The edges of these swept polygons are extended to form SE-MV or swept wedges that also contribute to the first-order visibility event surface at inside corners of the mesh silhouette contour.
 In exemplary embodiments, the terminology swept wedge refers to an SE-MV wedge formed by extension of the edges of a swept supporting polygon.
 In exemplary embodiments, the terminology separating polygon refers to a polygon that separates two structures. A separating polygon between a silhouette edge of a polygon mesh and a viewcell is, in the general case, formed by the silhouette edge and a vertex of the viewcell. The vertex of the viewcell supporting this polygon is called the separating viewcell vertex. It can be identified by pivoting the plane of the backfacing component polygon of a silhouette edge, wherein the pivoting occurs about the silhouette edge and in a direction opposite to the normal of the backfacing component polygon of the edge toward the viewcell, until the plane intersects the viewcell. This intersection will, in the general case, occur at the separating viewcell vertex, which, together with the silhouette edge, forms a separating polygon that is a triangle. If the separating viewcell vertex is a vertex of an edge of the viewcell that is parallel to the silhouette edge of the mesh, then the pivoting plane will intersect the edge of the viewcell, not just a single vertex, and the separating polygon will be a quadrangle formed by the mesh silhouette edge and the intersected viewcell edge. Separating polygons are used to determine the maximum deviation between a first-order UBP and a higher-order UBP incident on a silhouette edge.
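The opposite-direction pivot described above can be emulated the same way as for the supporting case, except that the occluder vertex must land on the opposite side of the tangent plane from the viewcell. An illustrative sketch for a convex viewcell (hypothetical names and sample geometry, not from the specification):

```python
# Sketch: find a separating viewcell vertex for a silhouette edge by
# testing tangent planes through the edge and each viewcell vertex;
# the occluder vertex must lie OPPOSITE the viewcell (contrast with SVV).
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def plane_side(n, p0, p, eps=1e-9):
    s = dot(n, sub(p, p0))
    return (s > eps) - (s < -eps)

def separating_viewcell_vertex(edge, occluder_vertex, viewcell_verts):
    p0, p1 = edge
    e = sub(p1, p0)
    for v in viewcell_verts:
        n = cross(e, sub(v, p0))
        if n == (0.0, 0.0, 0.0):
            continue  # v collinear with the edge
        sides = {plane_side(n, p0, w) for w in viewcell_verts} - {0}
        if len(sides) != 1:
            continue  # not tangent to the viewcell
        occ = plane_side(n, p0, occluder_vertex)
        if occ != 0 and occ == -sides.pop():
            return v  # separating: occluder and viewcell on opposite sides
    return None

# Same configuration as the supporting case: edge on the x-axis, occluder
# vertex below, box viewcell at y in [2, 3], z in [-0.5, 0.5]:
edge = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
occluder_vertex = (0.5, 0.0, -1.0)
viewcell = [(x, y, z) for x in (0.0, 1.0)
                      for y in (2.0, 3.0)
                      for z in (-0.5, 0.5)]
```

For this configuration the separating tangent plane passes through the near-bottom viewcell edge (y = 2, z = -0.5), while the supporting plane passes through the near-top edge.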
 In exemplary embodiments, the terminology Umbra Boundary Polygon (UBP) refers to a polygon that is part of the umbra boundary formed by a polygon mesh object using the viewcell as an area light source. A UBP may correspond to the exact umbra boundary or may conservatively approximate the umbra boundary in a region. UBPs are constructed by extension of supporting polygons and swept polygons using the pivot-and-sweep construction method of the present invention. On initial construction UBPs extend semi-infinitely away from the viewcell. In subsequent steps of constructing PAUs, UBPs are intersected with each other, with mesh polygons, and possibly with a bounding box surrounding all mesh objects.
 In exemplary embodiments, the terminology First-Order UBP refers to a polygon constructed using the pivot-and-sweep method and alternate embodiments of the method described in this specification.
 In exemplary embodiments, the terminology First-Order SV-ME UBP (Source Vertex-Mesh Edge UBP) refers to a polygon constructed by extending the corresponding supporting polygon (SV-ME supporting polygon) between a mesh silhouette edge and a viewcell vertex.
 In exemplary embodiments, the terminology First-Order SE-MV UBP (Source Edge-Mesh Vertex UBP) refers to a polygon constructed by extending the corresponding swept polygon (SE-MV swept polygon) between an inside corner vertex of a mesh silhouette contour (of a simple or composite silhouette contour) and a viewcell edge.
 In exemplary embodiments, for the terminology SV-ME Supporting Polygon, see SV-ME UBP.
 In exemplary embodiments, for the terminology SE-MV Swept Polygon, see SE-MV UBP.
 In exemplary embodiments, the terminology Higher-Order UBP refers to a UBP constructed using a higher-order model of visibility propagation in polyhedral environments. This model accounts for portions of the light source (e.g., viewcell) that may be occluded from an exposed silhouette edge. A higher-order UBP may more precisely approximate the actual umbra boundary in a region where the umbra boundary is actually formed by higher-order (quadric) surfaces formed by edge-edge-edge (EEE) interactions. In the present method higher-order UBPs are constructed using the method of backprojection.
 A higher-order UBP may be incident on a first-order silhouette edge, in which case the higher-order UBP is called an adjusted UBP. Alternatively, a higher-order UBP may be incident on a higher-order silhouette edge. The higher-order silhouette edge may be computed if the adjusted UBP violates local visibility.
 In exemplary embodiments, the terminology backprojection refers to a determination of the portion of a viewcell (light source) visible from a silhouette edge. In the present method this determination employs the pivot-and-sweep method of PAU construction, using a silhouette edge as a light source.
 In exemplary embodiments, the terminology VSVV (Visible Supporting Viewcell Vertex) refers to a vertex determined for a mesh silhouette edge or edge segment: the supporting viewcell vertex that is actually visible from the edge. It is determined by the method of backprojection and used to construct adjusted SV-ME UBPs.
 In exemplary embodiments, the terminology Visible Extremal Viewcell Contour refers to the extremal viewcell contour that is actually visible from an inside corner vertex of a mesh silhouette. It is used to construct the swept polygons that are extended to form higher-order SE-MV UBPs.
 In exemplary embodiments, the terminology Simple Silhouette Contour refers to a chain of silhouette edges connected by shared vertices belonging to a single mesh object. Also called a simple contour.
 In exemplary embodiments, the terminology Compound Silhouette Contour refers to a chain of silhouette edges comprising silhouette edges connected by shared vertices or connected by vertices formed by the intersection of a wedge/UBP from one contour with a non-adjacent silhouette edge. In the study of smooth manifolds such an intersection is called a t-junction. (See Durand, Fredo, PhD thesis, University of Grenoble.)
 In exemplary embodiments, for the terminology T-Junction, also called a compound silhouette vertex (CSV), see Compound Silhouette Contour.
 In exemplary embodiments, the terminology PAU (Polyhedral Aggregate Umbra) refers to a polyhedron forming the boundary of an umbra cast by one or more polygon mesh objects using the viewcell as a light source. The PAU is represented as a polygon mesh comprising UBPs and visible fragments of polygon mesh objects.
 In exemplary embodiments, the terminology TRIVC SHAFT (triangle x viewcell shaft) refers to a shaft (supporting shaft or convex hull) between a mesh triangle and a convex viewcell.
 In exemplary embodiments, the terminology SEGSILE SHAFT refers to a 2D shaft between a MSEGMENT and a silhouette edge. It is used in the 2D version of mesh traversal to find the intersection of a UBP with mesh polygons.
 In exemplary embodiments, the terminology UBL (Umbra Boundary Line) refers to a 2D equivalent of a UBP, formed between a vertex of a silhouette edge and a mesh silhouette vertex.
 In exemplary embodiments, the terminology PLAU (Polyline Aggregate Umbra) refers to a 2D equivalent of the PAU, restricted to the surface of a UBP.
 In exemplary embodiments, the terminology viewcell silhouette contour refers to a silhouette contour of the viewcell as viewed from some element of the triangle mesh.
 In exemplary embodiments, polygon meshes can be represented as directed graphs. In exemplary embodiments, the terminology mesh traversal refers to a procedure that visits the nodes of such a graph. In exemplary embodiments, mesh traversal may follow a breadth-first order in which the edge-neighboring polygons are examined. Other traversal orders are possible.
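A breadth-first traversal over edge-adjacency, as described above, can be sketched as follows (illustrative only; assumes faces are given as tuples of vertex indices):

```python
# Sketch: breadth-first mesh traversal visiting faces through shared edges.
from collections import deque

def breadth_first_traversal(faces, start=0):
    """Visit face indices in breadth-first order over edge-adjacency."""
    # Build a map from each undirected edge to the faces that share it.
    by_edge = {}
    for i, f in enumerate(faces):
        for k in range(len(f)):
            e = tuple(sorted((f[k], f[(k + 1) % len(f)])))
            by_edge.setdefault(e, []).append(i)
    order, seen, queue = [], {start}, deque([start])
    while queue:
        i = queue.popleft()
        order.append(i)
        f = faces[i]
        for k in range(len(f)):
            e = tuple(sorted((f[k], f[(k + 1) % len(f)])))
            for j in by_edge[e]:  # edge-neighboring faces
                if j not in seen:
                    seen.add(j)
                    queue.append(j)
    return order

# On a tetrahedron every face neighbors every other, so all four faces
# are reached from the seed face in one breadth-first layer:
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
```

Other traversal orders (e.g. depth-first) follow the same adjacency map with a stack in place of the queue.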
 In exemplary embodiments, for the terminology Supporting Viewcell Vertex, see SVV.
 In exemplary embodiments, the terminology Supporting Viewcell Edge (SVE) refers to an edge of the viewcell which is parallel to the corresponding mesh silhouette edge. The supporting polygon between the two edges is a quadrangle.
 In exemplary embodiments, the terminology Visible Supporting Viewcell Edge (VSVE) refers to a portion of the SVE that is visible (unoccluded) from the entire corresponding silhouette edge.
 In exemplary embodiments, the terminology SOSC refers to a significantly occluding silhouette contour for a viewcell transition.
 In exemplary embodiments, the terminology SESC refers to a significantly exposing silhouette contour for a viewcell transition.
 In exemplary embodiments, the terminology silhouette contour of a manifold mesh refers to a fold singularity of the manifold corresponding to a transition between visibility and occlusion. For a polyhedral manifold mesh the silhouette contour is piecewise linear, a polyline.
 In exemplary embodiments, the terminology Cusp refers to a point singularity of a silhouette contour representing the terminus of a silhouette contour. Nonconvex manifold meshes may have multiple cusps, each corresponding to the terminus of a silhouette contour.
 In exemplary embodiments, the terminology CSV (Compound Silhouette Vertex) refers to the point of intersection of a wedge and a silhouette edge. For a first-order implementation the wedge is a first-order wedge and the silhouette edge is a first-order silhouette edge. In topological terms the CSV corresponds to a conservatively defined t-vertex of the from-region compound silhouette contour. Typically an inside corner of the compound mesh silhouette contour occurs at a CSV.
 A much less common type of CSV can theoretically occur where a wedge intersects a silhouette vertex. This degenerate case can correspond to an outside corner of a compound silhouette contour. In either case the CSV corresponds to a t-vertex.
 In exemplary embodiments, the terminology Wedge refers to a triangle formed between a supporting vertex of a light source/viewcell and a silhouette edge (SV-ME wedge). When the silhouette edge is parallel to an edge of the light source/viewcell, the wedge is formed between the silhouette edge and the supporting light source/viewcell edge. In this case the (SE-ME) wedge is quadrangular.
 Wedges used in discontinuity mesh methods are not defined on segments which are occluded between the source and the silhouette. This type of wedge results in planar visibility event surfaces which are exact but which do not necessarily produce continuous umbra boundaries.
 In contrast, firstorder wedges are defined as an extension of the entire supporting triangle or quadrangle between the viewcell and the silhouette edge. The firstorder wedge results in planar visibility event surfaces which may be exact or conservative but which always produce a continuous umbra boundary.
 In further embodiments, a wedge is any desired polygon between the viewcell and a polygon mesh.
 A wedge is different from a UBP (umbra boundary polygon) in that the extent of a wedge is limited only by intersection with a mesh polygon. The structure of a UBP is determined not only by intersection with mesh polygons but also by intersection with other UBPs. In fact a UBP is formed from a corresponding wedge which is intersected with other wedges and with mesh polygons to form the UBP. The set of UBPs for a manifold defines the umbra boundary of the manifold and is a subset of the wedges for the manifold.
 The PAU can be constructed by forming the UBPs directly using wedgewedge and wedgemesh polygon intersections. In this case geometry inside a PAU is determined using a pointinpolyhedron test.
 Alternatively, the PAU can be constructed indirectly, without wedgewedge intersections, by traversing only the visible side of the wedgepolygon intersections. In this case geometry inside the PAU is determined using a wedge penetration test of a line between the geometry and the surface of the viewcell.
 In exemplary embodiments, the terminology FirstOrder Visibility (also called the firstorder model of visibility propagation) refers to a model of fromregion visibility propagation in which fromregion umbral event surfaces are incident on (firstorder) visible, firstorder silhouette edges and are constructed using the pivot and sweep method, which assumes that the entire view region (e.g., viewcell) is visible from the firstorder silhouette edge.
 In exemplary embodiments, the terminology HigherOrder Visibility refers to a model of visibility propagation which does not assume that the entire view region (e.g., viewcell) is visible from the edges of the model. Where the supporting viewcell element corresponding to a firstorder silhouette edge (e.g., SVV or SVE) is not visible from the firstorder edge then the corresponding firstorder event surface is inexact. In this case a more precise fromregion event surface can be constructed by backprojection: using the firstorder silhouette edge as a source and determining the corresponding visible supporting viewcell element (vertex or edge). This backprojection process can employ the firstorder model or may itself employ higherorder visibility (by finding the SVV of the source silhouette edge). By subdividing first order edges that are inexact and optionally allowing the silhouette contour to retract the process of backprojection produces an umbral event surface that, in the limit, converges on the exact quadric event surface.
 In exemplary embodiments, the terminology backfacing refers to an orientation of a polygon. An oriented polygon has one front side and one back side. Each polygon is contained in a plane which also has a corresponding front side and back side. If a polygon is backfacing with respect to a point, then the point is on the back side of the polygon's plane. One test to determine if a polygon is backfacing with respect to a point employs the equation of the polygon's plane.
 The orientation of a plane is determined by its normal vector which is defined by the coefficients A, B, and C of the plane equation:

Ax+By+Cz+D=0

A point (xp, yp, zp) is on the back side of this plane if it satisfies the inequality:

A(xp)+B(yp)+C(zp)+D<0

Otherwise the point is on the plane or on the front side of the plane.
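The back-side test above, together with the shared-edge polygon orientation test described in this section, can be sketched as follows. This is an illustrative sketch, not code from the specification; it assumes triangles are triples of 3D points given in counterclockwise (front-facing) vertex order, and the helper names are invented for the example.

```python
def plane_of(tri):
    """Plane coefficients (A, B, C, D) of the triangle's plane."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    ux, uy, uz = x1 - x0, y1 - y0, z1 - z0
    vx, vy, vz = x2 - x0, y2 - y0, z2 - z0
    # The normal (A, B, C) is the cross product of the two edge vectors.
    A = uy * vz - uz * vy
    B = uz * vx - ux * vz
    C = ux * vy - uy * vx
    D = -(A * x0 + B * y0 + C * z0)
    return A, B, C, D

def backfacing_to_point(tri, p):
    """True if the oriented triangle is backfacing with respect to point p,
    i.e. p satisfies A*x + B*y + C*z + D < 0."""
    A, B, C, D = plane_of(tri)
    return A * p[0] + B * p[1] + C * p[2] + D < 0

def mutually_backfacing(tri1, tri2, shared_edge):
    """Shared-edge orientation test: select a vertex of tri2 that is not a
    vertex of the shared edge and test it against the plane of tri1."""
    apex = next(v for v in tri2 if v not in shared_edge)
    return backfacing_to_point(tri1, apex)
```

For example, for a triangle in the z=0 plane with counterclockwise winding (normal pointing toward +z), a point with negative z is on its back side.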
 A polygon may also be oriented with respect to another polygon. If two polygons share an edge, then one method of determining their orientation is to select a vertex of polygon 2 that is not a vertex of the shared edge. Next, determine if the selected vertex is on the back side of the plane of polygon 1, in which case the two polygons are backfacing; otherwise they are frontfacing (or in the same plane).

 The aforementioned objects and advantages, as well as other objects and advantages, are achieved in accordance with the present embodiments, which include a method of conservative, fromregion visibility precomputation in which polygon fragments potentially visible from a polyhedral viewcell are determined by constructing a conservative, linearized, fromviewcell visibility map.
 In one embodiment the mesh objects are comprised of closed manifold triangle meshes (in which each edge is shared by exactly two triangles), although embodiments using other polygon meshes are possible. The method also accommodates nonclosed manifold polygon/triangle meshes in which each edge is shared by one or two triangles.
 The conservative visibility map is constructed from the mesh triangles using conservative linearized umbral event surfaces (CLUES) which contain conservative fromviewcell umbral boundaries.
 The CLUES, which are also called firstorder wedges or simply wedges in this specification, are fromfeature visibility event surfaces that are related to the wedges employed in discontinuity meshing methods; although they differ from discontinuity mesh wedges in important respects.
 The CLUES are constructed on specific edges (and vertices of these edges) of the triangle meshes (called firstorder silhouette edges) using a novel simplified model of visibility propagation in polyhedral environments called firstorder visibility. The present invention includes methods for construction of firstorder CLUES and for adaptively refining the firstorder CLUES to produce more precise conservative linearized umbral event surfaces. These refined CLUES reflect higherorder visibility effects caused when the entire viewcell is not visible from the supporting silhouette edge. These higherorder refined linear event surfaces conservatively approximate the exact (often quadric) umbral boundaries using polygonal surfaces that are much simpler to employ. According to some embodiments, refinement of the firstorder event surfaces is conducted where the maximum deviation between the firstorder event surface and the higherorder event surface exceeds a predetermined value.
 In some embodiments, the refinement process is conducted by backprojection in which the silhouette edge supporting a CLUES is used as a lineal light source to determine the portion of the viewcell visible from the edge.
 The firstorder model of visibility propagation is based on the simplifying conservative assumption that if a silhouette edge is visible from a viewcell, then it is visible from all parts of the viewcell. This assumption leads to a simple definition of firstorder silhouette edges as those edges for which one component triangle is backfacing for all points on the viewcell and the other component triangle is frontfacing for at least one point on the viewcell, and further that the component triangles are not facing each other. This definition is effectively identical to the definition of a frompoint silhouette edge and reflects the fact that the firstorder model effectively treats the viewcell as a viewpoint in some important respects.
 One type of CLUES, called a source vertex—mesh edge, or SVME wedge, is constructed on firstorder silhouette edges using a simple pivot from the edge to the supporting point of the viewcell. These SVME CLUES are analogous to frompoint umbral boundary polygons that are used in shadow algorithms. Unlike frompoint umbral boundary polygons, the SVME CLUES alone do not necessarily form a continuous umbral boundary surface on nonconvex manifolds.
 In the firstorder method, a second type of CLUES (called a source edge-mesh vertex, or SEMV wedge) is constructed which joins the aforementioned SVME wedges (constructed by pivot) into a continuous umbral event surface. This second type of CLUES is formed by a sweep operation at an inside corner mesh silhouette vertex where the previously described SVME wedges from adjacent silhouette edges do not otherwise form a continuous umbral event surface. In such a case the SVME wedges incident on adjacent firstorder silhouette edges are connected into a continuous umbral event surface by the SEMV wedges incident on the connecting inside corner mesh silhouette vertex.
 SEMV wedges are constructed from supporting polygons formed by a sweep operation anchored at the inside corner mesh silhouette vertex and sweeping across edges of the viewcell which are silhouette edges when viewed from the inside corner mesh silhouette vertex. The inside corner mesh silhouette vertex may be a vertex of a simple silhouette contour, formed by connected firstorder silhouette edges. Alternatively the inside corner mesh silhouette vertex may be a compound silhouette vertex (CSV) formed where a firstorder wedge intersects another silhouette edge. These CSVs correspond to tvertices of the fromregion visible manifold and typically correspond to quadric event surfaces when using exact fromregion visibility solutions. By constructing SEMV wedges on the CSVs, the present method ensures that a continuous, conservative, linearized fromregion umbral event surface is generated which reflects the intrinsic occluder fusion of a compound silhouette contour but without using quadric surfaces.
 Table Ib shows the four types of visibility event surfaces employed in the method of complete discontinuity meshing (also characterized in Table Ia). Table Ib also presents the visibility event surfaces of the present invention, the CLUES, and compares them to the visibility event surfaces employed in complete discontinuity meshing. Note that the Jenkins nomenclature does not include quadric (EEE) surfaces since, in the visibility propagation model of the present invention, these quadric surfaces are replaced with SVME and SEMV planar surfaces in the firstorder version of the method and with backprojection SVME/SEMV surfaces in the higherorder refinement embodiment of the method.

TABLE Ib
Nomenclature of FromRegion Visibility Event Surfaces

Visibility Event Surface | Drettakis et al. Naming | Jenkins CLUES Nomenclature
Planar event surface containing a feature of the emitter/viewcell/source | EEV (Emitter EdgeVertex) | SVME (Viewcell Vertex-Mesh Edge); SEMV (Viewcell Edge-Mesh Vertex); SEME (Viewcell Edge-Mesh Edge) (special case)
Planar event surface not containing a feature of the emitter/viewcell/source | Non-EEV | Backprojection SVME; Backprojection SEMV; Backprojection SEME
Quadric event surface containing a feature of the emitter/viewcell/source | Emitter-EEE, EeEE | Approximated by backprojection event surfaces
Quadric event surface not containing a feature of the emitter/viewcell/source | Non-Emitter-EEE | Approximated by backprojection event surfaces

 In one embodiment, the construction of the fromviewcell visibility map using CLUES can employ the priorart methods of discontinuity mesh construction in which the CLUES are substituted for the linear and quadric "wedges" that are used in discontinuity meshing. This embodiment is not optimal since the discontinuity meshing approach is not outputsensitive. In the prior art method of discontinuity meshing, event surfaces are generated on all silhouette edges even though many of these silhouette edges may be occluded from the viewcell. In this approach, the visibility of the discontinuity mesh regions is determined after all of the discontinuity mesh regions have been constructed. For densely occluded environments many of these constructed regions are completely occluded from the viewcell. As a result, the complexity of the arrangement of the discontinuity mesh regions can be much higher than the complexity of the visible component of the discontinuity mesh (which corresponds to the visibility map).
 In another embodiment, the CLUES are used to construct the actual fromviewcell umbra volumes, called polyhedral aggregate umbrae (PAU), which are comprised of the CLUES and the unoccluded mesh triangle fragments. The purpose of the method is to determine only the unoccluded mesh triangle fragments (which comprise the potentially visible set or PVS). The construction of the entire PAU (which requires potential intersection of all of the CLUES) is typically not necessary to determine the unoccluded triangle mesh fragments. Instead, the unoccluded mesh triangle fragments can be more efficiently determined by the direct construction of a fromregion visibility map.
 Therefore, the present invention includes an outputsensitive method of conservative linearized visibility map construction, which is based on the traversal of triangle mesh 2manifolds (embedded in R3). In this method, a breadthfirst traversal of the unoccluded triangle mesh manifolds is conducted. Traversal of a triangle is suspended if any potentially occluding triangles (those triangles in the shaft between the triangle and the viewcell) have not been traversed and the traversal is jumped to the closer, unprocessed triangles. This approach enforces a fronttoback order. Manifold mesh traversal proceeds to the silhouette contours of the mesh or to fromviewcell occlusion boundaries. The fromviewcell silhouette contours are treated as the catastrophic visibility event curves of the manifold. At these contours, the corresponding CLUES are constructed and cast into the environment to determine their intersection with the mesh triangles. This casting is itself an onsurface (e.g., onwedge) visibility problem encountered in discontinuity meshing and has previously been solved using conventional techniques such as the WeilerAtherton algorithm, which is not outputsensitive. Alternatively, the present method includes a technique in which this onwedge visibility problem is solved using a simplified version of the 2manifold traversal (now being described) applied to the 1manifolds encountered in the onwedge or onCLUES visibility problem. For simplicity, the present specification frequently uses the terms wedge, firstorder wedge, and CLUES interchangeably, although it is shown that the firstorder wedge, which is used in the present method, differs in important respects from the wedge constructed in the priorart method of discontinuity meshing.
 The onwedge visible intersections of the umbral event surfaces and the manifold mesh triangles correspond to the segments of the fromviewcell umbral discontinuity mesh but may not be actually segments of the corresponding fromviewcell visibility map occlusion boundaries. This is because a wedge represents the visibility of triangle segments from a specific feature (vertex or edge) of the viewcell, not necessarily an umbral boundary from the entire viewcell. In one embodiment of the present method, each umbral discontinuity mesh segment is tested to determine if it is a fromviewcell occlusion boundary at the time it is constructed.
 By enforcing a fronttoback processing order and constructing occlusion boundaries when they are encountered, the mesh traversal largely avoids the traversal of occluded triangles and thereby tends to achieve outputsensitive performance. In this outputsensitive method, the manifold traversal effectively cascades off the silhouette contours, flowing onto other manifolds intersected by the occlusion boundaries corresponding to visibility event surfaces. Traversal is continued only on the unoccluded side of an occlusion boundary in a manner that realizes an outputsensitive visibility cascade.
 The resulting outputsensitive performance is achieved at the cost of having to test each unoccluded mesh element for unprocessed, potentially occluding elements using a triangleviewcell shaft. In the present method, the cost of these shaft inclusion tests is greatly reduced by employing a hierarchical spatial subdivision and intersecting the shaft with these hierarchical containing structures. This results in an overall cost for all shaft inclusion tests that tends towards O(N Log(M)), where N is the number of visible mesh elements traversed and M is the average number of potentially occluding mesh elements.
 Mesh polygons are processed by mesh traversal initiated at strongly visible polygon fragments and continuing traversal to either a) origin of the conservative linearized umbral boundary wedges at silhouette edges or b) intersection of the wedges (forming a true fromviewcell occlusion boundary) with the mesh polygons. To ensure proper depth order, the mesh traversal algorithm identifies any unprocessed, potentially occluding mesh elements and immediately shifts mesh traversal to the closer untraversed elements. Ambiguous depth order between mesh elements is detected by maintaining a directed graph representing the triangle overlap relationships and identifying cycles in this graph using a lineartime algorithm such as Tarjan's algorithm. Where cycles exist, the triangles in the viewcelltriangle shaft of the offending triangle are intersected with the shaft to identify portions of these overlapping triangles that are completely within the shaft. These components cannot form a cycle with the offending triangle since they are completely within the shaft. Reinitiating the traversal using these components breaks the cycle.
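The cycle check described above can be sketched as follows. This is a generic sketch of Tarjan's lineartime strongly connected components algorithm applied to a triangle overlap graph; the graph encoding (a dict mapping each triangle id to the ids it may occlude) and all names are assumptions for illustration, not the specification's data structures.

```python
def tarjan_scc(graph):
    """Strongly connected components of a directed graph given as
    {node: [successor, ...]}, via Tarjan's algorithm."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            # v is the root of a component; pop it off the stack.
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

def depth_order_cycles(overlap_graph):
    """Components with more than one triangle indicate an ambiguous
    (cyclic) depth order among those triangles."""
    return [scc for scc in tarjan_scc(overlap_graph) if len(scc) > 1]
```

A component of size one means the triangle participates in no overlap cycle; only multi-triangle components trigger the shaft-splitting step described above.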
 By enforcing a fronttoback traversal of meshes, terminating traversal at occlusion boundaries, and employing hierarchical spatial subdivision, the algorithm is designed to achieve output sensitive performance even for densely occluded environments.
 One advantage of the mesh traversal/visibility map construction method is that it is more efficient at identifying occlusion than algorithms such as Volumetric Visibility, Extended Projection, and WeilerAtherton. All of these other methods depend on large convex occluders, which are unusual in realistic models. For example, the WeilerAtherton algorithm, which is a frompoint visibility algorithm, can combine the occlusion of connected polygons (a process called consolidation) only if the connected polygons form a convex polyhedron. Likewise, the Volumetric Visibility method (Schaufler et al. 2000) depends on simple shafts formed between the viewcell and a single convex boxshaped blocker that is inside the actual occluder. If the actual occluder is concave and/or has topological holes, then it can be difficult to identify such a simplified convex blocker that accurately represents the occlusion of the actual occluder.
 In contrast, the present invention does not depend on the presence of convex occluders, but rather directly exploits the occlusion coherence inherent in the connectivity of a manifold mesh, irrespective of the mesh's shape.
 The present method includes a technique of determining the "effective static occlusion" (ESO) of occluded regions of the visibility map. The effective occlusion of a region is a ratio reflecting the number and surface area of the polygons occluded in an occlusion region divided by the additional geometry created during the remeshing caused by the region.
 The precision of the visibility maps produced by the present method can be decreased by a conservative convex simplification of the silhouette contours employed. This can be useful when the occlusion boundary surrounding an occluded visibility map region contains too much detail, especially if the effective occlusion of the region is low. The effective static occlusion is used as a heuristic to control the simplification of the silhouette contours and therefore the precision of the corresponding visibility map/PVS.
 The precision of the visibility map can also be selectively increased, using the backprojection approach to higherorder refinement previously discussed. The control of this adaptive refinement toward the exact quadric event surfaces is also determined, in part, by the ESO metric.
 Storage requirements are reduced by using an intermediate deltaPVS representation wherein important silhouette edges, those which produce significant occlusion or exposure, are identified during the precomputation by identifying the corresponding regions of coherent occlusion or exposure.
 The present invention includes a method of directly identifying the polygons or polygon fragments of a model that are exposed or occluded during a specific viewcell transition. The list of newly visible polygons or polygon fragments for a viewcell transition is called the deltaG+ submesh. The list of newly occluded polygons or polygon fragments for a viewcell transition is called the deltaG− submesh.
 The present invention includes a method of identifying coherent regions of newly occluded and newly exposed geometry for a viewcell transition by computing the visibility maps for each viewcell and traversing the resulting visibility map for one viewcell to the occlusion/exposure boundaries of the other viewcell. This approach is used to identify connected regions of exposure/occlusion. The effective occlusion of these regions is measured using the same approach as the effective static occlusion previously described. In the case of these deltaG regions, the effective occlusion is called the effective dynamic occlusion (EDO). The EDO is used to identify regions of coherent effective differential visibility.
 The visibility PVS data for one viewcell can be generated, in the usual way, from an existing PVS and the stored deltaG+ and deltaG− data for the viewcell transition.
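At the level of polygon sets, the update described above amounts to simple set arithmetic. The sketch below is illustrative only (polygon name strings stand in for polygon/fragment references; the function name is invented):

```python
def apply_viewcell_transition(current_pvs, delta_g_plus, delta_g_minus):
    """Produce the next viewcell's PVS from an existing PVS and the stored
    deltaG+ (newly visible) / deltaG- (newly occluded) sets for the
    viewcell transition."""
    return (current_pvs - delta_g_minus) | delta_g_plus

# Example: moving between viewcells exposes a door and occludes a statue.
pvs_a = {"wall", "floor", "statue"}
pvs_b = apply_viewcell_transition(pvs_a,
                                  delta_g_plus={"door"},
                                  delta_g_minus={"statue"})
# pvs_b == {"wall", "floor", "door"}
```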
 Alternatively, silhouette contours which form such regions of high EDO are identified and labeled during an offline precompute phase. These labeled silhouette contours are the basis of an intermediate representation of the deltaPVS which substantially reduces the storage requirements compared to directly storing all deltaG+ and deltaG− submesh data for each viewcell transition.
 In this intermediate visibility map/PVS representation, the dynamically exposing or dynamically occluding silhouette contours (bounding regions of high EDO) are labeled. The regions of coherent, high EDO are identified, in an offline preprocess, using a simple traversal of a unified visibility map which contains occlusion boundaries for both viewcells of a particular transition.
 The silhouette labels are stored with the triangle mesh data along with occlusion boundary intersection hints for each viewcell transition. The occlusion boundaries are boundaries of the fromregion visibility map produced by the umbral event surfaces incident on a labeled silhouette contour. Both the labeled silhouette contour and the corresponding occlusion boundary form polylines. The complete silhouette contour (and corresponding occlusion boundaries) can be constructed at runtime from a few labeled silhouette edges (and corresponding occlusion boundary segments) using simple algorithms for finding connecting silhouette edges and polygonpolygon intersections.
 According to some embodiments, for simple silhouette contours, an entire labeled silhouette contour can often be stored by labeling only a single starting edge of the contour. The remaining connecting firstorder silhouette edges that form the contour can be rapidly identified at runtime. This scheme makes the intermediate representation using labeled silhouette contours very storage efficient.
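The runtime completion of a contour from a single labeled edge can be sketched as follows. This is a minimal sketch under assumed interfaces: edges are identifiers, `edges_at_vertex` is an adjacency lookup supplied by the mesh, and `is_silhouette` is the firstorder silhouette test for the current viewcell; all names are hypothetical.

```python
def complete_contour(start_edge, edges_at_vertex, is_silhouette):
    """Gather the connected firstorder silhouette edges reachable from a
    single labeled starting edge, by walking shared vertices."""
    contour = [start_edge]
    frontier = [start_edge]
    seen = {start_edge}
    while frontier:
        v0, v1 = frontier.pop()  # an edge is a pair of vertex ids
        for v in (v0, v1):
            for e in edges_at_vertex(v):
                if e not in seen and is_silhouette(e):
                    seen.add(e)
                    contour.append(e)
                    frontier.append(e)
    return contour
```

For a chain of edges (0,1)-(1,2)-(2,3) that all pass the silhouette test, labeling only (0,1) suffices to recover the whole contour at runtime, which is the storage saving described above.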
 According to some embodiments, for compound silhouette contours (formed where the umbral event surfaces incident on one simple silhouette contour intersect another silhouette contour) the storage scheme is similar except that the compound silhouette vertices (CSVs) representing the intersection points of the simple contours are also stored.
 Using the intermediate representation, the visibility map/PVS for one viewcell can be generated from the visibility map of a previously constructed, parent (containing) viewcell using a simplified traversal. This traversal of a parent visibility map proceeds to labeled occluding silhouette contours which support umbral event surfaces that produce new occlusion boundaries which effectively bypass newly occluded mesh elements. This approach obviates the need for storing deltaG− information and deltaG+ information for viewcell transitions between child viewcells having a common parent. This method of incrementally generating visibility map/PVS at runtime using only labeled significantly occluding silhouette contours is particularly useful in a distributed clientserver implementation, since the client can use it to remove newly occluded geometry for a viewcell transition without receiving explicit deltaG− information from the server.
 In addition to being used to directly generate the visibility map/PVS for a viewcell, the labeled silhouette contours can also be used to generate deltaPVS data when needed. Using this approach, according to some embodiments, the deltaPVS data (e.g., deltaG+ and deltaG− submesh data) is not stored but generated when needed using the labeled silhouette contour information, an existing visibility map, and (for deltaG+ submesh data) a superset of the current visibility map/PVS that is guaranteed to contain the newly visible geometry for a viewcell transition. In some embodiments, the latter superset information can be provided as stored deltaG+ submesh data for a parent viewcell that contains the child viewcells for which the specific parenttochild viewcell transitions occur.
 Using these three data sets, the parenttochild deltaG+ and deltaG− data for a specific viewcell transition is generated by a simplified traversal of a previously constructed visibility map corresponding to a parent viewcell. The labeled silhouette contours (and associated occlusion boundary hints) are used to quickly construct the visibility map/PVS of the child viewcell from that of the parent. Alternatively, the deltaG+ and deltaG− data can be explicitly generated by traversal of the newly exposed and newly occluded regions respectively. The latter method is useful in a clientserver implementation in which the server is a visibility event server which delivers deltaG+ and/or deltaG− submesh data to the client using navigationbased prefetch.
 Alternatively, only the deltaG+ data for a viewcell transition may be stored explicitly, and the deltaG− data generated by the simplified traversal of a parent viewcell. In this implementation, a simplified (and fast) traversal of a parent visibility map proceeds to labeled occluding silhouette contours which support umbral event surfaces that produce new occlusion boundaries which effectively bypass newly occluded mesh elements.
 This deltaPVS method represents an efficient codec for visibilitybased streaming of outofcore geometry and texture information in which the dynamic occluding or exposing silhouette contours (for the viewcelltoviewcell transitions) are identified and labeled in an offline, precomputed encoding; and the resulting labeled contours, along with other hint information, are used to rapidly construct a PVS/visibility map (or deltaG submesh data) from an existing PVS/visibility map at runtime. This codec allows for a distributed clientserver implementation in which the storage/transmission costs can be selectively decreased at the expense of increased runtime compute costs.
 In addition, a perceptionbased encoding strategy is used to encode low levelofdetail (LOD) geometric and texture information during periods when the deltaG+ submesh information is not delivered to the client in time to generate a complete PVS for the current viewcell/viewpoint. This strategy exploits the fact that the human visual system cannot fully resolve information that is presented to it for less than approximately 1000 milliseconds. This approach allows a relatively perceptually lossless performance degradation to occur during periods of low spatiotemporal visibility coherence: a situation which challenges the performance of both the codec and the human visual system in similar ways.
 Details of this codec and its use in a clientserver method of streaming content delivery employing navigationbased prefetch are disclosed in the specification.
 Table Ic summarizes a number of the priorart methods of PVS determination and shadow calculation which employ the various visibility event surfaces characterized in Table Ia. The last row of the table includes the current method of fromviewcell deltaPVS determination using the methods of the present invention including firstorder and higherorder conservative, linearized, umbral event surfaces (CLUES).

TABLE Ic
PVS and Shadow Methods

Method | Purpose | Model | Umbral Event Surfaces | Solution Space | PVS Precision
Teller (1992) | PVS | BSP/Portals | EEV, Non-EEV | Object | Cell-to-Object
Carmack (1996) | PVS | BSP/Portals | EEV | Object | Cell-to-Cell
Chin-Feiner | Shadow | General 3D | EEV | Object | NA
Koltun (2000) | PVS | 2.5D | EEV, Non-EEV | Object | Cell-to-Object
Discontinuity Mesh, Drettakis (1994) | Umbra & Penumbra Shadows | General 3D | EEV, Non-EEV, EEEE, Non-EEEE | Object | NA
Extended Projection, Durand (2000) | dPVS | Convex Occluders (Non-Convex = Special) | EEV (Effectively Sampled on Planes) | Image | Cell-to-Polygon
Volumetric Visibility, Schaufler (2000) | PVS | 2.5D Voxelized | EEV | Object | Cell-to-Cell (Approximate)
Shrunk Occluders, Wonka (2000) | PVS | 2.5D | All (Sampled, Approximated) | Image | Cell-to-Object
Vlod, Chhugani (2005) | dPVS & Streaming | Simple 3D Occluders, Genus 0 | EEV (Approximated, Sampled) | Object/Image | Cell-to-Polygon
Exact FromViewcell, Nirenstein (2005), Bittner (2002) | PVS | General 3D | All Exact | 5D Line Space | Cell-to-Polygon
CLUES, Jenkins (2010) | dPVS & Streaming | General 3D | EEV, Non-EEV, EEEE (linearized approximation) | Object | Cell-to-Polygon Fragment

 According to some embodiments, the present method of fromregion visibility precomputation uses fromregion visibility surfaces that are constructed using a simplified, conservative model of visibility propagation called firstorder visibility.
 The exact visibility in polyhedral environments is dominated by quadric visibility event surfaces which arise as a result of visibility interactions among triples of edges. In contrast, the firstorder model considers visibility event surfaces which arise as a result of visibility interactions between pairs of edges. Using the methods disclosed herein, the firstorder visibility model produces continuous, conservative umbral event surfaces which can be used to construct conservative fromviewcell visibility maps and related fromviewcell potentially visible sets (PVS).
 The firstorder model of visibility propagation is based on the simplifying conservative assumption that if a silhouette edge is visible from a viewcell then it is visible from all parts of the viewcell. This assumption leads to a simple definition of firstorder silhouette edges as those edges for which one component triangle is backfacing for all points of the viewcell and the other component triangle is frontfacing for at least one point of the viewcell, and further that the component triangles are not facing each other. This definition is effectively identical to the definition of a frompoint silhouette edge and reflects the fact that the firstorder model treats the viewcell as a viewpoint in some important respects.
 In firstorder visibility, any segment of a silhouette edge is assumed to be either completely occluded from the viewcell or completely visible from the viewcell (visible from all parts of the viewcell). That is, in firstorder visibility, if a silhouette edge is visible from any part of the viewcell, it is assumed to be visible from all parts of the viewcell.
 The firstorder model does not account for the effects of varying occlusion along a silhouette edge segment that is caused by an edge intervening between the silhouette edge and the viewcell to produce a quadric triple edge (or EEE) visibility event surface. Instead the firstorder visibility model produces planar visibility event surfaces which either correspond to the exact, planar fromregion umbral event surfaces or conservatively lie within the exact fromregion umbral boundaries, which are often quadric surfaces. The firstorder model of visibility propagation employs only planar visibility event surfaces that arise from visibility interactions between pairs of edges. Further, the firstorder, planar visibility event surfaces are often very close to the exact event surfaces, which may be quadrics, and in many cases the firstorder event surfaces are the exact fromregion visibility (umbra) boundaries.
 According to some embodiments, first-order visibility event surfaces are generated using a simple pivot-and-sweep algorithm. In one embodiment, the viewcell is assumed to be convex, which simplifies the pivot-and-sweep construction method. Alternate embodiments of the pivot-and-sweep method allow construction of first-order visibility event surfaces from a non-convex viewcell; any non-convex viewcell can be decomposed into convex components, for example by tetrahedralization.
 In some embodiments, first-order mesh silhouette edges, which give rise to the first-order visibility event surfaces, are identified using three criteria: first-order silhouette edges are defined as those edges of a manifold triangle mesh which pass the following tests:
 1) one triangle sharing the edge is back-facing for all vertices of the viewcell,
 2) the other triangle sharing the edge is front-facing for at least one of the vertices of the viewcell,
 3) the component triangles sharing the edge are back-facing with respect to each other.
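By way of illustration only, the three tests above can be sketched in code. This is a minimal sketch, not the claimed implementation; the function names, the triangle representation (three vertices whose winding defines the front side), and the passing of each triangle's non-shared vertex are assumptions of this example.

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def back_facing(tri, point):
    """True if 'point' lies strictly behind the plane of triangle 'tri'
    (the vertex winding of 'tri' defines its front side)."""
    a, b, c = tri
    n = cross(sub(b, a), sub(c, a))
    return dot(n, sub(point, a)) < 0.0

def is_first_order_silhouette(tri_a, tri_b, opp_a, opp_b, viewcell_verts):
    """Test the edge shared by tri_a and tri_b against the three criteria.
    opp_a / opp_b are the vertices of tri_a / tri_b not on the shared edge."""
    for back, front, opp_back, opp_front in ((tri_a, tri_b, opp_a, opp_b),
                                             (tri_b, tri_a, opp_b, opp_a)):
        # 1) one component triangle is back-facing for ALL viewcell vertices
        crit1 = all(back_facing(back, v) for v in viewcell_verts)
        # 2) the other is front-facing for at least one viewcell vertex
        crit2 = any(not back_facing(front, v) for v in viewcell_verts)
        # 3) the component triangles are back-facing with respect to each other
        crit3 = back_facing(back, opp_front) and back_facing(front, opp_back)
        if crit1 and crit2 and crit3:
            return True
    return False
```

For a convex "ridge" edge, the test succeeds when the viewcell lies to one side (one slope back-facing, the other front-facing) and fails when the viewcell is positioned so both slopes are front-facing.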
 The first-order conservative linearized umbral event surfaces (CLUES), also called wedges, are of two types. In some embodiments, the viewcell is also conceptually treated as a "source" or light source.
 According to some embodiments, one type of wedge is formed by a vertex of the viewcell and a firstorder silhouette edge of the mesh (SVME). Another type of wedge is formed by an edge of the viewcell and an insidecorner silhouette vertex of the mesh (SEMV). The SVME type is discussed first.
 According to some embodiments, to construct an SVME wedge, the supporting triangle between a first-order silhouette edge and the viewcell is identified. This triangle is formed between the silhouette edge and a specific vertex of the viewcell called the supporting viewcell vertex (SVV). The supporting viewcell vertex corresponding to a first-order silhouette edge is identified by testing the angle between the back-facing triangle of the edge and the triangles formed between each viewcell vertex and the silhouette edge. The vertex which produces a vertex-edge triangle forming the smallest angle with the back-facing triangle (i.e., the most negative cosine value) is the first vertex encountered in a "pivot" of the plane of the back-facing triangle through the silhouette edge. This viewcell vertex is the supporting viewcell vertex for the corresponding mesh silhouette edge.
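The pivot described above might be sketched as follows. The sketch selects the viewcell vertex whose plane through the edge yields the most negative cosine against the back-facing triangle's plane; the vector helpers, the winding convention for the back-facing triangle, and the return of an index are assumptions of this illustration (the degenerate tie case, handled separately in the text, is ignored here).

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def norm(a): return math.sqrt(dot(a, a))

def supporting_viewcell_vertex(a, b, opp, viewcell_verts):
    """Pivot the plane of the back-facing triangle (a, b, opp) about the
    silhouette edge (a, b): for each viewcell vertex, form the plane through
    the edge and that vertex, and keep the vertex whose plane normal makes
    the most negative cosine with the back-facing triangle's normal.
    Returns the index of the supporting viewcell vertex (SVV)."""
    e = sub(b, a)
    n_back = cross(e, sub(opp, a))   # normal of the back-facing triangle
    def cosine(v):
        n_v = cross(e, sub(v, a))    # normal of the candidate vertex-edge triangle
        return dot(n_back, n_v) / (norm(n_back) * norm(n_v))
    cosines = [cosine(v) for v in viewcell_verts]
    return min(range(len(viewcell_verts)), key=cosines.__getitem__)
```

The selected vertex has the supporting-plane property: all other viewcell vertices lie on the front side of the plane through the edge and the SVV.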
 The firstorder wedge incident on the firstorder mesh silhouette edge is formed by the edge itself and two other edges, each of which is a line through a vertex of the edge and the supporting viewcell vertex (SVV) corresponding to the silhouette edge. These two edges extend semiinfinitely from the SVV, through the silhouette vertices in the direction away from the viewcell source. This wedge can be seen as an extension of the supporting triangle formed between the silhouette edge and the corresponding supporting viewcell vertex (SVV). As previously indicated, since this type of wedge is formed from a silhouette edge of the mesh and a vertex of the viewcell, it is called a SourceVertexMeshEdge (SVME) wedge.
 A degenerate case may occur in which the pivot from the mesh silhouette edge to the viewcell encounters two or more supporting viewcell vertices (SVVs) producing the same pivot angle. This occurs when an edge of the viewcell containing the SVV(s) is parallel to the mesh silhouette edge. In this case, the supporting triangle between the mesh silhouette edge and the viewcell is actually a supporting quadrangle. The present method handles this degenerate case by constructing a special SEME wedge.
 In some embodiments, the pivot operation produces an SVME wedge for each mesh first-order silhouette edge. However, the visibility event surface at the shared vertex of two first-order silhouette edges is not necessarily defined entirely by the intersection of the two adjacent SVME wedges. While adjacent SVME wedges always intersect at the shared silhouette vertex, at inside corners of the silhouette contour these SVME wedges can intersect only at the single point shared by their two supporting silhouette edges. In this case, their intersection does not form a continuous umbral surface across that portion of the silhouette contour. The structure of the visibility event surface spanning the silhouette contour at the shared silhouette vertex depends on how the adjacent SVME wedges intersect.
 According to some embodiments, a conceptual reverse sweep operation can be used to determine whether adjacent SVME wedges intersect to form a continuous umbra surface. A reverse sweep operation in which a line segment anchored at the SVV is swept along the corresponding mesh silhouette edge from vertex to vertex generates the same supporting triangle formed in the previously described pivot operation. Conceptually, however the reverse sweep operation can be used to identify discontinuities of the visibility event surface that may occur at the shared vertex of adjacent silhouette edges.
 If two adjacent mesh silhouette edges form an “outside corner” or convex corner of a mesh manifold, then such a reverse sweep operation would not encounter any restriction to the sweep (i.e., occlusion) at the shared vertex. Consequently, the SVME wedges corresponding to the adjacent “outside corner” silhouette edges will intersect to form a continuous visibility event surface which spans the two silhouette edges. SVME wedges incident on adjacent outside corner firstorder silhouette edges will intersect to form such a continuous visibility event surface even if the supporting triangles for the adjacent silhouette edges pivot to different SVVs on the viewcell.
 Conversely, if two adjacent mesh silhouette edges form an “inside corner” or nonconvex corner of a mesh manifold, then the SVME wedges incident on these two edges may not intersect at the shared silhouette vertex in such a way as to form a continuous visibility event surface which spans the adjacent mesh silhouette edges. Supporting polygons corresponding to adjacent “inside corner” silhouette edges may pivot to different SVVs on the viewcell. In such a case, the adjacent SVME wedges will still intersect at the shared silhouette vertex but their intersection will not form a continuous visibility event surface spanning the adjacent silhouette edges. A reverse sweep operation anchored at the SVV and sweeping through the silhouette edge would encounter a restriction (occlusion) at such an inside corner vertex. This restriction results in a discontinuity in the visibility event surface formed by the adjacent inside corner SVME wedges.
 The continuous visibility event surface at such an inside corner can be constructed by reversing the previously described reverse sweep operation at the inside corner. The sweep is now anchored at the shared inside corner mesh silhouette vertex, and sweeping occurs along the silhouette edges of the viewcell (edges which are from-point silhouette edges with respect to the inside corner mesh silhouette vertex), starting at the SVV for one of the mesh silhouette edges and ending at the SVV for the neighboring mesh silhouette edge. Each swept viewcell silhouette edge forms a swept triangle with the inside corner vertex. The edges of this swept triangle, extended through the inside corner mesh silhouette vertex and away from the viewcell, define a wedge. Since such wedges are formed from an edge of the viewcell and a vertex of the mesh, they are called SEMV wedges. Such a sweep operation conducted along the (from-point) silhouette contour of the viewcell produces a set of SEMV wedges that form a continuous visibility event surface connecting the (otherwise disconnected) SVME wedges of the adjacent mesh silhouette edges.
 Conceptually, then, when the conceptual reversed sweep operation anchored at the SVV encounters a restriction (occlusion) at an inside corner of a firstorder silhouette contour, the reversed sweep operation is reversed. This reversal produces the actual sweep operation which constructs the swept triangles and the corresponding SEMV wedges that form a continuous visibility event surface (firstorder umbral event surface) which connects the SVME wedges from the adjacent firstorder mesh silhouette edges. This sweep operation generates SEMV wedges that are incident on a vertex of the mesh silhouette contour and which reflect a visibility event boundary that is primarily determined by a combination of “occlusion” at the silhouette edges, reflected in the SVME wedges, and containment of the viewpoint on the viewcell surface, reflected in the SEMV wedges incident on the silhouette vertex.
 It should be noted that, for a convex viewcell, two paths of connected viewcell silhouette edges will generally connect one SVV to the other. Only one of these paths will sweep out a chain of SEMV wedges that connect the adjacent SVME wedges to form a continuous visibility event surface having a consistent face orientation. In some embodiments, this particular path is called the supporting viewcell silhouette contour (SVSC). A test to identify the SVSC is presented elsewhere in this specification.
 According to some embodiments, for the construction of first-order wedges, the conceptual reverse sweep operation which would detect an occlusive restriction to visibility at the inside corner mesh vertex can be replaced by another test. This test involves comparing the normals of the adjacent mesh silhouette edges. If the two connected mesh silhouette edges have their normals oriented such that they are mutually front facing, then the shared vertex is called an outside corner of the mesh; otherwise, the shared vertex is an inside corner.
 According to some embodiments, when an inside corner mesh silhouette vertex is encountered, the first-order wedges through this vertex are generated by the sweep operation: the sweep is anchored at the inside corner mesh silhouette vertex and proceeds along the supporting viewcell silhouette contour (SVSC), from the SVV corresponding to one silhouette edge to the SVV corresponding to the other silhouette edge, generating SEMV wedges.
 The sweep operation to generate SEMV wedges is conducted only at inside corners of the silhouette contour. Conducting this sweep at outside corner silhouette vertices would generate superfluous SEMV wedges that intersect the adjacent SVME wedges only at the silhouette vertex and therefore do not contribute to the continuous umbral event surface of the supported silhouette contour.
 As previously described, SEMV wedges may arise at an "inside corner" of the silhouette contour formed by connected silhouette edges of a single mesh, called a simple contour. More generally, SEMV wedges may be incident on any "inside corner" (non-convex) edge of a polyhedral aggregate umbra (PAU) surface. Such "inside corner" features can also be formed where a wedge from one silhouette contour intersects another silhouette contour (belonging to the same mesh or a different mesh). The intersection of a wedge from one contour with a non-adjacent silhouette edge is called a composite or compound silhouette vertex (CSV). In the study of smooth manifolds such an intersection is called a t-junction. At a t-junction, the wedge of one silhouette edge intersects a non-adjacent silhouette edge (from the same or a different contour), generally in such a way that the intersecting SVME wedges do not intersect each other at the t-junction to form a continuous event surface. The resulting degenerate point of intersection of the two SVME wedges at a first-order silhouette edge represents a CSV.
 At such CSVs the present method employs the same sweep operation previously described, anchored now at the CSVs, to generate the set of SEMV wedges that connect the otherwise disjoint SVME wedges into a continuous, conservative umbral event surface. As will be discussed in detail in another part of this specification, in general the exact umbral event surface is a higher-order surface (e.g., a quadric). The present invention includes a method of conducting the previously described sweep operation on CSVs in such a way that the constructed wedges conservatively approximate the actual higher-order surfaces incident on the CSV.
 According to some embodiments, the firstorder model of visibility propagation employs a new geometric construct which is referred to as the supporting hull.
 According to some embodiments, the supporting hull between a polyhedral viewcell and a polyhedral mesh object is a polyhedral volume that contains all of the possible sight lines between the viewcell and the mesh object. The supporting hull is a polyhedron bounded by the supporting polygons between the viewcell and the mesh object. If the viewcell and the mesh object are both convex, then the supporting hull is identical to the convex hull and it can be constructed using familiar gift wrapping algorithms (O'Rourke, Computational Geometry in C Second edition Cambridge University Press 1998). In some embodiments, if the viewcell is convex but the mesh object is not necessarily convex, then the supporting polygons can be formed using the following algorithm.
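As a concrete planar analogy to the gift-wrapping construction cited above, the following sketch implements classic 2D gift wrapping (the Jarvis march). The 2D simplification and all names are choices of this illustration, not the 3D supporting-hull procedure itself; the "pivot" to the most extreme remaining point is the planar analogue of pivoting a supporting plane about an edge.

```python
def gift_wrap_2d(points):
    """Return the convex hull of 2D points, in counter-clockwise order,
    by repeatedly 'pivoting' from the current hull point to the most
    clockwise-extreme remaining point (Jarvis march / gift wrapping)."""
    def turn(o, a, b):
        # > 0: b is counter-clockwise of a about o; < 0: clockwise; 0: collinear
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    hull, current = [], pts[0]   # lowest-leftmost point is certainly on the hull
    while True:
        hull.append(current)
        candidate = pts[0] if pts[0] != current else pts[1]
        for p in pts:
            if p == current:
                continue
            t = turn(current, candidate, p)
            # take p if it is clockwise of the current candidate, or
            # collinear but farther away (skips interior collinear points)
            if t < 0 or (t == 0 and dist2(current, p) > dist2(current, candidate)):
                candidate = p
        current = candidate
        if current == pts[0]:
            break
    return hull
```

A square with an interior point yields just the four corners, illustrating how wrapping discards points inside the hull.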
 Identify each firstorder, fromregion silhouette edge of the mesh object as those edges which have one component triangle that is backfacing for all vertices of the viewcell and the other component triangle that is frontfacing for at least one vertex of the viewcell, and for which the component triangles are backfacing with respect to each other. For each of these firstorder silhouette edges, construct the supporting polygon incident on the edge by pivoting from the edge, in the direction of the normal of the backfacing component triangle, to the vertex of the viewcell which forms the smallest pivot angle. This vertex, called the supporting viewcell vertex or SVV, together with the endpoints of the firstorder silhouette edge, form the supporting polygon (generally a triangle) incident on the silhouette edge. This type of supporting polygon is called a SVME (source vertex—mesh edge) supporting polygon.
 If this viewcell vertex happens to be the endpoint of a viewcell edge that is parallel to the mesh object silhouette edge, then the pivot will encounter two viewcell vertices forming the same angle. In this case, the supporting polygon is a quadrangle formed by the viewcell edge and the mesh object silhouette edge (i.e., an SEME supporting polygon). All of the supporting polygons which contain an edge of the mesh object and a vertex of the viewcell are formed by pivoting to the supporting viewcell element.
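The pivot, including detection of the parallel-edge degenerate case that yields an SEME quadrangle, might be sketched as follows. The tolerance value, the helper names, and the return convention (a tag plus a vertex list) are assumptions of this illustration.

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def norm(a): return math.sqrt(dot(a, a))

def supporting_polygon(a, b, opp, viewcell_verts, eps=1e-9):
    """Pivot from silhouette edge (a, b), with back-facing triangle
    (a, b, opp), to the viewcell. Returns ('SVME', [a, b, svv]) for the
    usual supporting triangle, or ('SEME', [a, b, v1, v2]) when two
    viewcell vertices tie for the pivot angle (a viewcell edge parallel
    to the mesh edge), giving a supporting quadrangle."""
    e = sub(b, a)
    n_back = cross(e, sub(opp, a))
    def cosine(v):
        n_v = cross(e, sub(v, a))
        return dot(n_back, n_v) / (norm(n_back) * norm(n_v))
    cosines = [cosine(v) for v in viewcell_verts]
    lo = min(cosines)
    tied = [viewcell_verts[i] for i, c in enumerate(cosines) if c - lo <= eps]
    if len(tied) >= 2:
        return ('SEME', [a, b, tied[0], tied[1]])
    return ('SVME', [a, b, tied[0]])
```

When a viewcell edge is parallel to the mesh silhouette edge, both of its endpoints produce the same pivot cosine, and the quadrangular SEME case is reported.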
 If adjacent mesh object silhouette edges produce supporting polygons which pivot to the same viewcell vertex, then the supporting polygons intersect at the common edge formed by this vertex and the shared mesh object silhouette vertex. In this case, the supporting hull at the mesh object silhouette vertex is completely defined by these two supporting polygons. Adjacent mesh object silhouette edges may also produce supporting polygons which pivot to different vertices of the viewcell. In this case the two supporting polygons do not form a continuous surface at the mesh silhouette vertex. To close the supporting hull surface at this vertex, one or more supporting polygons are constructed between the mesh silhouette vertex and specific edges of the viewcell. This construction proceeds by the previously described "sweep" operation: sweeping along the chain of viewcell silhouette edges between the two viewcell silhouette vertices to which the adjacent mesh silhouette edges have pivoted. During this sweep, a supporting polygon is formed from each of these viewcell silhouette edges and the mesh silhouette vertex; that is, the viewcell silhouette edge chain is "swept" such that a swept polygon is generated for each viewcell silhouette edge. In general, the sweep between two viewcell vertices can take more than one path, but only one path will sweep out a set of polygons which connect the two original supporting polygons to form a continuous surface with a consistent face orientation. This path is the supporting viewcell silhouette contour (SVSC).
 This algorithm produces a continuous polygonal surface which envelopes or supports both the mesh object and the viewcell. In some embodiments, if both the viewcell and the mesh object are convex the supporting polygons constructed by this algorithm intersect only at their edges and form the convex hull of the viewcell and the mesh object.
 If the viewcell is non-convex, then the from-point silhouette contour of the viewcell, as seen from an inside corner vertex of a manifold mesh first-order silhouette, may be a complex contour containing cusps and t-vertices. If the mesh object is also non-convex, then the supporting polygons may intersect in their interiors.
 However, if the viewcell is restricted to be a convex polyhedron, then the from-point silhouette contour of the viewcell (viewed from an inside corner mesh silhouette vertex) is always a simple contour, without cusps or t-vertices. Consequently, according to some embodiments, the sweep operation is substantially simplified by restricting the viewcells to be convex polyhedra.
 A first-order wedge incident on a first-order mesh silhouette edge is the extension of the corresponding supporting polygon which is formed between the same mesh silhouette edge and a supporting viewcell vertex (SVV). This type of wedge is constructed from the mesh silhouette edge (i.e., a line segment) and the two extended lines of the supporting polygon that intersect the mesh silhouette edge. Consequently, the wedge, as initially constructed, extends semi-infinitely away from the viewcell, until it intersects a mesh polygon. Because this type of wedge is formed from the extension of an SVME supporting polygon, it is called an SVME wedge.
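The extension just described can be sketched as follows. The finite `extent` parameter is a practical stand-in for the semi-infinite extension (in practice the wedge is clipped where it intersects mesh polygons), and the helper names and quadrilateral return convention are assumptions of this example.

```python
import math

def svme_wedge(edge_a, edge_b, svv, extent=1000.0):
    """Build an SVME wedge as a quadrilateral: the silhouette edge plus a
    far edge obtained by extending the supporting triangle's two boundary
    lines from the SVV through the edge endpoints, away from the viewcell.
    'extent' is the finite distance used in place of a semi-infinite wedge."""
    def extend(v):
        d = tuple(v[i] - svv[i] for i in range(3))
        n = math.sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2])
        return tuple(v[i] + extent * d[i] / n for i in range(3))
    return [edge_a, edge_b, extend(edge_b), extend(edge_a)]
```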
 A firstorder wedge incident on a firstorder mesh object silhouette inside corner vertex is the extension of the swept triangle (i.e., the SEMV supporting polygon formed between the mesh silhouette vertex and an edge of the viewcell silhouette contour). This type of wedge is constructed from the mesh silhouette vertex and the two lines of the supporting polygon that intersect this vertex. These two lines are extended semiinfinitely away from the viewcell to form boundaries of the SEMV wedge. Consequently, the wedge tends to extend semiinfinitely away from the viewcell, until it intersects a mesh polygon. Since this type of wedge is formed from a source (i.e., viewcell) edge and a mesh vertex, it is called a SEMV wedge.
 SEMV supporting polygons that are incident on an outside corner vertex of a mesh silhouette contour are actual bounding polygons of the supporting hull between a convex viewcell and the mesh silhouette. However, the extension of such supporting polygons would produce SEMV wedges that intersect the first-order umbral event surface tangentially, only at the point of the outside corner silhouette vertex. Consequently, such wedges would not contribute to the first-order umbral event surface/volume and need not be constructed.
 A special case occurs in which the first-order mesh silhouette edge pivots to (i.e., is supported by) an SVV which is a vertex of a viewcell edge that is parallel to the mesh silhouette edge. In this case the supporting polygon between the mesh silhouette edge and the viewcell edge is quadrangular. Such a supporting polygon and its corresponding umbral event wedge are called an SEME supporting polygon and an SEME wedge, respectively. Embodiments include a method of explicitly identifying SEME wedges. Identifying SEME wedges is useful because, unlike for the other types of wedges, finding on-wedge visible intersections for SEME wedges is itself a from-region (from-segment) visibility problem. The SEME on-wedge visibility solution is somewhat more complex than the from-point, on-wedge visibility solutions used for SVME and SEMV wedges.
 The preceding description of the supporting hull between a mesh object and a viewcell assumed that the supported first-order silhouette contours of the mesh object are simple contours, in which each contour is a polyline. In fact, any first-order from-region silhouette contour may actually be a compound contour, in which the entire contour is formed by intersecting contours. The contours intersect where a wedge from one contour intersects another contour (i.e., a first-order silhouette edge). This intersection occurs at a compound silhouette vertex (CSV). When the higher-order interaction of edge triples is considered, these CSVs in general correspond to quadric surfaces. The present method of pivot-and-sweep construction based on the first-order visibility model effectively treats the CSVs as simple inside corner silhouette vertices, constructing one or more SEMV wedges on each CSV and thereby creating a continuous polygonal umbral event surface which conservatively approximates the exact quadric surfaces supported by the first-order silhouette edges.
 By using both SVME (and SEME in the special case) and SEMV supporting polygons/umbral wedges, embodiments including the present method provide a more precise approximation to the actual fromviewcell umbral event surfaces than the linearized antipenumbra method of Teller, which computes a convex hull of SVME planes, which thereby significantly underestimates the occlusion.
 Unlike the linearized antipenumbra methods, the pivot and sweep method is not limited to the more restricted problem of visibility through a portal sequence.
 In some embodiments, to construct a from-region umbral discontinuity mesh or from-region visibility map, the visible intersections of the first-order wedges and the mesh polygons are determined. The visible intersections of mesh triangles with a wedge are polylines on the wedge. The identification of the visible intersections of a wedge with mesh triangles is called the "on-wedge" visibility problem. Embodiments include a method of 1-manifold (polyline) traversal in 2D (i.e., on the wedge) in which the construction of visibility event lines (i.e., 1-degree-of-freedom event surfaces) is interleaved with 1-manifold traversal and interference checks to produce an output-sensitive solution to on-wedge visibility.
 This manifold traversal method is extended to a method of traversing 2manifolds (i.e., the triangle meshes) in 3D to construct fromviewcell visibility maps that include the mesh polygon fragments that are visible from the viewcell. The PVS is derived from the visibility map. This 3D mesh traversal method calls the aforementioned 2D (1manifold) mesh traversal process to solve onwedge visibility.
 The volume of space occluded by a mesh object from a viewcell, assuming the firstorder model of visibility propagation, is called the firstorder polyhedral umbra volume. Since individual umbral volumes may intersect to aggregate the occlusion, these volumes are referred to as the firstorder polyhedral aggregate umbra (PAU).
 Firstorder PAU, also referred to as PAU, are bounded by polygons called umbra boundary polygons or UBP. These polygons are formed by the intersection of the firstorder wedges with triangle mesh polygons and with other firstorder wedges. The PAU are also bounded by the firstorder visible mesh polygon fragments (i.e., the fragments comprising the fromviewcell visibility map). Together the UBPs and the visible mesh polygon fragment form continuous, though not necessarily closed, umbral surfaces that define the boundaries of the PAU.
 As described in detail in conjunction with the 3D 2-manifold traversal method (FIG. 20 and related figures), the construction of the visibility map, according to some embodiments, involves a step in which it is determined whether a point on an on-wedge visible polyline segment is actually within a PAU volume, and therefore occluded from the entire viewcell. The method includes a modified point-in-polyhedron test which can answer this query for first-order PAU without explicitly constructing the PAU. The on-wedge visibility method uses a 1-manifold (polyline) traversal method in 2D (FIG. 15 and related figures) that is a simpler implementation of the 2-manifold traversal method in 3D used to construct the from-viewcell visibility map.
 Embodiments accommodate three different representations of from-viewcell visibility. In Table II, features of these three representations are presented and compared with the prior-art method of representing from-region visibility using the complete discontinuity mesh.
 In one representation of conservative linearized from-viewcell visibility, using Polyhedral Aggregate Umbrae (PAU), the actual from-viewcell occluded volumes of space are identified. These volumes are bounded by umbra boundary polygons (UBPs) which are formed from the from-viewcell-element umbral wedges. The wedges are effectively intersected with the mesh polygons and with each other to determine the UBPs. This representation is comparable to shadow volume representations, although most shadow volume methods compute from-point shadows.
 In another representation of conservative linearized fromviewcell visibility, the Conservative Linearized Umbral Discontinuity Mesh (CLUDM), the fromviewcellelement umbral wedges are not intersected with each other, but only with the mesh polygons, to form a conservative discontinuity mesh in which the regions of the mesh correspond to completely visible regions, umbral regions or antumbral regions. The antumbral regions are actually a type of penumbral region from which the viewcell is partially visible. Additional tests are utilized to differentiate between umbral and antumbral regions (e.g., to determine the fromviewcell PVS).
 In a third representation of conservative linearized from-viewcell visibility, according to some embodiments, the Conservative Linearized Umbral Visibility Map (CLUVM), only completely visible regions and umbral regions are represented. This is a particularly useful representation since, in this case, the PVS corresponds to the completely visible regions. The construction of the CLUVM proceeds by determining whether each potential occlusion boundary, formed by the visible intersection of a from-viewcell-element (i.e., point or edge) umbral wedge, is actually a from-viewcell umbral boundary. Details of this determination, together with an output-sensitive method of constructing a CLUVM, are presented elsewhere in the specification.
 These three representations of a conservative fromviewcell visibility are compared with the priorart method of complete discontinuity meshing. In a complete discontinuity mesh the vast majority of boundaries contain penumbral regions, which are regions from which the viewcell is partially visible. Generally, a much smaller number of regions are actual umbral regions from which no part of the viewcell is visible. Both the penumbral regions and the umbral regions of the complete discontinuity mesh may be bounded by line segments and/or quadratic curves. The use of only the linear components, as proposed in the priorart method of incomplete discontinuity meshing, results in discontinuous umbral boundaries and therefore cannot be used to determine fromregion visibility.
 For a number of reasons, disclosed elsewhere in this specification, the conservative linearized umbral event surfaces (CLUES) are much less numerous than the exact event surfaces employed by the prior-art method of complete discontinuity meshing. Consequently, the complexity of the arrangement of the CLUDM is much lower than the complexity of the complete discontinuity mesh. In fact, using an output-sensitive construction method of the present invention, the complexity (of both construction and storage) is generally determined only by the number of visible silhouette edges, as indicated by the N_v^4 entry for the CLUVM in Table II.
 Estimates of these complexities are given in Table II, and discussed in detail elsewhere in the specification.

 TABLE II. Comparison of three methods of representing conservative linearized from-viewcell visibility with classic complete discontinuity meshing.

 Polyhedral Aggregate Umbrae (PAU):
  Event surfaces: umbra boundary polygons (UBP)
  Region boundaries: UBPs (polygons)
  Regions: PAU (occluded polyhedral regions)
  Approximate complexity of regions: N^4* (N_v^4 using 3D mesh traversal)

 Conservative Linearized Umbral Visibility Map (CLUVM):
  Event surfaces: from-viewcell-element umbral wedges (polygonal)
  Region boundaries: occlusion boundaries (polylines)
  Regions: occlusion regions (polygons)
  Approximate complexity of regions: N_v^4* (using 3D mesh traversal)

 Conservative Linearized Umbral Discontinuity Mesh (CLUDM):
  Event surfaces: from-viewcell-element umbral wedges (polygonal)
  Region boundaries: occlusion and antumbral boundaries (polylines)
  Regions: occlusion and antumbral regions (polygons)
  Approximate complexity of regions: N^4* (N_v^4 using 3D mesh traversal)

 Complete Discontinuity Mesh:
  Event surfaces: umbral and penumbral wedges (polygonal and quadric)
  Region boundaries: DM boundaries (polylines and quadratics)
  Regions: DM regions (planar regions with polygonal and quadratic boundaries)
  Approximate complexity of regions: N^8

 *Assumes that the number of first-order silhouette edges is O((number of edges)^1/2).

 According to some embodiments, the first-order visibility model assumes that for any supporting polygon between the viewcell and the first-order manifold mesh silhouette, the edge of the supporting polygon corresponding to the first-order silhouette edge is completely visible (unoccluded) from the vertex of the supporting polygon corresponding to the supporting viewcell vertex (SVV). That is, for an SVME wedge, the corresponding supporting triangle is assumed to intersect no other polygons which would occlude any part of the corresponding mesh silhouette edge when viewed from the corresponding SVV. Likewise, for an SEMV wedge, the corresponding swept triangle is assumed to intersect no other polygons which would occlude any part of the corresponding viewcell silhouette contour edge when viewed from the corresponding inside corner mesh first-order silhouette vertex.
 In actuality, the supporting polygon corresponding to a wedge may be completely occluded, completely unoccluded, or partially occluded. If the supporting polygon is completely unoccluded, then the corresponding firstorder wedge is the exact visibility event boundary supported by the mesh edge or vertex. If the supporting polygon is completely occluded, then no part of the corresponding wedge is incident on the exact visibility event boundary, but the entire wedge remains a conservative approximation to this boundary. If the supporting polygon is partially occluded, then portions of the wedge corresponding to unoccluded segments of the supporting polygon are the exact visibility event boundary, while the portions of the wedge corresponding to occluded segments of the supporting polygon are conservative approximations to the exact boundary.
 The following section summarizes a method using backprojection to adaptively refine firstorder wedges to account for higherorder visibility interactions that exist when supporting polygons are completely or partially occluded. Backprojection is the process of determining the portions of a source (i.e., the viewcell) visible from a particular mesh element (i.e., a firstorder silhouette edge). According to some embodiments, to compute the backprojection, the firstorder visibility model and methods are employed using silhouette edges as lineal light sources.
 The methods described thus far have employed a simplified firstorder model of visibility propagation which results in linearized visibility event surfaces. These firstorder surfaces are bounded by firstorder wedges, which are generated by the pivot and sweep method.
 These first-order wedges are of two types: SVME wedges and SEMV wedges. The SVME wedges, generated by pivoting from a mesh edge to a viewcell vertex, reflect a restriction of visibility that results from the combination of containment of the viewpoint to a point on the viewcell and the occlusion at the silhouette edge of the mesh. The SEMV wedges, generated by sweeping from a point on the mesh through an edge of the viewcell, reflect a restriction of visibility that results from containment of the viewpoint by an edge (i.e., a boundary) of the viewcell. Under the first-order visibility model, SVME (or SEME in the special case) and SEMV wedges are the only types of visibility event surfaces that arise in polyhedral environments.
 Both types of firstorder wedges can be constructed by extending the corresponding supporting polygons between the mesh and the viewcell. An important assumption of the first order visibility model is that any firstorder mesh silhouette edge is either completely visible from the viewcell or completely occluded. This is the same as saying that for any firstorder silhouette edge, the viewcell is assumed to be either completely occluded from the edge or completely visible.
 Likewise, the first-order model assumes that the supported silhouette edge or vertex is either completely occluded or completely unoccluded when viewed from the corresponding supporting viewcell vertex or edge.
 According to some embodiments, using the first-order pivot and sweep method, for example, if a first-order silhouette edge segment is not occluded, then the supporting triangle between the segment and the corresponding SVV is assumed to be completely unoccluded (i.e., not intersected by any other mesh polygons). If, in fact, this supporting triangle is completely unoccluded, then the first-order model is exact and the corresponding SV-ME wedge is an exact component of the from-viewcell umbral event boundary supported by the mesh silhouette edge. If, however, this supporting triangle is partly or completely occluded, then the first-order model is an approximation and the actual visibility event surface incident on the silhouette edge may be composed of intersecting quadric and planar surfaces. Moreover, the first-order silhouette edge (or segments of it) may not even support actual visibility event surfaces. Instead, the actual visibility event surfaces may arise from other edges, called higher-order silhouette edges, such that all or parts of a first-order silhouette edge are actually inside the visibility event (i.e., umbra) boundary and therefore occluded.
 Embodiments include a method of identifying silhouette edges and vertices for which the first-order assumption is inexact by conducting a sweep of the corresponding supporting triangles to identify occluding elements which induce higher-order visibility event surfaces. These higher-order visibility event surfaces are approximated by computing a backprojection which identifies the portions of the viewcell actually visible from the silhouette edge or silhouette vertex. This backprojection is itself a from-region visibility problem that is solved using the first-order pivot and sweep method. Using this method, conservative first-order wedges can be adaptively refined to approximate the corresponding exact higher-order visibility event surfaces to within a desired error tolerance.
 In some embodiments, the higher-order method is implemented as a technique to test the exactness of first-order visibility event surfaces and to modify or “adjust” such surfaces to more precisely approximate the relevant higher-order visibility surfaces. First-order visibility event surfaces are incident on first-order from-region silhouette edges. First-order silhouette edges define a conservative silhouette contour of a mesh. Exact higher-order umbral visibility event surfaces are not necessarily incident on first-order silhouette edges and may also arise on other mesh edges, called higher-order silhouette edges. Higher-order visibility event surfaces, which are incident on these higher-order silhouette edges, may produce considerably more occlusion than the corresponding event surfaces incident on the first-order silhouette edges. In fact, the event surfaces emerging from higher-order silhouette edges will typically bound an occlusion volume which contains the corresponding first-order silhouette edge.
 Embodiments include a method of approximating higher-order visibility event surfaces by “adjusting” first-order visibility event surfaces in such a way that the adjusted event surfaces remain incident on the first-order silhouette edges. A later section introduces a method of identifying when constraining a higher-order visibility event surface to a first-order silhouette edge significantly decreases the precision of the calculated higher-order event surface. Further embodiments include a method of identifying the specific higher-order silhouette edges that support visibility event surfaces, which more precisely approximates the exact visibility event surface.
 The following is a description of where and how higher-order visibility event surfaces arise on polyhedral mesh objects. This framework provides the basis of a novel method of adaptively and progressively approximating these higher-order surfaces using polyhedral surfaces.
 To illustrate the concepts, we begin with the simpler case of a linear light source instead of an area light source. Envision a single linear light source comprising a line segment and a single convex polyhedron. Because the polyhedron is convex, there is no self-occlusion and there are no inside corners. Consequently, the umbra of the polyhedron is formed exactly using the first-order pivot and sweep algorithm previously described. In this case, each first-order silhouette edge of the mesh supports a single SV-ME wedge formed by pivoting to the corresponding supporting source vertex (SVV) of the source, which in this case is a line segment.
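The pivot step described above can be sketched in a few lines of vector code. The following is an illustrative reconstruction, not the patent's implementation: the function name, the helper names, and the use of a reference point known to lie on the occluded side of the mesh (to orient candidate planes) are all assumptions of this sketch. It selects the source vertex whose plane through the edge is a supporting plane, i.e., every remaining source vertex lies on the same side of that plane as the occluder.

```python
def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def pivot_to_svv(edge, source_pts, occluder_pt):
    """Return the supporting source vertex (SVV) for a silhouette edge.

    A candidate plane contains the edge and one source vertex; it is a
    supporting plane (an umbral boundary candidate) when every other
    source vertex lies on the same side as `occluder_pt`, a hypothetical
    reference point on the occluded side of the mesh.
    """
    e0, e1 = edge
    d = sub(e1, e0)
    for v in source_pts:
        n = cross(d, sub(v, e0))                    # candidate plane normal
        side_occ = dot(n, sub(occluder_pt, e0))     # occluder side
        others = [dot(n, sub(w, e0)) for w in source_pts if w != v]
        if all(s * side_occ >= 0.0 for s in others):
            return v
    return None  # degenerate configuration (e.g., edge parallel to source)
```

For a linear source the loop degenerates to choosing between the two endpoints; for a convex polyhedral viewcell the same supporting-plane test yields the pivot target of an SV-ME wedge.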
 Now, imagine that for a particular first-order silhouette edge of the mesh, the first-order assumption is violated such that, from this silhouette edge, the corresponding SVV on the source line segment is not visible (i.e., is completely occluded). This occurs if the supporting triangle formed by the mesh silhouette edge and the SVV is intersected by other polygons such that no unobstructed sightlines exist between the SVV and the mesh silhouette edge. Occlusion of this shaft indicates that the first-order wedge is not the exact umbra boundary for the mesh silhouette edge, since the corresponding SVV is not even visible from the silhouette edge.
 A better approximation to the actual visibility event surface incident on the mesh silhouette edge could be obtained by identifying the point on the linear light source that is closest to the supporting viewcell vertex for the edge (i.e., the “pivot to” point) but which is actually visible from the mesh silhouette edge. This point is called the visible supporting viewcell vertex (VSVV) for the mesh silhouette edge. The VSVV is on the surface of the viewcell (i.e., on the line segment representing the viewcell/light source). It is the visible point from the mesh silhouette edge to which the SV-ME UBP would pivot. The corresponding SV-ME wedge is an umbral visibility event surface formed by the linear light source and the mesh silhouette edge.
 This higher-order SV-ME wedge clearly produces a larger umbra volume than the corresponding first-order SV-ME wedge, since the VSVV provides a less extreme “look” across the mesh silhouette edge and “behind” the mesh.
 According to some embodiments, this visible supporting viewcell vertex (VSVV) for a mesh silhouette edge is computed by treating the mesh silhouette edge itself as a linear light source. In this approach, the pivot and sweep method is used to construct a visibility map on the surface of the viewcell using a specific mesh silhouette edge as a light source. In the backprojection process, first-order silhouette edges are identified on intervening mesh polygons between the mesh silhouette edge and the viewcell. First-order wedges are constructed on these silhouette edges in the direction of the viewcell. These event surfaces induce a visibility map on the viewcell which partitions it into components that are visible from the mesh silhouette edge and components that are not. The vertex of the visible component of the viewcell to which the SV-ME wedge incident on the original mesh silhouette edge (now being used as a backprojection light source) would pivot is the VSVV corresponding to the mesh silhouette edge.
 Assume that the linear light source is positioned so that it looks “over the top” of the mesh object at the mesh silhouette edge in question. Assume also that in this particular case the visibility of the line segment light source from the mesh silhouette edge is affected by a single intervening triangle which occludes the supporting triangle (i.e., the 2D shaft between the supporting viewcell vertex and the mesh silhouette edge). Further, assume that a single edge of this intervening triangle spans the entire tetrahedral shaft formed by the line segment light source and the mesh silhouette edge in such a way that the intervening triangle “hangs down” into the tetrahedral shaft. Also, assume the light source edge, the edge of the intervening triangle, and the mesh silhouette edge are mutually skew. This single intervening edge affects the mutual visibility of the other two edges at various points on the source and silhouette edge.
 The conjunction of the three skew edges in this way indicates that the actual visibility event surface incident on the mesh silhouette edge includes a quadric surface. This is a classic EEE event (Teller 1992). Nevertheless, the backprojection pivot and sweep algorithm applied in this case will still identify a single conservative VSVV on the light source. Pivoting from the mesh silhouette edge to this VSVV defines a single SV-ME wedge incident on the silhouette edge that conservatively approximates the actual quadric surface incident on the silhouette edge. Moreover, the actual higher-order (quadric) visibility event surfaces incident on the mesh silhouette edge can be more precisely approximated by subdividing the mesh silhouette edge and computing a VSVV for each of the subsegments. During this subdivision process, adjacent silhouette segments may produce different VSVVs during backprojection. The corresponding SV-ME wedges do not share a common edge but are connected by a SE-MV wedge formed by sweeping from the shared vertex of the adjacent silhouette segments through the linear light source from one VSVV to the other VSVV. In this way, a quadric visibility event surface is conservatively approximated by an alternating sequence of SV-ME and SE-MV wedges.
 In some cases the pivot-and-sweep process using a mesh silhouette edge as a light source will not produce a single VSVV on the viewcell. For example, if an inside corner of a silhouette contour is encountered during the backprojection, either in a single continuous contour or as a CSV, then the resulting visible “extremal” feature on the viewcell may not be a point but a line segment parallel to the mesh silhouette edge being used as a light source. This occurs when a backprojection SE-MV wedge is generated by a sweep anchored at the inside corner through the mesh silhouette edge (as light source). The resulting SE-MV wedge is parallel to the mesh silhouette edge (as light source). This wedge intersects the viewcell such that the intersection is a supporting feature (i.e., both endpoints of the wedge intersection are VSVVs). This case is analogous to the previously described case in the simple forward first-order pivot-and-sweep in which a pivot operation results in a supporting viewcell edge (SE-ME wedge) (e.g., the first-order silhouette edge is parallel to an extremal edge of the viewcell). This higher-order forward SE-ME wedge construction is managed similarly in both cases.
 The details of higher-order visibility event surface construction using the backprojection process for the general case of a polyhedral light source are disclosed in the detailed description portion of the specification. In general, the backprojection applies the first-order pivot and sweep method, using the mesh silhouette edges or subsegments of these edges as linear light sources, to identify VSVVs. These VSVVs are in general connected by visible supporting viewcell contours (VSVSCs). Intervening higher-order SE-MV wedges are constructed by a sweep process on the VSVSCs. Further embodiments include methods to construct higher-order SE-MV wedges in the cases where the VSVSCs corresponding to adjacent silhouette edges are disconnected.
 According to some embodiments, this backprojection method is used to compute a single higher-order SV-ME wedge for a mesh first-order silhouette edge that conservatively approximates a very complex visibility event surface incident on the mesh silhouette edge, which may include the intersection of multiple quadric and planar surfaces. In such cases, a mesh silhouette edge may be subdivided, and the backprojection applied to subsegments to more accurately approximate an actual event surface that varies substantially across a single edge. This subdivision can be performed adaptively based on simple tests, which indicate the maximum possible deviation of the linearized event surface from the actual visibility event surface along a particular segment. This method requires less computation than methods such as Teller (1992) and Nirenstein (2005) that first compute the entire set of event surfaces incident on a silhouette edge and then determine which ones are the actual umbra boundary surfaces by using some type of containment test or higher-dimensional CSG.
 As previously encountered for the first-order visibility map construction, in some cases the SV-ME wedges for adjacent silhouette edges or segments are disjoint and must be connected by SE-MV wedges generated by sweeping from the shared vertex of the edges through the boundary silhouette contour of the viewcell such that the sweep connects the two VSVVs for the connected mesh silhouette edges.
 In the first-order case, the two SVVs corresponding to adjacent silhouette edges always lie on the actual boundary of the viewcell and are connected by a single boundary silhouette contour of the viewcell. In the higher-order backprojection case, the two VSVVs may or may not lie on the same contour. If the two portions of the viewcell visible from the adjacent edges are disjoint, then the VSVVs are not connected by a single contour. In this case, the convex hull of the two contours can be used to conservatively connect the two higher-order wedges, and the higher-order SE-MV wedges can be conservatively generated from this connected contour.
 According to some embodiments, the backprojection method is applied to a mesh silhouette edge only if the corresponding supporting viewcell vertex (SVV) is occluded from the mesh silhouette edge, as indicated by an occlusion of the 2D shaft between these two structures. This occlusion of the 2D shaft for SV-ME wedges is a from-point visibility problem that can be computed using the previously described 2D version of the mesh traversal algorithm. Any segments of the silhouette edge for which the SVV is visible do not require application of the backprojection method since, for these segments, the first-order wedge is the exact visibility event surface.
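The patent computes the occlusion of this 2D shaft with its 2D mesh traversal. As a simpler stand-in, the test can be sketched as a segment-versus-supporting-triangle query, since the shaft between the SVV and the silhouette edge is exactly the supporting triangle. All names below are hypothetical, and a real implementation would test intervening polygons rather than lone segments:

```python
def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def segment_crosses_shaft(p, q, shaft_tri, eps=1e-9):
    """True if segment pq pierces the supporting triangle (the 2D shaft
    between an SVV and a silhouette edge), signalling that backprojection
    is needed for that edge."""
    a, b, c = shaft_tri
    n = cross(sub(b, a), sub(c, a))
    dp, dq = dot(n, sub(p, a)), dot(n, sub(q, a))
    if dp * dq > 0.0 or abs(dp - dq) < eps:
        return False  # same side of the plane, or parallel to it
    t = dp / (dp - dq)                        # plane-crossing parameter
    x = tuple(pi + t * (qi - pi) for pi, qi in zip(p, q))
    # point-in-triangle: x must be on the inner side of all three edges
    for u, v in ((a, b), (b, c), (c, a)):
        if dot(n, cross(sub(v, u), sub(x, u))) < -eps:
            return False
    return True
```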
 Further, according to some embodiments, subdivision and recursive backprojection for a silhouette segment from which the SVV or VSVV is occluded is guided by a simple test that measures the maximum possible deviation between the currently computed wedge and the actual visibility event surface incident on the segment. This test is performed by pivoting from the silhouette segment to the viewcell in the direction opposite to that normally used to find the SVV. Pivoting in this direction identifies a separating plane between the silhouette edge and the viewcell. This separating plane corresponds to the maximal possible extent of a higher-order visibility surface incident on the silhouette edge segment. It also corresponds to the extremal penumbra boundary between the segment and the viewcell. In some embodiments, a higher-order occlusion surface would only approach this plane when nearly the entire viewcell is occluded from the corresponding silhouette segment. The angle between this penumbra plane and the current conservative SV-ME wedge for the segment indicates the maximum possible deviation of the current conservative event surface from the actual event surface at this silhouette edge. These two planes, intersecting at the silhouette edge in question, form a wedge supported over the length of the segment. The volume of this wedge reflects the maximum possible deviation of the current conservative occluded volume from the actual occluded volume over the silhouette edge.
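Under the usual supporting/separating-plane convention (supporting plane: viewcell and occluder on the same side; separating plane: opposite sides), this deviation metric reduces to the dihedral angle between the two candidate planes through the edge. The sketch below is an assumed reconstruction with its own naming, not the patent's code, and it does not handle degenerate viewcells where either plane is missing:

```python
import math

def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def umbra_penumbra_angle(edge, cell_pts, occluder_pt):
    """Maximum-deviation metric for a silhouette edge segment: the angle
    (radians) between the supporting (umbral) plane and the separating
    (penumbral) plane through the edge."""
    e0, e1 = edge
    d = sub(e1, e0)
    supporting = separating = None
    for v in cell_pts:
        n = cross(d, sub(v, e0))
        side_occ = dot(n, sub(occluder_pt, e0))
        others = [dot(n, sub(w, e0)) for w in cell_pts if w != v]
        if all(s * side_occ >= 0.0 for s in others):
            supporting = n    # pivot toward the viewcell: SVV plane
        if all(s * side_occ <= 0.0 for s in others):
            separating = n    # pivot the opposite way: penumbra plane
    def unit(v):
        m = math.sqrt(dot(v, v))
        return (v[0] / m, v[1] / m, v[2] / m)
    c = dot(unit(supporting), unit(separating))
    return math.acos(max(-1.0, min(1.0, c)))
```

A small angle means the current conservative wedge cannot be far from the exact event surface; a large angle flags the segment as a candidate for backprojection and subdivision.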
 It should be noted that this deviation decreases as a function of distance from the viewcell. This reflects the fact that, at greater distances, from-region visibility event surfaces approach from-point visibility event surfaces. Consequently, higher-order visibility effects are less important at greater distances from the viewcell. In some embodiments, silhouette edges are adaptively subdivided depending on the visibility of the corresponding SVV and the value of this umbra/penumbra metric. Using this approach, according to some embodiments, higher-order visibility event surfaces are generated only where they significantly enlarge the occluded volume compared to the simpler first-order event boundaries.
 The preceding discussion assumed that the backprojection process is used to refine the wedges that are incident on a first-order silhouette edge of the mesh. In fact, applying the backprojection process to first-order silhouette edges can produce SV-ME wedges which violate local visibility when the triangle formed by the corresponding VSVV and the silhouette edge lies on the backfacing side of both triangles that share the silhouette edge. In some embodiments, such an SV-ME wedge is still a conservative representation of the actual visibility event surface incident on the first-order mesh silhouette edge. However, such a violation of local visibility indicates that the corresponding first-order mesh silhouette edge is not actually a from-viewcell silhouette edge. Instead, it is on the occluded side of another visibility event surface that arises from the actual from-viewcell silhouette edge, which is closer to the viewcell than the first-order silhouette edge. This type of from-viewcell silhouette edge is called a higher-order mesh silhouette edge.
 A general from-region silhouette edge may or may not support a higher-order visibility event surface. As defined by Drettakis (1994) and Nirenstein (2005), a general from-region silhouette edge is any mesh edge that is a from-point silhouette edge for any point on the viewcell. This generally includes many more edges of the mesh polygons than the first-order silhouette edges.
 General from-region mesh silhouette edges may or may not give rise to from-viewcell umbral visibility event surfaces, depending upon the exact arrangement of intervening geometry between the general from-region silhouette edge and the viewcell. General from-region mesh silhouette edges can be identified using criteria that are slightly different from those used to identify first-order mesh silhouette edges. According to some embodiments, an edge is a general from-viewcell silhouette edge if it meets three criteria: 1) it must have at least one component triangle that is front-facing for at least one vertex of the viewcell, 2) it must have at least one component triangle that is backfacing for at least one vertex of the viewcell, and 3) the component triangles must be mutually backfacing.
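The three criteria can be written directly as plane-side tests. The sketch below is illustrative only; the function names and the use of the opposite triangle's centroid as the "mutually backfacing" probe are assumptions of this sketch, not the patent's formulation:

```python
def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def facing(tri, p):
    """Signed side of point p relative to tri's plane (>0: front-facing)."""
    n = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]))
    return dot(n, sub(p, tri[0]))

def centroid(tri):
    return tuple(sum(v[i] for v in tri) / 3.0 for i in range(3))

def is_general_from_region_silhouette(tri_a, tri_b, cell_pts):
    front_a = [facing(tri_a, v) > 0 for v in cell_pts]
    front_b = [facing(tri_b, v) > 0 for v in cell_pts]
    c1 = any(front_a) or any(front_b)            # 1) front-facing somewhere
    c2 = not all(front_a) or not all(front_b)    # 2) back-facing somewhere
    c3 = (facing(tri_a, centroid(tri_b)) <= 0 and
          facing(tri_b, centroid(tri_a)) <= 0)   # 3) mutually back-facing
    return c1 and c2 and c3
```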
 The previously described 3D mesh traversal algorithm may be modified to include umbral event surfaces that are incident on non-first-order, general from-viewcell silhouette edges. In one modification, the 3D mesh traversal initially proceeds in the usual way: each mesh edge is examined to determine if it is a first-order silhouette edge. Backprojection is performed, using the first-order mesh silhouette edge as a lineal light source, to compute the higher-order wedges incident on the first-order mesh silhouette edge by identifying the VSVVs and VSVSCs on the viewcell surface. If the corresponding higher-order SV-ME wedge violates local visibility, then a closer, general from-viewcell silhouette contour is identified by traversing the mesh away from the first-order edge until one or more general from-viewcell silhouette edges are encountered which comprise a silhouette contour that supports a higher-order visibility event surface (i.e., by backprojection) that occludes the original first-order mesh silhouette edges. This retraction can be repeated where the higher-order wedges also violate local visibility. This modification begins with a conservative result and refines it to a desired precision based on measurements of the maximum deviation of the current event surface from the actual event surface.
 The linearized backprojection method of the present invention provides a more precise approximation of higher-order visibility event surfaces than the linearized antipenumbra method of Teller (1992). Teller's antipenumbra method uses a pivoting strategy from a portal edge to a source portal which effectively identifies a VSVV on the source portal corresponding to the target portal edge. This point, together with the source portal edge, is used to define a plane which bounds the antipenumbra volume.
 These planes correspond to the planes of the SV-ME wedges/UBPs defined by the present embodiments. As previously indicated for the case of first-order visibility (e.g., between two portal sequences), Teller uses only SV-ME planes to approximate the visibility boundary, whereas the present invention uses both SV-ME and SE-MV polygons (e.g., the UBPs). The present embodiments' use of these polygonal wedges always produces a more precise approximation to the actual visibility event boundary than Teller's antipenumbra, which is based on intersecting planes. Moreover, the present method defines a systematic approach to linearized backprojection, including mesh traversal, silhouette edge identification, and adaptive subdivision, which can be applied to the general from-region visibility problem. In contrast, Teller's antipenumbra method depends on a simple pivoting strategy that can only be applied to the more limited problem of visibility through a portal sequence.
 Referring to FIG. 57 of PCT/US2011/051403, the figure illustrates an exemplary diagram showing the relationships, in one embodiment, between a visibility event encoder, a visibility event server, and a visibility event client.
 In some embodiments, a game database or other modeled environment, shown as data 5710, comprising geometry, texture, and other information, is processed using conservative linearized umbral event surfaces to produce delta-PVS data stored as Visibility Event Data 5730. This processing is shown in FIG. 57 as being performed by a Visibility Event Encoder 5720. In one embodiment, this processing/encoding is performed offline to generate the Visibility Event Data 5730, which is stored for later use. In some embodiments, the Visibility Event Encoder 5720 includes the processor 5600 and performs the processes illustrated in FIGS. 1, 3, 4A, 4C, 5A-5C, 6A, and 6B. In further embodiments, the Visibility Event Encoder employs the 3D mesh traversal process of FIG. 20A and related figures to generate the Visibility Event Data 5730.
 In some embodiments, the Visibility Event Data 5730 is delivered at runtime by a server unit labeled SERVER. In some embodiments, the server unit includes stored Visibility Event Data 5730, previously generated by the Visibility Event Encoder. The server unit may also implement a Visibility Event Decoder-Server process 5740. In some embodiments, this Visibility Event Server process may implement server elements of navigation-based prefetch to deliver the Visibility Event Data to a client unit, labeled CLIENT, through a network interconnect labeled 5790. In some embodiments, the Visibility Event Server may implement perception-based packet control methods discussed in conjunction with FIG. 48A, FIG. 49, FIG. 50A, FIG. 50B, and FIG. 51.
 In some embodiments, the Visibility Event Server 5740 is interfaced to a Game Engine-Server process 5750. A Game Engine-Server process is often used in existing multiplayer games, for example to receive the location of players in a multiplayer game and to deliver this data to client units. In contrast, the Visibility Event Server 5740 progressively delivers the geometry, texture, and other information that comprises the modeled environment as visibility event data which is, in some embodiments, prefetched based on a user's movements within the modeled environment.
 Visibility Event Data 5730 is delivered to a client unit labeled CLIENT, which in some embodiments includes a Visibility Event Decoder-Client process 5780. The Visibility Event Client process 5780 receives the Visibility Event Data 5730 and processes it into PVS information that can be rendered. In some embodiments, this rendering is performed by a Game Engine Client, labeled 5770.
 In some embodiments, the Decoder-Client process 5780 receives visibility event data that has been effectively compressed by the method of identifying and labeling silhouette contours and occlusion boundary regions having high effective dynamic occlusion. This effective compression is achieved, in some embodiments, by the contour identification and labeling process described in conjunction with the exemplary flowcharts of FIG. 33A, FIG. 33B, FIG. 33C, and FIG. 33D.
 In such embodiments, the Decoder-Client process 5780 can use the labeled contour information included in the delivered visibility event data to identify entire contours from a limited number of labeled first-order silhouette edges (see the exemplary flowcharts of FIG. 32A and FIG. 32B). Embodiments of the Decoder-Client process may also generate entire occlusion boundaries at runtime from labeled silhouette contour data (see FIG. 34A and FIG. 34B as well as FIG. 35A and FIG. 35B).
 Using this contour data generated from the labeled edge information, the Decoder-Client process 5780, in some embodiments, generates a PVS (e.g., one or more child PVSs from parent PVS data) or delta-PVS information at runtime by traversing to the contours from a labeled seed triangle for each connected component of the PVS or delta-PVS being generated (see the exemplary flowcharts of FIG. 36 and FIG. 37A).
 The Decoder-Client process 5780, in some embodiments, interfaces with the Game Engine Client 5770. In some embodiments, the PVS or delta-PVS data delivered to the Decoder-Client process, or generated in the aforementioned decompression subprocesses of the Decoder-Client process, is submitted for rendering depending on the location of a user's or other agent's viewpoint. This rendering may employ a standard graphics API such as Microsoft DirectX or OpenGL ES (employed by Sony Corporation's PlayStation 3). In some embodiments, these graphics APIs interface to graphics hardware through drivers.
 In some embodiments, the Decoder-Client process also acquires information indicating a user's or autonomous agent's location in the modeled environment. This viewpoint location information is transmitted, in some embodiments, to the Decoder-Server process using the bidirectional communication interconnect 5790.
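The server side of the navigation-based prefetch loop described above can be caricatured in a few lines. This is a toy sketch only: the class and method names are invented here, and real embodiments stream binary visibility event packets, apply perception-based packet control, and handle loss and reordering over the interconnect.

```python
class VisibilityEventServer:
    """Server side of navigation-based prefetch (hypothetical sketch).

    delta_pvs maps a viewcell transition (from_cell, to_cell) to its
    precomputed visibility event packet; adjacency lists the viewcells
    reachable from each viewcell in one transition.
    """
    def __init__(self, delta_pvs, adjacency):
        self.delta_pvs = delta_pvs
        self.adjacency = adjacency
        self.sent = set()          # packets already delivered to the client

    def on_viewpoint_update(self, current_cell):
        """Prefetch packets for every transition the user might make next."""
        packets = []
        for next_cell in self.adjacency.get(current_cell, []):
            key = (current_cell, next_cell)
            if key in self.delta_pvs and key not in self.sent:
                self.sent.add(key)
                packets.append(self.delta_pvs[key])
        return packets
```

A client decoder would apply each received delta-PVS packet to its current PVS before rendering.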
 As previously described, two prior-art methods make extensive use of from-region visibility event surfaces: shadow volume algorithms for area light sources, and discontinuity meshing algorithms.
 In shadow volume methods, the visibility event surfaces being constructed include umbral and penumbral event surfaces that intersect to form the boundaries of the corresponding shadow volume. In simple cases, the umbral event surfaces are polygons (herein called umbra boundary polygons or UBPs) and form the boundary of the umbral volumes which are polyhedra.
 Discontinuity meshing methods also employ visibility event surfaces that are both umbral and penumbral. In discontinuity meshing methods, the visibility event surfaces, called wedges, are not intersected with each other. Consequently, discontinuity mesh methods do not, for example, produce an explicit umbral volume. Instead, in discontinuity meshing methods, the wedges are only intersected with mesh polygons. Following the wedge-polygon intersection step, a 2D visibility process is applied on each wedge to determine the visible portions of the intersected polygon segments. These visible segments of the intersected mesh polygons form the discontinuity boundaries of the mesh. The discontinuity boundaries define regions of uniform qualitative visibility (e.g., umbra, antipenumbra, etc.) on the polygon mesh that can be determined after the discontinuity mesh has been constructed.
 According to some embodiments, the present method of conservative from-region visibility determination employs conservative linearized umbral visibility event surfaces which are constructed using a novel method of visibility event surface construction.
 In one embodiment of the present method, these conservative, linearized, umbral event surfaces are intersected with each other and with mesh polygons to form UBPs that are analogous to the event surfaces used in shadow volume methods.
 In another embodiment of the present method, these conservative, linearized, umbral event surfaces are effectively intersected with mesh polygons to form wedges that are analogous to the event surfaces used in discontinuity meshing methods. In a variation of this method, a conservative, linearized, from-region visibility map (VM) is constructed from these wedges.
 The following is an overview of the first-order model of visibility propagation, which applies to both types of first-order visibility event surfaces: wedges and UBPs (which can be constructed by wedge-wedge intersection).
 As is evident from the analysis of the prior art, the exact visibility event surfaces that define from-region visibility in polyhedral environments are often quadric surfaces. These higher-order surfaces present significant computational challenges which have made the development of robust, practical, from-region visibility precomputation methods very difficult.
 Embodiments include a method of from-region visibility precomputation that is based on a simplified model of from-region visibility propagation in polyhedral environments. We call this the first-order model. According to some embodiments, this model produces visibility event surfaces that are always planar, always conservative, and frequently exact. Tests are used to determine if the first-order surface is exact and to measure the maximum deviation of the first-order surface from the exact result. A higher-order method can be used to refine the first-order event surface in regions where the first-order method is imprecise. In some embodiments, the higher-order method is an implementation of the first-order method in the reverse direction: computing the portion of the viewcell visible from an edge.
 Unlike the planar visibility event surfaces used in the discontinuity meshing methods (Heckbert et al. 1992), the conservative, first-order, from-region visibility event surfaces employed by the present method are guaranteed to form continuous umbral surfaces. These continuous umbral surfaces produce continuous discontinuity mesh boundaries that partition the discontinuity mesh into regions visible from the viewcell and regions occluded from the viewcell. Consequently, these regions form a conservative, linearized umbral discontinuity mesh. Methods of constructing a conservative linearized umbral from-viewcell visibility map are disclosed. Methods for deriving a conservative from-region PVS from the corresponding from-region visibility map are also specified.
 According to some embodiments, it is assumed that a polygon mesh is a closed manifold triangle mesh (i.e., a set of triangles that are connected by their common edges or corners) with each edge having exactly two component polygons. Additionally, it is assumed that the view region is a convex viewcell. In some embodiments, these assumptions are not required by the method of first-order visibility determination, but they do simplify the implementations. For example, the polygon mesh may be manifold but not closed. In this case, each edge has either one or two component triangles.
According to some embodiments, the first-order from-region visibility model is based on the simplifying, conservative assumption that if any element of a polygon mesh is visible from any part of a view region (herein called a viewcell), then it is visible from all parts of the viewcell. This assumption leads to a definition of a first-order from-region silhouette edge.

An edge of a polygon mesh is a first-order from-region silhouette edge if one component polygon sharing the edge is front facing (visible) for at least one vertex of the view region and the other component polygon is backfacing (invisible) for all vertices of the view region. The definition of a first-order silhouette edge further requires that the component polygons are backfacing with respect to each other (i.e., not facing each other).

This is a more restrictive definition than the definition of a general from-region silhouette edge (e.g., as used by Drettakis et al. and Nirenstein 2005). An edge is a general from-region silhouette edge if one component polygon is front facing and the other component polygon is backfacing for some point of the view region. Stated differently, an edge is a general from-region silhouette edge if the edge is a from-point silhouette edge for any point in the view region.

The following table compares first-order from-region silhouette edges to general from-region silhouette edges and from-point silhouette edges.

TABLE III
Silhouette Edge Definition Table

Silhouette Definition | Backfacing Polygon | Front-Facing Polygon
From-Point | Backfacing from the point | Front facing from the point
General From-Region | Backfacing from any point on viewcell | Front facing from any point on viewcell
First-Order, From-Region | Backfacing from all points on viewcell | Front facing from at least one point (on supporting hull) on viewcell
From-Region, Extremal Penumbral | Backfacing from any point on viewcell | Front facing from all points on viewcell (on separating planes)

The definition of a first-order from-region silhouette edge is similar to a from-point silhouette edge in that both of these silhouette edges define a boundary between visibility and complete invisibility from the respective "regions", with a viewpoint being a degenerate region. Clearly, if a component polygon is backfacing for all vertices of a convex viewcell, then it is invisible from that viewcell. The first-order silhouette edge definition requires that the other component polygon sharing the edge is front facing for at least one point on the viewcell.
Clearly, on any polygon mesh there may be many more general from-region silhouette edges than first-order from-region silhouette edges. Every first-order silhouette edge is a general from-region silhouette edge, but the converse is not true.

From-region visibility is determined from a view region, which in the present embodiments is a polyhedral viewcell.

From-region visibility event surfaces are incident on from-region silhouette edges. These from-region visibility event surfaces may be penumbral or umbral.
According to some embodiments, as defined here, a from-region umbral visibility event surface (also called simply an umbral surface) is an oriented surface having a from-region occluded side and a from-region unoccluded side. Points on the from-region occluded side of the umbral surface are occluded from any and all points on (or in) the view region. Points on the from-region unoccluded side of the umbral surface are unoccluded (i.e., visible) from at least one point on (or in) the view region.

A from-region umbral visibility event surface may be exact or it may be conservative.
In some embodiments, an exact from-region umbral event surface is comprised of quadric and planar components and may be incident on any of the general from-region silhouette edges. In order to determine which of the general from-region silhouette edges support exact umbral event surfaces, the exact from-region visibility problem must be solved. As previously discussed, this is a difficult computational problem that typically requires working in higher-dimensional spaces.
In contrast, embodiments employ the first-order model of visibility propagation, defining a pivot-and-sweep method of constructing conservative umbral event surfaces which are all planar and which are incident only on first-order silhouette edges.

In some embodiments, points on the occluded side of a conservative umbral event surface are actually occluded from the view region, whereas points on the unoccluded side of a conservative umbral event surface may actually be unoccluded or occluded. Consequently, when conservative umbral event surfaces are used to determine from-region visibility, e.g., using the method of conservative from-viewcell visibility mapping, the geometry visible from a viewcell is never underestimated but may be overestimated.
The planar visibility event surfaces (wedges) employed in the prior-art method of discontinuity meshing are exact, but they do not, in general, form continuous visibility event surfaces. This is because the exact visibility event surface is generally comprised of both planar and quadric components. Consequently, the planar visibility event surfaces of the prior-art method of discontinuity meshing cannot be used to determine umbral regions.
In contrast, the first-order visibility event surfaces constructed using the methods of the present embodiments are exact or conservative, and they are guaranteed to form a continuous umbral event surface that can be employed, for example in the present method of from-region visibility mapping, to determine what geometry is inside umbral regions. From-region penumbral event surfaces are oriented visibility event surfaces that are incident on general from-region silhouette edges. On the unoccluded side of a penumbral event surface, a certain subregion or "aspect" of the source view region is visible, whereas on the occluded side of the same penumbral event surface, the same subregion of the view region is occluded. The prior-art method of discontinuity meshing uses penumbral event surfaces to determine the various components of a penumbra cast by polygon mesh objects from an area light source.
According to some embodiments, only umbral event surfaces are employed to determine from-region visibility. In one embodiment, all of these umbral event surfaces are incident on first-order silhouette edges. In an alternate embodiment, the first-order umbral event surfaces may be adaptively refined by a process of backprojection to more precisely approximate the exact umbral visibility event surfaces. These refined or "adjusted" visibility event surfaces are, like first-order umbral event surfaces, planar; but they reflect the "higher-order" visibility effects caused by partial occlusion of the view region from the silhouette edge. These visibility event surfaces are therefore called higher-order visibility event surfaces. In this alternate embodiment, these higher-order (umbral) visibility event surfaces may "retract" to non-first-order, general from-region silhouette edges.
 Table IV shows the types of visibility event surfaces incident on various types of silhouette edges and certain characteristics of these visibility event surfaces.

TABLE IV
Visibility Event Surfaces Incident on Types of Silhouette Edges

Silhouette-Edge Type | Visibility Event Surfaces Supported | Event Surface Type
From-Point | From-Point Umbral | Planar, Exact
General From-Region | From-Region Penumbral and From-Region Umbral | Planar: Exact, Not Guaranteed Continuous; Quadric: Exact, Not Guaranteed Continuous
First-Order, From-Region | First-Order, From-Region Umbral, Conservative or Exact | First-Order: Planar, Exact or Conservative, Guaranteed Continuous

These basic aspects of the first-order model of visibility propagation are illustrated in FIG. 2A and FIG. 2B. Subsequent details are given in the Description of Embodiments sections of the specification.
FIG. 2A is a diagram showing a viewcell and two simple polygon meshes A and B. FIG. 2A also shows two first-order, from-viewcell silhouette edges: edge A1 and edge B1 (which is subdivided into segments B1O and B1V).

The construction of conservative linearized umbral event surfaces (CLUES) incident on these first-order silhouette edges is now described. In the following discussion, the umbral event surfaces constructed are similar to discontinuity mesh wedges in the sense that they define visibility from a single feature of a viewcell (generally a supporting viewcell vertex or edge). In a subsequent section of this specification, it is shown that these wedges can be used to construct a conservative linearized from-viewcell visibility map (VM) from which a PVS can be derived.
In some embodiments, first-order umbral boundary polygons (UBPs), which define visibility from an entire viewcell, are explicitly constructed by intersecting the corresponding first-order wedges. The construction and use of UBPs is shown as an alternate embodiment in a later part of this specification.
Consequently, the first steps in the construction of first-order wedges and first-order UBPs are identical. The construction is illustrated with the simpler first-order wedges in FIGS. 2A and 2B.

FIG. 2A illustrates a viewcell and mesh objects A and B. In some embodiments, the viewcell is a 3D cube having eight vertices. For example, the viewcell in FIG. 2A is a cube having vertices V1 through V8. In further embodiments, the viewcell is any desired convex polyhedron. An edge of mesh A is labeled A1, having vertices A1_0 and A1_1. An edge of mesh B is labeled as two segments: B1O and B1V. With respect to first-order silhouette edge A1, segment B1V is visible from supporting viewcell vertex SVV1, as B1V is on the unoccluded side of the event surface WEDGE1 that is formed between edge A1 and the corresponding supporting viewcell vertex SVV1, which corresponds to viewcell vertex V8. In this regard, B1V is on the unoccluded side of WEDGE1 because a backfacing plane incident on first-order silhouette edge A1 pivots in a clockwise direction toward viewcell vertex V8 to determine the corresponding supporting viewcell vertex. Accordingly, in some embodiments, the direction in which a backfacing plane incident on a first-order silhouette edge pivots toward the viewcell vertex indicates the unoccluded side of an event surface supported by that viewcell vertex; the opposite direction of the pivot indicates the occluded side of the event surface supported by the viewcell vertex.

With respect to first-order silhouette edge A1, segment B1O is occluded from supporting viewcell vertex SVV1, as B1O is on the occluded side of the event surface WEDGE1 that is formed between edge A1 and the corresponding supporting viewcell vertex SVV1.
The first-order visibility event surface, labeled WEDGE1, lies in the supporting plane between edge A1 and the viewcell. The supporting polygon SP1 between edge A1 and the viewcell is the triangle (labeled SP1) formed by the vertices A1_0 and A1_1 of edge A1 and the viewcell vertex labeled SVV1.

According to some embodiments, WEDGE1, the first-order visibility event surface incident on edge A1, is formed by extending the two edges of the corresponding supporting polygon (SP1) that are incident on the vertices A1_0 and A1_1 of edge A1. This extension occurs semi-infinitely, starting at the vertices A1_0 and A1_1 of A1, in a direction away from the viewcell. The two extended rays are connected to the vertices A1_0 and A1_1 of edge A1 to form the semi-infinite umbral visibility event surface labeled WEDGE1. Only a portion of WEDGE1 is shown in FIG. 2A, as it actually extends semi-infinitely away from the viewcell. In some embodiments, the plane of an event surface is represented by a 3D planar equation such as ax + by + cz + d = 0.

Thus, in some embodiments, to form a (from-viewcell) first-order visibility event surface incident on a first-order silhouette edge and a viewcell vertex, the supporting polygon between the silhouette edge and the viewcell is first constructed. This construction is analogous to a pivot operation on the silhouette edge, in the direction away from the backfacing component polygon and toward the viewcell, until a supporting viewcell feature (edge or vertex) is encountered. In some embodiments, the wedge is formed by extending the non-silhouette edges of this supporting polygon away from the viewcell.
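The pivot-and-extend construction described above can be sketched in code. The following illustrative Python fragment is not the patent's implementation, and all function names are our own: it selects the supporting viewcell vertex (SVV) by rotating a half-plane about the silhouette edge, starting in the plane of the backfacing component polygon on the side away from that polygon and sweeping toward the viewcell; the first viewcell vertex reached supports the plane, and the wedge is then formed by extending the supporting polygon's side edges away from the viewcell.

```python
import numpy as np

def pivot_to_supporting_vertex(edge, backfacing_far_vertex, viewcell_verts):
    """Pivot a half-plane about the silhouette edge, starting in the plane of
    the backfacing polygon on the side away from that polygon and rotating
    toward the viewcell; the first viewcell vertex touched is the SVV."""
    v0, v1 = np.asarray(edge[0], float), np.asarray(edge[1], float)
    e = v1 - v0
    e /= np.linalg.norm(e)
    # direction in the backfacing plane pointing away from the backfacing polygon
    b = np.asarray(backfacing_far_vertex, float) - v0
    b -= np.dot(b, e) * e
    u = -b / np.linalg.norm(b)
    # normal of the backfacing plane, oriented toward the viewcell side
    n = np.cross(e, u)
    if np.dot(n, np.mean(np.asarray(viewcell_verts, float), axis=0) - v0) < 0:
        n = -n
    best, best_ang = None, None
    for v in np.asarray(viewcell_verts, float):
        p = v - v0
        p -= np.dot(p, e) * e
        ang = np.arctan2(np.dot(p, n), np.dot(p, u))  # pivot angle from backfacing plane
        if best_ang is None or ang < best_ang:
            best, best_ang = v, ang
    return best

def wedge_rays(edge, svv):
    """Extend the supporting polygon's side edges semi-infinitely away from
    the viewcell: the wedge is the edge plus two rays from its vertices."""
    d0 = np.asarray(edge[0], float) - svv
    d1 = np.asarray(edge[1], float) - svv
    return d0 / np.linalg.norm(d0), d1 / np.linalg.norm(d1)
```

If two viewcell vertices tie for the minimal pivot angle, the silhouette edge is parallel to a supporting viewcell edge, which is the SEME special case discussed later in the specification.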
As illustrated in FIG. 2A, event surface WEDGE1 intersects edge B1, dividing B1 into two segments: B1V, which is first-order visible from the viewcell feature (the viewcell vertex SVV1) with respect to first-order silhouette edge A1, and B1O, which is not first-order visible from SVV1 with respect to first-order silhouette edge A1. WEDGE1 intersects first-order silhouette edge B1 (comprised of segments B1O and B1V) at the point labeled CSV. This point is a compound silhouette vertex.

For the purposes of illustration, assume now that the segment B1V is on the unoccluded side of all first-order visibility event surfaces formed by the edges of mesh A and the features of the VIEWCELL. In this case, B1V is outside (on the unoccluded side of) the first-order polyhedral aggregate umbrae (PAU) formed by the intersection of the first-order wedges with the mesh polygons and with each other. Under these conditions, segment B1V is first-order visible from the viewcell.
If the segment B1V is first-order visible from the viewcell, then under the conservative assumptions of the first-order visibility model, segment B1V is assumed to be visible from any part of the viewcell. Consequently, the first-order visibility event surface incident on the segment B1V is constructed by the previously described pivoting operation, which generates the supporting polygon (SP2) between the segment B1V and the supporting viewcell vertex labeled SVV2. As illustrated in FIG. 2A, the supporting polygon SP2 is defined by viewcell vertex V3 (SVV2) and the vertices of segment B1V. The previously described method of extending the supporting polygon is once again employed. The resulting first-order visibility event surface incident on B1V is labeled WEDGE2.

WEDGE1 is an exact visibility event surface incident on edge A1 because, in this case, the corresponding supporting viewcell vertex SVV1 is actually visible from the supported first-order silhouette edge A1.
WEDGE2 is not an exact visibility event surface through edge B1V because the conservative assumption of the first-order visibility model is violated in a very specific way: the corresponding supporting viewcell vertex SVV2 is not actually visible from the supported first-order silhouette edge B1V; it is occluded when viewed from this edge.

The exactness of any first-order visibility event surface (e.g., wedge) incident on a silhouette edge can be determined using a 2D visibility test which tests the visibility of the supporting viewcell vertex from the silhouette edge. In some embodiments, if the supporting viewcell feature is a vertex, then this is a from-point visibility test that is equivalent to testing the visibility of the first-order silhouette edge from the corresponding supporting viewcell vertex (SVV). According to some embodiments, segments of the first-order silhouette edge that are visible from the corresponding SVV support exact visibility event surfaces, and segments of the first-order silhouette edge that are occluded from the corresponding SVV support inexact/conservative visibility event surfaces.
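As a concrete illustration of this exactness test, the sketch below (our own illustrative code, not the patent's implementation) samples points along a silhouette edge and checks whether the SVV is visible from each sample, using a standard segment-triangle intersection (the Möller-Trumbore algorithm) against a set of occluding triangles. Samples from which the SVV is blocked support inexact (conservative) wedge segments:

```python
import numpy as np

def segment_hits_triangle(p, q, tri, eps=1e-9):
    """True if the open segment p-q intersects triangle tri (Moller-Trumbore)."""
    v0, v1, v2 = (np.asarray(v, float) for v in tri)
    d = np.asarray(q, float) - np.asarray(p, float)
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:
        return False                 # segment parallel to the triangle plane
    f = 1.0 / a
    s = np.asarray(p, float) - v0
    u = f * np.dot(s, h)
    if u < -eps or u > 1 + eps:
        return False
    qv = np.cross(s, e1)
    v = f * np.dot(d, qv)
    if v < -eps or u + v > 1 + eps:
        return False
    t = f * np.dot(e2, qv)
    return eps < t < 1 - eps         # hit strictly inside the segment

def classify_edge_exactness(edge, svv, occluders, samples=16):
    """Sample the silhouette edge; a sample supports an exact wedge segment
    iff the SVV is visible from it (no occluder blocks the segment)."""
    v0, v1 = np.asarray(edge[0], float), np.asarray(edge[1], float)
    result = []
    for i in range(samples):
        pt = v0 + (v1 - v0) * (i + 0.5) / samples
        blocked = any(segment_hits_triangle(pt, svv, tri) for tri in occluders)
        result.append('inexact' if blocked else 'exact')
    return result
```

In the 2D test described by the specification, the classification is made per segment of the silhouette edge rather than per sample; the sampling here is only a conservative approximation for illustration.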
In the special case where the silhouette edge is parallel to a supporting viewcell edge, a special from-edge visibility test is required. This is presented in detail in a later part of the specification.

Embodiments also include a method to increase the precision of inexact visibility event surfaces. In this method, for each segment of a first-order silhouette edge supporting an inexact wedge, a point on the surface of the viewcell is identified that is the visible supporting viewcell vertex (VSVV) for the segment. The VSVV is actually visible from the corresponding silhouette edge segment and forms a supporting polygon with the segment.

According to some embodiments, the VSVV is determined by backprojection: using the silhouette edge as a linear light source and constructing the first-order, from-region (in this case from-edge) visibility event surfaces cast by polygon mesh objects from the linear light source back toward the viewcell. The intersection of these first-order wedges with the mesh polygons and with the viewcell comprises a from-silhouette-edge, on-viewcell visibility map. This visibility map contains the components of the viewcell that are visible from the silhouette edge. The VSVV is the supporting vertex of these visible components.

A wedge constructed by pivoting from the inexact silhouette edge segment to the corresponding VSVV is an adjusted or "higher-order" visibility event surface. These higher-order visibility event surfaces reflect the effect of partial occlusion of the viewcell (source) from the silhouette edge, an effect which is not accounted for by the simple, conservative first-order model of visibility propagation.

FIG. 2B shows the results of a backprojection process in which B1V is treated as a linear light source. A wedge labeled WEDGE_BACK, incident on vertices A1_0 and A1_1 of edge A1, is constructed from segment B1V, treated as a linear light source. Note that edge A1 is a first-order silhouette edge with respect to the source region B1V. The area below WEDGE_BACK, in this example, is the unoccluded side of WEDGE_BACK, which indicates the portion of the viewcell visible from B1V.

The supporting polygon between B1V and A1 is a triangle with edge A1 and vertex VB of edge B1V. The corresponding wedge, WEDGE_BACK, intersects the viewcell, creating a new visible contour of the viewcell which includes vertex VSVV.
In some embodiments, the process of constructing backprojected wedges such as WEDGE_BACK employs the methods of first-order silhouette edge identification and pivot-and-sweep wedge construction as described in some embodiments in this specification. When these methods are applied using the viewcell as the view region, the resulting first-order wedges extend away from the viewcell and intersect polygon meshes, partitioning them into portions that are first-order visible from the viewcell and portions which are first-order occluded. In contrast, when these methods are applied to backprojection, the corresponding wedges extend away from the first-order silhouette edge (such as B1V), which acts as a lineal view region, and intersect the viewcell, partitioning the viewcell into portions that are first-order visible from the silhouette edge and portions which are first-order occluded from the silhouette edge. This partitioning of the viewcell defines a new visible viewcell boundary or contour (also called the visible supporting viewcell silhouette contour), which is conservatively visible from the first-order silhouette edge used as the lineal view region. The vertices of this contour are then tested to determine which is the supporting vertex (the visible supporting viewcell vertex) for the higher-order wedge to be constructed on the first-order silhouette edge used as a lineal view region.
The "adjusted" or higher-order wedge is constructed by pivoting from B1V to the VSVV, forming a supporting polygon SP_HIGH between the edge B1V and the visible viewcell contour.
The non-silhouette edges of the higher-order supporting polygon SP_HIGH are extended through the vertices of B1V, as previously described, to form the higher-order wedge WEDGE_HIGH.
Thus, in order to construct a higher-order visibility event surface on a from-viewcell, first-order silhouette edge, the first-order method of visibility propagation is applied in the reverse direction to determine the portion of the viewcell visible from the silhouette edge.
As shown in later parts of the specification, a first-order silhouette edge supporting an inexact visibility event surface can be adaptively subdivided based on error metrics. Higher-order wedges can be constructed on the subdivided segments, guided by these error metrics, such that the result is a piecewise planar approximation of the corresponding exact quadric event surface. Further, the present method of first-order visibility, so applied, realizes a new method of constructing quadric surfaces which ensures that the constructed surface conservatively underestimates occlusion even as it converges on the exact result.
The preceding theoretical introduction to first-order visibility employed a single type of visibility event surface for the purposes of illustration. This type of visibility event surface is formed between a source (viewcell) vertex and a mesh silhouette edge and is called a SVME wedge. Another type of visibility event surface is used to construct a continuous from-region visibility event surface incident on non-convex polygon meshes. This type of visibility event surface is formed from a viewcell (source) edge and a mesh vertex and is called a SEMV wedge, which is discussed in detail in other parts of this specification.
In conclusion, first-order wedges are constructed from first-order silhouette edges using the simple first-order "pivot-to-viewcell" method. First-order wedges can be intersected with mesh polygons and with other wedges to form continuous from-viewcell visibility maps or continuous first-order PAU. Both of these data structures conservatively underestimate the from-viewcell occlusion. Embodiments include implementations in which a conservative, from-viewcell PVS is derived from either first-order visibility maps or first-order PAU.
Higher-order visibility event surfaces can be constructed by a backprojection process in which first-order visibility methods are applied to determine the portions of a viewcell visible from a silhouette edge.

The above description introduces the first-order model of visibility propagation and gives a general overview of some methods for constructing first-order visibility event surfaces. The details of first-order silhouette edge identification and first-order wedge construction are provided later in the specification.
One embodiment includes a method of conservative, linearized visibility map construction that is based on a simplified, first-order model of visibility propagation in polyhedral environments. As previously described, the first-order visibility model is based on the conservative assumption that if a silhouette edge of a polygon mesh is visible from any part of a viewcell, then it is visible from all parts of the viewcell. According to embodiments of this model, silhouette edges (called first-order silhouette edges) are limited to those triangle mesh edges that have one component polygon that is backfacing for all vertices of the viewcell and another component polygon that is front facing for at least one vertex of the viewcell. Additionally, to be a first-order silhouette edge, the component polygons must be backfacing with respect to each other.
This model also leads to a method in which first-order conservative linearized umbral event surfaces (called CLUES, also called first-order wedges or simply wedges) are formed either by pivoting from the (first-order) silhouette edge to a vertex of the viewcell (SVME wedges derived from the pivoted supporting polygons) or by sweeping from a (first-order) inside-corner silhouette vertex through viewcell silhouette edges (SEMV wedges derived from swept supporting triangles). The method also employs SEME wedges, generated in the special case where the supported silhouette edge is parallel to a supporting viewcell edge. The first-order embodiment always produces a conservative umbra boundary, and in some cases it is the exact umbra boundary.
Other embodiments are based on a higher-order model of visibility propagation in polyhedral environments. This model does not assume that if a silhouette edge is visible from any part of a viewcell then it is visible from all parts of the viewcell. Rather, this model accounts for portions of the viewcell that are occluded from a silhouette edge. The higher-order model forms the basis of alternate embodiments which can produce a more precise approximation to the exact umbra boundaries in cases where the first-order method is imprecise.
The first-order embodiment of the method is described first.

FIG. 1 shows a flowchart disclosing the general organization of the construction of first-order wedges formed by a polygon mesh object and a convex viewcell using the pivot-and-sweep method. According to some embodiments, process flow starts at step 110, where polygons of the polygon mesh object are individually examined for first-order silhouette edges. The method of identifying first-order silhouette edges is disclosed in detail in FIG. 3. An embodiment disclosing the order in which the polygons are examined is illustrated in FIG. 20, which illustrates an algorithm enforcing a strict front-to-back order.

Process flow proceeds to step 112 to determine if the first-order silhouette edge encountered in step 110 is parallel to the supporting viewcell edge.

If, in decision step 112, it is determined that the first-order silhouette edge is not parallel to the supporting viewcell edge, then process flow proceeds to step 114 to construct a supporting polygon between the silhouette edge and the viewcell. FIG. 4A illustrates the details of this construction.

Process flow proceeds to step 116 to construct the SVME wedge incident on the first-order silhouette edge by extending specific edges of the corresponding pivoted supporting polygon incident on vertices of the first-order silhouette edge. Additional details of step 116 are disclosed in FIG. 6A.

If the first-order silhouette edge is parallel to a supporting viewcell edge, then process flow proceeds from step 112 to step 118.
In step 118, the supporting quadrangle, called a SEME (source edge-mesh edge) quadrangle, is constructed by pivoting from the mesh silhouette edge to the viewcell as previously described.
Process flow proceeds to step 120 to construct the SEME wedge corresponding to the SEME supporting quadrangle by extending the line segments formed by diagonal vertices of the SEME supporting quadrangle. The edges of the SEME wedge are comprised of the supported silhouette edge and the two lines formed by extending the diagonal line segments through the silhouette edge vertices and away from the viewcell.
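A minimal sketch of this diagonal-extension step follows (illustrative only; the pairing of each silhouette vertex with the diagonally opposite viewcell vertex is our assumption about the quadrangle's vertex ordering): the two rays bounding the SEME wedge pass through each silhouette vertex and the diagonally opposite viewcell vertex, directed away from the viewcell.

```python
import numpy as np

def seme_wedge(viewcell_edge, silhouette_edge):
    """Extend the diagonals of the SEME supporting quadrangle through the
    silhouette edge vertices, away from the viewcell. Returns the two ray
    directions bounding the semi-infinite wedge. Assumes s0 pairs with v0
    (same end of the quadrangle), so the diagonal of s0 runs from v1."""
    (v0, v1), (s0, s1) = viewcell_edge, silhouette_edge
    v0, v1, s0, s1 = (np.asarray(p, float) for p in (v0, v1, s0, s1))
    d0 = s0 - v1   # diagonal from opposite viewcell vertex through s0
    d1 = s1 - v0   # diagonal from opposite viewcell vertex through s1
    return d0 / np.linalg.norm(d0), d1 / np.linalg.norm(d1)
```

For a viewcell edge above a parallel silhouette edge, the two rays spread outward and downward, away from the viewcell, as expected for the umbra of a linear source behind a parallel occluder edge.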
Process flow proceeds from steps 116 or 120 to decision step 125 to determine if adjacent silhouette edges form an outside corner of the first-order silhouette contour. In some embodiments, this determination is made using a simple test of the relative orientation of adjacent silhouette edges. Each edge, being on the boundary of a polygon mesh, has a natural orientation in which one normal to the edge faces outside the polyhedron (the out-facing normal) and the opposite normal faces inside the polyhedron. If the two out-facing normals for adjacent silhouette edges are facing away from each other, then the shared silhouette vertex is an inside corner of the silhouette contour. Otherwise, the shared silhouette vertex forms an outside corner.
In some embodiments, the relative orientation of edges on a mesh is used to determine which vertices of the mesh can possibly be inside-corner vertices. For example, vertices of inside-corner edges (reflex or non-convex edges) may be inside-corner vertices. In some embodiments, the determination of whether a vertex is an inside-corner vertex is ultimately made by examining the relationship between the pivoted wedges incident on the edges shared by the vertex. In some embodiments, if the pivoted wedges incident on adjacent mesh edges intersect only at the shared vertex (and have no face-to-face intersection or common edge), then the vertex is an inside-corner vertex. In some embodiments, if a non-shared vertex of one of the adjacent edges is on the unoccluded side of the pivoted wedge incident on the other adjacent edge, then the vertex common to the adjacent edges is an inside-corner vertex.
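The orientation test described above can be sketched as follows (our own illustrative code; the edge and normal conventions are assumptions): for adjacent silhouette edges a-s and s-b with out-facing normals n1 and n2, the shared vertex s is classified as an inside corner when the out-facing normals face away from each other, i.e., each edge's far endpoint lies on the outward side of the other edge's plane.

```python
import numpy as np

def is_inside_corner(a, s, b, n1, n2, eps=1e-9):
    """Adjacent silhouette edges a-s and s-b share vertex s; n1 and n2 are
    the out-facing normals associated with edges a-s and s-b respectively.
    The shared vertex is an inside corner iff each edge's far endpoint lies
    on the outward side of the other edge's plane (a conservative sketch of
    the orientation test described in the text)."""
    a, s, b = (np.asarray(p, float) for p in (a, s, b))
    return bool(np.dot(n1, b - s) > eps and np.dot(n2, a - s) > eps)
```

For a convex (outside) corner the next edge bends back behind the previous edge's out-facing normal, so both dot products are non-positive; at a reflex (inside) corner the boundary bends outward and both are positive.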
If it is determined, in decision step 125, that the adjacent silhouette edges form an outside corner of the silhouette contour, then process flow proceeds to step 140 to intersect the wedges incident on the adjacent silhouette edges with each other. In some embodiments, if the adjacent SVME wedges were generated by pivoting to the same supporting viewcell vertex (SVV), then they exactly intersect at a common edge. Otherwise, the adjacent SVME wedges intersect each other in their polygon interiors, and an explicit polygon-polygon intersection determination is made. In either case, the intersecting SVME wedges produce a continuous umbral event surface spanning the portion of the first-order silhouette contour formed by the two supported silhouette edges. In some embodiments, adjacent SVME wedges are not intersected; in these embodiments, step 140 is optional. A SVME wedge which is not intersected with an adjacent SVME wedge can still be intersected with mesh polygons, and the resulting wedge-mesh polygon intersection tested to determine if it is a from-viewcell occlusion boundary. Additional discussion of intersecting adjacent SVME wedges is given in conjunction with FIG. 7D4 and FIG. 7D5.
If it is determined, in decision step 125, that the adjacent silhouette edges do not form an outside corner of the silhouette contour, then process flow proceeds from step 125 to step 130. This case corresponds to an inside corner of a first-order silhouette contour.
In some embodiments, such inside corners formed by two silhouette edges that are connected by a vertex are simple silhouette vertices. Using the first-order model of visibility propagation, inside corners can also form on compound silhouette contours in which the component silhouette edges do not share a vertex in the original manifold mesh. These are called compound silhouette vertices (CSVs); they correspond to from-region t-vertices of the manifolds and are discussed in detail in a later part of this specification.
In step 130, one or more supporting swept triangles are formed between the inside-corner mesh silhouette vertex and certain edges of the viewcell that are from-point silhouette edges with respect to the inside-corner mesh silhouette vertex. Additional details of this process are disclosed in FIG. 5A and FIG. 5B.

Process flow proceeds to step 135, where the corresponding SEMV wedges are generated from the swept triangles by extending the edges of the swept triangles through the inside-corner mesh silhouette vertex. Additional details of this process are disclosed in FIG. 6B.

Alternate embodiments are possible in which the set of first-order wedges is constructed using a different method. For example, in one alternate embodiment, the entire conservative supporting hull between the viewcell and the polygon mesh objects may be constructed, and the first-order wedges selected as a subset of the conservative supporting hull polygons.

FIG. 3 shows details of step 110 in FIG. 1, the identification of first-order silhouette edges. According to some embodiments, the process illustrated in FIG. 3 is entered at step 110 of FIG. 1. In some embodiments, the process of identifying first-order silhouette edges starts at step 310 to identify the component polygons of the edge. In some embodiments, this process is facilitated by storing the polygon mesh as a linked data structure, such as a winged-edge data structure, in which a reference to the component polygons for each edge is stored. In other embodiments, any desired data structure is used to represent the polygon mesh. In one implementation, the polygon mesh is a closed manifold mesh in which each edge is shared by exactly two component polygons.

Process flow proceeds to decision step 315 to test one component polygon, called polygon B or PB, to determine if the component is backfacing for all vertices of the viewcell. In this case, all vertices of the viewcell would be on the backfacing side of the plane that contains the component polygon.
If, in decision step 315, it is determined that PB is not backfacing for all viewcell vertices, then process flow proceeds from step 315 to step 320 to test the other component polygon, called PA, as described in step 315.
If, in decision step 320, it is determined that PA is backfacing for all vertices of the viewcell, then process flow proceeds to step 325 to determine if component triangle PB is front-facing for at least one viewcell vertex.
If, in decision step 325, it is determined that PB is front-facing for at least one viewcell vertex, then process flow proceeds to decision step 330 to test PA and PB to determine if they are backfacing with respect to each other.
If, in decision step 330, it is determined that PA and PB are backfacing relative to each other, then process flow proceeds to step 335, where the edge being tested is identified as a first-order silhouette edge.
If, in decision step 330, it is determined that PA and PB are not backfacing relative to each other, then process flow proceeds to step 355, which returns a result that the edge being tested is not a first-order silhouette edge.
If, in decision step 315, it is determined that PB is backfacing for all vertices of the viewcell, then process flow proceeds to step 340 to determine if PA is front-facing for at least one viewcell vertex. If PA is front-facing for at least one viewcell vertex, process flow proceeds to step 345 to determine if PA and PB are backfacing to each other, as functionally described in step 330.
If PA and PB are backfacing with respect to each other, process flow proceeds to step 350, which returns a result that the edge being tested is a first-order silhouette edge. If PA and PB are not backfacing to each other, process flow proceeds from 345 to 355. If PA is not front-facing for at least one viewcell vertex, process flow proceeds from 340 to 355. If any of the tests in steps 320, 325, 330, 340, or 345 fail, then the mesh edge is not a first-order silhouette edge, as indicated in step 355.
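The test of steps 315-355 can be sketched in code. The following is an illustrative sketch, not the claimed method itself: it assumes each component polygon is supplied with an outward normal, a point on its plane, and its apex (the vertex not on the shared edge), and all function and variable names are invented for the example.

```python
# Illustrative sketch of the first-order silhouette edge test of FIG. 3
# (steps 315-355). Each component polygon is a dict with an outward
# 'normal', a 'point' on its plane, and its 'apex' off the shared edge.

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def backfacing_for_all(poly, viewcell):
    # Steps 315/320: every viewcell vertex on the back side of the plane.
    return all(dot(poly['normal'], sub(v, poly['point'])) <= 0 for v in viewcell)

def frontfacing_for_some(poly, viewcell):
    # Steps 325/340: at least one viewcell vertex on the front side.
    return any(dot(poly['normal'], sub(v, poly['point'])) > 0 for v in viewcell)

def mutually_backfacing(pa, pb):
    # Steps 330/345: each polygon's apex behind the other polygon's plane.
    return (dot(pa['normal'], sub(pb['apex'], pa['point'])) <= 0 and
            dot(pb['normal'], sub(pa['apex'], pb['point'])) <= 0)

def is_first_order_silhouette(pa, pb, viewcell):
    # One component polygon must be backfacing for all viewcell vertices,
    # the other front-facing for at least one, and the two polygons must
    # be backfacing with respect to each other (a locally supporting edge).
    for first, second in ((pb, pa), (pa, pb)):
        if backfacing_for_all(first, viewcell):
            return (frontfacing_for_some(second, viewcell) and
                    mutually_backfacing(pa, pb))
    return False
```

As a usage example, a convex "ridge" edge along the x-axis is a first-order silhouette edge for a viewcell on the occluded side of one face, but not for a viewcell that sees both faces.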

FIG. 4A is a flowchart showing the method of constructing a SVME supporting polygon incident on a (first-order) mesh silhouette edge. FIG. 4A gives additional detail of the process shown in step 116. According to some embodiments, the process illustrated in FIG. 4A is entered from step 116 in FIG. 1. In some embodiments, the process of constructing supporting polygons starts at step 410 upon encountering a silhouette edge of the polygon mesh. In the present embodiment this is a first-order silhouette edge, although other embodiments may potentially use higher-order silhouette edges.
Process flow proceeds to step 415 to set a SUPPORTING_ANGLE between the first-order silhouette edge and the viewcell to a MAX value (e.g., 180 degrees). According to some embodiments, the supporting angle is defined as the angle formed when pivoting a plane through the first-order silhouette edge, starting in the plane of the backfacing component polygon and pivoting toward the viewcell (in the general direction of the normal of the backfacing component polygon) until the first vertex or edge of the viewcell is encountered. The position of the pivoting plane on contact with the viewcell is the plane of the supporting polygon between the silhouette edge and the viewcell. The angle traversed during the pivot is called the supporting angle or the pivot angle, and it is measured between the supporting plane and the plane of the backfacing component polygon of the silhouette edge. The viewcell vertex, or edge if the supporting polygon is SEME type, that results in the smallest pivot angle is the supporting vertex or supporting edge.
The remainder of FIG. 4A shows the process of identifying the supporting viewcell vertex and constructing the supporting polygon. Process flow proceeds to step 420 to set the VERTEX to the first vertex of the viewcell. In embodiments, the VERTEX is a candidate vertex that is tested to determine if the candidate vertex is a supporting vertex. Process flow proceeds to step 425 to construct a triangle TRIANGLE between the mesh silhouette edge EDGE and VERTEX. Process flow proceeds to step 430 to measure the ANGLE between the visible sides of the plane of the TRIANGLE and the plane of the backfacing component polygon of the silhouette edge, using a standard method for measuring the angle between planes at their line of intersection. Process flow proceeds to step 435 to compare this ANGLE to the current value of the SUPPORTING_ANGLE. If the ANGLE is less than the current value of the SUPPORTING_ANGLE, then process flow proceeds to step 440 to set the SUPPORTING_ANGLE to ANGLE. Process flow proceeds to step 445 to set the SUPPORTING_VERTEX to the current VERTEX. Process flow proceeds to step 450, where the supporting polygon is set to the triangle formed by the silhouette edge and the supporting vertex.
Process flow proceeds to step 455 to determine if unprocessed viewcell vertices remain. If, in decision step 455, it is determined that no unprocessed viewcell vertices remain, then process flow proceeds to step 460, where the supporting polygon is output.
 If, in decision step 455, it is determined that unprocessed viewcell vertices remain, then process flow proceeds to step 475, where the next viewcell vertex is selected for processing.
If, in decision step 435, it is determined that the ANGLE (pivot angle) measured is not less than the current SUPPORTING_ANGLE, then process flow proceeds to step 465 to determine if the pivot angle (ANGLE) equals the current value of SUPPORTING_ANGLE. If this condition is true, then two vertices of the viewcell form the same pivot angle with the silhouette edge, corresponding to a SEME supporting polygon, and process flow proceeds to step 470 to set the supporting polygon to the quadrangle formed between the silhouette edge and the viewcell edge connecting the two viewcell vertices (an SEME supporting polygon).
A quadrangular supporting polygon is constructed in step 470 only in the special case when the supporting angle between the silhouette edge and two viewcell vertices is equal. For a convex viewcell, which is assumed in the present embodiment, this occurs only when the two supporting viewcell vertices lie on an edge of the viewcell that is parallel to the mesh silhouette edge. In this case, the visibility from the viewcell "across" the silhouette edge is not determined by the usual from-point visibility triangle but instead by a from-segment visibility quadrangle.
 Other embodiments are possible which deal with this special case differently, for example by constructing two supporting triangles and a swept triangle incident on the parallel supporting viewcell edge. Using this approach, the resulting corresponding adjacent UBPs will not intersect only at an edge, but instead, they will overlap on their planes, causing a local degeneracy of the bounded polyhedral umbra volume. The present method of identifying quadrangular supporting polygons avoids such degeneracies in later steps.
 Regardless of whether the candidate supporting polygon is a triangle or a quadrangle, the process flow proceeds from step 470 to step 455 to determine if any unprocessed vertices remain as described above. If viewcell vertices remain, then process flow returns to step 475, where the next viewcell vertex is selected. Subsequently the process follows the previously described steps.
 At the final step 460, the process outputs a supporting polygon that is either a triangle, formed by the mesh silhouette edge and a vertex of the viewcell, or a quadrangle that is formed between the mesh silhouette edge and a viewcell edge.
Alternate embodiments of the method of constructing SVME supporting polygons are possible. In one alternate embodiment, the SUPPORTING_VERTEX corresponding to one first-order silhouette edge is limited to those viewcell vertices directly connected to the SUPPORTING_VERTEX for an adjacent first-order silhouette edge, wherein the adjacent edges form an outside corner (convex feature) of the mesh. This method is similar to the classic prior-art divide-and-conquer method of constructing a convex hull in 3D. In the present application the viewcell is a very simple polyhedron, and the speedup afforded by this method is very limited.
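The pivot search of steps 415-475 can be sketched as follows, measuring the pivot angle in the plane perpendicular to the silhouette edge; a tie between two viewcell vertices corresponds to the SEME quadrangle case of step 470. This is a sketch under stated assumptions (the backfacing component polygon is given by its outward normal and off-edge apex), and all names are illustrative.

```python
import math

# Sketch of the FIG. 4A pivot search: rotate a plane about the silhouette
# edge, starting in the plane of the backfacing component polygon, and
# keep the viewcell vertex reached at the smallest pivot angle.

def sub(u, v): return [u[i] - v[i] for i in range(3)]
def dot(u, v): return sum(u[i] * v[i] for i in range(3))
def norm(u):
    l = math.sqrt(dot(u, u))
    return [x / l for x in u]

def perp_to(v, d):
    """Component of v perpendicular to the unit vector d."""
    k = dot(v, d)
    return [v[i] - k * d[i] for i in range(3)]

def find_supporting_vertex(edge, back_normal, back_apex, viewcell, eps=1e-9):
    """Returns (supporting vertices, pivot angle). Two returned vertices
    signal the SEME quadrangle special case of step 470."""
    e0, e1 = edge
    d = norm(sub(e1, e0))
    n = norm(perp_to(back_normal, d))          # pivot (normal) direction
    u = norm(perp_to(sub(e0, back_apex), d))   # start direction: in the
                                               # backfacing plane, away from the mesh
    best, best_angle = [], None
    for v in viewcell:
        w = perp_to(sub(v, e0), d)
        a = math.atan2(dot(w, n), dot(w, u)) % (2 * math.pi)  # pivot angle
        if best_angle is None or a < best_angle - eps:
            best, best_angle = [v], a          # smaller pivot angle found
        elif abs(a - best_angle) <= eps:
            best.append(v)                     # tie: SEME supporting polygon
    return best, best_angle
```

For a silhouette edge along the x-axis whose backfacing polygon lies in the z=0 plane, the viewcell vertex reached first by the upward pivot is returned as the supporting vertex.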

FIG. 4B shows a mesh object M1 and a viewcell. The VIEWCELL and polygon mesh M1 are the same objects shown in FIG. 7A and FIG. 7B1. In FIG. 4B the viewpoint is between that of FIG. 7A and FIG. 7B1. A first-order silhouette edge labeled B also appears in all three figures. The view direction in FIG. 4B is very close to being parallel to edge B. Thus edge B is seen nearly edge-on as a point. A vertex of the polygon mesh M1 is shown as vertex V3 in FIG. 4B and FIG. 7B1. Two candidate supporting polygons are shown as CANDIDATE SP1 and CANDIDATE SP2. A candidate supporting polygon is identified for first-order silhouette edge B by constructing a triangle formed by edge B and a vertex of the viewcell. The angle that the plane of this supporting polygon forms with the plane of the backfacing component polygon sharing edge B is measured. This angle corresponds to the variable ANGLE measured in step 430 of FIG. 4A and used in steps 435 and 465 of the same figure. In the example shown in FIG. 4B the backfacing component polygon of first-order silhouette edge B is the triangle formed by edge B and vertex V3.
In this example, the angle formed by CANDIDATE SP1 (corresponding to viewcell vertex V4) is indicated by a dashed arc labeled ANGLE1.
In this example, the angle formed by CANDIDATE SP2 (corresponding to viewcell vertex V8) is indicated by a dashed arc labeled ANGLE2.
From the two arcs, it is apparent that ANGLE1 is less than ANGLE2. According to the exemplary flowchart of FIG. 4A, CANDIDATE SP1 would be retained as a candidate for the actual supporting polygon on first-order silhouette edge B. If all the vertices of VIEWCELL are tested by the process shown in the exemplary flowchart of FIG. 4A, it will be found that vertex V4 results in the supporting polygon (CANDIDATE SP1) giving the smallest supporting angle. CANDIDATE SP1 is shown as the actual supporting polygon SPB in FIG. 7C1. Standard angle measures can be employed to determine the angle, including the cross product between the normal vectors of the plane of the backfacing polygon and the candidate supporting polygon.

FIG. 4C is a flow diagram showing a test for determining if a polygon formed between a first-order silhouette edge and a viewcell vertex is a supporting polygon. Alternate embodiments are possible in which SVME supporting polygons are identified by considering both the "sidedness orientation" of the candidate supporting polygon (relative to the interior of the polygon mesh) and the orientation of the candidate supporting polygon relative to the viewcell vertices.
 In one embodiment, mesh polygons are all assumed to be “outside” polygons which have their normal vector locally oriented away from the “inside” of the region contained by the polygon mesh. In such embodiments, all mesh polygons of a polygon mesh consistently have this same “sidedness” orientation.
 A polygon is a planar structure which can have two sides, corresponding to the two sides of the plane containing the polygon. Exemplary embodiments include polygon meshes which are manifold or closed. Manifold meshes divide the volume of space in which they are embedded into an inside and an outside. In computer graphics, it is useful to employ manifold meshes in which the normal vector of each polygon in the mesh is locally oriented to face away from the inside of this enclosed volume. This can be called the “outside” side of the polygon. The opposite side can be called the “inside” side of the polygon. If all polygons have this consistent sidedness orientation in a mesh, then no inside side of a polygon should ever be visible from the outside.
In exemplary embodiments, it can be established that polygons of a mesh have the same sidedness orientation by examining the vertex orderings of adjacent polygons, i.e., polygons which share an edge. (See Schneider, Philip J., and Eberly, David H., "Geometric Tools for Computer Graphics," Morgan Kaufmann, 2003, pp. 342-345, the entire contents of which are incorporated herein by reference.) Let F_{0} and F_{1} be two adjacent polygons sharing an edge comprised of two vertices V_{1} and V_{3}. If vertices V_{1} and V_{3} occur in the order V_{1} followed by V_{3} for polygon F_{0}, then they must occur in polygon F_{1} in the order V_{3} followed by V_{1}. Adjacent polygons in which shared edges have this ordering are said to have a consistent vertex ordering. Polygons with a consistent vertex ordering have the same sidedness orientation. The vertex ordering reflects the order in which the vertices are stored for each triangle. Vertices accessed in this same order for a triangle define vectors (triangle edges) whose cross products are the coefficients A, B, C of the plane equation or normal vector of the triangle. In some embodiments, all mesh triangles have consistent vertex orderings and all will have normal vectors that point away from the inside of the mesh, i.e. they are all outside-facing triangles. Embodiments may employ known algorithms to identify and repair inconsistent vertex orderings in a polygon mesh prior to processing (see the MakeConsistent procedure of Schneider and Eberly (2003), p. 345).
FIG. 4D1 is an exemplary diagram showing two adjacent polygons F_{0} and F_{1} in which the polygons have a consistent vertex ordering. Note that for polygon F_{0} the shared edge is accessed in V_{1}-V_{3} order, while for the adjacent polygon F_{1} the same shared edge is accessed in V_{3}-V_{1} order, thus meeting the definition of consistent ordering. Adopting a right-hand rule convention, the normal of both polygons points out of the plane of the image.
FIG. 4D2 is an exemplary diagram showing two adjacent polygons F_{0} and F_{1} in which the polygons do not have a consistent vertex ordering.
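The shared-edge ordering rule can be checked mechanically. The following sketch (with illustrative vertex indices, after Schneider and Eberly) encodes the test described above: two adjacent triangles have consistent vertex ordering exactly when they traverse their shared edge in opposite directions.

```python
# Sketch of the shared-edge vertex-ordering test. Triangles are given as
# vertex-index triples; the index assignments below are illustrative.

def directed_edges(tri):
    a, b, c = tri
    return {(a, b), (b, c), (c, a)}

def consistent_ordering(f0, f1):
    """True if the adjacent triangles f0 and f1 traverse their shared
    edge in opposite directions (same sidedness orientation)."""
    reversed_f0 = {(v, u) for (u, v) in directed_edges(f0)}
    return len(reversed_f0 & directed_edges(f1)) == 1

# As in FIG. 4D1: F0 visits the shared edge V1->V3, F1 visits it V3->V1.
assert consistent_ordering((1, 3, 2), (3, 1, 4))
# As in FIG. 4D2: both triangles visit the shared edge V1->V3.
assert not consistent_ordering((1, 3, 2), (1, 3, 4))
```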
In one embodiment, a candidate SVME supporting polygon for a first-order silhouette edge is formed between a viewcell vertex and the first-order silhouette edge. The candidate supporting polygon is given the same sidedness orientation as the backfacing mesh polygon sharing the first-order silhouette edge. (Using this consistent sidedness orientation, for example, a person walking across the first-order silhouette edge on the "outside" surface of the backfacing mesh polygon would encounter the "outside" surface of the candidate supporting polygon.) The orientation of the plane of each candidate supporting polygon is then examined relative to the viewcell vertices. If the plane of the candidate supporting polygon is not front-facing with respect to each viewcell vertex, then the viewcell vertex forming the candidate supporting polygon is a supporting viewcell vertex, and the candidate supporting polygon is a supporting polygon.
According to some embodiments, the employed definition of front-facing with respect to a viewcell vertex excludes viewcell vertices which are in the plane of the candidate supporting polygon (i.e., admitting a supporting viewcell vertex as not front-facing). Alternate embodiments can employ variations of the definitions of backfacing and front-facing to determine that a candidate supporting polygon is not front-facing with respect to each viewcell vertex. In at least one exemplary embodiment, the test includes establishing that the candidate supporting polygon is backfacing with respect to each viewcell vertex, where the definition of a plane that is backfacing to a vertex includes vertices which are in the plane (i.e., admitting a supporting viewcell vertex as backfacing to a supporting polygon).
According to some embodiments, the process illustrated in FIG. 4C is entered from step 480. In step 480 a candidate supporting polygon is formed between the first-order silhouette edge and a viewcell vertex (V). Process flow proceeds to step 485 to set the sidedness orientation of the candidate supporting polygon formed in step 480 to be the same as the backfacing component polygon sharing the first-order silhouette edge.
Process flow proceeds to step 487 to determine if the candidate supporting polygon is not front-facing for each of the viewcell vertices. If, in decision step 487, it is determined that the candidate supporting polygon is not front-facing with respect to each viewcell vertex, then process flow proceeds to step 491 to identify the viewcell vertex (V) as a supporting viewcell vertex and to identify the candidate supporting polygon as a supporting polygon.
If, in decision step 487, it is determined that the candidate supporting polygon is front-facing for any viewcell vertex, then process flow proceeds to step 489 to identify the viewcell vertex (V) as not a supporting viewcell vertex and to identify the candidate supporting polygon as not a supporting polygon.
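Once the sidedness orientation of step 485 fixes the candidate polygon's normal, the decision of step 487 reduces to a plane-side check against every viewcell vertex. The following is a minimal sketch with assumed names; the candidate supporting polygon is represented by a point on its plane and the normal implied by its sidedness orientation.

```python
# Minimal sketch of the orientation test of FIG. 4C (steps 480-491).

def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def is_supporting(candidate_point, candidate_normal, viewcell, eps=1e-9):
    """Step 487: the candidate is a supporting polygon iff its plane is
    not front-facing with respect to any viewcell vertex; vertices lying
    in the plane are admitted as not front-facing."""
    return all(dot(candidate_normal, sub(v, candidate_point)) <= eps
               for v in viewcell)
```

For example, a candidate polygon in the z=0 plane with normal (0, 0, 1) is supporting when every viewcell vertex has z <= 0, including vertices lying exactly in the plane.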
The test illustrated by the exemplary flowchart of FIG. 4C can also be employed to identify SEME-type supporting polygons.
FIG. 5A and FIG. 5B comprise a flowchart showing the method of constructing SEMV supporting swept triangles incident on an inside corner mesh silhouette vertex. This is additional detail of step 130 of FIG. 1. According to some embodiments, the process illustrated in FIGS. 5A and 5B is entered from step 130 in FIG. 1. In some embodiments, the process of constructing SEMV supporting swept triangles starts at step 510 upon encountering an inside corner of a first-order silhouette contour of a polygon mesh. This inside corner may be formed from a simple first-order silhouette contour in which two first-order silhouette edges share a vertex. If the normals of the silhouette edges forming the intersection (with normal direction assumed to be facing away from the interior of their component polygons) are facing each other, then the intersection is an inside corner vertex.
Alternatively, the inside corner may be a vertex of a compound silhouette contour formed by the intersection of a wedge with a first-order silhouette edge. In the latter case, the inside corner mesh silhouette vertex is called a compound silhouette vertex (CSV).
Process flow proceeds to step 515 to identify the supporting viewcell vertex (SVV) for one of the silhouette edges forming the vertex using, for example, the process disclosed in FIG. 4A. The identity of this vertex is stored as the variable SVV_START. Process flow proceeds to step 520, where the process of step 515 is repeated for the other edge of the inside corner, and the result is stored as the variable SVV_END.
If either supporting polygon of the inside corner is a quadrangle (generated in FIG. 4A, step 470), then the supporting polygon has two SVVs. In this special case, care must be taken to select, in steps 515 and 520, the initial (SVV_START) or terminal (SVV_END) viewcell vertex in the chain as the vertex that is farthest removed from the other end of the chain.
Process flow proceeds to step 525, where the variable CURRENT_POLYGON is set to identify the supporting polygon between the viewcell vertex SVV_START and the corresponding supported edge of the polygon mesh.
 Process flow proceeds to step 530, where an initial point for the sweep of the viewcell silhouette contour, which ultimately occurs between the viewcell vertices SVV_START and SVV_END, is set to be the viewcell vertex SVV_START and stored as the variable CVV, which holds the current vertex of the sweep.
 Process flow proceeds to decision step 535 to compare CVV to SVV_END to determine if the sweep should be terminated.
 If in decision step 535, it is determined that the current viewcell vertex being processed (CVV) is the same as the last vertex in the sweep (SVV_END), then process flow proceeds to step 540 and terminates. If both edges of the inside corner have the same supporting point on the viewcell then the corresponding SVME wedges intersect along a common edge and there is no swept triangle corresponding to the inside corner vertex. This situation would be identified on the initial execution of step 535 and the sweep would be terminated without producing a swept triangle.
 If, in decision step 535, it is determined that CVV is not SVV_END, then process flow proceeds to step 545 to set a variable CURRENT_ANGLE to a maximum value.
 Process flow proceeds to step 550, where a first viewcell edge sharing the viewcell vertex CVV is selected and referenced by the variable EDGE.
Process flow proceeds to decision step 555 to determine if the edge EDGE is a (from-point) silhouette edge with respect to the inside corner mesh silhouette vertex MV.
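For a convex viewcell, the from-point silhouette test of decision step 555 can be sketched as checking that exactly one of the two viewcell faces sharing the edge is front-facing for MV. The face representation and all names below are assumptions made for the sketch.

```python
# Hedged sketch of decision step 555: a viewcell edge is a from-point
# silhouette edge with respect to MV when one of its two adjacent
# viewcell faces is front-facing for MV and the other is backfacing.
# Faces are given as (point on face, outward normal) pairs.

def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def is_from_point_silhouette(face0, face1, mv):
    front = [dot(n, sub(mv, p)) > 0 for (p, n) in (face0, face1)]
    return front[0] != front[1]
```

For a unit-cube viewcell, the edge shared by the top face and the +y face is a from-MV silhouette edge for a point directly above the cube, but not for a point that sees both faces.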
If, in decision step 555, it is determined that EDGE is a from-MV silhouette edge, then process flow proceeds to step 560 to form the triangle between the point MV and the edge EDGE. This triangle is a candidate swept triangle between MV and the viewcell, but it must be compared to other swept triangle candidates that share the same viewcell edge.
Process flow proceeds to step 565, where the comparison of these other swept triangle candidates begins. In this regard, the angle between the current swept triangle candidate TRIANGLE and the CURRENT_POLYGON (supporting polygon) incident on MV is measured. Since TRIANGLE and CURRENT_POLYGON share a common edge, the angle can be measured at the edge, adopting the convention that the angle is the angle between the occluded sides of each polygon. The occluded side of a supporting polygon is the side that connects to the interior of the mesh polygons at the silhouette edge. The occluded side of the candidate swept triangle is the side that connects to the interior of the mesh polygons at the vertex MV. This angle is stored in the variable ANGLE.
Alternate embodiments are possible in which the orientation of the swept triangle and corresponding SEMV wedge relative to neighboring wedges is examined. All wedges are oriented surfaces having a "visible" side and an "invisible" side. For SVME wedges the visible side is the unoccluded side (visible on this side as a result of being not occluded by mesh polygons beyond the corresponding first-order silhouette edge). For SEMV wedges the visible side is the "contained" side (visible as a result of being contained in the viewcell when looking through and beyond the corresponding inside-corner first-order silhouette vertex).
In one embodiment the SWEPT_TRIANGLE is constructed from MV and the viewcell edges which produce a SWEPT_TRIANGLE that has a containment orientation that is consistent with the occlusion orientation of an adjacent SVME wedge and consistent with the containment orientation of neighboring SEMV wedges. SEMV wedges which do not have this consistent orientation do not contribute to the continuous, conservative linearized umbral event surface.
The orientation of an SEMV wedge is opposite to the orientation of the corresponding SEMV supporting polygon. This inversion occurs as a result of the edges of the SEMV supporting polygons being effectively "projected" through the inside-corner first-order silhouette vertex to form the corresponding SEMV wedge. (E.g., a particular SEMV supporting polygon which has the containment shaft between the viewcell and the inside-corner first-order silhouette vertex "below" the supporting polygon in the negative Y direction will produce a corresponding SEMV wedge which has its "contained" or visible side in the positive Y direction.)
 Process flow proceeds to decision step 570, to determine if this angle (ANGLE) is less than the current value of CURRENT_ANGLE.
If, in decision step 570, it is determined that the current value of ANGLE is less than the value of CURRENT_ANGLE, then TRIANGLE is a candidate swept triangle and process flow proceeds to process 5-1, which starts at step 580 in FIG. 5B. In step 580, the variable CURRENT_ANGLE is set to the value of ANGLE.
 Process flow proceeds to step 585 to set the variable SWEPT_EDGE to refer to the edge EDGE.
 Process flow proceeds to step 590 to set the variable SWEPT_TRIANGLE to reference the triangle TRIANGLE.
Process flow proceeds to decision step 591 to determine if any other edges sharing the current viewcell vertex CVV remain unprocessed.
If, in decision step 591, it is determined that unprocessed edges sharing the viewcell vertex remain, then process flow proceeds to process 5-3, which returns the process flow to step 575 (FIG. 5A), where the variable EDGE is set to reference the next viewcell edge sharing the vertex CVV. Process flow then returns to step 555 to generate the next candidate swept triangle and test it.
If, in decision step 591, it is determined that no other unprocessed viewcell edges share the vertex, then process flow proceeds to step 592, where the CURRENT_POLYGON variable is set to reference the triangle SWEPT_TRIANGLE.
Process flow proceeds to step 593 to output the swept triangle SWEPT_TRIANGLE. Process flow proceeds to step 594 to construct a SEMV wedge from the swept triangle. Further details of this step are disclosed in FIG. 6B.
Process flow then proceeds to process 5-4, which starts at step 594 (FIG. 5A), to advance to the next connected viewcell vertex. Process flow then returns to step 535.
If, in decision step 555, it is determined that the viewcell edge is not a from-point silhouette edge with respect to the point MV, then process flow proceeds to process 5-2, which starts at step 591 (FIG. 5B), to select a remaining viewcell edge for processing.
FIG. 5C is a flow diagram showing a test for determining if a polygon formed between an inside-corner first-order silhouette vertex and a viewcell edge is a supporting polygon. Alternate embodiments are possible in which SEMV supporting polygons are identified by considering both the "sidedness orientation" of the candidate supporting polygon (relative to the interior of the polygon mesh) and the orientation of the candidate supporting polygon relative to the viewcell vertices.
 In one embodiment, mesh polygons are all assumed to be “outside” polygons which have their normal vector locally oriented away from the “inside” of the region contained by the polygon mesh. In such embodiments, all mesh polygons of a polygon mesh consistently have this same “sidedness” orientation.
 As previously described, a polygon is a planar structure which can have two sides, corresponding to the two sides of the plane containing the polygon. Exemplary embodiments include polygon meshes which are manifold or closed. Manifold meshes divide the volume of space in which they are embedded into an inside and an outside. In computer graphics, it is useful to employ manifold meshes in which the normal vector of each polygon in the mesh is locally oriented to face away from the inside of this enclosed volume. This can be called the “outside” side of the polygon. The opposite side can be called the “inside” side of the polygon. If all polygons have this consistent sidedness orientation in a mesh, then no inside side of a polygon should ever be visible from the outside.
In exemplary embodiments, it can be established that polygons of a mesh have the same sidedness orientation by examining the vertex orderings of adjacent polygons, i.e., polygons which share an edge. (See Schneider, Philip J., and Eberly, David H., "Geometric Tools for Computer Graphics," Morgan Kaufmann, 2003, pp. 342-345, the entire contents of which are incorporated herein by reference.) Let F_{0} and F_{1} be two adjacent polygons sharing an edge comprised of two vertices V_{1} and V_{2}. If vertices V_{1} and V_{2} occur in the order V_{1} followed by V_{2} for polygon F_{0}, then they must occur in polygon F_{1} in the order V_{2} followed by V_{1}. Adjacent polygons in which shared edges have this ordering are said to have a consistent vertex ordering. Polygons with a consistent vertex ordering have the same sidedness orientation.
In one embodiment, a candidate SEMV supporting polygon for an inside-corner first-order silhouette vertex is formed between a viewcell edge and the inside-corner first-order silhouette vertex. The candidate supporting polygon is given the same sidedness orientation as a backfacing mesh polygon sharing a first-order silhouette edge of the inside-corner first-order silhouette vertex. (Using this consistent sidedness orientation, for example, a person walking across the first-order silhouette edge on the "outside" surface of the backfacing mesh polygon would encounter the "outside" surface of the candidate supporting polygon.) The orientation of the plane of each candidate supporting polygon is then examined relative to the viewcell vertices. If the plane of the candidate supporting polygon is not front-facing with respect to each viewcell vertex, then the viewcell edge forming the candidate supporting polygon is a supporting viewcell edge, and the candidate supporting polygon is a supporting polygon.
According to some embodiments, the process illustrated in FIG. 5C is entered from step 595. In step 595 a candidate supporting polygon is formed between the inside-corner first-order silhouette vertex and a viewcell edge (E). Process flow proceeds to step 596 to set the sidedness orientation of the candidate supporting polygon formed in step 595 to be the same as the backfacing component polygon sharing a first-order silhouette edge of the inside-corner first-order silhouette vertex. In exemplary embodiments, the sidedness orientation of the SEMV supporting polygon can be set to be consistent with a previously determined adjacent SVME or SEMV supporting polygon. Because the SEMV supporting polygon shares an edge with these adjacent polygons, the sidedness orientation can be set by ensuring that the adjacent polygons have consistent vertex ordering.
Process flow proceeds to step 597 to determine if the candidate supporting polygon is not front-facing for each of the viewcell vertices. If, in decision step 597, it is determined that the candidate supporting polygon is not front-facing with respect to each viewcell vertex, then process flow proceeds to step 599 to identify the viewcell edge (E) as a supporting viewcell edge and to identify the candidate supporting polygon as a supporting polygon.
If, in decision step 597, it is determined that the candidate supporting polygon is front-facing for any viewcell vertex, then process flow proceeds to step 598 to identify the viewcell edge (E) as not a supporting viewcell edge and to identify the candidate supporting polygon as not a supporting polygon.

FIG. 6A Flowchart Showing a Method of Constructing SVME and SEME Wedges from the Corresponding SVME and SEME Supporting Polygons 
FIG. 6A is a flowchart showing the process of constructing a SVME wedge from the corresponding supporting polygon. This provides additional detail to step 116 in FIG. 1. According to some embodiments, the process illustrated in FIG. 6A is entered from step 116 in FIG. 1. In some embodiments, the process to construct SVME and SEME wedges from corresponding SVME and SEME supporting polygons starts at step 610, where the connecting edges of the supporting polygon are identified as those edges which have one vertex that is a vertex of the viewcell and another vertex that is a vertex of the polygon mesh.
Process flow proceeds to step 615 to construct rays from the connecting edges by extending the connecting edges in a semi-infinite fashion away from the viewcell, starting at the corresponding vertices of the supported silhouette edge. If the supporting polygon is a triangle, then the two edges that connect the viewcell and the silhouette edge are extended. If the supporting polygon is a quadrangle (from FIG. 4A, step 470), then the diagonals connecting the viewcell edge and silhouette edge can be extended. Extending the diagonals produces a larger wedge that actually reflects the visibility from the viewcell edge through the silhouette edge. Process flow proceeds to step 620 to connect the extended edges to the corresponding (supported) polygon mesh silhouette edge to form the semi-infinite SVME (or SEME) wedges.
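Steps 610-620 can be sketched as follows for a triangular SVME supporting polygon. The semi-infinite wedge is approximated by a large finite cutoff distance, which is an implementation assumption not taken from the source; all names are illustrative.

```python
import math

# Sketch of FIG. 6A, steps 610-620: the two connecting edges of an SVME
# supporting triangle are extended away from the viewcell through the
# supported silhouette edge, forming a (truncated) semi-infinite wedge.

def sub(u, v): return [u[i] - v[i] for i in range(3)]
def norm(u):
    l = math.sqrt(sum(x * x for x in u))
    return [x / l for x in u]

def svme_wedge(supporting_vertex, silhouette_edge, extent=1e6):
    """Return the wedge as four vertices: the silhouette edge endpoints
    followed by the two extended points ('extent' is the assumed cutoff)."""
    e0, e1 = silhouette_edge
    r0 = norm(sub(e0, supporting_vertex))   # ray directions away from the viewcell
    r1 = norm(sub(e1, supporting_vertex))
    return [list(e0), list(e1),
            [e1[i] + extent * r1[i] for i in range(3)],
            [e0[i] + extent * r0[i] for i in range(3)]]
```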

FIG. 6B Flowchart Showing a Method of Constructing SEMV Wedges from the Corresponding SEMV Supporting Polygons 
FIG. 6B is a flowchart showing the process of constructing a SEMV wedge from the corresponding swept triangle. This provides additional detail to step 135 in FIG. 1. According to some embodiments, the process illustrated in FIG. 6B is entered from step 135 in FIG. 1.  In some embodiments, the process of constructing a SEMV wedge from the corresponding swept triangle starts at step 630, where the connecting edges of the swept triangle are identified as those edges which have one vertex that is a vertex of the viewcell and another vertex that is a vertex of the polygon mesh.
Process flow proceeds to step 635 to construct rays from the connecting edges by extending these edges in a semi-infinite fashion away from the viewcell, starting at the corresponding mesh silhouette vertex.
Process flow proceeds to step 640 to connect the extended edges to the corresponding polygon mesh inside-corner silhouette vertex to form the semi-infinite wedge.
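The SEMV case differs from the SVME case in that both extended rays emanate from the single inside-corner silhouette vertex. A minimal sketch under the same assumptions (a finite `far` cutoff standing in for the semi-infinite extension, illustrative names only):

```python
# Hedged sketch of steps 630-640: extend the swept triangle's connecting
# edges through the inside-corner mesh silhouette vertex.

def semv_wedge(viewcell_v0, viewcell_v1, corner, far=1e6):
    """Triangle approximating the semi-infinite SEMV wedge: the inside-corner
    vertex plus the two connecting edges extended through it (step 640)."""
    def extend(src):
        # Ray from src through the corner, continued past the corner.
        d = [corner[i] - src[i] for i in range(3)]
        length = sum(c * c for c in d) ** 0.5
        return [corner[i] + far * d[i] / length for i in range(3)]
    return [list(corner), extend(viewcell_v0), extend(viewcell_v1)]
```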
The processes of
FIGS. 6A and 6B describe the construction of first-order wedges that are only restricted by their intersection with adjacent wedges on the silhouette contour. These may be called the initial wedges.  According to some embodiments, in subsequent processing, for example in the construction of first-order visibility maps, these initial wedges may later be intersected with mesh polygons and with other wedges. Initial wedges may also be explicitly intersected with other wedges to form umbral boundary polygons (UBPs), which bound the conservative from-viewcell polyhedral aggregate umbral volumes that contain (conservatively) occluded regions.

FIG. 7A is a diagram showing a convex viewcell having vertices V_{1}-V_{8} and a non-convex polygon mesh M1. First-order, from-viewcell silhouette edges of the mesh are shown in bold lines. Two of the first-order silhouette edges are labeled A and B. This is a perspective view looking in a general direction from the viewcell toward the polygon mesh.  First-order silhouette edge A has one component polygon that is frontfacing for at least one viewcell vertex. This component polygon is the triangle formed by edge A and the mesh vertex labeled MV1. The other component polygon for edge A is the triangle formed by edge A and the mesh vertex MV2, which is shown in FIG. 7B1. This component polygon is backfacing for all vertices V_{1}-V_{8} of the viewcell. Note that these two component polygons sharing edge A are backfacing with respect to each other, making the edge A a locally supporting edge of the polygon mesh M1 and a first-order silhouette edge. It can be determined that the two component polygons sharing edge A are backfacing by selecting a first component polygon, e.g., the triangle formed by edge A and vertex MV2, and determining if a vertex of the other component polygon which is not part of the shared edge, e.g., vertex MV1 in this case, is on the front side or the back side of the plane containing the first polygon. If the unshared vertex is on the back side of the first component polygon's plane, then the two component polygons are backfacing, as in this case. This determination can be made using the plane equation as described in the definition of “backfacing” provided in the glossary of terms. In some embodiments, the process illustrated in
FIG. 3 is repeated for each edge included in polygon mesh M1 to identify each first-order silhouette edge of polygon mesh M1.  FIG. 7B1 is a diagram showing the same polygon mesh object M1 as
FIG. 7A, but from a perspective view looking in a general direction from the polygon mesh toward the viewcell. From this view, edge B has one component triangle (formed by edge B and mesh vertex MV3) that is backfacing for all vertices V_{1}-V_{8} of the viewcell. As illustrated in FIG. 7A, edge B has another component triangle, formed by edge B and mesh vertex MV1, that is frontfacing to at least one viewcell vertex. Further, these two component polygons sharing edge B are backfacing with respect to each other, making the edge B a locally supporting edge of the polygon mesh M1 and a first-order silhouette edge.  FIG. 7B2 shows a different polygon mesh than the one depicted in FIG. 7B1. This polygon mesh is labeled M3. One edge of polygon mesh M3 is shown bolded and labeled I. This edge has one component polygon which is a triangle labeled T1, and another component polygon which is a triangle labeled T2.
 Component polygon T1 is backfacing for all vertices of the viewcell labeled VIEWCELL since all of the viewcell vertices are on the back side of the plane containing triangle T1.
Component triangle T2 has at least one viewcell vertex that is on the front side of the plane containing triangle T2; that is, T2 is frontfacing with respect to at least one viewcell vertex.
Consequently, component triangles T1 and T2 meet two of the criteria required to make their shared edge a first-order silhouette edge with respect to the viewcell.
However, the shared edge I is not a first-order silhouette edge because the two component triangles are not backfacing with respect to each other. This can be determined by selecting triangle T1 and identifying a vertex of the other component triangle (T2) that is not a vertex of the shared edge. In this case the vertex is P2. The vertex P2 is on the front side of the plane containing triangle T1. This fact can be established using the plane equation of triangle T1 as described in the glossary of terms description for “backfacing”.
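The three criteria discussed above (one component polygon backfacing for all viewcell vertices, the other frontfacing for at least one, and the two component polygons mutually backfacing) can be sketched as follows. This is an illustrative reconstruction; the helper names, vertex-tuple representation, and epsilon tolerance are assumptions, not the embodiment's actual code:

```python
# Hedged sketch of the first-order silhouette edge test applied to two
# component triangles (given in CCW vertex order) sharing an edge.

def _plane(tri):
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    n = (uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx)
    return n, -(n[0]*ax + n[1]*ay + n[2]*az)

def _side(tri, p):
    """Signed value of the triangle's plane equation at point p."""
    n, d = _plane(tri)
    return n[0]*p[0] + n[1]*p[1] + n[2]*p[2] + d

def is_first_order_silhouette(tri1, tri2, viewcell, eps=1e-9):
    s1 = [_side(tri1, v) for v in viewcell]
    s2 = [_side(tri2, v) for v in viewcell]
    # One component polygon backfacing for ALL viewcell vertices,
    # the other frontfacing for AT LEAST ONE:
    criteria_ab = (all(s <= eps for s in s1) and any(s > eps for s in s2)) or \
                  (all(s <= eps for s in s2) and any(s > eps for s in s1))
    # Mutually backfacing: the vertex of tri2 not on the shared edge must lie
    # on the back side of tri1's plane (the edge-I case of FIG. 7B2 fails here).
    unshared = next(v for v in tri2 if v not in tri1)
    return criteria_ab and _side(tri1, unshared) < -eps
```

In the sketch, a convex "ridge" configuration passes all three tests, while a concave "valley" configuration (like edge I) fails the mutual-backfacing test.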
Since T1 and T2 are not backfacing with respect to each other, they would, in one embodiment, fail the decision test shown in the exemplary flowchart of
FIG. 3 at steps 345 or 330.  FIG. 7C1 is a diagram showing the supporting polygons for the first-order silhouette edges A and B. The supporting polygon for first-order silhouette edge A is labeled SPA, and the supporting polygon for the first-order silhouette edge B is labeled SPB. The corresponding supporting viewcell vertices (SVVs) are labeled, respectively, SVVA and SVVB, which correspond to viewcell vertices V_{4} and V_{8}, respectively. This is a perspective view looking in a general direction from viewcell toward mesh object.
FIG. 7C2 is a diagram showing the supporting polygons SPA and SPB for the first-order silhouette edges A and B, respectively, and the corresponding source-vertex mesh-edge (SVME) wedges. The supporting polygon for first-order silhouette edge A is labeled SPA, and the supporting polygon for the first-order silhouette edge B is labeled SPB. The corresponding supporting viewcell vertices (SVVs) are labeled, respectively, SVVA and SVVB. The SVME wedge formed by extension of supporting polygon SPA is labeled SVME WA. The SVME wedge formed by extension of supporting polygon SPB is labeled SVME WB. According to some embodiments, the SVME wedges WA and WB are constructed according to the processes illustrated in
FIGS. 1, 4, and 6A. This is a perspective view looking in a general direction from viewcell toward mesh object.  FIG. 7C3 is a diagram showing only the SVME wedges formed from the extension of the edges of the corresponding supporting polygons. The SVME wedge formed by extension of supporting polygon SPA is labeled SVME WA. The SVME wedge formed by extension of supporting polygon SPB is labeled SVME WB. The corresponding supporting viewcell vertices (SVVs) are labeled, respectively, SVVA and SVVB. This is a perspective view looking in a general direction from viewcell toward mesh object.
Although FIGS. 7C1-7C3 show wedges incident on first-order silhouette edges A and B, further embodiments construct wedges for each first-order silhouette edge included in the first-order silhouette contour included in mesh M1 according to the processes illustrated in FIGS. 1 and 3-6B.
 FIG. 7D1 is a diagram showing the same objects as FIG. 7C1, but from a perspective view looking in a general direction from mesh object M1 toward the viewcell.
FIG. 7D2 is a diagram showing the same objects as FIG. 7C2, but from a perspective view looking in a general direction from mesh object M1 toward the viewcell.
FIG. 7D3 is a diagram showing the same objects as FIG. 7C3, but from a perspective view looking in a general direction from mesh object M1 toward the viewcell.
FIG. 7D4 shows the same polygon mesh and viewcell as FIG. 7D3, from the same perspective. FIG. 7D4 shows two pivoted wedges intersecting at an outside-corner vertex of a first-order silhouette contour.
One of the pivoted wedges is labeled SVME WA, which is also seen in FIG. 7D3. In FIG. 7D4, an additional pivoted wedge SVME WC is shown. This wedge is supported by the first-order silhouette edge labeled C and the supporting viewcell vertex labeled SVVC.
The two pivoted wedges SVME WA and SVME WC share an outside-corner vertex of a first-order silhouette edge. This vertex is labeled OCV. As prescribed in steps 125 and 140 of the exemplary flowchart of
FIG. 1, in one embodiment, pivoted polygons which share an outside-corner vertex are intersected with each other.  Pivoted polygons which share an outside-corner silhouette vertex and which pivot to the same supporting viewcell vertex will intersect each other exactly at a shared edge. In this case, the shared edge is a ray extending from the shared vertex and lying on the line formed by the supporting viewcell vertex and the shared outside-corner vertex. In this special case, the two pivoted wedges restrict each other on the shared edge.
(Pivoted polygons which share an inside-corner silhouette vertex and which pivot to the same supporting viewcell vertex also intersect each other exactly at the shared edge. In this case, no swept supporting polygon exists and the corresponding swept wedge is not generated.)
In the general case, pivoted wedges sharing an outside-corner vertex can pivot to different supporting viewcell vertices. In FIG. 7D4, wedge SVME WA is supported by viewcell vertex V_{4}, while SVME WC is supported by SVVC. In this case, the intersection of wedge SVME WA and SVME WC is the line segment labeled I. Line segment I divides wedge SVME WC into two parts. The proximal part of the subdivided wedge SVME WC is bounded by line segment I and the vertex labeled VE. A portion of this proximal part is occluded in this view.
 This proximal part of wedge SVME WC is completely seen in FIG. 7D5, which shows the same objects as FIG. 7D4, from a different perspective. This proximal part is labeled SVME WCR in FIG. 7D5.
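The restriction segment I described above lies along the line where the planes of the two pivoted wedges intersect. A small sketch of computing that line follows; representing each wedge by its plane in the form n·x = h is an illustrative assumption:

```python
# Hedged sketch: the line (cf. segment I in FIG. 7D4) along which two
# non-parallel wedge planes n1·x = h1 and n2·x = h2 intersect.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def plane_plane_line(n1, h1, n2, h2, eps=1e-12):
    """Return (direction, point) of the intersection line, or None when the
    planes are parallel. Uses the standard two-plane closed-form point."""
    u = cross(n1, n2)                      # line direction
    uu = u[0]*u[0] + u[1]*u[1] + u[2]*u[2]
    if uu < eps:
        return None
    a = cross(n2, u)                       # weighted by h1
    b = cross(u, n1)                       # weighted by h2
    p = tuple((h1*a[i] + h2*b[i]) / uu for i in range(3))
    return u, p
```

Clipping one wedge against this line (keeping the proximal side) yields the restricted wedge such as SVME WCR.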
In general, the intersection of two pivoted wedges sharing an outside-corner vertex and pivoting to different supporting viewcell vertices will result in one of the wedges being restricted into a proximal portion [e.g., SVME WCR (indicating wedge C restricted)] and a distal portion. Only the proximal portion of such a locally restricted wedge is actually a from-viewcell umbral event surface. [Only this proximal portion is a polygon of the corresponding polyhedral aggregate umbra (PAU).] The distal portion, beyond the restriction and in a direction away from the viewcell, does not represent a from-viewcell umbral event surface, since it is entirely on the unoccluded side of the adjacent wedge. In the example shown in FIG. 7D4 and FIG. 7D5, mesh polygons on both the unoccluded and the occluded side of the distal portion of SVME WC are actually unoccluded from viewcell vertex SVVA, and are therefore not occluded from the viewcell.
This local restriction of a pivoted wedge by an adjacent pivoted wedge sharing an outside-corner silhouette vertex in some instances produces a substantially smaller wedge. This smaller, locally restricted wedge can require substantially less processing when it is submitted for the determination of on-wedge visibility, since it has an additional containment boundary that limits processing (e.g., at step 1515 in one embodiment using the 2D mesh traversal process shown in the exemplary flowchart of FIG. 15).  The local restriction process can therefore accelerate the determination of on-wedge visibility. Alternate embodiments which do not use this local restriction process can also be employed. Any wedges that have not been restricted by other wedges still intersect mesh polygons to produce discontinuity mesh segments. The determination of whether such a discontinuity segment is actually a from-viewcell umbral boundary is then made using the modified point-in-polyhedron test described in the exemplary flowchart of FIG. 25. This test accommodates both locally restricted and unrestricted wedges.  The preceding discussion assumes that the wedges employed are first-order wedges. Higher-order wedges are subjected to wedge-wedge intersection (restriction by other wedges) as described in one embodiment, for example, in step 2155 of the exemplary flowchart showing a method for determining if a DM_SEG is an actual from-viewcell occlusion boundary segment.
FIG. 8A1 is a diagram showing a swept triangle (a SEMV supporting polygon) on the inside-corner vertex shared by first-order silhouette edges labeled A and B of mesh object M1. The swept triangle is labeled ST_AB. In some embodiments, the swept triangle ST_AB is generated using the sweep process shown in
FIG. 5A and FIG. 5B, with the sweep occurring from SVVA (V_{4}) to SVVB (V_{8}) and anchored on the inside-corner silhouette vertex labeled ICSV. In this case, the inside-corner mesh silhouette vertex is a simple inside-corner of the first-order silhouette contour (i.e., the contour formed by all the first-order silhouette edges of mesh object M1), formed where two first-order silhouette edges share a vertex. This is a perspective view looking in a general direction from viewcell toward mesh object, similar to the view shown in FIG. 7A and FIG. 7C1.  FIG. 8A2 is a diagram showing a swept triangle (a SEMV supporting polygon) on the inside-corner vertex shared by first-order silhouette edges labeled A and B. The swept triangle is labeled ST_AB and is generated, according to some embodiments, using the sweep process shown in
FIG. 5A and FIG. 5B, with the sweep occurring from SVVA (V_{4}) to SVVB (V_{8}) and anchored on the inside-corner silhouette vertex labeled ICSV. In this case, the inside-corner mesh silhouette vertex is a simple inside-corner of the first-order silhouette contour, formed where two first-order silhouette edges share a vertex. The corresponding SEMV wedge, formed by extension of the swept triangle, is labeled SEMV WAB. According to some embodiments, the SEMV wedge WAB is formed according to the process illustrated in FIG. 6B. In this regard, the edges of the polygon ST_AB are extended through the inside-corner vertex to form SEMV WAB. This is a perspective view looking in a general direction from viewcell toward mesh object, similar to the view shown in FIG. 7A and FIG. 7C2.  FIG. 8A3 is a diagram showing the inside-corner silhouette vertex labeled ICSV. The corresponding SEMV wedge, formed by extension of the swept triangle, is labeled SEMV WAB. This is a perspective view looking in a general direction from viewcell toward mesh object, similar to the view shown in
FIG. 7A and FIG. 7C3.  FIG. 8A4 is a diagram showing the first-order conservative linearized umbral event surface (CLUES) incident on the silhouette edges A and B. As illustrated in FIG. 8A4, a continuous umbral event surface is comprised of the two SVME wedges (labeled SVME WA and SVME WB) and, in this case, the single SEMV wedge (labeled SEMV WAB). The corresponding supporting viewcell vertices SVVA and SVVB are labeled, as is the inside-corner first-order silhouette vertex labeled ICSV. This is a perspective view looking in a general direction from viewcell toward mesh object. As illustrated in FIG. 8A4, the CLUES comprised of SVME WA, SEMV WAB, and SVME WB form an occlusion boundary, where the unoccluded side of the boundary is in the direction of arrow U1, and the occluded side is in the direction of arrow O1.
 FIG. 8B1 is a diagram showing the same objects as FIG. 8A1, but from a perspective view looking in a general direction from mesh object M1 toward the viewcell.
 FIG. 8B2 is a diagram showing the same objects as FIG. 8A2, but from a perspective view looking in a general direction from mesh object toward the viewcell.
 FIG. 8B3 is a diagram showing the same objects as FIG. 8A3, but from a perspective view looking in a general direction from mesh object M1 toward the viewcell.
 FIG. 8B4 is a diagram showing the same objects as FIG. 8A4, but from a perspective view looking in a general direction from mesh object M1 toward the viewcell.

FIG. 8C, the same as FIG. 8A4, is a diagram showing the first-order conservative linearized umbral event surface (CLUES) incident on the silhouette edges A and B. This continuous umbral event surface is comprised of the two SVME wedges (labeled SVME WA and SVME WB) and, in this case, the single SEMV wedge (labeled SEMV WAB). This is a perspective view looking in a general direction from viewcell toward mesh object.
FIG. 9A is a diagram showing the umbral event surfaces incident on silhouette edges A and B constructed by the prior-art approach of the linearized antipenumbra described by Teller (1992). In this prior-art method, which was used only for the limited problem of portal-sequence visibility, the umbral event surface is constructed entirely from the planes of the supporting polygons. Portions of these supporting planes incident on silhouette edges A and B are shown and labeled WPLANE_A and WPLANE_B. These planes intersect at line L1 to form a continuous visibility event surface incident on silhouette edges A and B.  In Teller's prior-art method of linearized antipenumbra, Teller (1992), visibility event surfaces are approximated by intersecting only the planes of supporting polygons incident on portal edges and supported by source vertices, wherein the source is an earlier portal in a sequence of portals. These supporting polygons correspond to the SVME supporting polygons (using the nomenclature of the present embodiments). Teller's method does not employ the corresponding SEMV supporting polygons in the construction of umbral event surfaces; only the planes of the SVME supporting polygons are used.
In contrast, SVME wedges, as constructed by the present embodiments, are semi-infinite polygons, restricted laterally by the semi-infinite extension of the supporting polygon edges, which are rays. The SVME wedges are also restricted at the corresponding first-order silhouette edge. Teller “wedges” are actually planes that have no lateral restriction. In the present analysis, “Teller Wedges” are constructed by extending the planes of adjacent SVME wedges at an inside corner until the planes intersect.
In the following analysis, we show that by using visibility event surfaces constructed from both SVME and SEMV supporting polygons, the present method can provide a significantly more precise from-region visibility solution than by using Teller's approach, in which the planes of only one type of supporting polygon are intersected.
It must be emphasized that the method of Teller (1992) is designed only to provide a solution to the restricted visibility problem of visibility through a sequence of polygonal portals. Teller's method does not identify silhouette edges on which to construct visibility event surfaces, because in Teller's method, the edges supporting visibility event surfaces are limited to the edges of the portals. Since Teller's method does not apply the intersecting-planes method to construct visibility event surfaces on silhouette edges of general polygon meshes, the following analysis amounts to a theoretical comparison of Teller's intersecting-planes method, if it were applied to the general problem of from-region visibility in polyhedral environments, versus the present method of pivot-and-sweep visibility event surface construction, which is actually used in the more general visibility problem.

FIG. 9B is a diagram showing the same objects as FIG. 9A, but from a perspective view looking in a general direction from mesh object toward viewcell.
FIG. 9C and FIG. 9D are diagrams showing the more precise umbral event surface produced by the present method as compared to the umbral event surface that would be produced by the prior-art method of intersecting supporting planes. In FIG. 9C and FIG. 9D, the umbral event surface formed by the present method of pivot-and-sweep construction of wedges is shown superimposed on the umbral event surface formed by the prior-art method of intersecting supporting planes. From the perspective view of FIG. 9D, looking in a general direction from viewcell toward mesh object, it can be seen that the present method produces a larger, more precise, umbra volume than the prior-art method. The addition of the SEMV wedge generated from the swept triangle (SEMV supporting polygon) produces a larger conservative umbra volume (and hence a more precise potentially visible set) than the intersection of the supporting planes alone. Unlike the prior-art method of intersecting planes, the present method of sweeping the viewcell silhouette contour can account for the effect of containment on the viewcell surface on the visibility at inside-corner silhouette vertices. Consequently, for any silhouette contour with inside-corner vertices in which adjacent supporting polygons pivot to different vertices of the viewcell, the present method will produce a more precise result than the intersecting-planes approach.
FIG. 9D also shows that the deviation between the umbral event surfaces produced by the present pivot-and-sweep method and the prior-art intersecting-planes method tends to increase with distance from the supported silhouette edges and vertex. Consequently, for most inside-corner silhouette vertices, the precision of the present method can be much higher than the prior-art method of using intersecting planes.
FIG. 9D is a diagram showing the same objects as FIG. 9C, but from a perspective view looking in a general direction from mesh object toward viewcell.  Flipbook Views of Identifying Conservative Supporting Polygons and Constructing Corresponding Wedges.
Subsets of
FIGS. 7-9, when viewed in specific sequences, provide flipbook views of the method of identifying conservative supporting polygons and constructing the corresponding wedges. These sequences are listed below:  Pivoted supporting polygon & wedge: View generally from behind viewcell: 7A, 7C, 7C1, 7C2.
 Pivoted supporting polygon & wedge: View generally from in front of viewcell: 7B, 7D, 7D1, 7D2.
 Swept supporting polygon & wedge: View generally from behind viewcell: 7A, 8A, 8A1, 8A2 (8A3 showing combination of pivoted wedges and swept wedges).
 Swept supporting polygon & wedge: View generally from in front of viewcell: 7B, 8B, 8B1, 8B2 (8B3 showing combination of pivoted wedges and swept wedges).

FIG. 10A is a diagram showing the same mesh polygon and viewcell as FIGS. 9A and 9B, but in a perspective view looking in a general direction from beneath the polygon mesh. FIG. 10A shows the same first-order visibility event surfaces (wedges) as shown in FIG. 9C. Specifically, SVME WA, incident on first-order silhouette edge A, SVME WB, incident on first-order silhouette edge B, and SEMV WAB are shown.  Two additional first-order SVME wedges, W4 and W5, are also shown. The supporting viewcell vertex for wedges W4 and W5 is V_{3}. The intersection of these wedges is shown. Wedges intersect each other and other mesh polygons to form umbra boundary polygons (UBPs). These UBPs form the surface of first-order polyhedral aggregate umbrae (PAU). The volume of space enclosed by the PAU is first-order occluded from the corresponding viewcell. The UBPs corresponding to the intersections of the wedges are not explicitly shown in
FIG. 10A, but can be inferred from the intersection lines that are shown. Some of the wedges that would form the complete PAU are omitted so the interior structure of part of the first-order PAU can be seen (e.g., intersection of wedges W4, W5, SVME WA, SEMV WAB, and SVME WB).
FIG. 10B is a view of the same polygon mesh (M1) as shown in FIG. 10A. In FIG. 10B, mesh M1 and the viewcell are viewed from a perspective similar to that of FIG. 8C, looking generally at the “top” side of mesh M1, containing the inside-corner mesh edge. This view is very different from the view of M1 and the viewcell given in FIG. 10A. Note the same edge of M1 is labeled E in both figures and is on the “bottom” of mesh M1. Edge A and edge B are also labeled in both figures.  In
FIG. 10A, the occluded side of the wedges is shown.  In
FIG. 10B, the unoccluded side of the corresponding UBPs is shown.
FIG. 10B shows 5 UBPs that are formed by intersecting the corresponding wedges with other wedges.  UBPA is formed by the intersection of the corresponding wedge (SVME WA) with wedge W5 (shown in
FIG. 10A). UBPA is also restricted by the intersection of SVME WA with wedge W4, shown in FIG. 10A. W4 is completely hidden in FIG. 10B, but the intersection of W4 and wedge SVME WA is shown as the edge labeled F in FIG. 10B. Edge F is an edge of UBPA. Additionally, UBPA shares a common edge with UBPAB (which is derived from SEMV WAB, shown in FIG. 10A).  UBPAB is formed by the intersection of SEMV WAB with wedge W4 and with the wedge of UBPD. UBPAB shares a common edge with both UBPA and UBPB as a consequence of the sweep construction of the corresponding wedge SEMV WAB. UBPAB is also restricted by its intersection with the pivoted wedge corresponding to UBPD (which is supported by mesh edge D).
UBP5 is formed by the intersection of the corresponding pivoted wedge (W5, shown in
FIG. 10A, which has corresponding supporting viewcell vertex V_{3}) with W4 and with SVME WA.  UBPD is formed by the intersection of the wedge incident on first-order silhouette edge D (wedge not shown, having supporting viewcell vertex V_{8}) with wedges SVME WB, SEMV WAB, and W4, as well as the wedge supported by edge E (wedge not shown).
The UBPs form the boundary of the PAU for M1. Not all of the UBPs forming the PAU of M1 are seen in the view given in
FIG. 10B.
FIG. 10B illustrates wedges which are fully restricted by other wedges. Embodiments using such fully restricted wedges (e.g., the output-sensitive construction of PAU in the exemplary flowchart of FIG. 26) are possible. Additionally, embodiments using partially restricted wedges (e.g., SVME wedges intersecting each other at outside-corner first-order silhouette edges) are possible, such as may optionally be employed in the output-sensitive construction of visibility maps shown in the exemplary flowchart of FIG. 20A, which employs SVME wedges that may be optionally locally restricted by intersecting adjacent SVME wedges as described in step 140 of the exemplary flowchart shown in FIG. 1. Additionally, the wedges can be used without such local wedge-wedge restriction, because the described methods of determining if an intersection of a wedge with a mesh polygon is actually an occlusion boundary (employing the modified point-in-polyhedron test) do not require the a priori local or global restriction of a wedge by other wedges prior to making this determination.
FIG. 11A is a diagram showing first-order visibility event surfaces (wedges) generated by the present method in the case of a compound silhouette contour. In this case, a SVME wedge (WEDGE1) is incident on (supported by) first-order silhouette edge A1. WEDGE1 intersects a first-order silhouette edge labeled B1. As discussed in FIG. 2A, WEDGE1 divides first-order silhouette edge B1 into an occluded side (B1O) and an unoccluded side (B1V). This view is identical to that of FIG. 2A.  The intersection of the first-order wedge WEDGE1 with the first-order silhouette edge is a compound silhouette vertex labeled CSV. The compound silhouette vertex corresponds to an inside corner of a compound silhouette contour. Using the terminology of catastrophe theory, the CSV corresponds to a t-vertex of the resulting manifold. Catastrophe theory includes the study of point singularities (e.g., CSVs or t-vertices) and contour singularities (e.g., a first-order silhouette edge) on manifold surfaces (e.g., manifold mesh).
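The division of silhouette edge B1 into B1V and B1O at the CSV can be sketched as a segment-plane crossing test; representing the wedge by its plane n·x = h and the names used here are illustrative assumptions:

```python
# Hedged sketch: locate the compound silhouette vertex (CSV) where a wedge's
# plane (n·x = h) crosses a first-order silhouette edge a-b, splitting the
# edge into visible and occluded sub-segments.

def split_at_csv(a, b, n, h, eps=1e-12):
    """Return (t, csv) for the crossing point, or None if the edge is
    parallel to the wedge plane or the crossing lies outside the edge."""
    denom = sum(n[i] * (b[i] - a[i]) for i in range(3))
    if abs(denom) < eps:
        return None
    t = (h - sum(n[i] * a[i] for i in range(3))) / denom
    if not (0.0 <= t <= 1.0):
        return None
    csv = tuple(a[i] + t * (b[i] - a[i]) for i in range(3))
    return t, csv
```

Which of the two sub-segments is the unoccluded portion (B1V) would then be decided by an on-wedge visibility test against the occluding surface, which this sketch does not include.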
WEDGE2 is a first-order visibility event surface (a SVME wedge) that is supported by (incident on) the segment B1V, which is the visible portion of the first-order silhouette edge B1.
Thus WEDGE1 and WEDGE2 are both SVME wedges that intersect at the point CSV. Since WEDGE1 and WEDGE2 are constructed by the pivot process (
FIG. 4A and FIG. 6A) of the pivot-and-sweep method using different supporting viewcell vertices (SVV1 and SVV2, respectively), the two wedges do not join on-edge to form a continuous umbral visibility event surface.  The sweep process (
FIG. 5A, FIG. 5B, and FIG. 6B) of the present pivot-and-sweep method is used to construct SEMV wedges (SEMV WA and SEMV WB), which join WEDGE1 and WEDGE2 into a continuous umbral visibility event surface. The wedge SEMV WA is formed from the supporting SEMV triangle generated between CSV, SVV1, and the intervening vertex IVV1 on the supporting viewcell silhouette contour (SVSC). The extension of the two edges of this supporting triangle through the point CSV forms the semi-infinite wedge SEMV WA. Similarly, the wedge SEMV WB is formed from the supporting SEMV (swept) triangle generated between CSV, SVV2, and the intervening vertex IVV1 on the supporting viewcell silhouette contour (SVSC). The extension of the two edges of this supporting triangle through the point CSV forms the semi-infinite wedge SEMV WB.
FIG. 11A shows the occluded side of WEDGE1 (arrow O1) and the unoccluded (fromviewcell, firstorder visible) side of WEDGE2 (arrows U1 and U2). The view ofFIG. 11A shows the “contained” (fromviewcell, firstorder visible) side of SEMV WA and SEMV WB. As illustrated inFIG. 11A the intersection of wedges WEDGE1, SEMV WA, SEMV WB, and WEDGE2 forms a continuous event surface with the arrows U1 and U2 indicating the unoccluded side of the even surface.FIG. 11B is a different view of the same structures shown inFIG. 11A . InFIG. 11B , the view is looking up to the occluded side of WEDGE1 and the unoccluded side of WEDGE2.FIG. 11B also shows the “contained” (fromviewcell, firstorder visible) side of SEMV WA and SEMV WB.  This concludes a description of a first embodiment. In this description, a process for generating firstorder visibility event surfaces is presented. Additional embodiments specify the order of processing the polygons and edges of a mesh to generate the firstorder visibility event surfaces. Further embodiments detail precisely how the visibility event surfaces are used to determine occluded polygons and polygon fragments. In the following detailed description of an alternate embodiment, a mesh traversal algorithm is disclosed in which firstorder wedge construction and fromviewcell visibility determination are efficiently interleaved in a fronttoback visibility map construction algorithm which tends to have outputsensitive performance.

FIG. 11C shows the same two polygon meshes as depicted in FIG. 2B, FIG. 11A, and FIG. 11B. FIG. 2B and FIG. 11C both show a higher-order pivoted wedge labeled WEDGE_HIGH. This wedge is constructed by the backprojection method of identifying a visible supporting viewcell vertex discussed in conjunction with FIG. 2B and related figures. In this case, the visible supporting viewcell vertex for the first-order silhouette edge segment B1V is labeled VSVV.
FIG. 11A shows that the first-order pivoted wedge incident on B1V is labeled WEDGE2. FIG. 11A shows that a continuous umbral event surface is formed by first-order pivoted wedges and swept wedges, all of which intersect at a compound silhouette vertex (CSV).  Similarly,
FIG. 11C shows that a continuous umbral event surface is also formed by higherorder wedges intersecting firstorder wedges at a compound silhouette vertex. InFIG. 11C , the higherorder pivoted wedge labeled WEDGE_HIGH is formed on the visible portion (B1V) of the firstorder silhouette edge by the method described in conjunction withFIG. 2B . Since WEDGE_HIGH is formed by an adjusted or higherorder pivot on BIV, it intersects the compound silhouette vertex labeled CSV, which is an endpoint of BIV.  The firstorder wedge WEDGE1U is also incident on the point CSV. In fact, the intersection of WEDGE1U with the entire firstorder silhouette edge (shown as segments B1V+B1O) is the CSV. In this case, a continuous umbral surface is formed between WEDGE1U (firstorder wedge, pivoted to SVV1) and WEDGE_HIGH (higherorder pivoted wedge, pivoted to VSVV); by connecting these two pivoted wedges with a swept wedge labeled SEMV WC which is formed from the swept supporting polygon constructed by sweeping from SVV1 to VSVV through the CSV. All three of these wedges intersect at the CSV.
Comparing the higher-order umbral event surface of FIG. 11C to the corresponding first-order umbral event surface shown in FIG. 2B, it is evident that the higher-order event surface of FIG. 11C produces a larger umbral region, and therefore a smaller visible region. When the higher-order event surfaces are intersected with other mesh polygons and used to determine which mesh polygons and/or fragments of mesh polygons are conservatively visible from the viewcell, the result is a more precise visibility map and corresponding PVS than if only first-order wedges are employed. In this particular case, the use of a higher-order wedge instead of the corresponding first-order wedge does not even increase the geometric complexity of the resulting visibility map, since only one swept (SE-MV) wedge is used to connect the two pivoted wedges, instead of the two swept wedges required in the first-order case.
FIG. 12 is a flowchart showing a method of constructing a conservative, linearized umbral discontinuity mesh using the pivot-and-sweep method of constructing first-order wedges. According to some embodiments, the process illustrated in FIG. 12 starts at step 1205, where the first-order silhouette edges of all mesh triangles are identified. In some embodiments, first-order silhouette edges can be identified using the method detailed in FIG. 3. Process flow proceeds to step 1210 to construct the initial primary wedges incident on the first-order silhouette edges using the pivot-and-sweep method detailed in FIG. 1 through FIG. 6. In embodiments, the primary wedges are those wedges constructed on encountered first-order silhouette edges using the pivot-and-sweep method. On initial construction, in some embodiments, all wedges are initial wedges which have not yet been further restricted by an on-wedge visibility step.

In the present method, wedges are defined and constructed differently than in prior-art discontinuity meshing methods. In prior-art discontinuity meshing methods, planar wedges are not defined in regions of the wedge for which the corresponding viewcell supporting structure (vertex or edge) is occluded from the supported mesh silhouette element (vertex or edge). As a result, these prior-art methods compute exact linear wedges which may not form continuous linear umbral event surfaces, because parts of a wedge are left undefined wherever mesh polygons intersect the corresponding supporting polygon. These "gaps" in the linear umbral event surface are evident when only planar event surfaces are considered, for example in the method of incomplete discontinuity meshing (Heckbert 1992). These gaps actually correspond to higher-order visibility event surfaces (often quadrics) which involve edge-edge-edge events between the silhouette edge, the intervening edge intersecting the supporting polygon, and a viewcell edge. These gaps are filled by higher-order event surfaces when complete discontinuity meshing is employed.
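To make the silhouette-edge identification of step 1205 concrete, the following sketch (a simplified illustration, not the specification's implementation; the helper names, the vertex-triple triangle representation, and the numeric tolerance are assumptions) tests the three conditions associated with a first-order, from-viewcell silhouette edge: one adjacent triangle back-facing for every viewcell vertex, the other front-facing for at least one viewcell vertex, and the two triangles back-facing with respect to each other:

```python
import numpy as np

def tri_normal(tri):
    a, b, c = (np.asarray(p, float) for p in tri)
    return np.cross(b - a, c - a)

def front_facing(tri, point):
    # True if `point` lies strictly on the front (normal) side of tri's plane
    d = np.dot(tri_normal(tri), np.asarray(point, float) - np.asarray(tri[0], float))
    return float(d) > 1e-9

def is_first_order_silhouette(tri_a, tri_b, viewcell):
    """tri_a and tri_b share an edge; viewcell is a list of viewcell vertices."""
    # apex = the vertex of each triangle not on the shared edge
    apex_a = next(v for v in tri_a if not any(np.allclose(v, w) for w in tri_b))
    apex_b = next(v for v in tri_b if not any(np.allclose(v, w) for w in tri_a))
    # condition 3: triangles back-facing with respect to each other (locally convex edge)
    convex = (not front_facing(tri_a, apex_b)) and (not front_facing(tri_b, apex_a))
    for back, front in ((tri_a, tri_b), (tri_b, tri_a)):
        backfacing_all = all(not front_facing(back, v) for v in viewcell)   # condition 1
        frontfacing_some = any(front_facing(front, v) for v in viewcell)    # condition 2
        if convex and backfacing_all and frontfacing_some:
            return True
    return False
```

In the specification this test would be applied to each shared edge of the mesh triangles; only edges passing it support primary wedges.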
In contrast, in the present method of wedge construction according to some embodiments, a wedge is defined only by the supported mesh silhouette structure and the supporting viewcell structure: any intervening geometry does not affect the structure of the wedge.
In the present method of first-order discontinuity meshing, the gaps evident in the umbral boundary produced by the incomplete discontinuity meshing method (Heckbert 1992) are filled by: 1) conservatively defining a wedge during its construction by ignoring intervening geometry between the wedge's supported silhouette structure (edge or vertex) and the supporting viewcell structure (i.e., ignoring geometry intersecting the wedge's supporting polygon), and 2) constructing conservative, planar secondary SE-MV wedges at the point of intersection of a wedge with (conservatively) visible mesh silhouette edges. This point is called the compound silhouette vertex (CSV). The result is a continuous, conservative, linear umbral boundary without the "gaps" produced by incomplete discontinuity meshing methods, which employ only exact linear event surfaces.
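A minimal sketch of the pivot operation under this definition: the wedge plane is determined purely from the silhouette edge and the viewcell vertices, with no occlusion test against intervening geometry. The function names are hypothetical, and the use of a point on the back-facing occluder triangle to distinguish the supporting plane from the separating plane is an assumption of this sketch:

```python
import numpy as np

def plane_side(n, origin, p):
    return float(np.dot(n, np.asarray(p, float) - origin))

def pivot_to_svv(e0, e1, viewcell, occluder_point):
    """Pivot about the first-order silhouette edge (e0, e1): return the
    supporting viewcell vertex (SVV), i.e. the viewcell vertex whose plane
    through the edge leaves every other viewcell vertex strictly on one side
    (a supporting plane), with `occluder_point` (a point on the back-facing
    mesh triangle) on that same side -- which excludes the separating plane."""
    e0 = np.asarray(e0, float)
    e1 = np.asarray(e1, float)
    for cand in viewcell:
        n = np.cross(e1 - e0, np.asarray(cand, float) - e0)
        sides = [plane_side(n, e0, v) for v in viewcell
                 if not np.allclose(v, cand)]
        one_sided = all(s > 1e-9 for s in sides) or all(s < -1e-9 for s in sides)
        if one_sided and plane_side(n, e0, occluder_point) * sides[0] > 0:
            return cand
    return None  # degenerate input, e.g. a viewcell edge parallel to the silhouette edge
```

When a viewcell edge is parallel to the silhouette edge, the extremal viewcell structure is an edge rather than a vertex and the supporting polygon is a quad supported by both edges; this sketch does not handle that case.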
 Process flow proceeds from step 1210 to step 1215 to place the initial wedges constructed in step 1210 in a list called the WEDGE_LIST.
 Process flow proceeds to step 1220 to subject the first wedge in the WEDGE_LIST to processing comprising the steps 1225 through 1250. In embodiments, the WEDGE_LIST is implemented using any desired data structure such as a linked list or hash table.
Process flow proceeds to step 1225 to determine the on-wedge visible intersections of the mesh triangles with the wedge. The intersection of a mesh triangle and a wedge is a line segment. Those segments (or portions thereof) which are visible on the wedge are the on-wedge visible segments (VIS_SEGS).
In the present method, the on-wedge visible segments are determined, in some embodiments, by a 2D mesh traversal method which determines the conservatively visible segments using an output-sensitive 1-manifold (polyline) traversal. This method is detailed in FIG. 14, FIG. 15, and FIG. 16 and related figures and discussed elsewhere in this specification. During the conduct of this method of on-wedge visible segment determination, specific vertices where first-order, from-viewcell silhouette edges intersect the wedge are identified. These vertices are points of intersection between the current wedge and the other wedge incident on the first-order silhouette edge. This type of vertex is called a compound silhouette vertex (CSV) and represents a t-vertex of the silhouette contour, on which secondary conservative connecting SE-MV wedges are later constructed.

Process flow proceeds to step 1235, where each VIS_SEG is stored as a bounding segment of the first-order umbral discontinuity mesh. These segments form boundary polylines of the umbral discontinuity mesh that conservatively partition the mesh into regions which are unoccluded from the viewcell and regions which are occluded from the viewcell.
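The role of on-wedge visible segments can be illustrated with a deliberately simplified 1D stand-in for the 2D traversal of FIG. 15. This is not the traversal itself: the sketch assumes non-intersecting segments viewed orthographically along a depth axis, with all names invented for illustration:

```python
def visible_segments(segs):
    """Each seg is (x0, x1, depth); a portion of a segment counts as an
    on-wedge visible segment iff no strictly nearer segment covers that
    x-range. Returns the visible portions as (x0, x1, depth) tuples."""
    out = []
    for i, (x0, x1, d) in enumerate(segs):
        # x-intervals of strictly nearer segments that overlap this one
        blockers = sorted((max(x0, a), min(x1, b))
                          for j, (a, b, dj) in enumerate(segs)
                          if j != i and dj < d and a < x1 and b > x0)
        cur = x0
        for a, b in blockers:
            if a > cur:
                out.append((cur, a, d))   # visible gap before this blocker
            cur = max(cur, b)
        if cur < x1:
            out.append((cur, x1, d))      # visible tail after the last blocker
    return out
```

For example, `visible_segments([(0, 10, 1.0), (2, 5, 0.5)])` keeps the whole nearer segment and splits the farther one into its two unoccluded portions, analogous to how a wedge's VIS_SEGS exclude on-wedge occluded portions.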
Process flow proceeds to step 1240, where the pivot-and-sweep method is used to construct one or more SE-MV wedges incident on the wedge's CSVs identified during the on-wedge visibility step 1225. As previously defined, each CSV corresponds to the intersection of a current wedge and another wedge which is supported on the from-viewcell, first-order silhouette edge intersecting the current wedge. These wedges intersect at the point of the CSV.
The sweep operation used to generate the SE-MV wedges connecting the two component wedges intersecting at the CSV is the same sweep operation described as part of the pivot-and-sweep method, described in conjunction with FIG. 5A, FIG. 5B, and FIG. 6B. Sweeping occurs between the supporting viewcell vertices (SVVs) corresponding to the CSV's two component wedges. In some embodiments, the SVV for a wedge is determined at the time of the wedge's construction (SV-ME wedge); in other embodiments, it is determined during the on-wedge visibility step 1225 (SE-MV wedge, see step 1553 of FIG. 15).

If both wedges intersecting at the CSV pivot to the same viewcell vertex, then the two wedges exactly intersect at their edges and no new SE-MV wedge is constructed.
If the two wedges intersecting at a CSV are formed by pivoting to two vertices of the same viewcell edge, then the result of pivot-and-sweep construction on the CSV is a single SE-MV wedge.
If the two intersecting wedges are of the SV-ME type, then this connecting SE-MV wedge conservatively approximates the quadric formed by the viewcell edge (connecting the two supporting viewcell vertices) and the two silhouette edges corresponding to the intersecting SV-ME wedges of the CSV. The single SE-MV wedge constructed on the CSV in this case conservatively approximates the corresponding quadric formed by the EEE event. In fact, the constructed SE-MV triangle can be interpreted as a degenerate quadric having infinite pitch.
If the two wedges intersecting at the CSV are formed by pivoting to vertices belonging to different viewcell edges, then the result of pivot-and-sweep construction on the CSV is an edge-connected sequence of SE-MV wedges.
If the two intersecting wedges are of the SV-ME type, then these connecting SE-MV wedges conservatively approximate the quadrics formed by the viewcell edges and the two silhouette edges corresponding to the intersecting wedges of the CSV. Once again, each of the SE-MV wedges can be considered to be a corresponding degenerate quadric with infinite pitch.
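The case analysis above (same supporting vertex: no connecting wedge; two vertices of the same viewcell edge: a single SE-MV wedge; vertices on different viewcell edges: an edge-connected sequence) amounts to counting the viewcell contour edges swept between the two SVVs. A toy sketch, assuming the relevant viewcell silhouette contour is available as a cyclic vertex list and that one SE-MV wedge corresponds to each swept contour edge (both assumptions of this illustration):

```python
def semv_wedge_count(contour, svv_a, svv_b):
    """Number of connecting SE-MV wedges constructed on a CSV whose two
    component wedges pivot to viewcell vertices svv_a and svv_b, taking
    the shorter arc of the cyclic silhouette contour between them."""
    ia, ib = contour.index(svv_a), contour.index(svv_b)
    steps = abs(ia - ib)
    return min(steps, len(contour) - steps)
```

With a square contour, pivoting to the same vertex yields 0 wedges, to adjacent vertices (same viewcell edge) 1 wedge, and to opposite vertices (different viewcell edges) a sequence of 2 wedges.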
Process flow proceeds from step 1240 to step 1250, where all secondary initial wedges (the SE-MV wedges constructed in step 1240) are added to the WEDGE_LIST, which means that they will ultimately be processed by step 1225 to find their on-wedge visible segments.
Process flow proceeds to decision step 1255 to determine if all wedges in the WEDGE_LIST have been processed. If unprocessed wedges remain in the WEDGE_LIST, then process flow proceeds to step 1260, where the next unprocessed wedge in the WEDGE_LIST is selected, and process flow returns to step 1225.
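The worklist structure of steps 1210 through 1260 can be sketched as a short driver loop. The geometric subroutines are passed in as callables with hypothetical signatures; none of this is the specification's code:

```python
from collections import deque

def build_first_order_dm(mesh, viewcell, construct_primary_wedges,
                         on_wedge_visible_segments, find_csvs,
                         construct_semv_wedges):
    """Accumulates the bounding segments of the first-order umbral
    discontinuity mesh by processing a WEDGE_LIST until it is empty."""
    wedge_list = deque(construct_primary_wedges(mesh, viewcell))  # steps 1210/1215
    dm_segments = []
    while wedge_list:                                        # steps 1220/1255/1260
        wedge = wedge_list.popleft()
        vis_segs = on_wedge_visible_segments(wedge, mesh)    # step 1225
        dm_segments.extend(vis_segs)                         # step 1235
        for csv in find_csvs(wedge, vis_segs):               # step 1240
            wedge_list.extend(construct_semv_wedges(csv, viewcell))  # step 1250
    return dm_segments
```

Note that secondary SE-MV wedges re-enter the same queue, so they too pass through the on-wedge visibility step before contributing boundary segments.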
If, in decision step 1255, it is determined that all wedges in the WEDGE_LIST have been processed, then process flow continues to step 1265 to determine the visibility of each region of the first-order discontinuity mesh by testing the from-viewcell visibility of a single point in each region. In some embodiments, the from-viewcell visibility of each tested point is determined using the point-occlusion method shown in FIG. 24B. This test, which is described in detail in conjunction with FIG. 24B and related figures, is based on a modified point-in-polyhedron test. It is important that this test employs the same conservative visibility event surfaces (wedges) that were used to construct the conservative umbral discontinuity mesh.

Process flow proceeds to step 1270, where the first-order PVS is identified as the set of mesh triangles and fragments of mesh triangles not inside umbral (occluded) regions of the conservative first-order umbral discontinuity mesh.
Comparison of Non-Output-Sensitive Method of Conservative Linearized Discontinuity Mesh Construction with Output-Sensitive Method of Conservative Linearized Visibility Map Construction Using 3D and 2D Mesh Traversal
As detailed in FIG. 12, the conservative, linearized umbral discontinuity mesh can be constructed using the general prior-art approach to constructing discontinuity meshes. In this prior-art approach, a wedge is constructed on each relevant silhouette edge, even those that are completely occluded from the source (the viewcell in the present application). Then each wedge, including those constructed on completely occluded silhouette edges, is intersected with all potentially intersecting mesh triangles, and the visible segments of mesh triangles on each wedge are later determined as a postprocess.

In contrast, the method of constructing from-viewcell conservative linearized umbral visibility maps using 3D mesh traversal (FIG. 20A and related figures), used with 2D mesh traversal for on-wedge visibility (FIG. 15 and related figures), provides a more efficient, output-sensitive method of determining from-viewcell visibility. This method exploits the intrinsic connectedness and occlusion coherence of manifold meshes and solves the visibility problem in a front-to-back order. This method interleaves the processes of visible silhouette edge determination and wedge construction on the visible silhouette edges to achieve output-sensitive performance that is relatively independent of the depth complexity of the model.

In general, an output-sensitive process has a computational cost that is determined primarily by the size of the algorithm's output, as opposed to the size of its input. Since, in realistic modeled environments, the size of the visible data set from any view region (the output) is typically much smaller than the size of the entire model (the input), an output-sensitive from-region visibility precomputation process is advantageous.
The differences between the two methods of determining from-region visibility using conservative, linearized, umbral event surfaces, the output-insensitive method of FIG. 12 and the output-sensitive 2D/3D mesh traversal method (FIG. 20 and related figures), are summarized in Table Va.
TABLE Va
Comparison of Non-Output-Sensitive Method of CLUDM Construction With Output-Sensitive Method of CLUVM Construction

                       Conservative Linearized        Conservative Linearized
                       Umbral Discontinuity Mesh      Umbral Visibility Map
                       (CLUDM):                       (CLUVM):
                       Non-Output-Sensitive           Output-Sensitive Method of
                       Method of FIG. 12              FIG. 20 (3D Traversal) &
                                                      FIG. 15 (2D Traversal)

Wedge Construction     1. Intersect Wedge & All       Output-Sensitive 2D Mesh
                       Potentially Intersecting       Traversal for On-Wedge
                       Mesh Triangles                 Visibility
                       2. 2D Visibility Postprocess
                       to Find Visible Segments

Wedges Generated       Visible + Occluded             Visible

Output Sensitive       No                             Yes

Number of Cells in     M^2 * N^2 * S^2 * S_Shaft^2    M_V^2 * N^2 * S_V^2 * S_VShaft^2
(Discontinuity Mesh)
Region
Where the following terms are used in the table and subsequent equations:

M = number of polygons in the model

N = number of edges in a viewcell

S = number of first-order silhouette edges in the environment

S_Shaft = number of first-order silhouette edges in a shaft formed between a single first-order silhouette edge and the viewcell

M_V = number of visible polygons in the model

S_V = number of visible first-order silhouette edges in the environment

S_VShaft = number of visible first-order silhouette edges in a shaft formed between a single first-order silhouette edge and the viewcell

V_w = number of vertices of intersection between all polygons and a single wedge

M_w = number of mesh polygons intersecting a wedge

V_svw = number of visible (from-point or from-edge) silhouette vertices on a wedge

Seg_vw = number of on-wedge visible segments of intersection between mesh polygons and a wedge
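To illustrate the scaling in Table Va, a back-of-the-envelope comparison using the tabulated cell-count bounds (the scene statistics below are invented for illustration; only the form of the bound comes from the table):

```python
def dm_cell_bound(m, n, s, s_shaft):
    """Table Va cell-count bound: M^2 * N^2 * S^2 * S_Shaft^2. The CLUVM
    bound has the same form with the visible-only counts M_V, S_V, and
    S_VShaft substituted."""
    return (m * n * s * s_shaft) ** 2

# hypothetical deep scene in which roughly 10% of polygons and
# silhouette edges are visible from the viewcell
cludm = dm_cell_bound(m=10_000, n=12, s=1_000, s_shaft=50)  # all silhouette edges
cluvm = dm_cell_bound(m=1_000, n=12, s=100, s_shaft=5)      # visible edges only
print(cludm // cluvm)  # → 1000000: the output-sensitive bound is ~10^6 smaller
```

Because every factor except N shrinks with the visible set and the product is squared, the gap between the two bounds grows rapidly with depth complexity.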
The preceding table emphasizes that, for the 2D/3D mesh traversal method, visible silhouette edges are identified during the front-to-back traversal of the manifolds. Consequently, only those wedges supported by visible silhouette edge segments are constructed. This results in a more output-sensitive implementation.
The prior-art method of discontinuity meshing was discussed in the Description of Background section of this specification. Discontinuity meshing methods construct both umbral and penumbral visibility event surfaces and determine their on-wedge visible intersections with mesh polygons. These intersections repartition the mesh polygons such that in each face or "region" of the discontinuity mesh the view of the source (the "backprojection instance") is topologically equivalent. The goal of prior-art discontinuity meshing methods is primarily to identify illumination discontinuities that occur in the penumbra region of an area light source.
The present method of from-region visibility precomputation, in some embodiments, does not employ penumbral visibility event surfaces but instead uses only conservative umbral visibility event surfaces to identify mesh polygon fragments that are conservatively visible from a viewcell. These event surfaces can be employed to construct a conservative umbral discontinuity mesh as described in FIG. 12 (non-output-sensitive discontinuity mesh construction) and FIG. 19, FIG. 20, and FIG. 21 and related figures (output-sensitive from-viewcell visibility map construction). Alternatively, the conservative umbral wedges can be intersected with each other to form umbral boundary polygons (UBPs) as described in FIG. 26.

Table Vb presents a comparison of the method of conservative linearized umbral visibility map construction (shown in FIG. 20 and related figures) with prior-art discontinuity meshing methods.

The row labeled "Wedges Generated" illustrates that the present method of 3D mesh traversal (FIG. 20 and related figures) using 2D mesh traversal (FIG. 15 and related figures) comprises a from-region visibility method which is relatively output-sensitive, as visibility event surfaces are generated only on visible (unoccluded) first-order silhouette edges. This contrasts with prior-art discontinuity mesh methods, in which event surfaces are generated on all (general from-region) silhouette edges.
TABLE Vb
Comparison of Conservative Linearized Umbral Visibility Map (CLUVM) With Prior-Art Methods of Incomplete and Complete Discontinuity Meshing

Wedge Type
  CLUVM: Planar Exact and Planar Conservative
  Incomplete Discontinuity Mesh (Prior Art): Planar Exact
  Complete Discontinuity Mesh (Prior Art): Planar Exact & Quadric Exact

Event Surfaces
  CLUVM: Umbral
  Incomplete Discontinuity Mesh (Prior Art): Umbral, Extremal Penumbra, and Any Other Penumbral Surface Intersecting Viewcell
  Complete Discontinuity Mesh (Prior Art): Umbral, Extremal Penumbra, and Any Other Penumbral Surface Intersecting Viewcell

Silhouette Edges
  CLUVM: 1. First-Order Wedges: Only First-Order Edges; 2. Higher-Order Wedges: May Include Other General From-Region Silhouette Edges
  Incomplete Discontinuity Mesh (Prior Art): All From-Region Silhouette Edges
  Complete Discontinuity Mesh (Prior Art): All From-Region Silhouette Edges

Planar Wedge Structure
  CLUVM: Planar Wedge Conservatively Assumes Entire Supported Silhouette Element Is Visible from Entire Supporting Viewcell Element
  Incomplete Discontinuity Mesh (Prior Art): Planar Wedge Not Defined on Segments of Supported Silhouette Element That Are Occluded from Supporting Viewcell Element
  Complete Discontinuity Mesh (Prior Art): Planar Wedge Not Defined on Segments of Supported Silhouette Element That Are Occluded from Supporting Viewcell Element

Wedge Construction
  CLUVM: 1. 3D Manifold Traversal Identifies Unoccluded Silhouette Edges; 2. 2D Manifold Traversal to Solve On-Wedge Visibility
  Incomplete Discontinuity Mesh (Prior Art): 1. Intersect Wedge & All Potentially Intersecting Mesh Triangles; 2. 2D Visibility Postprocess to Find Visible Segments
  Complete Discontinuity Mesh (Prior Art): 1. Intersect Wedge & All Potentially Intersecting Mesh Triangles; 2. 2D Visibility Postprocess to Find Visible Segments

Wedges Generated