US20200273354A1 - Visibility event navigation method and system - Google Patents
- Publication number
- US20200273354A1 (application US16/793,959)
- Authority
- United States (US)
- Prior art keywords
- visibility event
- client device
- server
- packets
- navigation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links (auto-extracted concepts)
- method (title, claims, abstract, description: 134)
- processing (claims, abstract, description: 38)
- communication (claims, description: 30)
- environmental (abstract, description: 23)
- corresponding (description: 51)
- process (description: 44)
- transmission (description: 20)
- vectors (description: 16)
- diagram (description: 12)
- penetration (description: 9)
- localization (description: 8)
- change (description: 7)
- chemical substances by application (description: 7)
- cellular (description: 6)
- solution (description: 6)
- function (description: 5)
- acceleration (description: 4)
- detection (description: 4)
- dilution (description: 4)
- optical (description: 4)
- approach (description: 3)
- calculation (description: 2)
- decreasing (description: 2)
- peripheral (description: 2)
- verification (description: 2)
- visual (description: 2)
- visualization (description: 2)
- augmented (description: 1)
- batch process (description: 1)
- behavior (description: 1)
- buffering (description: 1)
- correlated (description: 1)
- delayed (description: 1)
- dependent (description: 1)
- engineering (description: 1)
- evaporation-induced self-assembly (description: 1)
- hazardous (description: 1)
- information processing (description: 1)
- initiation (description: 1)
- mapping (description: 1)
- modification (description: 1)
- progressive (description: 1)
- solid (description: 1)
- temporal (description: 1)
- titanium (description: 1)
- transition (description: 1)
- translation (description: 1)
- xenon (description: 1)
Classifications
- G—PHYSICS
  - G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    - G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
      - G05D1/0011—...associated with a remote control arrangement
        - G05D1/0022—...characterised by the communication link
  - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    - G06T15/00—3D [Three Dimensional] image rendering
      - G06T15/10—Geometric effects
        - G06T15/40—Hidden part removal
    - G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
      - G06T17/05—Geographic models
  - G08G—TRAFFIC CONTROL SYSTEMS
    - G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
      - G08G5/0004—Transmission of traffic-related information to or from an aircraft
        - G08G5/0008—...with other aircraft
        - G08G5/0013—...with a ground station
      - G08G5/003—Flight plan management
        - G08G5/0039—Modification of a flight plan
      - G08G5/0047—Navigation or guidance aids for a single aircraft
        - G08G5/0052—...for cruising
        - G08G5/0056—...in an emergency situation, e.g. hijacking
        - G08G5/0069—...specially adapted for an unmanned aircraft
      - G08G5/0073—Surveillance aids
        - G08G5/0082—...for monitoring traffic from a ground station
        - G08G5/0086—...for monitoring terrain
      - G08G5/02—Automatic approach or landing aids, i.e. systems in which flight data of incoming planes are processed to provide landing data
        - G08G5/025—Navigation or guidance aids
      - G08G5/04—Anti-collision systems
        - G08G5/045—Navigation or guidance aids, e.g. determination of anti-collision manoeuvres
- H—ELECTRICITY
  - H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    - H01L23/00—Details of semiconductor or other solid state devices
      - H01L23/28—Encapsulations, e.g. encapsulating layers, coatings, e.g. for protection
        - H01L23/31—...characterised by the arrangement or shape
          - H01L23/3157—Partial encapsulation or coating
    - H01L24/00—Arrangements for connecting or disconnecting semiconductor or solid-state bodies; methods or apparatus related thereto
      - H01L24/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects
        - H01L24/02—Bonding areas
          - H01L24/03—Manufacturing methods
          - H01L24/04—Structure, shape, material or disposition of the bonding areas prior to the connecting process
            - H01L24/05—...of an individual bonding area
        - H01L24/10—Bump connectors
          - H01L24/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
            - H01L24/13—...of an individual bump connector
    - H01L2224/00—Indexing scheme for arrangements covered by H01L24/00
      - H01L2224/03—Manufacturing methods (bonding areas)
        - H01L2224/034—Blanket deposition of the material of the bonding area
          - H01L2224/03444—...in gaseous form
            - H01L2224/0345—Physical vapour deposition [PVD], e.g. evaporation, or sputtering
            - H01L2224/03452—Chemical vapour deposition [CVD], e.g. laser CVD
          - H01L2224/0346—Plating
            - H01L2224/03462—Electroplating
            - H01L2224/03464—Electroless plating
        - H01L2224/0347—Using a lift-off mask
        - H01L2224/039—Specific sequence of method steps
          - H01L2224/03912—Bump used as a mask for patterning the bonding area
      - H01L2224/04—Structure, shape, material or disposition of the bonding areas prior to the connecting process
        - H01L2224/0401—Bonding areas specifically adapted for bump connectors, e.g. under bump metallisation [UBM]
        - H01L2224/05—...of an individual bonding area
          - H01L2224/05001—Internal layers
            - H01L2224/05008—Bonding area integrally formed with a redistribution layer
            - H01L2224/05022—Internal layer at least partially embedded in the surface
            - H01L2224/05026—Internal layer disposed in a recess of the surface
              - H01L2224/05027—...extending out of an opening
            - Material: H01L2224/05124 (aluminium), H01L2224/05147 (copper), H01L2224/05166 (titanium), H01L2224/05186 (non-metallic, non-metalloid inorganic)
          - H01L2224/0554—External layer
            - H01L2224/05569—Disposed on a redistribution layer
            - H01L2224/05583—Three-layer coating
            - Material: H01L2224/05655 (nickel), H01L2224/05681 (tantalum), H01L2224/05686 (non-metallic, non-metalloid inorganic)
      - H01L2224/11—Bump connectors: manufacturing methods
        - H01L2224/11334—Local deposition in solid form using preformed bumps
      - H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process
        - H01L2224/13001—Core members of the bump connector
          - H01L2224/13022—At least partially embedded in the surface
          - H01L2224/13024—Disposed on a redistribution layer
        - H01L2224/131—Material with a principal constituent being a metal or a metalloid
- H01L2224/13101—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of less than 400°C
- H01L2224/13111—Tin [Sn] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/13001—Core members of the bump connector
- H01L2224/13099—Material
- H01L2224/131—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13101—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of less than 400°C
- H01L2224/13113—Bismuth [Bi] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/13001—Core members of the bump connector
- H01L2224/13099—Material
- H01L2224/131—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13101—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of less than 400°C
- H01L2224/13116—Lead [Pb] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/13001—Core members of the bump connector
- H01L2224/13099—Material
- H01L2224/131—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13138—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of greater than or equal to 950°C and less than 1550°C
- H01L2224/13139—Silver [Ag] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/13001—Core members of the bump connector
- H01L2224/13099—Material
- H01L2224/131—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13138—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of greater than or equal to 950°C and less than 1550°C
- H01L2224/13147—Copper [Cu] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/13001—Core members of the bump connector
- H01L2224/13099—Material
- H01L2224/131—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13138—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of greater than or equal to 950°C and less than 1550°C
- H01L2224/13155—Nickel [Ni] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/1354—Coating
- H01L2224/1356—Disposition
- H01L2224/13561—On the entire surface of the core, i.e. integral coating
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/1354—Coating
- H01L2224/13599—Material
- H01L2224/136—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13601—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of less than 400°C
- H01L2224/13611—Tin [Sn] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/1354—Coating
- H01L2224/13599—Material
- H01L2224/136—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13601—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of less than 400°C
- H01L2224/13613—Bismuth [Bi] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/1354—Coating
- H01L2224/13599—Material
- H01L2224/136—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13601—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of less than 400°C
- H01L2224/13616—Lead [Pb] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/1354—Coating
- H01L2224/13599—Material
- H01L2224/136—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13638—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of greater than or equal to 950°C and less than 1550°C
- H01L2224/13639—Silver [Ag] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/1354—Coating
- H01L2224/13599—Material
- H01L2224/136—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13638—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of greater than or equal to 950°C and less than 1550°C
- H01L2224/13647—Copper [Cu] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/1354—Coating
- H01L2224/13599—Material
- H01L2224/136—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13638—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of greater than or equal to 950°C and less than 1550°C
- H01L2224/13655—Nickel [Ni] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/1354—Coating
- H01L2224/13599—Material
- H01L2224/136—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof
- H01L2224/13663—Material with a principal constituent of the material being a metal or a metalloid, e.g. boron [B], silicon [Si], germanium [Ge], arsenic [As], antimony [Sb], tellurium [Te] and polonium [Po], and alloys thereof the principal constituent melting at a temperature of greater than 1550°C
- H01L2224/13681—Tantalum [Ta] as principal constituent
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2224/00—Indexing scheme for arrangements for connecting or disconnecting semiconductor or solid-state bodies and methods related thereto as covered by H01L24/00
- H01L2224/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L2224/10—Bump connectors; Manufacturing methods related thereto
- H01L2224/12—Structure, shape, material or disposition of the bump connectors prior to the connecting process
- H01L2224/13—Structure, shape, material or disposition of the bump connectors prior to the connecting process of an individual bump connector
- H01L2224/1354—Coating
- H01L2224/13599—Material
- H01L2224/13686—Material with a principal constituent of the material being a non metallic, non metalloid inorganic material
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L23/00—Details of semiconductor or other solid state devices
- H01L23/28—Encapsulations, e.g. encapsulating layers, coatings, e.g. for protection
- H01L23/31—Encapsulations, e.g. encapsulating layers, coatings, e.g. for protection characterised by the arrangement or shape
- H01L23/3107—Encapsulations, e.g. encapsulating layers, coatings, e.g. for protection characterised by the arrangement or shape the device being completely enclosed
- H01L23/3114—Encapsulations, e.g. encapsulating layers, coatings, e.g. for protection characterised by the arrangement or shape the device being completely enclosed the device being a chip scale package, e.g. CSP
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L24/00—Arrangements for connecting or disconnecting semiconductor or solid-state bodies; Methods or apparatus related thereto
- H01L24/01—Means for bonding being attached to, or being formed on, the surface to be connected, e.g. chip-to-package, die-attach, "first-level" interconnects; Manufacturing methods related thereto
- H01L24/10—Bump connectors ; Manufacturing methods related thereto
- H01L24/11—Manufacturing methods
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2924/00—Indexing scheme for arrangements or methods for connecting or disconnecting semiconductor or solid-state bodies as covered by H01L24/00
- H01L2924/01—Chemical elements
- H01L2924/01046—Palladium [Pd]
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2924/00—Indexing scheme for arrangements or methods for connecting or disconnecting semiconductor or solid-state bodies as covered by H01L24/00
- H01L2924/049—Nitrides composed of metals from groups of the periodic table
- H01L2924/0494—4th Group
- H01L2924/04941—TiN
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L2924/00—Indexing scheme for arrangements or methods for connecting or disconnecting semiconductor or solid-state bodies as covered by H01L24/00
- H01L2924/15—Details of package parts other than the semiconductor or other solid state devices to be connected
- H01L2924/181—Encapsulation
Definitions
- the present invention relates generally to navigation using 3D representations of a given space to be navigated, and more particularly to a method and system for streaming visibility event data to navigating vehicles (on land, sea, air, or space), and using the visibility event data in 3D map-matching and other computer vision navigation methods.
- GPS does not provide the precision required to navigate in obstacle rich environments.
- GPS reception requires line of sight access to at least four satellites, which is often not available in the urban canyon because of occlusion of the sky by buildings.
- GPS radio signals are relatively weak, and can be jammed or spoofed (replaced by a false data stream that can mislead the targeted navigational system).
- Higher-power radio navigation methods such as eLORAN may be less susceptible to jamming near the transmitter, but otherwise inherit most of the vulnerabilities of GPS radio navigation, including susceptibility to denial of service by attack on the fixed transmitters.
- the precision of eLORAN localization is significantly lower than that of GPS, and the global availability of eLORAN service has been further limited by the United Kingdom's recent decision to discontinue eLORAN service in Europe.
- 3D map-matching is a navigational basis that is orthogonal to GPS and eLORAN navigation and consequently does not suffer from the same limitations and vulnerabilities.
- 2.5D map-matching systems such as TERCOM (Terrain Contour Matching) were effectively employed in cruise missile navigation prior to GPS.
- the ability to pre-acquire detailed 3D environmental data has increased exponentially since the time of TERCOM.
- commodity sensors are now available that generate real-time point clouds which could potentially be matched to the pre-acquired 3D environmental data to provide rapid, precise localization in many GPS-denied environments.
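The point-cloud matching described above can be illustrated with a translation-only iterative-closest-point (ICP) step: repeatedly pair each sensed point with its nearest model point and shift the scan toward the pairing. This is a minimal toy sketch of the general technique, not the localization method disclosed here; the function name and the translation-only restriction are assumptions for brevity.

```python
import numpy as np

def estimate_translation(scan, model_points, iters=20):
    """Estimate the sensor's offset by aligning a live point cloud to
    pre-acquired 3D model points (translation-only ICP sketch)."""
    offset = np.zeros(3)
    for _ in range(iters):
        shifted = scan + offset
        # pair each scan point with its nearest model point
        d = np.linalg.norm(shifted[:, None, :] - model_points[None, :, :], axis=2)
        nearest = model_points[np.argmin(d, axis=1)]
        # move the scan by the mean residual toward its paired points
        offset += (nearest - shifted).mean(axis=0)
    return offset
```

With a synthetic model (a unit-spaced grid) and a scan that is a shifted subset of it, the recovered offset matches the true displacement, as long as the displacement is small relative to the point spacing so the nearest-neighbor pairing is correct.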
- a method of visibility event navigation includes one or more visibility event packets located at a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model.
- the method of visibility event navigation also includes receiving, via processing circuitry of a client device, at least one visibility event packet of the one or more visibility event packets from the server, detecting, via the circuitry, surface information representing one or more visible surfaces of the real environment at a sensor in communication with the client device, and calculating, via the circuitry, at least one position of the client device in the real environment by matching the surface information to the visibility event packet information corresponding to a first visibility event packet of the one or more visibility event packets.
- the method of visibility event navigation further includes transmitting, via the circuitry, the at least one position from the client device to the server, and receiving, via the circuitry, at the client device from the server, at least one second visibility event packet of the one or more visibility event packets when the at least one position is within the navigational route.
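The client-side cycle in this aspect — receive a packet, match sensed surfaces against its geometry to localize, report the position, and receive the next packet — can be sketched as a toy protocol loop. All names here (`VEPacket`, `VEServer`, `navigate`) and the use of string labels in place of 3D surface elements are illustrative assumptions, not the patent's data structures.

```python
from dataclasses import dataclass

@dataclass
class VEPacket:
    viewcell: int      # index of the viewcell this packet corresponds to
    surfaces: list     # stand-in for surface elements newly visible there

class VEServer:
    """Toy stand-in for the visibility event server."""
    def __init__(self, packets):
        self.packets = {p.viewcell: p for p in packets}
        self.route = set(self.packets)

    def next_packet(self, reported_viewcell):
        # send the following packet only while the reported position
        # remains on the navigational route
        if reported_viewcell in self.route:
            return self.packets.get(reported_viewcell + 1)
        return None

def navigate(server, first_packet, sensor_readings):
    """Client loop: match each sensed surface against the current packet
    (standing in for 3D map-matching), record the matched position, and
    fetch the next packet from the server."""
    packet, positions = first_packet, []
    for sensed in sensor_readings:
        if packet is None or sensed not in packet.surfaces:
            break
        positions.append(packet.viewcell)   # localization by matching
        packet = server.next_packet(packet.viewcell)
    return positions
```

Running the loop over three viewcells whose surfaces are each matched in turn yields the sequence of matched positions; an unmatched reading terminates the loop, mimicking a failed map-match.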
- a method of visibility event navigation includes one or more visibility event packets located at a server, including information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model.
- the method of visibility event navigation also includes prefetching, via processing circuitry of the server, a first visibility event packet of the one or more visibility event packets to a client device, receiving, via the circuitry, at least one position of the client device in the real environment at the server, and transmitting, via the circuitry, a second visibility event packet of the one or more visibility event packets to the client device when the at least one position is within the navigational route.
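The server-side decision in this aspect reduces to a gate: transmit the next packet only while the client's reported position lies within the route's viewcells. A minimal sketch, assuming axis-aligned box viewcells and illustrative names (`Viewcell`, `should_prefetch`):

```python
from dataclasses import dataclass

@dataclass
class Viewcell:
    lo: tuple  # (x, y, z) minimum corner of the axis-aligned cell
    hi: tuple  # (x, y, z) maximum corner

    def contains(self, p):
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

def should_prefetch(position, route):
    """Server-side gate: transmit the next visibility event packet only
    while the reported position lies inside one of the route's viewcells."""
    return any(cell.contains(position) for cell in route)
```

A position inside either cell of a two-cell route passes the gate; a position off the route does not, so no further packets are streamed.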
- a method of visibility event navigation includes at least one partial visibility event packet located at a server, the at least one partial visibility event packet including a subset of a complete visibility event packet, the complete visibility event packet including visibility event packet information representing 3D surface elements of a geospatial model occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model.
- the method of visibility event navigation further includes transmitting, via processing circuitry of a client device, surface information from the client device to the server corresponding to the orientation of a sensor located at the client device, the surface information representing visible surfaces of the real environment, and receiving, via the circuitry, the at least one partial visibility event packet at the client device from the server including a subset of the visibility event packet information that intersects a maximal view frustum, wherein the maximal view frustum includes a volume of space intersected by a view frustum of the sensor during movement of the client device in the second viewcell.
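The partial-packet idea above — send only the subset of packet surfaces that intersect the maximal view frustum — can be sketched by approximating that frustum as a cone around the sensor heading and filtering surface points against it. The cone approximation and the names (`in_maximal_frustum`, `partial_packet`) are simplifying assumptions, not the construction disclosed here.

```python
import math

def in_maximal_frustum(point, eye, heading, half_angle_deg):
    """True if a surface point falls inside a cone approximating the
    maximal view frustum (the volume the sensor frustum can sweep while
    the client moves within the viewcell). `heading` is assumed unit-length."""
    v = [p - e for p, e in zip(point, eye)]
    n = math.sqrt(sum(c * c for c in v))
    if n == 0:
        return True
    cos_angle = sum(vc * hc for vc, hc in zip(v, heading)) / n
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def partial_packet(surface_points, eye, heading, half_angle_deg=60):
    """Server side: keep only the packet surfaces intersecting the frustum."""
    return [s for s in surface_points
            if in_maximal_frustum(s, eye, heading, half_angle_deg)]
```

For a sensor at the origin heading along +x, points ahead of the sensor survive the filter while points behind it are dropped, shrinking the transmitted packet.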
- a method of visibility event navigation includes a first visibility event packet of one or more visibility event packets from a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model.
- the method of visibility event navigation also includes detecting, via processing circuitry of a first client device of a plurality of client devices, surface information representing visible surfaces of the real environment at a sensor in communication with the first client device, calculating, via the circuitry, at least one position of the first client device in the real environment by matching the surface information to the visibility event packet information, and transmitting, via the circuitry, the at least one position from the first client device to the server.
- the method of visibility event navigation further includes receiving, via the circuitry, at least one second visibility event packet of the one or more visibility event packets at the first client device from the server when the at least one position is within the navigational route, detecting, via the circuitry, position information representing the position of at least one second client device of the plurality of client devices in the real environment at the sensor, and transmitting, via the circuitry, the position information from the first client device to the server.
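In this multi-client aspect, a client reports not only its own matched position but also the positions of other vehicles its sensor detects. A toy aggregation sketch, with `FleetServer` and its `report` method as hypothetical names:

```python
from dataclasses import dataclass, field

@dataclass
class FleetServer:
    """Toy server aggregating positions reported by multiple clients."""
    positions: dict = field(default_factory=dict)

    def report(self, client_id, position, observed=()):
        # a client reports its own map-matched position ...
        self.positions[client_id] = position
        # ... plus sensor-derived positions of other detected vehicles,
        # which may themselves be unable to localize
        for other_id, other_pos in observed:
            self.positions.setdefault(other_id, other_pos)
```

One client's report can thus give the server a position estimate for a second client that has not reported on its own behalf.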
- a method of visibility event navigation prefetch includes a first visibility event packet of one or more visibility event packets, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model.
- the method of visibility event navigation prefetch further includes receiving, via processing circuitry of a server, at least one position of a client device in the real environment at the server from the client device, and transmitting, via the circuitry, a second visibility event packet of the one or more visibility event packets when the at least one position of the client device is within the navigational route and a fee has been paid by an operator of the client device.
- a method of visibility event navigation includes storing one or more visibility event packets at a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model.
- a visibility event navigation system includes a server and at least one client device located in a real environment and in communication with the server.
- the at least one client device includes processing circuitry configured to detect surface information representing one or more visible surfaces of the real environment at one or more sensors in communication with the at least one client device, and calculate at least one position of the at least one client device in the real environment by matching the surface information to visibility event packet information including a first visibility event packet of one or more visibility event packets representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within the real environment and modeled by the geospatial model.
- the processing circuitry is further configured to transmit the at least one position of the client device to the server and receive a second visibility event packet of the one or more visibility event packets from the server when the at least one position is within the navigational route.
- FIG. 2A is an exemplary illustration of an exemplary viewpoint and a corresponding exemplary view frustum having a 90 degree horizontal field of view, according to certain aspects;
- FIG. 3 is an exemplary illustration of a conservative from-subregion frustum, according to certain aspects;
- FIG. 4 is an exemplary illustration of a conservative from-subregion frustum that results from viewpoint penetration into the viewcell over 166 milliseconds for a CCMVE subregion, according to certain aspects;
- FIG. 5 is an exemplary illustration of an additional angular region of an extended frustum, according to certain aspects;
- FIG. 6 is an exemplary illustration of a top-down view of a view frustum having a horizontal field of view of 90 degrees and undergoing rotation in the horizontal plane at a rate of 90 degrees per second, according to certain aspects;
- FIG. 7 is an exemplary illustration of a server representation of client device viewpoint position and orientation, according to certain aspects;
- FIG. 9 is an algorithmic flowchart of a visibility event navigation process, according to certain exemplary aspects.
- FIG. 10 is an algorithmic flowchart of a visibility event navigation difference determination process, according to certain exemplary aspects.
- FIG. 11 is an algorithmic flowchart of a visibility event navigation incident determination process, according to certain exemplary aspects.
- FIG. 12 is an algorithmic flowchart of a visibility event navigation unauthorized object determination process, according to certain exemplary aspects.
- FIG. 13 is a block diagram of a visibility event navigation system workflow, according to certain exemplary aspects.
- FIG. 14 is a hardware block diagram of a client device, according to certain exemplary aspects.
- FIG. 15 is a hardware block diagram of a data processing system, according to certain exemplary aspects.
- FIG. 16 is a hardware block diagram of a CPU, according to certain exemplary aspects.
- the terms “approximately,” “approximate,” “about,” and similar terms generally refer to ranges that include the identified value within a margin of 20%, 10%, or preferably 5%, and any values therebetween.
- a method to deliver and receive navigational data as visibility event packets wherein the visibility event packets include information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a specified navigational route within a real environment modeled by a geospatial model.
- the specified navigational route can also be referred to as a pursuit route, navigational route, route of navigation, desired route, specified route, and the like, as discussed herein.
- a navigating client device receives at least one visibility event packet from a server.
- the navigating client device employs sensors to acquire information representing the surrounding visible surfaces of the real environment.
- the data is collected as a 3D point cloud including X axis, Y axis, and Z axis values utilizing LiDAR, SONAR, optical photogrammetric reconstruction, or other sensor means that are known.
- the 3D surface information of the real environment can be acquired by the sensor and matched to the 3D surface information of at least one visibility event packet delivered from the server to the client device in order to determine the position of the client device in the real environment.
- the 3D map-matching position determination is used as the sole method of determining position or augmented with other navigational methods such as GPS, eLORAN, inertial navigation, and the like.
- the 3D map-matching determination can be configured to employ matching of directly/indirectly acquired point cloud data to the 3D modeled data.
- the 3D map-matching can also compare geometric or visual “features” acquired from the environment to the transmitted 3D data.
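As an illustrative sketch of the map-matching step described above (not the claimed method; real systems use full 6-degree-of-freedom ICP or feature registration, and the point data and function names here are hypothetical), a translation-only offset between an acquired point cloud and transmitted model points can be estimated by averaging nearest-neighbor residuals:

```python
def estimate_offset(sensor_points, model_points):
    """Translation-only map-match: for each sensor point, find the
    nearest model point and average the residual vectors."""
    dx = dy = dz = 0.0
    for sp in sensor_points:
        nearest = min(model_points,
                      key=lambda mp: sum((a - b) ** 2 for a, b in zip(sp, mp)))
        dx += nearest[0] - sp[0]
        dy += nearest[1] - sp[1]
        dz += nearest[2] - sp[2]
    n = len(sensor_points)
    return (dx / n, dy / n, dz / n)

# Model surface sampled as points; the scan is the model shifted by (-0.4, 0, 0).
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
scan = [(-0.4, 0.0, 0.0), (0.6, 0.0, 0.0), (-0.4, 1.0, 0.0)]
offset = estimate_offset(scan, model)  # approximately (0.4, 0.0, 0.0)
```

In practice the recovered offset would seed an iterative solver rather than serve as the final position fix.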
- the transmitted visibility event packets include 3D surface model information representing the real environment as polygons, procedural surfaces, or other representations. In instances where the visibility event packets employ a procedural representation of the surfaces of the modeled environment, the procedural surface representations reduce the transmission bandwidth required to send the navigational data.
- the visibility event navigation system employs 3D map-matching of ground-truth sensor data to visibility event packet model data to determine a position and an orientation of a client device in a real environment.
- the visibility event packets provide a 3D representation that accelerates 3D map-matching in comparison to systems which deliver 3D model data using conventional proximity-based streaming.
- proximity-based streaming delivers many model surfaces that are occluded and therefore irrelevant to the 3D map-match process, which generally must only match unoccluded ground truth sensor data to the model data.
- proximity-based streaming methods can have higher bandwidth requirements, because they deliver a large amount of occluded data that is not relevant for the 3D map-match localization.
- proximity-based streaming methods can fail to deliver potentially useful, unoccluded model surface data because it is not within the current proximity sphere that the server is using for prefetch. This loss of potentially useful data is exacerbated in densely occluded environments such as cities because the transmission of a large amount of occluded surface model data often causes the system to reduce the proximity sphere in an effort to maintain the stream.
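The contrast between the two prefetch strategies can be sketched as follows (surface names, distances, and visibility flags are illustrative; in the disclosed system the visible-from-viewcell determination comes from the precomputed visibility event packets rather than a per-surface flag):

```python
# Each surface: (id, distance_from_viewcell_m, visible_from_viewcell)
surfaces = [
    ("wall_a", 30.0, True),
    ("wall_b", 40.0, False),   # occluded: proximity streaming still sends it
    ("tower_c", 250.0, True),  # visible but outside the proximity sphere
    ("pipe_d", 60.0, False),
]

def proximity_stream(surfaces, radius):
    """Conventional prefetch: everything inside the sphere, occluded or not."""
    return [s[0] for s in surfaces if s[1] <= radius]

def visibility_event_stream(surfaces):
    """Visibility-based prefetch: only surfaces unoccluded from the viewcell."""
    return [s[0] for s in surfaces if s[2]]

sent_proximity = proximity_stream(surfaces, radius=100.0)
sent_visibility = visibility_event_stream(surfaces)
```

The proximity stream wastes bandwidth on `wall_b` and `pipe_d` while missing the distant but visible `tower_c`; the visibility event stream sends exactly the map-match-relevant surfaces.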
- the visibility event navigation system employs computer vision systems other than conventional 3D map-matching to match ground-truth surfaces or features to visibility event packet model data representing basic surfaces or salient 3D features of the modeled environment.
- the efficient delivery of high-precision, relevant 3D surface model data enables a localization solution that is faster, more efficient, and more precise than simultaneous localization and mapping (SLAM).
- Pure SLAM solutions do not incorporate prior knowledge of the environment into their localization solutions but instead depend upon effectively developing the 3D model de novo, in real time. Consequently, pure SLAM is more computationally expensive and less precise than 3D map-matching to prior, processed (e.g., previously geo-registered), and relevant (unoccluded) 3D data delivered by a visibility event navigational data stream.
- the increased computational efficiency of 3D map-matching compared to SLAM-focused localization can free on board sensor and computational resources to focus on identifying, tracking, and avoiding moving objects in the environment, such as other client devices.
- visibility event navigation data is compared to an actual ground-truth 3D representation of visible surfaces of an environment in which navigation occurs.
- the comparison can employ 3D map-matching methods, other surface matching methods, computer vision matching methods, and the like.
- data representing the difference between the streamed visibility event navigation data and the ground truth sensor data is streamed to the visibility event navigation server.
- the server compares this information suggesting an apparent change in the structure of the 3D environment in a particular region of the model corresponding to the viewcells from which the changed surfaces would be visible.
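A minimal sketch of the difference determination (hypothetical data and threshold; production systems would compare full surface representations and attribute changes to specific viewcells) flags sensor points that have no nearby counterpart in the streamed model:

```python
def detect_differences(sensor_points, model_points, threshold):
    """Return sensor points with no model point within `threshold`:
    candidate changes to the modeled environment, to be reported to
    the server for the viewcells from which they would be visible."""
    changed = []
    for sp in sensor_points:
        d2 = min(sum((a - b) ** 2 for a, b in zip(sp, mp)) for mp in model_points)
        if d2 > threshold ** 2:
            changed.append(sp)
    return changed

model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
scan = [(0.05, 0.0, 0.0), (1.0, 0.05, 0.0), (5.0, 5.0, 0.0)]  # last point: new structure
report = detect_differences(scan, model, threshold=0.5)  # [(5.0, 5.0, 0.0)]
```

Only the residual points, rather than the full scan, would be streamed back, keeping the uplink bandwidth small.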
- the method enables a maintainable, sustainable, scalable, and deliverable representation of earth's natural and urban environments for the purposes of high-precision navigation.
- the primary navigation routes represented by primary visibility event navigation packets delivered from the server are specifically calculated to avoid low-altitude helicopter traffic by incorporating ADS-B (Automatic Dependent Surveillance-Broadcast) information into the route planning.
- the server uses this dynamic information about helicopter and other low altitude aircraft, as well as information about helicopter routes and landing pads, to compute and stream routes for autonomous vehicles that avoid other low altitude aircraft.
- the current position of any aircraft being controlled by the server, as determined using the visibility event navigation system, as well as the planned route, could potentially be incorporated into the ADS-B system or other commercial systems (e.g., commercial systems such as FOREFLIGHT® or GARMINPILOT®), which would make the position of the controlled aircraft available to other aircraft through these networks.
- small unmanned aerial vehicles such as small drones operating in a commercial application of package delivery, are given visibility event navigational packets corresponding to flight routes that are intended to survey regions for wires, cables, towers, cranes and other hazards to low altitude aircraft such as helicopters.
- the relevant visibility event navigation data is made available to piloted client devices not being controlled by the visibility event navigation system, for the purpose of obstacle avoidance.
- the server can then compute real-time collision courses and issue proximity or impending collision warnings to the client device.
- the server streams the relevant visibility event packets so that the computation can be made on board the piloted client device. By prefetching all the visibility event packets required for an intended navigational route, a continuous connection between the client device and the server is unnecessary to provide the collision warning service.
- a server is configured to deliver visibility event packets to certified client devices which present authenticated credentials using a secure process of authentication, such as a cryptographic ticket.
- the transmission of visibility event packets to certified clients ensures that the client device has met certain requirements for navigating along the intended route. In some aspects, this can allow autonomous navigation in airspace to client devices including operators and airframes that meet specified requirements for safe operation, such as airworthiness.
- the visibility event navigational data stream is made available through the provenance of an issuing navigational authority. Additionally, the visibility event navigational data stream is encrypted, authenticated, registered, and transactional.
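One common realization of such a cryptographic ticket (offered as an assumption, since the disclosure does not specify a scheme) is an HMAC issued by the navigational authority over the client and route identifiers; the key name and identifier formats below are hypothetical:

```python
import hmac
import hashlib

SERVER_KEY = b"issuing-authority-secret"  # held only by the issuing authority

def issue_ticket(client_id: str, route_id: str) -> str:
    """Authority signs the (client, route) pair when credentials are verified."""
    msg = f"{client_id}:{route_id}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

def verify_ticket(client_id: str, route_id: str, ticket: str) -> bool:
    """Server recomputes the HMAC before releasing visibility event packets."""
    expected = issue_ticket(client_id, route_id)
    return hmac.compare_digest(expected, ticket)

t = issue_ticket("drone-42", "route-7")
ok = verify_ticket("drone-42", "route-7", t)    # accepted
bad = verify_ticket("drone-99", "route-7", t)   # rejected: ticket not issued to this client
```

`hmac.compare_digest` is used rather than `==` to avoid timing side channels during verification.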
- a client device such as an aircraft or vehicle employing the visibility event navigation client system can use onboard sensors to detect uncertified/unregistered objects operating in the vicinity of the certified/registered client device, and to report the location of the intruding object to the server.
- the implementation of the visibility event navigation system could enhance the ability of the Federal Aviation Administration (FAA), the Department of Homeland Security (DHS), or similar issuing authority to control and monitor low-altitude airspace.
- a sustainable revenue stream can be generated by the issuing authority operating the visibility event navigation stream through modest, per-trip transaction fees for autonomous or semi-autonomous client devices using the visibility event navigation stream in the controlled airspace.
- the navigating client device receives visibility event packets for the purpose of high-precision navigation within a prescribed flight route.
- the client device can be required to transmit its location to the server delivering the packets at predetermined intervals.
- the server can be configured to periodically or continuously deliver alternate visibility event packets including visible surfaces that are encoded as visibility event packets and located along one or more flight routes from a current location of the client device to a safe landing zone. In doing so, flight safety can be reinforced by providing data that enhances the ability of the aircraft to make a safe landing at any point in time.
- the client device is a piloted aircraft, wherein the visibility event data representing an alternate route to one or more safe landing zones is immediately available to the pilot in case of engine failure or other circumstances that necessitate an immediate safe landing.
- the safe landing zone data is used by the pilot to identify reachable safe landing zones.
- the client device may be controlled by an autopilot or other system which is instructed to follow one or more of the alternate routes supplied as a visibility event navigational stream to a safe landing zone.
- the client device may be autonomous or semiautonomous, or have an autonomous navigation system in which the visibility event packets supply information used by 3D map-matching or other computer vision navigation systems that employ a high precision 3D representation of the environment in which the navigation will occur.
- a visibility event navigation system includes a command within the client device to follow an alternative navigational route to safe landing in case the client device loses communications with the server.
- the server can be configured to determine if the location of the client device deviates from the specified flight route, and if the client device is determined to deviate, the server transmits a command to the client device which causes the aircraft to divert to the alternate route described by the alternate route visibility event packets, thus making a safe landing in a predetermined safe landing zone.
- the alternate route defined by the visibility event navigation packets that the client device navigates along, either from a default on-board instruction or from an instruction explicitly transmitted from the visibility event navigational server, may not necessarily be the closest landing zone. Instead, the alternate route may be selected based on other safety criteria, such as proximity to people, ground vehicles, or proximity to security-sensitive buildings or transportation routes.
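The selection among alternate landing zones described above can be sketched as a weighted scoring over safety criteria (the weights and zone attributes here are hypothetical placeholders, not values from the disclosure):

```python
def choose_landing_zone(zones):
    """Score candidate safe landing zones; lowest score wins. Nearness
    matters, but proximity to people or security-sensitive sites is
    penalized, so the closest zone is not necessarily chosen."""
    def score(z):
        return (z["distance_km"]
                + 10.0 * z["people_nearby"]
                + 20.0 * z["security_sensitive"])
    return min(zones, key=score)["name"]

zones = [
    {"name": "field_near", "distance_km": 2.0, "people_nearby": 1, "security_sensitive": 0},
    {"name": "field_far",  "distance_km": 5.0, "people_nearby": 0, "security_sensitive": 0},
]
best = choose_landing_zone(zones)  # the farther but safer zone is selected
```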
- the server delivers visibility event navigation packets to visibility event navigation systems including client devices such as commercial aircraft, package delivery drones, and the like.
- the visibility event navigation system can deliver visibility event navigation packets from the server to client devices used by law enforcement agencies, for example surveillance drones.
- the visibility event navigation system facilitates a dual-use in which commercial aircraft may optionally assist law enforcement agencies by optionally receiving visibility event navigation packets which define a route to an incident of threatened public safety, and optionally receive an instruction to navigate along this route to the incident area.
- the transmission of navigational data as visibility event packets for navigation along a predetermined route includes a method of enhanced predictive prefetching as described in copending parent application Ser. No. 15/013,784.
- the bandwidth required to stream visibility event navigational packets is reduced when the desired navigational route, on the ground or in the air, is known before packet transmission.
- prior knowledge of the navigational route improves the efficiency of navigation-driven prefetch of visibility event packets, whether the packets are being delivered for visualization, for example in entertainment or serious visualization applications, or for navigation such as visibility event navigational data for use in 3D map-matching or other robotic vision-based navigation systems.
- the prior knowledge of navigational intent along a specific navigational route limits the number of visibility event packets that must be prefetched, since navigational uncertainty is reduced, and the ability to deterministically, rather than probabilistically, prefetch packets corresponding to viewcell transitions far ahead of the navigating client is increased.
- a deterministic prefetch approach enables efficient streaming of visibility event navigational packets over high latency and low bandwidth networks.
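Deterministic prefetch along a known route can be sketched as follows (the viewcell identifiers and lookahead depth are illustrative): because the ordered viewcell sequence is known in advance, the server can schedule the exact viewcell-transition packets each position requires, with no probabilistic speculation.

```python
def prefetch_schedule(route_viewcells, lookahead=2):
    """For each viewcell on the known route, deterministically list the
    viewcell-transition packets the client should hold before arrival."""
    schedule = {}
    for i, cell in enumerate(route_viewcells):
        transitions = []
        for j in range(i, min(i + lookahead, len(route_viewcells) - 1)):
            transitions.append((route_viewcells[j], route_viewcells[j + 1]))
        schedule[cell] = transitions
    return schedule

route = ["A", "B", "C", "D"]
plan = prefetch_schedule(route, lookahead=2)
# While in viewcell "A", the client holds packets for A->B and B->C.
```

Because the schedule is fixed ahead of time, packets can be delivered far in advance over high-latency or intermittent links, or downloaded entirely before departure.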
- FIG. 1 is an exemplary illustration of a visibility event navigation system 100 , according to certain aspects.
- the visibility event navigation system 100 provides for transmitting and receiving navigational data as visibility event packets between a client device 108 and a server 106 .
- the visibility event navigation system 100 can include a network 102 , a sensor 104 , a server 106 , and a client device 108 .
- the sensor 104 can be one or more sensors 104 and is connected to the server 106 and the client device 108 via the network 102 .
- the sensor 104 can include a LiDAR sensor, a SONAR sensor, an optical sensor, and the like.
- the sensor 104 can be directly connected to the client device 108 , as in the case of the sensor 104 being directly employed by the client device 108 for 3D map-matching, computer vision surface matching methods, surface feature matching methods, and the like.
- the sensor 104 can be utilized by circuitry of the client device 108 to identify and track moving objects in an environment. Further, the sensor 104 can be employed by the client device 108 to acquire information representing visible surfaces of the environment surrounding the client device 108 .
- the 3D surface information acquired by the sensor 104 , via the circuitry of the client device 108 is matched to 3D surface information of a visibility event packet that is delivered from the server 106 to the client device 108 in order to determine the position or location of the client device 108 within the environment.
- the server 106 can include one or more servers 106 and is connected to the sensor 104 and the client device 108 via the network 102 .
- the server 106 includes processing circuitry that can be configured to receive position information of the client device 108 from the processing circuitry of the client device 108 .
- the circuitry of the server 106 can be configured to transmit visibility event packets to the client device 108 when the position of the client device 108 is located within a predetermined navigational route.
- the server 106 can include network nodes that transmit information to one or more client devices 108 at different positions along different navigational routes.
- the client device 108 can include one or more client devices 108 and is connected to the sensor 104 and the server 106 via the network 102 .
- the client device 108 can include an autonomous aircraft, a semiautonomous aircraft, a piloted aircraft, an autonomous ground vehicle, a semiautonomous ground vehicle, and the like.
- the client device 108 includes processing circuitry that can be configured to detect surface information representing visible surfaces of a real environment the client device 108 is located in. In certain aspects, the circuitry of the client device 108 utilizes the sensor 104 to determine the surface information.
- the circuitry of the client device 108 can also be configured to calculate a position of the client device 108 in the environment by matching the surface information to visibility event packet information corresponding to visibility event packets representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, in which the first and second viewcells represent spatial regions of a navigational route within the environment that are modeled by the geospatial model.
- the circuitry of the client device 108 can be configured to transmit the position information of the client device 108 to the server 106 and receive visibility event packets from the server 106 when the position of the client device 108 is within the navigational route.
- the processing circuitry of the server 106 can be configured to perform the methods implemented by the processing circuitry of the client device 108 , as described herein.
- the network 102 can include one or more networks and is connected to the sensor 104 , the server 106 , and the client device 108 .
- the network 102 can encompass wireless networks such as Wi-Fi, BLUETOOTH, cellular networks including EDGE, 3G and 4G wireless cellular systems, or any other wireless form of communication that is known, and may also encompass wired networks, such as Ethernet.
- FIG. 2A is an exemplary illustration of an exemplary viewpoint and a corresponding exemplary view frustum having a 90 degree horizontal field of view 200 , according to certain aspects.
- 3D map-matching, computer-vision based surface methods, feature matching methods, and the like employ 2D and/or 3D scans of a real environment (for example, photographic or LiDAR) acquired from a single scan point.
- the scan point can also be referred to as viewpoint, as discussed herein.
- the scanning method used to obtain a representation of the real environment may have a limited directional extent.
- the directional extent of the scan method can also be referred to as the scan frustum or view frustum, as discussed herein.
- the average or principal direction of the scan can also be referred to as the view direction vector, as discussed herein.
- the viewpoint 202 includes a corresponding view frustum 204 that includes a view direction vector for processing visibility event packets.
- the use of view direction vectors to compute smaller visibility event packets can be useful for streaming to client devices having a fixed or limited view direction vector in real environments.
- the visibility event navigation can include acquiring 3D surface information of a real environment, acquired by a sensor 104 in communication with a client device 108 , and matching the 3D surface information of at least one visibility event packet to determine the position of the client device 108 in the environment.
- the view direction vector can be pointed in any direction for any viewpoint within any viewcell, corresponding to a view direction vector of a predetermined field of view.
- FIG. 2A shows a viewpoint 202 , and a corresponding view frustum having a 90 degree horizontal field of view 204 .
- FIG. 2B is an exemplary illustration of a conservative current maximal viewpoint extent (CCMVE) 206 of penetration into a viewcell from a known position after 166 ms of elapsed time using the exemplary view frustum 210 having a 90 degree horizontal field of view 200 , according to certain aspects.
- FIG. 2B shows a top-down view of a 90 degree horizontal field of view frustum 210 enveloping the CCMVE-5 206 .
- FIG. 2B also shows a conservative current maximal viewpoint extent 206 , CCMVE-5, of penetration into the viewcell 208 from a known position after 166 ms of elapsed time.
- the CCMVE-5 206 is determined from a last known position and the maximal linear and angular velocity and acceleration of the viewpoint 202 .
- rotation rates of the frustum 204 approach 130 to 140 degrees per second.
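The conservative bound on viewpoint motion since the last known state can be sketched as a worst-case kinematic envelope (the velocity and acceleration limits below are hypothetical; only the 166 ms interval and the roughly 90 degree-per-second rotation rate follow the figures discussed here):

```python
def ccmve_extent(elapsed_s, v_max, a_max, omega_max_deg):
    """Conservative bound on viewpoint travel distance and view-direction
    rotation since the last known state, assuming worst-case kinematics."""
    max_distance = v_max * elapsed_s + 0.5 * a_max * elapsed_s ** 2
    max_rotation_deg = omega_max_deg * elapsed_s
    return max_distance, max_rotation_deg

# 166 ms of latency; hypothetical limits: 10 m/s, 2 m/s^2, 90 deg/s
dist, rot = ccmve_extent(0.166, v_max=10.0, a_max=2.0, omega_max_deg=90.0)
# rot is about 15 degrees, matching the 15-degree rotation buffer of FIG. 4
```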
- a 90 degree yaw range for scanning the environment is more suitable and enjoyable for the viewing of a spectator.
- FIG. 3 is an exemplary illustration of a conservative from-subregion frustum 300 , according to certain aspects.
- FIG. 3 shows that the resulting conservative from-subregion frustum 304 is larger than the corresponding from-point frustum 306 at viewpoint 302 , even if it is assumed that no view direction vector rotation has occurred, for a CCMVE-5 308 representative of predicted viewcell penetration at 166 ms into a viewcell region 310 .
- FIG. 4 is an exemplary illustration of a conservative from-subregion frustum that results from viewpoint penetration into the viewcell over 166 milliseconds for a CCMVE subregion 400 , according to certain aspects.
- FIG. 4 shows a resulting conservative from sub-region frustum 402 that results from a CCMVE-5 404 representative of viewpoint penetration into the viewcell 406 sub-region over 166 milliseconds, together with rotation of the view direction vector 15 degrees to the right 408 or 15 degrees to the left 410 from an initial view direction vector orientation 412 .
- the server 106 can employ the extended 120 degree frustum 402 (i.e., 120 degree predicted maximum from-subregion frustum) to determine the subset of the visibility event packet data to actually transmit to the client device 108 . This determination is made by determining the set of unsent surfaces of the corresponding visibility event packet that intersect the extended frustum.
- the visibility event packet data is precomputed using the method of first-order from-region visibility.
- the set of surfaces belonging to the corresponding PVS, incrementally maintained using the delta-PVS VE packets, that have not already been sent is maintained using the technique of maintaining the shadow PVS on the server 106 .
- the visibility event packets are precomputed assuming a full omnidirectional view frustum spanning 12.56 steradians of solid angle.
- the server 106 can employ the extended view frustum to cull portions of the precomputed visibility event packet that fall outside of the maximum possible predicted extent of the client device 108 view frustum, as determined from the ping latency and the maximal angular velocity and acceleration of the view frustum, as well as the maximum predicted extent of penetration of the viewpoint into the viewcell.
- This method ensures that all of the potentially visible surfaces are transmitted, while minimizing bandwidth requirements, by deferring the transmission of visibility event packet surfaces that are not within the current conservative extended frustum, or which happen to be backfacing with respect to the conservative current maximal viewpoint extent of penetration into the viewcell.
- the above-disclosed methods comprise determining a conservative representation of the client device's 108 view frustum from the temporal reference frame of the server 106 , and using this extended frustum to cull those surfaces of the corresponding visibility event packet that could not possibly be in the client device's 108 view frustum. Consistent with aspects of the present disclosure, all of the transmitted surface information is represented at the highest level-of-detail.
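A simplified angular version of this cull (restricted to bearings in the horizontal plane, with illustrative surface names; the disclosed method operates on full 3D frusta and backfacing tests) defers surfaces outside the conservative extended frustum:

```python
def cull_surfaces(surfaces, view_dir_deg, extended_half_angle_deg):
    """Send surfaces whose bearing from the viewcell lies inside the
    conservative extended frustum; defer transmission of the rest."""
    send, defer = [], []
    for name, bearing_deg in surfaces:
        # Smallest signed angular difference, folded into [0, 180].
        diff = abs((bearing_deg - view_dir_deg + 180.0) % 360.0 - 180.0)
        (send if diff <= extended_half_angle_deg else defer).append(name)
    return send, defer

surfaces = [("ahead", 5.0), ("left_edge", 55.0), ("behind", 180.0)]
# A 120-degree extended frustum gives a 60-degree half angle about the view direction.
send, defer = cull_surfaces(surfaces, view_dir_deg=0.0, extended_half_angle_deg=60.0)
```

Deferred surfaces are not discarded; they remain in the shadow PVS on the server and are transmitted if later frusta could expose them.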
- the visibility event packets can be encoded using geometric and surface models at a plurality of levels-of-detail, including a plurality of levels of geometric, texture, and other surface detail.
- the visibility event packets can be transmitted at a lower level-of-detail during periods of low bandwidth availability, and/or high bandwidth requirement, in order to maximize the probability that the information encoding newly exposed surfaces arrives on time, such as before the surface is actually exposed in the client device 108 viewport.
- a visibility event packet containing relatively low level-of-detail surface information can initially be transmitted and later replaced by a visibility event packet containing higher level-of-detail information. This exploits the fact that certain 3D map-match systems and/or computer vision systems can have lower precision when matching to newly exposed surfaces and more precision when matching to surfaces that have been exposed for a longer period of time.
- the system has more time to converge on a high-precision match solution for surface elements that are present longer.
- sending elements of a visibility event packet that correspond to newly exposed surfaces for the visibility event navigation system 100 at low level-of-detail saves transmission bandwidth while preserving the efficiency of client-side processing, since the level-of-detail of the transmitted information is matched to the spatiotemporal performance profile of the computer vision or 3D map-matching system.
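The level-of-detail decision described above can be sketched with a simple policy (the bandwidth threshold and two-level scheme are hypothetical simplifications of the plurality of levels the disclosure contemplates):

```python
def select_lod(newly_exposed: bool, bandwidth_kbps: float,
               low_bandwidth_kbps: float = 500.0) -> str:
    """Pick a level-of-detail: newly exposed surfaces and low-bandwidth
    periods get the low-LOD representation, to be refined later once the
    map-match system has had time to converge on the surface."""
    if newly_exposed or bandwidth_kbps < low_bandwidth_kbps:
        return "low"
    return "high"

lod_new = select_lod(newly_exposed=True, bandwidth_kbps=2000.0)   # "low"
lod_old = select_lod(newly_exposed=False, bandwidth_kbps=2000.0)  # "high"
```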
- FIG. 5 is an exemplary illustration of an additional angular region of an extended frustum 500 , according to certain aspects.
- the limited spatiotemporal performance of robotic vision systems including 3D map-matching navigation systems and the similarly limited spatiotemporal performance of the human visual system, can be exploited by sending low level-of-detail surface information if surfaces fall outside the region of the extended view frustum, as determined, using one or more of the ping latency, the maximum viewpoint translation velocity and acceleration, the maximum angular velocity, and acceleration of the view direction vector.
- FIG. 5 shows an additional angular region of extended view frustum 502 that spans an additional 15 degrees 504 on each side of the extended 120 degree frustum 402 shown in FIG. 4 .
- the server 106 transmits surfaces that fall in the subfrustum between 120 degrees and the maximally extended frustum of 150 degrees 504 at a lower level-of-detail than the other visibility event surface data that fall within the 120 degree extended frustum.
- the disclosed method thus provides a region of uncertainty between 90 degrees and 120 degrees of the subfrustum 506 , as well as an additional buffer region against view direction vector rotation between 120 degrees and 150 degrees of the subfrustum 504 , which may be useful if the directional visibility gradient (i.e., the rate of exposure of surfaces per degree of view direction vector rotation) is high, or if the available bandwidth has a high degree of variability, such as network jitter.
- the low level-of-detail surface information can potentially be replaced by a higher level-of-detail representation.
- FIG. 6 is an exemplary illustration of a top-down view of a view frustum having a horizontal field of view of 90 degrees and undergoing rotation in the horizontal plane at a rate of 90 degrees per second 600 , according to certain aspects.
- FIG. 6 shows a top-down view of a view frustum having a horizontal field of view of 90 degrees 602 , and undergoing rotation in the horizontal plane at a rate of 90 degrees per second 604 in a direction from a first region 606 toward a fourth region 612 .
- surfaces to the right-hand side of the view frustum 602 will undergo incursion into the rotating frustum at a first region 606 , whereas surfaces near the left-hand extreme of the view frustum 602 at the first region 606 will exit the frustum 602 during frustum rotation.
- those surfaces in the first region 606 have been in the frustum 602 for between 750 ms and 1000 ms as a consequence of exposure via the second region 608 , the third region 610 and the fourth region 612 during the rotation.
- in the second region 608 , the surfaces have been in the frustum 602 for between 500 ms and 750 ms; in the third region 610 , the surfaces have been in the frustum 602 for between 250 ms and 500 ms; and in the fourth region 612 , the surfaces have been in the frustum 602 for between 0 ms and 250 ms. Surfaces that have been in the frustum 602 for only a brief period of time have also been exposed to the graphical display in communication with the client device 108 for a concomitantly brief period of time.
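The time a static surface has spent inside the rotating frustum of FIG. 6 follows directly from its angular offset behind the leading (incursion) edge. A short sketch, with the function name and parameterization as illustrative assumptions:

```python
def time_in_frustum_ms(offset_from_leading_edge_deg,
                       rotation_rate_deg_s=90.0, fov_deg=90.0):
    """Time in milliseconds that a static surface has been inside a frustum
    rotating at a constant rate, given the surface's current angular offset
    behind the leading edge. Returns None if the surface is outside the
    frustum. Defaults match the 90-degree FOV rotating at 90 deg/s in FIG. 6."""
    if not (0.0 <= offset_from_leading_edge_deg <= fov_deg):
        return None
    return offset_from_leading_edge_deg / rotation_rate_deg_s * 1000.0
```

With these defaults each 22.5-degree region corresponds to a 250 ms band, reproducing the 0-250 ms through 750-1000 ms bands of the four regions.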
- FIG. 7 is an exemplary illustration of a server representation of client viewpoint position and orientation 700 , according to certain aspects.
- the visibility event navigation system 100 can be used for streaming visibility event data as well as other data to a navigating client device 108 such as a vehicle or aircraft.
- the client device 108 can be autonomous, semi-autonomous, on-board pilot or driver operated, remotely operated, and the like.
- the visibility event navigation system includes a server 106 that can be configured to employ a navigation-based prefetch scheme in which the entire specified navigational route to an intended destination is predetermined.
- all of the corresponding visibility event navigational packets for the entirety of the navigational route, can be transmitted by the circuitry of the server 106 to the client device 108 as a download prior to initiating navigation.
- all of the visibility event navigational packets can be streamed to the client device 108 during an initial portion of the trip, in which the visibility event packets being streamed correspond to viewcell boundaries that may be penetrated during movement of the client device 108 within the specified 3D navigational route.
- the visibility event navigational server 106 and client device 108 do not require a network connection that is constantly available. However, if the specified navigational route is changed by the server 106 due to traffic, environmental changes, or any other reasons for re-routing, the previous set of visibility event navigational packets, corresponding to the original specified navigational route beyond the point of rerouting, may be delivered unnecessarily, making inefficient use of the available network bandwidth.
- the visibility event navigational packets can be streamed in a demand-paged mode in which the server 106 monitors the actual location of the navigating visibility event client device 108 from self-position reports of the client device 108 , sensor 104 data, and/or position reports of other participating navigating client devices, and streams the corresponding visibility event navigation packet to the navigating client device 108 just before the packet is needed.
- a prefetch method which balances the constraints of reducing bandwidth requirements while reliably preventing visibility event cache underflow on the client device 108 can be used.
- the server 106 prefetches the visibility event packets using a navigation-predictor agent that is virtually navigating (within the server-side environmental model) at a specified distance or a specified time ahead of the client device 108 (navigating in the corresponding real environment).
- the server-side navigation-predictor agent can be controlled by the server 106 to maintain a position that is ahead of the actual navigating client device's 108 corresponding position by a defined period which exceeds the round-trip time network delay between the server 106 and the client device 108 .
- the defined period is short enough to allow the intended navigational route to be changed at any time before too much bandwidth has been committed to delivering information for a diverted route.
- the server 106 and/or the client device 108 can modulate velocity along the intended navigational route in order to avoid visibility event cache underflow.
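The two constraints above, keeping the predictor agent far enough ahead to hide the round-trip delay while modulating velocity to avoid cache underflow, can be sketched as follows. The watermark thresholds and scaling policy are illustrative assumptions, not part of the disclosed method:

```python
def prefetch_lead_ok(lead_time_s, rtt_s):
    """The predictor agent must run ahead of the client by a period that
    exceeds the round-trip network delay."""
    return lead_time_s > rtt_s

def modulated_velocity(current_velocity, cache_fill_ratio,
                       low_water=0.25, high_water=0.75):
    """Slow the client when the visibility event cache nears underflow and
    resume full speed once the cache refills. Watermarks are assumptions."""
    if cache_fill_ratio < low_water:
        return current_velocity * 0.5   # let prefetch catch up
    if cache_fill_ratio > high_water:
        return current_velocity         # plenty of buffered packets
    # Linearly scale between the watermarks.
    scale = 0.5 + 0.5 * (cache_fill_ratio - low_water) / (high_water - low_water)
    return current_velocity * scale
```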
- This variable-delay navigational prefetch method is described in the copending parent application Ser. No. 15/013,784.
- the navigational prefetch method for streaming visibility event navigational packets supports 3D map-matching and other computer vision based navigation in real environments.
- the server representation of client position and orientation 700 describes a method of navigation-based prefetch of visibility event navigational packets in which the area and/or volume of navigational prediction collapses to a precise navigational route.
- the server representation of client position and orientation 700 includes a current location of the navigating client 702 and a current frustum 704 .
- the frustum can represent the scan frustum of a LiDAR scanner, an active sensor, a passive sensor, and the like.
- the current position 702 follows a navigation route indicated by a future position 714 . In certain aspects of the present disclosure, the current position 702 follows two seconds behind a future position 714 .
- the current position 702 can provide a first frustum 704 which indicates the field of view of the client navigational sensor 104 at the current position 702 .
- the circuitry of the server 106 modifies the position of the future position 714 directly.
- the future positions act as a virtual navigational prefetch agent, moving ahead on the pursuit route 710 in the modeled environment, which corresponds to the client device's 108 navigational route in the real environment.
- the fields of view are processed by the circuitry of the client device 108 to match the sensor data, obtained within the frustum to the portions of the environmental model, delivered by the visibility event packets from the server 106 , that are within a corresponding virtual frustum.
- the current position 702 , the position at the second location 706 and the future position 714 travel along the intended navigational route 710 , which corresponds to the commands received by the circuitry of the server 106 . Additionally, the circuitry can be configured to pre-signal the intended navigational route with respect to the location of the future position 714 .
- the server representation of client position and orientation 700 can predict navigational intent with minute uncertainty, such as 166 milliseconds. The low value of uncertainty allows network latency to be concealed when the server updates the future position 714 .
- the future position 714 is illustrated as being the position of the server's virtual navigational prefetch agent at a time 166 ms after a coordinated current time.
- the current instantaneous state of the real navigational environment is known to the virtual navigational prefetch agent of the server 106 with a dilution of precision that reflects the round trip time (RTT) between the server 106 and the sensor 104 utilized to report locations of objects in the real environment.
- the position and orientation of the navigating client at the current position 702 are known to the server 106 with a dilution of precision that reflects the 166 ms RTT between the client device 108 and the server 106 .
- the circuitry of the server 106 can determine the position and orientation of the future position to within ½*RTT; however, any visibility event packet transmitted from the server 106 to the client device 108 will be delayed an additional ½*RTT, limiting the effective precision bounds to the full RTT. Consistent with certain aspects of the present disclosure, it is assumed that the dilution of precision is determined by the full RTT.
- the circuitry of the server 106 determines representations of the future position 714 and transmits the representations to the client device 108 .
- the representations can be less than RTT old (i.e., less than RTT time has elapsed since the current time) and have a dilution of precision that is a function of (RTT − elapsed time).
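The positional component of the cone of uncertainty described here can be sketched as a radius that grows with the staleness of the reported state, clamped to the full RTT per the assumption stated above. The function name and the use of a single maximum-speed bound are illustrative assumptions:

```python
def position_uncertainty_m(max_speed_m_s, staleness_s, rtt_s):
    """Radius (meters) of the positional cone of uncertainty for a reported
    client state. Staleness is clamped to the full round-trip time, reflecting
    the assumption that dilution of precision is bounded by the RTT."""
    effective_staleness = min(max(staleness_s, 0.0), rtt_s)
    return max_speed_m_s * effective_staleness
```

For a client bounded at 20 m/s with a 166 ms RTT, the server's effective positional uncertainty is therefore at most about 3.3 meters.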
- the server 106 representation of the future position 714 includes a future frustum 716 and corresponding probabilistic prefetch 718 which can be utilized to determine a predictive route of uncertainty.
- This predictive route of uncertainty utilizes the probabilistic prefetch 718 , in which the future position 714 and orientation states for which less than RTT time has elapsed from the current time are shown as a dashed trajectory with a future frustum 716 , otherwise referred to as a superimposed navigational cone of uncertainty, reflecting the server's 106 effective uncertainty of the position and/or orientation of the future position 714 .
- the server's 106 representation of the current position 702 is undiluted by the RTT uncertainty, and the server's 106 representation of the deterministic prefetch 712 portion of the position 710 can be represented by the server 106 as a space curve with a specific view direction vector for each position on the space curve.
- the predictability of navigation is enhanced.
- the area and/or volume of the corresponding current frustum 704 and the second frustum 708 are decreased.
- the decrease in area and/or volume of the current frustum 704 and the second frustum 708 further places limits onto the predicted position of navigation 710 , effectively decreasing the bandwidth required for visibility event packet streaming.
- the predicted position 710 can be utilized to defer the transmission of significant portions of the visibility event packets.
- the visibility event protocol defined by a navigation driven predictive prefetch of precomputed visibility event packets is an incremental and progressive method of streaming navigational data.
- a series of partial and/or deferred visibility event packets that reflect predicted sequences of viewcell-to-viewcell boundaries are processed via the circuitry of the server 106 .
- runtime conservative view frustum culling methods are employed in which some parts of a particular visibility event packet, corresponding to a first viewcell boundary, may go untransmitted, even as the position penetrates later transited viewcell boundaries.
- the prefetch of visibility event packets essentially becomes deterministic. This reduces the bandwidth required to stream the visibility event packets, while enabling the virtual navigational prefetch agent of the server 106 to respond to changes occurring along the intended navigational route by transmitting a new stream of visibility event packets corresponding to a diverting route in a time period between 166 ms and 2000 ms, as shown in FIG. 7 .
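The partial/deferred packet delivery driven by conservative view frustum culling can be sketched as a simple partition of a packet's surfaces. This is a hypothetical illustration: the predicate is supplied by the caller, and the names are assumptions rather than the disclosed runtime culling method:

```python
def partition_packet(surfaces, predicted_frustum_contains):
    """Split a visibility event packet into the part transmitted now and the
    part deferred, by conservatively culling against the predicted frustum.
    `predicted_frustum_contains` is a caller-supplied containment predicate;
    deferred surfaces may be transmitted later, or never, if the client's
    route keeps them outside every subsequently predicted frustum."""
    send_now, deferred = [], []
    for surface in surfaces:
        (send_now if predicted_frustum_contains(surface) else deferred).append(surface)
    return send_now, deferred
```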
- FIG. 8 describes a high-level architectural block diagram of the visibility event navigation system 800 , according to certain aspects.
- the visibility event navigation system 100 can be embodied as a navigation system on board a human-operated, autonomous, or semi-autonomous aircraft or ground vehicle.
- FIG. 8 illustrates an exemplary navigation system including a client device 108 , such as one on board an autonomous quadrotor aircraft, in various stages of flight corresponding to locations 820 , 822 , 824 and 826 .
- the visibility event navigation system 100 can include a network 102 including a plurality of network nodes 813 , 815 and 839 .
- the network 102 including network nodes 813 , 815 and 839 can include wireless networks such as Wi-Fi, BLUETOOTH, cellular networks including EDGE, 3G and 4G wireless cellular systems, IMS, or any other wireless form of communication that is known, and may also encompass wired networks, such as Ethernet.
- the client device 108 transmits a request for a navigational route to an intended destination which may be an individual home, a business building, a porch, a rooftop, an inside of a structure, and the like.
- Transmission of the request 805 to the server 106 employs a network node 813 .
- the server 106 authenticates the identity of the client device 108 and transmits 807 initial visibility event navigation data corresponding to the visibility event packets for viewcells on the navigational route to the intended destination.
- the navigational route can include a portion of the environment represented by a set of connected viewcells within the navigable 3D space of the environment and a corresponding model of the environment.
- FIG. 8 further shows the client device 108 at position 824 .
- Position 824 depicts a location that is not on the navigational route to the intended destination, but on a route 840 that substantially deviates from the navigational route.
- After receiving the transmission 835 from the client device 108 at position 824 via the network node 839 , the server 106 determines whether the client device 108 is located on the navigational route to the intended destination. If the client device 108 is determined to not be on the navigational route, the server may transmit a signal 837 from the network node 839 to the client device 108 at position 824 including a safe landing command.
- the safe landing command causes the client device 108 to utilize the visibility event packets for the alternate route to a safe landing zone to navigate along the route to the safe landing zone 840 and land at position 826 .
- this safe landing command may be implemented as a default behavior of the visibility event navigation system in instances where communication between the client device 108 and the server 106 is interrupted for a predetermined period of time.
- the server 106 in communication with the client device 108 authenticates the request of the client device 108 .
- the server 106 is configured to deliver visibility event packets to certified client devices 108 which present authenticated credentials using a secure process of authentication, such as a cryptographic ticket. The transmission of visibility event packets to certified client devices 108 ensures that the client device 108 has met certain requirements for navigating along the intended route.
- the circuitry of the server 106 transmits visibility event navigational packets to the client device 108 .
- the initial navigational packets may include a subset of the entire set of visibility event packets corresponding to the contiguous viewcells of the navigable space representing the specified navigational route to the intended destination.
- the circuitry of the server 106 transmits visibility event data representing the set of visibility event packets corresponding to the contiguous viewcells of the navigable space representing the route to a safe landing zone.
- the circuitry of the client device 108 employs the relevant visibility event packet navigational data and 3D sensor data to determine location of the client device 108 within the real environment using 3D map-matching.
- the matching includes other matching methods, computer vision methods, and the like.
- the matching may result in a position relative to the relevant obstacle or target surfaces in the real environment.
- the determination can also result in latitude, longitude, and/or altitude calculations if the environmental model data of the visibility event packets is geo-registered.
- the circuitry of the client device 108 transmits information corresponding to the location of the client device 108 in the real environment to the server 106 .
- the location information can include latitude, longitude, and/or altitude calculations if the environmental model data of the visibility event packets is geo-registered.
- the server 106 determines if the transmitted location of the client device 108 is on the navigational route.
- the determination can employ multiple parameters to determine if the location of the navigating client device 108 substantially deviates from the navigational route.
- the parameters can include the average distance from the navigational route over a predetermined period of time in which the desired route is defined as a navigational route bounded by an allowed region of navigational uncertainty reflecting a known or desired precision in navigation. If the client device is determined to be located along the navigational route, resulting in a “yes” at step 912 , the visibility event navigation process 900 proceeds to step 914 . Otherwise, if the client device is determined to not be located along the navigational route, resulting in a “no” at step 912 , the visibility event navigation process 900 proceeds to step 918 .
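The deviation determination described above, comparing the average distance from the route over a recent window against the allowed region of navigational uncertainty, can be sketched as follows. The window semantics and threshold comparison are illustrative assumptions:

```python
def deviates_from_route(distances_m, allowed_uncertainty_m, window=None):
    """Decide whether a client has substantially deviated from its route.
    `distances_m` holds successive distance-from-route samples; only the
    most recent `window` samples are averaged when a window is given. The
    client deviates when the average exceeds the allowed uncertainty."""
    samples = distances_m[-window:] if window else distances_m
    if not samples:
        return False  # no position reports yet; assume on-route
    return sum(samples) / len(samples) > allowed_uncertainty_m
```

In the flow of FIG. 9, a result of False corresponds to the "yes" branch at step 912 (continue prefetching along the route) and True corresponds to the "no" branch (divert to the safe landing procedure).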
- the circuitry of the server 106 transmits additional visibility event navigation packets corresponding to viewcells that are farther along the specified navigational route to the client device 108 .
- the client device 108 receives the instructions transmitted from the server 106 and continues to navigate along the specified navigational route.
- the visibility event navigation process 900 ends after completing step 916 .
- In other aspects, the visibility event navigation process 900 proceeds to step 910 .
- the circuitry of the server 106 transmits a command to the client device 108 in which the client device 108 is provided with instructions to navigate on an alternate route towards a safe landing zone and land at the safe landing zone.
- the client device 108 navigates along the alternate route and lands at the safe landing zone.
- the visibility event navigational client executes an internal instruction to follow an alternate route to a safe landing zone and land if communication with the visibility event navigational server is interrupted.
- the visibility event navigation process 900 ends upon the completion of step 920 . In other aspects, the visibility event navigation process 900 proceeds to step 910 .
- FIG. 10 is an algorithmic flowchart of a visibility event navigation difference determination process 1000 , according to certain exemplary aspects.
- the visibility event navigation difference determination process 1000 describes a client device 108 navigating along a navigational route and the determination of a deviation of the client device 108 from the navigational route.
- the circuitry of the client device 108 transmits a request to the server 106 for visibility event packets.
- the visibility event packets can be navigational packets that correspond to the navigational route that the client device 108 intends to traverse.
- the server 106 is in communication with the client device 108 and authenticates the request of the client device 108 .
- the server 106 is configured to deliver visibility event packets to certified client devices 108 which present authenticated credentials using a secure process of authentication, such as a cryptographic ticket. The transmission of visibility event packets to certified client devices 108 ensures that the client device 108 has met certain requirements for navigating along the intended route.
- the circuitry of the server 106 transmits visibility event navigational packets to the client device 108 .
- the initial navigational packets may include a subset of the entire set of visibility event packets corresponding to the contiguous viewcells of the navigable space representing the specified navigational route to the intended destination.
- the circuitry of the client device 108 determines a difference between the environmental model data delivered by the visibility event packets and the corresponding 3D ground-truth, as determined using the 3D sensor data of a sensor 104 that is in communication with the client device 108 .
- the difference determination occurs during the 3D map-matching in which the environmental model data supplied by the visibility event packets and the 3D data acquired by the sensor 104 are used to determine the location of the client device 108 within the real environment.
- a caution state in navigation is initiated at the client device 108 via the circuitry of the client device 108 .
- the difference between the representation of the environment and the ground truth scan from the sensor 104 results in the circuitry utilizing the visibility event packet data with caution in the immediate area in which the client device 108 is located.
- the visibility event navigation can employ less precise navigational methods such as SLAM, GPS, and the like. As such, the more precise 3D map-matching or other computer vision based navigation can resume when there is sufficient match between the delivered model data and the real-time sensor data that does not surpass a predetermined difference threshold.
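The caution-state fallback between precise 3D map-matching and less precise methods can be sketched as a mode selection driven by a mismatch metric. The metric, threshold, and mode names are illustrative assumptions:

```python
def navigation_mode(mismatch_fraction, caution_threshold=0.1):
    """Select the active navigation method. When the delivered model data
    sufficiently matches the live sensor scan, precise 3D map-matching is
    used; when the mismatch exceeds the threshold, the caution state falls
    back to less precise methods (e.g., SLAM or GPS) until the match
    recovers. The 10% default threshold is an assumption."""
    if mismatch_fraction > caution_threshold:
        return "caution"        # use SLAM/GPS until the model matches again
    return "map_matching"
```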
- the difference information obtained by the circuitry of the client device 108 is transmitted from the client device 108 to the server 106 .
- the visibility event navigation system 100 exploits the fact that the amount of information generally required to represent a difference over any modest period, for example in an urban environment, is very small compared to the amount of information required to represent the urban environment. As such, this small amount of difference information can be transmitted as raw point cloud data or could be first semi-processed, for example into a voxel representation, or into a procedural parametric representation.
- the difference information is processed into polygon or procedural surfaces that can be encoded as visibility event packets for transmission to the server 106 .
- verification of the difference information can include the server 106 receiving substantially the same or similar difference information, for the same corresponding region of the navigated environment, from a plurality of client devices 108 over a predetermined period of time.
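The multi-client verification described above can be sketched as requiring corroborating reports for the same region from several distinct client devices. The report representation and corroboration count are illustrative assumptions:

```python
from collections import defaultdict

def verified_regions(reports, min_clients=3):
    """Verify difference reports by requiring substantially the same
    difference, for the same region of the navigated environment, from at
    least `min_clients` distinct client devices. `reports` is an iterable
    of (region_id, client_id) pairs; repeated reports from the same client
    count only once."""
    clients_per_region = defaultdict(set)
    for region, client in reports:
        clients_per_region[region].add(client)
    return {region for region, clients in clients_per_region.items()
            if len(clients) >= min_clients}
```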
- the significance of the difference can include spatial metrics, or other metrics that weigh the importance of the change to navigation.
- the change information can indicate a new structure that impedes navigation along one or more navigation routes.
- the significance of the change information can include metrics of saliency to the client device 108 including 3D map-matching or other computer vision algorithms that use the real surfaces and the corresponding environmental 3D model data, delivered as visibility event packets.
- the visibility event navigation difference determination process 1000 proceeds to step 1016 . Otherwise, if the delivered environmental change information does not satisfy a predetermined threshold for significance, resulting in a “no” at step 1014 , the visibility event navigation difference determination process 1000 ends.
- the changes to the environmental model are incorporated and encoded as visibility event packets by the circuitry of the server 106 .
- the visibility event packets are transmitted on an on-demand basis to any subsequent client devices 108 in which the changed surfaces are unoccluded from the viewcells defining the current navigational route.
- the transmission employs the same navigation-based predictive prefetch used to transmit the unchanged packets.
- the updated visibility event navigation data is received by a second client device.
- the second client device can include an aircraft or ground vehicle that has entered, or plans to enter regions of the real environment in which the changes detected by the client device 108 have now been encoded as visibility event packets.
- the server 106 is in communication with the client device 108 and authenticates the request of the client device 108 .
- the server 106 is configured to deliver visibility event packets to certified client devices 108 which present authenticated credentials using a secure process of authentication, such as a cryptographic ticket. The transmission of visibility event packets to certified client devices 108 ensures that the client device 108 has met certain requirements for navigating along the intended route.
- the circuitry of the server 106 transmits visibility event navigational packets to the client device 108 .
- the initial navigational packets may include a subset of the entire set of visibility event packets corresponding to the contiguous viewcells of the navigable space representing the specified navigational route to the intended destination.
- the client device 108 navigates along the specified navigational route.
- the circuitry of the server 106 determines if there has been an incident in the real environment.
- the incident may be an event that threatens public safety or security. If an incident is determined to be present, resulting in a “yes” at step 1110 , the visibility event navigation incident determination process 1100 proceeds to step 1112 . Otherwise, if an incident is not determined to be present, resulting in a “no” at step 1110 , the visibility event navigation incident determination process 1100 proceeds to step 1116 .
- visibility event packets corresponding to a route to the incident location are transmitted from server 106 to the client device 108 via the circuitry of the server 106 .
- the transmitted visibility event packets can correspond to one or more ingress or one or more egress routes that are to or from the incident location.
- the circuitry of the server 106 transmits a command to the client device 108 , which instructs the client device 108 to navigate to the incident location or to a location along the ingress/egress routes that are to/from the incident.
- the diversion of the client device 108 to an egress route enables surveillance for incident perpetrators attempting to escape from the incident location.
- the method of precomputing visibility event packets using conservative linearized umbral event surfaces can incorporate identifying the navigable space of the environment and possible routes within the modeled space, including ingress and egress routes.
- visibility event packets delivered in step 460 can include those corresponding to positions from which egress routes and ingress routes are unoccluded and which provide good vantage points for mobile surveillance.
- the circuitry of the server 106 transmits more visibility event navigational packets for the specified route to the client device 108 .
- In certain aspects, the visibility event navigation incident determination process 1100 proceeds to step 1108 upon the completion of step 1116 ; in other aspects, the process 1100 ends upon the completion of step 1116 .
- FIG. 12 is an algorithmic flowchart of a visibility event navigation unauthorized object determination process 1200 , according to certain exemplary aspects.
- the visibility event navigation unauthorized object determination process 1200 describes a process of detecting and responding to unauthorized objects in a real environment.
- the circuitry of the client device 108 transmits a request to the server 106 for visibility event packets.
- the visibility event packets can be navigational packets that correspond to the navigational route that the client device 108 intends to traverse.
- the server 106 is in communication with the client device 108 and authenticates the request of the client device 108 .
- the server 106 is configured to deliver visibility event packets to certified client devices 108 which present authenticated credentials using a secure process of authentication, such as a cryptographic ticket. The transmission of visibility event packets to certified client devices 108 ensures that the client device 108 has met certain requirements for navigating along the intended route.
- the circuitry of the server 106 transmits visibility event navigational packets to the client device 108 .
- the initial navigational packets may include a subset of the entire set of visibility event packets corresponding to the contiguous viewcells of the navigable space representing the specified navigational route to the intended destination.
- the circuitry of the server 106 also transmits information representing the location of authorized client devices 108 operating in the vicinity of the specified navigational route.
- the vicinity includes an area within a predetermined sensor range of the specified navigational route, wherein the sensor range includes the range of a sensor 104 that is in communication with the client device 108 .
- the circuitry of the client device 108 detects unauthorized moving objects in the real environment.
- the real environment includes airspace.
- the method of detection can include one or more methods of detecting moving objects via on-board LiDAR, SONAR, Radar, optical sensors, or any other detection means that are known.
- utilizing the information describing known locations of authorized client devices in the vicinity can improve the detection of unauthorized moving objects.
- the client device 108 can also use the sensor 104 , such as an on-board sensor, to detect, track, and report the location of other authorized client devices 108 , or authorized client devices 108 that have lost communication with the server 106 .
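Using the known locations of authorized client devices to improve unauthorized-object detection can be sketched as filtering detections by proximity to reported authorized positions. The 2D positions, matching radius, and function names are illustrative assumptions:

```python
import math

def unauthorized_detections(detections, authorized_positions, radius_m=5.0):
    """Classify detected moving objects as unauthorized when no authorized
    client device was reported within `radius_m` of the detection. Positions
    are (x, y) pairs in meters; the 5 m matching radius is an assumption."""
    def near(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1]) <= radius_m
    return [d for d in detections
            if not any(near(d, a) for a in authorized_positions)]
```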
- the circuitry of the client device 108 transmits information describing the location of unauthorized moving objects to the server 106 .
- the determination of significance may include size, speed and other parameters of the moving object.
- the verification may include repeated, correlated observations from multiple client devices 108 or other authenticated sources. If the moving object is determined to be significant and/or verified, resulting in a “yes” at step 1212 , the visibility event navigation unauthorized object determination process 1200 proceeds to step 1214 . Otherwise, if the moving object is determined to not be significant or verified, resulting in a “no” at step 1212 , the visibility event navigation unauthorized object determination process 1200 ends.
- the client device 108 initiates pursuit, tracking, or engagement of the unauthorized object by following a pursuit route or intercept route specified by the visibility event packets.
- FIG. 13 is a block diagram of a visibility event navigation system workflow 1300 , according to certain exemplary aspects.
- the visibility event navigation system workflow 1300 describes a high-level architectural block diagram of the visibility event navigation system 100 .
- the visibility event navigation system 100 can be used for streaming visibility event data and other data to navigating client devices 108 such as vehicles, aircraft, and the like.
- the client devices can be autonomous, semi-autonomous, on-board pilot operated, on-board driver operated, remotely operated, and the like.
- the server 1310 of the visibility event navigation system 100 can include circuitry that is configured to process and transmit urban and/or natural environmental model surfaces, content, and the like of a 3D environmental model database 1302 to a client device 108 via a network 102 .
- the circuitry of the server 106 can be configured to deliver the environmental model surfaces and the content of the 3D environmental model database 1302 via a visibility event data stream employing visibility event packets 1306 that are encoded using first-order visibility propagation via a visibility event packet encoder 1304 .
- the visibility event navigation packets 1306 are precomputed from the 3D environmental model 1302 at an earlier time and stored for later use.
- the circuitry of the server 1310 can further be configured to run a visibility event server software 1308 in which the visibility event packets 1306 are processed in a visibility event navigation server 1310 .
- the processed visibility event packets 1306 of the server 1310 can be transmitted to a client device 108 in which visibility event client software 1312 employs the processed visibility event packets 1306 in a 3D map-matching or computer vision based navigation system 1314 .
- the visibility event server software 1308 employs navigation-driven predictive prefetch to transmit the precomputed visibility event packets 1306 .
- specifying a defined navigational route and streaming the visibility event packets 1306 to client visibility event navigation software 1312 increases the predictability of navigation-driven prefetch and reduces the bandwidth required to stream the packets.
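The navigation-driven predictive prefetch described above can be illustrated with a small sketch. This is a hypothetical illustration only, not the patented implementation: the viewcell identifiers, the per-transition packet store, and the fixed lookahead policy are all assumptions introduced for the example.

```python
# Hypothetical sketch of navigation-driven predictive prefetch along a
# predefined route. Because the route is a known sequence of viewcells, the
# server can queue the precomputed packets for the next few viewcell
# transitions ahead of the client's current position.

def prefetch_packets(route_viewcells, packets_by_transition, current_index, lookahead=2):
    """Return precomputed packets for the next `lookahead` viewcell
    transitions along the specified route."""
    to_send = []
    last = min(current_index + lookahead, len(route_viewcells) - 1)
    for i in range(current_index, last):
        transition = (route_viewcells[i], route_viewcells[i + 1])
        if transition in packets_by_transition:
            to_send.append(packets_by_transition[transition])
    return to_send

route = ["cell_A", "cell_B", "cell_C", "cell_D"]
packets = {("cell_A", "cell_B"): "pkt_AB",
           ("cell_B", "cell_C"): "pkt_BC",
           ("cell_C", "cell_D"): "pkt_CD"}
queued = prefetch_packets(route, packets, current_index=0)  # ["pkt_AB", "pkt_BC"]
```

Because the route constrains which transitions can occur next, the lookahead set stays small, which is why a defined route reduces the bandwidth needed for the stream.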
- FIG. 14 is a hardware block diagram of a client device, according to certain exemplary aspects.
- the client device 108 includes a CPU 1400 , which performs the processes described herein.
- the process data and instructions may be stored in memory 1402 .
- These processes and instructions may also be stored on a storage medium disk 1404 such as a hard drive (HDD) or portable storage medium or may be stored remotely.
- the claimed advancements are not limited by the form of the computer-readable media on which the instructions of the inventive process are stored.
- the instructions may be stored on CDs, DVDs, in FLASH memory, RAM, ROM, PROM, EPROM, EEPROM, hard disk, or any other information processing device with which the client device 108 communicates, such as a server or computer.
- claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 1400 and an operating system such as MICROSOFT WINDOWS, UNIX, SOLARIS, LINUX, APPLE MAC-OS, and other systems known to those skilled in the art.
- CPU 1400 may be a XEON or CORE processor from Intel of America or an OPTERON processor from AMD of America, or may be other processor types that would be recognized by one of ordinary skill in the art.
- the CPU 1400 may be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize.
- CPU 1400 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
- the client device 108 in FIG. 14 also includes a network controller 1406 , such as an Intel Ethernet PRO network interface card from Intel Corporation of America, for interfacing with a network 102 .
- the network 102 can be a public network, such as the Internet, or a private network such as a LAN or WAN, or any combination thereof, and can also include PSTN or ISDN sub-networks.
- the network 102 can also be wired, such as an Ethernet network, or can be wireless such as a cellular network including EDGE, 3G and 4G wireless cellular systems.
- the wireless network can also be Wi-Fi, BLUETOOTH, or any other wireless form of communication that is known.
- the client device 108 further includes a display controller 1408 , such as an NVIDIA GeForce GTX or Quadro graphics adaptor from NVIDIA Corporation of America, for interfacing with display 1410 , such as a Hewlett Packard HPL2445w LCD monitor.
- a general purpose I/O interface 1412 interfaces with a touch screen panel 1416 on or separate from display 1410 .
- the general purpose I/O interface 1412 also connects to a variety of peripherals 1418 including printers and scanners, such as an OfficeJet or DeskJet from Hewlett Packard.
- a sound controller 1420 is also provided in the client device 108 , such as Sound Blaster X-Fi Titanium from Creative, to interface with speakers/microphone 1422 thereby providing sounds and/or music.
- the general purpose storage controller 1424 connects the storage medium disk 1404 with communication bus 1426 , which may be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the client device 108 .
- a description of the general features and functionality of the display 1410 , display controller 1408 , storage controller 1424 , network controller 1406 , sound controller 1420 , and general purpose I/O interface 1412 is omitted herein for brevity as these features are known.
- circuitry configured to perform features described herein may be implemented in multiple circuit units (e.g., chips), or the features may be combined in circuitry on a single chipset, as shown in FIG. 15 .
- FIG. 15 is a hardware block diagram of a data processing system 1500 , according to certain exemplary aspects.
- FIG. 15 shows a schematic diagram of a data processing system 1500 for performing visibility event navigation.
- the data processing system 1500 is an example of a computer in which code or instructions implementing the processes of the illustrative aspects may be located.
- data processing system 1500 employs a hub architecture including a north bridge and memory controller hub (NB/MCH) 1525 and a south bridge and input/output (I/O) controller hub (SB/ICH) 1520 .
- the central processing unit (CPU) 1530 is connected to NB/MCH 1525 .
- the NB/MCH 1525 also connects to the memory 1545 via a memory bus, and connects to the graphics processor 1550 via an accelerated graphics port (AGP).
- the NB/MCH 1525 also connects to the SB/ICH 1520 via an internal bus (e.g., a unified media interface or a direct media interface).
- the CPU 1530 may contain one or more processors and may even be implemented using one or more heterogeneous processor systems.
- FIG. 16 is a hardware block diagram of a CPU, according to certain exemplary aspects.
- FIG. 16 shows one implementation of CPU 1530 .
- the instruction register 1638 retrieves instructions from the fast memory 1640 . At least part of these instructions are fetched from the instruction register 1638 by the control logic 1636 and interpreted according to the instruction set architecture of the CPU 1530 . Part of the instructions can also be directed to the register 1632 .
- in one implementation the instructions are decoded according to a hardwired method, and in another implementation the instructions are decoded according to a microprogram that translates instructions into sets of CPU configuration signals that are applied sequentially over multiple clock pulses.
- after fetching and decoding, the instructions are executed using the arithmetic logic unit (ALU) 1634 , which loads values from the register 1632 and performs logical and mathematical operations on the loaded values according to the instructions. The results from these operations can be fed back into the register and/or stored in the fast memory 1640 .
- the instruction set architecture of the CPU 1530 can use a reduced instruction set architecture, a complex instruction set architecture, a vector processor architecture, or a very long instruction word architecture.
- the CPU 1530 can be based on the Von Neumann model or the Harvard model.
- the CPU 1530 can be a digital signal processor, an FPGA, an ASIC, a PLA, a PLD, or a CPLD.
- the CPU 1530 can be an x86 processor by Intel or by AMD; an ARM processor, a Power architecture processor by, e.g., IBM; a SPARC architecture processor by Sun Microsystems or by Oracle; or other known CPU architecture.
- the data processing system 1500 can include that the SB/ICH 1520 is coupled through a system bus to an I/O Bus, a read only memory (ROM) 1556 , universal serial bus (USB) port 1564 , a flash binary input/output system (BIOS) 1568 , and a graphics controller 1558 .
- PCI/PCIe devices can also be coupled to SB/ICH 1520 through a PCI bus 1562 .
- the PCI devices may include, for example, Ethernet adapters, add-in cards, and PC cards for notebook computers.
- the Hard disk drive 1560 and CD-ROM 1566 can use, for example, an integrated drive electronics (IDE) or serial advanced technology attachment (SATA) interface.
- the I/O bus can include a super I/O (SIO) device.
- the hard disk drive (HDD) 1560 and optical drive 1566 can also be coupled to the SB/ICH 1520 through a system bus.
- a parallel port 1578 and a serial port 1576 can be connected to the system bus through the I/O bus.
- other peripherals and devices can be connected to the SB/ICH 1520 using a mass storage controller such as SATA or PATA, an Ethernet port, an ISA bus, an LPC bridge, an SMBus, a DMA controller, and an Audio Codec.
- the functions and features described herein may also be executed by various distributed components of a system.
- one or more processors may execute these system functions, wherein the processors are distributed across multiple components communicating in a network.
- the distributed components may include one or more client and server machines, which may share processing, in addition to various human interface and communication devices (e.g., display monitors, smart phones, tablets, personal digital assistants (PDAs)).
- the network may be a private network, such as a LAN or WAN, or may be a public network, such as the Internet. Input to the system may be received via direct user input and received remotely either in real-time or as a batch process.
- a method of visibility event navigation including one or more visibility event packets located at a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: receiving, via processing circuitry of a client device, at least one visibility event packet of the one or more visibility event packets from the server; detecting, via the circuitry, surface information representing one or more visible surfaces of the real environment at a sensor in communication with the client device; calculating, via the circuitry, at least one position of the client device in the real environment by matching the surface information to the visibility event packet information corresponding to a first visibility event packet of the one or more visibility event packets; transmitting, via the circuitry, the at least one position from the client device to the server; and receiving, via the circuitry, at least one second visibility event packet of the one or more visibility event packets at the client device from the server when the at least one position is within the navigational route.
- the method of (1) further including: receiving, via the circuitry, at least one alternate visibility event packet of the one or more visibility event packets at the client device from the server, wherein the at least one alternate visibility event packet represents 3D surface elements of the geospatial model that are occluded from a third viewcell but not occluded from a fourth viewcell, the third and fourth viewcells representing spatial regions of an alternate navigational route within the real environment modeled by the geospatial model, the alternate navigational route leading to a safe landing zone.
- the client device includes a navigation system for at least one of an autonomous aircraft, a semiautonomous aircraft and a piloted aircraft.
- the safe landing zone includes a specific viewcell as a zone reachable by the autonomous aircraft in case of an engine failure while the autonomous aircraft is in the specific viewcell.
- the safe landing zone includes a specific viewcell as a zone reachable by the piloted aircraft in case of an engine failure while the piloted aircraft is in the specific viewcell.
- a method of visibility event navigation including one or more visibility event packets located at a server, including information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: prefetching, via processing circuitry of the server, a first visibility event packet of the one or more visibility event packets to a client device; receiving, via the circuitry, at least one position of the client device in the real environment at the server; and transmitting, via the circuitry, a second visibility event packet of the one or more visibility event packets to the client device when the at least one position is within the navigational route.
- a method of visibility event navigation prefetch including at least one partial visibility event packet including a subset of a complete visibility event packet, the complete visibility event packet including information representing 3D surface elements of a geospatial model occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: receiving, via processing circuitry of a server, information at the server from a client device representing an orientation of a sensor located at the client device, the sensor acquiring information representing the visible surfaces of the real environment; and transmitting, via the circuitry, the at least one partial visibility event packet from the server to the client device, wherein the at least one partial visibility event packet intersects a maximal view frustum including a volume of space intersected by the view frustum of the sensor during movement of the client device in the second viewcell.
- a method of visibility event navigation including at least one partial visibility event packet located at a server, the at least one partial visibility event packet including a subset of a complete visibility event packet, the complete visibility event packet including visibility event packet information representing 3D surface elements of a geospatial model occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: transmitting, via processing circuitry of a client device, surface information from the client device to the server corresponding to the orientation of a sensor located at the client device, the surface information representing visible surfaces of the real environment; and receiving, via the circuitry, the at least one partial visibility event packet at the client device from the server including a subset of the visibility event packet information that intersects a maximal view frustum, wherein the maximal view frustum includes a volume of space intersected by a view frustum of the sensor during movement of the client device in the second viewcell.
- a method of visibility event navigation including a first visibility event packet of one or more visibility event packets from a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: detecting, via processing circuitry of a first client device of a plurality of client devices, surface information representing visible surfaces of the real environment at a sensor in communication with the first client device of the plurality of client devices; calculating, via the circuitry, at least one position of the first client device of the plurality of client devices in the real environment by matching the surface information to the visibility event packet information; transmitting, via the circuitry, the at least one position from the first client device of the plurality of client devices to the server; receiving, via the circuitry, at least one second visibility event packet of the one or more visibility event packets at the first client device of the plurality of client devices from the server when the at least one position is within the navigational route; detecting, via the circuitry, position information representing the position of at least one second client device of the one or more client devices in the real environment at the sensor; and transmitting, via the circuitry, the position information from the first client device of the plurality of client devices to the server.
- a method of visibility event navigation prefetch including a first visibility event packet of one or more visibility event packets, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: receiving, via processing circuitry of a server, at least one position of a client device in the real environment at the server from the client device; and transmitting, via the circuitry, a second visibility event packet of the one or more visibility event packets when the at least one position of the client device is within the navigational route and a fee has been paid by an operator of the client device.
- a method of visibility event navigation including one or more visibility event packets located at a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: receiving, via processing circuitry of a client device, at least one visibility event packet of the one or more visibility event packets from the server; detecting, via processing circuitry of the client device, surface information representing one or more visible surfaces of the real environment at a sensor in communication with the client device; calculating, via the circuitry, at least one position of the client device in the real environment by matching the surface information to the visibility event packet information corresponding to the first visibility event packet of the one or more visibility event packets; transmitting, via the circuitry, the at least one position in the real environment from the client device to the server; receiving, via the circuitry, at least one second visibility event packet of the one or more visibility event packets at the client device from the server; calculating, via the circuitry, at least one deviation of a ground-truth 3D structure from the corresponding environment modeled by the geospatial model using the surface information and the visibility event packet information; and transmitting, via the circuitry, the at least one deviation from the client device to the server.
- a visibility event navigation system including: a server; at least one client device located in a real environment and in communication with the server, the at least one client device including processing circuitry configured to: detect surface information representing one or more visible surfaces of the real environment at one or more sensors in communication with the at least one client device, calculate at least one position of the at least one client device in the real environment by matching the surface information to visibility event packet information including a first visibility event packet of one or more visibility event packets representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within the real environment and modeled by the geospatial model, transmit the at least one position of the client device to the server, and receive a second visibility event packet of the one or more visibility event packets from the server when the at least one position is within the navigational route.
Abstract
A method of visibility event navigation includes receiving, via processing circuitry of a client device, a first visibility event packet from a server, the first visibility event packet including information representing 3D surface elements of an environmental model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a specified navigational route within a real environment modeled by the environmental model. The method also includes acquiring surface information representing the visible surfaces of the real environment at a sensor and determining a position in the real environment by matching the surface information to the visibility event packet information. The method further includes transmitting the position from the client device to the server and receiving a second visibility event packet from the server if the position is within the specified navigational route.
Description
- The present invention relates generally to navigation using 3D representations of a given space to be navigated, and more particularly to a method and system for streaming visibility event data to navigating vehicles (on land, sea, air, or space), and using the visibility event data in 3D map-matching and other computer vision navigation methods.
- The “background” description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventor, to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
- Precision navigation in obstacle-strewn environments such as the urban canyon or indoors is challenging. Typically, GPS does not provide the precision required to navigate in obstacle-rich environments. Also, GPS reception requires line-of-sight access to at least four satellites, which is often not available in the urban canyon because of occlusion of the sky by buildings. Additionally, GPS radio signals are relatively weak, and can be jammed or spoofed (replaced by a false data stream that can mislead the targeted navigational system).
- Higher-power radio navigation methods such as eLORAN may be less susceptible to jamming near the transmitter. Otherwise these methods inherit most of the vulnerabilities of GPS radio navigation, including susceptibility to denial of service by attack on the fixed transmitters. The precision of eLORAN localization is significantly less than GPS, and the global availability of eLORAN service has been significantly limited by the recent decision by the United Kingdom to discontinue eLORAN service in Europe.
- 3D map-matching is a navigational basis that is orthogonal to GPS and eLORAN navigation and consequently does not suffer from the same limitations and vulnerabilities. Early 2.5D map-matching systems such as TERCOM (Terrain Contour Matching) were effectively employed in cruise missile navigation prior to GPS. The ability to pre-acquire detailed 3D environmental data has increased exponentially since the time of TERCOM. Moreover, commodity sensors are now available which generate real-time point clouds that could potentially be matched to the pre-acquired 3D environmental data to provide rapid, precise localization in many GPS-denied environments.
- However, two problems have slowed the general adoption of efficient 3D map-match solutions. First, because the 3D environmental data sets are so large, it can be difficult to transmit and maintain them over existing networks using conventional data delivery approaches. Second, processing of these massive 3D datasets by 3D map-matching algorithms can be very inefficient because the matching algorithm is typically forced to process a large amount of occluded data that is irrelevant to the immediate 3D map-match localization solution. This is especially true in densely occluded natural terrains, indoors, or within the urban canyon, where buildings make most of the surfaces of the environment occluded from any small region.
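The occlusion problem described above motivates the visibility-event approach: rather than streaming the whole model, only the surfaces that become newly visible when the client crosses from one viewcell to the next are sent. A minimal illustrative sketch, in which the precomputed per-viewcell visible sets are represented as plain Python sets of surface identifiers (an assumption for illustration; the patent computes these sets by first-order visibility propagation over 3D geometry):

```python
# Illustrative sketch of a visibility event packet as a set difference:
# the surfaces occluded from viewcell A but not occluded from viewcell B,
# i.e. the surfaces that newly become visible on the A-to-B transition.

def visibility_event_packet(visible_from_a, visible_from_b):
    """Surfaces visible from viewcell B that were occluded from viewcell A."""
    return visible_from_b - visible_from_a

visible_a = {"wall_1", "road_4"}
visible_b = {"wall_1", "road_4", "tower_5"}

packet_ab = visibility_event_packet(visible_a, visible_b)  # {"tower_5"}
```

Streaming only this delta is what keeps both the transmission bandwidth and the map-matching working set small in densely occluded environments.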
- In an exemplary aspect, a method of visibility event navigation includes one or more visibility event packets located at a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model. The method of visibility event navigation also includes receiving, via processing circuitry of a client device, at least one visibility event packet of the one or more visibility event packets from the server, detecting, via the circuitry, surface information representing one or more visible surfaces of the real environment at a sensor in communication with the client device, and calculating, via the circuitry, at least one position of the client device in the real environment by matching the surface information to the visibility event packet information corresponding to a first visibility event packet of the one or more visibility event packets. The method of visibility event navigation further includes, transmitting, via the circuitry, the at least one position from the client device to the server, and receiving, via the circuitry, at least one second visibility event packet of the one or more visibility event packets when the at least one position is within the navigational route at the client device from the server.
- In some exemplary aspects, a method of visibility event navigation includes one or more visibility event packets located at a server, including information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model. The method of visibility event navigation also includes prefetching, via processing circuitry of the server, a first visibility event packet of the one or more visibility event packets to a client device, receiving, via the circuitry, at least one position of the client device in the real environment at the server, and transmitting, via the circuitry, a second visibility event packet of the one or more visibility event packets to the client device when the at least one position is within the navigational route.
- In certain exemplary aspects, a method of visibility event navigation prefetch includes at least one partial visibility event packet including a subset of a complete visibility event packet, the complete visibility event packet including information representing 3D surface elements of a geospatial model occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model. The method of visibility event navigation prefetch further includes receiving, via processing circuitry of a server, information at the server from a client device representing an orientation of a sensor located at the client device, the sensor acquiring information representing the visible surfaces of the real environment, and transmitting, via the circuitry, the at least one partial visibility event packet from the server to the client device, wherein the at least one partial visibility event packet intersects a maximal view frustum including a volume of space intersected by the view frustum of the sensor during movement of the client device in the second viewcell.
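The maximal-view-frustum filtering above can be illustrated with a deliberately simplified sketch: the 3D frustum is reduced to a 2D angular wedge around the reported sensor heading, and surfaces are points tagged with identifiers. The wedge half-angle and the surface representation are assumptions for illustration, not the patented construction.

```python
import math

# Simplified sketch of partial-packet prefetch: keep only the subset of a
# visibility event packet whose surfaces fall inside an angular wedge
# approximating the maximal view frustum swept by the sensor in the viewcell.

def partial_packet(surfaces, sensor_heading, half_angle):
    """Keep surfaces whose bearing from the viewcell centre lies within
    [sensor_heading - half_angle, sensor_heading + half_angle]."""
    kept = []
    for name, (x, y) in surfaces:
        bearing = math.atan2(y, x)
        # wrapped angular difference in (-pi, pi]
        diff = abs((bearing - sensor_heading + math.pi) % (2 * math.pi) - math.pi)
        if diff <= half_angle:
            kept.append(name)
    return kept

surfaces = [("ahead", (1.0, 0.0)), ("left", (0.0, 1.0)), ("behind", (-1.0, 0.0))]
subset = partial_packet(surfaces, sensor_heading=0.0, half_angle=math.pi / 3)  # ["ahead"]
```

Transmitting only this subset reduces bandwidth further when the sensor's orientation constrains which newly visible surfaces can actually be seen.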
- In some exemplary aspects, a method of visibility event navigation includes at least one partial visibility event packet located at a server, the at least one partial visibility event packet including a subset of a complete visibility event packet, the complete visibility event packet including visibility event packet information representing 3D surface elements of a geospatial model occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model. The method of visibility event navigation further includes transmitting, via processing circuitry of a client device, surface information from the client device to the server corresponding to the orientation of a sensor located at the client device, the surface information representing visible surfaces of the real environment, and receiving, via the circuitry, the at least one partial visibility event packet at the client device from the server including a subset of the visibility event packet information that intersects a maximal view frustum, wherein the maximal view frustum includes a volume of space intersected by a view frustum of the sensor during movement of the client device in the second viewcell.
- In some exemplary aspects, a method of visibility event navigation includes a first visibility event packet of one or more visibility event packets from a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model. The method of visibility event navigation also includes, detecting, via processing circuitry of a first client device of a plurality of client devices, surface information representing visible surfaces of the real environment at a sensor in communication with the first client device of the plurality of client device, calculating, via the circuitry, at least one position of the first client device of the plurality of client devices in the real environment by matching the surface information to the visibility event packet information, and transmitting, via the circuitry, the at least one position from the first client device of the plurality of client devices to the server. The method of visibility event navigation further includes receiving, via the circuitry, at least one second visibility event packet of the one or more visibility event packets at the first client device of the plurality of client devices from the server when the at least one position is within the navigational route, detecting, via the circuitry, position information representing the position of at least one second client device of the one or more client devices in the real environment at the sensor, and transmitting, via the circuitry, the position information from the first client device of the plurality of client devices to the server.
- In certain exemplary aspects, a method of visibility event navigation prefetch includes a first visibility event packet of one or more visibility event packets, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model. The method of visibility event navigation prefetch further includes receiving, via processing circuitry of a server, at least one position of a client device in the real environment at the server from the client device, and transmitting, via the circuitry, a second visibility event packet of the one or more visibility event packets when the at least one position of the client device is within the navigational route and a fee has been paid by an operator of the client device.
- In some exemplary aspects, a method of visibility event navigation includes providing one or more visibility event packets at a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model. The method of visibility event navigation also includes receiving, via processing circuitry of a client device, at least one visibility event packet of the one or more visibility event packets from the server, detecting, via processing circuitry of the client device, surface information representing one or more visible surfaces of the real environment at a sensor in communication with the client device, calculating, via the circuitry, at least one position of the client device in the real environment by matching the surface information to the visibility event packet information corresponding to the at least one visibility event packet of the one or more visibility event packets, and transmitting, via the circuitry, the at least one position in the real environment from the client device to the server. The method of visibility event navigation further includes receiving, via the circuitry, at least one second visibility event packet of the one or more visibility event packets at the client device from the server, calculating, via the circuitry, at least one deviation of the ground-
truth 3D structure from the corresponding environment modeled by the geospatial model using the surface information and the visibility event packet information, and transmitting, via the circuitry, the at least one deviation from the client device to the server. - In certain exemplary aspects, a visibility event navigation system includes a server and at least one client device located in a real environment and in communication with the server. The at least one client device includes processing circuitry configured to detect surface information representing one or more visible surfaces of the real environment at one or more sensors in communication with the at least one client device, and calculate at least one position of the at least one client device in the real environment by matching the surface information to visibility event packet information corresponding to a first visibility event packet of one or more visibility event packets representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within the real environment modeled by the geospatial model. The processing circuitry is further configured to transmit the at least one position of the client device to the server and receive a second visibility event packet of the one or more visibility event packets from the server when the at least one position is within the navigational route.
- The foregoing paragraphs have been provided by way of general introduction, and are not intended to limit the scope of the claims. The described aspects, together with further advantages, will be best understood by reference to the following detailed description taken in conjunction with the accompanying drawings.
- A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
-
FIG. 1 is an exemplary illustration of a visibility event navigation system, according to certain aspects; -
FIG. 2A is an exemplary illustration of an exemplary viewpoint and a corresponding exemplary view frustum having a 90 degree horizontal field of view, according to certain aspects; -
FIG. 2B is an exemplary illustration of a conservative current maximal viewpoint extent (CCMVE) of penetration into a viewcell from a known position after 166 ms of elapsed time using the exemplary view frustum having a 90 degree horizontal field of view, according to certain aspects; -
FIG. 3 is an exemplary illustration of a conservative from-subregion frustum, according to certain aspects; -
FIG. 4 is an exemplary illustration of a conservative from-subregion frustum that results from viewpoint penetration into the viewcell over 166 milliseconds for a CCMVE subregion, according to certain aspects; -
FIG. 5 is an exemplary illustration of an additional angular region of an extended frustum, according to certain aspects; -
FIG. 6 is an exemplary illustration of a top-down view of a view frustum having a horizontal field of view of 90 degrees and undergoing rotation in the horizontal plane at a rate of 90 degrees per second, according to certain aspects; -
FIG. 7 is an exemplary illustration of a server representation of client device viewpoint position and orientation, according to certain aspects; -
FIG. 8 is an exemplary illustration of an architectural block diagram of a visibility event navigation system, according to certain aspects; -
FIG. 9 is an algorithmic flowchart of a visibility event navigation process, according to certain exemplary aspects; -
FIG. 10 is an algorithmic flowchart of a visibility event navigation difference determination process, according to certain exemplary aspects; -
FIG. 11 is an algorithmic flowchart of a visibility event navigation incident determination process, according to certain exemplary aspects; -
FIG. 12 is an algorithmic flowchart of a visibility event navigation unauthorized object determination process, according to certain exemplary aspects; -
FIG. 13 is a block diagram of a visibility event navigation system workflow, according to certain exemplary aspects; -
FIG. 14 is a hardware block diagram of a client device, according to certain exemplary aspects; -
FIG. 15 is a hardware block diagram of a data processing system, according to certain exemplary aspects; and -
FIG. 16 is a hardware block diagram of a CPU, according to certain exemplary aspects. - In the drawings, like reference numerals designate identical or corresponding parts throughout the several views. Further, as used herein, the words “a,” “an” and the like generally carry a meaning of “one or more,” unless stated otherwise.
- Furthermore, the terms “approximately,” “approximate,” “about,” and similar terms generally refer to ranges that include the identified value within a margin of 20%, 10%, or preferably 5%, and any values therebetween.
- In certain exemplary aspects, there is described a method to deliver and receive navigational data as visibility event packets, wherein the visibility event packets include information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a specified navigational route within a real environment modeled by the geospatial model. The specified navigational route can also be referred to as a pursuit route, navigational route, route of navigation, desired route, specified route, and the like, as discussed herein. As such, a navigating client device receives at least one visibility event packet from a server.
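The packet definition above is a delta-PVS: the surfaces newly visible when moving from one viewcell to the next. A minimal sketch follows; the surface IDs and per-viewcell potentially visible sets (PVS) below are invented for illustration and are not drawn from the specification:

```python
# Hypothetical sketch: a visibility event packet for the viewcell
# transition A -> B contains the surface elements that are occluded
# from A (absent from A's PVS) but not occluded from B.

def visibility_event_packet(pvs_a, pvs_b):
    """Delta-PVS: surface IDs newly visible when moving from viewcell A to B."""
    return pvs_b - pvs_a

# Invented PVS table for three viewcells along a route A -> B -> C.
pvs = {
    "A": {1, 2, 3},
    "B": {2, 3, 4, 5},   # surfaces 4 and 5 become visible entering B
    "C": {3, 5, 6},
}

packet_ab = visibility_event_packet(pvs["A"], pvs["B"])
packet_bc = visibility_event_packet(pvs["B"], pvs["C"])
print(sorted(packet_ab))  # [4, 5]
print(sorted(packet_bc))  # [6]
```

A real implementation would precompute these sets offline with a from-region visibility algorithm; the set difference above only illustrates what a packet carries.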
- In some aspects, the navigating client device employs sensors to acquire information representing the surrounding visible surfaces of the real environment. In certain aspects, the data is collected as a 3D point cloud including X axis, Y axis, and Z axis values utilizing LiDAR, SONAR, optical photogrammetric reconstruction, or other known sensor means. The 3D surface information of the real environment can be acquired by the sensor and matched to the 3D surface information of at least one visibility event packet delivered from the server to the client device in order to determine the position of the client device in the real environment. In other aspects, the 3D map-matching position determination is used as the sole method of determining position or is augmented with other navigational methods such as GPS, eLORAN, inertial navigation, and the like. The 3D map-matching determination can be configured to employ matching of directly or indirectly acquired point cloud data to the 3D modeled data. In some aspects, the 3D map-matching can also compare geometric or visual “features” acquired from the environment to the transmitted 3D data. In certain aspects, the transmitted visibility event packets include 3D surface model information representing the real environment as polygons, procedural surfaces, or other representations. In instances in which the visibility event packets employ a procedural representation of the surfaces of the modeled environment, the procedural surface representations reduce the transmission bandwidth required to send the navigational data.
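A deliberately crude sketch of the map-matching localization step is given below. It is not the patented method: it assumes point correspondences between sensed and model points are already known (a real system would establish them with ICP or feature matching) and aligns only centroids, with no rotation estimate. All coordinates are invented:

```python
# Toy localization sketch: estimate the client's position by aligning
# the centroid of sensed surface points (vehicle-relative) with the
# centroid of the corresponding model points from a visibility event
# packet (world coordinates). Correspondences are assumed given.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def estimate_position(sensor_points, model_points):
    """Return the sensor origin expressed in model (geospatial) coordinates."""
    sc = centroid(sensor_points)   # sensed points are relative to the vehicle
    mc = centroid(model_points)    # model points are in world coordinates
    # The vehicle sits at the model-frame location that maps the sensed
    # centroid onto the model centroid.
    return tuple(mc[i] - sc[i] for i in range(3))

# Sensed points (vehicle-relative) vs. the same surfaces in the model:
sensed = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 3.0)]
model  = [(11.0, 5.0, 0.0), (10.0, 7.0, 0.0), (10.0, 5.0, 3.0)]
print(tuple(round(c, 6) for c in estimate_position(sensed, model)))  # (10.0, 5.0, 0.0)
```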
- In some exemplary aspects, the visibility event navigation system employs 3D map-matching of ground-truth sensor data to visibility event packet model data to determine a position and an orientation of a client device in a real environment. In exemplary aspects, the visibility event packets provide a 3D representation that enables accelerated 3D map-matching in comparison to systems which deliver 3D model data using conventional proximity-based streaming. In contrast, proximity-based streaming delivers many model surfaces that are occluded and therefore irrelevant to the 3D map-match process, which need only match unoccluded ground-truth sensor data to the model data. Additionally, proximity-based streaming methods can have higher bandwidth requirements, because they deliver a large amount of occluded data that is not relevant for the 3D map-match localization. This additional occluded data delivered by proximity-based streaming methods also tends to make the client-
side 3D map-match process less efficient, since it introduces irrelevant data into the 3D map-match execution. In addition to delivering a potentially large amount of occluded model surface information, proximity-based streaming methods can fail to deliver potentially useful, unoccluded model surface data because it is not within the current proximity sphere that the server is using for prefetch. This loss of potentially useful data is exacerbated in densely occluded environments such as cities, because the transmission of a large amount of occluded surface model data often causes the system to reduce the proximity sphere in an effort to maintain the stream. - In some exemplary aspects, the visibility event navigation system employs computer vision systems other than conventional 3D map-matching to match ground-truth surfaces or features to visibility event packet model data representing basic surfaces or salient 3D features of the modeled environment.
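The contrast between the two streaming policies can be shown in miniature. The surface records, positions, and visibility flags below are invented; the point is only that a proximity sphere both includes occluded surfaces and excludes distant visible ones, while a visibility-aware stream sends exactly the matchable set:

```python
# Sketch contrasting proximity-based streaming with visibility-aware
# streaming, using an invented three-surface scene.
import math

surfaces = {
    # id: (2D position, visible from the current viewcell?)
    1: ((5.0, 0.0), True),    # nearby and unoccluded -> useful for map-matching
    2: ((8.0, 1.0), False),   # nearby but occluded  -> wasted bandwidth
    3: ((40.0, 2.0), True),   # distant but unoccluded -> proximity stream misses it
}

def proximity_stream(origin, radius):
    """Everything inside the proximity sphere, occluded or not."""
    return {sid for sid, (pos, _) in surfaces.items()
            if math.dist(origin, pos) <= radius}

def visibility_event_stream():
    """Only surfaces unoccluded from the current viewcell."""
    return {sid for sid, (_, visible) in surfaces.items() if visible}

print(sorted(proximity_stream((0.0, 0.0), 10.0)))  # [1, 2]: sends occluded 2, misses 3
print(sorted(visibility_event_stream()))           # [1, 3]: only matchable surfaces
```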
- In some exemplary aspects, the efficient delivery of high-precision, relevant 3D surface model data enables a localization solution that is faster, more efficient, and more precise than simultaneous localization and mapping (SLAM). Pure SLAM solutions do not incorporate prior knowledge of the environment into their localization solutions but instead depend upon developing the 3D model de novo, in real time. Consequently, pure SLAM is more computationally expensive and less precise than 3D map-matching to prior, processed (e.g., previously geo-registered), and relevant (unoccluded) 3D data delivered by a visibility event navigational data stream. The increased computational efficiency of 3D map-matching compared to SLAM-focused localization can free onboard sensor and computational resources to focus on identifying, tracking, and avoiding moving objects in the environment, such as other client devices. - In certain exemplary aspects, visibility event navigation data is compared to an actual ground-truth 3D representation of visible surfaces of an environment in which navigation occurs. The comparison can employ 3D map-matching methods, other surface matching methods, computer vision matching methods, and the like. In some aspects, if the comparison of the ground-truth sensor data and the visibility event navigation data determines that the ground-truth surfaces do not match the delivered visibility event navigation data, then data representing the difference between the streamed visibility event navigation data and the ground-truth sensor data is streamed to the visibility event navigation server. In some aspects, the server uses this information to identify an apparent change in the structure of the 3D environment in a particular region of the model corresponding to the viewcells from which the changed surfaces would be visible. The method enables a maintainable, sustainable, scalable, and deliverable representation of Earth's natural and urban environments for the purposes of high-precision navigation. - In some aspects, the primary navigation routes represented by primary visibility event navigation packets delivered from the server are specifically calculated to avoid low-altitude helicopter traffic by incorporating ADS-B (Automatic Dependent Surveillance-Broadcast) information into the route planning. The server uses this dynamic information about helicopters and other low-altitude aircraft, as well as information about helicopter routes and landing pads, to compute and stream routes for autonomous vehicles that avoid other low-altitude aircraft. In certain aspects, the current position of any aircraft being controlled by the visibility event navigation system, as determined using that system, as well as its planned route, could potentially be incorporated into the ADS-B system or other commercial systems (e.g., FOREFLIGHT® or GARMINPILOT®), which would make the position of the controlled aircraft available to other aircraft through these networks.
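A minimal sketch of the ADS-B-aware route check described above, under invented assumptions: routes and traffic are reduced to 2D waypoints, and a single scalar separation threshold stands in for real airspace separation rules:

```python
# Toy route-deconfliction check: a candidate route is acceptable only if
# every waypoint keeps at least `min_separation` from every ADS-B track.
import math

def route_is_clear(waypoints, adsb_tracks, min_separation):
    return all(math.dist(w, a) >= min_separation
               for w in waypoints for a in adsb_tracks)

route = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
helicopters = [(5.0, 5.0)]
print(route_is_clear(route, helicopters, 2.0))   # True: all waypoints well separated
print(route_is_clear(route, [(2.5, 2.0)], 2.0))  # False: conflict near (2, 2)
```

A real planner would also use reported velocities and the route's timing, not just static positions.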
- In certain aspects, small unmanned aerial vehicles, such as small drones operating in a commercial package delivery application, are given visibility event navigational packets corresponding to flight routes that are intended to survey regions for wires, cables, towers, cranes, and other hazards to low-altitude aircraft such as helicopters. The relevant visibility event navigation data is made available to piloted client devices not being controlled by the visibility event navigation system, for the purpose of obstacle avoidance. By transmitting the position of the piloted client device (as determined by GPS or other navigational means) to the server, the server can compute real-time collision courses and issue proximity or impending-collision warnings to the client device. In some aspects, the server streams the relevant visibility event packets so that the computation can be made on board the piloted client device. Because all the visibility event packets required for an intended navigational route can be prefetched, a continuous connection between the client device and the server is unnecessary to provide the collision warning service.
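An onboard proximity warning against prefetched hazard geometry might look like the sketch below. The hazard positions and the two alert thresholds are invented for illustration, not taken from the specification:

```python
# Toy onboard warning: compare own position to surveyed hazard points
# (e.g., wire or tower locations carried in prefetched packets).
import math

def closest_hazard(position, hazards):
    return min(math.dist(position, h) for h in hazards)

def warning(position, hazards, caution_m=150.0, alert_m=50.0):
    d = closest_hazard(position, hazards)
    if d < alert_m:
        return "ALERT"
    if d < caution_m:
        return "CAUTION"
    return "CLEAR"

hazards = [(100.0, 0.0, 30.0)]  # e.g., a surveyed tower top
print(warning((400.0, 0.0, 30.0), hazards))  # CLEAR   (300 m away)
print(warning((200.0, 0.0, 30.0), hazards))  # CAUTION (100 m away)
print(warning((130.0, 0.0, 30.0), hazards))  # ALERT   (30 m away)
```

Because the hazard set is prefetched, this check needs no live server connection, matching the offline operation described above.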
- In certain aspects, a server is configured to deliver visibility event packets to certified client devices which present authenticated credentials using a secure process of authentication, such as a cryptographic ticket. The transmission of visibility event packets only to certified clients ensures that the client device has met certain requirements for navigating along the intended route. In some aspects, this can restrict autonomous navigation in airspace to client devices whose operators and airframes meet specified requirements for safe operation, such as airworthiness.
- In some aspects, the visibility event navigational data stream is made available through the provenance of an issuing navigational authority. Additionally, the visibility event navigational data stream is encrypted, authenticated, registered, and transactional. A client device such as an aircraft or vehicle employing the visibility event navigation client system can use onboard sensors to detect uncertified/unregistered objects operating in the vicinity of the certified/registered client device, and to report the location of the intruding object to the server. The implementation of the visibility event navigation system could enhance the ability of the Federal Aviation Administration (FAA), the Department of Homeland Security (DHS), or similar issuing authority to control and monitor low-altitude airspace. As such, a sustainable revenue stream can be generated by the issuing authority operating the visibility event navigation stream through modest, per-trip transaction fees for autonomous or semi-autonomous client devices using the visibility event navigation stream in the controlled airspace.
- In certain aspects, the navigating client device receives visibility event packets for the purpose of high-precision navigation within a prescribed flight route. The client device can be required to transmit its location to the server delivering the packets at predetermined intervals. In this instance, the server can be configured to periodically or continuously deliver alternate visibility event packets encoding visible surfaces located along one or more flight routes from a current location of the client device to a safe landing zone. In doing so, flight safety can be reinforced by providing data that enhances the ability of the aircraft to make a safe landing at any point in time. In some aspects, the client device is a piloted aircraft, wherein the visibility event data representing an alternate route to one or more safe landing zones is immediately available to the pilot in case of engine failure or other circumstances that necessitate an immediate safe landing. The safe landing zone data is used by the pilot to identify reachable safe landing zones. In other aspects, the client device may be controlled by an autopilot or other system which is instructed to follow one or more of the alternate routes supplied as a visibility event navigational stream to a safe landing zone. The client device may be autonomous or semiautonomous, or have an autonomous navigation system in which the visibility event packets supply information used by 3D map-matching or other computer vision navigation systems that employ a
high-precision 3D representation of the environment in which the navigation will occur. - In some aspects, a visibility event navigation system includes a command within the client device to follow an alternative navigational route to a safe landing in case the client device loses communications with the server. In other aspects, the server can be configured to determine if the location of the client device deviates from the specified flight route, and if the client device is determined to deviate, the server transmits a command to the client device which causes the aircraft to divert to the alternate route described by the alternate route visibility event packets, thus making a safe landing in a predetermined safe landing zone. The alternate route defined by the visibility event navigation packets that the client device navigates along, either from a default on-board instruction or from an instruction explicitly transmitted from the visibility event navigational server, may not necessarily lead to the closest landing zone. Instead, the alternate route may be selected based on other safety criteria, such as proximity to people, ground vehicles, or security-sensitive buildings or transportation routes.
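The server-side deviation-and-divert logic above can be sketched as follows. The route geometry, tolerance value, and command strings are all invented placeholders for whatever protocol a real system would use:

```python
# Toy server-side check: if the reported position strays from the
# specified route's corridor, command a diversion onto the prefetched
# alternate-route visibility event packets.
import math

def off_route(position, route_waypoints, tolerance):
    return min(math.dist(position, w) for w in route_waypoints) > tolerance

def next_command(position, route, tolerance=25.0):
    if off_route(position, route, tolerance):
        return "DIVERT_TO_ALTERNATE"  # follow alternate VE packets to a safe landing zone
    return "CONTINUE"

route = [(0.0, 0.0), (100.0, 0.0), (200.0, 0.0)]
print(next_command((100.0, 10.0), route))   # CONTINUE (10 m off centerline)
print(next_command((100.0, 300.0), route))  # DIVERT_TO_ALTERNATE
```

A real corridor test would measure distance to route segments rather than to discrete waypoints; waypoint distance is used here only to keep the sketch short.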
- In certain aspects, the server delivers visibility event navigation packets to visibility event navigation systems including client devices such as commercial aircraft, package delivery drones, and the like. The visibility event navigation system can deliver visibility event navigation packets from the server to client devices used by law enforcement agencies, for example surveillance drones. In other exemplary aspects, the visibility event navigation system facilitates a dual use in which commercial aircraft may optionally assist law enforcement agencies by receiving visibility event navigation packets which define a route to an incident of threatened public safety, along with an instruction to navigate along this route to the incident area.
- In certain aspects, the transmission of navigational data as visibility event packets for navigation along a predetermined route includes a method of enhanced predictive prefetching as described in copending parent application Ser. No. 15/013,784. In some aspects, the bandwidth required to stream visibility event navigational packets is reduced when the desired navigational route, on the ground or in the air, is known before packet transmission. As such, prior knowledge of the navigational route improves the efficiency of navigation-driven prefetch of visibility event packets, whether the packets are being delivered for visualization, for example in entertainment or serious visualization applications, or for navigation, such as visibility event navigational data for use in 3D map-matching or other robotic-vision-based navigation systems. In some aspects, the prior knowledge of navigational intent along a specific navigational route limits the number of visibility event packets that must be prefetched, since navigational uncertainty is reduced, and the ability to deterministically, rather than probabilistically, prefetch packets corresponding to viewcell transitions far ahead of the navigating client is increased. In this instance, a deterministic prefetch approach enables efficient streaming of visibility event navigational packets over high-latency and low-bandwidth networks.
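The deterministic prefetch idea reduces to queuing packets for the known sequence of viewcell transitions, rather than speculatively fetching every reachable neighbor. A sketch under invented names:

```python
# When the route's viewcell sequence is known in advance, the server can
# queue exactly the packets for the next few transitions on the route.

def deterministic_prefetch(route_viewcells, current_index, lookahead):
    """Viewcell transitions to prefetch for the next `lookahead` hops."""
    transitions = list(zip(route_viewcells, route_viewcells[1:]))
    return transitions[current_index:current_index + lookahead]

route = ["A", "B", "C", "D", "E"]
print(deterministic_prefetch(route, 0, 2))  # [('A', 'B'), ('B', 'C')]
print(deterministic_prefetch(route, 2, 2))  # [('C', 'D'), ('D', 'E')]
```

With, say, four neighboring viewcells per cell, probabilistic prefetch might fetch 4^2 = 16 candidate transitions for the same two-hop lookahead; the route constraint cuts this to 2, which is what makes high-latency, low-bandwidth links workable.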
-
FIG. 1 is an exemplary illustration of a visibility event navigation system 100, according to certain aspects. The visibility event navigation system 100 describes a system for transmitting and receiving navigational data as visibility event packets between a client device 108 and a server 106. The visibility event navigation system 100 can include a network 102, a sensor 104, a server 106, and a client device 108. The sensor 104 can be one or more sensors 104 and is connected to the server 106 and the client device 108 via the network 102. The sensor 104 can include a LiDAR sensor, a SONAR sensor, an optical sensor, and the like. In certain aspects, the sensor 104 can be directly connected to the client device 108, as in the case of the sensor 104 being directly employed by the client device 108 for 3D map-matching, computer vision surface matching methods, surface feature matching methods, and the like. In some aspects, the sensor 104 can be utilized by circuitry of the client device 108 to identify and track moving objects in an environment. Further, the sensor 104 can be employed by the client device 108 to acquire information representing visible surfaces of the environment surrounding the client device 108. In certain aspects, the 3D surface information acquired by the sensor 104, via the circuitry of the client device 108, is matched to 3D surface information of a visibility event packet that is delivered from the server 106 to the client device 108 in order to determine the position or location of the client device 108 within the environment. - The
server 106 can include one or more servers 106 and is connected to the sensor 104 and the client device 108 via the network 102. The server 106 includes processing circuitry that can be configured to receive position information of the client device 108 from the processing circuitry of the client device 108. In some aspects, the circuitry of the server 106 can be configured to transmit visibility event packets to the client device 108 when the position of the client device 108 is located within a predetermined navigational route. In other aspects, the server 106 can include network nodes that transmit information to one or more client devices 108 at different positions along different navigational routes. - The
client device 108 can include one or more client devices 108 and is connected to the sensor 104 and the server 106 via the network 102. The client device 108 can include an autonomous aircraft, a semiautonomous aircraft, a piloted aircraft, an autonomous ground vehicle, a semiautonomous ground vehicle, and the like. The client device 108 includes processing circuitry that can be configured to detect surface information representing visible surfaces of the real environment in which the client device 108 is located. In certain aspects, the circuitry of the client device 108 utilizes the sensor 104 to determine the surface information. The circuitry of the client device 108 can also be configured to calculate a position of the client device 108 in the environment by matching the surface information to visibility event packet information corresponding to visibility event packets representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, in which the first and second viewcells represent spatial regions of a navigational route within the environment that are modeled by the geospatial model. In some aspects, the circuitry of the client device 108 can be configured to transmit the position information of the client device 108 to the server 106 and receive visibility event packets from the server 106 when the position of the client device 108 is within the navigational route. In certain aspects of the present disclosure, the processing circuitry of the server 106 can be configured to perform the methods implemented by the processing circuitry of the client device 108, as described herein. - The
network 102 can include one or more networks and is connected to the sensor 104, the server 106, and the client device 108. The network 102 can encompass wireless networks such as Wi-Fi, BLUETOOTH, cellular networks including EDGE, 3G and 4G wireless cellular systems, or any other known form of wireless communication, and may also encompass wired networks, such as Ethernet. -
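The client-side cycle described for FIG. 1 (receive a packet, sense surfaces, localize by matching, report position, receive the next packet while on route) can be condensed into a stub loop. Every function here is a placeholder standing in for the behaviors described in the text, not an implementation of them:

```python
# High-level stub of one client iteration in the FIG. 1 system.

def on_route(position, route_cells):
    return position in route_cells

def navigation_step(server_packets, sensed_cell, route_cells):
    """One iteration: localize, report to server, receive next packet."""
    position = sensed_cell                       # stand-in for 3D map-matching
    report = {"position": position}              # transmitted to the server
    next_packet = (server_packets.pop(0)
                   if on_route(position, route_cells) and server_packets
                   else None)                    # server withholds off-route packets
    return report, next_packet

packets = ["VE_A_to_B", "VE_B_to_C"]
report, pkt = navigation_step(packets, "B", {"A", "B", "C"})
print(report)  # {'position': 'B'}
print(pkt)     # VE_A_to_B
```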
FIG. 2A is an exemplary illustration of an exemplary viewpoint and a corresponding exemplary view frustum having a 90 degree horizontal field of view 200, according to certain aspects. In certain aspects of the present disclosure, 3D map-matching, computer-vision-based surface methods, feature matching methods, and the like employ 2D and/or 3D scans of a real environment (for example, photographic or LiDAR) via a single scan point. The scan point can also be referred to as a viewpoint, as discussed herein. In some aspects, the scanning method used to obtain a representation of the real environment may have a limited directional extent. The directional extent of the scan method can also be referred to as the scan frustum or view frustum, as discussed herein. The average or principal direction of the scan can also be referred to as the view direction vector, as discussed herein. The viewpoint 202 includes a corresponding view frustum 204 that includes a view direction vector for processing visibility event packets. The use of view direction vectors to compute smaller visibility event packets can be useful when streaming for a fixed or limited range of view directions in real environments. In some aspects, the visibility event navigation can include acquiring 3D surface information of a real environment, acquired by a sensor 104 in communication with a client device 108, and matching the 3D surface information to that of at least one visibility event packet to determine the position of the client device 108 in the environment. In the general case, the view direction vector can be pointed in any direction for any viewpoint within any viewcell, corresponding to a view direction vector of a predetermined field of view. For example, FIG. 2A shows a viewpoint 202, and a corresponding view frustum having a 90 degree horizontal field of view 204. -
FIG. 2B is an exemplary illustration of a conservative current maximal viewpoint extent (CCMVE) 206 of penetration into a viewcell from a known position after 166 ms of elapsed time using the exemplary view frustum 210 having a 90 degree horizontal field of view 200, according to certain aspects. FIG. 2B shows a top-down view of a 90 degree horizontal field-of-view frustum 210 enveloping the CCMVE-5 206. FIG. 2B also shows a conservative current maximal viewpoint extent 206, CCMVE-5, of penetration into the viewcell 208 from a known position after 166 ms of elapsed time. The CCMVE-5 206 is determined from a last known position and the maximal linear and angular velocity and acceleration of the viewpoint 202. For example, for a typical 90 degree field of view such as the 90 degree from-point frustum 204 shown in FIG. 2A, rotation rates of the frustum 204 can approach 130 to 140 degrees per second. However, a yaw rate of 90 degrees per second for scanning the environment is more suitable and enjoyable for the viewing of a spectator. -
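The CCMVE is a kinematic bound: how far the viewpoint could possibly have penetrated into the viewcell during the elapsed latency. Under the standard bound d = v·t + ½·a·t², and with invented speed and acceleration limits, a sketch looks like:

```python
# Conservative bound on viewpoint displacement after `elapsed_s` seconds,
# given maximal linear speed and acceleration (values are illustrative).

def ccmve_radius(max_speed, max_accel, elapsed_s):
    """d = v*t + 0.5*a*t^2 — the farthest the viewpoint could have moved."""
    return max_speed * elapsed_s + 0.5 * max_accel * elapsed_s ** 2

# Example: 10 m/s max speed, 2 m/s^2 max acceleration, 166 ms latency.
r = ccmve_radius(10.0, 2.0, 0.166)
print(round(r, 4))  # 1.6876
```

The actual CCMVE in the figures is a 2D/3D region clipped to the viewcell, not just a radius; the scalar bound above is the ingredient from which such a region would be grown.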
FIG. 3 is an exemplary illustration of a conservative from-subregion frustum 300, according to certain aspects. FIG. 3 shows that the resulting conservative from-subregion frustum 304 is larger than the corresponding from-point frustum 306 at viewpoint 302, even if it is assumed that no view direction vector rotation has occurred, for a CCMVE-5 308 representative of predicted viewcell penetration at 166 ms into a viewcell region 310. -
FIG. 4 is an exemplary illustration of a conservative from-subregion frustum that results from viewpoint penetration into the viewcell over 166 milliseconds for a CCMVE subregion 400, according to certain aspects. FIG. 4 shows a resulting conservative from-subregion frustum 402 that results from a CCMVE-5 404 representative of viewpoint penetration into the viewcell 406 sub-region over 166 milliseconds, together with rotation of the view direction vector 15 degrees to the right 408 or 15 degrees to the left 410 from an initial view direction vector orientation 412. In this exemplary case, assuming a maximum view direction rotation rate of 90 degrees per second, if the ping latency between the client device 108 and the server 106 is 166 ms, the resulting 30 degree rotation would represent the uncertainty of the client device's 108 view direction vector 412, as experienced by the server 106. Accordingly, consistent with aspects of the present disclosure, the server 106 can employ the extended 120 degree frustum 402 (i.e., the 120 degree predicted maximum from-subregion frustum) to determine the subset of the visibility event packet data to actually transmit to the client device 108. This determination is made by identifying the set of unsent surfaces of the corresponding visibility event packet that intersect the extended frustum. - In certain aspects, the visibility event packet data is precomputed using the method of first-order from-region visibility. The set of surfaces belonging to the corresponding PVS, incrementally maintained using the delta-PVS VE packets, that have not already been sent is maintained using the technique of maintaining the shadow PVS on the
server 106. In some aspects, the visibility event packets are precomputed assuming a full omnidirectional view frustum spanning 12.56 steradians of solid angle. As described, in certain exemplary aspects, the server 106 can employ the extended view frustum to cull portions of the precomputed visibility event packet that fall outside of the maximum possible predicted extent of the client device 108 view frustum, as determined from the ping latency and the maximal angular velocity and acceleration of the view frustum, as well as the maximum predicted extent of penetration of the viewpoint into the viewcell. - This method ensures that all of the potentially visible surfaces are transmitted, while minimizing bandwidth requirements, by deferring the transmission of visibility event packet surfaces that are not within the current conservative extended frustum, or which happen to be backfacing with respect to the conservative current maximal viewpoint extent of penetration into the viewcell. In exemplary aspects, there is also described a method of using reduced level-of-detail models in the periphery of the extended view frustum to reduce bandwidth requirements for buffering against view direction vector rotation, for example, selecting level-of-detail based on predicted exposure durations.
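The extended-frustum cull can be sketched in one dimension of rotation: widen the field of view by the maximum rotation possible during the ping latency, then keep only packet surfaces whose bearing falls inside the widened cone. The surface bearings are invented, and this toy version ignores 360-degree wrap-around:

```python
# Sketch of the extended-frustum cull used to pick which packet surfaces
# to transmit, widening the FOV by 2 * (rotation rate * latency).

def extended_half_angle(fov_deg, max_rot_dps, latency_s):
    # e.g., 90 deg FOV + 2 * (90 deg/s * 0.166 s) ~= 120 deg total
    return (fov_deg + 2.0 * max_rot_dps * latency_s) / 2.0

def cull(surfaces, view_dir_deg, fov_deg, max_rot_dps, latency_s):
    half = extended_half_angle(fov_deg, max_rot_dps, latency_s)
    # No wrap-around handling: bearings are assumed near the view direction.
    return {sid for sid, bearing in surfaces.items()
            if abs(bearing - view_dir_deg) <= half}

surfaces = {1: 0.0, 2: 55.0, 3: 70.0}  # bearing of each surface from the viewpoint
print(round(extended_half_angle(90.0, 90.0, 0.166), 2))  # 59.94
print(sorted(cull(surfaces, 0.0, 90.0, 90.0, 0.166)))    # [1, 2]
```

Surface 3 is deferred, not dropped: in the described method it would be sent once the view direction could plausibly reach it.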
- The above-disclosed methods comprise determining a conservative representation of the client device's 108 view frustum from the temporal reference frame of the
server 106, and using this extended frustum to cull those surfaces of the corresponding visibility event packet that could not possibly be in the client device's 108 view frustum. Consistent with aspects of the present disclosure, all of the transmitted surface information is represented at the highest level-of-detail. The visibility event packets can be encoded using geometric and surface models at a plurality of levels-of-detail, including a plurality of levels of geometric, texture, and other surface detail. In some aspects, however, the visibility event packets can be transmitted at a lower level-of-detail during periods of low bandwidth availability and/or high bandwidth requirement, in order to maximize the probability that the information encoding newly exposed surfaces arrives on time, that is, before the surface is actually exposed in the client device 108 viewport. Under some conditions, a visibility event packet containing relatively low level-of-detail surface information can initially be transmitted and later replaced by a visibility event packet containing higher level-of-detail information. This exploits the fact that certain 3D map-matching systems and/or computer vision systems can have lower precision when matching to newly exposed surfaces and higher precision when matching to surfaces that have been exposed for a longer period of time: the system has more time to converge on a high-precision match solution for surface elements that are present longer. In certain aspects, sending elements of a visibility event packet that correspond to newly exposed surfaces for the visibility event navigation system 100 at low level-of-detail saves transmission bandwidth while preserving the efficiency of client-side processing, since the level-of-detail of the transmitted information is matched to the spatiotemporal performance profile of the computer vision or 3D map-matching system. -
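The level-of-detail selection described above can be sketched as choosing the highest level-of-detail whose transmission can complete before the surface is predicted to become visible. This is a hypothetical sketch: the function name, the byte-size table, and the single-link bandwidth model are assumptions, not the disclosed encoding.

```python
def choose_lod(bytes_per_lod, bandwidth_bps, time_until_exposure_s):
    """Pick the highest level-of-detail that can arrive on time.

    bytes_per_lod: mapping of LOD name -> encoded size in bytes,
    ordered from highest to lowest detail (dicts preserve insertion
    order in Python 3.7+).
    """
    # bytes deliverable before the predicted exposure
    budget = bandwidth_bps / 8.0 * time_until_exposure_s
    for lod, size in bytes_per_lod.items():
        if size <= budget:
            return lod
    return None  # nothing fits: defer, then send an upgrade packet later


sizes = {"high": 1_000_000, "medium": 200_000, "low": 20_000}
lod = choose_lod(sizes, bandwidth_bps=8_000_000, time_until_exposure_s=0.5)
```

When a low level-of-detail packet is sent under pressure, the corresponding higher level-of-detail packet can follow once bandwidth recovers, mirroring the replacement scheme described above.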
FIG. 5 is an exemplary illustration of an additional angular region of an extended frustum 500, according to certain aspects. In certain aspects, the limited spatiotemporal performance of robotic vision systems, including 3D map-matching navigation systems, and the similarly limited spatiotemporal performance of the human visual system, can be exploited by sending low level-of-detail surface information for surfaces that fall outside the region of the extended view frustum, as determined using one or more of the ping latency, the maximum viewpoint translation velocity and acceleration, and the maximum angular velocity and acceleration of the view direction vector. For example, FIG. 5 shows an additional angular region of the extended view frustum 502 that spans an additional 15 degrees 504 on each side of the extended 120 degree frustum 402 shown in FIG. 4. - In certain aspects, the
server 106 transmits surfaces that fall in the subfrustum between 120 degrees and the maximally extended frustum of 150 degrees 504 at a lower level-of-detail than the other visibility event surface data that fall within the 120 degree extended frustum. The disclosed method thus provides a region of uncertainty between 90 degrees and 120 degrees of the subfrustum 506, as well as an additional buffer region against view direction vector rotation between 120 degrees and 150 degrees of the subfrustum 504, which may be useful if the directional visibility gradient is high, that is, when the rate of exposure of surfaces per degree of view direction vector rotation is high, or if the available bandwidth has a high degree of variability, such as network jitter. In this instance, the low level-of-detail surface information can subsequently be replaced by a higher level-of-detail representation. -
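The angular banding of FIG. 4 and FIG. 5 amounts to a three-way classification by a surface's angular offset from the reported view direction. A minimal sketch, assuming the specific 120/150 degree bands described above (the function name is illustrative):

```python
def lod_for_surface(delta_deg):
    """Select level-of-detail by angular band, per the FIG. 5 scheme:
    full detail inside the 120 degree extended frustum (|delta| <= 60),
    reduced detail in the 120-150 degree buffer band (60 < |delta| <= 75),
    and deferred transmission beyond 150 degrees."""
    d = abs(delta_deg)
    if d <= 60.0:
        return "high"   # inside the extended frustum
    if d <= 75.0:
        return "low"    # buffer band against rotation and network jitter
    return None         # outside the maximally extended frustum: defer
```

The band thresholds would in practice be recomputed per client from the measured ping latency and rotation-rate limits, rather than fixed as here.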
FIG. 6 is an exemplary illustration of a top-down view of a view frustum having a horizontal field of view of 90 degrees and undergoing rotation in the horizontal plane at a rate of 90 degrees per second 600, according to certain aspects. FIG. 6 shows a top-down view of a view frustum having a horizontal field of view of 90 degrees 602, undergoing rotation in the horizontal plane at a rate of 90 degrees per second 604 in a direction from a first region 606 toward a fourth region 612. In this exemplary case, surfaces to the right-hand side of the view frustum 602 will undergo incursion into the rotating frustum at the fourth region 612, whereas surfaces near the left-hand extreme of the view frustum 602, at the first region 606, will exit the frustum 602 during frustum rotation. In the exemplary case shown in FIG. 6, the surfaces in the first region 606 have been in the frustum 602 for between 750 ms and 1000 ms, having passed through the second region 608, the third region 610, and the fourth region 612 during the rotation. In the second region 608, the surfaces have been in the frustum 602 for between 500 ms and 750 ms; in the third region 610, for between 250 ms and 500 ms; and in the fourth region 612, for between 0 ms and 250 ms. Surfaces that have been in the frustum 602 for only a brief period of time have also been exposed to the graphical display in communication with the client device 108 for a concomitantly brief period of time. -
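The exposure times in FIG. 6 follow directly from the constant rotation rate: a surface's time in the frustum is its angular offset from the leading edge divided by the rotation rate. A small sketch, with an illustrative function name:

```python
def exposure_time_ms(offset_from_leading_edge_deg, rotation_dps=90.0):
    """Time a surface has been inside a rotating frustum, assuming it
    entered at the leading edge and the rotation rate is constant."""
    return offset_from_leading_edge_deg / rotation_dps * 1000.0


# Quartering the 90 degree frustum of FIG. 6 reproduces the four
# 250 ms exposure bands: 0-250, 250-500, 500-750, and 750-1000 ms.
bands = [(exposure_time_ms(lo), exposure_time_ms(hi))
         for lo, hi in [(0, 22.5), (22.5, 45), (45, 67.5), (67.5, 90)]]
```

This is what lets the server match level-of-detail to exposure duration: recently exposed surfaces (small offsets) can tolerate lower detail because neither the map-matching system nor the human visual system has had time to resolve them fully.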
FIG. 7 is an exemplary illustration of a server representation of client viewpoint position and orientation 700, according to certain aspects. The visibility event navigation system 100 can be used for streaming visibility event data, as well as other data, to a navigating client device 108 such as a vehicle or aircraft. In certain aspects, the client device 108 can be autonomous, semi-autonomous, on-board pilot or driver operated, remotely operated, and the like. - The visibility event navigation system includes a
server 106 that can be configured to employ a navigation-based prefetch scheme in which the entire specified navigational route to an intended destination is predetermined. In certain aspects, all of the corresponding visibility event navigational packets, for the entirety of the navigational route, can be transmitted by the circuitry of the server 106 to the client device 108 as a download prior to initiating navigation. Alternatively, all of the visibility event navigational packets can be streamed to the client device 108 during an initial portion of the trip, in which the visibility event packets being streamed correspond to viewcell boundaries that may be penetrated during movement of the client device 108 within the specified 3D navigational route. In this instance, the visibility event navigational server 106 and client device 108 do not require a network connection that is constantly available. However, if the specified navigational route is changed by the server 106 due to traffic, environmental changes, or any other reason for re-routing, the previous set of visibility event navigational packets, corresponding to the original specified navigational route beyond the point of rerouting, may be delivered unnecessarily, making inefficient use of the available network bandwidth. - In other aspects, the visibility event navigational packets can be streamed in a demand-paged mode in which the
server 106 monitors the actual location of the navigating visibility event client device 108 from self-position reports of the client device 108, sensor 104 data, and/or position reports of other participating navigating client devices, and streams the corresponding visibility event navigation packet to the navigating client device 108 just before the packet is needed. - In some aspects, a prefetch method which balances the constraints of reducing bandwidth requirements while reliably preventing visibility event cache underflow on the
client device 108 can be used. In this instance, the server 106 prefetches the visibility event packets using a navigation-predictor agent that is virtually navigating (within the server-side environmental model) at a specified distance or a specified time ahead of the client device 108 (navigating in the corresponding real environment). As such, the server-side navigation-predictor agent can be controlled by the server 106 to maintain a position that is ahead of the actual navigating client device's 108 corresponding position by a defined period which exceeds the round-trip network delay between the server 106 and the client device 108. In certain aspects, the defined period is short enough to allow the intended navigational route to be changed at any time before too much bandwidth has been committed to delivering information for a diverted route. In certain aspects, the server 106 and/or the client device 108 can modulate velocity along the intended navigational route in order to avoid visibility event cache underflow. This variable-delay navigational prefetch method is described in the copending parent application Ser. No. 15/013,784. In certain aspects of the present disclosure, the navigational prefetch method for streaming visibility event navigational packets supports 3D map-matching and other computer vision based navigation in real environments. - The server representation of client position and
orientation 700 describes a method of navigation-based prefetch of visibility event navigational packets in which the area and/or volume of navigational prediction collapses to a precise navigational route. The server representation of client position and orientation 700 includes a current location of the navigating client 702; a current frustum 704, representing the scan frustum of a LiDAR scanner, an active sensor, a passive sensor, or the like; a position at a second location 706; a second frustum 708; a pursuit route of navigation 710; a deterministic prefetch 712; a future position 714; a future frustum 716; and a probabilistic prefetch 718. The current position 702 follows a navigation route indicated by the future position 714. In certain aspects of the present disclosure, the current position 702 follows two seconds behind the future position 714. The current position 702 can provide a first frustum 704 which indicates the field of view of the client navigational sensor 104 at the current position 702. - The circuitry of the
server 106 modifies the position of the future position 714 directly. The future position acts as a virtual navigational prefetch agent, moving ahead on the pursuit route 710 in the modeled environment, which corresponds to the client device's 108 navigational route in the real environment. - The
current position 702 can represent the position of the navigating client device 108 in the real environment, as communicated between the client device 108 and the server 106. In certain aspects, the position of the client device 108 can change as a result of the movement of the future position 714. For example, the current position 702 can move along a specified navigational route 710 toward a second position 706. The position at the second location 706 can include a second frustum 708 in which the field of view of the sensor 104 at the second location 706 is represented, based on the intended heading of the navigating client device 108 at position 706. In some aspects, the fields of view are processed by the circuitry of the client device 108 to match the sensor data obtained within the frustum to the portions of the environmental model, delivered by the visibility event packets from the server 106, that are within a corresponding virtual frustum. - The
current position 702, the position at the second location 706, and the future position 714 travel along the intended navigational route 710, which corresponds to the commands received by the circuitry of the server 106. Additionally, the circuitry can be configured to pre-signal the intended navigational route with respect to the location of the future position 714. The server representation of client position and orientation 700 can predict navigational intent with very small uncertainty, such as 166 milliseconds. This low uncertainty allows network latency to be concealed when the server updates the future position 714. The future position 714 is illustrated as the position of the server's virtual navigational prefetch agent at a time 166 ms after a coordinated current time. - The current instantaneous state of the real navigational environment is known to the virtual navigational prefetch agent of the
server 106 with a dilution of precision that reflects the round-trip time (RTT) between the server 106 and the sensor 104 utilized to report the locations of objects in the real environment. - The position and orientation of the
current position 702 is known to the server 106 with a dilution of precision that reflects the 166 ms RTT between the client device 108 and the server 106. The circuitry of the server 106 can determine the position and orientation of the current position to within ½*RTT; however, any visibility event packet transmitted from the server 106 to the client device 108 will be delayed an additional ½*RTT, limiting the effective precision bounds to the full RTT. Consistent with certain aspects of the present disclosure, it is therefore assumed that the dilution of precision is determined by the full RTT. - In some aspects, the circuitry of the
server 106 determines representations of the future position 714 and transmits the representations to the client device 108. The representations can be less than RTT old (time elapsed from the current time) and have a dilution of precision that is a function of (RTT minus elapsed time). The server 106 representation of the future position 714 includes a future frustum 716 and a corresponding probabilistic prefetch 718, which can be utilized to determine a predictive route of uncertainty. This predictive route of uncertainty utilizes the probabilistic prefetch 718, in which the future position 714 and orientation states for which less than RTT time has elapsed from the current time are shown as a dashed trajectory with a future frustum 716, otherwise referred to as a superimposed navigational cone of uncertainty, reflecting the server's 106 effective uncertainty of the position and/or orientation of the future position 714. - As such, when the RTT exceeds the elapsed time, then the server's 106 representation of the
current position 702 is undiluted by the RTT uncertainty, and the server's 106 representation of the deterministic prefetch 712 portion of the route 710 can be represented by the server 106 as a space curve with a specific view direction vector for each position on the space curve. - As limits are placed on the
current position 702 and the position at the second location 706, the predictability of navigation is enhanced. The area and/or volume of the corresponding current frustum 704 and the second frustum 708 are decreased. The decrease in area and/or volume of the current frustum 704 and the second frustum 708 further constrains the predicted route of navigation 710, effectively decreasing the bandwidth required for visibility event packet streaming. - Additionally, the predicted
route 710 can be utilized to defer the transmission of significant portions of the visibility event packets. The visibility event protocol, defined by a navigation-driven predictive prefetch of precomputed visibility event packets, is an incremental and progressive method of streaming navigational data. In some aspects of the present disclosure, a series of partial and/or deferred visibility event packets that reflect predicted sequences of viewcell-to-viewcell boundaries are processed via the circuitry of the server 106. As such, runtime conservative view frustum culling methods are employed in which some parts of a particular visibility event packet, corresponding to a first viewcell boundary, may go untransmitted, even as the position penetrates later transited viewcell boundaries. - By having the navigating
client device 108 follow the future position of the virtual navigational prefetch agent of the server 106 by 2000 ms, the prefetch of visibility event packets becomes essentially deterministic. This reduces the bandwidth required to stream the visibility event packets, while enabling the virtual navigational prefetch agent of the server 106 to respond to changes occurring along the intended navigational route by transmitting a new stream of visibility event packets corresponding to a diverting route in a time period between 166 ms and 2000 ms, as shown in FIG. 7. -
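The navigation-based prefetch described above can be sketched as a generator that keeps the predictor agent a fixed lead ahead of the client along the shared sequence of viewcells. This is an illustrative simplification: the lead is expressed in viewcells rather than the time lead (e.g., 2000 ms) described in the disclosure, and the function name and packet lookup are assumptions.

```python
def stream_prefetch(route_viewcells, packets_by_viewcell, lead_cells):
    """Yield each visibility event packet exactly once, just before the
    client (at index i on the route) reaches the viewcell the server-side
    predictor agent, lead_cells ahead, is about to enter."""
    sent = set()
    for i, _cell in enumerate(route_viewcells):
        # viewcells between the client and the predictor agent
        ahead = route_viewcells[i:i + lead_cells + 1]
        for cell in ahead:
            if cell not in sent:
                sent.add(cell)
                yield packets_by_viewcell[cell]


route = ["a", "b", "c", "d"]
packets = {"a": "A", "b": "B", "c": "C", "d": "D"}
delivered = list(stream_prefetch(route, packets, lead_cells=1))
```

Because each packet is sent once and only a bounded lead is ever in flight, a route diversion wastes at most the packets within the lead window, which is the bandwidth/responsiveness trade-off the 166 ms to 2000 ms range in FIG. 7 expresses.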
FIG. 8 describes a high-level architectural block diagram of the visibility event navigation system 800, according to certain aspects. The visibility event navigation system 100 can correspond to a navigation system on-board a human-operated, autonomous, or semi-autonomous aircraft or ground vehicle. FIG. 8 illustrates an exemplary navigation system including a client device 108, such as one on-board an autonomous quadrotor aircraft, in various stages of flight corresponding to locations 820, 822, and 824.
event navigation system 100 can include anetwork 102 including a plurality ofnetwork nodes network 102 includingnetwork nodes - At the
position 820, the client device 108 transmits a request for a navigational route to an intended destination, which may be an individual home, a business building, a porch, a rooftop, the inside of a structure, and the like. Transmission of the request 805 to the server 106 employs a network node 813. The server 106 authenticates the identity of the client device 108 and transmits 807 initial visibility event navigation data corresponding to the visibility event packets for viewcells on the navigational route to the intended destination. The navigational route can include a portion of the environment represented by a set of connected viewcells within the navigable 3D space of the environment and a corresponding model of the environment. Once the initial visibility event navigation packets are received at the client device 108, the client device 108 proceeds from the initial location 820 to location 822 along the navigational route to the intended destination. At location 822, the client device 108 can be configured to determine location information using data detected at a sensor 104 in communication with the client device 108. The sensor 104 can be utilized by the client device 108 to determine, directly or indirectly, surface information including the location of the visible 3D surfaces surrounding the client device 108, and to compare this surface information to the relevant 3D environmental model data delivered by the visibility event navigational stream. This location information is transmitted to the server 106 through the communication network node 815 in transmission 814. - After receiving the
transmission 814 from the client device 108 at position 822 via the network node 815, the server 106 determines whether the client device 108 at location 822 is located along the navigational route to the intended destination. If the client device 108 is determined to be located along the navigational route to the intended destination, then the server transmits 817 additional visibility event navigational packets for viewcells further along the navigational route to the intended destination. In some aspects of the present disclosure, the transmission 817 from the server 106 includes visibility event navigation data corresponding to the visibility event packets for viewcells on one or more alternate routes to safe landing zones. -
FIG. 8 further shows the client device 108 at position 824. Position 824 depicts a location that is not on the navigational route to the intended destination, but on a route 840 that substantially deviates from the navigational route. After receiving the transmission 835 from the client device 108 at position 824 via the network node 839, the server 106 determines whether the client device 108 is located on the navigational route to the intended destination. If the client device 108 is determined to not be on the navigational route, then the server may transmit a signal 837 from the network node 839 to the client device 108 at position 824 including a safe landing command. In some aspects, the safe landing command causes the client device 108 to utilize the visibility event packets for the alternate route to a safe landing zone to navigate along the route to the safe landing zone 840 and land at position 826. In certain aspects, this safe landing command may be implemented as a default behavior of the visibility event navigation system in instances where communication between the client device 108 and the server 106 is interrupted for a predetermined period of time. -
FIG. 9 is an algorithmic flowchart of a visibility event navigation process 900, according to certain exemplary aspects. The visibility event navigation process 900 describes a client device 108 navigating along a navigational route, and the transmission of an alternate route and safe landing command if the client device 108 strays from the navigational route. At step 902, the circuitry of the client device 108 transmits a request to the server 106 for visibility event packets. The visibility event packets can be navigational packets that correspond to the navigational route that the client device 108 intends to traverse. - At
step 904, the server 106 in communication with the client device 108 authenticates the request of the client device 108. In certain aspects, the server 106 is configured to deliver visibility event packets to certified client devices 108 which present authenticated credentials using a secure process of authentication, such as a cryptographic ticket. The transmission of visibility event packets to certified client devices 108 ensures that the client device 108 has met certain requirements for navigating along the intended route. - At
step 906, the circuitry of the server 106 transmits visibility event navigational packets to the client device 108. The initial navigational packets may include a subset of the entire set of visibility event packets corresponding to the contiguous viewcells of the navigable space representing the specified navigational route to the intended destination. In some aspects, the circuitry of the server 106 transmits visibility event data representing the set of visibility event packets corresponding to the contiguous viewcells of the navigable space representing the route to a safe landing zone. - At
step 908, the circuitry of the client device 108 employs the relevant visibility event packet navigational data and 3D sensor data to determine the location of the client device 108 within the real environment using 3D map-matching. In some aspects, the matching includes other matching methods, computer vision methods, and the like. The matching may result in a position relative to the relevant obstacle or target surfaces in the real environment. The determination can also result in latitude, longitude, and/or altitude calculations if the environmental model data of the visibility event packets is geo-registered. - At
step 910, the circuitry of the client device 108 transmits information corresponding to the location of the client device 108 in the real environment to the server 106. In certain aspects, the location information can include latitude, longitude, and/or altitude calculations if the environmental model data of the visibility event packets is geo-registered. - At
step 912, a determination is made of whether the client device 108 is located along the specified navigational route. In some aspects, the server 106 determines if the transmitted location of the client device 108 is on the navigational route. In certain aspects, the determination can employ multiple parameters to determine if the location of the navigating client device 108 substantially deviates from the navigational route. For example, the parameters can include the average distance from the navigational route over a predetermined period of time, in which the desired route is defined as a navigational route bounded by an allowed region of navigational uncertainty reflecting a known or desired precision in navigation. If the client device is determined to be located along the navigational route, resulting in a "yes" at step 912, the visibility event navigation process 900 proceeds to step 914. Otherwise, if the client device is determined to not be located along the navigational route, resulting in a "no" at step 912, the visibility event navigation process 900 proceeds to step 918. - At
step 914, the circuitry of the server 106 transmits additional visibility event navigation packets, corresponding to viewcells that are farther along the specified navigational route, to the client device 108. - At
step 916, the client device 108 receives the instructions transmitted from the server 106 and continues to navigate along the specified navigational route. In some aspects, the visibility event navigation process 900 ends after completing step 916. In other aspects, the visibility event navigation process 900 proceeds to step 910. - At
step 918, the circuitry of the server 106 transmits a command to the client device 108 in which the client device 108 is provided with instructions to navigate on an alternate route toward a safe landing zone and land at the safe landing zone. - At
step 920, the client device 108 navigates along the alternate route and lands at the safe landing zone. In certain aspects, the visibility event navigational client executes an internal instruction to follow an alternate route to a safe landing zone and land if communication with the visibility event navigational server is interrupted. In some aspects, the visibility event navigation process 900 ends upon the completion of step 920. In other aspects, the visibility event navigation process 900 proceeds to step 910. -
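The route check at step 912 can be sketched as a windowed average-distance test against the allowed region of navigational uncertainty. The function name, window length, and threshold are illustrative assumptions; the disclosure leaves the specific parameters open.

```python
def deviates_from_route(distances_m, window, max_avg_m):
    """Decide the step 912 branch: the client is off-route when its
    average distance from the route over the last `window` position
    reports exceeds the allowed navigational uncertainty.

    distances_m: per-report distance of the client from the route, in
    meters, oldest first.
    """
    recent = distances_m[-window:]
    return sum(recent) / len(recent) > max_avg_m


# A transient GPS blip does not trip the test; a sustained departure does.
off_route = deviates_from_route([1.0, 1.0, 50.0, 60.0, 70.0], window=3, max_avg_m=20.0)
```

Averaging over a window, rather than testing each report, tolerates momentary localization error while still catching the sustained deviation that triggers the safe-landing branch at step 918.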
FIG. 10 is an algorithmic flowchart of a visibility event navigation difference determination process 1000, according to certain exemplary aspects. The visibility event navigation difference determination process 1000 describes a client device 108 navigating along a navigational route and the determination of differences between the delivered environmental model data and the real environment sensed by the client device 108. At step 1002, the circuitry of the client device 108 transmits a request to the server 106 for visibility event packets. The visibility event packets can be navigational packets that correspond to the navigational route that the client device 108 intends to traverse. - At
step 1004, the server 106, in communication with the client device 108, authenticates the request of the client device 108. In certain aspects, the server 106 is configured to deliver visibility event packets to certified client devices 108 which present authenticated credentials using a secure process of authentication, such as a cryptographic ticket. The transmission of visibility event packets to certified client devices 108 ensures that the client device 108 has met certain requirements for navigating along the intended route. - At
step 1006, the circuitry of the server 106 transmits visibility event navigational packets to the client device 108. The initial navigational packets may include a subset of the entire set of visibility event packets corresponding to the contiguous viewcells of the navigable space representing the specified navigational route to the intended destination. - At
step 1008, the circuitry of the client device 108 determines a difference between the environmental model data delivered by the visibility event packets and the corresponding 3D ground truth, as determined using the 3D sensor data of a sensor 104 that is in communication with the client device 108. In certain aspects, the difference determination occurs during the 3D map-matching, in which the environmental model data supplied by the visibility event packets and the 3D data acquired by the sensor 104 determine the location of the client device 108 within the real environment. The difference determination of step 1008 may employ one or more statistical metrics of 'fit', as determined by an iterative closest points algorithm or another matching algorithm, including variations of the iterative closest points algorithm in which real-time acquired sensor point cloud data is matched to model polygons or other surface representations used in the corresponding environmental model. - At
step 1010, a caution state in navigation is initiated at the client device 108 via the circuitry of the client device 108. In certain aspects, the difference between the representation of the environment and the ground-truth scan from the sensor 104 results in the circuitry utilizing the visibility event packet data with caution in the immediate area in which the client device 108 is located. In this state of cautious navigation, the visibility event navigation can employ less precise navigational methods such as SLAM, GPS, and the like. As such, the more precise 3D map-matching or other computer vision based navigation can resume when there is sufficient match between the delivered model data and the real-time sensor data, that is, when the difference no longer surpasses a predetermined threshold. - At
step 1012, the difference information obtained by the circuitry of the client device 108 is transmitted from the client device 108 to the server 106. In certain aspects, the visibility event navigation system 100 exploits the fact that the amount of information generally required to represent a difference over any modest period, for example in an urban environment, is very small compared to the amount of information required to represent the urban environment itself. As such, this small amount of difference information can be transmitted as raw point cloud data, or can first be semi-processed, for example into a voxel representation or a procedural parametric representation. In some aspects, the difference information is processed into polygon or procedural surfaces that can be encoded as visibility event packets for transmission to the server 106. - At
step 1014, a determination is made of whether the difference information is significant and verified. In some aspects, verification of the difference information can include the server 106 receiving substantially the same or similar difference information, for the same corresponding region of the navigated environment, from a plurality of client devices 108 over a predetermined period of time. In certain aspects, the significance of the difference can be assessed using spatial metrics, or other metrics that weigh the importance of the change to navigation. For example, the change information can indicate a new structure that impedes navigation along one or more navigation routes. In other aspects, the significance of the change information can include metrics of saliency to the client device 108, including 3D map-matching or other computer vision algorithms that use the real surfaces and the corresponding environmental 3D model data, delivered as visibility event packets. If the delivered environmental change information satisfies a predetermined threshold for significance, resulting in a "yes" at step 1014, the visibility event navigation difference determination process 1000 proceeds to step 1016. Otherwise, if the delivered environmental change information does not satisfy the predetermined threshold for significance, resulting in a "no" at step 1014, the visibility event navigation difference determination process 1000 ends. - At
step 1016, the changes to the environmental model are incorporated and encoded as visibility event packets by the circuitry of the server 106. - At
step 1018, the visibility event packets are transmitted on an on-demand basis to any subsequent client devices 108 for which the changed surfaces are unoccluded from the viewcells defining the current navigational route. In certain aspects, the transmission employs the same navigation-based predictive prefetch used to transmit the unchanged packets. In some aspects, the updated visibility event navigation data is received by a second client device. For example, the second client device can include an aircraft or ground vehicle that has entered, or plans to enter, regions of the real environment in which the changes detected by the client device 108 have now been encoded as visibility event packets. -
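One concrete form of the 'fit' metric used in the difference determination of step 1008 is the RMS distance from each sensor point to its nearest model point, i.e., the residual an iterative-closest-points alignment minimizes. The sketch below is an illustrative brute-force version (the disclosure does not fix a specific metric, and a practical implementation would use a spatial index rather than an exhaustive nearest-neighbor search):

```python
import math

def rms_fit_error(sensor_points, model_points):
    """RMS distance from each real-time sensor point to its nearest
    point sampled from the delivered environmental model. A large value
    at step 1008 triggers the caution state of step 1010."""
    def nearest_sq(p):
        return min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2
                   for q in model_points)
    return math.sqrt(sum(nearest_sq(p) for p in sensor_points)
                     / len(sensor_points))


model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
error = rms_fit_error([(0.0, 0.0, 1.0)], model)  # one point, 1 m off the model
```

Comparing this residual against a predetermined threshold yields both the caution-state decision (step 1010) and, aggregated across reporting clients, an input to the significance test at step 1014.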
FIG. 11 is an algorithmic flowchart of a visibility event navigation incident determination process 1100, according to certain exemplary aspects. The visibility event navigation incident determination process 1100 describes a client device 108 that navigates to a determined incident location or a location along an ingress/egress route to/from the incident location. At step 1102, the circuitry of the client device 108 transmits a request to the server 106 for visibility event packets. The visibility event packets can be navigational packets that correspond to the navigational route that the client device 108 intends to traverse. - At
step 1104, the server 106 is in communication with the client device 108 and authenticates the request of the client device 108. In certain aspects, the server 106 is configured to deliver visibility event packets to certified client devices 108 which present authenticated credentials using a secure process of authentication, such as a cryptographic ticket. The transmission of visibility event packets to certified client devices 108 ensures that the client device 108 has met certain requirements for navigating along the intended route. - At
step 1106, the circuitry of the server 106 transmits visibility event navigational packets to the client device 108. The initial navigational packets may include a subset of the entire set of visibility event packets corresponding to the contiguous viewcells of the navigable space representing the specified navigational route to the intended destination. - At
step 1108, the client device 108 navigates along the specified navigational route. - At
step 1110, a determination is made of whether an incident has occurred. In certain aspects, the circuitry of the server 106 determines if there has been an incident in the real environment. In some aspects, the incident may be an event that threatens public safety or security. If an incident is determined to be present, resulting in a “yes” at step 1110, the visibility event navigation incident determination process 1100 proceeds to step 1112. Otherwise, if an incident is not determined to be present, resulting in a “no” at step 1110, the visibility event navigation incident determination process 1100 proceeds to step 1116. - At
step 1112, visibility event packets corresponding to a route to the incident location are transmitted from the server 106 to the client device 108 via the circuitry of the server 106. In certain aspects, the transmitted visibility event packets can correspond to one or more ingress or egress routes to or from the incident location. - At
step 1114, the circuitry of the server 106 transmits a command to the client device 108, which instructs the client device 108 to navigate to the incident location or to a location along the ingress/egress routes that are to/from the incident. In certain aspects, the diversion of the client device 108 to an egress route enables surveillance for incident perpetrators attempting to escape from the incident location. As described in copending parent application Ser. No. 13/807,824, the method of precomputing visibility event packets using conservative linearized umbral event surfaces can incorporate identifying the navigable space of the environment and possible routes within the modeled space, including ingress and egress routes. In exemplary aspects, visibility event packets delivered in step 460 can include those corresponding to positions from which egress routes and ingress routes are unoccluded and which provide good vantage points for mobile surveillance. - At
step 1116, the circuitry of the server 106 transmits more visibility event navigational packets for the specified route to the client device 108. In some aspects, the visibility event navigation incident determination process 1100 proceeds to step 1108 upon the completion of step 1116. In other aspects, the visibility event navigation incident determination process 1100 ends upon the completion of step 1116.
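The incident branch of FIG. 11 — authenticate the request at step 1104, stream route packets at steps 1106/1116, and divert the client with ingress/egress packets and a command at steps 1112-1114 — admits a compact sketch. The ticket set, packet labels, and return shape below are assumptions for illustration, not the disclosed protocol.

```python
# Hedged sketch of the FIG. 11 decision flow: authenticated clients receive
# route packets; when the server declares an incident, it instead returns
# ingress/egress packets and a diversion command. Names are illustrative.

def authenticate(ticket, valid_tickets):
    """Stand-in for the cryptographic-ticket check of step 1104."""
    return ticket in valid_tickets

def serve_route(ticket, route_packets, incident=None,
                valid_tickets=frozenset({"ticket-42"})):
    if not authenticate(ticket, valid_tickets):
        return {"error": "unauthenticated"}
    if incident is not None:
        # Steps 1112-1114: ingress/egress packets plus a diversion command.
        return {"packets": incident["route_packets"],
                "command": "navigate-to:" + incident["location"]}
    # Step 1116: continue streaming packets for the specified route.
    return {"packets": route_packets, "command": "continue"}

print(serve_route("ticket-42", ["VE1", "VE2"]))
print(serve_route("ticket-42", ["VE1"],
                  incident={"location": "pier-9",
                            "route_packets": ["VE-egress-1"]}))
```

The first call returns the normal streaming response; the second returns the egress packets and a `navigate-to:pier-9` command, mirroring the diversion-for-surveillance behavior described above.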
FIG. 12 is an algorithmic flowchart of a visibility event navigation unauthorized object determination process 1200, according to certain exemplary aspects. The visibility event navigation unauthorized object determination process 1200 describes a process of detecting and responding to unauthorized objects in a real environment. At step 1202, the circuitry of the client device 108 transmits a request to the server 106 for visibility event packets. The visibility event packets can be navigational packets that correspond to the navigational route that the client device 108 intends to traverse. - At
step 1204, the server 106 is in communication with the client device 108 and authenticates the request of the client device 108. In certain aspects, the server 106 is configured to deliver visibility event packets to certified client devices 108 which present authenticated credentials using a secure process of authentication, such as a cryptographic ticket. The transmission of visibility event packets to certified client devices 108 ensures that the client device 108 has met certain requirements for navigating along the intended route. - At
step 1206, the circuitry of the server 106 transmits visibility event navigational packets to the client device 108. The initial navigational packets may include a subset of the entire set of visibility event packets corresponding to the contiguous viewcells of the navigable space representing the specified navigational route to the intended destination. The circuitry of the server 106 also transmits information representing the location of authorized client devices 108 operating in the vicinity of the specified navigational route. In certain aspects, the vicinity includes an area within a predetermined sensor range of the specified navigational route, wherein the sensor range includes the range of a sensor 104 that is in communication with the client device 108. - At
step 1208, the circuitry of the client device 108 detects unauthorized moving objects in the real environment. In certain aspects, the real environment includes airspace. The method of detection can include one or more methods of detecting moving objects via on-board LiDAR, SONAR, radar, optical sensors, or any other detection means that are known. In some aspects, utilizing the information describing known locations of authorized client devices in the vicinity can improve the detection of unauthorized moving objects. The client device 108 can also use the sensor 104, such as an on-board sensor, to detect, track, and report the location of other authorized client devices 108, or authorized client devices 108 that have lost communication with the server 106. - At
step 1210, the circuitry of the client device 108 transmits information describing the location of unauthorized moving objects to the server 106. - At
step 1212, a determination is made of whether the detection of a moving object is significant and/or verified. In certain aspects, the determination of significance may include size, speed, and other parameters of the moving object. In some aspects, the verification may include repeated, correlated observations from multiple client devices 108 or other authenticated sources. If the moving object is determined to be significant and/or verified, resulting in a “yes” at step 1212, the visibility event navigation unauthorized object determination process 1200 proceeds to step 1214. Otherwise, if the moving object is determined to not be significant or verified, resulting in a “no” at step 1212, the visibility event navigation unauthorized object determination process 1200 ends. - At
step 1214, the circuitry of the server 106 transmits the location of the unauthorized object, such as an aircraft or ground vehicle, to a client device. The circuitry of the server 106 can also be configured to transmit the visibility event packets corresponding to an intercept or pursuit route to the unauthorized object. In some aspects, the server 106 can also transmit a command to the client device 108 which instructs the client device 108 to pursue, track, or otherwise engage the detected unauthorized object. - At
step 1216, the client device 108 initiates pursuit, tracking, or engagement of the unauthorized object by following a pursuit route or intercept route specified by the visibility event packets.
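One hedged reading of the step-1212 test — significance derived from object size and speed, verification from correlated reports by multiple client devices 108 — is sketched below. The thresholds and the 25 m correlation radius are invented for illustration; the disclosure leaves the exact metrics open.

```python
# Toy significance/verification check for step 1212. An object report is
# significant when it exceeds size/speed thresholds (assumed values), and
# verified when at least two clients report positions that correlate to
# within an assumed 25 m radius.

def is_significant(obj, min_size_m=0.5, min_speed_mps=2.0):
    return obj["size_m"] >= min_size_m and obj["speed_mps"] >= min_speed_mps

def is_verified(observations, max_sep_m=25.0, min_observers=2):
    def close(a, b):
        # Euclidean distance between two reported 3D fixes.
        return sum((a[i] - b[i]) ** 2 for i in range(3)) ** 0.5 <= max_sep_m
    # A client counts as a corroborating observer if its fix correlates
    # with at least one other client's fix.
    observers = {o["client"] for o in observations
                 if any(close(o["pos"], p["pos"]) for p in observations
                        if p["client"] != o["client"])}
    return len(observers) >= min_observers

reports = [{"client": "uav1", "pos": (10.0, 0.0, 50.0)},
           {"client": "uav2", "pos": (18.0, 6.0, 52.0)}]
obj = {"size_m": 1.2, "speed_mps": 14.0}
print(is_significant(obj) and is_verified(reports))   # -> True
```

A single uncorroborated report, or an object below the size/speed thresholds, yields a "no" at step 1212 and the process ends without dispatching a pursuit route.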
FIG. 13 is a block diagram of a visibility event navigation system workflow 1300, according to certain exemplary aspects. The visibility event navigation system workflow 1300 describes a high-level architectural block diagram of the visibility event navigation system 100. The visibility event navigation system 100 can be used for streaming visibility event data and other data to navigating client devices 108 such as vehicles, aircraft, and the like. In some aspects, the client devices can be autonomous, semi-autonomous, on-board pilot operated, on-board driver operated, remotely operated, and the like. The server 1310 of the visibility event navigation system 100 can include circuitry that is configured to process and transmit urban and/or natural environmental model surfaces, content, and the like of a 3D environmental model database 1302 to a client device 108 via a network 102. - The circuitry of the
server 106 can be configured to deliver the environmental model surfaces and the content of the 3D environmental model database 1302 via a visibility event data stream employing visibility event packets 1306 that are encoded using first-order visibility propagation via a visibility event packet encoder 1304. In certain aspects, the visibility event navigation packets 1306 are precomputed from the 3D environmental model 1302 at an earlier time and stored for later use. The circuitry of the server 1310 can further be configured to run visibility event server software 1308 in which the visibility event packets 1306 are processed in a visibility event navigation server 1310. The processed visibility event packets 1306 of the server 1310 can be transmitted to a client device 108 in which visibility event client software 1312 employs the processed visibility event packets 1306 in a 3D map-matching or computer vision based navigation system 1314. In certain aspects, the visibility event server software 1308 employs navigation-driven predictive prefetch to transmit the precomputed visibility event packets 1306. In certain aspects, specifying a defined navigational route and streaming the visibility event packets 1306 to client visibility event navigation software 1312 increases the predictability of navigation-driven prefetch and reduces the bandwidth required to stream the packets.
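The navigation-driven predictive prefetch of the visibility event server software 1308 can be illustrated with a toy route: because the route's viewcell sequence is fixed in advance, the only packet the server must stream next is the one for the upcoming viewcell boundary. Viewcell and packet names below are hypothetical.

```python
# Minimal sketch of navigation-driven predictive prefetch along a defined
# route. Each packet carries the surfaces newly visible when crossing one
# viewcell boundary; a fixed route makes the next boundary fully predictable.

route = ["VC1", "VC2", "VC3", "VC4"]          # contiguous route viewcells
packets = {("VC1", "VC2"): "VE[VC1->VC2]",    # per-transition packets (assumed labels)
           ("VC2", "VC3"): "VE[VC2->VC3]",
           ("VC3", "VC4"): "VE[VC3->VC4]"}

def prefetch(current_viewcell):
    """Return the packet for the next boundary on the route, or None at the end."""
    i = route.index(current_viewcell)
    if i + 1 < len(route):
        return packets[(route[i], route[i + 1])]
    return None

print(prefetch("VC2"))   # -> VE[VC2->VC3]
```

With a free-roaming client the server would have to speculate over every adjacent viewcell; constraining motion to the route collapses that fan-out to a single transition, which is the bandwidth reduction the paragraph above describes.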
FIG. 14 is a hardware block diagram of a client device, according to certain exemplary aspects. In FIG. 14, the client device 108 includes a CPU 1400, which performs the processes described above. The process data and instructions may be stored in memory 1402. These processes and instructions may also be stored on a storage medium disk 1404 such as a hard disk drive (HDD) or portable storage medium, or may be stored remotely. Further, the claimed advancements are not limited by the form of the computer-readable media on which the instructions of the inventive process are stored. For example, the instructions may be stored on CDs, DVDs, in FLASH memory, RAM, ROM, PROM, EPROM, EEPROM, a hard disk, or any other information processing device with which the client device 108 communicates, such as a server or computer. - Further, the claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with
CPU 1400 and an operating system such as MICROSOFT WINDOWS, UNIX, SOLARIS, LINUX, APPLE MAC-OS, and other systems known to those skilled in the art. - The hardware elements used to achieve the server 106 may be realized by various circuitry elements known to those skilled in the art. For example, CPU 1400 may be a XEON or CORE processor from Intel of America or an OPTERON processor from AMD of America, or may be other processor types that would be recognized by one of ordinary skill in the art. Alternatively, the CPU 1400 may be implemented on an FPGA, ASIC, PLD, or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 1400 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above. - The
client device 108 in FIG. 14 also includes a network controller 1406, such as an Intel Ethernet PRO network interface card from Intel Corporation of America, for interfacing with a network 102. As can be appreciated, the network 102 can be a public network, such as the Internet, or a private network such as a LAN or WAN network, or any combination thereof, and can also include PSTN or ISDN sub-networks. The network 102 can also be wired, such as an Ethernet network, or can be wireless, such as a cellular network including EDGE, 3G and 4G wireless cellular systems. The wireless network can also be Wi-Fi, BLUETOOTH, or any other wireless form of communication that is known. - The
client device 108 further includes a display controller 1408, such as a NVIDIA GeForce GTX or Quadro graphics adaptor from NVIDIA Corporation of America, for interfacing with display 1410, such as a Hewlett Packard HPL2445w LCD monitor. A general purpose I/O interface 1412 interfaces with a touch screen panel 1416 on or separate from display 1410. The general purpose I/O interface also connects to a variety of peripherals 1418 including printers and scanners, such as an OfficeJet or DeskJet from Hewlett Packard. - A
sound controller 1420 is also provided in the client device 108, such as Sound Blaster X-Fi Titanium from Creative, to interface with speakers/microphone 1422, thereby providing sounds and/or music. - The general
purpose storage controller 1424 connects the storage medium disk 1404 with communication bus 1426, which may be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the client device 108. A description of the general features and functionality of the display 1410, display controller 1408, storage controller 1424, network controller 1406, sound controller 1420, and general purpose I/O interface 1412 is omitted herein for brevity as these features are known. - The exemplary circuit elements described in the context of the present disclosure may be replaced with other elements and structured differently than the examples provided herein. Moreover, circuitry configured to perform features described herein may be implemented in multiple circuit units (e.g., chips), or the features may be combined in circuitry on a single chipset, as shown in
FIG. 15. -
FIG. 15 is a hardware block diagram of a data processing system 1500, according to certain exemplary aspects. FIG. 15 shows a schematic diagram of the data processing system 1500 for performing visibility event navigation. The data processing system 1500 is an example of a computer in which code or instructions implementing the processes of the illustrative aspects may be located. - In
FIG. 15, data processing system 1500 employs a hub architecture including a north bridge and memory controller hub (NB/MCH) 1525 and a south bridge and input/output (I/O) controller hub (SB/ICH) 1520. The central processing unit (CPU) 1530 is connected to NB/MCH 1525. The NB/MCH 1525 also connects to the memory 1545 via a memory bus, and connects to the graphics processor 1550 via an accelerated graphics port (AGP). The NB/MCH 1525 also connects to the SB/ICH 1520 via an internal bus (e.g., a unified media interface or a direct media interface). The CPU 1530 may contain one or more processors and may even be implemented using one or more heterogeneous processor systems. -
FIG. 16 is a hardware block diagram of a CPU, according to certain exemplary aspects. FIG. 16 shows one implementation of CPU 1530. In one implementation, the instruction register 1638 retrieves instructions from the fast memory 1640. At least part of these instructions are fetched from the instruction register 1638 by the control logic 1636 and interpreted according to the instruction set architecture of the CPU 1530. Part of the instructions can also be directed to the register 1632. In one implementation the instructions are decoded according to a hardwired method, and in another implementation the instructions are decoded according to a microprogram that translates instructions into sets of CPU configuration signals that are applied sequentially over multiple clock pulses. After fetching and decoding the instructions, the instructions are executed using the arithmetic logic unit (ALU) 1634, which loads values from the register 1632 and performs logical and mathematical operations on the loaded values according to the instructions. The results from these operations can be fed back into the register and/or stored in the fast memory 1640. - According to certain implementations, the instruction set architecture of the
CPU 1530 can use a reduced instruction set architecture, a complex instruction set architecture, a vector processor architecture, or a very large instruction word architecture. Furthermore, the CPU 1530 can be based on the Von Neumann model or the Harvard model. The CPU 1530 can be a digital signal processor, an FPGA, an ASIC, a PLA, a PLD, or a CPLD. Further, the CPU 1530 can be an x86 processor by Intel or by AMD; an ARM processor; a Power architecture processor by, e.g., IBM; a SPARC architecture processor by Sun Microsystems or by Oracle; or other known CPU architecture. - Referring again to
FIG. 15, in the data processing system 1500, the SB/ICH 1520 is coupled through a system bus to an I/O bus, a read only memory (ROM) 1556, a universal serial bus (USB) port 1564, a flash binary input/output system (BIOS) 1568, and a graphics controller 1558. PCI/PCIe devices can also be coupled to the SB/ICH 1520 through a PCI bus 1562. - The PCI devices may include, for example, Ethernet adapters, add-in cards, and PC cards for notebook computers. The
hard disk drive 1560 and CD-ROM 1566 can use, for example, an integrated drive electronics (IDE) or serial advanced technology attachment (SATA) interface. In one implementation the I/O bus can include a super I/O (SIO) device. - Further, the hard disk drive (HDD) 1560 and
optical drive 1566 can also be coupled to the SB/ICH 1520 through a system bus. In one implementation, a parallel port 1578 and a serial port 1576 can be connected to the system bus through the I/O bus. Other peripherals and devices can be connected to the SB/ICH 1520 using a mass storage controller such as SATA or PATA, an Ethernet port, an ISA bus, an LPC bridge, SMBus, a DMA controller, and an Audio Codec. - The functions and features described herein may also be executed by various distributed components of a system. For example, one or more processors may execute these system functions, wherein the processors are distributed across multiple components communicating in a network. The distributed components may include one or more client and server machines, which may share processing, in addition to various human interface and communication devices (e.g., display monitors, smart phones, tablets, personal digital assistants (PDAs)). The network may be a private network, such as a LAN or WAN, or may be a public network, such as the Internet. Input to the system may be received via direct user input and received remotely either in real-time or as a batch process.
- The above-described hardware description is a non-limiting example of corresponding structure for performing the functionality described herein.
- A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of this disclosure. For example, preferable results may be achieved if the steps of the disclosed techniques were performed in a different sequence, if components in the disclosed systems were combined in a different manner, or if the components were replaced or supplemented by other components. The functions, processes and algorithms described herein may be performed in hardware or software executed by hardware, including computer processors and/or programmable circuits executing program code and/or computer instructions to execute the functions, processes and algorithms described herein. Additionally, an implementation may be performed on modules or hardware not identical to those described. Accordingly, other implementations are within the scope that may be claimed.
- The above disclosure also encompasses the aspects listed below.
- (1) A method of visibility event navigation, including one or more visibility event packets located at a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: receiving, via processing circuitry of a client device, at least one visibility event packet of the one or more visibility event packets from the server; detecting, via the circuitry, surface information representing one or more visible surfaces of the real environment at a sensor in communication with the client device; calculating, via the circuitry, at least one position of the client device in the real environment by matching the surface information to the visibility event packet information corresponding to a first visibility event packet of the one or more visibility event packets; transmitting, via the circuitry, the at least one position from the client device to the server; and receiving, via the circuitry, at least one second visibility event packet of the one or more visibility event packets at the client device from the server when the at least one position is within the navigational route.
- (2) The method of (1), further including: receiving, via the circuitry, at least one alternate visibility event packet of the one or more visibility event packets at the client device from the server, wherein the at least one alternate visibility event packet represents 3D surface elements of the geospatial model that are occluded from a third viewcell but not occluded from a fourth viewcell, the third and fourth viewcells representing spatial regions of an alternate navigational route within the real environment modeled by the geospatial model, the alternate navigational route leading to a safe landing zone.
- (3) The method of either (1) or (2), further including: receiving, via the circuitry, a command for navigation to occur along the alternate navigational route at the client device from the server, when the at least one position is not within the navigational route.
- (4) The method of any one of (1) to (3), further including: receiving, via the circuitry, a command for navigation to occur along the alternate navigational route and to the safe landing zone when the client device fails to receive a signal from the server during a predetermined period of time.
- (5) The method of any one of (1) to (4), wherein the client device includes a navigation system for at least one of an autonomous aircraft, a semiautonomous aircraft and a piloted aircraft.
- (6) The method of any one of (1) to (5), wherein the safe landing zone includes a specific viewcell as a zone reachable by the autonomous aircraft in case of an engine failure while the autonomous aircraft is in the specific viewcell.
- (7) The method of any one of (1) to (6), wherein the safe landing zone includes a specific viewcell as a zone reachable by the piloted aircraft in case of an engine failure while the piloted aircraft is in the specific viewcell.
- (8) The method of any one of (1) to (7), further including: detecting, via the circuitry, at least one moving object using the surface information and the visibility event packet information; and transmitting, via the circuitry, information representing the location of the at least one moving object from the client device to the server.
- (9) The method of any one of (1) to (8), wherein the real environment is an urban environment, and the geospatial model is a model of the urban environment.
- (10) The method of any one of (1) to (9), wherein the real environment is an indoor environment, and the geospatial model is a model of the indoor environment.
- (11) The method of any one of (1) to (10), wherein the client device includes a navigation system for at least one of an autonomous ground vehicle and a semiautonomous ground vehicle.
- (12) The method of any one of (1) to (11), wherein the client device is a navigational system for a piloted aircraft and the visibility event packets include representations of obstacles hazardous to aviation.
- (13) The method of any one of (1) to (12), further including: receiving one or more visibility event packets when a request for the one or more visibility event packets is authenticated by the server.
- (14) A method of visibility event navigation, including one or more visibility event packets located at a server, including information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: prefetching, via processing circuitry of the server, a first visibility event packet of the one or more visibility event packets to a client device; receiving, via the circuitry, at least one position of the client device in the real environment at the server; and transmitting, via the circuitry, a second visibility event packet of the one or more visibility event packets to the client device when the at least one position is within the navigational route.
- (15) The method of (14), further including: transmitting, via the circuitry, at least one alternate visibility event packet of the one or more visibility event packets from the server to the client, the at least one alternate visibility event packet representing 3D surface elements of the geospatial model that are occluded from a third viewcell but not occluded from a fourth viewcell, the third and fourth viewcells representing spatial regions of an alternate navigational route within a real environment modeled by the geospatial model, the alternate navigational route leading to a safe landing zone.
- (16) The method of either (14) or (15), wherein the client device is a navigational system for at least one of an autonomous aircraft and a semiautonomous aircraft.
- (17) The method of any one of (14) to (16), further including: receiving, via the circuitry, at least one object position in the real environment of a moving object at the server from the client device.
- (18) The method of any one of (14) to (17), further including: transmitting, via the circuitry, a command for navigation to occur along the alternate navigational route from the server to the client device, when the at least one position is not within the navigational route.
- (19) The method of any one of (14) to (18), wherein the real environment is an urban environment, and the geospatial model is a model of the urban environment.
- (20) The method of any one of (14) to (19), wherein the real environment is an indoor environment, and the geospatial model is a model of the indoor environment.
- (21) A method of visibility event navigation prefetch, including at least one partial visibility event packet including a subset of a complete visibility event packet, the complete visibility event packet including information representing 3D surface elements of a geospatial model occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: receiving, via processing circuitry of a server, information at the server from a client device representing an orientation of a sensor located at the client device, the sensor acquiring information representing the visible surfaces of the real environment; and transmitting, via the circuitry, the at least one partial visibility event packet from the server to the client device, wherein the at least one partial visibility event packet intersects a maximal view frustum including a volume of space intersected by the view frustum of the sensor during movement of the client device in the second viewcell.
- (22) A method of visibility event navigation, including at least one partial visibility event packet located at a server, the at least one partial visibility event packet including a subset of a complete visibility event packet, the complete visibility event packet including visibility event packet information representing 3D surface elements of a geospatial model occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: transmitting, via processing circuitry of a client device, surface information from the client device to the server corresponding to the orientation of a sensor located at the client device, the surface information representing visible surfaces of the real environment; and receiving, via the circuitry, the at least one partial visibility event packet at the client device from the server including a subset of the visibility event packet information that intersects a maximal view frustum, wherein the maximal view frustum includes a volume of space intersected by a view frustum of the sensor during movement of the client device in the second viewcell.
- (23) A method of visibility event navigation, including a first visibility event packet of one or more visibility event packets from a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: detecting, via processing circuitry of a first client device of a plurality of client devices, surface information representing visible surfaces of the real environment at a sensor in communication with the first client device of the plurality of client devices; calculating, via the circuitry, at least one position of the first client device of the plurality of client devices in the real environment by matching the surface information to the visibility event packet information; transmitting, via the circuitry, the at least one position from the first client device of the plurality of client devices to the server; receiving, via the circuitry, at least one second visibility event packet of the one or more visibility event packets at the first client device of the plurality of client devices from the server when the at least one position is within the navigational route; detecting, via the circuitry, position information representing the position of at least one second client device of the plurality of client devices in the real environment at the sensor; and transmitting, via the circuitry, the position information from the first client device of the plurality of client devices to the server.
- (24) A method of visibility event navigation prefetch, including a first visibility event packet of one or more visibility event packets, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: receiving, via processing circuitry of a server, at least one position of a client device in the real environment at the server from the client device; and transmitting, via the circuitry, a second visibility event packet of the one or more visibility event packets when the at least one position of the client device is within the navigational route and a fee has been paid by an operator of the client device.
- (25) A method of visibility event navigation, including one or more visibility event packets located at a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, including: receiving, via processing circuitry of a client device, at least one visibility event packet of the one or more visibility event packets from the server; detecting, via processing circuitry of the client device, surface information representing one or more visible surfaces of the real environment at a sensor in communication with the client device; calculating, via the circuitry, at least one position of the client device in the real environment by matching the surface information to the visibility event packet information corresponding to the first visibility event packet of the one or more visibility event packets; transmitting, via the circuitry, the at least one position in the real environment from the client device to the server; receiving, via the circuitry, at least one second visibility event packet of the one or more visibility event packets at the client device from the server; calculating, via the circuitry, at least one deviation of the ground-truth 3D structure from the corresponding environment modeled by the geospatial model using the surface information and the visibility event packet information; and transmitting, via the circuitry, the at least one deviation from the client device to the server.
- (26) A visibility event navigation system, including: a server; at least one client device located in a real environment and in communication with the server, the at least one client device including processing circuitry configured to: detect surface information representing one or more visible surfaces of the real environment at one or more sensors in communication with the at least one client device, calculate at least one position of the at least one client device in the real environment by matching the surface information to visibility event packet information including a first visibility event packet of one or more visibility event packets representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within the real environment and modeled by the geospatial model, transmit the at least one position of the client device to the server, and receive a second visibility event packet of the one or more visibility event packets from the server when the at least one position is within the navigational route.
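Clauses (24) and (26) above describe, in claim prose, a server that streams the next visibility event packet only when the client's reported position lies within the navigational route (and, in (24), when a fee has been paid by the operator). As a rough illustrative sketch only — not an implementation from the patent, with viewcells simplified to hypothetical axis-aligned boxes:

```python
def should_transmit_next_packet(position, route_viewcells, fee_paid=True):
    """Server-side prefetch gate (sketch of claim 24): send the next
    visibility event packet only when the reported client position lies
    inside some viewcell of the navigational route and the operator's
    fee has been paid.

    position        -- (x, y, z) reported by the client device
    route_viewcells -- list of ((xmin, ymin, zmin), (xmax, ymax, zmax)) boxes
    fee_paid        -- whether the operator has paid for streaming
    """
    return fee_paid and any(
        all(lo[i] <= position[i] <= hi[i] for i in range(3))
        for lo, hi in route_viewcells
    )
```

A real server would use the actual viewcell geometry of the route and would prefetch packets for viewcells ahead of the client's current position to hide transmission latency.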
Claims (1)
1. A method of visibility event navigation, including one or more visibility event packets located at a server, the one or more visibility event packets including visibility event packet information representing 3D surface elements of a geospatial model that are occluded from a first viewcell and not occluded from a second viewcell, the first and second viewcells representing spatial regions of a navigational route within a real environment modeled by the geospatial model, comprising:
receiving, via processing circuitry of a client device, at least one visibility event packet of the one or more visibility event packets from the server;
detecting, via the circuitry, surface information representing one or more visible surfaces of the real environment at a sensor in communication with the client device;
calculating, via the circuitry, at least one position of the client device in the real environment by matching the surface information to the visibility event packet information corresponding to a first visibility event packet of the one or more visibility event packets;
transmitting, via the circuitry, the at least one position from the client device to the server; and
receiving, via the circuitry, at least one second visibility event packet of the one or more visibility event packets at the client device from the server when the at least one position is within the navigational route.
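The client-side flow recited in claim 1 and clauses (23) and (25) — localize by matching sensed surfaces to the packet's model surfaces, report the position, and report deviations of the real environment from the model — can be caricatured as follows. This is a toy sketch under strong simplifying assumptions (centroid alignment instead of real surface registration; none of these function names come from the patent):

```python
def localize(sensed_surfaces, packet_surfaces):
    """Toy localization by matching: assuming sensed points are in the
    sensor frame and the packet's surface points are in world coordinates,
    the offset between their centroids is a crude estimate of the client
    position. A real system would use ICP or feature-based registration."""
    n, m = len(sensed_surfaces), len(packet_surfaces)
    s = [sum(p[i] for p in sensed_surfaces) / n for i in range(3)]
    w = [sum(p[i] for p in packet_surfaces) / m for i in range(3)]
    return tuple(w[i] - s[i] for i in range(3))

def in_route(position, route_viewcells):
    """True when the position lies inside any viewcell of the route
    (viewcells simplified here to axis-aligned boxes)."""
    return any(
        all(lo[i] <= position[i] <= hi[i] for i in range(3))
        for lo, hi in route_viewcells
    )

def model_deviations(sensed_points, model_points, threshold=0.5):
    """Sketch of clause (25): sensed points farther than `threshold` from
    every model surface point are reported to the server as deviations of
    the real environment from the geospatial model."""
    def dist(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3)) ** 0.5
    return [
        (p, min(dist(p, m) for m in model_points))
        for p in sensed_points
        if min(dist(p, m) for m in model_points) > threshold
    ]
```

In the claimed loop, the client would call `localize` against the current packet's surface data, transmit the result, and receive the next packet whenever `in_route` holds for the reported position.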
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/793,959 US20200273354A1 (en) | 2010-06-30 | 2020-02-18 | Visibility event navigation method and system |
Applications Claiming Priority (18)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US36028310P | 2010-06-30 | 2010-06-30 | |
US38205610P | 2010-09-13 | 2010-09-13 | |
US38428410P | 2010-09-19 | 2010-09-19 | |
US201161452330P | 2011-03-14 | 2011-03-14 | |
US201161474491P | 2011-04-12 | 2011-04-12 | |
US201161476819P | 2011-04-19 | 2011-04-19 | |
PCT/US2011/042309 WO2012012161A2 (en) | 2010-06-30 | 2011-06-29 | System and method of from-region visibility determination and delta-pvs based content streaming using conservative linearized umbral event surfaces |
PCT/US2011/051403 WO2012037129A2 (en) | 2010-09-13 | 2011-09-13 | System and method of delivering and controlling streaming interactive media comprising predetermined packets of geometric, texture, lighting and other data which are rendered on a receiving device |
US13/420,436 US20120229445A1 (en) | 2010-06-30 | 2012-03-14 | System and method of reducing transmission bandwidth required for visibility-event streaming of interactive and non-interactive content |
US201313807824A | 2013-03-20 | 2013-03-20 | |
US14/082,997 US9620469B2 (en) | 2013-11-18 | 2013-11-18 | Mechanisms for forming post-passivation interconnect structure |
US201562110774P | 2015-02-02 | 2015-02-02 | |
US201562116604P | 2015-02-16 | 2015-02-16 | |
US15/013,784 US9892546B2 (en) | 2010-06-30 | 2016-02-02 | Pursuit path camera model method and system |
US15/044,956 US9916763B2 (en) | 2010-06-30 | 2016-02-16 | Visibility event navigation method and system |
US15/918,872 US20180268724A1 (en) | 2010-06-30 | 2018-03-12 | Visibility event navigation method and system |
US16/262,502 US20190236964A1 (en) | 2010-06-30 | 2019-01-30 | Visibility event navigation method and system |
US16/793,959 US20200273354A1 (en) | 2010-06-30 | 2020-02-18 | Visibility event navigation method and system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/262,502 Continuation US20190236964A1 (en) | 2010-06-30 | 2019-01-30 | Visibility event navigation method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200273354A1 true US20200273354A1 (en) | 2020-08-27 |
Family
ID=56094800
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/044,956 Active 2032-01-09 US9916763B2 (en) | 2010-06-30 | 2016-02-16 | Visibility event navigation method and system |
US15/918,872 Abandoned US20180268724A1 (en) | 2010-06-30 | 2018-03-12 | Visibility event navigation method and system |
US16/262,502 Abandoned US20190236964A1 (en) | 2010-06-30 | 2019-01-30 | Visibility event navigation method and system |
US16/793,959 Abandoned US20200273354A1 (en) | 2010-06-30 | 2020-02-18 | Visibility event navigation method and system |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/044,956 Active 2032-01-09 US9916763B2 (en) | 2010-06-30 | 2016-02-16 | Visibility event navigation method and system |
US15/918,872 Abandoned US20180268724A1 (en) | 2010-06-30 | 2018-03-12 | Visibility event navigation method and system |
US16/262,502 Abandoned US20190236964A1 (en) | 2010-06-30 | 2019-01-30 | Visibility event navigation method and system |
Country Status (1)
Country | Link |
---|---|
US (4) | US9916763B2 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3985645A1 (en) * | 2020-10-19 | 2022-04-20 | Aurora Flight Sciences Corporation, a subsidiary of The Boeing Company | Landing zone evaluation |
US12072204B2 (en) | 2020-10-19 | 2024-08-27 | The Boeing Company | Landing zone evaluation |
US12100203B2 (en) | 2020-10-19 | 2024-09-24 | The Boeing Company | Above-horizon target tracking |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9479392B2 (en) * | 2015-01-08 | 2016-10-25 | Intel Corporation | Personal communication drone |
US10868740B2 (en) * | 2015-01-28 | 2020-12-15 | Timo Eränkö | Systems for feed-back communication in real-time in a telecommunication network |
US10818084B2 (en) * | 2015-04-07 | 2020-10-27 | Geopogo, Inc. | Dynamically customized three dimensional geospatial visualization |
US9969495B2 (en) | 2016-04-29 | 2018-05-15 | United Parcel Service Of America, Inc. | Unmanned aerial vehicle pick-up and delivery systems |
US10730626B2 (en) | 2016-04-29 | 2020-08-04 | United Parcel Service Of America, Inc. | Methods of photo matching and photo confirmation for parcel pickup and delivery |
US9786027B1 (en) | 2016-06-16 | 2017-10-10 | Waygate, Inc. | Predictive bi-adaptive streaming of real-time interactive computer graphics content |
US10560680B2 (en) * | 2017-01-28 | 2020-02-11 | Microsoft Technology Licensing, Llc | Virtual reality with interactive streaming video and likelihood-based foveation |
US10775792B2 (en) | 2017-06-13 | 2020-09-15 | United Parcel Service Of America, Inc. | Autonomously delivering items to corresponding delivery locations proximate a delivery route |
US11508247B2 (en) * | 2017-07-27 | 2022-11-22 | Honeywell International Inc. | Lidar-based aircraft collision avoidance system |
US10684372B2 (en) * | 2017-10-03 | 2020-06-16 | Uatc, Llc | Systems, devices, and methods for autonomous vehicle localization |
US11687869B2 (en) * | 2018-02-22 | 2023-06-27 | Flytrex Aviation Ltd. | System and method for securing delivery using an autonomous vehicle |
US10650482B1 (en) | 2018-11-09 | 2020-05-12 | Adobe Inc. | Parallel rendering engine |
US11580687B2 (en) | 2018-12-04 | 2023-02-14 | Ottopia Technologies Ltd. | Transferring data from autonomous vehicles |
CN111369779B (en) * | 2018-12-26 | 2021-09-03 | 北京图森智途科技有限公司 | Accurate parking method, equipment and system for truck in shore crane area |
US10823562B1 (en) | 2019-01-10 | 2020-11-03 | State Farm Mutual Automobile Insurance Company | Systems and methods for enhanced base map generation |
US11312379B2 (en) * | 2019-02-15 | 2022-04-26 | Rockwell Collins, Inc. | Occupancy map synchronization in multi-vehicle networks |
US11624820B2 (en) | 2019-04-15 | 2023-04-11 | Eagle Technology, Llc | RF PNT system with embedded messaging and related methods |
US11694557B2 (en) * | 2019-09-16 | 2023-07-04 | Joby Aero, Inc. | Integrating air and ground data collection for improved drone operation |
CN110795797B (en) * | 2019-09-26 | 2021-06-18 | 北京航空航天大学 | MBD model processing feature recognition and information extraction method |
CN110929639B (en) * | 2019-11-20 | 2023-09-19 | 北京百度网讯科技有限公司 | Method, apparatus, device and medium for determining the position of an obstacle in an image |
US11787564B2 (en) * | 2020-04-06 | 2023-10-17 | Workhorse Group Inc. | Carriage lock mechanism for an unmanned aerial vehicle |
CN111736136B (en) * | 2020-06-23 | 2023-01-06 | 自然资源部四川测绘产品质量监督检验站(四川省测绘产品质量监督检验站) | Airborne laser point cloud aerial photography vulnerability detection method and system |
US11440679B2 (en) * | 2020-10-27 | 2022-09-13 | Cowden Technologies, Inc. | Drone docking station and docking module |
Family Cites Families (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1993000650A1 (en) | 1991-06-28 | 1993-01-07 | Hong Lip Lim | Improvements in visibility calculations for 3d computer graphics |
US5553206A (en) | 1993-02-12 | 1996-09-03 | International Business Machines Corporation | Method and system for producing mesh representations of objects |
AU3718497A (en) | 1996-06-28 | 1998-01-21 | Resolution Technologies, Inc. | Fly-through computer aided design method and apparatus |
US5886702A (en) | 1996-10-16 | 1999-03-23 | Real-Time Geometry Corporation | System and method for computer modeling of 3D objects or surfaces by mesh constructions having optimal quality characteristics and dynamic resolution capabilities |
US6057847A (en) | 1996-12-20 | 2000-05-02 | Jenkins; Barry | System and method of image generation and encoding using primitive reprojection |
US6111582A (en) | 1996-12-20 | 2000-08-29 | Jenkins; Barry L. | System and method of image generation and encoding using primitive reprojection |
US6259452B1 (en) | 1997-04-14 | 2001-07-10 | Massachusetts Institute Of Technology | Image drawing system and method with real-time occlusion culling |
US6028608A (en) | 1997-05-09 | 2000-02-22 | Jenkins; Barry | System and method of perception-based image generation and encoding |
US6377229B1 (en) | 1998-04-20 | 2002-04-23 | Dimensional Media Associates, Inc. | Multi-planar volumetric display system and method of operation using three-dimensional anti-aliasing |
US6636633B2 (en) | 1999-05-03 | 2003-10-21 | Intel Corporation | Rendering of photorealistic computer graphics images |
KR100313706B1 (en) | 1999-09-29 | 2001-11-26 | 윤종용 | Redistributed Wafer Level Chip Size Package And Method For Manufacturing The Same |
US6511901B1 (en) | 1999-11-05 | 2003-01-28 | Atmel Corporation | Metal redistribution layer having solderable pads and wire bondable pads |
WO2001063561A1 (en) | 2000-02-25 | 2001-08-30 | The Research Foundation Of State University Of New York | Apparatus and method for volume processing and rendering |
TW578244B (en) | 2002-03-01 | 2004-03-01 | Advanced Semiconductor Eng | Underball metallurgy layer and chip structure having bump |
JP4031306B2 (en) | 2002-07-12 | 2008-01-09 | 日本放送協会 | 3D information detection system |
US6933946B1 (en) | 2003-05-07 | 2005-08-23 | At&T Corp. | Method for out-of core rendering of large 3D models |
TWI229930B (en) | 2003-06-09 | 2005-03-21 | Advanced Semiconductor Eng | Chip structure |
JP4510817B2 (en) | 2003-06-11 | 2010-07-28 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | User control of 3D volume space crop |
US7276801B2 (en) | 2003-09-22 | 2007-10-02 | Intel Corporation | Designs and methods for conductive bumps |
US7154500B2 (en) | 2004-04-20 | 2006-12-26 | The Chinese University Of Hong Kong | Block-based fragment filtration with feasible multi-GPU acceleration for real-time volume rendering on conventional personal computer |
CN100349467C (en) | 2004-05-13 | 2007-11-14 | 三洋电机株式会社 | Method and apparatus for processing three-dimensional images |
JP4196889B2 (en) | 2004-06-30 | 2008-12-17 | 日本電気株式会社 | Image display device and portable terminal device |
US7586490B2 (en) | 2004-10-20 | 2009-09-08 | Siemens Aktiengesellschaft | Systems and methods for three-dimensional sketching |
CN101138084B (en) | 2004-10-29 | 2010-06-02 | 弗利普芯片国际有限公司 | Semiconductor device package with bump overlying a polymer layer |
TWI286454B (en) | 2005-03-09 | 2007-09-01 | Phoenix Prec Technology Corp | Electrical connector structure of circuit board and method for fabricating the same |
TWI273667B (en) | 2005-08-30 | 2007-02-11 | Via Tech Inc | Chip package and bump connecting structure thereof |
US8180182B2 (en) | 2006-05-11 | 2012-05-15 | Panasonic Corporation | Processing device for processing plurality of polygon meshes, the device including plurality of processors for performing coordinate transformation and gradient calculations and an allocation unit to allocate each polygon to a respective processor |
US20100060640A1 (en) | 2008-06-25 | 2010-03-11 | Memco, Inc. | Interactive atmosphere - active environmental rendering |
DE102006043894B3 (en) | 2006-09-19 | 2007-10-04 | Siemens Ag | Multi-dimensional compressed graphical data recalling and graphically visualizing method, involves forming volume areas at examination point in location variant detailed gradient as volume units of partitioned, viewer-distant volume areas |
WO2008073798A2 (en) | 2006-12-08 | 2008-06-19 | Mental Images Gmbh | Computer graphics shadow volumes using hierarchical occlusion culling |
CN100514369C (en) | 2007-10-17 | 2009-07-15 | 北京航空航天大学 | Non-homogeneous space partition based scene visibility cutting method |
US8629871B2 (en) | 2007-12-06 | 2014-01-14 | Zynga Inc. | Systems and methods for rendering three-dimensional objects |
GB0808023D0 (en) | 2008-05-02 | 2008-06-11 | British Telecomm | Graphical data processing |
US8058726B1 (en) | 2008-05-07 | 2011-11-15 | Amkor Technology, Inc. | Semiconductor device having redistribution layer |
CN101630667A (en) | 2008-07-15 | 2010-01-20 | 中芯国际集成电路制造(上海)有限公司 | Method and system for forming conductive bump with copper interconnections |
US8395051B2 (en) | 2008-12-23 | 2013-03-12 | Intel Corporation | Doping of lead-free solder alloys and structures formed thereby |
CN107093203A (en) * | 2010-06-30 | 2017-08-25 | 巴里·林恩·詹金斯 | Method and system for controlling the navigation-based prefetch transmission or reception of graphical information |
US10109103B2 (en) * | 2010-06-30 | 2018-10-23 | Barry L. Jenkins | Method of determining occluded ingress and egress routes using nav-cell to nav-cell visibility pre-computation |
US9171396B2 (en) * | 2010-06-30 | 2015-10-27 | Primal Space Systems Inc. | System and method of procedural visibility for interactive and broadcast streaming of entertainment, advertising, and tactical 3D graphical information using a visibility event codec |
US9385076B2 (en) | 2011-12-07 | 2016-07-05 | Taiwan Semiconductor Manufacturing Company, Ltd. | Semiconductor device with bump structure on an interconnect structure |
US20130193570A1 (en) | 2012-02-01 | 2013-08-01 | Chipbond Technology Corporation | Bumping process and structure thereof |
- 2016-02-16 US US15/044,956 patent/US9916763B2/en active Active
- 2018-03-12 US US15/918,872 patent/US20180268724A1/en not_active Abandoned
- 2019-01-30 US US16/262,502 patent/US20190236964A1/en not_active Abandoned
- 2020-02-18 US US16/793,959 patent/US20200273354A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US20160163205A1 (en) | 2016-06-09 |
US20190236964A1 (en) | 2019-08-01 |
US9916763B2 (en) | 2018-03-13 |
US20180268724A1 (en) | 2018-09-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200273354A1 (en) | Visibility event navigation method and system | |
US8892348B2 (en) | Method and system for aircraft conflict detection and resolution | |
US20220309920A1 (en) | Controlling vehicle-infrastructure cooperated autonomous driving | |
CN105931497B (en) | Navigation on-air collision detection method, device and all purpose aircraft | |
US9523986B1 (en) | System and method for secure, privacy-aware and contextualised package delivery using autonomous vehicles | |
US10209363B2 (en) | Implementing a restricted-operation region for unmanned vehicles | |
US10678266B2 (en) | Method and system for continued navigation of unmanned aerial vehicles beyond restricted airspace boundaries | |
US11960028B2 (en) | Determining specular reflectivity characteristics using LiDAR | |
US20130085629A1 (en) | Hardware-Based Weight And Range Limitation System, Apparatus And Method | |
US10488426B2 (en) | System for determining speed and related mapping information for a speed detector | |
US10937324B2 (en) | Orchestration in heterogeneous drone swarms | |
WO2020139488A1 (en) | Companion drone to assist location determination | |
CN114179832A (en) | Lane changing method for autonomous vehicle | |
US11797024B2 (en) | Methods and systems for configuring vehicle communications | |
US20220169283A1 (en) | Method and system for determining a vehicle trajectory through a blind spot | |
KR102679953B1 (en) | Drone hijacking method and system using gnss spoofing signal generation technology | |
EP4006680A1 (en) | Systems and methods for controlling a robotic vehicle | |
CN114394111A (en) | Lane changing method for autonomous vehicle | |
EP4047583A2 (en) | Method and apparatus for controlling vehicle-infrastructure cooperated autonomous driving, electronic device, and vehicle | |
US11650072B2 (en) | Portable lane departure detection | |
US20190212452A1 (en) | Online lidar intensity normalization | |
WO2020062395A1 (en) | Information processing method, aircrafts, system and storage medium | |
CN115952670A (en) | Automatic driving scene simulation method and device | |
CN115107803A (en) | Vehicle control method, device, equipment, vehicle and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |