US20190205310A1 - 3d analytics actionable solutions support system and apparatus - Google Patents


Info

Publication number
US20190205310A1
Authority
US
United States
Prior art keywords
data
data objects
implemented method
processor
computer implemented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/299,737
Inventor
Tharmalingam Satkunarajah
Kalayini Sathasivam
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US16/299,737
Publication of US20190205310A1
Current legal status: Abandoned

Classifications

    • G06F16/26: Visual data mining; browsing structured data
    • G06F16/248: Presentation of query results
    • G06F16/258: Data format conversion from or to a database
    • G06F16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G06F16/29: Geographical information databases
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • G06F18/2155: Generating training patterns characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G06K9/6259
    • G06T2210/04: Architectural design, interior design
    • G06T2210/56: Particle system, point-based geometry or rendering

Definitions

  • the present invention describes an apparatus and method for integrating multi-sensor, multi-temporal, multi-spatial, multi-format data from multiple sensors or data stores in a real-time, engineering-grade, location-based analysis and predictive analytic 3D data stack, and visualizing that data in real-time in response to user inquiries.
  • a system and method that provides improved access, conditioning, integration and visualization of geospatial and other actionable information, utilizing the same to provide answers to user queries regarding the location of various infrastructure and the optimal positioning of actions within a defined space.
  • a system and method that provides real-time visualizations combining data from multiple sources to present a cohesive analysis of the infrastructure and information relating to a specific location, and that serves the operational and business needs of industries such as transportation, water, environmental, engineering, telecommunications, finance, energy, natural resources, defense, insurance, retail, city planning, utilities, and security.
  • the present invention is directed to a collection of networked apparatus, or a method, for improving the use of incompatible multivariate, multi-sensor, multi-temporal, multi-spatial, multi-format spatial and non-spatial data obtained from one or more sensor devices by accessing and transforming the data into compatible formats within the memory of a computer and generating a 3D visualization thereof configured to provide answers to user queries and predictive analytics.
  • the method comprises using a computer, properly configured, to select a location of interest such as a particular area bound by geospatial data using a geospatial query generator.
  • the query returns a data object that represents a 3D stack of information relating to the particular location.
  • the 3D stack is constructed by accessing a plurality of data objects obtained from at least one of a plurality of external data sets or active sensor devices using an input module configured as code executing in the processor, wherein the data is relevant to the geospatial data of the inquiry.
  • each data object obtained from the plurality of external data sets or sensors is evaluated for proper format type using a format check module configured as code executing in the processor.
  • the format check module is configured to check the format of the data object against a format array of pre-set object format types, where each element of the array contains reference to a compatible format type and the module further configures the processor to identify data objects with an incompatible format type.
  • the processor is configured to store each data object having an incompatible format as an element in a conversion array.
  • each data object having an incompatible format type is converted into a compatible format type by iterating over each element in the conversion array, identifying a conversion factor for converting the data object to an approved format type, and applying the conversion factor to obtain a converted data object.
  • These converted data objects are linked to one another and function as a 3D data stack for a given location.
  • the resulting 3D data stack is transmitted to a computing device that generates a three-dimensional visualization of the 3D data stack and allows the user to view and inspect the data represented by the 3D data stack either remotely or at the location corresponding to the query.
  • the computing device is, in one implementation, a Virtual Reality and/or Augmented Reality hardware and software system that utilizes the 3D data stack to generate immersive environments to analyze and evaluate the user's queries. Any data obtained or input into the computing device is then used to update the 3D data stack in real-time.
  • FIG. 1 is an overview block diagram detailing the arrangement of elements of the system described herein in accordance with one embodiment of the invention.
  • FIG. 2 is a flow diagram detailing the steps of an embodiment of the method as described herein.
  • FIG. 3 is a block diagram of an example system in accordance with an embodiment of the present invention.
  • FIG. 4 is a flow diagram detailing the additional steps of an embodiment of the method applied as described herein.
  • FIG. 5 is a flow diagram detailing the particular steps of an embodiment of the system as described herein.
  • the present invention concerns a system and method for accessing, transforming and visualizing spatial and non-spatial data related to a geographic location and providing such transformations and visualizations to a remote computing device, such as a smart phone, virtual reality (VR) interface, augmented reality (AR) interface device, or autonomous or semiautonomous device.
  • the present system and method are directed to running queries in a data object database for a geographic location and receiving a customized data package that combines available geospatial data, contextual data, metadata and predictive data that provides a custom solution to the user query.
  • a data stack, when implemented in a 3D environment, is used to provide actionable information to entities in the transportation, water, environmental, engineering, telecommunication, finance, energy, natural resources, defense, insurance, retail, city planning, utilities (e.g. gas, oil, electric), and security industries.
  • In FIG. 1 , a block diagram of the overall system 100 is provided.
  • the external databases store current geospatial data in a variety of formats, e.g. raster, vector, point, contextual, or dynamic/sensor data.
  • the databases have a connection to the present geospatial analytic system 104 .
  • the databases 102 are SQL, NoSQL, flat, relational, object or other commonly used databases types and schema.
  • each of the databases 102 are remote to the analytic system 104 and connections between the external databases 102 and the analytic system are accomplished by network connections (shown as red arrows).
  • the external databases 102 are configured to contain accessible data relating to specific geographic locations, including data feeds or streams obtained from direct and remote sensing platforms.
  • the data in one embodiment, is stored in one or more proprietary vendor formats.
  • one or more of the external databases 102 stores data obtained from ultra, high, medium and low resolution or accuracy sensor devices.
  • the sensor devices might use optical, laser, radar, thermal, sonar/acoustic, seismic, bathymetric, and geological sensors owned or operated by private companies, government agencies or other organizations.
  • these sensors are space-based, airborne-based, ship-based, vehicle-based, hand-held, or permanent terrestrial installations that provide periodic, single-use, or continuous feeds and streams of data relating to physical conditions and properties under observation and analysis.
  • the data stored in the external databases 102 and accessed by the analytic system 104 are geospatial data files or data objects.
  • the external databases 102 also contain archival records, customer, survey, municipal, zoning, geologic, environmental and other data collected over time by various governmental, scientific, or commercial entities.
  • the data and associated metadata obtained from sensors is stored in SQL format databases in the form of spreadsheet, tabular, textual, HTML/XML or other file formats.
  • the geospatial analytic system 104 is configured to access and transform data obtained from the external databases 102 .
  • the analytic system 104 is a computer equipped with one or more processors (as shown in FIG. 3 ), RAM and ROM memory, network interface adaptors and one or more input or output devices.
  • the analytic system 104 is a computer server or collection of computer servers, each server configured to store, access, process, distribute or transmit data between one another and other computers or devices accessible or connectable therewith.
  • the analytic system 104 is a hosted server, virtual machine, or other collection of software modules or programs that are interrelated and hosted in a remotely accessible storage device (e.g. a cloud storage and hosting implementation) that allows for dynamically allocated additional processors, hardware or other resources on an “as-needed” or elastic basis.
  • elastic load balancing algorithms are utilized to ensure that sufficient back-end capacity is present to enable the system to handle multiple concurrent connections and requests.
  • a model database 108 such as a NoSQL database, is connected to the analytic system 104 and is used to store data output from the processing of input data from the geospatial databases 102 .
  • the model database 108 is a SQL, relational, flat, object or other configuration database.
  • the model database stores model data objects (MDO) that represent a collection of data elements corresponding to a particular geographic location, structure or entity.
  • the MDO contains links to other MDOs in close proximity to the location in question. In this way queries that request information within a radius or given distance from a location can also be utilized and accessed.
  • the NoSQL database 108 uses an Object Based intelligence (OBI) architecture such that a data object representing a tangible or intangible item (e.g. person, place, thing) exists only in a single place across time or at an instant in time.
  • the NoSQL database is, in one configuration, implemented with BIM (Building Information Modeling) architecture.
  • BIM architecture allows for the MDOs to be associated with additional information or features. For instance, detailed design, building analysis, documentation, fabrication, construction 4D/5D, construction logistics, operation and maintenance, demolition, renovation, programming and conceptual design data is included in the MDO.
  • a MDO for a particular address contains information about the subterranean infrastructure present at the address as well as other data relating to the same. All other MDOs relating to a particular geographic location (such as specific MDOs detailing zoning regulations or traffic patterns at that location) are collected and transformed by the geospatial analysis system 104 into a 3D data stack that provides real-time and historical data to a user regarding the geospatial and infrastructure features present at a specific location, based on the data from the databases 102 .
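  • For illustration, the linked-object behavior described above can be sketched as follows. All field names (`object_id`, `linked_ids`, `layer`) and the in-memory store are illustrative assumptions, not the patent's actual schema; a real deployment would use the NoSQL model database 108.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a model data object (MDO): a record keyed to a
# geographic location that links to related MDOs (zoning, traffic,
# subsurface data) so a query for one address can gather the whole stack.
@dataclass
class ModelDataObject:
    object_id: str
    location: tuple                              # (latitude, longitude)
    layer: str                                   # e.g. "zoning", "subsurface"
    payload: dict = field(default_factory=dict)
    linked_ids: list = field(default_factory=list)  # related/nearby MDOs

def collect_stack(root_id, store):
    """Gather an MDO and everything it links to into one 3D data stack."""
    seen, stack, pending = set(), [], [root_id]
    while pending:
        oid = pending.pop()
        if oid in seen or oid not in store:
            continue
        seen.add(oid)
        mdo = store[oid]
        stack.append(mdo)
        pending.extend(mdo.linked_ids)  # follow links to related objects
    return stack

store = {
    "addr-1": ModelDataObject("addr-1", (40.71, -74.00), "building",
                              linked_ids=["zone-1", "pipe-7"]),
    "zone-1": ModelDataObject("zone-1", (40.71, -74.00), "zoning"),
    "pipe-7": ModelDataObject("pipe-7", (40.71, -74.00), "subsurface"),
}
stack = collect_stack("addr-1", store)
```

  • Because each linked MDO is itself a single object across time (per the OBI architecture), walking the links yields every layer of data associated with the queried address.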
  • the real-time and historical data collected into the 3D data stack is provided to a user through a user interface device 106 .
  • the user interface device 106 is a desktop computer.
  • the user interface device 106 is a mobile computing device, such as a smart phone or tablet computer using an Apple® or Android® operating system and/or hardware.
  • the mobile computing device is an augmented reality (AR) interface.
  • AR devices function by overlaying data from the analytic system 104 onto the field of vision or view window integrated into the output device 106 .
  • the input device is a virtual reality device.
  • Virtual reality (VR) devices are immersive technology that projects images, video and data into a sphere encircling the user.
  • VR, AR and mobile technology encompasses sufficient processors, software, firmware, audio-visual devices, user interfaces, geospatial locators and anatomical tracking technology to implement, construct or display a virtual version of a real or imaginary location and identify the location of the user therein.
  • FIG. 2 details particular work-flows in accordance with aspects of the invention.
  • the steps shown in FIG. 2 can be carried out by code executing within the memory of the processor 105 , as may be organized into one or more modules, or can comprise firmware or hard-wired circuitry as shown in FIG. 3 .
  • the code referenced in FIG. 3 is described in the form of modules that are executed within a processor 105 of the analytic system 104 and which are each organized to configure the processor 105 to perform specific functions.
  • the block diagram of FIG. 3 provides exemplary descriptions of the modules that cooperate with a memory and processor 105 of the analytic system 104 and cooperate to implement the steps outlined in FIG. 2 .
  • any processor of the analytic system can comprise a plurality of cores or discrete processors, each with a respective memory, which collectively implement the functionality described below, together with associated communication of data therebetween.
  • the geospatial data transformation is initiated and implemented by at least one query module 310 which comprises code executing in the processor 105 to access and search the records in the model document database 108 according to step 210 .
  • the query generated according to step 210 is a given set of coordinates or other location identifiers e.g. place name, survey plot, or beacon serial number.
  • the query generated is contextual.
  • additional data e.g. coordinate location of the user is also generated and supplied as part of the query.
  • additional query types such as semantic, spatial, contextual, remote sensing, situational or temporal queries are envisioned.
  • a semantic query might entail encoding in search parameters a request for the location and history of all underground utilities within a 75 foot radius of a given address along with design plans and any updated records in the last two years for a particular utility provider.
  • queries can be voice input, text input or contextual using images or video of a specific location.
  • additional modules are used to enable voice-to-text conversion and image recognition. For instance, natural language processing interfaces and speech recognition applications are deployed to parse the input and pass it to the remaining modules.
  • the user's requests or inputs are used as queries to generate a data return.
  • the queries contain or include specific coordinates, geographic markers or references corresponding to an entry or collection of entries stored within the NoSQL database 108 .
  • the model document database 108 is a geospatial “global map” as per FIGS. 2 and 5 .
  • all data (vector, raster, imagery, text, video) is either natively geo-referenced based on relevant source data and formats, or is tagged based on a location identifier (e.g. global localization, zip code, latitude and longitude coordinates, etc.) of the origin of the data or the query.
  • Queries that do not have location-based parameters are, in particular embodiments, defaulted to the query's origin location with default parameters.
  • the model document database 108 implements a “many to many” relationship which allows for targeted spatial data (e.g. the data stack) by default or inference.
  • the location search can be based on a point (a discrete location or the user's location via LBS) or a user-defined area (via GUI, contextual, or text/string-based input).
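  • A point-plus-radius search of the kind described above (e.g. "all underground utilities within a 75-foot radius of a given address") can be sketched as below. The in-memory record list stands in for the model database's spatial index; the record fields and coordinates are illustrative assumptions.

```python
import math

def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def radius_query(records, center, radius_m):
    """Return every record whose location lies within radius_m of center."""
    return [r for r in records
            if haversine_m(r["location"], center) <= radius_m]

records = [
    {"id": "util-1", "location": (40.71280, -74.00600)},  # at the address
    {"id": "util-2", "location": (40.71285, -74.00605)},  # ~7 m away
    {"id": "util-3", "location": (40.73000, -74.05000)},  # ~km away
]
# 75 feet is roughly 23 meters.
hits = radius_query(records, (40.71280, -74.00600), 23)
```

  • A production system would delegate this to the database's geospatial query support rather than scanning records, but the semantics (point, radius, returned data objects) are the same.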
  • the query generated in step 210 is used to search the model database 108 as in step 220 .
  • a database search module 220 is used to query or search the model database 108 for data relating to the query.
  • the model database 108 utilizes a building information modeling (BIM) architecture to store a MDO.
  • a query of a specific building address will result in the searching of the model database 108 for a collection of model data objects (combined as a 3D data stack) that represents all of the data corresponding to that building or location.
  • Municipal, infrastructure and other data corresponding to a real-world location is sent for transformation by a data platform module 306 .
  • the BIM model architecture contains data that allows multiple MDOs to be queried such that an integrated city landscape can be generated from a collection of MDOs representing geographic proximate locations. For example, a number of buildings on either side of a street are each represented by MDOs.
  • the BIM architecture allows for the MDOs to be queried as a group and supplied as a composite 3D stack detailing a particular above ground and subsurface urban landscape.
  • the external search module 308 comprises code that configures one or more processors to access and search the remote databases accessible by the analytic system 104 .
  • the external database search module 308 queries municipal, zoning, planning, waste management and utility databases for information relating to the location identified in the query.
  • the data obtained from the external databases is passed first through an application or software interface (API).
  • the external databases are connected via a secure authorized socket and the database search module configures the processor to implement the suitable communication protocol.
  • the processor is configured to implement a structured connect routine to access the data models and define the translation and relationships schema for the data.
  • the database search module 308 configures the processor to create an indexing table within the local or remote memory location during the connection/ingest process to enable “real time” searches.
  • a data transformation module 310 comprises code that configures the processor to convert the data found in the external databases into model data formats using proprietary or open source algorithms configured to convert file types and transform data types while preserving the fidelity of the underlying content and data.
  • the model data object is stored in the model database and is associated with, linked to, or incorporates sub-objects or properties that describe the semantic relation of the given object to other data.
  • properties include accuracy values and attributes of the object model, including the scale and class of data as well as inheritance and data lineage.
  • the data model object has, in particular embodiments, attributes detailing the history, temporal or dynamic nature of the data, such as time stamps, changes over time, or durations.
  • the model data object has attributes in a particular configuration addressing interoperability of the data, such as spatial attributes and SPARQL information and data.
  • the model data object includes sub-attributes and data relating to cartographic, topographic and area relevant information to update and expand the contextual and semantic attributes of the object model.
  • a user initiated query is parsed using query parse module 410 .
  • the parsed query is used to search the plurality of external databases or sensors 102 .
  • the results of this query are received by an input module 408 of the analytic system and evaluated for proper format type using a format check module 402 configured as code executing in the processor of the analysis system.
  • the format check module 402 is configured to check the format of the data object against a format array of pre-set object format types, where each element of the array contains a reference to a compatible format type, and the module further configures the processor to identify data objects with an incompatible format type.
  • the processor is configured to store each data object having an incompatible format as an element in a conversion array.
  • each data object having an incompatible format type is converted into a compatible format type by iterating over each element in the conversion array, identifying a conversion factor, such as one stored within a conversion software development kit 406 , for converting the data object stored in that element to an approved format type in the format array, and applying the conversion factor to the element in the conversion array to obtain a converted data object. The converted data objects are linked to one another and function as a 3D data stack for a given location.
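  • The format-check and conversion-array flow can be sketched as follows. The format names and the trivial converter functions are illustrative stand-ins; actual conversion would use the GDAL-based or proprietary converters discussed below.

```python
# The "format array" of pre-set compatible object format types.
COMPATIBLE_FORMATS = ["geojson", "gml", "json"]

# The "conversion factors": one converter per incompatible format.
CONVERTERS = {
    "shp": lambda obj: {**obj, "format": "geojson"},
    "kml": lambda obj: {**obj, "format": "gml"},
}

def build_stack(data_objects):
    stack, conversion_array = [], []
    # Pass 1: check each data object against the format array; objects
    # with an incompatible format go into the conversion array.
    for obj in data_objects:
        if obj["format"] in COMPATIBLE_FORMATS:
            stack.append(obj)
        else:
            conversion_array.append(obj)
    # Pass 2: iterate over the conversion array, look up a conversion
    # factor for each element, and apply it to obtain a converted object.
    for obj in conversion_array:
        converter = CONVERTERS.get(obj["format"])
        if converter is None:
            raise ValueError(f"no conversion factor for {obj['format']!r}")
        stack.append(converter(obj))
    # Link the converted objects into one 3D data stack for the location.
    return {"location": data_objects[0]["location"], "layers": stack}

stack = build_stack([
    {"location": "addr-1", "format": "geojson", "layer": "zoning"},
    {"location": "addr-1", "format": "shp", "layer": "utilities"},
])
```

  • Keeping incompatible objects in a separate conversion array, as the patent describes, lets the expensive conversion pass run only over the objects that need it.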
  • the open source tools include the GDAL (Geospatial Data Abstraction Library) Tools which are released under the Open Source License issued by the Open Source Geospatial Foundation.
  • Such open source tools can include, but are not limited to, tools for conversion/manipulation of raster data formats and vector data formats, including geospatial industry standard formats.
  • geospatial projections and geodetic libraries are available through Proj4 public libraries for base geospatial information, definitions and translations.
  • the converted or transformed data is stored to the model database 108 for further use, as in step 245 .
  • the vector data (AutoCAD: .dwg, .dxf, .rvt, .3ds, .ifc; Bentley: .dgn; ArchiCAD: .3ds, .obj, .ifc, .wrl; SketchUp: .u3d, .obj, .ifc; Google: .kml, .kmz; ESRI: .shp, .sde; plus GeoRSS and GeoJSON) file formatted data can be converted using the conversion module.
  • Raster data such as .tiff, .img, .jpg, and .png format data can be converted as well.
  • Elevation data can also be converted from such formats as .las, DTED, ASCII, LSS XSE, .xtf, and .jsf (bathymetry).
  • Data obtained from dynamic data sensors (e.g. .wav, .mp3/.mp4, .avi, .xml, .mov, .html, .3gp, .json) can likewise be converted.
  • binary data such as .pdf, .xls, .doc, .txt and .dbf can be input and converted using the conversion modules as described herein.
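  • A first step in handling the format families listed above is routing each input file to the right conversion submodule by extension. The mapping below is an illustrative subset, not an exhaustive or authoritative list, and it performs routing only, not actual conversion.

```python
from pathlib import Path

# Illustrative mapping from file extension to the data categories listed
# above (vector, raster, elevation, dynamic-sensor, binary).
CATEGORY_BY_EXT = {
    ".dwg": "vector", ".dxf": "vector", ".dgn": "vector", ".shp": "vector",
    ".kml": "vector", ".kmz": "vector", ".geojson": "vector",
    ".tiff": "raster", ".img": "raster", ".jpg": "raster", ".png": "raster",
    ".las": "elevation", ".xtf": "elevation", ".jsf": "elevation",
    ".wav": "dynamic", ".mp4": "dynamic", ".avi": "dynamic", ".mov": "dynamic",
    ".pdf": "binary", ".xls": "binary", ".doc": "binary", ".dbf": "binary",
}

def route(filename):
    """Pick which conversion submodule should handle an uploaded file."""
    ext = Path(filename).suffix.lower()
    return CATEGORY_BY_EXT.get(ext, "unsupported")
```

  • Files routed to "unsupported" would be rejected at upload time or handed to a fallback converter, mirroring the format check performed on uploaded data described below.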
  • a user may enter this data into the system and convert the data into a model data object.
  • a user interface for data input is provided.
  • this user input interface has additional functions beyond data input.
  • the data input user interface is a standalone component of the system.
  • the user interface for data input allows for the uploading or transfer of data or files to the system. Uploaded data is checked for appropriate data formats. If the uploaded data is not in a compatible format, then the conversion module 406 or another suitable module or submodule is used to convert the data into a compatible format using open source or proprietary conversion modules.
  • the data stack platform is a software and/or hardware appliance configured to take the data object model as an input and construct a virtualized representation of the data.
  • the data stack transformation module 310 is configured to receive the model data object and generate a 3D virtualization of the data suitable for use with 3D configured displays.
  • the data stack transformation module 310 parses the data included in the data module, or the data linked to the data module, and generates visual representations or identifiers of the specific features, elements or characteristics of a given location.
  • the transformation module uses or parses data in the MDOs into Geography Markup Language (GML) or other mapping formats useful for generating visualizations.
  • the 3D virtualization includes parsing the data model to determine the path of buried utility infrastructure on a building site. This information is projected into a 3D virtual space along with information on building plots and zoning envelopes, subsurface structures, above-surface structures and other characteristics of the location. Additionally, in implementations where the data model contains temporal information, one or more of the visualized features can be represented in time series such that animations showing the development of a feature or condition over time can be demonstrated in the visualization. In one embodiment, WebGL or similar and successor APIs are used for 3D visualization. Along with the visualization, tables, graphs, reports and lists of metadata can be generated and provided to the user or stored in the database.
  • a game engine or module configured as code executed in the processor 105 is used to visualize, access and manipulate data representing some portion or the entire 3-dimensional data stack.
  • the game engine configures the processor to render the data stack as a 3-dimensional environment.
  • the 3D stack is stored, in a particular configuration, as language independent JSON format information with specific address with reference to real geographic coordinates.
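  • A language-independent JSON serialization of a 3D stack, addressed by real geographic coordinates as described above, might look like the following. The key names and layer contents are illustrative assumptions about the stored shape, not the patent's actual schema.

```python
import json

# Hypothetical 3D data stack for one address, serialized as JSON so any
# client (game engine, VR/AR device, web viewer) can consume it.
stack = {
    "address": "123 Example St",
    "coordinates": {"lat": 40.7128, "lon": -74.0060},
    "layers": [
        {"layer": "zoning", "format": "geojson", "data": {}},
        {"layer": "subsurface_utilities", "format": "gml", "data": {}},
        {"layer": "traffic", "format": "json",
         "time_series": ["2018-01", "2018-02"]},  # temporal slices
    ],
}

encoded = json.dumps(stack)    # store in the model database / transmit
decoded = json.loads(encoded)  # any language can read it back
```

  • Storing the stack keyed to real coordinates means the same record serves both remote inspection and on-site AR overlay at the corresponding location.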
  • the 3D data stack is used as an input to a predictive engine as in step 295 .
  • the prediction engine is configured as a module 330 via code associated therewith and executing in the processor 105 of the analytic system 104 .
  • the prediction engine module is remote to the analytic system 104 and hosted as accessible software.
  • the predictive engine module 330 comprises code that configures the processor 105 to analyze the 3D stack for a location in response to a user query regarding a potential event. For example, the prediction engine module configures the processor 105 to analyze the 3D stack and indicate portions of the location that are prone to flooding, or that are anticipated to be prone to flooding in the event of a major weather event.
  • the prediction engine is configured to estimate or predict the effect of road closures on traffic, evacuation routes, police response time or logistical and delivery options in response to such weather events.
  • the prediction engine is configured as a neural network that takes historical data and provides probabilities of future outcomes based on an analysis of prior data.
  • the prediction module incorporates cognitive science applications, such as support vector analysis, fuzzy logic, expert systems, neural networks, intelligent agents or other supervised or unsupervised machine learning algorithms utilized to extract data from the model database 108 or the external database(s) 102 to obtain historical information and generate predictions thereof.
  • the predictive engine provides a user of the 3D stack with a suggested list of measures to be taken to reduce congestion based on a set of rules and algorithms.
  • the predictive module is configured to compare multi-temporal and multi-spatial aspects of the data stored by the system to integrate queries and predictive analytics to model complex systems and variables which are then presented to a user in 3D/4D (time slices). Such data is then used to model and display solutions based on user defined criteria.
  • This time-based analysis can, in one arrangement, be used to assist law enforcement or government agencies in conducting situational and threat assessment utilizing geospatial data.
  • the AI system encoded in the prediction module 330 is also configured to generate options or actions in response to a real or hypothetical/simulated event. For instance, in the event of an extreme weather event, the predictive module is configured to generate solutions that would provide alternative evacuation routes, traffic signal control modification to expedite traffic, efficient routing plans for EMS/Fire/police officials, food and shelter logistics and predicated economic and infrastructure damage.
  • When used in infrastructure planning, the predictive module provides information regarding housing and impact assessment data, environmental maps, geologic and engineering route analysis, vegetation route analysis, location and co-location of industrial and commercial clients, and physical plant and line security information.
  • the predictive module uses machine learning to optimize solutions given the variables and goals of the users. This information would enable the generation of new data and information that can be used to update the database and be available to other users.
  • the predictive module is also used to data mine the database(s) 108 or 102 to determine relationships and outcomes of variables to interpret new data and query results.
  • the AI system encoded in a predictive model implements machine learning to generate predictive analysis and information.
  • machine learning is an evolution from pattern recognition and computational learning theory in artificial intelligence.
  • machine learning represents the analysis and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions, rather than following strictly static program instructions.
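As a minimal, hedged illustration of the principle just described (all data and function names here are hypothetical and not part of the claimed system), the following builds a model from example inputs and then makes a data-driven prediction, rather than following strictly static program instructions:

```python
# Illustrative only: a tiny supervised learner that "builds a model from
# example inputs" -- ordinary least squares fit to (x, y) pairs -- and then
# makes data-driven predictions on new inputs.

def fit_linear(xs, ys):
    """Return (slope, intercept) minimizing squared error over the examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# Hypothetical historical observations (e.g. rainfall in cm -> flooded segments).
model = fit_linear([1, 2, 3, 4], [3, 5, 7, 9])
print(predict(model, 5))  # extrapolates beyond the training examples
```

The model here is trivially linear; the patent's prediction engine contemplates richer learners (neural networks, support vector analysis, fuzzy logic), but the structure — examples in, fitted model out, predictions from the model — is the same.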
  • the predictive models configure the processor to evaluate inputs in different formats, make comparisons between formats, check the data on a timely basis, and extrapolate and predict future events and circumstances and provide solutions thereto.
  • Machine learning is closely related to computational statistics, a discipline that aims at the design of algorithms for implementing statistical methods on computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field.
  • Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is infeasible.
  • Example applications include weather prediction, optical character recognition (OCR), search engines and computer vision.
  • Machine learning and pattern recognition can be viewed as two facets of the same field.
  • machine learning methods may be referred to as predictive analytics or predictive modelling.
  • the predictive model utilizing augmented artificial intelligence, configures the processor to implement one or more algorithms to utilize virtual machine learning to generate predictions and alerts based on analyzed large data sets.
  • the predictive module implements different types of machine learning depending on the nature of the learning “signal” or “feedback” available to the 3D visualization system 100 .
  • the predictive module is configured to use supervised learning methods and implementations. For instance, the predictive module configures the processor to evaluate example data inputs and their desired outputs, and generate a general rule that maps inputs to outputs. For instance, the processor is fed data and a goal is set for the engine to solve traffic congestion at a particular location.
  • the inputs are fed manually, or obtained from sensors and/or a computer vision system utilizing digital image processing.
  • the predictive module then evaluates the input data and the desired output state and generates a solution that is predicted to result in the desired outcome of reduced congestion.
  • the predictive module utilizes unsupervised learning implementations. Under this system, no labels are given to the learning algorithm employed by the processor, thus the processor is configured to generate structure from the input. Using such an unsupervised learning approach results in discovering hidden patterns in data which might not be apparent from a manual analysis.
  • a user can generate a 3D stack relating to a particular transit infrastructure such as a bus or train. The user, desiring to navigate to the particular bus that will have the shortest commute time to her desired location, utilizes the unsupervised learning features of the predictive module to take into account changes in routes, time and other factors due to inclement weather, accidents or other events that might cause delay to one or more transit options.
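The unsupervised case described above — structure discovered from unlabeled inputs — can be sketched as follows. This is an illustrative toy (a 1-D k-means over hypothetical commute times, in minutes), not the system's actual algorithm:

```python
# Hedged sketch of unsupervised learning: no labels are given, and the
# algorithm discovers hidden groupings directly from the inputs.

def kmeans_1d(points, k, iters=20):
    centers = sorted(points)[:k]  # naive init: first k sorted values
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            groups[i].append(p)
        # recompute each center as the mean of its group (keep old if empty)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical commute times for several transit options on a given day.
centers, groups = kmeans_1d([12, 14, 13, 41, 44, 40], k=2)
print(sorted(round(c) for c in centers))  # [13, 42] -- two latent groups
```

Two clusters emerge without any labeling: a fast group and a slow (delayed) group, which is the kind of hidden pattern the module would surface to the commuter in the example above.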
  • the predictive module uses reinforcement learning features implemented as a sub-module of the predictive module.
  • the processor is configured to interact with a dynamic environment in which a certain goal (e.g. driving a vehicle) is performed without a user manually providing instructions about the vehicle's proximity to the desired destination.
  • Reinforcement learning can be considered semi-supervised learning, where the sub-module configures the processor to receive an incomplete training signal, such as a training set with some, or often many, of the target outputs missing.
  • transduction is a special case of this principle where the entire set of problem instances is known at learning time, except that part of the targets are missing.
  • machine learning solutions are also implemented by the processor configured to execute the submodules of the predictive module.
  • developmental learning submodules generate sequences or curricula of learning situations to cumulatively acquire repertoires of novel skills through autonomous self-exploration and social interaction with a human interface.
  • the submodules incorporate other guidance mechanisms, such as active learning, prediction etc.
  • classification of data is implemented as a supervised learning routine.
  • regression is also implemented as a supervised learning problem. In regression, the outputs are continuous rather than discrete.
  • in clustering, a set of inputs is divided into groups.
  • the submodule uses dimensionality reduction algorithms to simplify inputs by mapping high-dimensional data into a lower-dimensional space.
  • the predictive model configures the processor to implement a topic modeling strategy, such as through a topic modeling sub-module to evaluate a list of human language documents and determine or generate relationships between the documents. Using such a topic modeling submodule the system described extracts useful information relating documents from different places with different language, or formats.
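A minimal sketch of the document-relating step (illustrative only; the sample documents are hypothetical) reduces each document to a bag-of-words vector and compares vectors by cosine similarity, a common building block underneath topic-modeling pipelines:

```python
# Hedged sketch: relate documents from different sources/formats by reducing
# each to a term-frequency vector and scoring pairs by cosine similarity.
import math
from collections import Counter

def similarity(doc_a, doc_b):
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical document snippets from different agencies.
flood = "flood risk map for downtown drainage infrastructure"
drain = "drainage infrastructure survey downtown"
zoning = "residential zoning variance hearing schedule"
print(similarity(flood, drain) > similarity(flood, zoning))  # True
```

A full topic model (e.g. LDA) would go further and infer shared latent topics, but the core usefulness — surfacing that two documents from different places concern the same subject — is visible even in this reduction.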
  • the 3D data stack generated in step 250 utilizing the 3D data stack transformation module 310 is transmitted or otherwise communicated to a data and visualization output device 106 .
  • data and visualization output device 106 is a mobile computing device configured through code executing in a processor thereof to receive the 3D data stack and generate a visualization for an end user as shown in step 260 .
  • the mobile computing device 106 is a smart phone or tablet computer with a display device, coordinate or location devices and a network interface. According to this implementation, the mobile computing device receives the 3D data stack as a wireless data communication from the analysis system 104 .
  • the mobile computing device is configured through software modules, such as the data stack transmission module 312 utilized by the analytic system or the transmission module 314 utilized by the display device, to retrieve a 3D stack, or cause a 3D stack to be transmitted, from a remote storage device or service such as a cloud hosting device or service.
  • the mobile device 106 is configured to permit the user to manipulate and view the 3D data stack in real-time in order to evaluate the results of the query as in step 270 .
  • the mobile computing device is equipped with navigational and location aids such as GPS transceivers, altimeters and digital compasses. Utilizing such equipped devices allows the user to align the 3D data stack with the user's orientation at a specific location such that when the device is moved, the portion of the 3D stack displayed by the user device 106 changes accordingly.
  • a processor of the mobile computing device 106 is configured, through the display module 314 , to represent changes in the field of view displayed to a user in response to the movement of the device.
  • the movements of the mobile computing device itself, or of the user and the mobile device together, will cause the view or portion of the 3D stack to change in relation to the orientation, angle and elevation of the mobile device.
  • the mobile device 106 is a VR display device.
  • the user is immersed in a full scale visualization of the 3D data stack.
  • the mobile computing device is an AR device that provides the 3D stack as a data overlay on a user's field of vision but allows the user to maintain real time observations of the location in question.
  • the user can access tools and functions that allow the 3D data stack to be updated or modified.
  • user action such as the placement of a beacon or annotating a location with additional metadata is recorded and added to the 3D data stack currently under visualization.
  • This updated information is transmitted to the analysis system 104 where it can be processed and used to update the copy of the 3D stack residing in the model database 108 .
  • the update module 318 configures the processor of the mobile device 106 to transmit this information to the analysis system 104 where data stack update module 320 stores the updated information in the model database 108 .
  • the 3D data stack is used by an autonomous or semi-autonomous device in order to path find, analyze or spot check data provided in the 3D data stack.
  • an airborne, submersible or subsurface autonomous device (e.g. a drone) is configured to take readings or measurements of infrastructure and update the 3D data stack with metadata relating to the current condition or state of the infrastructure.
  • the autonomous or semi-autonomous device is used in search and rescue, fire prevention and mitigation, disaster mitigation and relief operations.
  • the autonomous devices can, in specific embodiments, utilize the predictive module 330 of the system to provide real-time learning systems to execute tasks such as path finding and visual identification.
  • an autonomous vehicle traveling through a geographic area is configured to implement a data update feature whereby the data received by the vehicle is used to update the accuracy of the model data objects in near real time.
  • the system described provides a platform for network enabled vehicle communication and/or autonomous robotic vehicle with multi sensor interface to act upon dynamic changes occurring in environment and update that information back to the database. For example, location data and other information recorded or measured by the autonomous robotic vehicle is transmitted through the document model database and is distributed to other linked or connected autonomous and non-autonomous vehicles to avoid congestion. Based on the information from all the mobile platforms, as well as sensors and other data feeds, the predictive module configures the processor of the system to send a predicted route change to a vehicle to avoid congestion in traffic, or avoid other navigational hazards.
  • the mobile computing device 106 is configured to use a check sub-module (for data consistency and validity) of the data stack update module 318 to review a specific set of data in the model database 108 .
  • the check submodule configures the processor to analyze and validate in “real-time” any changes made to the 3D data stack, such as by annotating metadata or updated sensor measurements.
  • the check module configures a processor to initiate a flag and update procedure to the specific 3D data stack being utilized by the user if any parameter changes are recognized.
  • the analytic system is configured to transmit an update to some or all of the 3D data stack being used by the user.
  • the 3D stack stored in the model database 108 is updated, and the updated status changes are made in real or near real time to any users that are currently accessing or using a 3D stack corresponding to that location.
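The check-and-flag behavior described in the preceding items can be sketched as follows. This is a hedged illustration only — the rule, field names and status strings are hypothetical, not part of the claimed apparatus:

```python
# Hedged sketch of the check sub-module: validate an incoming 3D-stack change
# for consistency before applying it, and flag the stack for a push update
# whenever a watched parameter actually changes.

RULES = {"pothole_depth_cm": lambda v: 0 <= v <= 100}  # hypothetical validity rule

def apply_update(stack, change):
    key, value = change["key"], change["value"]
    if key in RULES and not RULES[key](value):
        return stack, "rejected"                  # fails the consistency check
    flagged = stack.get(key) != value             # did a parameter change?
    new_stack = {**stack, key: value}
    return new_stack, ("flag_and_update" if flagged else "no_change")

stack = {"pothole_depth_cm": 3}
stack, status = apply_update(stack, {"key": "pothole_depth_cm", "value": 8})
print(status)  # valid change to a watched parameter triggers flag-and-update
```

On "flag_and_update", the real system would push the revised stack to the model database 108 and to any users currently viewing that location, as the surrounding text describes.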
  • Data from autonomous vehicle/sensors can constantly update the MDOs in the model database 108 to provide improved spatial accuracy for real-time events that can be remotely analyzed by a user.
  • sensors indicating the creation of a new pothole would provide data to the 3D data stack of the location in question such that a remote user could evaluate the size, depth and potential impact of such a change in road surfaces might have on traffic.
  • the apparatus so described is configured to be extensible and interoperable with future designed tool sets configured to access the stored data models (such as through an API) and run data analysis specific to a particular domain or area of interest.
  • the Transportation Department of a government may have an interface with the analytic system 104 that allows for additional information to be utilized in support of traffic signal analysis and impact, accident analysis, diversion and route analysis and dynamic re-routing tools.
  • the analytic system is extensible to accept private or user specific data streams and information to allow the model data to be used and combined with user data for the purposes of crop monitoring, yield prediction, weather impact analysis, drought mitigation and cost predictions.

Abstract

The claimed invention relates to a system and method for generating actionable intelligence and information by utilizing a multi-sensor, multi-temporal, multi-spatial, multi-format data (mSTSFA) architecture stored in a NoSQL data architecture to qualify spatial (accuracy) and contextual information integrated into a real-time, engineering-grade, location-based analysis and predictive analytics engine returning user-based queries in a 3D visualization including Virtual Reality (VR)/Augmented Reality (AR) functionality. The present invention is a systemized platform for handling geospatial, geophysical, financial, temporal and attribute data input directly, to analyze the datasets to serve the operational and business needs of industries such as transportation, water, environmental, engineering, telecommunication, finance, energy, natural resources, defense and security.

Description

    1. CROSS-REFERENCE TO RELATED APPLICATIONS
  • This patent application claims priority under 35 U.S.C. § 120 and is a continuation of U.S. patent application Ser. No. 14/959,433, filed Dec. 4, 2015 and titled “3D Analytics Actionable Solutions Support System and Apparatus,” which claims priority under 35 U.S.C. § 119(e) to U.S. provisional application Ser. No. 62/241,394, filed Oct. 14, 2015, each of which is hereby incorporated by reference in its entirety.
  • 2. INTRODUCTION
  • The present invention describes an apparatus and method for integrating multi-sensor, multi-temporal, multi-spatial, multi-format data from multiple sensors or data stores in a real-time engineering grade location based analysis and predictive analytic 3D data stack and visualizing that data in real-time in response to user inquiries.
  • 3. BACKGROUND
  • Although there are many types of spatial and non-spatial data held by different organizations, agencies, and private companies, the data contained therein is rarely unified or in compatible formats. The disparate nature of the data repositories, formats and structures prevents the maximum utilization of the investment in the data capture, initial analysis and maintenance. There exists a need, therefore, for harmonizing the data in a manner that allows these disparate data stores and historical records to be used in furtherance of development goals and tasks.
  • There are many software and database tools and environments that can access and analyze components or subsets of the data, but a comprehensive geo-spatial based solution configured to read and access multi-format data models and real time data transactions is required to solve the complex multi-dimensional problems that arise from the need for accurate spatial and contextual data to support smart city growth.
  • Therefore, what is needed is a system and method that provides improved access, conditioning, integration and visualization of geospatial and other actionable information, and that utilizes the same to provide answers to user queries regarding the location of various infrastructure and the optimal positioning of actions within a defined space. In particular, what is needed is a system and method that provides real-time visualizations combining data from multiple sources to present a cohesive analysis of the infrastructure and information relating to a specific location, and that serves the operational and business needs of industries such as transportation, water, environmental, engineering, telecommunication, finance, energy, natural resources, defense, insurance, retail, city planning, utilities, and security.
  • 4. SUMMARY
  • In accordance with one aspect that can be implemented in one or more embodiments, the present invention is directed to a collection of networked apparatus or a method for improving the use of incompatible multivariate, multi-sensor, multi-temporal, multi-spatial, multi-format spatial and non-spatial data obtained from one or more sensor devices by accessing and transforming the data into compatible formats within the memory of a computer and generating a 3D visualization thereof configured to provide answers to user queries and predictive analytics. The method comprises using a computer, properly configured, to select a location of interest, such as a particular area bound by geospatial data, using a geospatial query generator. The query returns a data object that represents a 3D stack of information relating to the particular location. In one arrangement, the 3D stack is constructed by accessing a plurality of data objects obtained from at least one of a plurality of external data sets or active sensor devices using an input module configured as code executing in the processor, wherein the data is relevant to the geospatial data of the inquiry.
  • More particularly, prior to generating the 3D data stack, each data object obtained from the plurality of external data sets or sensors is evaluated for proper format type using a format check module configured as code executing in the processor. The format check module is configured to check the format of the data object against a format array of pre-set object format types, where each element of the array contains reference to a compatible format type and the module further configures the processor to identify data objects with an incompatible format type. The processor is configured to store each data object having an incompatible format as an element in a conversion array.
  • Using a conversion module configured as code executing in the processor, each data object having an incompatible format type is converted into a compatible format type by iterating over each element in the conversion array, identifying a conversion factor for converting the data object to an approved format type, and applying the conversion factor to obtain a converted data object. These converted data objects are linked to one another and function as a 3D data stack for a given location.
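The format-check and conversion flow described above can be sketched as follows. All names and format types here are hypothetical stand-ins — the claims do not specify concrete formats or converter implementations:

```python
# Hedged sketch: incoming data objects are checked against an array of
# approved format types; incompatible objects are collected in a conversion
# array and converted by looking up a conversion routine for each element.

APPROVED_FORMATS = ["geojson", "gml"]           # pre-set object format types
CONVERTERS = {                                   # hypothetical conversion factors
    "shapefile": lambda obj: {**obj, "format": "geojson"},
    "kml":       lambda obj: {**obj, "format": "gml"},
}

def build_stack(data_objects):
    stack, conversion_array = [], []
    for obj in data_objects:                     # format check module
        target = stack if obj["format"] in APPROVED_FORMATS else conversion_array
        target.append(obj)
    for obj in conversion_array:                 # iterate and convert each element
        stack.append(CONVERTERS[obj["format"]](obj))
    return stack                                 # linked objects form the 3D stack

objs = [{"id": 1, "format": "geojson"}, {"id": 2, "format": "shapefile"}]
stack = build_stack(objs)
print([o["format"] for o in stack])  # every object now in a compatible format
```

The essential property — no object reaches the 3D stack in an incompatible format, and nothing convertible is discarded — is what the check and conversion modules jointly guarantee.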
  • The resulting 3D data stack is transmitted to a computing device that generates a three-dimensional visualization of the 3D data stack and allows the user to view and inspect the data represented by the 3D data stack either remotely or at the location corresponding to the query. The computing device is, in one implementation, a Virtual Reality and/or Augmented Reality hardware and software system that utilizes the 3D data stack to generate immersive environments to analyze and evaluate the user's queries. Any data obtained or input into the computing device is then used to update the 3D data stack in real-time.
  • These and other aspects, features and advantages of the present invention can be further appreciated from the following discussion of certain more particular embodiments thereof.
  • 5. BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing and other features of the present invention will be more readily apparent from the following detailed description and drawings of one or more exemplary embodiments of the invention in which:
  • FIG. 1 is an overview block diagram detailing the arrangement of elements of the system described herein in accordance with one embodiment of the invention.
  • FIG. 2 is a flow diagram detailing the steps of an embodiment of the method as described herein.
  • FIG. 3 is a block diagram of an example system in accordance with an embodiment of the present invention.
  • FIG. 4 is a flow diagram detailing the additional steps of an embodiment of the method applied as described herein.
  • FIG. 5 is a flow diagram detailing the particular steps of an embodiment of the system as described herein.
  • 6. DETAILED DESCRIPTION
  • By way of overview and introduction, the present invention concerns a system and method for accessing, transforming and visualizing spatial and non-spatial data related to a geographic location and providing such transformations and visualizations to a remote computing device, such as a smart phone, virtual reality (VR) interface, augmented reality (AR) interface device, or autonomous or semiautonomous device.
  • Specifically, the present system and method are directed to running queries in a data object database for a geographic location and receiving a customized data package that combines available geospatial data, contextual data, metadata and predictive data that provides a custom solution to the user query. Such a data stack, when implemented in a 3D environment is used to provide actionable information to entities in the Transportation, Water, Environmental, Engineering, Telecommunication, Finance, Energy, Natural Resources, Defense, Insurance, Retail, City planning, Utilities (e.g. Gas, Oil, Electric), and Security industries.
  • 6.1 System Overview
  • Turning to FIG. 1, a block diagram of the overall system 100 is provided. As shown, current geospatial data in a variety of formats (e.g. raster, vector, point, contextual, dynamic/sensor) are stored in a plurality of external databases 102. The databases have a connection to the present geospatial analytic system 104. In one particular configuration, the databases 102 are SQL, NoSQL, flat, relational, object or other commonly used database types and schema. In the illustrated configuration, each of the databases 102 is remote to the analytic system 104 and connections between the external databases 102 and the analytic system are accomplished by network connections (shown as red arrows).
  • The external databases 102 are configured to contain accessible data relating to specific geographic locations, including data feeds or streams obtained from direct and remote sensing platforms. The data, in one embodiment, is stored in one or more proprietary vendor formats. For example, one or more of the external databases 102 stores data obtained from ultra, high, medium and low resolution or accuracy sensor devices. The sensor devices might use optical, laser, radar, thermal, sonar/acoustic, seismic, bathymetric, and geological sensors owned or operated by private companies, government agencies or other organizations. In a particular embodiment, these sensors are space-based, airborne-based, ship-based, vehicle-based, hand-held, or permanent terrestrial installations that provide periodic, single use, or continuous feeds and streams of data relating to physical conditions and properties under observation and analysis. In one particular arrangement, the data stored in the external databases 102 and accessed by the analytic system 104 are geospatial data files or data objects. The external databases 102 also contain archival records, customer, survey, municipal, zoning, geologic, environmental and other data collected over time by various governmental, scientific, or commercial entities. In one embodiment, the data and associated metadata obtained from sensors is stored in SQL format databases in the form of spreadsheets, tabular, textual, html/XML or other files or formats.
  • The geospatial analytic system 104 is configured to access and transform data obtained from the external databases 102. In one arrangement, the analytic system 104 is a computer equipped with one or more processors (as shown in FIG. 3), RAM and ROM memory, network interface adaptors and one or more input or output devices. In a further arrangement, the analytic system 104 is a computer server or collection of computer servers, each server configured to store, access, process, distribute or transmit data between one another and other computers or devices accessible or connectable therewith. In still a further implementation, the analytic system 104 is a hosted server, virtual machine, or other collection of software modules or programs that are interrelated and hosted in a remote accessible storage device (e.g. a cloud storage and hosting implementation) that allows for dynamically allocating additional processors, hardware or other resources on an “as-need” or elastic basis. Furthermore, elastic load balancing algorithms are utilized to ensure that sufficient back-end capacity is present to enable the system to handle multiple concurrent connections and requests.
  • A model database 108, such as a NoSQL database, is connected to the analytic system 104 and is used to store data output from the processing of input data from the geospatial databases 102. In an alternative configuration, the model database 108 is a SQL, relational, flat, object or other configuration database. The model database stores model data objects (MDO) that represent a collection of data elements corresponding to a particular geographic location, structure or entity. However, in further arrangements, the MDO contains links to other MDOs in close proximity to the location in question. In this way queries that request information within a radius or given distance from a location can also be utilized and accessed. The NoSQL database 108 uses an Object Based intelligence (OBI) architecture such that a data object representing a tangible or intangible item (e.g. person, place, thing) exists only in a single place across time or at an instant in time. The NoSQL database is, in one configuration, implemented with BIM (Building Information Modeling) architecture. BIM architecture allows for the MDOs to be associated with additional information or features. For instance, a detailed design, building analysis, documentation, fabrications, construction 4D/5D, construction logistics, operation and maintenance, demolition, renovation, programming and conceptual design data is included in the MDO.
  • In one non-limiting example, a MDO for a particular address contains information about the subterranean infrastructure present at the address as well as other data relating to the same. All other MDOs relating to a particular geographic location (such as specific MDOs detailing zoning regulations at that location or traffic patterns) are collected and transformed by the geospatial analysis system 104 into a 3D data stack of real-time and historical data regarding the geospatial and infrastructure features present at a specific location, based on the data from the databases 102.
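A hypothetical shape for such a model data object, following the object-based-intelligence idea above (one document per real-world entity, BIM-style attribute layers, links to proximate MDOs so radius queries can follow the graph — all field names and values invented for illustration):

```python
# Hedged sketch of an MDO document and a one-hop proximity lookup.
mdo = {
    "_id": "mdo:main-st-100",                       # one place, one object
    "location": {"lat": 40.7128, "lon": -74.0060},
    "layers": {                                      # BIM-style attribute layers
        "zoning": {"district": "C4", "updated": "2015-06-01"},
        "subsurface": {"water_main_depth_m": 2.4},
        "construction_4d": {"phase": "operation_and_maintenance"},
    },
    "linked_mdos": ["mdo:main-st-102", "mdo:main-st-98"],  # proximity links
}

def neighbors(db, mdo_id):
    """Follow proximity links -- one hop of the 'many to many' relationship."""
    return [db[x] for x in db[mdo_id]["linked_mdos"] if x in db]

db = {mdo["_id"]: mdo}                 # toy in-memory stand-in for database 108
print(neighbors(db, "mdo:main-st-100"))  # linked MDOs not yet loaded -> []
```

In the described system the store is a NoSQL document database rather than a Python dict, but the single-object-per-entity invariant and the link-following query pattern are the same.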
  • The real-time and historical data collected into the 3D data stack is provided to a user through a user interface device 106. In one configuration, the user interface device 106 is a desktop computer. Alternatively, the user interface device 106 is a mobile computing device, such as a smart phone or tablet computer using an Apple® or Android® operating system and/or hardware. In a further example, the mobile computing device is an augmented reality (AR) interface. In one implementation, AR devices function by overlaying data from the analytic system 104 onto the field of vision or view window integrated into the output device 106. In yet a further implementation the input device is a virtual reality device. Virtual reality (VR) devices are immersion technology that projects images, video and data into a sphere encircling the user. Such technology employs motion tracking and other technologies to track a person's movement so as to provide the sensation of total immersion within a projected stage or area. Those possessing the requisite level of skill in the art will appreciate that VR, AR and mobile technology encompasses sufficient processors, software, firmware, audio visual devices, user interfaces, geospatial locators and anatomical tracking technology to implement, construct or display a virtual version of a real or imaginary location and identify the location of the user therein.
6.2 Accessing Geospatial Data from External Databases
  • FIG. 2 details particular work-flows in accordance with aspects of the invention. The steps shown in FIG. 2 can be carried out by code executing within the memory of the processor 105, as may be organized into one or more modules, or can comprise firmware or hard-wired circuitry as shown in FIG. 3. For simplicity of discussion, the code referenced in FIG. 3 is described in the form of modules that are executed within a processor 105 of the analytic system 104 and which are each organized to configure the processor 105 to perform specific functions. The block diagram of FIG. 3 provides exemplary descriptions of the modules that cooperate with a memory and processor 105 of the analytic system 104 and cooperate to implement the steps outlined in FIG. 2. Those possessing an ordinary level of skill in the art will appreciate that any processor of the analytic system can comprise a plurality of cores or discrete processors, each with a respective memory, which collectively implement the functionality described below, together with associated communication of data there between.
  • With reference now to FIGS. 2 and 3, the geospatial data transformation is initiated and implemented by at least one query module 310 which comprises code executing in the processor 105 to access and search the records in the model document database 108 according to step 210. In one particular implementation, the query generated according to step 210 is a given set of coordinates or other location identifiers, e.g. place name, survey plot, or beacon serial number. In an alternative arrangement, the query generated is contextual. In this arrangement additional data, e.g. the coordinate location of the user, is also generated and supplied as part of the query. Furthermore, additional query types, such as semantic, spatial, contextual, remote sensing, situational or temporal queries are envisioned. For instance, a semantic query might entail encoding in search parameters a request for the location and history of all underground utilities within a 75 foot radius of a given address along with design plans and any updated records in the last two years for a particular utility provider. In particular embodiments, queries can be voice input, text input or contextual using images or video of a specific location. Depending on the query type, additional modules are used to enable voice-to-text conversion and image recognition. For instance, natural language processing interfaces and speech recognition applications are deployed to parse the input and pass it to the remaining modules.
• In a particular embodiment, the user's requests or inputs are used as queries to generate a data return. In one non-limiting example, the queries contain or include specific coordinates, geographic markers or references corresponding to an entry or collection of entries stored within the NoSQL database 108. In a further embodiment, the model document database 108 is a geospatial “global map” as per FIGS. 2 and 5. In the present embodiment, all data (vector, raster, imagery, text, video) is either natively geo-referenced based on the relevant source data and formats or is tagged based on a location identifier (e.g. global localization, zip code, latitude and longitude coordinates, etc.) of the origin of the data or the query. Queries that do not have location-based parameters are, in particular embodiments, defaulted to the query origin location with default parameters. In a further arrangement, the model document database 108 implements a “many to many” relationship which allows for targeted spatial data (e.g. the data stack) by default or inference. The location search can be based on a point (a discrete location), the user's location (via LBS), or a user-defined area (via GUI, contextual, or text/string-based input).
• The query generated in step 210 is used to search the model database 108 as in step 220. In one implementation, a database search module 220 is used to query or search the model database 108 for data relating to the query. Here, the model database 108 utilizes a building information modeling (BIM) architecture to store a MDO. For example, a query of a specific building address will result in the searching of the model database 108 for a collection of model data objects (combined as a 3D data stack) that represents all of the data corresponding to that building or location. In this way, municipal, infrastructure and other data corresponding to a real-world location is sent for transformation by a data platform module 306. In one embodiment, the BIM model architecture contains data that allows multiple MDOs to be queried such that an integrated city landscape can be generated from a collection of MDOs representing geographically proximate locations. For example, a number of buildings on either side of a street are each represented by MDOs. The BIM architecture allows the MDOs to be queried as a group and supplied as a composite 3D stack detailing a particular above-ground and subsurface urban landscape.
• In particular work-flows where the model database 108 does not contain a specific or generic MDO for the location indicated by the query, a search of the remote databases is conducted as in step 230. According to one non-limiting embodiment of the system described, the external search module 308 comprises code that configures one or more processors to access and search the remote databases accessible by the analytic system 104. For instance, the external database search module 308 queries municipal, zoning, planning, waste management and utility databases for information relating to the location identified in the query. In a further arrangement, the data obtained from the external databases is first passed through an application or software interface (e.g. Safe SW or Blue Marble), such as software development kits or application programming interfaces (APIs) that use real time format conversions, web-forms and ASCII (RMDS) implementations to condition the data prior to handing or passing it off to the other modules described herein. In one embodiment, the external databases are connected via a secure authorized socket and the database search module configures the processor to implement the suitable communication protocol. For example, the processor is configured to implement a structured connect routine to access the data models and define the translation and relationship schema for the data. Furthermore, the database search module 308 configures the processor to create an indexing table within the local or remote memory location during the connection/ingest process to enable “real time” searches.
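The indexing-table idea can be illustrated with a small sketch: as external records are ingested, a location-keyed index is built so that later searches avoid a full scan. The key scheme (rounded latitude/longitude cells) and all record fields are assumptions made for the example.

```python
# Illustrative ingest-time indexing table in the spirit of module 308.
# Records are bucketed by a coarse location key so lookups are O(1)
# against the index rather than a scan of every ingested record.
def build_location_index(records):
    """Map a coarse key (lat/lon rounded to ~1 km cells) to record ids."""
    index = {}
    for rec in records:
        key = (round(rec["lat"], 2), round(rec["lon"], 2))
        index.setdefault(key, []).append(rec["id"])
    return index

def lookup(index, lat, lon):
    """Return the ids of records ingested near the given coordinates."""
    return index.get((round(lat, 2), round(lon, 2)), [])

# Hypothetical records ingested from municipal and utility databases.
records = [
    {"id": "zoning-17", "lat": 40.7128, "lon": -74.0060},
    {"id": "water-main-3", "lat": 40.7131, "lon": -74.0059},
]
idx = build_location_index(records)
```

A production index would be spatially aware (e.g. an R-tree or geohash), but the shape of the idea is the same.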
  • The results of the search of the external databases are then transformed into model data object compatible formats and a model data object is created and stored in the model database as shown in step 240. In one implementation, a data transformation module 310 comprises code that configures the processor to convert the data found in the external databases into model data formats using proprietary or open source algorithms configured to convert file types and transform data types while preserving the fidelity of the underlying content and data.
• Additionally, the model data object is stored in the model database and is associated with, linked to, or incorporates sub-objects or properties that describe the semantic relation of the given object to other data. Such properties include accuracy values and attributes of the object model, including the scale and class of data as well as inheritance and data lineage. Additionally, the data model object has, in particular embodiments, attributes detailing the history, temporal or dynamic nature of the data, such as time stamps, changes over time, or durations. Furthermore, the model data object has, in a particular configuration, attributes addressing interoperability of the data, such as spatial attributes and SPARQL information and data. In further implementations, the model data object includes sub-attributes and data relating to cartographic, topographic and area-relevant information to update and expand the contextual and semantic attributes of the object model.
• With particular reference to FIG. 4, a user-initiated query is parsed using the query parse module 410. Where no data relating to the query is identified in the model database 108, the parsed query is used to search the plurality of external databases or sensors 102. The results of this query are received by an input module 408 of the analytic system and evaluated for proper format type using a format check module 402 configured as code executing in the processor of the analysis system. The format check module 402 is configured to check the format of the data object against a format array of pre-set object format types, where each element of the array contains a reference to a compatible format type, and the module further configures the processor to identify data objects with an incompatible format type. The processor is configured to store each data object having an incompatible format as an element in a conversion array.
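The format-check step described above can be sketched as a split of incoming objects into a compatible list and a conversion array. The format names and object fields below are illustrative assumptions, not the module's actual types.

```python
# Minimal sketch of the format check module (402): compare each incoming
# data object's format against an array of pre-set compatible types and
# queue incompatible objects for later conversion.
COMPATIBLE_FORMATS = [".shp", ".geojson", ".kml", ".ifc"]  # illustrative

def check_formats(data_objects):
    """Split objects into (compatible, conversion_array) by format type."""
    compatible, conversion_array = [], []
    for obj in data_objects:
        if obj["format"] in COMPATIBLE_FORMATS:
            compatible.append(obj)
        else:
            # Incompatible objects become elements of the conversion array.
            conversion_array.append(obj)
    return compatible, conversion_array
```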
• Using a conversion module 406, configured as code executing in the processor, each data object having an incompatible format type is converted into a compatible format type. The processor iterates over each element in the conversion array, identifies a conversion factor, such as one stored within a conversion software development kit 406, for converting the data object stored in that element to an approved format type in the format array, and applies the conversion factor to the element to obtain a converted data object. The converted data objects are linked to one another and function as a 3D data stack for a given location. In one embodiment, the open source tools include the GDAL (Geospatial Data Abstraction Library) Tools, which are released under the Open Source License issued by the Open Source Geospatial Foundation. Such open source tools can include, but are not limited to, tools for conversion/manipulation of raster data formats and vector data formats, including geospatial industry standard formats. Likewise, geospatial projections and geodetic libraries are available through the Proj4 public libraries for base geospatial information, definitions and translations.
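The conversion iteration reads naturally as a loop over the conversion array with a converter lookup per format. In the sketch below, the converter table is a stand-in; a real implementation would invoke GDAL translators or a vendor SDK rather than the placeholder lambdas shown.

```python
# Sketch of the conversion module (406) loop. Each converter here is a
# placeholder that merely rewrites the format tag; in practice this slot
# would hold a call into a real translation tool (e.g. GDAL).
CONVERTERS = {
    ".dgn": lambda obj: {**obj, "format": ".ifc"},  # CAD -> model format
    ".img": lambda obj: {**obj, "format": ".tif"},  # raster translation
}

def convert_all(conversion_array):
    """Apply the matching converter to every element of the array."""
    converted = []
    for obj in conversion_array:
        converter = CONVERTERS.get(obj["format"])
        if converter is None:
            raise ValueError("no converter for " + obj["format"])
        converted.append(converter(obj))
    return converted
```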
  • Upon transformation into a model data compatible format, the converted or transformed data is stored to the model database 108 for further use, as in step 245.
• By way of non-limiting examples, vector data (AutoCAD—.dxg, .dxt, .rvt, .3ds, .ifc; Bentley—.dgn; ArchiCAD—.3ds, .obj, .ifc, .vrl; SketchUp—.u3d, .obj, .ifc; Google—.kml, .kmz; ESRI—.shp, .sde; GEORSS and GEOJSON) file-formatted data can be converted using the conversion module. Raster data such as .tif, .img, .jpg, .png format data can be converted as well. Elevation data can also be converted from such formats as .las, DTED, ASCII, LSS XSE, .xtf, .jsf (bathy). Data obtained from dynamic data sensors (e.g. .wav, MP3/4, .avi, .xml, .mov, .html, .3gp, .json) can also be converted and used by the system described. Additionally, binary data such as .pdf, .xls, .doc, .txt and .dbf can be input and converted using the conversion modules as described herein.
  • In a further embodiment, where new or custom data is available, a user may enter this data into the system and convert the data into a model data object. In this arrangement, a user interface for data input is provided. In one arrangement, this user input interface has additional functions beyond data input. In an alternative arrangement, the data input user interface is a standalone component of the system. The user interface for data input allows for the uploading or transfer of data or files to the system. Uploaded data is checked for appropriate data formats. If the uploaded data is not in a compatible format, then the conversion module 406 or another suitable module or submodule is used to convert the data into a compatible format using open source or proprietary conversion modules.
• Once the model data corresponding to a particular query has been obtained, either directly from the model database or via transformation of external data sets or user input into a model-object-compatible format, the data and associated metadata returned by the query is sent to a data stack platform as in step 250. In one arrangement, the data stack platform is a software and/or hardware appliance configured to take the data object model as an input and construct a virtualized representation of the data. For example, the data stack transformation module 310 is configured to receive the model data object and generate a 3D virtualization of the data suitable for use with 3D-configured displays. The data stack transformation module 310 parses the data included in the data module, or the data linked to the data module, and generates visual representations or identifiers of the specific features, elements or characteristics of a given location. For example, the transformation module uses or parses data in the MDOs into geographical markup language (GML) or another mapping format useful for generating visualizations.
• By way of example, the 3D virtualization includes parsing the data model to determine the path of buried utility infrastructure on a building site. This information is projected into a 3D virtual space along with information on building plots and zoning envelopes, subsurface structures, above-surface structures and other characteristics of the location. Additionally, in implementations where the data model contains temporal information, one or more of the visualized features can be represented in time series such that animations showing the development of a feature or condition over time can be demonstrated in the visualization. In one embodiment, WebGL or similar and successor APIs are used for 3D visualization. Along with the visualization, tables, graphs, reports and lists of metadata can be generated and provided to the user or stored in the database.
• In a specific embodiment, a game engine or module configured as code executed in the processor 105 is used to visualize, access and manipulate data representing some portion of, or the entire, 3-dimensional data stack. For instance, the game engine configures the processor to render the data stack as a 3-dimensional environment. The 3D stack is stored, in a particular configuration, as language-independent JSON format information with a specific address referenced to real geographic coordinates.
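The language-independent JSON storage described above can be illustrated with a small sketch. The layer names, fields and coordinates are assumptions made for the example; the point is only that the stack round-trips losslessly through a portable text format keyed to real coordinates.

```python
import json

# Illustrative shape of a 3D data stack serialized as JSON, anchored to
# real geographic coordinates. All layer names and fields are assumed.
stack = {
    "origin": {"lat": 40.7128, "lon": -74.0060},
    "layers": [
        {"name": "subsurface_utilities", "depth_m": -2.5, "features": []},
        {"name": "building_envelope", "depth_m": 0.0, "features": []},
    ],
}

encoded = json.dumps(stack)    # portable, language-independent text
decoded = json.loads(encoded)  # any consumer (game engine, mobile app)
                               # can reconstruct the identical structure
```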
  • 5.3 Augmented Intelligence Use of the 3D Stack
  • In a further arrangement of the system and method described, the 3D data stack is used as an input to a predictive engine as in step 295. In one implementation, the prediction engine is configured as a module 330 via code associated therewith and executing in the processor 105 of the analytic system 104. However, in an alternative arrangement, the prediction engine module is remote to the analytic system 104 and hosted as accessible software. The predictive engine module 330 comprises code that configures the processor 105 to analyze the 3D stack for a location in response to a user query regarding a potential event. For example, the prediction engine module configures the processor 105 to analyze the 3D stack and indicate portions of the location that are prone to flooding, or that are anticipated to be prone to flooding in the event of a major weather event. Likewise, the prediction engine is configured to estimate or predict the effect of road closures on traffic, evacuation routes, police response time or logistical and delivery options in response to such weather events. The prediction engine is configured as a neural network that takes historical data and provides probabilities of future outcomes based on an analysis of prior data. In an alternative configuration, the prediction module incorporates cognitive science applications, such as support vector analysis, fuzzy logic, expert systems, neural networks, intelligent agents or other supervised or unsupervised machine learning algorithms utilized to extract data from the model database 108 or the external database(s) 102 to obtain historical information and generate predictions thereof.
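The flood-proneness example lends itself to a toy sketch of the prediction step's input/output shape. A production engine would use a trained model such as the neural network the passage describes; the frequency counting below is only a hypothetical stand-in to show historical data going in and per-zone probabilities coming out.

```python
# Toy stand-in for the prediction engine (module 330): estimate flood
# probability per zone of a location from historical event records.
def flood_probability(history):
    """history: list of (zone, flooded: bool) observations.
    Returns {zone: empirical probability of flooding}."""
    totals, floods = {}, {}
    for zone, flooded in history:
        totals[zone] = totals.get(zone, 0) + 1
        floods[zone] = floods.get(zone, 0) + (1 if flooded else 0)
    return {z: floods[z] / totals[z] for z in totals}
```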
  • For example, in the road traffic scenario described above, where municipality sensors monitor the vehicle density and traffic flow, the predictive engine provides a user of the 3D stack with a suggested list of measures to be taken to reduce congestion based on a set of rules and algorithms.
• In another embodiment, the predictive module is configured to compare multi-temporal and multi-spatial aspects of the data stored by the system, integrating queries and predictive analytics to model complex systems and variables which are then presented to a user in 3D/4D (time slices). Such data is then used to model and display solutions based on user-defined criteria. This time-based analysis can, in one arrangement, be used to assist law enforcement or government agencies in conducting situational and threat assessments utilizing geospatial data.
  • In further embodiments, the AI system encoded in the prediction module 330 is also configured to generate options or actions in response to a real or hypothetical/simulated event. For instance, in the event of an extreme weather event, the predictive module is configured to generate solutions that would provide alternative evacuation routes, traffic signal control modification to expedite traffic, efficient routing plans for EMS/Fire/police officials, food and shelter logistics and predicated economic and infrastructure damage.
• When used in infrastructure planning, the predictive module would provide information regarding housing and impact assessment data, environmental maps, geologic and engineering route analysis, vegetation route analysis, location and co-location of industrial and commercial clients, and physical plant and line security information.
  • The predictive module uses machine learning to optimize solutions given the variables and goals of the users. This information would enable the generation of new data and information that can be used to update the database and be available to other users. The predictive module is also used to data mine the database(s) 108 or 102 to determine relationships and outcomes of variables to interpret new data and query results.
• In a further embodiment, the AI system encoded in the predictive module implements machine learning to generate predictive analyses and information. Those skilled in the art will appreciate that machine learning is an evolution from pattern recognition and computational learning theory in artificial intelligence. As used and understood herein, machine learning represents the analysis and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions, rather than following strictly static program instructions.
• In one embodiment of the present visualization system and apparatus, the predictive models configure the processor to evaluate inputs of different formats, make comparisons between formats, check the data on a timely basis, and extrapolate to predict future events and circumstances and provide solutions thereto. Machine learning is closely related to computational statistics, a discipline that aims at the design of algorithms for implementing statistical methods on computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is infeasible. Example applications include weather prediction, optical character recognition (OCR), search engines and computer vision. Machine learning and pattern recognition can be viewed as two facets of the same field. When employed in industrial contexts, machine learning methods may be referred to as predictive analytics or predictive modelling.
• The predictive model, utilizing augmented artificial intelligence, configures the processor to implement one or more algorithms that utilize machine learning to generate predictions and alerts based on the analysis of large data sets. In accordance with the described embodiment, the predictive module implements different types of machine learning depending on the nature of the learning “signal” or “feedback” available to the 3D visualization system 100.
• In a non-limiting example, the predictive module is configured to use supervised learning methods and implementations. For instance, the predictive module configures the processor to evaluate example data inputs and their desired outputs, and generate a general rule that maps inputs to outputs. For instance, the processor is fed data and a goal is set for the engine to solve traffic congestion at a particular location. Here, the inputs are fed manually, obtained from sensors and/or from a computer vision system utilizing digital image processing. The predictive module then evaluates the input data and the desired output state and generates a solution that is predicted to result in the desired outcome of reduced congestion.
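The supervised mapping from inputs to outputs can be sketched with a minimal learner. The feature choice (vehicle density, average speed), labels and nearest-neighbour rule are assumptions for illustration only; they stand in for whatever trained model the system actually deploys.

```python
# Minimal supervised-learning sketch: labeled sensor readings
# (vehicle density, average speed mph) map to a congestion label, and a
# nearest-neighbour rule generalizes the mapping to unseen readings.
def nearest_neighbor(train, x):
    """train: list of (features, label); returns the label of the
    training example closest to x in squared Euclidean distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda pair: dist2(pair[0], x))[1]

# Hypothetical labeled observations for one intersection.
train = [((120, 15), "congested"),
         ((30, 55), "free-flow"),
         ((90, 25), "congested")]
```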
  • In an alternative embodiment, the predictive module utilizes unsupervised learning implementations. Under this system, no labels are given to the learning algorithm employed by the processor, thus the processor is configured to generate structure from the input. Using such an unsupervised learning approach results in discovering hidden patterns in data which might not be apparent from a manual analysis. As a non-limiting example, a user can generate a 3D stack relating to a particular transit infrastructure such as a bus or train. The user, desiring to navigate to the particular bus that will have the shortest commute time to her desired location, utilizes the unsupervised learning features of the predictive module to take into account changes in routes, time and other factors due to inclement weather, accidents or other events that might cause delay to one or more transit options.
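The unsupervised case can be illustrated with a tiny clustering sketch: unlabeled trip times separate into groups (e.g. delayed versus on-time services) with no labels supplied. The 1-D k-means below is a deliberately small stand-in; a real system would use a full clustering library.

```python
# Tiny 1-D k-means to show structure emerging from unlabeled data.
def kmeans_1d(values, centers, iters=10):
    """Cluster scalar values around the given initial centers."""
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            # Assign each value to its nearest current center.
            nearest = min(range(len(centers)),
                          key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Move each center to the mean of its assigned values.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical observed trip times in minutes for one transit corridor.
trip_times = [12, 14, 13, 41, 44, 39]
centers, clusters = kmeans_1d(trip_times, [10.0, 50.0])
```

Here the two groups that emerge would correspond to on-time and delayed runs, without anyone labeling the data.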
• In a further embodiment, the predictive module uses reinforcement learning features implemented as a sub-module of the predictive module. Here the processor is configured to interact with a dynamic environment in which a certain goal (e.g. driving a vehicle) is performed without a user manually providing instructions about the vehicle's proximity to the desired destination. Reinforcement learning can be considered semi-supervised learning, where the sub-module configures the processor to receive an incomplete training signal, such as a training set with some, or often many, of the target outputs missing. For instance, transduction is a special case of this principle where the entire set of problem instances is known at learning time, except that part of the targets are missing.
• Other machine learning solutions are also implemented by the processor configured to execute the submodules of the predictive module. For example, in certain embodiments that utilize a robot or autonomous device, developmental learning submodules generate sequences or curricula of learning situations to cumulatively acquire repertoires of novel skills through autonomous self-exploration and social interaction through a human interface. Additionally, the submodules incorporate other guidance mechanisms, such as active learning, prediction, etc.
• When using machine learning for classification of data, inputs are divided into two or more classes, and the learner must produce a model that assigns unseen inputs to one or more of these classes (the latter being multi-label classification). In one embodiment, classification of data is implemented as a supervised learning routine. In a further machine learning implementation, regression is also implemented as a supervised learning problem. In regression, the outputs are continuous rather than discrete. In clustering, a set of inputs is to be divided into groups.
• In a further embodiment, the submodule uses dimensionality reduction algorithms to simplify inputs by mapping high-dimensional data into a lower-dimensional space. In a non-limiting embodiment, the predictive model configures the processor to implement a topic modeling strategy, such as through a topic modeling sub-module, to evaluate a list of human language documents and determine or generate relationships between the documents. Using such a topic modeling submodule, the system described extracts useful information relating documents from different places with different languages or formats.
  • 5.4 Real-Time Use of the 3D Data Stack
• In one arrangement of the described system and method, the 3D data stack generated in step 250 utilizing the 3D data stack transformation module 310 is transmitted or otherwise communicated to a data and visualization output device 106. In one example of the system described, the data and visualization output device 106 is a mobile computing device configured, through code executing in a processor thereof, to receive the 3D data stack and generate a visualization for an end user as shown in step 260.
• In one non-limiting arrangement, the mobile computing device 106 is a smart phone or tablet computer with a display device, coordinate or location devices and a network interface. According to this implementation, the mobile computing device receives the 3D data stack as a wireless data communication from the analysis system 104. However, in an alternative arrangement, the mobile computing device is configured through software modules, such as the data stack transmission module 312 utilized by the analytic system or the transmission module 314 utilized by the display device, to retrieve, or cause the transmission of, a 3D stack from a remote storage device or service such as a cloud hosting device or service.
• The mobile device 106 is configured to permit the user to manipulate and view the 3D data stack in real-time in order to evaluate the results of the query as in step 270. In one configuration, the mobile computing device is equipped with navigational and location aids such as GPS transceivers, altimeters and digital compasses. Utilizing such equipped devices allows the user to align the 3D data stack with the user's orientation at a specific location such that when the device is moved, the portion of the 3D stack displayed by the user device 106 changes accordingly. For instance, a processor of the mobile computing device 106 is configured, through the display module 314, to represent changes in the field of view displayed to a user in response to the movement of the device. Here, the movements of the mobile computing device itself, or of the user and the mobile device together, will cause the view or portion of the 3D stack to change in relation to the orientation, angle and elevation of the mobile device.
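The orientation-driven view update can be sketched as a selection problem: given the device's compass heading and a field of view, which stack features fall inside the user's current view. The flat-angle model, field-of-view value and feature bearings are simplifying assumptions for the example.

```python
# Illustrative sketch of aligning the displayed slice of a 3D stack with
# device orientation: keep only features whose bearing from the user lies
# within the device's current field of view.
def visible_features(features, heading_deg, fov_deg=60.0):
    """features: list of (name, bearing_deg from the user's position)."""
    half = fov_deg / 2.0
    out = []
    for name, bearing in features:
        # Smallest signed angular difference between bearing and heading,
        # normalized into (-180, 180] so wraparound at north works.
        diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= half:
            out.append(name)
    return out

# Hypothetical subsurface features around the user's position.
feats = [("water_main", 10.0), ("gas_line", 95.0), ("fiber_duct", 350.0)]
```

As the device rotates, recomputing this selection with the new heading produces the view change the passage describes.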
  • 5.5 VR and AR Devices
  • In a particular embodiment, the mobile device 106 is a VR display device. In this configuration, the user is immersed in a full scale visualization of the 3D data stack. By moving the VR display device, the data displayed to the user's field of vision will change depending on body position, head position and viewing angle. In an alternative configuration of the system described, the mobile computing device is an AR device that provides the 3D stack as a data overlay on a user's field of vision but allows the user to maintain real time observations of the location in question.
• During the visualization interaction, either with a mobile device or an altered reality platform (e.g. VR or AR), the user can access tools and functions that allow the 3D data stack to be updated or modified. As shown in step 290, a user action, such as the placement of a beacon or annotating a location with additional metadata, is recorded and added to the 3D data stack currently under visualization. This updated information is transmitted to the analysis system 104 where it can be processed and used to update the copy of the 3D stack residing in the model database 108. The update module 318 configures the processor of the mobile device 106 to transmit this information to the analysis system 104 where the data stack update module 320 stores the updated information in the model database 108.
  • 5.6 Autonomous Devices
  • In a further implementation, the 3D data stack is used by an autonomous or semi-autonomous device in order to path find, analyze or spot check data provided in the 3D data stack. In one non-limiting example, an airborne, submersible or subsurface autonomous device (e.g. drone) utilizes the 3D stack to inspect utility infrastructure with remote sensing devices such as IR scanners, cameras or magnetometers. The autonomous device is configured to take readings or measurements of infrastructure and update the 3D data stack with metadata relating to the current condition or state of the infrastructure. Additionally, the autonomous or semi-autonomous device is used in search and rescue, fire prevention and mitigation, disaster mitigation and relief operations.
  • The autonomous devices can, in specific embodiments, utilize the predictive module 330 of the system to provide real-time learning systems to execute tasks such as path finding and visual identification. As a non-limiting example, an autonomous vehicle traveling through a geographic area is configured to implement a data update feature whereby the data received by the vehicle is used to update the accuracy of the model data objects in near real time.
• In a further embodiment, the system described provides a platform for network-enabled vehicle communication and/or an autonomous robotic vehicle with a multi-sensor interface to act upon dynamic changes occurring in the environment and update that information back to the database. For example, location data and other information recorded or measured by the autonomous robotic vehicle is transmitted through the document model database and is distributed to other linked or connected autonomous and non-autonomous vehicles to avoid congestion. Based on the information from all the mobile platforms, as well as sensors and other data feeds, the predictive module configures the processor of the system to send a predicted route change to a vehicle to avoid congestion in traffic, or to avoid other navigational hazards. In a further arrangement, the mobile computing device 106 is configured to use a check sub-module (for data consistency and validity) of the data stack update module 318 to review a specific set of data in the model database 108. The check submodule configures the processor to analyze and validate in “real-time” any changes made to the 3D data stack, such as by annotating metadata or updating sensor measurements. The check module configures a processor to initiate a flag and update procedure for the specific 3D data stack being utilized by the user if any parameter changes are recognized. Here, the analytic system is configured to transmit an update to some or all of the 3D data stack being used by the user.
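The check sub-module's flag-and-update behaviour amounts to diffing a working copy of the stack against the stored copy and flagging changed parameters. The sketch below illustrates that idea; the field names and flat top-level comparison are assumptions for the example.

```python
# Sketch of the check sub-module: compare a user's working copy of a 3D
# stack's parameters against the stored copy and flag what changed, so
# an update can be pushed to the user in near real time.
def diff_stack(local, stored):
    """Return the set of top-level parameters whose values differ."""
    changed = set()
    for key in set(local) | set(stored):
        if local.get(key) != stored.get(key):
            changed.add(key)
    return changed
```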
• For example, if a sensor on oil and gas infrastructure indicates a change in the safety and continuous operational status, the 3D stack stored in the model database 108 is updated, and the updated status changes are made in real or near real time to any users that are currently accessing or using a 3D stack corresponding to that location. Data from autonomous vehicles/sensors can constantly update the MDOs in the model database 108 to provide improved spatial accuracy for real-time events that can be remotely analyzed by a user. For example, sensors indicating the creation of a new pothole would provide data to the 3D data stack of the location in question such that a remote user could evaluate the size, depth and potential impact such a change in the road surface might have on traffic.
• The apparatus so described is configured to be extensible and interoperable with future-designed tool sets configured to access the stored data models (such as through an API) and run data analysis specific to a particular domain or area of interest. For example, the Transportation Department of a government may have an interface with the analytic system 104 that allows for additional information to be utilized in support of traffic signal analysis and impact, accident analysis, diversion and route analysis and dynamic re-routing tools. Likewise, in agricultural contexts, the analytic system is extensible to accept private or user-specific data streams and information to allow the model data to be used and combined with user data for the purposes of crop monitoring, yield prediction, weather impact analysis, drought mitigation and cost predictions.
• While this specification contains many specific embodiment details, these should not be construed as limitations on the scope of any embodiment or of what can be claimed, but rather as descriptions of features that can be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features can be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination can be directed to a sub-combination or variation of a sub-combination.
  • Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • It should be noted that use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term) to distinguish the claim elements.
  • Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having,” “containing,” “involving,” and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
  • Particular embodiments of the subject matter described in this specification have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain embodiments, multitasking and parallel processing can be advantageous. Patents, patent applications, and publications are cited throughout this application, the disclosures of which, including all disclosed chemical structures, are incorporated herein by reference. Citation of the above publications or documents is not intended as an admission that any of the foregoing is pertinent prior art, nor does it constitute any admission as to the contents or date of these publications or documents. All references cited herein are incorporated by reference to the same extent as if each individual publication, patent application, or patent was specifically and individually indicated to be incorporated by reference.
  • While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (20)

What is claimed is:
1. A computer implemented method, the method comprising:
selecting, by a processor, a geospatial area;
generating, by the processor, geospatial data corresponding to the geospatial area;
retrieving, by the processor, from a model database a first plurality of data objects relevant to the geospatial area wherein the first plurality of data objects include data related to one or more objects located in a city;
determining, by the processor, that the model database does not contain a specific type of data object relevant to the geospatial area;
in response to determining that the model database does not contain the specific type of data object relevant to the geospatial area, accessing, from at least one remote database accessible by the processor, a second plurality of data objects obtained from at least one of a plurality of sensor devices, wherein the second plurality of data objects is relevant to the geospatial area;
determining, by the processor, a format of each of the second plurality of data objects, wherein determining the format includes determining whether each of the second plurality of data objects is in a compatible format type or an incompatible format type;
identifying that a first portion of the second plurality of data objects is in one of a plurality of incompatible format types and storing each of the first portion of the second plurality of data objects in one of the incompatible format types as an element in a conversion array;
converting each of the first portion of the second plurality of data objects in the conversion array such that each of the first portion of the second plurality of data objects in one of the incompatible format types is converted to one of the compatible format types wherein the converting includes identifying a conversion algorithm for converting each of the first portion of the second plurality of data objects stored in the conversion array from one of the incompatible format types to one of the compatible format types, and applying the identified conversion algorithm to each of the second plurality of data objects to obtain a plurality of converted data objects;
generating a 3D visualization from 1) the first plurality of data objects from the model database, 2) the plurality of converted data objects from the at least one remote database, and 3) a second portion of the second plurality of data objects, each in one of the compatible format types and from the at least one remote database;
transmitting the 3D visualization to a second computer;
displaying the 3D visualization to a user using at least one display device integral to the second computer.
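The format-handling steps recited in claim 1 — detecting each remote object's format, queuing the incompatible ones in a conversion array, identifying and applying a conversion algorithm, then merging all three sources for the 3D visualization — can be illustrated with the following sketch. It is a hedged illustration, not the patented implementation: the format names (`geojson`, `gltf`, `shapefile`, `dwg`) and the dictionary-based converter lookup are assumptions made for the example.

```python
# Illustrative sketch of the claimed conversion pipeline.
COMPATIBLE = {"geojson", "gltf"}
# Stand-ins for format-specific conversion algorithms.
CONVERTERS = {
    "shapefile": lambda o: {**o, "fmt": "geojson"},
    "dwg":       lambda o: {**o, "fmt": "gltf"},
}

def build_scene_inputs(model_objects, remote_objects):
    # Partition remote objects by format compatibility.
    conversion_array = [o for o in remote_objects if o["fmt"] not in COMPATIBLE]
    already_ok       = [o for o in remote_objects if o["fmt"] in COMPATIBLE]
    converted = []
    for obj in conversion_array:
        convert = CONVERTERS[obj["fmt"]]   # identify a conversion algorithm
        converted.append(convert(obj))     # apply it to obtain a converted object
    # Merge: 1) model data, 2) converted objects, 3) already-compatible objects.
    return model_objects + converted + already_ok

scene = build_scene_inputs(
    [{"id": "bldg-1", "fmt": "gltf"}],
    [{"id": "pipe-9", "fmt": "shapefile"}, {"id": "road-2", "fmt": "geojson"}],
)
print([o["fmt"] for o in scene])  # → ['gltf', 'geojson', 'geojson']
```

The conversion array here is just a list of the incompatible objects; its role, as in the claim, is to stage them so that a per-format conversion algorithm can be selected and applied uniformly.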
2. The computer implemented method of claim 1, wherein the one or more objects located in the city are located above ground.
3. The computer implemented method of claim 1, wherein the one or more objects located in the city are located below ground.
4. The computer implemented method of claim 1, wherein the one or more objects include one or more buildings.
5. The computer implemented method of claim 1, wherein the one or more objects include subsurface infrastructure.
6. The computer implemented method of claim 1, wherein the 3D visualization includes one or more of a building, subsurface infrastructure or combinations thereof.
7. The computer implemented method of claim 1, wherein the second plurality of data objects includes data from a first sensor device recorded at different times.
8. The computer implemented method of claim 1 wherein the second computer is integral to a virtual reality display device.
9. The computer implemented method of claim 1 wherein the second computer is integral to an autonomous or a semiautonomous device.
10. The computer implemented method of claim 1, further comprising:
updating, with the second computer, a local copy of the first plurality of data objects, the plurality of converted data objects and the second portion of the second plurality of data objects each having the compatible data format type and comprising the 3D visualization;
transmitting the updated local copy to a first computer;
synchronizing the local copy with a master copy of the updated data accessible by the first computer.
11. A computer implemented method, the method comprising:
receiving, in a processor, a selection of a geospatial area;
generating, by the processor, geospatial data corresponding to the geospatial area;
retrieving, by the processor, from a model database a first plurality of data objects relevant to the geospatial area wherein the first plurality of data objects include data related to one or more buildings;
retrieving, from at least one remote database accessible by the processor, a second plurality of data objects wherein the second plurality of data objects include sensor information from at least one of a plurality of sensor devices that monitor conditions within the geospatial area;
determining, by the processor, a format of each of the second plurality of data objects;
determining, by the processor, whether the format of each of the second plurality of data objects is in a compatible format type or an incompatible format type;
identifying, by the processor, a first portion of the second plurality of data objects each in one of a plurality of incompatible format types and storing each of the first portion of the second plurality of data objects as an element in a conversion array;
converting each of the first portion of the second plurality of data objects in the conversion array, using a conversion module configured as code executing in the processor, such that each of the first portion of the second plurality of data objects in one of the incompatible format types is converted to one of the compatible format types,
wherein the converting includes identifying a conversion algorithm for converting each of the first portion of the second plurality of data objects stored in the conversion array from one of the incompatible format types to one of the compatible format types, and applying the identified conversion algorithm to each of the second plurality of data objects to obtain a plurality of converted data objects; and
generating a three-dimensional visualization of the one or more buildings and the sensor information from the first plurality of data objects and the at least one of the plurality of sensor devices.
12. The computer implemented method of claim 11, where the second plurality of data objects includes data associated with underground utilities.
13. The computer implemented method of claim 12, wherein the data includes a location and a history of the underground utilities within the geospatial area for a particular time period.
14. The computer implemented method of claim 12, wherein the three-dimensional visualization includes a rendering of a path of the underground utilities.
15. The computer implemented method of claim 11, wherein the first plurality of data objects include data related to a plurality of buildings and wherein the three-dimensional visualization includes a rendering of the plurality of buildings.
16. The computer implemented method of claim 11, wherein the at least one remote database is selected from the group consisting of a municipal database, a zoning database, a planning database, a waste management database and utility databases.
17. The computer implemented method of claim 11, further comprising storing the plurality of converted data objects to the model database.
18. The computer implemented method of claim 11, wherein the sensor information is related to traffic in the geospatial area and wherein the three-dimensional visualization shows roads and data related to traffic flow on the roads in the geospatial area.
19. The computer implemented method of claim 1, wherein the second computer is a mobile computing device.
20. The computer implemented method of claim 19, wherein the mobile computing device is located within the geospatial area under inquiry and wherein the 3D visualization is adjusted in response to an orientation of the mobile computing device such that different portions of the 3D visualization are output depending on the orientation of the mobile computing device.
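The orientation-dependent output recited in claim 20 — different portions of the 3D visualization being output depending on the mobile device's orientation — can be sketched minimally as a mapping from a device yaw reading to the quadrant of the scene to render. This is a hedged illustration under stated assumptions: the four-sector split and the sector names are choices made for the example, not anything specified in the claims.

```python
# Minimal sketch: choose which portion of the 3D visualization to output
# based on the mobile device's compass yaw (in degrees).
SECTORS = ["north", "east", "south", "west"]

def visible_sector(yaw_degrees):
    """Map a yaw angle onto the 90-degree quadrant of the scene to render."""
    return SECTORS[int((yaw_degrees % 360) // 90)]

print(visible_sector(45))   # yaw in 0-89 → "north"
print(visible_sector(200))  # yaw in 180-269 → "south"
```

In a real system the yaw would come from the device's inertial sensors and the "sector" would select a camera frustum into the 3D model rather than a named quadrant, but the claim's behavior reduces to this kind of orientation-to-viewport mapping.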
US16/299,737 2015-10-14 2019-03-12 3d analytics actionable solutions support system and apparatus Abandoned US20190205310A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/299,737 US20190205310A1 (en) 2015-10-14 2019-03-12 3d analytics actionable solutions support system and apparatus

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201562241394P 2015-10-14 2015-10-14
US14/959,433 US10268740B2 (en) 2015-10-14 2015-12-04 3D analytics actionable solution support system and apparatus
US16/299,737 US20190205310A1 (en) 2015-10-14 2019-03-12 3d analytics actionable solutions support system and apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/959,433 Continuation US10268740B2 (en) 2015-10-14 2015-12-04 3D analytics actionable solution support system and apparatus

Publications (1)

Publication Number Publication Date
US20190205310A1 true US20190205310A1 (en) 2019-07-04

Family

ID=58518035

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/959,433 Expired - Fee Related US10268740B2 (en) 2015-10-14 2015-12-04 3D analytics actionable solution support system and apparatus
US16/299,737 Abandoned US20190205310A1 (en) 2015-10-14 2019-03-12 3d analytics actionable solutions support system and apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/959,433 Expired - Fee Related US10268740B2 (en) 2015-10-14 2015-12-04 3D analytics actionable solution support system and apparatus

Country Status (3)

Country Link
US (2) US10268740B2 (en)
TW (1) TW201727514A (en)
WO (1) WO2017066679A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110704916A (en) * 2019-09-24 2020-01-17 中水北方勘测设计研究有限责任公司 BIM technology-based large complex three-dimensional geological model grid coarsening method
US11035958B2 (en) 2018-11-15 2021-06-15 Bejing Didi Infinity Technology And Development Co., Ltd. Systems and methods for correcting a high-definition map based on detection of obstructing objects
WO2021140514A1 (en) * 2020-01-07 2021-07-15 Datumate Ltd. Building information modeling (bim) data model for construction infrastructure
US11091156B2 (en) 2018-08-17 2021-08-17 Lyft, Inc. Road segment similarity determination
US11157007B2 (en) * 2019-06-28 2021-10-26 Lyft, Inc. Approaches for encoding environmental information
WO2022170241A1 (en) * 2021-02-08 2022-08-11 Dollypup Productions, LLC Code-based file format conversion
US11449475B2 (en) 2019-06-28 2022-09-20 Lyft, Inc. Approaches for encoding environmental information
US11788846B2 (en) 2019-09-30 2023-10-17 Lyft, Inc. Mapping and determining scenarios for geographic regions
US11816900B2 (en) 2019-10-23 2023-11-14 Lyft, Inc. Approaches for encoding environmental information
US11928557B2 (en) 2019-06-13 2024-03-12 Lyft, Inc. Systems and methods for routing vehicles to capture and evaluate targeted scenarios

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170357738A1 (en) * 2016-06-11 2017-12-14 Flux Factory, Inc. Process for Merging Parametric Building Information Models
US10924357B2 (en) * 2016-06-20 2021-02-16 Telefonaktiebolaget Lm Ericsson (Publ) Method and device for determining resource utilization
US10115237B2 (en) * 2016-09-28 2018-10-30 Redzone Robotics, Inc. Virtual reality display of pipe inspection data
JP6242547B1 (en) * 2017-01-30 2017-12-06 三菱電機株式会社 Equipment maintenance management system and equipment information providing method
CN107291223A (en) * 2017-06-07 2017-10-24 武汉大学 A kind of super large data volume virtual reality space Information Visualization System and method
CN107273867A (en) * 2017-06-27 2017-10-20 航天星图科技(北京)有限公司 Empty day Remote Sensing Data Processing all-in-one
CN107577731B (en) * 2017-08-24 2020-06-16 多伦科技股份有限公司 Method and system for accessing different spatial databases
CN107577347B (en) * 2017-09-05 2020-07-21 南京睿诚华智科技有限公司 Education system and method based on virtual reality
CA3073645A1 (en) * 2017-09-05 2019-03-14 Zoran Obradovic System, method, and program product for local investment networking
CN111213137B (en) * 2017-10-10 2023-06-20 本特利系统有限公司 Alignment of source infrastructure data with BIS concept patterns
CN108268185B (en) * 2017-11-27 2021-05-28 北京硬壳科技有限公司 Control method, control device and control system
CN109931923B (en) 2017-12-15 2023-07-07 阿里巴巴集团控股有限公司 Navigation guidance diagram generation method and device
CN108197379B (en) * 2017-12-29 2021-09-17 金螳螂家装电子商务(苏州)有限公司 Data analysis method applied to home decoration platform
US11568236B2 (en) 2018-01-25 2023-01-31 The Research Foundation For The State University Of New York Framework and methods of diverse exploration for fast and safe policy improvement
CN112424839B (en) * 2018-06-28 2022-10-11 利乐拉瓦尔集团及财务有限公司 System and method for detecting deviations in packaging containers
US10740984B2 (en) * 2018-07-06 2020-08-11 Lindsay Corporation Computer-implemented methods, computer-readable media and electronic devices for virtual control of agricultural devices
CN109190094B (en) * 2018-09-05 2023-03-10 盈嘉互联(北京)科技有限公司 Building information model file segmentation method based on IFC standard
CN109408979B (en) * 2018-10-31 2023-04-07 广西路桥工程集团有限公司 Engineering construction internal control data management system
US10997761B2 (en) * 2018-11-09 2021-05-04 Imaginear Inc. Systems and methods for creating and delivering augmented reality content
CN110008274B (en) * 2019-04-15 2022-10-18 大连海事大学 BIM-based visual automatic monitoring system for internal force of pile foundation
US10895637B1 (en) * 2019-07-17 2021-01-19 BGA Technology LLC Systems and methods for mapping manmade objects buried in subterranean surfaces using an unmanned aerial vehicle integrated with radar sensor equipment
CN110597771B (en) * 2019-08-14 2023-09-05 平安证券股份有限公司 Method, device, equipment and readable storage medium for quickly importing DBF (digital binary flash) files
CN110837665A (en) * 2019-09-20 2020-02-25 久瓴(上海)智能科技有限公司 Building model display method and device, computer equipment and readable storage medium
CN111080536A (en) * 2019-11-13 2020-04-28 武汉华中天经通视科技有限公司 Self-adaptive filtering method for airborne laser radar point cloud
US11435938B2 (en) * 2019-12-13 2022-09-06 EMC IP Holding Company LLC System and method for generating asset maps
CN111160420B (en) * 2019-12-13 2023-10-10 北京三快在线科技有限公司 Map-based fault diagnosis method, map-based fault diagnosis device, electronic equipment and storage medium
TWI726539B (en) * 2019-12-16 2021-05-01 英業達股份有限公司 Processing method of range selector
US11366693B2 (en) 2019-12-26 2022-06-21 EMC IP Holding Company LLC System and method for deployment interface
CN113043265A (en) * 2019-12-26 2021-06-29 沈阳新松机器人自动化股份有限公司 Android-based library robot control method and device
US11694089B1 (en) 2020-02-04 2023-07-04 Rockwell Collins, Inc. Deep-learned photorealistic geo-specific image generator with enhanced spatial coherence
US11544832B2 (en) 2020-02-04 2023-01-03 Rockwell Collins, Inc. Deep-learned generation of accurate typical simulator content via multiple geo-specific data channels
CN111310230B (en) * 2020-02-10 2023-04-14 腾讯云计算(北京)有限责任公司 Spatial data processing method, device, equipment and medium
US11501489B2 (en) 2020-02-27 2022-11-15 Magic Leap, Inc. Cross reality system for large scale environment reconstruction
CN111506950B (en) * 2020-04-23 2021-04-13 中筑创联建筑科技(北京)有限公司 BIM structure transformation increment information generation and storage system and method
JP7466429B2 (en) * 2020-10-22 2024-04-12 株式会社日立製作所 Computer system and planning evaluation method
CN112417141B (en) * 2020-11-22 2023-05-16 西安热工研究院有限公司 Domestic industrial control system curve data query processing method
US11605202B2 (en) 2020-12-11 2023-03-14 International Business Machines Corporation Route recommendation that assists a user with navigating and interpreting a virtual reality environment
US11860641B2 (en) * 2021-01-28 2024-01-02 Caterpillar Inc. Visual overlays for providing perception of depth
TWI795764B (en) * 2021-04-22 2023-03-11 政威資訊顧問有限公司 Object positioning method and server end of presenting facility based on augmented reality view
TWI782591B (en) * 2021-06-23 2022-11-01 中興工程顧問股份有限公司 The method of the pipeline project design
CN114386078B (en) * 2022-03-22 2022-06-03 武汉汇德立科技有限公司 BIM-based construction project electronic archive management method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7995055B1 (en) 2007-05-25 2011-08-09 Google Inc. Classifying objects in a scene
WO2011070927A1 (en) 2009-12-11 2011-06-16 株式会社トプコン Point group data processing device, point group data processing method, and point group data processing program
JP5462093B2 (en) 2010-07-05 2014-04-02 株式会社トプコン Point cloud data processing device, point cloud data processing system, point cloud data processing method, and point cloud data processing program
US20140006919A1 (en) * 2012-06-29 2014-01-02 3S International, Llc. Method and apparatus for annotation content conversions
US9412040B2 (en) 2013-12-04 2016-08-09 Mitsubishi Electric Research Laboratories, Inc. Method for extracting planes from 3D point cloud sensor data

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11091156B2 (en) 2018-08-17 2021-08-17 Lyft, Inc. Road segment similarity determination
US11858503B2 (en) 2018-08-17 2024-01-02 Lyft, Inc. Road segment similarity determination
US11035958B2 (en) 2018-11-15 2021-06-15 Bejing Didi Infinity Technology And Development Co., Ltd. Systems and methods for correcting a high-definition map based on detection of obstructing objects
US11928557B2 (en) 2019-06-13 2024-03-12 Lyft, Inc. Systems and methods for routing vehicles to capture and evaluate targeted scenarios
US11157007B2 (en) * 2019-06-28 2021-10-26 Lyft, Inc. Approaches for encoding environmental information
US11449475B2 (en) 2019-06-28 2022-09-20 Lyft, Inc. Approaches for encoding environmental information
CN110704916A (en) * 2019-09-24 2020-01-17 中水北方勘测设计研究有限责任公司 BIM technology-based large complex three-dimensional geological model grid coarsening method
US11788846B2 (en) 2019-09-30 2023-10-17 Lyft, Inc. Mapping and determining scenarios for geographic regions
US11816900B2 (en) 2019-10-23 2023-11-14 Lyft, Inc. Approaches for encoding environmental information
WO2021140514A1 (en) * 2020-01-07 2021-07-15 Datumate Ltd. Building information modeling (bim) data model for construction infrastructure
WO2022170241A1 (en) * 2021-02-08 2022-08-11 Dollypup Productions, LLC Code-based file format conversion

Also Published As

Publication number Publication date
WO2017066679A1 (en) 2017-04-20
US10268740B2 (en) 2019-04-23
TW201727514A (en) 2017-08-01
US20170109422A1 (en) 2017-04-20
WO2017066679A9 (en) 2017-05-18

Similar Documents

Publication Publication Date Title
US20190205310A1 (en) 3d analytics actionable solutions support system and apparatus
Faiz et al. Geographical information systems and spatial optimization
US20230046926A1 (en) 3d building generation using topology
Ramírez Eudave et al. On the suitability of a unified GIS-BIM-HBIM framework for cataloguing and assessing vulnerability in Historic Urban Landscapes: A critical review
Kijewski-Correa et al. CyberEye: Development of integrated cyber-infrastructure to support rapid hurricane risk assessment
de Vries Trends in the adoption of new geospatial technologies for spatial planning and land management in 2021
Kiwelekar et al. Deep learning techniques for geospatial data analysis
Tohidi et al. A review of the machine learning in gis for megacities application
Shekhar et al. From GPS and virtual globes to spatial computing-2020
National Research Council et al. IT roadmap to a geospatial future
EP4305534A1 (en) Location-specific three-dimensional models responsive to location-related queries
Yu et al. Automatic geospatial data conflation using semantic web technologies
Baylon et al. Introducing GIS to TransNav and its extensive maritime application: an innovative tool for intelligent decision making?
Gaur et al. Emerging Trends, Techniques, and Applications in Geospatial Data Science
Huber et al. Spatial data standards in view of models of space and the functions operating on them
Câmara et al. Geographical information engineering in the 21st century
McKee et al. Inside the opengis specification
Stocker et al. Abstractions from Sensor Data with Complex Event Processing and Machine Learning
Garg Understanding Geospatial Data
Noardo Spatial ontologies for architectural heritage.
Cordes et al. Reimagining Maps
Goodchild Geography and the information society
Sebillo et al. A territorial intelligence-based approach for smart emergency planning
Chursin Useful spatial data and GIS applications on the Internet for transportation companies

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION