CN111429583A - Space-time situation perception method and system based on three-dimensional geographic information - Google Patents


Info

Publication number
CN111429583A
Authority
CN
China
Prior art keywords
data
dimensional geographic
fusion
analysis
space
Prior art date
Legal status
Pending
Application number
CN202010208367.8A
Other languages
Chinese (zh)
Inventor
陈虹旭
刘卫华
刘丽娟
周舟
Current Assignee
Beijing Smart Yunzhou Technology Co ltd
Original Assignee
Beijing Smart Yunzhou Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Smart Yunzhou Technology Co ltd filed Critical Beijing Smart Yunzhou Technology Co ltd
Priority to CN202010208367.8A priority Critical patent/CN111429583A/en
Publication of CN111429583A publication Critical patent/CN111429583A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/003: Navigation within 3D models or images
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The embodiment of the application discloses a space-time situation perception method and system based on three-dimensional geographic information. The method comprises: acquiring information data and converging the information data into a three-dimensional geographic information model to obtain three-dimensional geographic comprehensive bearing data; performing virtual-real scene fusion on the three-dimensional geographic comprehensive bearing data to obtain a three-dimensional geographic fusion scene; and performing space-time position intelligent collision detection analysis on the three-dimensional geographic fusion scene. The result of the space-time position intelligent collision detection analysis provides data support for decision analysis, which comprises the construction of space-time logic intelligent data, the visualization of space-time situation rules and trends, industry-application-assisted decision making, and automatic control and response.

Description

Space-time situation perception method and system based on three-dimensional geographic information
Technical Field
The embodiment of the application relates to the technical field of virtual reality, in particular to a space-time situation perception method and system based on three-dimensional geographic information.
Background
With the advent of the big data era, the extraction, analysis and prediction of massive data have become a key direction of urban applications, and Situation Awareness (SA) has set off a new wave. A "situation" is not an "event": an event is an outcome, while a situation is a trend, and perception of a situation means predicting that trend before the event occurs. A situation awareness system should provide continuous monitoring of cyberspace security, discovering attack threats and anomalies in time; it should support threat investigation, analysis and visualization, quickly determining the scope of influence, attack path, purpose and means of a threat, thereby supporting effective security decisions and responses; and it should establish a security early-warning mechanism to improve risk control, emergency response and overall security protection.
At present, most systems in industry applications remain at the first level of situation awareness, the perception stage: all data can be accessed and browsed, but there is no unified understanding, analysis, or trend prediction. Perception of massive data also has several limitations:
(1) Video surveillance pictures are isolated from one another: each browsed video is only an independent picture from a single camera, which cannot reflect or restore the real scene information and cannot form a macroscopic, holistic view.
(2) Intelligent data analysis by artificial-intelligence techniques is based only on single-picture analysis; the data are scattered and isolated, research and judgment require a great deal of manpower and time, efficiency is low, the workload is heavy, and no overall spatial perception or temporal event path can be formed.
(3) Multi-source perception data are displayed and applied only as raw sensor readings, or merely associated with a sensor's position on a map; the dynamic perception data cannot be restored into the real scene, so no scene-based visual perception is formed.
In summary, the prior art still has many limitations in fusing the virtual and the real and in application.
Disclosure of Invention
Therefore, the embodiment of the application provides a space-time situation perception method and system based on three-dimensional geographic information, achieving decision analysis based on three-dimensional geographic information in a virtual-real environment.
According to a first aspect of embodiments of the present application, there is provided a space-time situation awareness method based on three-dimensional geographic information, the method including:
acquiring information data and converging the information data into a three-dimensional geographic information model to obtain three-dimensional geographic comprehensive bearing data;
carrying out virtual-real scene fusion on the three-dimensional geographic comprehensive bearing data to obtain a three-dimensional geographic fusion scene, and carrying out space-time position intelligent collision detection analysis on the three-dimensional geographic fusion scene;
the space-time position intelligent collision detection analysis result provides data support for decision analysis, and the decision analysis comprises the construction of space-time logic intelligent data, the visualization of space-time situation rules and trends, industrial application auxiliary decision making, and automatic control and response.
Optionally, the obtaining information data and converging the information data into the three-dimensional geographic information model to obtain three-dimensional geographic comprehensive bearing data includes:
and acquiring information data of a video application gateway, an intelligent analysis data gateway and an internet of things perception data gateway, and fusing a video picture, the intelligent analysis application data and the internet of things perception data in the three-dimensional geographic information model according to the longitude and latitude and the altitude coordinate of each information data to obtain the three-dimensional geographic comprehensive bearing data.
Optionally, the video application gateway forwards video stream data via the GB/T 28181 protocol or an SDK;
the intelligent analysis gateway accesses the intelligent analysis applications of third-party platforms and forwards analysis data via the GB/T 28181 protocol or an SDK;
the internet of things perception gateway accesses the perception data of sensor devices and forwards dynamic data via an SDK.
Optionally, performing virtual-real scene fusion on the three-dimensional geographic comprehensive bearing data to obtain the three-dimensional geographic fusion scene and performing space-time position intelligent collision detection analysis on the three-dimensional geographic fusion scene includes:
inputting the target information data into a three-dimensional geographic scene video virtual-real fusion module, an intelligent analysis data geographic fusion perception module, an internet of things perception data geographic fusion perception module, a people-places-events-objects data module or a scientific knowledge graph module to obtain the virtual-real fusion scene of the three-dimensional geographic fusion model, and then performing the space-time position intelligent collision detection analysis.
Optionally, the three-dimensional geographic model is constructed from a remote sensing image, a digital elevation, a vector map and a three-dimensional model.
According to a second aspect of the embodiments of the present application, there is provided a spatial-temporal situational awareness system based on three-dimensional geographic information, the system including:
the convergence sensing module is used for acquiring information data and converging the information data into a three-dimensional geographic information model to obtain three-dimensional geographic comprehensive bearing data;
the analysis and understanding module is used for carrying out virtual-real scene fusion on the three-dimensional geographic comprehensive bearing data to obtain a three-dimensional geographic fusion scene; carrying out space-time position intelligent collision detection analysis on the three-dimensional geographic fusion scene;
and the simulation prediction module is used for carrying out decision analysis in the three-dimensional geographic fusion scene according to the intelligent collision detection and analysis result of the space-time position, wherein the decision analysis comprises the construction of space-time logic intelligent data, the visualization of space-time situation rules and trends, the auxiliary decision of industry application, and automatic control and response.
Optionally, the convergence sensing module is specifically configured to:
and acquiring information data of a video application gateway, an intelligent analysis data gateway and an internet of things perception data gateway, and fusing a video picture, the intelligent analysis application data and the internet of things perception data in the three-dimensional geographic information model according to the longitude and latitude and the altitude coordinate of each information data to obtain the three-dimensional geographic fusion model.
Optionally, the video application gateway forwards video stream data via the GB/T 28181 protocol or an SDK;
the intelligent analysis gateway accesses the intelligent analysis applications of third-party platforms and forwards analysis data via the GB/T 28181 protocol or an SDK;
the internet of things perception gateway accesses the perception data of sensor devices and forwards dynamic data via an SDK.
Optionally, the analysis understanding module is specifically configured to:
and inputting the target information data into a three-dimensional geographic scene video virtual-real fusion module, an intelligent analysis data geographic fusion sensing module, an internet of things sensing data geographic fusion sensing module, a human-ground object data module or a scientific knowledge map module to obtain a virtual-real fusion scene of the three-dimensional geographic fusion module, and then performing space-time position intelligent collision detection analysis.
Optionally, the three-dimensional geographic model is constructed from a remote sensing image, a digital elevation, a vector map and a three-dimensional model.
In summary, the space-time situation awareness method and system based on three-dimensional geographic information provided by the embodiments of the application acquire information data and converge it into a three-dimensional geographic information model to obtain a three-dimensional geographic fusion model; perform space-time position intelligent collision detection analysis on the target information data and the three-dimensional geographic fusion model to obtain a virtual-real fusion scene of the three-dimensional geographic fusion model; and perform decision analysis in the virtual-real fusion scene according to the data to be predicted, the decision analysis comprising the construction of space-time logic intelligent data, the visualization of space-time situation rules and trends, industry-application-assisted decision making, and automatic control and response. Decision analysis based on three-dimensional geographic information is thus achieved in a virtual-real environment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It should be apparent that the drawings in the following description are merely exemplary, and that other embodiments can be derived from the drawings provided by those of ordinary skill in the art without inventive effort.
The structures, proportions and sizes shown in this specification are used only to match the content disclosed in the specification, so that those skilled in the art can understand and read the present invention; they do not limit the conditions under which the present invention can be implemented and have no technical significance in themselves. Any structural modification, change of proportion or adjustment of size that does not affect the functions and purposes of the present invention shall still fall within the scope of the present invention.
FIG. 1 is a schematic diagram of a spatial-temporal situation awareness system based on three-dimensional geographic information according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an embodiment of spatial-temporal situational awareness based on three-dimensional geographic information provided by an embodiment of the present application;
fig. 3 is a schematic flow chart of a space-time situation awareness method based on three-dimensional geographic information according to an embodiment of the present application.
Detailed Description
The present invention is described below in terms of particular embodiments, and other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. It should be understood that the described embodiments are merely exemplary and are not intended to limit the invention to the particular embodiments disclosed. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Based on the limitations of the prior art mentioned in the background, the embodiments of the present application provide a space-time situation awareness method and system based on three-dimensional geographic information. Data from real life are effectively organized and sorted according to a unified space-time frame of reference over a spatial region, forming intelligent data that can guide production, daily life and industry applications, so as to better acquire, understand and display events and predict their development trends, and in turn provide feasible and effective decisions for city management.
In one possible embodiment, a three-dimensional geographic information model is built. As the bottom-level core bearing engine of the system, it can integrate multi-source data such as remote sensing images, digital elevations, electronic maps and urban three-dimensional models to reconstruct the urban three-dimensional virtual geographic environment.
In one possible implementation, a three-dimensional geo-fusion model in a digital twin city scenario is constructed. In a virtual city scene, according to the real and accurate coordinate positions of city spaces such as longitude, latitude, altitude and the like, unstructured big data such as monitoring videos and the like, intelligent analysis data and internet of things perception data are fused, a real world is constructed into a virtual three-dimensional digital twin city, and the first stage of city space-time situation perception is achieved.
In the digital world, the only way to establish a "digital twin" counterpart is through position. Traditional intelligent analysis applications are all based on a single video or a single scene; without an overall, global view, no single state can be called a situation, for a situation emphasizes the environment, dynamics and wholeness. Location Intelligence (LI) is an extension of and complement to Artificial Intelligence (AI): AI + LI forms a dual intelligent application engine, so that artificial intelligence applications based on single videos and single scenes, combined with spatial position, achieve more real and accurate mining and analysis, helping managers make correct decisions by combining position information with data information. In this way, the many kinds of perception data in a city can be mined and understood in real time within the digital-twin city scene, forming the visualization stage of space-time data perception.
Driven jointly by LI (Location Intelligence) and AI (Artificial Intelligence), the first step is to position urban data and associate the data with locations, which is the basis of decision making. The second step is spatial visualization: finding and displaying location-related information, and visualizing position data so that more and more complex location-based operations can be performed. The third step is deep, location-based mining and analysis, raising the application to the analysis level, for example the relations among various data and their possible future development trends; adding business data provides richer data support for urban management and decision making. The final step is intelligent planning of urban space: combining the people, places, events, objects and various data bases in the city, and through location-intelligent analysis realizing visual prediction of business conditions, perception and understanding for the urban brain, and support for urban business decisions.
In order to achieve effective and accurate prediction and early warning of the urban situation, LI is used to model and analyze each influencing factor of multiple situation themes while AI is continuously optimized; at the same time, an urban situation simulation system is constructed to unify urban data and urban scenes in space and time, interconnect the information of each system, and improve the utility of urban data, finally achieving accurate perception, understanding and prediction of the urban space-time situation.
Fig. 1 illustrates a space-time situation awareness system based on three-dimensional geographic information according to an embodiment of the present application, where the system includes:
the convergence sensing module 101 is configured to obtain information data and converge the information data into a three-dimensional geographic information model to obtain three-dimensional geographic comprehensive bearer data.
The analysis understanding module 102 is configured to perform virtual-real scene fusion on the three-dimensional geographic integrated bearer data to obtain a three-dimensional geographic fusion scene; and performing space-time position intelligent collision detection analysis on the three-dimensional geographic fusion scene.
And the simulation prediction module 103 is used for performing decision analysis in the three-dimensional geographic fusion scene according to the space-time position intelligent collision detection analysis result, wherein the decision analysis comprises the construction of space-time logic intelligent data, the visualization of space-time situation rules and trends, the auxiliary decision of industry application, and automatic control and response.
In a possible implementation, the convergence awareness module 101 is specifically configured to: acquire information data from a video application gateway, an intelligent analysis data gateway and an internet of things perception data gateway, and fuse the video pictures, the intelligent analysis application data and the internet of things perception data into the three-dimensional geographic information model according to the longitude, latitude and altitude coordinates of each piece of information data, to obtain the three-dimensional geographic fusion model.
In a possible implementation, the video application gateway forwards video stream data via the GB/T 28181 protocol or an SDK; the intelligent analysis gateway accesses the intelligent analysis applications of video monitoring devices or third-party platforms and forwards analysis data via the GB/T 28181 protocol or an SDK; and the internet of things perception gateway accesses the perception data of sensor devices and forwards dynamic data via an SDK.
In a possible implementation, the analysis understanding module is specifically configured to: input the target information data into a three-dimensional geographic scene video virtual-real fusion module, an intelligent analysis data geographic fusion perception module, an internet of things perception data geographic fusion perception module, a people-places-events-objects data module or a scientific knowledge graph module to perform space-time position intelligent collision detection analysis, so as to obtain the virtual-real fusion scene of the three-dimensional geographic fusion model.
In one possible embodiment, the three-dimensional geographic model is constructed from a remote sensing image, a digital elevation, a vector map, and a three-dimensional model.
In order to more clearly illustrate the three-dimensional geographic information based spatio-temporal situation awareness system provided by the embodiment of the present application, fig. 2 schematically illustrates an embodiment of a three-dimensional geographic information based spatio-temporal situation awareness method provided by the embodiment of the present application. As shown in fig. 2, the space-time situation awareness system based on three-dimensional geographic information includes three modules: the system comprises a convergence perception module, an analysis understanding module and a simulation prediction module.
The convergence sensing module converges unstructured data such as monitoring videos, intelligent analysis data and internet of things sensing data according to a unified space-time frame (also called as a three-dimensional geographic information model) through a video application gateway, an intelligent analysis data gateway and an internet of things sensing data gateway, gives unified and standard geographic spatial position information to the converged data, realizes the space-time visual comprehensive convergence sensing of three-dimensional geographic information scenes of the unstructured data such as the monitoring videos, the intelligent analysis data and the internet of things sensing data, and obtains a three-dimensional geographic fusion model.
The video application gateway realizes access to video monitoring devices and forwarding of streaming media via the GB/T 28181 protocol or an SDK. The intelligent analysis gateway realizes access to various video monitoring devices or to the intelligent analysis applications of third-party platforms, and forwarding of analysis data, via the GB/T 28181 protocol or an SDK. The internet of things perception gateway realizes access to the perception data of various sensor devices and dynamic data forwarding via an SDK.
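The three gateways share one role: accept a source-specific payload and forward it in a form the convergence module can bear. That shared shape can be sketched as a common interface; nothing here reflects the actual GB/T 28181 or SDK integrations, and all class names and payload fields are invented for illustration.

```python
from abc import ABC, abstractmethod

class PerceptionGateway(ABC):
    """Common shape for the three access gateways; names are illustrative."""
    @abstractmethod
    def forward(self, raw: dict) -> dict: ...

class VideoGateway(PerceptionGateway):
    # Stands in for video access via the GB/T 28181 protocol or a vendor SDK.
    def forward(self, raw):
        return {"kind": "video", "stream": raw["stream_id"]}

class AnalysisGateway(PerceptionGateway):
    # Stands in for intelligent-analysis results from a third-party platform.
    def forward(self, raw):
        return {"kind": "analysis", "result": raw["result"]}

class IotGateway(PerceptionGateway):
    # Stands in for dynamic sensor readings forwarded through an SDK.
    def forward(self, raw):
        return {"kind": "iot", "reading": raw["value"]}

def converge(gateways_and_payloads):
    """Route each raw payload through its gateway into one unified feed."""
    return [gw.forward(raw) for gw, raw in gateways_and_payloads]

feed = converge([
    (VideoGateway(), {"stream_id": "rtsp-01"}),
    (AnalysisGateway(), {"result": "person"}),
    (IotGateway(), {"value": 21.5}),
])
print([item["kind"] for item in feed])  # ['video', 'analysis', 'iot']
```

The design point is that downstream modules see one normalized feed regardless of whether the source spoke a protocol or an SDK.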
The analysis understanding module comprises a three-dimensional geographic scene video virtual-real fusion module, an intelligent analysis data geographic fusion perception module, an internet of things perception data geographic fusion perception module, a data access and bearing module for people, places, events, objects and organizations, a scientific knowledge graph application module, and a three-dimensional geographic information space-time position intelligent collision detection application module. Through the three-dimensional geographic information space-time position intelligent collision detection application module, space-time position intelligent collision detection analysis is performed uniformly across the other five modules, so as to obtain the virtual-real fusion scene of the three-dimensional geographic fusion model.
The three-dimensional geographic scene video virtual-real fusion module splices and fuses massive, scattered monitoring video pictures into large-scale, wide-area scenes within the three-dimensional geographic information scene, according to real and accurate geographic coordinates such as longitude, latitude and altitude. Because the splicing and fusion are anchored to these coordinates, roaming through the three-dimensional model scene does not cause misalignment, and the video pictures and the three-dimensional geographic information scene remain fused and unified. The visual characteristics of the three-dimensional geographic information scene are thus fully exploited, and the virtual-real combination of video and scene is realized.
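The claim that coordinate-anchored fusion does not misalign during roaming can be illustrated with a toy camera model: the mapping between scene coordinates and the video picture is recomputed from geometry each time, rather than stored as a fixed screen overlay. The function below is a deliberately simplified ground-plane sketch with an assumed pinhole-style field of view, not the patent's method.

```python
import math

def project_to_scene(cam_pos, cam_yaw_deg, fov_deg, world_pt):
    """Project a 2-D ground-plane world point into a camera's view.

    Returns the normalized horizontal image coordinate in [-1, 1], or
    None if the point lies outside the camera's field of view. All
    geometry here is an illustrative assumption.
    """
    dx = world_pt[0] - cam_pos[0]
    dy = world_pt[1] - cam_pos[1]
    # Bearing from camera to the point, relative to the camera's yaw.
    bearing = math.degrees(math.atan2(dy, dx)) - cam_yaw_deg
    bearing = (bearing + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    half_fov = fov_deg / 2.0
    if abs(bearing) > half_fov:
        return None                                # outside the frustum
    return bearing / half_fov                      # -1 = left edge, 1 = right edge

# A camera at the origin looking along +x with a 90-degree field of view:
print(project_to_scene((0, 0), 0.0, 90.0, (10, 0)))    # 0.0 (dead centre)
print(project_to_scene((0, 0), 0.0, 90.0, (0, 10)))    # None (off to the side)
```

Because the picture position is derived from world coordinates on every call, any viewpoint change in the virtual scene re-derives a consistent registration instead of dragging a stale overlay out of place.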
The intelligent analysis data geographic fusion perception module takes the intelligent video analysis data of single, scattered monitoring videos and performs unified fusion matching into the three-dimensional geographic information scene based on spatial geographic position.
The internet of things perception data geographic fusion perception module performs unified fusion matching of massive, scattered and heterogeneous internet of things perception data into the three-dimensional geographic information scene based on spatial geographic position.
The data access and bearing module for people, places, events, objects and organizations realizes the access and bearing of elements such as natural persons, geographic positions, case events, articles and organizations in the three-dimensional geographic information scene, organizing them organically and providing basic support for space-time position intelligent collision detection analysis.
The scientific knowledge graph application module realizes the access and bearing of knowledge graph data from professional or specialized industry application fields, providing industry and professional knowledge support for space-time position intelligent collision detection analysis.
The three-dimensional geographic information space-time position intelligent collision detection application module is based on the unified space-time frame system of the three-dimensional geographic information system, and combines intelligent analysis such as video fusion and person/vehicle target detection with internet of things perception data to perform space-time position intelligent collision detection analysis, namely: secondary space-time analysis based on the three-dimensional geographic information scene, which provides technical support for constructing intelligent data on unified space-time positions and for forming the space-time topological logical relationships of the original data.
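A space-time position "collision" in this sense can be sketched as a joint proximity test in both space and time between two observation tracks, e.g. a detected person and a detected vehicle. The data shapes and thresholds below are illustrative assumptions, not the patented algorithm.

```python
def st_collisions(track_a, track_b, max_dist, max_dt):
    """Find spatio-temporal 'collisions': pairs of observations from two
    tracks that are close in both space and time. Tracks are lists of
    (t, x, y) tuples; the thresholds are illustrative."""
    hits = []
    for (ta, xa, ya) in track_a:
        for (tb, xb, yb) in track_b:
            close_in_time = abs(ta - tb) <= max_dt
            close_in_space = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 <= max_dist
            if close_in_time and close_in_space:
                hits.append((ta, tb))
    return hits

# Hypothetical tracks: one person and one vehicle, each as (t, x, y).
person = [(0, 0.0, 0.0), (10, 5.0, 0.0), (20, 10.0, 0.0)]
vehicle = [(12, 5.5, 0.5), (30, 40.0, 0.0)]
print(st_collisions(person, vehicle, max_dist=2.0, max_dt=5.0))  # [(10, 12)]
```

Points that coincide in space but not in time (or vice versa) produce no hit, which is what distinguishes space-time collision detection from a purely spatial overlap test.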
The simulation prediction module realizes unified application of the data after space-time position intelligent collision detection. It first constructs space-time logic intelligent data under the unified space-time frame system, then combines industry attributes and the scientific knowledge graph to visually grasp the space-time situation, rules and trends of industry data, thereby providing simulation deduction and command prediction support for industry application decision making. It specifically comprises a space-time logic intelligent data construction module, a space-time situation, rule and trend visualization module, an industry application assistant decision-making module, and an automatic control and response module.
The spatial-temporal situation, rule and trend visualization module is used for simulating, deducing and prejudging the situation, rule and trend of the spatial-temporal integrated event after intelligent collision detection and analysis based on the spatial-temporal position, and visually presenting the situation, rule and trend. And the industry application assistant decision-making module is used for providing visual assistant decision-making support after intelligent data and situation rule analysis aiming at specific industry application after intelligent collision detection and analysis based on space-time position. And the automatic control and response module is used for providing automatic handling and feedback response technical support for specific industry application and decision-making aspects after intelligent collision detection and analysis based on space-time positions.
In practical application, the space-time situation perception method and system based on three-dimensional geographic information provided by the embodiments of the application can be used, for example, for military exercises: "rehearsing" events that have not yet occurred by setting positions and data conditions and obtaining casualty results. It can also be used for command based on real-time monitoring, where A views the real-time dynamics of B in the system, informs B of the integrated decision and gives behavioral guidance; for example, A observes that a suspect's accomplices are ahead of B and warns B to take evasive action.
In summary, the space-time situation perception method and system based on three-dimensional geographic information provided by the embodiments of the application converge spatially perceived data into a unified space-time geographic frame based on the three-dimensional geographic information scene, forming a unified reconstruction of the real and virtual worlds and obtaining the three-dimensional geographic fusion model. Further, through space-time position intelligent collision detection analysis, the capabilities of discovering, identifying, understanding, analyzing and responding to events are improved from a global perspective; real-time insight, analysis, trend prediction and simulation of each situation theme are realized; the elements causing situation changes are acquired, understood and displayed; near-term development trends are predicted in advance; and finally, theoretical support for decisions and actions is provided for city management.
Based on the same technical concept, fig. 3 shows a schematic flow chart of a space-time situation awareness method based on three-dimensional geographic information according to an embodiment of the present application, where the method includes the following steps:
step 301: and acquiring information data and converging the information data into a three-dimensional geographic information model to obtain three-dimensional geographic comprehensive bearing data.
Step 302: and performing virtual-real scene fusion on the three-dimensional geographic comprehensive bearing data to obtain a three-dimensional geographic fusion scene, and performing space-time position intelligent collision detection analysis on the three-dimensional geographic fusion scene.
Step 303: the space-time position intelligent collision detection analysis result provides data support for decision analysis, and the decision analysis comprises the construction of space-time logic intelligent data, the visualization of space-time situation rules and trends, industrial application auxiliary decision making, and automatic control and response.
In one possible embodiment, the three-dimensional geographic model in step 301 is constructed from remote sensing images, a digital elevation model, vector maps, and three-dimensional models.
Step 301 specifically includes: acquiring information data from a video application gateway, an intelligent analysis data gateway and an Internet of things perception data gateway, and fusing video pictures, intelligent analysis application data and Internet of things perception data in the three-dimensional geographic information model according to the longitude, latitude and altitude coordinates of each piece of information data, to obtain the three-dimensional geographic fusion model.
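As a concrete illustration of this convergence step, the sketch below places each gateway record into one shared three-dimensional frame by converting its longitude, latitude and altitude to Earth-centred coordinates. The `GeoScene` class, its `ingest` method and the record layout are hypothetical stand-ins for the patent's three-dimensional geographic information model, not its actual interfaces; only the WGS-84 conversion is standard.

```python
import math
from dataclasses import dataclass, field

# WGS-84 ellipsoid constants
_A = 6378137.0                 # semi-major axis (m)
_E2 = 6.69437999014e-3         # first eccentricity squared

def geodetic_to_ecef(lon_deg, lat_deg, alt_m):
    """Convert longitude/latitude/altitude to Earth-centred (ECEF)
    coordinates, so every sensor record shares one 3D frame."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    n = _A / math.sqrt(1.0 - _E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - _E2) + alt_m) * math.sin(lat)
    return (x, y, z)

@dataclass
class GeoScene:
    """Minimal, illustrative stand-in for the 3D geographic model."""
    records: list = field(default_factory=list)

    def ingest(self, source, payload, lon, lat, alt):
        # Attach each gateway record to the scene at its geographic position.
        self.records.append({
            "source": source,          # e.g. "video", "analysis", "iot"
            "payload": payload,
            "position": geodetic_to_ecef(lon, lat, alt),
        })

scene = GeoScene()
scene.ingest("video", {"stream": "cam-01"}, 116.397, 39.909, 50.0)
scene.ingest("iot", {"pm25": 41}, 116.398, 39.910, 2.0)
```

In a real system the scene would hold georeferenced model nodes rather than a flat list, but the principle — one common coordinate frame keyed by longitude, latitude and altitude — is the same.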
In a possible implementation, the video application gateway is used for receiving video stream data acquired by high-point cameras, bullet cameras and dome cameras; the intelligent analysis data gateway is used for storing analysis data of intelligent analysis equipment; and the Internet of things perception data gateway is used for storing perception data of Internet of things perception equipment.
In a possible implementation, the video application gateway forwards the video stream data via the 28281 protocol or in SDK mode; the intelligent analysis gateway accesses the intelligent analysis applications of the video monitoring equipment or a third-party platform and forwards the analysis data via the 28281 protocol or in SDK mode; and the Internet of things perception gateway accesses the perception data of the sensor equipment and forwards dynamic data in SDK mode.
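The three access gateways can be pictured as a small routing layer that normalizes each feed before geographic fusion. The sketch below is purely illustrative: the handler names, message fields and `dispatch` function are assumptions, and the actual protocol or SDK forwarding is abstracted away.

```python
# Hypothetical gateway router; names and fields are illustrative,
# not the patent's actual interfaces.
HANDLERS = {}

def gateway(kind):
    """Register a handler function for one access gateway."""
    def register(fn):
        HANDLERS[kind] = fn
        return fn
    return register

@gateway("video")
def forward_video(msg):
    # Video stream metadata, forwarded via the signalling protocol or SDK.
    return {"type": "video", "stream_url": msg["url"]}

@gateway("analysis")
def forward_analysis(msg):
    # Structured events from intelligent analysis equipment or platforms.
    return {"type": "analysis", "event": msg["event"]}

@gateway("iot")
def forward_iot(msg):
    # Dynamic sensor readings from Internet of things perception devices.
    return {"type": "iot", "reading": msg["value"]}

def dispatch(kind, msg):
    """Route an incoming message to the gateway handler for its kind."""
    return HANDLERS[kind](msg)
```

A registry like this keeps the fusion stage independent of how each feed is transported, which mirrors the role the three gateways play in the described system.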
In a possible implementation manner, in step 302, the target information data is input to a three-dimensional geographic scene video virtual-real fusion module, an intelligent analysis data geographic fusion sensing module, an internet of things sensing data geographic fusion sensing module, a human-ground object data module or a scientific knowledge map module to perform space-time position intelligent collision detection analysis, so as to obtain a virtual-real fusion scene of the three-dimensional geographic fusion model.
In step 302, virtual-real fusion of unstructured data, such as surveillance video, with the three-dimensional geographic information scene is realized through spatial matching in the three-dimensional geographic fusion model; intelligent collision detection and analysis based on space-time position is then carried out in combination with human-ground object group data and a scientific knowledge map, realizing visual analysis and understanding of virtual and real data on the basis of convergence and fusion.
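In its simplest form, the space-time position collision detection described above can be reduced to flagging pairs of track samples that are close in both space and time. The `st_collision` function below is a minimal sketch under that assumption; the patent's actual analysis additionally draws on knowledge-map and human-ground object data that this toy omits.

```python
from math import dist

def st_collision(track_a, track_b, radius, max_dt):
    """Return (time_a, time_b) pairs where two tracks come within
    `radius` metres of each other inside a `max_dt` time window —
    a simplistic stand-in for space-time position collision detection.
    Each track is a list of (timestamp, (x, y, z)) samples."""
    hits = []
    for (t1, p1) in track_a:
        for (t2, p2) in track_b:
            if abs(t1 - t2) <= max_dt and dist(p1, p2) <= radius:
                hits.append((t1, t2))
    return hits

a = [(0, (0.0, 0.0, 0.0)), (10, (5.0, 0.0, 0.0))]
b = [(1, (0.5, 0.0, 0.0)), (30, (5.0, 0.1, 0.0))]
print(st_collision(a, b, radius=1.0, max_dt=5))  # → [(0, 1)]
```

Note how the second pair of samples is spatially close but 20 time units apart, so only the first encounter is reported: both the spatial and the temporal condition must hold, which is what distinguishes space-time collision detection from purely geometric proximity checks.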
In summary, the space-time situation awareness method and system based on three-dimensional geographic information provided by the embodiments of the application obtain a three-dimensional geographic fusion model by acquiring information data and converging it into a three-dimensional geographic information model; perform space-time position intelligent collision detection analysis on the target information data and the three-dimensional geographic fusion model to obtain a virtual-real fusion scene of the model; and perform decision analysis in that virtual-real fusion scene according to the data to be predicted, the decision analysis comprising the construction of space-time logic intelligent data, the visualization of space-time situation rules and trends, industry application auxiliary decision-making, and automatic control and response. Decision analysis based on three-dimensional geographic information is thus realized in virtual reality.
In the present specification, the method embodiments are described in a progressive manner; the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. For the system embodiments, reference is made to the description of the method embodiments.
It is noted that, while the operations of the methods of the present invention are depicted in the drawings in a particular order, this neither requires nor suggests that the operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one, and/or one step may be broken down into multiple steps.
Although the present application provides method steps as in embodiments or flowcharts, additional or fewer steps may be included based on conventional or non-inventive approaches. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an apparatus or client product in practice executes, it may execute sequentially or in parallel (e.g., in a parallel processor or multithreaded processing environment, or even in a distributed data processing environment) according to the embodiments or methods shown in the figures. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded.
The units, devices, modules, etc. set forth in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, in implementing the present application, the functions of each module may be implemented in one or more software and/or hardware, or a module implementing the same function may be implemented by a combination of a plurality of sub-modules or sub-units, and the like. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be considered as a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, classes, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, or the like, and includes several instructions for enabling a computer device (which may be a personal computer, a mobile terminal, a server, or a network device) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same or similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable electronic devices, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The above-mentioned embodiments are further described in detail for the purpose of illustrating the invention, and it should be understood that the above-mentioned embodiments are only illustrative of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A space-time situation perception method based on three-dimensional geographic information is characterized by comprising the following steps:
acquiring information data and converging the information data into a three-dimensional geographic information model to obtain three-dimensional geographic comprehensive bearing data;
carrying out virtual-real scene fusion on the three-dimensional geographic comprehensive bearing data to obtain a three-dimensional geographic fusion scene, and carrying out space-time position intelligent collision detection analysis on the three-dimensional geographic fusion scene;
the space-time position intelligent collision detection analysis result provides data support for decision analysis, and the decision analysis comprises the construction of space-time logic intelligent data, the prediction of space-time situation rules and trends, industrial application auxiliary decision making and automatic control and response.
2. The method of claim 1, wherein the obtaining information data and aggregating the information data into the three-dimensional geographic information model to obtain three-dimensional geographic integrated bearer data comprises:
and acquiring information data of a video application gateway, an intelligent analysis data gateway and an internet of things perception data gateway, and fusing a video picture, the intelligent analysis application data and the internet of things perception data in the three-dimensional geographic information model according to the longitude and latitude and the altitude coordinate of each information data to obtain the three-dimensional geographic comprehensive bearing data.
3. The method of claim 2, wherein the video application gateway forwards the video stream data via the 28281 protocol or in SDK mode;
the intelligent analysis gateway accesses the intelligent analysis application of the third-party platform and forwards the analysis data via the 28281 protocol or in SDK mode;
and the Internet of things perception gateway accesses the perception data of the sensor equipment and forwards dynamic data in SDK mode.
4. The method of claim 1 or 2, wherein the three-dimensional geography comprehensive carrying data is subjected to virtual-real scene fusion to obtain a three-dimensional geography fusion scene, and then the three-dimensional geography fusion scene is subjected to space-time position intelligent collision detection analysis, comprising:
target information data are input into a three-dimensional geographic scene video virtual-real fusion module, an intelligent analysis data geographic fusion sensing module, an Internet of things sensing data geographic fusion sensing module, a human-ground object data module or a scientific knowledge map module to obtain a virtual-real fusion scene of the three-dimensional geographic fusion model, and space-time position intelligent collision detection analysis is then carried out.
5. The method of claim 1, wherein the three-dimensional geographic model is constructed from remotely sensed images, digital elevations, vector maps, and three-dimensional models.
6. A space-time situation awareness system based on three-dimensional geographic information, the system comprising:
the convergence sensing module is used for acquiring information data and converging the information data into a three-dimensional geographic information model to obtain three-dimensional geographic comprehensive bearing data;
the analysis and understanding module is used for carrying out virtual-real scene fusion on the three-dimensional geographic comprehensive bearing data to obtain a three-dimensional geographic fusion scene; carrying out space-time position intelligent collision detection analysis on the three-dimensional geographic fusion scene;
and the simulation prediction module is used for carrying out decision analysis in the three-dimensional geographic fusion scene according to the intelligent collision detection and analysis result of the space-time position, wherein the decision analysis comprises the construction of space-time logic intelligent data, the prediction of space-time situation rules and trends, the auxiliary decision of industry application, and automatic control and response.
7. The system of claim 6, wherein the convergence awareness module is specifically configured to:
and acquiring information data of a video application gateway, an intelligent analysis data gateway and an internet of things perception data gateway, and fusing a video picture, the intelligent analysis application data and the internet of things perception data in the three-dimensional geographic information model according to the longitude and latitude and the altitude coordinate of each information data to obtain the three-dimensional geographic fusion model.
8. The system of claim 7, wherein the video application gateway forwards video stream data via the 28281 protocol or in SDK mode;
the intelligent analysis gateway accesses the intelligent analysis application of the third-party platform and forwards the analysis data via the 28281 protocol or in SDK mode;
and the Internet of things perception gateway accesses the perception data of the sensor equipment and forwards dynamic data in SDK mode.
9. The system of claim 6 or 7, wherein the analytical understanding module is specifically configured to:
target information data are input into a three-dimensional geographic scene video virtual-real fusion module, an intelligent analysis data geographic fusion sensing module, an Internet of things sensing data geographic fusion sensing module, a human-ground object data module or a scientific knowledge map module to obtain a virtual-real fusion scene of the three-dimensional geographic fusion model, and space-time position intelligent collision detection analysis is then carried out.
10. The system of claim 6, wherein the three-dimensional geographic model is constructed from remotely sensed images, digital elevations, vector maps, and three-dimensional models.
CN202010208367.8A 2020-03-23 2020-03-23 Space-time situation perception method and system based on three-dimensional geographic information Pending CN111429583A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010208367.8A CN111429583A (en) 2020-03-23 2020-03-23 Space-time situation perception method and system based on three-dimensional geographic information


Publications (1)

Publication Number Publication Date
CN111429583A true CN111429583A (en) 2020-07-17

Family

ID=71549237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010208367.8A Pending CN111429583A (en) 2020-03-23 2020-03-23 Space-time situation perception method and system based on three-dimensional geographic information

Country Status (1)

Country Link
CN (1) CN111429583A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766595A (en) * 2021-01-29 2021-05-07 北京电子工程总体研究所 Command control device, method, system, computer equipment and medium
CN112991535A (en) * 2021-04-19 2021-06-18 中国人民解放军国防科技大学 Three-dimensional space situation representation method and device based on a height-information-enhanced Mercator map
CN113572764A (en) * 2021-07-23 2021-10-29 广东轻工职业技术学院 Industrial Internet network security situation perception system based on AI
CN116303856A (en) * 2023-03-07 2023-06-23 北京龙软科技股份有限公司 Industrial geographic information system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101118654A (en) * 2007-09-19 2008-02-06 中国科学院上海微系统与信息技术研究所 Machine vision computer simulation emulation system based on sensor network
US8369622B1 (en) * 2009-10-29 2013-02-05 Hsu Shin-Yi Multi-figure system for object feature extraction tracking and recognition
CN108230440A (en) * 2017-12-29 2018-06-29 杭州百子尖科技有限公司 Chemical industry whole process operating system and method based on virtual augmented reality
CN109068103A (en) * 2018-09-17 2018-12-21 北京智汇云舟科技有限公司 Dynamic video space-time virtual reality fusion method and system based on three-dimensional geographic information


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766595A (en) * 2021-01-29 2021-05-07 北京电子工程总体研究所 Command control device, method, system, computer equipment and medium
CN112766595B (en) * 2021-01-29 2023-09-29 北京电子工程总体研究所 Command control device, method, system, computer equipment and medium
CN112991535A (en) * 2021-04-19 2021-06-18 中国人民解放军国防科技大学 Three-dimensional space situation representation method and device based on a height-information-enhanced Mercator map
CN113572764A (en) * 2021-07-23 2021-10-29 广东轻工职业技术学院 Industrial Internet network security situation perception system based on AI
CN113572764B (en) * 2021-07-23 2023-04-25 广东轻工职业技术学院 Industrial Internet network security situation awareness system based on AI
CN116303856A (en) * 2023-03-07 2023-06-23 北京龙软科技股份有限公司 Industrial geographic information system
CN116303856B (en) * 2023-03-07 2024-01-09 北京龙软科技股份有限公司 Industrial geographic information system

Similar Documents

Publication Publication Date Title
CN111429583A (en) Space-time situation perception method and system based on three-dimensional geographic information
Fang et al. Modeling and key technologies of a data-driven smart city system
Mittelstädt et al. An integrated in-situ approach to impacts from natural disasters on critical infrastructures
CN114218788A (en) Transformer substation digital twinning system and application method and system thereof
CN114187541A (en) Intelligent video analysis method and storage device for user-defined service scene
Shoukat et al. Smart home for enhanced healthcare: exploring human machine interface oriented digital twin model
CN112950758A (en) Space-time twin visualization construction method and system
CN115620208A (en) Power grid safety early warning method and device, computer equipment and storage medium
Kramar et al. Augmented Reality-assisted Cyber-Physical Systems of Smart University Campus
Chopade et al. Real-time large-scale big data networks analytics and visualization architecture
CN115859689B (en) Panoramic visualization digital twin application method
Sha et al. Smart city public safety intelligent early warning and detection
CN113836247A (en) Wall map battle method and system for network security management
Hall et al. The use of soft sensors and I-space for improved combat ID
Govindaraj et al. Command and control systems for search and rescue robots
Bui et al. LiDAR-based virtual environment study for disaster response scenarios
Nittel et al. Emerging technological trends likely to affect GIScience in the next 20 years
Qu et al. Design of trace-based NS-3 simulations for UAS video analytics with geospatial mobility
CN117274485A (en) Tunnel bridge emergency simulation system based on GIS and BIM and establishment method thereof
Yue et al. CAD Design and Implementation of Virtual Reality Booth Based on Unity Technology
Henson et al. Facilitating Effective Utilization of Water Science Research Among Emergency Flood Responders
CN117852134A (en) Building site digital twin method, system and storage medium for live video and BIM
Anagnostopoulos et al. A Design Approach and Prototype Implementation for Factory Monitoring Based on Virtual and Augmented Reality at the Edge of Industry 4.0
CN117289797A (en) Transformer substation inspection information interaction method and system based on meta-universe technology
Chase et al. Semantic visualization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination