CN114283251A - Real-time data access method of camp Internet of things sensing equipment based on three-dimensional scene - Google Patents


Info

Publication number
CN114283251A
CN114283251A (application CN202111623006.0A)
Authority
CN
China
Prior art keywords
internet of things
data
camp
information
Prior art date
Legal status
Granted
Application number
CN202111623006.0A
Other languages
Chinese (zh)
Other versions
CN114283251B
Inventor
宋宣霈
闫勇
张明敏
于林宇
常海波
鞠岩
郭子仪
张宇
Current Assignee
Aerospace Science And Engineering Intelligent Operation Research And Information Security Research Institute Wuhan Co ltd
Original Assignee
Aerospace Science And Engineering Intelligent Operation Research And Information Security Research Institute Wuhan Co ltd
Priority date
Filing date
Publication date
Application filed by Aerospace Science And Engineering Intelligent Operation Research And Information Security Research Institute Wuhan Co ltd
Priority to CN202111623006.0A
Publication of CN114283251A
Application granted
Publication of CN114283251B
Legal status: Active

Landscapes

  • Alarm Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a real-time data access method for camp Internet of Things sensing devices based on a three-dimensional scene, and belongs to the technical field of three-dimensional modeling. By constructing an Internet of Things perception library and a three-dimensional graph-model library, the invention effectively links the Internet of Things sensing devices with the three-dimensional scene of the camp. Camp managers can thereby thoroughly perceive the operating conditions of physical entities such as core buildings, underground space facilities, and ground-infrastructure equipment, as well as the production and living activities of personnel; analyze the coupling relations among these conditions and activities; and achieve timely perception of the operating state, early prediction and warning of risks and problems, and efficient handling.

Description

Real-time data access method of camp Internet of things sensing equipment based on three-dimensional scene
Technical Field
The invention belongs to the technical field of three-dimensional modeling, and particularly relates to a real-time data access method for camp Internet of Things sensing devices based on a three-dimensional scene.
Background
The military camp provides the infrastructure on which officers' and soldiers' combat-readiness training, work, and daily life depend, and is thus an important basis of the combat effectiveness of weaponry; managers must comprehensively control the camp's operating state to ensure its stable operation. In recent years, with the rapid development of emerging technologies such as the Internet of Things, artificial intelligence, and 5G, the continuous growth of camp scale and information volume, and the rapid rise of the smart industry, the demand for smart military camps keeps increasing, and camp management is trending toward digitization and precision. Traditional data carriers such as two-dimensional drawings and tables can no longer meet the digital requirements of camp management. Based on a three-dimensional scene, integrating dynamic information, entity descriptions, and supporting applications of the camp by means of GIS (Geographic Information System) and BIM (Building Information Modeling) technologies can effectively improve the deployment of comprehensive support forces and the level of dynamic coordinated scheduling, and support real-time monitoring of the camp's operating state by camp-management departments.
At present, although three-dimensional modeling is widely applied in smart military camp construction, most applications are static displays of camp information such as terrain, buildings, roads, and trees; the fusion of dynamic monitoring data from Internet of Things sensing devices with the camp's three-dimensional scene remains low, so the real-time operating state of the camp cannot be displayed intuitively, and online decision support cannot be provided for camp management and emergency handling.
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is how to design a real-time data access method for camp Internet of Things sensing devices.
(II) technical scheme
To solve the above technical problem, the invention provides a real-time data access method for camp Internet of Things sensing devices based on a three-dimensional scene, comprising four steps: building an Internet of Things perception library, building a three-dimensional graph-model library, establishing the association between the Internet of Things sensing devices and the graph model, and fusing the graph model with the camp's three-dimensional scene;
the method specifically comprises the following steps of constructing an internet of things perception library:
1) internet of things perception device categorization
Classifying the Internet of things sensing equipment aiming at different camp management services and application scenes, wherein the classification comprises personnel monitoring, vehicle monitoring, warning protection and environment monitoring;
2) construction of internet of things perception library
Constructing an internet of things perception library for storing relevant data of equipment and carrying out situation display by combining a three-dimensional scene; according to the business application, dividing an Internet of things perception database into a basic database, a dynamic database and a decision support database;
the basic database stores basic information of the Internet of Things sensing equipment;
the dynamic database stores real-time monitoring data of the Internet of Things sensing equipment;
and the decision support database stores historical monitoring data and statistical data of the Internet of Things sensing equipment;
3) monitoring data import
According to the different device types and data formats, the dynamic monitoring data of the Internet of Things sensing equipment are imported over the network into the Internet of Things perception library for storage in real time;
the three-dimensional graph model library construction method specifically comprises the following steps:
1) graph model design
Designing the graph-model features for the characteristics of the Internet of Things sensing equipment, in combination with the camp's three-dimensional scene;
2) three-dimensional graph model library construction
Constructing a three-dimensional graph-model library to store the graph-model data and call it during situation display, the library being divided into a GIS graph-model library and a BIM graph-model library according to the different three-dimensional scenes;
the step of establishing association between the internet of things sensing equipment and the graph model specifically comprises the following steps:
according to the attributes and application scene of the Internet of Things sensing equipment, a corresponding graph-model object is selected in the three-dimensional graph-model library and associated with the Internet of Things perception library, and the basic data, dynamic data, and decision support data of the equipment are extracted;
the graph model object comprises 4 entities of graph model attribute, real-time information of sensing of the Internet of things, historical information and statistical information, the graph model object is associated with 1 common and unique identification of sensing equipment of the Internet of things, and O is usediRepresenting a graph-model object, IDiIs a unique identification of the Internet of things sensing equipment, PiIs a graph model attribute, RiFor sensing real-time information for the Internet of things, HiAs history information, SiFor statistical information, a graph-model object is represented as:
Oi={Pi,Ri,Hi,Si}。
wherein:
Pi={IDi,(p1,p2,…,pj)};
Ri={IDi,(r1,r2,…,rk)};
Hi={IDi,(h1,h2,…,hm)};
Si={IDi,(s1,s2,…,sn)}。
in the formula: i, j, k, m and n respectively represent any natural number; p is a radical ofjRepresenting a data item in a three-dimensional graph model library; r iskRepresenting a data item in a dynamic database; h ismRepresenting a historical monitoring data item in a decision support database; snRepresenting a statistical data item in a decision support database;
the method for fusing the graph model and the camp three-dimensional scene specifically comprises the following steps:
1) camp three-dimensional scene construction
building the three-dimensional GIS scene of the camp from airborne LiDAR (light detection and ranging) data and oblique imagery, generating a digital elevation model and building frame models, and completing the outdoor camp model through texture extraction and mapping; for the core buildings of the camp, performing texture geometric correction based on the BIM white model to form a real-scene BIM object model;
2) graph model identification
Determining the three-dimensional spatial coordinates of the Internet of Things sensing equipment according to its actual installation position, and identifying the graph-model object in the constructed camp three-dimensional scene;
3) situation display
And selecting a graph-model object in the three-dimensional scene to display the basic information, dynamic data, and historical data of the corresponding Internet of Things sensing device.
Preferably, the basic information of the Internet of Things sensing equipment comprises brand, model, number, device classification, installation position, and owning department.
Preferably, the real-time monitoring data of the internet of things sensing equipment comprises human faces, license plate numbers, positioning, alarming and temperature and humidity.
Preferably, during graph-model design, graph-model features such as color, brightness, line type, line width, character size, character spacing, and font are designed according to the characteristics of the Internet of Things sensing equipment.
Preferably, the texture geometry correction process is as follows:
a. calculating ground coordinates
The ground coordinates corresponding to a point P of the BIM white model, computed by orthorectification, are:
X = X0 + M·X'
Y = Y0 + M·Y'
wherein X' and Y' are the image-point coordinates of point P relative to the texture plane, X0 and Y0 are the ground coordinates of the BIM white-model corner point at the lower-left corner of the ortho-texture plane, and M is the scale denominator of the image containing the ortho-texture;
b. calculating coordinates of image points
Let (x, y) and (X, Y) be the coordinates of a pixel in the original image and in the corrected image, respectively; the image-point coordinates p(x, y) on the original image are computed with the inverse collinearity equations:
x = x0 − f·[a1(X − Xs) + b1(Y − Ys) + c1(Z − Zs)] / [a3(X − Xs) + b3(Y − Ys) + c3(Z − Zs)]
y = y0 − f·[a2(X − Xs) + b2(Y − Ys) + c2(Z − Zs)] / [a3(X − Xs) + b3(Y − Ys) + c3(Z − Zs)]
wherein Z is the elevation of point P; x and y are the image-plane coordinates of the image point; x0, y0, and f are the interior orientation elements of the image; Xs, Ys, and Zs are the object-space coordinates of the camera station; X, Y, and Z are the object-space coordinates of the object point, with X and Y being the ground coordinates from step a; and ai, bi, ci (i = 1, 2, 3) are the nine direction cosines formed from the three exterior orientation angle elements of the image;
c. grayscale interpolation and assignment
Firstly, the gray value g(x, y) at point p is obtained by bilinear interpolation; it is then assigned to the corrected pixel, i.e. G(X, Y) = g(x, y), thereby realizing pixel-by-pixel orthorectification of the oblique texture.
Preferably, the graph model identification specifically includes the following steps:
a. acquiring basic information and three-dimensional space coordinates of an Internet of things sensing device to be accessed from an Internet of things sensing library;
by LiLongitude, D, representing the installation location of the Internet of things sensing deviceiLatitude representing installation position of Internet of things sensing device, AiElevation, ID, indicating the installation location of the instrumented sonsiThe unique identification of the internet of things perception equipment is the three-dimensional space coordinate C of the internet of things perception equipmentiExpressed as:
Ci={Li,Di,Ai}
wherein:
Li={IDi,(l1,l2,…,lj)}
Di={IDi,(d1,d2,…,dk)}
Ai={IDi,(a1,a2,…,am)}
in the formula: i, j, k and m respectively represent any natural number; ljRepresenting a device longitude data item in the internet of things perception library; dkRepresenting a device dimension data item in an internet of things perception library; a ismRepresenting a device elevation data item in an Internet of things perception library;
b. recording the equipment number and equipment classification information when an event is triggered in real time;
c. and storing the acquired three-dimensional space coordinates, the corresponding Internet of things sensing equipment number and the equipment classification information into a three-dimensional graph library.
Preferably, the situation display specifically comprises the following steps:
a. personnel situation information display
The distribution of personnel in the camp and the visitor situation are displayed by combining data with the three-dimensional scene, and the comprehensive personnel situation in different time periods is located via a time axis;
accessing personnel information: acquiring personnel data in a camp by accessing a face recognition camera;
secondly, displaying the overall personnel distribution: the overall distribution of personnel in the camp is displayed through personnel point locations, personnel density, and a heat map;
thirdly, displaying the motion track information of a single person: displaying the moving track of a single person in the camp according to the feedback result of the face recognition camera;
and fourthly, displaying personnel entry and exit: according to the feedback of the visitor all-in-one machine, personnel entries and exits are counted and displayed;
b. vehicle situation information display
Displaying vehicle situation overview and vehicle track tracking information in a mode of combining data and a three-dimensional scene;
vehicle information access: acquiring vehicle data in a camp through a vehicle identification all-in-one machine;
secondly, vehicle situation overview: the situation of all vehicles is displayed, including vehicle usage state, total number, and vehicle type;
vehicle track is tracked: according to the feedback results of the vehicle identification all-in-one machine and the Beidou vehicle-mounted all-in-one machine, the running track of the vehicle is displayed, and the vehicle track playback can be displayed according to the time dimension;
fourthly, displaying the vehicle activity heat map: according to the feedback results of the vehicle identification all-in-one machine and the Beidou vehicle-mounted all-in-one machine, generating and displaying an activity heat map of the vehicle in the camp;
c. environment monitoring situation display
Displaying the environment operation situation and alarm information in a mode of combining data and a three-dimensional scene;
accessing environment monitoring data: acquiring environmental monitoring data through various sensors, and accessing temperature, humidity, smoke, water leakage, gas and dust data aiming at a material warehouse, a ordnance warehouse and an ammunition warehouse; access to PM10, PM2.5, SO for specific areas2、NO2、CO、O3Data; forest fires, earthquakes, landslide, collapse, ground collapse, settlement and ground crack data are accessed aiming at the mountainous areas of the campsite;
secondly, displaying environmental operation parameters: environmental operation parameter information is displayed according to the feedback of the environmental monitoring sensors.
Preferably, when the environmental monitoring data are accessed, they are acquired through temperature, humidity, smoke, geological-disaster, and harmful-gas sensors.
Preferably, when the environment monitoring data are accessed, PM10, PM2.5, SO2, NO2, CO, and O3 data are accessed for specific areas such as living areas, office areas, and headquarters.
Preferably, when the environmental operation parameters are displayed, the information is presented as text, tables, and visual graphics.
(III) advantageous effects
By constructing an Internet of Things perception library and a three-dimensional graph-model library, the invention effectively links the Internet of Things sensing devices with the three-dimensional scene of the camp. Camp managers can thereby thoroughly perceive the operating conditions of physical entities such as core buildings, underground space facilities, and ground-infrastructure equipment, as well as the production and living activities of personnel; analyze the coupling relations among these conditions and activities; and achieve timely perception of the operating state, early prediction and warning of risks and problems, and efficient handling.
Drawings
FIG. 1 is a schematic diagram of an implementation of the present invention;
fig. 2 is a schematic diagram of classification of an internet of things sensing device.
Detailed Description
In order to make the objects, contents, and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
As shown in fig. 1, the real-time data access method for camp Internet of Things sensing devices based on a three-dimensional scene provided by the invention comprises four steps: building an Internet of Things perception library, building a three-dimensional graph-model library, establishing the association between the Internet of Things sensing devices and the graph model, and fusing the graph model with the camp's three-dimensional scene.
(1) Construction of an Internet of things perception library
1) Internet of things perception device categorization
The Internet of Things sensing equipment is classified according to the different camp management services and application scenes, into personnel monitoring, vehicle monitoring, alert protection, environment monitoring, and the like, as shown in fig. 2.
2) Construction of internet of things perception library
And constructing an internet of things perception library for storing relevant data of the equipment and carrying out situation display by combining a three-dimensional scene. According to business application, the internet of things perception database is divided into a basic database, a dynamic database and a decision support database.
The basic database mainly stores basic information of the Internet of Things sensing equipment, such as brand, model, number, device classification, installation position, and owning department.
The dynamic database mainly stores real-time monitoring data of the internet of things sensing equipment, such as human faces, license plate numbers, positioning, alarming, temperature and humidity and the like.
The decision support database mainly stores historical monitoring data and statistical data of the internet of things sensing equipment.
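The three-database split described above can be sketched as a schema. The patent names no storage engine, so the in-memory SQLite database and every table and column name below are illustrative assumptions:

```python
import sqlite3

# Hypothetical three-part perception library: basic, dynamic, decision-support.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE basic_info (          -- basic database
    device_id   TEXT PRIMARY KEY,  -- unique device identifier
    brand       TEXT, model TEXT, serial_no TEXT,
    category    TEXT,              -- personnel / vehicle / alert / environment
    location    TEXT, department TEXT
);
CREATE TABLE dynamic_data (        -- dynamic database: real-time readings
    device_id   TEXT REFERENCES basic_info(device_id),
    ts          REAL,              -- reading timestamp
    payload     TEXT               -- face, plate number, position, alarm, ...
);
CREATE TABLE decision_support (    -- decision-support database: history + stats
    device_id   TEXT REFERENCES basic_info(device_id),
    period      TEXT,              -- aggregation window
    stat_name   TEXT, stat_value REAL
);
""")
conn.execute("INSERT INTO basic_info VALUES (?,?,?,?,?,?,?)",
             ("CAM-001", "BrandX", "M1", "SN42", "personnel", "gate-1", "security"))
row = conn.execute("SELECT category FROM basic_info WHERE device_id='CAM-001'").fetchone()
print(row[0])
```

The single `device_id` key mirrors the unique device identifier that later ties the graph-model object to all three databases.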
3) Monitoring data import
According to the different device types and data formats, the dynamic monitoring data of the Internet of Things sensing equipment are imported over the network into the Internet of Things perception library for storage in real time.
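The import step above can be sketched as follows. The message formats, parser names, and the list standing in for the dynamic database are all assumptions; the patent only states that heterogeneous device data arrives over the network in real time:

```python
import json
import time

# Hypothetical per-device-type parsers normalizing heterogeneous payloads.
PARSERS = {
    "face_camera":  lambda raw: {"face_id": raw["fid"]},
    "plate_camera": lambda raw: {"plate": raw["plate"]},
    "env_sensor":   lambda raw: {"temp": raw["t"], "humidity": raw["h"]},
}

dynamic_db = []  # stand-in for the dynamic database


def import_reading(device_id, device_type, raw_json):
    """Normalize one device message and append it to the dynamic database."""
    record = PARSERS[device_type](json.loads(raw_json))
    record.update({"device_id": device_id, "ts": time.time()})
    dynamic_db.append(record)
    return record


rec = import_reading("ENV-07", "env_sensor", '{"t": 21.5, "h": 40}')
print(rec["temp"])
```

A new device format is accommodated by registering one more parser, which keeps the storage side uniform regardless of device type.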
(2) Building three-dimensional graph model library
1) Graph model design
In combination with the camp's three-dimensional scene, graph-model features such as color, brightness, line type, line width, character size, character spacing, and font are designed for the characteristics of the Internet of Things sensing equipment, so that the symbols are highly condensed, convey information quickly, and are easy to remember.
2) Three-dimensional graph model library construction
And constructing a three-dimensional graph model library for storing graph model data and calling the graph model data when the situation is displayed. And dividing the three-dimensional graph model library into a GIS graph model library and a BIM graph model library according to different three-dimensional scenes.
(3) Association between Internet of things sensing equipment and graph model
A graph-model object in the three-dimensional graph-model library is selected to correspond to the Internet of Things sensing equipment according to its attributes and application scene. The graph-model object is associated with the Internet of Things perception library, and the basic data, dynamic data, and decision-support data of the equipment are extracted.
The graph-model object comprises four entities: the graph-model attributes, the Internet of Things real-time information, the historical information, and the statistical information, all associated with the single unique identifier of an Internet of Things sensing device. Let O_i denote a graph-model object, ID_i the unique identifier of the sensing device, P_i the graph-model attributes, R_i the Internet of Things real-time information, H_i the historical information, and S_i the statistical information; a graph-model object can then be expressed as:
O_i = {P_i, R_i, H_i, S_i}.
wherein:
P_i = {ID_i, (p_1, p_2, …, p_j)};
R_i = {ID_i, (r_1, r_2, …, r_k)};
H_i = {ID_i, (h_1, h_2, …, h_m)};
S_i = {ID_i, (s_1, s_2, …, s_n)}.
In the formula, i, j, k, m, and n are arbitrary natural numbers; p_j is a data item in the three-dimensional graph-model library; r_k is a data item in the dynamic database; h_m is a historical monitoring data item in the decision-support database; and s_n is a statistical data item in the decision-support database.
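The graph-model object O_i = {P_i, R_i, H_i, S_i} maps directly onto a record type. A minimal sketch, with all field values invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List


# One graph-model object: four entity groups tied together by the device's
# unique identifier ID_i (here `device_id`).
@dataclass
class GraphModelObject:
    device_id: str                                        # ID_i
    attributes: List[str] = field(default_factory=list)   # P_i: (p_1 .. p_j)
    realtime: List[str] = field(default_factory=list)     # R_i: (r_1 .. r_k)
    history: List[str] = field(default_factory=list)      # H_i: (h_1 .. h_m)
    stats: List[str] = field(default_factory=list)        # S_i: (s_1 .. s_n)


o = GraphModelObject("CAM-001",
                     attributes=["icon=camera", "color=blue"],
                     realtime=["face_id=F123"],
                     history=["2021-12-01: 412 detections"],
                     stats=["daily_avg=398"])
assert o.device_id == "CAM-001"
```

Because all four groups carry the same `device_id`, selecting the object in the scene is enough to pull its basic, dynamic, and decision-support data from the perception library.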
(4) Three-dimensional scene fusion of graph model and camp
1) Camp three-dimensional scene construction
The three-dimensional GIS scene of the camp is built from airborne LiDAR (light detection and ranging) data and oblique imagery: a digital elevation model and building frame models are generated, and the outdoor camp model is completed through texture extraction and mapping. For the core buildings of the camp, texture geometric correction is performed based on the BIM white model to form a real-scene BIM object model.
The texture geometry correction process is as follows:
a. calculating ground coordinates
The ground coordinates corresponding to a point P of the BIM white model, computed by orthorectification, are:
X = X0 + M·X'
Y = Y0 + M·Y'
wherein X' and Y' are the image-point coordinates of the arbitrary point P (pixel center) relative to the texture plane, X0 and Y0 are the ground coordinates of the BIM white-model corner point at the lower-left corner of the ortho-texture plane, and M is the scale denominator of the image containing the ortho-texture.
b. Calculating coordinates of image points
Let (x, y) and (X, Y) be the coordinates of a pixel in the original image and in the corrected image, respectively; the image-point coordinates p(x, y) on the original image are computed with the inverse-solution collinearity equations:
x = x0 − f·[a1(X − Xs) + b1(Y − Ys) + c1(Z − Zs)] / [a3(X − Xs) + b3(Y − Ys) + c3(Z − Zs)]
y = y0 − f·[a2(X − Xs) + b2(Y − Ys) + c2(Z − Zs)] / [a3(X − Xs) + b3(Y − Ys) + c3(Z − Zs)]
wherein Z is the elevation of point P; x and y are the image-plane coordinates of the image point; x0, y0, and f are the interior orientation elements of the image; Xs, Ys, and Zs are the object-space coordinates of the camera station; X, Y, and Z are the object-space coordinates of the object point, with X and Y being the ground coordinates from step a; and ai, bi, ci (i = 1, 2, 3) are the nine direction cosines formed from the three exterior orientation angle elements of the image.
c. Grayscale interpolation and assignment
The computed image-point coordinates p(x, y) do not necessarily fall at a pixel center, so per-pixel grayscale interpolation is required. The gray value g(x, y) at point p is first obtained by bilinear interpolation and then assigned to the corrected pixel, i.e. G(X, Y) = g(x, y), thereby realizing pixel-by-pixel orthorectification of the oblique texture.
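Steps a–c can be sketched in code. The rotation-matrix argument `R` (assumed to be built elsewhere from the three exterior orientation angles) and the toy 2×2 image are assumptions for illustration:

```python
import numpy as np


def ortho_to_ground(xp, yp, X0, Y0, M):
    """Step a: texture-plane point (x', y') -> ground coordinates (X, Y)."""
    return X0 + M * xp, Y0 + M * yp


def ground_to_image(X, Y, Z, Xs, Ys, Zs, R, f, x0=0.0, y0=0.0):
    """Step b: inverse collinearity equations -> image point (x, y)."""
    dX, dY, dZ = X - Xs, Y - Ys, Z - Zs
    denom = R[2, 0] * dX + R[2, 1] * dY + R[2, 2] * dZ
    x = x0 - f * (R[0, 0] * dX + R[0, 1] * dY + R[0, 2] * dZ) / denom
    y = y0 - f * (R[1, 0] * dX + R[1, 1] * dY + R[1, 2] * dZ) / denom
    return x, y


def bilinear(img, x, y):
    """Step c: bilinear gray-value interpolation at a non-integer (x, y)."""
    i, j = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - i, x - j
    return ((1 - dy) * (1 - dx) * img[i, j] + (1 - dy) * dx * img[i, j + 1]
            + dy * (1 - dx) * img[i + 1, j] + dy * dx * img[i + 1, j + 1])


img = np.array([[0.0, 10.0], [20.0, 30.0]])
assert bilinear(img, 0.5, 0.5) == 15.0  # center of the 2x2 patch
X, Y = ortho_to_ground(0.5, 0.25, 100.0, 200.0, 1000)
assert (X, Y) == (600.0, 450.0)
```

Chaining the three functions for every pixel of the target texture yields the per-pixel orthorectification the description outlines.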
2) Pattern model identification
According to the actual installation position of the internet of things sensing equipment, determining the three-dimensional space coordinates of the internet of things sensing equipment, and identifying the graph model object in the constructed camp three-dimensional scene, the method specifically comprises the following steps:
a. and acquiring basic information and three-dimensional space coordinates of the to-be-accessed Internet of things sensing equipment from the Internet of things sensing library.
Let L_i denote the longitude of the installation position of the Internet of Things sensing device, D_i the latitude, A_i the elevation, and ID_i the unique identifier of the device; the three-dimensional spatial coordinate C_i of the device is expressed as:
C_i = {L_i, D_i, A_i}
wherein:
L_i = {ID_i, (l_1, l_2, …, l_j)}
D_i = {ID_i, (d_1, d_2, …, d_k)}
A_i = {ID_i, (a_1, a_2, …, a_m)}
In the formula, i, j, k, and m are arbitrary natural numbers; l_j is a device longitude data item in the Internet of Things perception library; d_k is a device latitude data item in the perception library; and a_m is a device elevation data item in the perception library.
b. The device number and device classification information are recorded when an event is triggered in real time.
c. The acquired three-dimensional spatial coordinates, the corresponding Internet of Things sensing device number, and the device classification information are stored in the three-dimensional graph library.
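Steps a–c above can be sketched as follows. The dict-based perception and graph libraries and the sample coordinates are stand-ins for illustration, not the patent's actual storage:

```python
# Stand-in for the perception library: basic info + installation coordinates.
perception_lib = {
    "CAM-001": {"category": "personnel",
                "lon": 114.3052, "lat": 30.5931, "elev": 27.4},
}
graph_lib = {}  # stand-in for the three-dimensional graph library


def identify(device_id, event_no):
    dev = perception_lib[device_id]                  # step a: fetch basic info
    record = {                                       # step b: number + category at trigger time
        "event_no": event_no,
        "category": dev["category"],
        "C": (dev["lon"], dev["lat"], dev["elev"]),  # C_i = {L_i, D_i, A_i}
    }
    graph_lib[device_id] = record                    # step c: store in graph library
    return record


rec = identify("CAM-001", event_no=1)
print(rec["C"])
```

Once the coordinate triple is in the graph library, the scene renderer can place the device's graph-model object at its real installation position.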
3) Situation display
A graph-model object is selected in the three-dimensional scene to display the basic information, dynamic data, and historical data of the corresponding Internet of Things sensing device, specifically comprising the following steps:
a. personnel situation information display
Information such as the distribution of personnel in the camp and the visitor situation is displayed by combining data with the three-dimensional scene, and the comprehensive personnel situation in different time periods can be rapidly located via a time axis.
Accessing personnel information: acquiring personnel data in a camp by accessing a face recognition camera;
secondly, displaying the overall personnel distribution: the overall distribution of personnel in the camp is displayed through personnel point locations, personnel density, a heat map, and the like;
thirdly, displaying the motion track information of a single person: displaying the moving track of a single person in the camp according to the feedback result of the face recognition camera;
and fourthly, displaying personnel entry and exit: according to the feedback of the visitor all-in-one machine, personnel entries and exits are counted and displayed.
b. Vehicle situation information display
And displaying information such as vehicle situation overview, vehicle track tracking and the like in a mode of combining data and a three-dimensional scene.
Vehicle information access: acquiring vehicle data in a camp through a vehicle identification all-in-one machine;
secondly, vehicle situation overview: the situation of all vehicles is displayed, including vehicle usage state, total number, vehicle type, and the like;
vehicle track is tracked: according to the feedback results of the vehicle identification all-in-one machine and the Beidou vehicle-mounted all-in-one machine, the running track of the vehicle is displayed, and the vehicle track playback can be displayed according to the time dimension;
fourthly, displaying the vehicle activity heat map: and generating and displaying an activity heat map of the vehicle in the camp according to the feedback results of the vehicle identification all-in-one machine and the Beidou vehicle-mounted all-in-one machine.
c. Environment monitoring situation display
And displaying information such as environment operation situation, alarm and the like in a mode of combining data and a three-dimensional scene.
Accessing environment monitoring data: environmental monitoring data are acquired through various sensors for temperature, humidity, smoke, geological disasters, harmful gases, and the like. Temperature, humidity, smoke, water-leakage, gas, and dust data are accessed for important material, ordnance, and ammunition warehouses; PM10, PM2.5, SO2, NO2, CO, O3, and similar data are accessed for specific areas such as living areas, office areas, and headquarters; and forest fire, earthquake, landslide, collapse, ground subsidence, settlement, and ground-fissure data are accessed for mountainous areas of the camp;
secondly, displaying environmental operation parameters: environmental operation parameter information is displayed as text, tables, visual graphics, and the like, according to the feedback of the environmental monitoring sensors.
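The area-specific data access described above can be sketched as a routing table. The area names, sensor keys, and alarm thresholds below are invented for illustration:

```python
# Which sensor kinds each area profile accepts (illustrative).
AREA_SENSORS = {
    "warehouse": {"temp", "humidity", "smoke", "leak", "gas", "dust"},
    "living":    {"PM10", "PM2.5", "SO2", "NO2", "CO", "O3"},
    "mountain":  {"fire", "quake", "landslide", "collapse", "subsidence", "crack"},
}
THRESHOLDS = {"smoke": 0.1, "PM2.5": 75.0}  # invented alarm limits


def ingest(area, sensor, value):
    """Validate a reading against its area profile and flag alarms."""
    if sensor not in AREA_SENSORS[area]:
        raise ValueError(f"{sensor} is not monitored in area {area!r}")
    alarm = sensor in THRESHOLDS and value > THRESHOLDS[sensor]
    return {"area": area, "sensor": sensor, "value": value, "alarm": alarm}


r = ingest("living", "PM2.5", 120.0)
assert r["alarm"] is True
```

Readings that pass validation feed the parameter display; flagged readings additionally drive the alarm information shown in the three-dimensional scene.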
The method can effectively superimpose the dynamic data of Internet of Things sensing devices on the camp's three-dimensional scene and, following the application mode of "centralized data storage, comprehensive information utilization", centrally display camp operating data on a single platform and a single map. By integrating camp management elements such as command, monitoring, protection, alarm, communication, and disposal into the camp's three-dimensional scene, the invention can shift camp security control from passive response to active prevention and improve predictive early-warning and pre-processing capability.
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A camp Internet of things sensing equipment data real-time access method based on a three-dimensional scene is characterized by comprising 4 steps of building an Internet of things sensing library, building a three-dimensional graph model library, building association between the Internet of things sensing equipment and a graph model, and fusing the graph model and the camp three-dimensional scene;
the construction of the Internet of things perception library specifically comprises the following steps:
1) internet of things perception device categorization
Classifying the Internet of things sensing equipment aiming at different camp management services and application scenes, wherein the classification comprises personnel monitoring, vehicle monitoring, warning protection and environment monitoring;
2) construction of internet of things perception library
Constructing an internet of things perception library for storing relevant data of equipment and carrying out situation display by combining a three-dimensional scene; according to the business application, dividing an Internet of things perception database into a basic database, a dynamic database and a decision support database;
the basic database stores basic information of the Internet of things sensing equipment;
the dynamic database stores real-time monitoring data of the Internet of things sensing equipment;
and the decision support database stores historical monitoring data and statistical data of the Internet of things sensing equipment.
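As a minimal illustration of the three-part split above (all class and field names here are hypothetical, not taken from the patent), the perception library can be sketched in Python as:

```python
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class BasicRecord:
    """Static attributes of one IoT sensing device (basic database)."""
    device_id: str
    brand: str
    model: str
    category: str          # e.g. "personnel", "vehicle", "alert", "environment"
    install_location: str

@dataclass
class DynamicRecord:
    """One real-time reading from a device (dynamic database)."""
    device_id: str
    timestamp: float
    payload: Dict[str, Any]  # e.g. {"temperature": 21.5, "humidity": 0.43}

class PerceptionLibrary:
    """Toy in-memory stand-in for the basic / dynamic / decision-support split."""
    def __init__(self) -> None:
        self.basic: Dict[str, BasicRecord] = {}
        self.dynamic: List[DynamicRecord] = []
        self.decision_support: List[DynamicRecord] = []  # history for statistics

    def register(self, rec: BasicRecord) -> None:
        self.basic[rec.device_id] = rec

    def ingest(self, rec: DynamicRecord) -> None:
        self.dynamic.append(rec)           # live view for situation display
        self.decision_support.append(rec)  # archived copy for later statistics
```

In a real deployment each part would be a separate database; the sketch only shows how one device identifier ties the three stores together.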
3) Monitoring data import
According to the different types of equipment and different data formats, the dynamic monitoring data of the Internet of things sensing equipment are imported into the Internet of things perception library for storage in real time through the network;
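A hedged sketch of this real-time import step, assuming hypothetical per-format parsers (the patent does not specify concrete formats beyond "different formats of data"):

```python
import json
from typing import Any, Callable, Dict

# Hypothetical per-format parsers: different device types push different formats.
PARSERS: Dict[str, Callable[[bytes], Dict[str, Any]]] = {
    "json": lambda raw: json.loads(raw.decode("utf-8")),
    "csv":  lambda raw: dict(zip(("device_id", "value"),
                                 raw.decode("utf-8").strip().split(","))),
}

def import_reading(raw: bytes, fmt: str) -> Dict[str, Any]:
    """Normalize one incoming monitoring record into a common dict
    before it is stored in the perception library."""
    return PARSERS[fmt](raw)
```

The design point is that normalization happens at the ingest boundary, so the perception library stores one uniform record shape regardless of device format.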
the three-dimensional graph model library construction method specifically comprises the following steps:
1) design of pattern model
Designing graphic model characteristics aiming at the characteristics of the Internet of things sensing equipment by combining the three-dimensional scene of the camp;
2) three-dimensional graph model library construction
Constructing a three-dimensional graph model library, storing graph model data, calling the graph model data when the situation is displayed, and dividing the three-dimensional graph model library into a GIS graph model library and a BIM graph model library according to different three-dimensional scenes;
the step of establishing association between the internet of things sensing equipment and the graph model specifically comprises the following steps:
selecting a graph model object in a three-dimensional graph model library for corresponding according to the attribute and the application scene of the Internet of things sensing equipment, associating the graph model object with the Internet of things sensing library, and extracting basic data, dynamic data and decision support data of the equipment;
the graph model object comprises 4 entities, namely graph model attributes, real-time Internet of things sensing information, historical information and statistical information, and is associated with 1 common unique identification of the Internet of things sensing equipment; with Oi representing a graph model object, IDi the unique identification of the Internet of things sensing equipment, Pi the graph model attributes, Ri the real-time Internet of things sensing information, Hi the historical information and Si the statistical information, a graph model object is represented as:
Oi = {Pi, Ri, Hi, Si}
wherein:
Pi = {IDi, (p1, p2, …, pj)}
Ri = {IDi, (r1, r2, …, rk)}
Hi = {IDi, (h1, h2, …, hm)}
Si = {IDi, (s1, s2, …, sn)}
in the formulas: i, j, k, m and n each represent a natural number; pj represents a data item in the three-dimensional graph model library; rk represents a data item in the dynamic database; hm represents a historical monitoring data item in the decision support database; and sn represents a statistical data item in the decision support database;
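The composition Oi = {Pi, Ri, Hi, Si} keyed by the shared device identification can be illustrated with a small Python data structure (names are illustrative only, not from the patent):

```python
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class GraphModelObject:
    """Oi = {Pi, Ri, Hi, Si}: all four entities share the device's unique ID."""
    device_id: str                 # IDi, unique IoT device identification
    attributes: Dict[str, Any]     # Pi: graph model attributes (p1 .. pj)
    realtime: Dict[str, Any]       # Ri: live sensing readings (r1 .. rk)
    history: List[Dict[str, Any]]  # Hi: historical records (h1 .. hm)
    statistics: Dict[str, Any]     # Si: statistical items (s1 .. sn)

# Example object for one hypothetical temperature sensor:
obj = GraphModelObject(
    device_id="iot-0042",
    attributes={"color": "red", "line_width": 2},
    realtime={"temperature": 23.1},
    history=[{"temperature": 22.8}],
    statistics={"temperature_mean": 22.9},
)
```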
the method for fusing the graph model and the camp three-dimensional scene specifically comprises the following steps:
1) camp three-dimensional scene construction
Building a three-dimensional camp GIS (geographic information system) scene based on airborne LiDAR (light detection and ranging) data and oblique imagery, generating a digital elevation model and building frame models, and completing construction of the outdoor camp model through texture extraction and mapping; for the core buildings of the camp, performing texture geometric correction based on the BIM white model to form live-action BIM ground object models;
2) pattern model identification
Determining a three-dimensional space coordinate of the internet of things sensing equipment according to the actual installation position of the internet of things sensing equipment, and identifying a graph model object in the constructed three-dimensional scene of the camp;
3) situation display
And selecting a graph model object in the three-dimensional scene to display the basic information, dynamic data and historical data of the corresponding Internet of things sensing equipment.
2. The method of claim 1, wherein the basic information of the Internet of things sensing equipment comprises brand, model, number, device category, installation location, and responsible department.
3. The method of claim 1, wherein the real-time monitoring data of the internet of things sensing device comprises a human face, a license plate number, positioning, alarming, temperature and humidity.
4. The method of claim 1, wherein the graph model characteristics designed according to the characteristics of the Internet of things sensing equipment comprise color, brightness, line type, line width, character size, character spacing and font.
5. The method of claim 1, wherein the texture geometry correction is performed as follows:
a. calculating ground coordinates
The ground coordinates corresponding to any point P of the BIM white model, obtained by orthorectification, are
X = X0 + M·X'
Y = Y0 + M·Y'
wherein X' and Y' are the image-plane coordinates of point P relative to the texture surface, X0 and Y0 are the ground coordinates of the BIM white model corner point at the lower left corner of the ortho-texture surface, and M is the scale denominator of the image where the ortho-texture lies;
b. calculating coordinates of image points
Setting the coordinates of a given pixel in the original image and in the corrected image as (x, y) and (X, Y) respectively, the image point coordinates P(x, y) on the original image are calculated with the inverse collinearity equations:
x = x0 − f·[a1(X − Xs) + b1(Y − Ys) + c1(Z − Zs)] / [a3(X − Xs) + b3(Y − Ys) + c3(Z − Zs)]
y = y0 − f·[a2(X − Xs) + b2(Y − Ys) + c2(Z − Zs)] / [a3(X − Xs) + b3(Y − Ys) + c3(Z − Zs)]
wherein Z is the elevation value of point P; x and y are the image-plane coordinates of the image point; x0, y0 and f are the interior orientation elements of the image; Xs, Ys and Zs are the object-space coordinates of the camera station; X, Y and Z are the object-space coordinates of the object point, X and Y being the ground coordinates obtained in step a; and ai, bi, ci (i = 1, 2, 3) are the 9 direction cosines formed from the 3 exterior orientation angle elements of the image;
c. grayscale interpolation and assignment
Firstly, a bilinear interpolation method is adopted to obtain the gray value g(x, y) of point P on the original image, and this value is then assigned to the corrected pixel, namely G(X, Y) = g(x, y), thereby realizing pixel-by-pixel orthorectification of the oblique texture.
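The three correction steps (ground coordinates, inverse collinearity, bilinear interpolation) can be sketched as follows; this is an illustrative implementation, assuming a rotation matrix R built elsewhere from the three exterior orientation angles, not the patent's exact code:

```python
import numpy as np

def ground_coords(xp, yp, X0, Y0, M):
    """Step a: map a texture-plane point (X', Y') to ground coordinates,
    X = X0 + M*X', Y = Y0 + M*Y' (M is the scale denominator)."""
    return X0 + M * xp, Y0 + M * yp

def collinearity_inverse(X, Y, Z, Xs, Ys, Zs, f, x0, y0, R):
    """Step b: inverse collinearity - object point (X, Y, Z) to image-plane
    point (x, y). R is the 3x3 rotation matrix whose columns carry the
    direction cosines a_i, b_i, c_i."""
    dX, dY, dZ = X - Xs, Y - Ys, Z - Zs
    a, b, c = R[:, 0], R[:, 1], R[:, 2]
    denom = a[2] * dX + b[2] * dY + c[2] * dZ
    x = x0 - f * (a[0] * dX + b[0] * dY + c[0] * dZ) / denom
    y = y0 - f * (a[1] * dX + b[1] * dY + c[1] * dZ) / denom
    return x, y

def bilinear(img, x, y):
    """Step c: bilinear interpolation of the original-image gray value
    at non-integer position (x = column, y = row)."""
    i, j = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - i, x - j
    return ((1 - dy) * (1 - dx) * img[i, j] + (1 - dy) * dx * img[i, j + 1]
            + dy * (1 - dx) * img[i + 1, j] + dy * dx * img[i + 1, j + 1])
```

The interpolated value g(x, y) would then be written to the corrected pixel G(X, Y) for every pixel of the ortho-texture.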
6. The method of claim 5, wherein the graph model identification specifically comprises the steps of:
a. acquiring basic information and three-dimensional space coordinates of an Internet of things sensing device to be accessed from an Internet of things sensing library;
With Li representing the longitude of the installation position of the Internet of things sensing equipment, Di the latitude, Ai the elevation and IDi the unique identification of the Internet of things sensing equipment, the three-dimensional space coordinate Ci of the Internet of things sensing equipment is expressed as:
Ci={Li,Di,Ai}
wherein:
Li={IDi,(l1,l2,…,lj)}
Di={IDi,(d1,d2,…,dk)}
Ai={IDi,(a1,a2,…,am)}
in the formula: i, j, k and m each represent a natural number; lj represents a device longitude data item in the Internet of things perception library; dk represents a device latitude data item in the Internet of things perception library; and am represents a device elevation data item in the Internet of things perception library;
b. recording the equipment number and equipment classification information when an event is triggered in real time;
c. storing the acquired three-dimensional space coordinates, the corresponding Internet of things sensing equipment number and the equipment classification information into the three-dimensional graph model library.
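Steps a to c of the graph model identification can be sketched as below (identifiers are hypothetical, chosen only to mirror Ci = {Li, Di, Ai}):

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class DevicePlacement:
    """Ci = {Li, Di, Ai}: longitude/latitude/elevation keyed by the device ID."""
    device_id: str
    lon: float   # Li
    lat: float   # Di
    elev: float  # Ai

class GraphLibrary:
    """Step c: store 3D coordinates together with device number and category."""
    def __init__(self) -> None:
        self.entries: Dict[str, Tuple[DevicePlacement, str]] = {}

    def identify(self, placement: DevicePlacement, category: str) -> None:
        # Step a: the placement comes from the perception library;
        # step b: the category is recorded when the device's event fires.
        self.entries[placement.device_id] = (placement, category)
```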
7. The method of claim 1, wherein the situation representation specifically comprises the steps of:
a. personnel situation information display
The method comprises the steps that the distribution situation of personnel in a camp and the situation information of visitors are displayed in a mode of combining data and a three-dimensional scene, and meanwhile, the comprehensive situation of the personnel in different time periods is positioned in a time axis mode;
Firstly, accessing personnel information: acquiring personnel data in the camp by accessing the face recognition cameras;
secondly, displaying the overall distribution of personnel: the method comprises the following steps of displaying the overall distribution condition of personnel in a camp, wherein the display is performed in the mode of personnel point location information, personnel density and a heat map;
thirdly, displaying the motion track information of a single person: displaying the moving track of a single person in the camp according to the feedback result of the face recognition camera;
and fourthly, displaying the person in and out: according to the feedback result of the visitor all-in-one machine, the entering and exiting of personnel can be counted and displayed;
b. vehicle situation information display
Displaying vehicle situation overview and vehicle track tracking information in a mode of combining data and a three-dimensional scene;
Firstly, accessing vehicle information: acquiring vehicle data in the camp through the vehicle identification all-in-one machine;
secondly, displaying the vehicle situation overview: displaying the situation of all vehicles, including their usage states, total number, types and functions;
thirdly, tracking the vehicle track: displaying the running track of a vehicle according to the feedback results of the vehicle identification all-in-one machine and the Beidou vehicle-mounted all-in-one machine, with vehicle track playback available along the time dimension;
fourthly, displaying the vehicle activity heat map: according to the feedback results of the vehicle identification all-in-one machine and the Beidou vehicle-mounted all-in-one machine, generating and displaying an activity heat map of the vehicle in the camp;
c. environment monitoring situation display
Displaying the environment operation situation and alarm information in a mode of combining data and a three-dimensional scene;
accessing environment monitoring data: acquiring environmental monitoring data through various sensors, and accessing temperature, humidity, smoke, water leakage, gas and dust data aiming at a material warehouse, a ordnance warehouse and an ammunition warehouse; access to PM10, PM2.5, SO for specific areas2、NO2、CO、O3Data; forest fires, earthquakes, landslide, collapse, ground collapse, settlement and ground crack data are accessed aiming at the mountainous areas of the campsite;
secondly, displaying environmental operation parameters: and displaying the environmental operation parameter information in a certain mode according to the feedback result of the environmental monitoring sensor.
8. The method of claim 7, wherein the environmental monitoring data is obtained by temperature and humidity, smoke, ground disaster, and harmful gas sensors when the environmental monitoring data is accessed.
9. The method of claim 7, wherein the environmental monitoring data is accessed by accessing PM10, PM2.5, SO2, NO2, CO and O3 data for the specific areas of living areas, office areas and headquarters premises.
10. The method of claim 7, wherein the environmental operation parameter information is displayed in a form of text, table, visual graphics when the environmental operation parameter is displayed.
CN202111623006.0A 2021-12-28 2021-12-28 Real-time access method for data of barracks and Internet of things sensing equipment based on three-dimensional scene Active CN114283251B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111623006.0A CN114283251B (en) 2021-12-28 2021-12-28 Real-time access method for data of barracks and Internet of things sensing equipment based on three-dimensional scene

Publications (2)

Publication Number Publication Date
CN114283251A true CN114283251A (en) 2022-04-05
CN114283251B CN114283251B (en) 2024-04-09

Family

ID=80876959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111623006.0A Active CN114283251B (en) 2021-12-28 2021-12-28 Real-time access method for data of barracks and Internet of things sensing equipment based on three-dimensional scene

Country Status (1)

Country Link
CN (1) CN114283251B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103177391A (en) * 2013-01-31 2013-06-26 李生林 Equipment monitoring system based on three-dimensional real scenes and system integration method based on three-dimensional real scenes
WO2016034140A1 (en) * 2014-09-04 2016-03-10 国家电网公司 Gis map-based dynamic status plotting system and method
CN107576311A (en) * 2017-08-23 2018-01-12 长江水利委员会长江科学院 A kind of reservoir inspection method of real-time based on three-dimension GIS
CN112465401A (en) * 2020-12-17 2021-03-09 国网四川省电力公司电力科学研究院 Electric power operation safety control system based on multi-dimensional information fusion and control method thereof
CN112560137A (en) * 2020-12-04 2021-03-26 武汉光谷信息技术股份有限公司 Multi-model fusion method and system based on smart city
KR102272295B1 (en) * 2021-03-30 2021-07-02 케이에스아이 주식회사 Method for improving recognition ratio of vehicle license plate employing depth information of image
CN113516768A (en) * 2021-07-15 2021-10-19 国网山东省电力公司电力科学研究院 Visual wisdom thing of transformer substation allies oneself with management system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Xuan; LING Yun: "Construction of a Smart Camp Information System Based on Digital Twin", Informatization Research (信息化研究), No. 03, 20 June 2020 (2020-06-20) *

Also Published As

Publication number Publication date
CN114283251B (en) 2024-04-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant