CN115578685A - Three-dimensional model-based area monitoring method, system, equipment and storage medium

Three-dimensional model-based area monitoring method, system, equipment and storage medium

Info

Publication number
CN115578685A
Authority
CN
China
Prior art keywords
monitoring
model
target area
video
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211098098.XA
Other languages
Chinese (zh)
Inventor
王莹
李�杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CISDI Chongqing Information Technology Co Ltd
Original Assignee
CISDI Chongqing Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CISDI Chongqing Information Technology Co Ltd
Priority to CN202211098098.XA
Publication of CN115578685A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/17: Terrestrial scenes taken from planes or by drones
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/176: Urban or other man-made structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Alarm Systems (AREA)

Abstract

The invention relates to a three-dimensional model-based area monitoring method, system, equipment and storage medium. A lightweight model of a target area is obtained by fusing a physical model of the target area with characteristic data of the target area; the surveillance video of the target area is then recognized by an image recognition algorithm to obtain a recognition result of the surveillance video; the surveillance video of the target area, the recognition result of the surveillance video and the lightweight model are fused according to the position information of the monitoring equipment to obtain a three-dimensional live-action model, and the target area is monitored based on the three-dimensional live-action model. According to the invention, a three-dimensional model of the metallurgical park is built from the physical model and the characteristic data, and the surveillance video of each monitoring device is then displayed within the three-dimensional model, so that the massive discrete videos across the metallurgical park are managed effectively and visually, workers can grasp the on-site state of a target area of the park dynamically and in real time, and orderly management, control and command scheduling are achieved.

Description

Three-dimensional model-based area monitoring method, system, equipment and storage medium
Technical Field
The invention belongs to the technical field of monitoring, and particularly relates to a three-dimensional model-based area monitoring method, system, equipment and storage medium.
Background
The metallurgical industry is an important foundation of national economic development, and metallurgical parks are an important guarantee for the development of the metallurgical industry. Because a metallurgical park contains high-risk blast furnaces and numerous hazardous areas such as steelmaking, continuous casting, hot rolling and cold rolling, the requirements on safety and timeliness in the management and control process are high.
Because the metallurgical process is relatively complex, video monitoring of a metallurgical park requires a large number of monitoring devices, which generate massive and scattered video data. Monitoring the metallurgical process based on such video data is therefore cumbersome and does not facilitate supervision of the metallurgical park.
Disclosure of Invention
The invention provides a three-dimensional model-based area monitoring method, system, device and storage medium, aiming to solve the technical problem in the prior art that the process of monitoring a metallurgical process is complicated.
A method of region monitoring based on a three-dimensional model, the method comprising:
acquiring position information of monitoring equipment, a monitoring video of a target area acquired by the monitoring equipment, a physical model of the target area and characteristic data of the target area;
fusing the physical model of the target area and the characteristic data of the target area to obtain a lightweight model of the target area;
identifying the surveillance video of the target area through a preset image identification algorithm to obtain an identification result of the surveillance video;
fusing the monitoring video of the target area, the identification result of the monitoring video and the lightweight model according to the position information of the monitoring equipment to obtain a three-dimensional live-action model;
and monitoring a target area based on the three-dimensional live-action model.
In an embodiment of the present invention, the target area physical model includes: a geographic information model, a building information model; the characteristic data of the target area comprise oblique photography data, three-dimensional point cloud data, image data and elevation data;
fusing the physical model of the target area and the characteristic data of the target area to obtain a lightweight model of the target area comprises the following steps:
performing format conversion on the geographic information model, the building information model, the oblique photography data, the three-dimensional point cloud data, the image data and the elevation data, so that they are all in a target format;
and mapping the geographic information model, the building information model, the oblique photography data, the three-dimensional point cloud data, the image data and the elevation data after format conversion to a preset coordinate system to obtain the lightweight model.
In an embodiment of the present invention, the fusing the surveillance video of the target area, the recognition result of the surveillance video, and the lightweight model according to the position information of the surveillance device to obtain a three-dimensional live-action model, includes:
constructing an information label according to the position information;
and associating the information tag with the surveillance video of the target area and the recognition result of the surveillance video, and setting the information tag to a corresponding position of the lightweight model according to the position information to be displayed to obtain the three-dimensional live-action model.
In an embodiment of the present invention, monitoring a target area based on the three-dimensional live-action model includes:
acquiring label access information from the outside, wherein the label access information is used for accessing the information label;
responding to the label access information, outputting and displaying the surveillance video of the target area associated with the information label and the identification result of the surveillance video, and finishing monitoring the target area.
In an embodiment of the present invention, identifying the surveillance video in the target area by using the image identification algorithm to obtain an identification result of the surveillance video includes:
calling an image recognition algorithm from a preset management platform, and accessing the monitoring equipment into the preset management platform;
and combining the image recognition algorithm and the monitoring equipment in the management platform, and recognizing the surveillance video acquired by the monitoring equipment through the image recognition algorithm in the same combination to obtain the recognition result of the surveillance video.
In an embodiment of the present invention, after the management platform is constructed according to the image recognition algorithm and the monitoring equipment, the method further includes:
managing the image recognition algorithm and the monitoring equipment based on the management platform, wherein the management mode at least comprises one of the following modes:
accessing other image recognition algorithms into the management platform according to a predetermined protocol;
accessing other monitoring equipment to the management platform;
removing the combination relation between an image recognition algorithm and monitoring equipment in an existing combination;
combining the image recognition algorithm and the monitoring equipment in the management platform;
deleting the existing image recognition algorithm and monitoring equipment in the management platform;
inquiring the existing image recognition algorithm and monitoring equipment in the management platform;
storing the surveillance videos and their recognition results;
and outputting the surveillance videos and the recognition results stored in the management platform.
In an embodiment of the present invention, after the monitoring video of the target area is identified by the image identification algorithm and the identification result of the monitoring video is obtained, the method further includes:
and when the identification result contains the target characteristics, generating early warning prompt information.
The invention also provides a three-dimensional model-based area monitoring system, which comprises:
the acquisition module is used for acquiring the position information of the monitoring equipment, the monitoring video of the target area acquired by the monitoring equipment, the physical model of the target area and the characteristic data of the target area;
the first fusion module is used for fusing the physical model of the target area and the characteristic data of the target area to obtain a lightweight model of the target area;
the identification module is used for identifying the surveillance video of the target area through a preset image identification algorithm to obtain an identification result of the surveillance video;
the second fusion module is used for fusing the monitoring video of the target area, the recognition result of the monitoring video and the lightweight model according to the position information of the monitoring equipment to obtain a three-dimensional live-action model;
and the monitoring module is used for monitoring the target area based on the three-dimensional live-action model.
The present invention also provides an electronic device comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement a three-dimensional model-based area monitoring method as described above.
The present invention also provides a computer-readable storage medium having computer-readable instructions stored thereon which, when executed by a processor of a computer, cause the computer to execute the three-dimensional model-based area monitoring method described above.
The invention provides a three-dimensional model-based area monitoring method, system, equipment and storage medium, which have the following beneficial effects: a physical model of the target area and characteristic data of the target area are fused to obtain a lightweight model of the target area; the surveillance video of the target area is then recognized by an image recognition algorithm to obtain a recognition result of the surveillance video; the surveillance video of the target area, the recognition result of the surveillance video and the lightweight model are fused according to the position information of the monitoring equipment to obtain a three-dimensional live-action model; and the target area is monitored based on the three-dimensional live-action model. According to the invention, a three-dimensional model of the metallurgical park is built from the physical model and the characteristic data, and the surveillance video of each monitoring device is then displayed within the three-dimensional model, so that the massive discrete videos across the metallurgical park are managed effectively and visually, workers can grasp the on-site state of a target area of the park dynamically and in real time, and orderly management, control and command scheduling are achieved.
Drawings
Fig. 1 is an application scenario diagram of a three-dimensional model-based area monitoring method according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart illustrating a method for three-dimensional model-based area monitoring in accordance with an exemplary embodiment of the present application;
FIG. 3 is a flow chart of step S220 in the embodiment shown in FIG. 2 in an exemplary embodiment;
FIG. 4 is a flow chart of step S240 in the embodiment shown in FIG. 2 in an exemplary embodiment;
FIG. 5 is a flow chart of step S250 in the embodiment shown in FIG. 4 in an exemplary embodiment;
FIG. 6 is a flow chart of step S230 in the embodiment shown in FIG. 3 in an exemplary embodiment;
FIG. 7 is a block diagram of a three-dimensional model based area monitoring system in accordance with an exemplary embodiment of the present application;
FIG. 8 illustrates a schematic structural diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed Description
The following embodiments of the present invention are provided by way of specific examples, and other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure herein. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of embodiments of the present invention, however, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details.
Fig. 1 is a view of an application scenario of a three-dimensional model-based area monitoring method according to an exemplary embodiment of the present application. As shown in fig. 1, surveillance videos of various positions in a metallurgical park are collected by cameras 130; the cameras 130 are connected to a monitoring machine room through network equipment 120 (such as a switch and a server). The PC host 110 in the monitoring machine room fuses the Building Information Model (BIM), geographic information model (GIS), oblique photography data, three-dimensional point cloud data, elevation data and image data of the existing park to obtain a lightweight model of the metallurgical park; the surveillance video acquired by each camera 130 is stored in a database and associated with the camera 130 at the corresponding position in the lightweight model to generate an information label. When a worker accesses an information label, the corresponding surveillance video can be obtained through the associated path. In addition, a management platform is established in the PC host 110 and preset with an image recognition algorithm, by which a surveillance video can be recognized to obtain a recognition result; an alarm is triggered when the recognition result contains the target feature.
As shown in fig. 2, the method for monitoring a region based on a three-dimensional model provided in the present invention includes steps S210 to S250, which are described in detail as follows:
s210, acquiring position information of monitoring equipment, a monitoring video of a target area acquired by the monitoring equipment, a physical model of the target area and characteristic data of the target area;
in this embodiment, the physical model of the target area includes a geographic information model (GIS model) and a building information model (BIM model), and the feature data of the target area includes three-dimensional point cloud data, image data, oblique photography data, and the like;
s220, fusing the physical model of the target area and the characteristic data of the target area to obtain a lightweight model of the target area;
In step S220, the fusion in this embodiment means merging the information about the target area contained in the physical model of the target area and in the characteristic data of the target area, aggregating the multiple models and data sets so that the result is lightweight while retaining as much information as possible;
s230, identifying the monitoring video of the target area through a preset image identification algorithm to obtain an identification result of the monitoring video;
In step S230, the image recognition algorithm is an existing recognition algorithm and is used for recognizing the surveillance video frame by frame or on sampled frames;
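As an illustration of how such an existing algorithm might be applied, the sketch below runs a recognizer over a video frame by frame or on sampled frames using OpenCV; `detect_objects` stands in for whatever recognition model is actually deployed and is an assumption, not part of the patent.

```python
# A hedged sketch of step S230: sampled-frame (or frame-by-frame) recognition.
# `detect_objects` is a placeholder for an existing image recognition algorithm.
import cv2

def recognize_video(video_path, detect_objects, frame_step=25):
    """Run the recognizer on every `frame_step`-th frame (frame_step=1 means
    frame-by-frame recognition) and collect (frame_index, result) pairs."""
    cap = cv2.VideoCapture(video_path)
    results, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_step == 0:
            results.append((index, detect_objects(frame)))
        index += 1
    cap.release()
    return results
```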
s240, fusing the monitoring video of the target area, the identification result of the monitoring video and the lightweight model according to the position information of the monitoring equipment to obtain a three-dimensional live-action model;
In step S240, the surveillance video of the target area and the recognition result of the surveillance video are projected into the lightweight model, so that the surveillance pictures obtained by the monitoring equipment and their recognition results can be displayed conveniently and visually within the lightweight model;
and S250, monitoring the target area based on the three-dimensional real scene model.
In step S250, the positions of the monitoring equipment can be displayed to workers through the three-dimensional real scene model, and a worker can obtain the surveillance picture and the recognition result of the monitoring equipment at a given position by selecting the position to be viewed.
In one embodiment of the present invention, the physical model of the target area includes a geographic information model and a building information model; the characteristic data of the target area include oblique photography data, three-dimensional point cloud data, image data and elevation data, which are acquired through unmanned aerial vehicle image acquisition and a three-dimensional laser scanner;
as shown in fig. 3, the process of fusing the physical model of the target region and the feature data of the target region to obtain the lightweight model of the target region may include steps S310 to S320, which are described in detail as follows:
s310, carrying out format conversion on the geographic information model, the building information model, the oblique photography data, the three-dimensional point cloud data, the image data and the elevation data; so that the formats of the geographic information model, the building information model, the oblique photography data, the three-dimensional point cloud data, the image data and the elevation data are all target formats;
in this embodiment, in order to facilitate the fusion of the geographic information model, the building information model, the oblique photography data, the three-dimensional point cloud data, the image data, and the elevation data, the models and the data need to be converted into the same intermediate format data;
and S320, mapping the geographic information model, the building information model, the oblique photography data, the three-dimensional point cloud data, the image data and the elevation data which are subjected to format conversion into a preset coordinate system to obtain a lightweight model.
In step S320, the geographic information model, the building information model, the oblique photography data, the three-dimensional point cloud data, the image data and the elevation data after format conversion are mapped into a preset coordinate system, so that the geographic information model and the building information model are superimposed and the information in the oblique photography data, the three-dimensional point cloud data, the image data and the elevation data is merged into the superimposed model, thereby realizing real scene restoration and reverse modeling and obtaining a ground three-dimensional real scene model consistent with the current site conditions.
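The two fusion steps can be pictured as the sketch below, assuming each source (GIS, BIM, oblique photography, point cloud, imagery, elevation) has already been exported to a simple vertex list in an intermediate format; the `Layer` type, the converter callables and the offset-based registration are illustrative assumptions, not formats or transforms prescribed by the patent.

```python
# Hedged sketch of steps S310 (format conversion) and S320 (mapping to a
# preset coordinate system); all names here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str        # e.g. "GIS", "BIM", "point_cloud", "elevation"
    vertices: list   # [(x, y, z), ...] in the intermediate (target) format

def to_target_crs(layer, offset=(0.0, 0.0, 0.0), scale=1.0):
    """Map one layer's vertices into the preset shared coordinate system."""
    dx, dy, dz = offset
    mapped = [((x + dx) * scale, (y + dy) * scale, (z + dz) * scale)
              for x, y, z in layer.vertices]
    return Layer(layer.name, mapped)

def build_lightweight_model(raw_sources, converters, offsets):
    """raw_sources: {name: raw data}; converters: {name: raw -> Layer};
    offsets: {name: (dx, dy, dz)} registering each source to the same system."""
    layers = []
    for name, raw in raw_sources.items():
        layer = converters[name](raw)                                            # S310
        layers.append(to_target_crs(layer, offsets.get(name, (0.0, 0.0, 0.0))))  # S320
    return layers   # the fused, lightweight layer stack
```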
As shown in fig. 4, in an embodiment of the present invention, the process of obtaining the three-dimensional real-scene model by fusing the surveillance video of the target area, the recognition result of the surveillance video, and the lightweight model according to the position information of the surveillance device may include steps S410 to S420, which are described in detail as follows:
s410, constructing an information label according to the position information;
in this embodiment, the information tag is used to display brief information of the monitoring device;
and S420, associating the information label with the monitoring video of the target area and the identification result of the monitoring video, and setting the information label to a corresponding position of the lightweight model according to the position information for displaying to obtain the three-dimensional live-action model.
In step S420, the surveillance videos and their recognition results are stored in a preset database, and each information tag is associated with the surveillance video and recognition result at the corresponding position (i.e., through associated data links). When the model is displayed externally, only the information tag (i.e., the brief information) is shown; by accessing the information tag, the surveillance video and its recognition result can be further retrieved, so that dense surveillance video information is presented within the limited model space.
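One possible shape of this tag-to-data association is sketched below; the field names and the dictionary-backed store are assumptions standing in for the preset database described above.

```python
# Hedged sketch of the information tag and its associated data links (step S420).
from dataclasses import dataclass, field

@dataclass
class InfoTag:
    tag_id: str
    position: tuple    # where the tag is rendered in the lightweight model
    brief: str         # short text shown in the 3D scene
    video_key: str     # key of the stored surveillance video
    result_key: str    # key of the stored recognition result

@dataclass
class TagIndex:
    store: dict = field(default_factory=dict)   # stands in for the database

    def add(self, tag, video, result):
        """Associate a tag with its surveillance video and recognition result."""
        self.store[tag.tag_id] = {"tag": tag, "video": video, "result": result}

    def resolve(self, tag_id):
        """Follow the associated data link when a tag is accessed."""
        entry = self.store[tag_id]
        return entry["video"], entry["result"]
```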
As shown in fig. 5, in an embodiment of the present invention, the process of monitoring the target area based on the three-dimensional live-action model may include steps S510 to S520, which are described in detail as follows:
s510, obtaining label access information from the outside, wherein the label access information is used for accessing an information label;
in this embodiment, since the three-dimensional live-action model only displays the information tag, when a user needs to view corresponding video information, the user needs to access the information tag to obtain the corresponding video information;
s520, responding to the label access information, outputting and displaying the monitoring video of the target area associated with the information label and the identification result of the monitoring video, and finishing monitoring of the target area.
In step S520, when the user accesses the information tag, the system outputs the surveillance video associated with that tag together with its recognition result.
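Continuing the `TagIndex` sketch above, responding to tag access then amounts to resolving the tag and handing the associated data to the display layer; the `display` callable is a placeholder for the model's user interface.

```python
# Hedged sketch of steps S510/S520 built on the TagIndex above.
def on_tag_access(tag_index, tag_id, display):
    video, result = tag_index.resolve(tag_id)   # S510: receive tag access information
    display(video, result)                      # S520: output video and recognition result

# Example with dummy data (identifiers are hypothetical):
index = TagIndex()
index.add(InfoTag("cam-01", (120.0, 45.0, 8.0), "No. 1 blast furnace camera",
                  "videos/cam-01", "results/cam-01"),
          video=b"raw video bytes", result={"labels": ["person"]})
on_tag_access(index, "cam-01", display=lambda v, r: print(r))
```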
As shown in fig. 6, in an embodiment of the present invention, the process of identifying the surveillance video of the target area by using the image identification algorithm to obtain the identification result of the surveillance video may include steps S610 to S620, which are described in detail as follows:
s610, calling an image recognition algorithm from a preset management platform, and accessing the monitoring equipment to the preset management platform;
in this embodiment, the image recognition algorithm and the monitoring device are managed through a preset management platform;
s620, combining the image recognition algorithm and the monitoring equipment in the management platform, and recognizing the monitoring video collected by the monitoring equipment through the image recognition algorithm in the same combination to obtain a recognition result of monitoring recognition.
In this embodiment, a combination includes at least one image recognition algorithm and at least one monitoring device, and the image recognition algorithm in a combination is specifically used for recognizing the surveillance video collected by the monitoring device in that combination.
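The combination mechanism can be pictured as a small registry pairing algorithm identifiers with device identifiers; the class and method names below, and the callable-based algorithm interface, are illustrative assumptions rather than an API defined by the patent.

```python
# Hedged sketch of steps S610/S620: a registry of algorithm/device combinations.
class ManagementPlatform:
    def __init__(self):
        self.algorithms = {}     # algo_id -> callable(video) -> recognition result
        self.devices = {}        # device_id -> video source
        self.combinations = []   # (algo_id, device_id) pairs

    def register_algorithm(self, algo_id, algo):
        self.algorithms[algo_id] = algo

    def register_device(self, device_id, source):
        self.devices[device_id] = source

    def combine(self, algo_id, device_id):
        self.combinations.append((algo_id, device_id))

    def run(self):
        """Recognize each device's video with the algorithm in the same combination."""
        results = {}
        for algo_id, device_id in self.combinations:
            results[device_id] = self.algorithms[algo_id](self.devices[device_id])
        return results
```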
In an embodiment of the present invention, the process after the management platform is built according to the image recognition algorithm and the monitoring device further includes step S710, which is described in detail as follows:
s710, managing the image recognition algorithm and the monitoring equipment based on a management platform, wherein the management mode at least comprises one of the following modes:
(1) Accessing other image recognition algorithms into the management platform according to a predetermined protocol, wherein the predetermined protocol is an SDK protocol or another protocol;
(2) Accessing other monitoring equipment into the management platform;
(3) Removing the combination relation between an image recognition algorithm and monitoring equipment in an existing combination;
(4) Combining an image recognition algorithm and monitoring equipment in the management platform;
(5) Deleting an existing image recognition algorithm or monitoring device in the management platform;
(6) Querying the existing image recognition algorithms and monitoring equipment in the management platform;
(7) Storing the surveillance videos and their recognition results;
(8) Outputting the surveillance videos and the recognition results stored in the management platform.
In this embodiment, the platform can arbitrarily combine, add, modify, delete and query the multiple image recognition algorithms and multiple monitoring devices that it manages.
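On such a registry, the management operations listed above reduce to simple edits and lookups; the sketch below extends the hypothetical `ManagementPlatform` class accordingly.

```python
# Hedged continuation of the ManagementPlatform sketch: operations (3), (5)
# and (6) above expressed as registry edits and queries.
class ManagedPlatform(ManagementPlatform):
    def uncombine(self, algo_id, device_id):
        """(3) Remove the combination relation between an algorithm and a device."""
        self.combinations.remove((algo_id, device_id))

    def delete_algorithm(self, algo_id):
        """(5) Delete an existing image recognition algorithm from the platform."""
        self.algorithms.pop(algo_id, None)
        self.combinations = [c for c in self.combinations if c[0] != algo_id]

    def query(self):
        """(6) Query the existing algorithms and monitoring devices."""
        return sorted(self.algorithms), sorted(self.devices)
```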
In an embodiment of the present invention, the process after the surveillance video in the target area is identified by using an image identification algorithm and the identification result of the surveillance video is obtained may further include step S810, which is described in detail as follows:
and S810, generating early warning prompt information when the identification result contains the target characteristics.
In this embodiment, the real-time video pictures are analyzed by the image recognition algorithm to generate early warnings, so that park management and control problems can be warned about and prompted in a timely manner.
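A minimal sketch of this early-warning check follows; `target_features` and the `alert` channel are assumptions used only for illustration.

```python
# Hedged sketch of step S810: generate an early-warning prompt when the
# recognition result contains any configured target feature.
def check_and_alert(recognition_result, target_features, alert):
    hits = [f for f in target_features if f in recognition_result.get("labels", [])]
    if hits:
        alert("Early warning: detected " + ", ".join(hits) + " in the monitored area")
    return hits

# Example: warn when a person is detected in a restricted area.
check_and_alert({"labels": ["person", "helmet"]}, target_features={"person"}, alert=print)
```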
The invention provides a three-dimensional model-based area monitoring method, in which a lightweight model of a target area is obtained by fusing a physical model of the target area with characteristic data of the target area; the surveillance video of the target area is then recognized by an image recognition algorithm to obtain a recognition result of the surveillance video; the surveillance video of the target area, the recognition result of the surveillance video and the lightweight model are fused according to the position information of the monitoring equipment to obtain a three-dimensional live-action model; and the target area is monitored based on the three-dimensional live-action model. According to the invention, a three-dimensional model of the metallurgical park is built from the physical model and the characteristic data, and the surveillance video of each monitoring device is then displayed within the three-dimensional model, so that the massive discrete videos across the metallurgical park are managed effectively and visually, workers can grasp the on-site state of a target area of the park dynamically and in real time, and orderly management, control and command scheduling are achieved.
As shown in fig. 7, the present invention further provides a three-dimensional model based area monitoring system, which includes:
the acquisition module is used for acquiring the position information of the monitoring equipment, the monitoring video of the target area acquired by the monitoring equipment, the physical model of the target area and the characteristic data of the target area;
the first fusion module is used for fusing the physical model of the target area and the characteristic data of the target area to obtain a lightweight model of the target area;
the identification module is used for identifying the surveillance video of the target area through a preset image identification algorithm to obtain an identification result of the surveillance video;
the second fusion module is used for fusing the monitoring video of the target area, the recognition result of the monitoring video and the lightweight model according to the position information of the monitoring equipment to obtain a three-dimensional live-action model;
and the monitoring module is used for monitoring the target area based on the three-dimensional live-action model.
The invention provides a three-dimensional model-based area monitoring system, in which a physical model of the target area and characteristic data of the target area are fused to obtain a lightweight model of the target area; the surveillance video of the target area is then recognized by an image recognition algorithm to obtain a recognition result of the surveillance video; the surveillance video of the target area, the recognition result of the surveillance video and the lightweight model are fused according to the position information of the monitoring equipment to obtain a three-dimensional live-action model; and the target area is monitored based on the three-dimensional live-action model. According to the invention, a three-dimensional model of the metallurgical park is built from the physical model and the characteristic data, and the surveillance video of each monitoring device is then displayed within the three-dimensional model, so that the massive discrete videos across the metallurgical park are managed effectively and visually, workers can grasp the on-site state of a target area of the park dynamically and in real time, and orderly management, control and command scheduling are achieved.
It should be noted that the three-dimensional model-based area monitoring system provided in the foregoing embodiment and the three-dimensional model-based area monitoring method provided in the foregoing embodiment belong to the same concept, and specific manners in which each module and unit perform operations have been described in detail in the method embodiment, and are not described herein again. In practical applications, the three-dimensional model-based area monitoring system provided in the above embodiment may distribute the functions to different functional modules according to needs, that is, divide the internal structure of the device into different functional modules to complete all or part of the functions described above, which is not limited herein.
An embodiment of the present application further provides an electronic device, including: one or more processors; a storage device, configured to store one or more programs, which when executed by one or more processors, enable an electronic device to implement a three-dimensional model-based area monitoring method provided in the above-described embodiments.
FIG. 8 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application. It should be noted that the computer system 800 of the electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 8, a computer system 800 includes a Central Processing Unit (CPU) 801, which can perform various appropriate actions and processes, such as performing the methods in the above-described embodiments, according to a program stored in a Read-Only Memory (ROM) 802 or a program loaded from a storage portion 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for system operation are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An Input/Output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output section 807 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 88 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as necessary, so that a computer program read out therefrom is mounted into the storage section 808 as necessary.
In particular, according to embodiments of the present application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method illustrated by the flow chart. In such embodiments the computer program may be downloaded and installed from a network via communications section 809 and/or installed from removable media 88. When the computer program is executed by the Central Processing Unit (CPU) 801, various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
Another aspect of the present application also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements a three-dimensional model-based area monitoring method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist alone without being assembled into the electronic device.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the three-dimensional model-based area monitoring method provided in the above embodiments.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (10)

1. A region monitoring method based on a three-dimensional model is characterized by comprising the following steps:
acquiring position information of monitoring equipment, a monitoring video of a target area acquired by the monitoring equipment, a physical model of the target area and characteristic data of the target area;
fusing the physical model of the target area and the characteristic data of the target area to obtain a lightweight model of the target area;
identifying the surveillance video of the target area through a preset image identification algorithm to obtain an identification result of the surveillance video;
fusing the monitoring video of the target area, the recognition result of the monitoring video and the lightweight model according to the position information of the monitoring equipment to obtain a three-dimensional live-action model;
and monitoring a target area based on the three-dimensional real scene model.
2. The three-dimensional model-based area monitoring method according to claim 1, wherein the target area physical model comprises: a geographic information model, a building information model; the characteristic data of the target area comprise oblique photography data, three-dimensional point cloud data, image data and elevation data;
fusing the physical model of the target area and the characteristic data of the target area to obtain a lightweight model of the target area comprises the following steps:
performing format conversion on the geographic information model, the building information model, the oblique photography data, the three-dimensional point cloud data, the image data and the elevation data, so that they are all in a target format;
and mapping the geographic information model, the building information model, the oblique photography data, the three-dimensional point cloud data, the image data and the elevation data after format conversion to a preset coordinate system to obtain the lightweight model.
3. The method for monitoring the area based on the three-dimensional model according to claim 1, wherein the fusion of the monitoring video of the target area, the recognition result of the monitoring video and the lightweight model according to the position information of the monitoring device to obtain the three-dimensional live-action model comprises:
constructing an information label according to the position information;
and associating the information tag with the surveillance video of the target area and the identification result of the surveillance video, and setting the information tag to a corresponding position of the lightweight model according to the position information for displaying to obtain the three-dimensional live-action model.
4. The method for monitoring the area based on the three-dimensional model according to claim 3, wherein the monitoring of the target area based on the three-dimensional real scene model comprises:
acquiring label access information from the outside, wherein the label access information is used for accessing the information label;
and responding to the tag access information, outputting and displaying the monitoring video of the target area associated with the information tag and the identification result of the monitoring video, and finishing monitoring the target area.
5. The area monitoring method based on the three-dimensional model according to claim 1, wherein the step of identifying the surveillance video of the target area by a preset image identification algorithm to obtain the identification result of the surveillance video comprises the following steps:
calling an image recognition algorithm from a preset management platform, and accessing the monitoring equipment to the preset management platform;
and combining the image recognition algorithm and the monitoring equipment in the management platform, and recognizing the surveillance video acquired by the monitoring equipment through the image recognition algorithm in the same combination to obtain the recognition result of the surveillance video.
6. The method according to claim 5, wherein after the management platform is constructed according to the image recognition algorithm and the monitoring equipment, the method further comprises:
managing the image recognition algorithm and the monitoring equipment based on the management platform, wherein the management mode at least comprises one of the following modes:
accessing other image recognition algorithms into the management platform according to a predetermined protocol;
accessing other monitoring equipment to the management platform;
removing the combination relation between an image recognition algorithm and monitoring equipment in an existing combination;
combining the image recognition algorithm and the monitoring equipment in the management platform;
deleting the existing image recognition algorithm and monitoring equipment in the management platform;
inquiring the existing image recognition algorithm and monitoring equipment in the management platform;
storing the surveillance videos and their recognition results;
and outputting the surveillance videos and the recognition results stored in the management platform.
7. The method for monitoring the area based on the three-dimensional model according to claim 1, wherein after the surveillance video of the target area is identified by the image identification algorithm and the identification result of the surveillance video is obtained, the method further comprises:
and when the identification result contains the target characteristics, generating early warning prompt information.
8. A three-dimensional model based area monitoring system, the system comprising:
the acquisition module is used for acquiring the position information of the monitoring equipment, the monitoring video of the target area acquired by the monitoring equipment, the physical model of the target area and the characteristic data of the target area;
the first fusion module is used for fusing the physical model of the target area and the characteristic data of the target area to obtain a lightweight model of the target area;
the identification module is used for identifying the surveillance video of the target area through a preset image identification algorithm to obtain an identification result of the surveillance video;
the second fusion module is used for fusing the monitoring video of the target area, the identification result of the monitoring video and the lightweight model according to the position information of the monitoring equipment to obtain a three-dimensional live-action model;
and the monitoring module is used for monitoring the target area based on the three-dimensional real scene model.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement a three-dimensional model-based region monitoring method according to any one of claims 1 to 7.
10. A computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the three-dimensional model-based region monitoring method of any one of claims 1 to 7.
CN202211098098.XA 2022-09-08 2022-09-08 Three-dimensional model-based area monitoring method, system, equipment and storage medium Pending CN115578685A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211098098.XA CN115578685A (en) 2022-09-08 2022-09-08 Three-dimensional model-based area monitoring method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211098098.XA CN115578685A (en) 2022-09-08 2022-09-08 Three-dimensional model-based area monitoring method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115578685A true CN115578685A (en) 2023-01-06

Family

ID=84580197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211098098.XA Pending CN115578685A (en) 2022-09-08 2022-09-08 Three-dimensional model-based area monitoring method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115578685A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385690A (en) * 2023-06-05 2023-07-04 四川云控交通科技有限责任公司 BIM model-based three-dimensional operation and maintenance management and control platform and management and control method thereof
CN116385690B (en) * 2023-06-05 2023-09-26 四川云控交通科技有限责任公司 BIM model-based three-dimensional operation and maintenance management and control platform and management and control method thereof
CN117455324A (en) * 2023-11-08 2024-01-26 交通运输部水运科学研究所 Large port operation management method and system based on physical model
CN117455324B (en) * 2023-11-08 2024-04-19 交通运输部水运科学研究所 Large port operation management method and system based on physical model

Similar Documents

Publication Publication Date Title
CN115578685A (en) Three-dimensional model-based area monitoring method, system, equipment and storage medium
WO2020061939A1 (en) Method, apparatus, and system for identifying device, storage medium, processor, and terminal
CN116188821B (en) Copyright detection method, system, electronic device and storage medium
WO2022007434A1 (en) Visualization method and related device
US20200126315A1 (en) Method and apparatus for generating information
CN112929602B (en) Data monitoring method and device based on image processing and related equipment
JP7329572B2 (en) TRAFFIC STATE ACQUISITION METHOD AND DEVICE, ROAD SIDE DEVICE, AND CLOUD CONTROL PLATFORM
CN116310143B (en) Three-dimensional model construction method, device, equipment and storage medium
EP4177836A1 (en) Target detection method and apparatus, and computer-readable medium and electronic device
CN113112859A (en) Parking space determining method and device based on building information model and related equipment
US20230326351A1 (en) Data sharing method and apparatus applied to vehicle platoon
CN114429528A (en) Image processing method, image processing apparatus, image processing device, computer program, and storage medium
CN111429583A (en) Space-time situation perception method and system based on three-dimensional geographic information
CN112053440A (en) Method for determining individualized model and communication device
CN115620208A (en) Power grid safety early warning method and device, computer equipment and storage medium
CN111859503A (en) Drawing review method, electronic equipment and graphic server
CN113676525A (en) Network collaborative manufacturing-oriented industrial internet public service platform
EP3690675A1 (en) Method, apparatus, and storage medium for providing visual representation of set of objects
CN115984516B (en) Augmented reality method based on SLAM algorithm and related equipment
CN114998768B (en) Intelligent construction site management system and method based on unmanned aerial vehicle
CN110377776B (en) Method and device for generating point cloud data
CN115438812A (en) Life-saving management method and device for power transmission equipment, computer equipment and storage medium
CN114612976A (en) Key point detection method and device, computer readable medium and electronic equipment
CN115294283A (en) Digital twin factory construction method, device, equipment and storage medium
CN114066673A (en) Energy internet digital twin system construction method and device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination