CN112135091A - Monitoring scene marking method and device, computer equipment and storage medium - Google Patents
Monitoring scene marking method and device, computer equipment and storage medium
- Publication number: CN112135091A (application CN202010876995.3A)
- Authority: CN (China)
- Prior art keywords: information, three-dimensional model, target, scene, video
- Prior art date: 2020-08-27
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N7/00—Television systems
        - H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
      - H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
        - H04N13/20—Image signal generators
          - H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
      - H04N5/00—Details of television systems
        - H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
          - H04N5/445—Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
            - H04N5/44504—Circuit details of the additional information generator, e.g. details of the character or graphics signal generator, overlay mixing circuits
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T19/00—Manipulating 3D models or images for computer graphics
        - G06T19/006—Mixed reality
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The application relates to a monitoring scene marking method, a monitoring scene marking apparatus, a computer device, and a storage medium. The monitoring scene marking method comprises the following steps: acquiring scanning information of a target scene through a three-dimensional scanning instrument; acquiring video information of the target scene through image acquisition equipment; acquiring object information of all target objects in the target scene; performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene; marking the object information on the three-dimensional model; and superimposing the marked three-dimensional model on the video information to obtain a target video. Because the information marking is performed on the three-dimensional model of the target scene, existing hardware such as cameras does not need to be upgraded, which greatly reduces time and economic cost.
Description
Technical Field
The application relates to the field of security monitoring, and in particular to a monitoring scene marking method and apparatus, a computer device, and a storage medium.
Background
With the popularization of surveillance video systems, users' requirements for surveillance video applications have gradually increased: at first users only needed to see pictures of the monitored area, whereas now technologies such as high-definition imaging, augmented reality markers, and sensor data association are applied. All of these technologies help users better interpret the information of the monitored area and improve the management efficiency of the surveillance video system.
At present, augmented reality marking systems add data information to the existing surveillance video picture: data such as the rotation angle of a surveillance camera and the focal length of its lens are associated with the position of the actual monitored object, and a video processing program then draws the data information onto the video picture frame by frame, including field sensor information that can be associated with the actual position of the equipment. However, this implementation requires customizing the marker information for each camera; when many surveillance cameras are deployed at the same monitored site and the amount of marker information is large, the workload of this augmented reality approach increases greatly. Key areas in the energy industry, such as oil wells, substations, and power plants, all encounter similar situations.
Mixed reality technology fuses real and virtual environments and represents the end point of the development of augmented reality. Existing mixed reality systems require wearing a head-mounted display (HMD) equipped with numerous sensors and dedicated processing chips. Retrofitting an existing monitoring system to achieve mixed reality by adding sensors and dedicated processing chips to each camera would involve a large amount of hardware modification and software upgrading, at high cost.
The workload of traditional augmented reality marking for surveillance video in a key monitored area multiplies as the number of monitoring devices increases, so traditional systems cannot meet the requirements of such application scenarios. Upgrading hardware in the manner of traditional mixed reality technology is too costly in time and money to be feasible at present.
Disclosure of Invention
The embodiments of the present application provide a monitoring scene marking method and apparatus, a computer device, and a storage medium, to at least solve the problem in the related art that upgrading hardware in the manner of traditional mixed reality technology is too costly in time and money.
In a first aspect, an embodiment of the present application provides a monitoring scene marking method, including:
acquiring scanning information of a target scene through a three-dimensional scanning instrument;
acquiring video information of a target scene through image acquisition equipment;
acquiring object information of all target objects in a target scene;
performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene;
marking the object information on the three-dimensional model;
and superimposing the marked three-dimensional model on the video information to obtain a target video.
In some embodiments, the acquiring object information of all target objects in the target scene includes:
and acquiring object information of the target object through a database, wherein the object information of each target object is stored in the database in advance.
In some embodiments, the superimposing the marked three-dimensional model on the video information to obtain the target video includes:
correcting the lens distortion of the video information frame by frame so that the video information matches the three-dimensional model;
and superimposing the marked three-dimensional model on the corrected video information to obtain the target video.
In some embodiments, the performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene includes:
performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene;
acquiring attitude information and focal length information of image acquisition equipment;
and adjusting the angle of the three-dimensional model based on the attitude information and the focal length information so that the three-dimensional model matches the video information.
In some of these embodiments, the object information includes at least one of a device name, a device code, a purchase date, defect data, and maintenance data.
In some of these embodiments, said superimposing said marked three-dimensional model on said video information comprises:
and overlaying the marked three-dimensional model on the video information through a three-dimensional engine.
In some of these embodiments, said superimposing said marked three-dimensional model on said video information comprises:
taking the video information as a display background;
setting the three-dimensional model transparent and superimposing it on the video information;
and displaying the object information on the display background in an animated manner.
In a second aspect, an embodiment of the present application provides a monitoring scene marking apparatus, including:
the scanning module is used for acquiring scanning information of a target scene through a three-dimensional scanning instrument;
the video acquisition module is used for acquiring video information of a target scene through image acquisition equipment;
the object information acquisition module is used for acquiring object information of all target objects in a target scene;
the model reconstruction module is used for performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene;
a labeling module for labeling the object information on the three-dimensional model;
and the superposition module is used for superposing the marked three-dimensional model on the video information to obtain a target video.
In a third aspect, an embodiment of the present application provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the monitoring scene marking method according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the monitoring scene marking method according to the first aspect.
According to the monitoring scene marking method and apparatus, computer device, and storage medium provided herein, scanning information of a target scene is acquired through a three-dimensional scanning instrument; video information of the target scene is acquired through image acquisition equipment; object information of all target objects in the target scene is acquired; three-dimensional reconstruction is performed according to the scanning information to obtain a three-dimensional model of the target scene; the object information is marked on the three-dimensional model; and the marked three-dimensional model is superimposed on the video information to obtain a target video. Because the information marking is performed on the three-dimensional model of the target scene, existing hardware such as cameras does not need to be upgraded, which greatly reduces time and economic cost.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a monitoring scene marking method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a monitoring scene tagging system according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a monitoring scene marking method according to another embodiment of the present invention;
FIG. 4 is a diagram illustrating a mixed reality display method according to an embodiment of the invention;
FIG. 5 is a schematic diagram of a target video of a monitoring scene marking method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a target video of a monitoring scene marking method according to another embodiment of the present invention;
fig. 7 is a block diagram of a monitoring scene marking apparatus according to an embodiment of the present invention;
fig. 8 is a schematic hardware structure diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
Referring to fig. 1, fig. 1 is a schematic flow chart of a monitoring scene marking method according to an embodiment of the invention.
In this embodiment, the monitoring scene marking method includes:
and step S101, acquiring scanning information of a target scene through a laser radar.
Illustratively, a three-dimensional scanning instrument such as a lidar scans the scene, the monitored equipment, and the surveillance cameras on site to obtain point cloud data, and a three-dimensional model of the target scene is reconstructed from the point cloud data. It can be understood that the scanning information of the target scene includes the scanning information of all objects in the target scene.
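To make this step concrete, the sketch below reconstructs a mesh from a lidar point cloud with the open-source Open3D library. It is only one possible realization: the patent does not prescribe a reconstruction algorithm, and the file names are assumptions.

```python
import open3d as o3d

# Load the point cloud produced by the lidar scan of the site
# ("site_scan.pcd" is an assumed file name).
pcd = o3d.io.read_point_cloud("site_scan.pcd")

# Poisson surface reconstruction needs oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Reconstruct a triangle mesh of the target scene from the points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Persist the scene model for the later marking and overlay steps.
o3d.io.write_triangle_mesh("scene_model.ply", mesh)
```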
Step S102, acquiring video information of the target scene through image acquisition equipment.
As can be understood, the video information is the surveillance video of the target scene.
Step S103, acquiring object information of all target objects in the target scene.
Specifically, the target objects mainly include field devices, and the object information includes at least one of a device name, a device code, a purchase date, defect data, and maintenance data. It can be understood that a target object may also be another object in the target scene, and the object information may include other information, as long as it pertains to the target object; the object information may be obtained in advance.
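To make the notion of object information concrete, a minimal record type is sketched below. The field names simply mirror the examples listed above; they are illustrative assumptions, not a schema defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ObjectInfo:
    """Object information of one target object; field names are illustrative assumptions."""
    device_name: str
    device_code: str
    purchase_date: Optional[str] = None          # e.g. "2019-06-01"
    defects: List[str] = field(default_factory=list)
    maintenance_records: List[str] = field(default_factory=list)

# Example record for one piece of field equipment (sample values, not from the patent).
pump = ObjectInfo(device_name="Fire service water supply pump",
                  device_code="PUMP-01",
                  purchase_date="2019-06-01")
print(pump)
```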
Step S104, performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene.
It will be understood that the three-dimensional model includes a model of the target scene as well as a model of objects in the target scene.
Step S105, marking object information on the three-dimensional model.
For example, the object information may be acquired in advance, and the object information of interest or a preset type may be marked on the three-dimensional model.
Step S106, superimposing the marked three-dimensional model on the video information to obtain a target video.
It can be understood that when the three-dimensional model is superimposed on the video information, the object information marked on the three-dimensional model is also displayed on the surveillance video, so that a target video marked with the object information is obtained.
In the monitoring scene marking method, scanning information of the target scene is acquired through a three-dimensional scanning instrument; video information of the target scene is acquired through image acquisition equipment; object information of all target objects in the target scene is acquired; three-dimensional reconstruction is performed according to the scanning information to obtain a three-dimensional model of the target scene; the object information is marked on the three-dimensional model; and the marked three-dimensional model is superimposed on the video information to obtain a target video. Because the information marking is performed on the three-dimensional model of the target scene, existing hardware such as cameras does not need to be upgraded, which greatly reduces time and economic cost.
In another embodiment, obtaining the object information of all target objects in the target scene comprises: acquiring the object information of the target objects from a database in which the object information of each target object is stored in advance. Specifically, the object information includes at least one of a device name, a device code, a purchase date, defect data, and maintenance data.
In another embodiment, superimposing the marked three-dimensional model on the video information to obtain the target video includes: correcting the lens distortion of the video information frame by frame so that the video information matches the three-dimensional model; and superimposing the marked three-dimensional model on the corrected video information to obtain the target video.
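Frame-by-frame distortion correction of this kind can be sketched with OpenCV as follows. The intrinsic matrix, distortion coefficients, and video source are placeholder values; in practice they would come from calibrating the actual surveillance camera.

```python
import cv2
import numpy as np

# Placeholder calibration of the surveillance camera (assumed values; real
# values would come from a calibration procedure such as cv2.calibrateCamera).
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.28, 0.09, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

cap = cv2.VideoCapture("surveillance.mp4")  # assumed video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Correct lens distortion frame by frame so the video matches the 3D model.
    undistorted = cv2.undistort(frame, K, dist)
    cv2.imshow("corrected", undistorted)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```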
In another embodiment, performing three-dimensional reconstruction based on the scanning information to obtain a three-dimensional model of the target scene includes: performing the three-dimensional reconstruction according to the scanning information; acquiring attitude information and focal length information of the image acquisition equipment; and adjusting the angle of the three-dimensional model based on the attitude information and the focal length information so that the three-dimensional model matches the video information. Illustratively, the image acquisition equipment may be a surveillance camera. When the three-dimensional model is reconstructed, high-precision attitude information and focal length information of the surveillance camera need to be acquired; the attitude information comprises two parameters, horizontal rotation and elevation angle, each with an error of less than 0.2 degrees. The acquisition precision for the monitored objects and the surveillance cameras can be determined according to actual requirements. In another embodiment, reconstructing the three-dimensional model of the target scene includes reconstructing three-dimensional models of the target scene, the equipment, and the surveillance cameras: illustratively, a three-dimensional scanning instrument scans the target scene, the equipment, and the surveillance cameras on site to obtain point cloud data, from which the models are reconstructed. The three-dimensional model can be made to coincide with the surveillance video only by obtaining the attitude information of the surveillance camera and adjusting the angle of the model accordingly. Specifically, the position information of both the target scene and the surveillance cameras needs to reach centimeter-level precision; the three-dimensional model can be constructed from laser point cloud data scanned on site by a centimeter-accurate laser scanner, or from the positions of key equipment and scene elements obtained by RTK-GNSS together with a high-precision BIM model.
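The alignment described here amounts to building a camera rotation from the two attitude parameters (horizontal rotation, i.e. pan, and elevation, i.e. tilt), deriving intrinsics from the focal length, and projecting model points into the frame. The numpy sketch below uses assumed parameter values and a common sign convention; it is an illustration, not the patent's prescribed computation.

```python
import numpy as np

def rotation_from_pan_tilt(pan_deg: float, tilt_deg: float) -> np.ndarray:
    """World-to-camera rotation from horizontal rotation (pan) and elevation (tilt)."""
    p, t = np.radians(pan_deg), np.radians(tilt_deg)
    # Rotation about the vertical axis (pan), then about the horizontal axis (tilt).
    Rz = np.array([[np.cos(p), -np.sin(p), 0.0],
                   [np.sin(p),  np.cos(p), 0.0],
                   [0.0,        0.0,       1.0]])
    Rx = np.array([[1.0, 0.0,        0.0],
                   [0.0, np.cos(t), -np.sin(t)],
                   [0.0, np.sin(t),  np.cos(t)]])
    return Rx @ Rz

def intrinsics(focal_mm: float, pixel_size_mm: float, cx: float, cy: float) -> np.ndarray:
    """Pinhole intrinsic matrix from the reported focal length (assumed pixel size)."""
    f_px = focal_mm / pixel_size_mm
    return np.array([[f_px, 0.0, cx],
                     [0.0, f_px, cy],
                     [0.0, 0.0, 1.0]])

# Assumed example: pan 30.0 deg, tilt -10.0 deg (error below 0.2 deg), an 8 mm
# lens on a sensor with 0.005 mm pixels, principal point at the image center.
R = rotation_from_pan_tilt(30.0, -10.0)
K = intrinsics(8.0, 0.005, 960.0, 540.0)
cam_pos = np.array([2.0, -5.0, 3.0])      # camera position from the scan (assumed)
point_world = np.array([4.0, 1.0, 1.5])   # a vertex of the reconstructed model
uv = K @ (R @ (point_world - cam_pos))    # project the vertex into the frame
print(uv[:2] / uv[2])                     # pixel coordinates of the model point
```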
In another embodiment, when the three-dimensional model is reconstructed, high-precision attitude information and focal length information of the surveillance camera need to be acquired; the attitude information comprises two parameters, horizontal rotation and elevation angle, each with an error of less than 0.2 degrees. It can be understood that the acquisition precision for the target scene and the surveillance camera can be determined according to actual requirements.
Three-dimensional laser scanning is a relatively new technology that is attracting growing attention in the research community in China. Using the principle of laser ranging, it records the three-dimensional coordinates, reflectivity, and texture of a large number of dense points on the surface of a measured object, from which a three-dimensional model of the object and various drawing data such as lines, surfaces, and solids can be quickly reconstructed. Because a three-dimensional laser scanning system can densely acquire a large number of data points on the target object, the technology is also described as a revolutionary breakthrough from single-point measurement to surface measurement. It has been tried, applied, and explored in fields such as cultural relic protection, construction, planning, civil engineering, factory improvement, interior design, building monitoring, traffic accident handling, legal evidence collection, disaster assessment, ship design, digital cities, and military analysis. A three-dimensional laser scanning system comprises a hardware part for data acquisition and a software part for data processing; according to the carrier, such systems can be divided into airborne, vehicle-mounted, ground-based, and handheld types. The scanning technique is also applied to measure the size and shape of workpieces: it is mainly used in reverse engineering for curved-surface reading and three-dimensional measurement, and can quickly measure the contour data of an existing three-dimensional object (sample or model) without technical documentation, from which a surface digital model in a general output format can be constructed, edited, and modified.
RTK (Real-time kinematic) carrier-phase differential positioning processes the carrier-phase observations of two measuring stations in real time: the carrier phase acquired by a reference station is sent to the user receiver, where the difference is solved. It is now a common satellite positioning measurement method. Earlier static, rapid-static, and kinematic measurements all required post-processing to reach centimeter-level accuracy, whereas RTK obtains centimeter-level positioning accuracy in real time in the field. As a major milestone in the application of GPS, it brought new measurement principles and methods to project lofting, terrain mapping, and various control surveys, greatly improving operating efficiency.
GNSS generally refers to global navigation satellite systems. GNSS positioning uses observations such as pseudoranges, ephemerides, and satellite transmit times from a set of satellites; the user clock error must also be solved for. A global navigation satellite system is a space-based radio navigation and positioning system that can provide all-weather three-dimensional coordinates, velocity, and time information to users at any location on the earth's surface or in near-earth space.
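For reference, the pseudorange observation mentioned here takes the standard textbook form (this equation is background knowledge, not part of the patent text):

```latex
\rho_i = \sqrt{(x_i - x_u)^2 + (y_i - y_u)^2 + (z_i - z_u)^2} + c\,\delta t_u,
\qquad i = 1, \dots, n, \quad n \ge 4
```

where (x_i, y_i, z_i) is the position of satellite i taken from its ephemeris, (x_u, y_u, z_u) is the unknown user position, c is the speed of light, and delta t_u is the unknown user clock error; observing at least four satellites allows the four unknowns to be solved.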
BIM (Building Information Modeling) technology helps integrate building information: from design and construction through operation to the end of a building's life cycle, all kinds of information are kept in a single three-dimensional model information database, so that design teams, construction units, facility operators, owners, and other personnel can collaborate on the basis of BIM, effectively improving working efficiency, saving resources, reducing costs, and supporting sustainable development. The core of BIM is to establish a virtual three-dimensional model of a construction project and use digital technology to provide a complete building-engineering information base consistent with the actual situation. This information base contains not only geometric information, professional attributes, and state information describing the building components, but also state information of non-component objects (such as spaces and motion behaviors). Such an information-bearing three-dimensional model greatly improves the degree of information integration in construction engineering and thereby provides a platform for information exchange and sharing among the stakeholders of the project.
In another embodiment, the position information and data information of each device are acquired through sensors installed on the equipment of the monitored object.
In another embodiment, superimposing the marked three-dimensional model on the video information comprises superimposing it through a three-dimensional engine. Specifically, the three-dimensional model, device information, sensor data, and the like can be overlaid on the existing surveillance video through a three-dimensional engine such as Unity3D or Unreal.
In another embodiment, superimposing the marked three-dimensional model on the video information comprises: using the video information as the display background; setting the three-dimensional model transparent and superimposing it on the video information; and displaying the object information on the display background in an animated manner. It can be understood that when the three-dimensional model is transparent, the object information appears to be marked directly on the actual scene or device. Moreover, when the dynamic position of a piece of object information moves behind a device or scene element of the three-dimensional model, the information is occluded even though the model itself is transparent. This creates the effect that the object information is attached to the actual device, enhances the user's spatial perception of the data, makes the display conform better to human visual intuition, and improves judgment accuracy.
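The occlusion behavior described above amounts to a depth test: a label is hidden whenever the invisible model surface lies in front of the label's anchor point as seen from the camera. A minimal numpy sketch under assumed data:

```python
import numpy as np

def label_visible(label_px, label_depth, model_depth_map) -> bool:
    """Depth-test a label anchor against the depth buffer of the transparent model.

    label_px:        (u, v) pixel position of the label anchor in the frame
    label_depth:     distance of the anchor from the camera
    model_depth_map: per-pixel depth of the 3D model rendered from the camera pose
                     (the model's color is transparent, but its depth is still kept)
    """
    u, v = label_px
    return label_depth <= model_depth_map[v, u] + 1e-3  # small tolerance

# Assumed example: a 1080p depth buffer with a wall surface 4 m from the camera.
depth_map = np.full((1080, 1920), np.inf)
depth_map[400:700, 800:1200] = 4.0                 # the rendered wall region
print(label_visible((1000, 500), 6.0, depth_map))  # False: label is behind the wall
print(label_visible((1000, 500), 3.0, depth_map))  # True: label is in front of it
```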
Referring to fig. 2, fig. 2 is a schematic diagram of a monitoring scene marking system according to an embodiment of the invention.
In this embodiment, the monitoring scene marking system includes a digital twin base module 80, a video platform server module 83, a sensor communication service module 81, an asset database module 82, and a mixed reality display module 84. The digital twin base module 80 is connected to the video platform server module 83, the sensor communication service module 81, and the asset database module 82, and the mixed reality display module 84 is connected to the digital twin base module 80 and the video platform server module 83, wherein:
the digital twin basic module 80 is used for storing a three-dimensional reconstruction model of a target scene, and mainly comprises a high-precision basic three-dimensional model of the target scene (scene, key equipment) and a monitoring camera; meanwhile, structured and unstructured data information transmitted by other modules, including but not limited to sensor information, asset standing book information, maintenance and repair information and the like, and corresponding position and posture information of various devices in the model are received, and relevant data information is marked on the three-dimensional reconstruction model.
A digital twin is a simulation process that integrates multiple disciplines, physical quantities, scales, and probabilities by making full use of physical models, sensor updates, operating history, and other data; the mapping is completed in virtual space so as to reflect the full life cycle of the corresponding physical equipment. A digital twin can be viewed as a digital mapping system of one or more important, interdependent equipment systems. As a generally applicable theoretical and technical system, it can be applied in many fields; at present it is used mostly in product design, manufacturing, medical analysis, and engineering construction. In China the deepest applications are in engineering construction, while the greatest attention and hottest research are in intelligent manufacturing.
The video platform server module 83 is mainly used for storing the ID codes, attitude information, and focal length information of the surveillance cameras, for forwarding the video streams of the surveillance cameras, and for sending the ID codes, attitudes, and focal lengths of the cameras to the digital twin base module 80.
The sensor communication service module 81 contains the point location information and sensor data of the sensors corresponding to each device in the target scene; after the data are aggregated, the structured sensor data of each device are transmitted to the digital twin base module 80 through the TCP/IP protocol.
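A minimal sketch of this sensor-to-digital-twin transmission is given below, with an assumed JSON message layout and endpoint; the patent specifies only that aggregated structured data is sent over TCP/IP.

```python
import json
import socket

# Assumed endpoint of the digital twin base module and an assumed message schema.
TWIN_HOST, TWIN_PORT = "192.0.2.10", 9000

reading = {
    "device_id": "PUMP-01",                    # which device the sensor belongs to
    "point": {"x": 4.0, "y": 1.0, "z": 1.5},   # point location of the sensor
    "pressure_kpa": 512.3,                     # example sensor value
    "timestamp": "2020-08-27T12:00:00Z",
}

with socket.create_connection((TWIN_HOST, TWIN_PORT), timeout=5) as sock:
    # Newline-delimited JSON keeps the stream easy to parse on the receiving side.
    sock.sendall((json.dumps(reading) + "\n").encode("utf-8"))
```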
The asset database module 82 is used for storing the asset data of the target scene, i.e., the equipment information, including but not limited to device ID codes, purchase dates, defect and maintenance data, and other asset life-cycle data; the digital twin base module 80 obtains the relevant asset data by querying this module.
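The asset lookup can be sketched with an embedded SQLite database as follows; the table layout and sample values are assumptions for illustration, since the patent does not define a schema.

```python
import sqlite3

# Minimal sketch of the asset-ledger lookup; table and column names are assumed.
conn = sqlite3.connect("assets.db")
conn.execute("""CREATE TABLE IF NOT EXISTS assets (
    device_id TEXT PRIMARY KEY, device_name TEXT,
    purchase_date TEXT, defect TEXT, maintenance TEXT)""")
conn.execute("INSERT OR REPLACE INTO assets VALUES (?,?,?,?,?)",
             ("PUMP-01", "Fire service water supply pump",
              "2019-06-01", "none", "2020-03-15"))
conn.commit()

def fetch_asset(device_id: str) -> dict:
    """Query the asset ledger for one device, as the digital twin base module would."""
    row = conn.execute("SELECT * FROM assets WHERE device_id = ?",
                       (device_id,)).fetchone()
    cols = ["device_id", "device_name", "purchase_date", "defect", "maintenance"]
    return dict(zip(cols, row)) if row else {}

print(fetch_asset("PUMP-01"))
```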
The mixed reality display module 84 is configured to use a three-dimensional engine, such as Unity3D or Unreal, to overlay the three-dimensional model marked with the relevant data information, as transmitted by the digital twin base module 80, on the existing surveillance video.
Referring to fig. 3, fig. 3 is a schematic diagram of a monitoring scene marking method according to another embodiment of the invention.
In this embodiment, the lidar collects data of the target scene and performs three-dimensional reconstruction of the scene and its equipment, acquires the three-dimensional coordinates of the surveillance cameras, and transmits the reconstructed model and the camera coordinates to the digital twin base module. The sensors corresponding to the devices in the target scene transmit their data to the digital twin base module through the sensor communication service module. The video platform server module acquires the attitude information and focal length information of each surveillance camera, transmits them to the digital twin base module, and forwards the video streams captured by the cameras to the mixed reality display module. The asset database module transmits the equipment information of the target scene, i.e., the asset data, to the digital twin base module. The digital twin base module marks the relevant asset data on the three-dimensional model and transmits the model to the mixed reality display module, which overlays the marked model on the existing surveillance video.
Referring to fig. 4, fig. 4 is a schematic view illustrating a mixed reality display method according to an embodiment of the invention.
In this embodiment, the distortion of the video stream transmitted by the video platform server module is corrected frame by frame, and the three-dimensional model of the target scene is adjusted based on the attitude and focal length of the surveillance camera so that the model coincides with the actual field picture. The surveillance video is then used as the display background; the three-dimensional model is set transparent; the data information is displayed on the background in an animated manner; and finally the occlusion of the data information by the transparent model is computed and applied. In this way, when the dynamic position of a piece of data information lies behind a device or scene element of the three-dimensional model, the information is occluded even though the model is transparent, which creates the effect that the data is marked on the actual equipment, enhances the user's spatial perception of the data, makes the display conform better to human visual intuition, and improves judgment accuracy.
Referring to fig. 5 and 6, fig. 5 is a schematic view of a target video of a monitoring scene marking method according to an embodiment of the invention, and fig. 6 is a schematic view of a target video according to another embodiment. In fig. 5, the fire service water supply pipe is marked with the text "fire service water supply pipe", animated to circle around the pipe; the text is occluded as it passes behind the pipe, which creates the effect that the data information is marked on the actual equipment, enhances the user's spatial perception of the data, makes the display conform better to human visual intuition, and improves judgment accuracy. In fig. 6, the wall is marked with the text "wall space".
In the monitoring scene marking method, scanning information of the target scene is acquired through a three-dimensional scanning instrument; video information of the target scene is acquired through image acquisition equipment; object information of all target objects in the target scene is acquired; three-dimensional reconstruction is performed according to the scanning information to obtain a three-dimensional model of the target scene; the object information is marked on the three-dimensional model; and the marked three-dimensional model is superimposed on the video information to obtain the target video. Because the information marking is performed on the three-dimensional model of the target scene, existing hardware such as cameras does not need to be upgraded, which greatly reduces time and economic cost.
It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than here.
This embodiment further provides a monitoring scene marking apparatus, which is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated here. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware with a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 7 is a block diagram of a monitoring scene marking apparatus according to an embodiment of the present application, and as shown in fig. 7, the apparatus includes:
and the scanning module 10 is configured to acquire scanning information of a target scene through a three-dimensional scanning instrument.
And the video acquisition module 20 is configured to acquire video information of the target scene through an image acquisition device.
And an object information obtaining module 30, configured to obtain object information of all target objects in the target scene.
The object information obtaining module 30 is further configured to obtain object information of the target object through a database, where the object information of each target object is stored in advance in the database.
And the model reconstruction module 40 is used for performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene.
A model reconstruction module 40, further configured to:
performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene;
acquiring attitude information and focal length information of image acquisition equipment;
and adjusting the angle of the three-dimensional model based on the attitude information and the focal length information to enable the three-dimensional model to be matched with the video information.
A marking module 50 for marking object information on the three-dimensional model.
And the overlaying module 60 is configured to overlay the marked three-dimensional model on the video information to obtain a target video.
A superimposing module 60 further configured to:
adjusting the lens distortion of the video information frame by frame to make the video information matched with the three-dimensional model;
and superposing the marked three-dimensional model on the adjusted video information to obtain a target video.
And the overlaying module 60 is further used for overlaying the marked three-dimensional model on the video information through the three-dimensional engine.
A superimposing module 60 further configured to:
using the video information as a display background;
setting the three-dimensional model to be transparent and superposing the three-dimensional model on video information;
and displaying the object information on the display background in an animation mode.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. Modules implemented in hardware may be located in the same processor, or distributed among different processors in any combination.
In addition, the monitoring scene marking method described in the embodiment of the present application with reference to fig. 1 may be implemented by a computer device. Fig. 8 is a hardware structure diagram of a computer device according to an embodiment of the present application.
The computer device may comprise a processor 81 and a memory 82 in which computer program instructions are stored.
Specifically, the processor 81 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
The memory 82 may be used to store or cache various data files for processing and/or communication use, as well as possible computer program instructions executed by the processor 81.
The processor 81 implements any of the monitoring scene marking methods in the above embodiments by reading and executing the computer program instructions stored in the memory 82.
In some of these embodiments, the computer device may also include a communication interface 83 and a bus 80. As shown in fig. 8, the processor 81, the memory 82, and the communication interface 83 are connected via the bus 80 to complete communication therebetween.
The communication interface 83 is used for communication between the modules, apparatuses, units, and/or devices in the embodiments of the present application. It can also carry data communication with external components, such as external devices, image/data acquisition equipment, databases, external storage, and image/data processing workstations.
The computer device may execute the monitoring scene marking method of the embodiments of the present application based on the stored computer program instructions, thereby implementing the method described in conjunction with fig. 1.
In addition, in combination with the monitoring scene marking method in the foregoing embodiments, an embodiment of the present application provides a computer-readable storage medium having computer program instructions stored thereon; when executed by a processor, the instructions implement any of the monitoring scene marking methods of the above embodiments.
According to the monitoring scene marking method and apparatus, computer device, and storage medium described above, scanning information of a target scene is acquired through a three-dimensional scanning instrument; video information of the target scene is acquired through image acquisition equipment; object information of all target objects in the target scene is acquired; three-dimensional reconstruction is performed according to the scanning information to obtain a three-dimensional model of the target scene; the object information is marked on the three-dimensional model; and the marked three-dimensional model is superimposed on the video information to obtain a target video. Because the information marking is performed on the three-dimensional model of the target scene, existing hardware such as cameras does not need to be upgraded, which greatly reduces time and economic cost.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations are described; however, any combination of these technical features should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A monitoring scene marking method is characterized by comprising the following steps:
acquiring scanning information of a target scene through a three-dimensional scanning instrument;
acquiring video information of a target scene through image acquisition equipment;
acquiring object information of all target objects in a target scene;
performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene;
marking the object information on the three-dimensional model;
and superimposing the marked three-dimensional model on the video information to obtain a target video.
2. The monitoring scene marking method according to claim 1, wherein the acquiring object information of all target objects in the target scene comprises:
acquiring the object information of the target objects from a database in which the object information of each target object is stored in advance.
3. The monitoring scene marking method according to claim 1, wherein the superimposing the marked three-dimensional model on the video information to obtain a target video comprises:
correcting the lens distortion of the video information frame by frame so that the video information matches the three-dimensional model;
and superimposing the marked three-dimensional model on the corrected video information to obtain the target video.
4. The monitoring scene marking method according to claim 1, wherein the performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene comprises:
performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene;
acquiring attitude information and focal length information of the image acquisition equipment;
and adjusting the angle of the three-dimensional model based on the attitude information and the focal length information so that the three-dimensional model matches the video information.
5. The monitoring scene marking method according to claim 1, wherein the object information includes at least one of a device name, a device code, a purchase date, defect data, and maintenance data.
6. The monitoring scene marking method according to claim 1, wherein the superimposing the marked three-dimensional model on the video information comprises:
superimposing the marked three-dimensional model on the video information through a three-dimensional engine.
7. The monitoring scene marking method according to claim 1, wherein the superimposing the marked three-dimensional model on the video information comprises:
taking the video information as a display background;
setting the three-dimensional model transparent and superimposing it on the video information;
and displaying the object information on the display background in an animated manner.
8. A monitoring scene marking apparatus, comprising:
the scanning module is used for acquiring scanning information of a target scene through a three-dimensional scanning instrument;
the video acquisition module is used for acquiring video information of a target scene through image acquisition equipment;
the object information acquisition module is used for acquiring object information of all target objects in a target scene;
the model reconstruction module is used for performing three-dimensional reconstruction according to the scanning information to obtain a three-dimensional model of the target scene;
a labeling module for labeling the object information on the three-dimensional model;
and the overlay module is used for superimposing the marked three-dimensional model on the video information to obtain a target video.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the monitoring scene marking method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the monitoring scene marking method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010876995.3A CN112135091A (en) | 2020-08-27 | 2020-08-27 | Monitoring scene marking method and device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010876995.3A CN112135091A (en) | 2020-08-27 | 2020-08-27 | Monitoring scene marking method and device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112135091A (en) | 2020-12-25 |
Family
ID=73847367
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010876995.3A | Monitoring scene marking method and device, computer equipment and storage medium (CN112135091A, pending) | 2020-08-27 | 2020-08-27 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112135091A (en) |
- 2020-08-27: application CN202010876995.3A filed in China; published as CN112135091A (en), legal status Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060061583A1 (en) * | 2004-09-23 | 2006-03-23 | Conversion Works, Inc. | System and method for processing video images |
CN105425698A (en) * | 2015-11-09 | 2016-03-23 | 国网重庆市电力公司电力科学研究院 | Integrated management and control method and system for three-dimensional digital transformer station |
CN107223269A (en) * | 2016-12-29 | 2017-09-29 | 深圳前海达闼云端智能科技有限公司 | Three-dimensional scene positioning method and device |
CN107393017A (en) * | 2017-08-11 | 2017-11-24 | 北京铂石空间科技有限公司 | Image processing method, device, electronic equipment and storage medium |
CN109872401A (en) * | 2019-02-18 | 2019-06-11 | 中国铁路设计集团有限公司 | A kind of UAV Video augmented reality implementation method |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112819928A (en) * | 2021-01-27 | 2021-05-18 | 成都数字天空科技有限公司 | Model reconstruction method and device, electronic equipment and storage medium |
CN112819928B (en) * | 2021-01-27 | 2022-10-28 | 成都数字天空科技有限公司 | Model reconstruction method and device, electronic equipment and storage medium |
CN113286126A (en) * | 2021-05-28 | 2021-08-20 | Oppo广东移动通信有限公司 | Monitoring data processing method, system and related device |
CN114401451A (en) * | 2021-12-28 | 2022-04-26 | 有半岛(北京)信息科技有限公司 | Video editing method and device, electronic equipment and readable storage medium |
CN114090550A (en) * | 2022-01-19 | 2022-02-25 | 成都博恩思医学机器人有限公司 | Robot database construction method and system, electronic device and storage medium |
CN114966695A (en) * | 2022-05-11 | 2022-08-30 | 南京慧尔视软件科技有限公司 | Digital twin image processing method, device, equipment and medium of radar |
CN114966695B (en) * | 2022-05-11 | 2023-11-14 | 南京慧尔视软件科技有限公司 | Digital twin image processing method, device, equipment and medium for radar |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20201225