CN115604433A - Virtual-real combined three-dimensional visualization system - Google Patents

Info

Publication number
CN115604433A
Authority
CN
China
Prior art keywords
installation
monitoring
points
marking
combination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211228861.6A
Other languages
Chinese (zh)
Inventor
柯军扬
曹静
柯羊军
胡凯旋
杨艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Kewei Video Intelligence Technology Co ltd
Original Assignee
Anhui Kewei Video Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Kewei Video Intelligence Technology Co ltd filed Critical Anhui Kewei Video Intelligence Technology Co ltd
Priority to CN202211228861.6A priority Critical patent/CN115604433A/en
Publication of CN115604433A publication Critical patent/CN115604433A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • G06Q50/265Personal security, identity or safety
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images

Abstract

The invention discloses a virtual-real combined three-dimensional visualization system, which belongs to the technical field of campus monitoring and comprises a layout module, a display module and a server. The layout module is used for arranging cameras in a campus: it acquires the shooting performance of the cameras, sets installation characteristics according to the acquired shooting performance, establishes a campus panoramic model, marks areas to be selected that meet the installation requirements in the panoramic model according to the set installation characteristics, sets corresponding installation points to be selected in those areas, marks the corresponding monitoring ranges in the panoramic model according to the area information corresponding to the installation points to be selected, and marks the corresponding point location values. It then generates a monitoring range plan from the current panoramic model, analyzes the plan, and eliminates redundant installation points to be selected to obtain the target installation points, at which cameras are installed. The display module is used for displaying the monitoring data.

Description

Virtual-real combined three-dimensional visualization system
Technical Field
The invention belongs to the technical field of campus monitoring, and particularly relates to a virtual-real combined three-dimensional visualization system.
Background
Most video monitoring points for school safety are low-point monitoring based mainly on bullet and dome cameras installed at road level. Low-point monitoring resources, with a visual range of 5-100 meters, have obvious limitations and cannot meet the need for accurate wide-area, beyond-visual-range, all-weather monitoring. Low-point monitoring focuses on close-up shooting of local pictures and cannot take both the whole scene and local details into account, which is insufficient for linked use and comprehensive application of video. Meanwhile, in the campus video monitoring systems, emergency command systems and the like at the present stage, video viewing and scheduling take two-dimensional video as the core, which is often not intuitive enough: commanders cannot grasp the field situation in real time, and emergency command and scheduling efficiency is low. In addition, the various service systems are too decentralized and operate in isolation, making information difficult to find quickly; when conducting command management, workers need to look up relevant information on each subsystem separately, which reduces working efficiency, and a global monitoring video picture for visually displaying and calling each subsystem is lacking. Therefore, the invention provides a virtual-real combined three-dimensional visualization system.
Disclosure of Invention
In order to solve the problems of the scheme, the invention provides a virtual-real combined three-dimensional visualization system.
The purpose of the invention can be realized by the following technical scheme:
a virtual-real combined three-dimensional visualization system comprises a layout module, a display module and a server;
the layout module is used for performing camera layout in a campus, acquiring camera shooting performance, setting installation characteristics according to the acquired camera shooting performance, establishing a campus panoramic model, marking a to-be-selected area meeting installation requirements in the panoramic model according to the set installation characteristics, setting a corresponding to-be-selected installation point in the to-be-selected area, performing corresponding monitoring range marking in the panoramic model according to area information corresponding to the to-be-selected installation point, and marking a corresponding point location value; generating a monitoring range plan according to the current panoramic model, analyzing the monitoring range plan, eliminating redundant mounting points to be selected, obtaining target mounting points, and mounting a camera on the target mounting points;
the display module is used for displaying monitoring data, acquiring a panoramic model, carrying out corresponding monitoring association, carrying out corresponding virtual-real fusion through a virtual combination technology, acquiring a corresponding monitoring display model, and carrying out campus monitoring through the monitoring display model.
Further, the method for setting the corresponding mounting points to be selected in the area to be selected comprises the following steps:
dividing a region to be selected into a plurality of unit regions, and acquiring region information corresponding to each unit region, wherein the region information comprises an acquisition range, an installation convenience value and a monitoring region correction coefficient; marking the unit regions as i, wherein i = 1, 2, …, n, and n is a positive integer; respectively marking the acquisition range, the installation convenience value and the monitoring region correction coefficient as CMi, AZi and α; calculating corresponding point location values according to the formula QYi = b1 × α × CMi + b2 × AZi, wherein b1 and b2 are both proportional coefficients with value ranges 0 < b1 ≤ 1 and 0 < b2 ≤ 1; and selecting the unit region with the highest point location value as the installation point to be selected.
Further, the method for dividing the region to be selected into a plurality of unit regions comprises the following steps:
and establishing a corresponding unit area information matching table, matching the corresponding unit areas and the interval distances according to the corresponding camera information, and sequentially marking the corresponding unit areas of the areas to be selected according to the interval distances from top to bottom.
Further, the method for analyzing the monitoring range plan comprises the following steps:
and removing the mounting points to be selected preliminarily according to the monitoring overlapping range to obtain screening points, monitoring and combining the screening points to obtain a combination to be selected, performing priority selection on the obtained combination to be selected to obtain a target combination, and marking the corresponding screening points in the target combination as target mounting points.
Further, the method for removing the preliminary mounting points to be selected according to the monitoring range overlapping comprises the following steps:
step SA1: removing and sorting the mounting points to be selected according to the monitoring overlapping range and the corresponding point position value to obtain a first sequence;
step SA2: establishing a monitoring range discrimination model;
step SA3: removing the mounting points to be selected according to the first sequence, judging through a monitoring range judging model after removing to obtain a judging result, and removing the next mounting point to be selected when the judging result meets the requirement; and when the judgment result is that the installation points to be selected do not meet the requirements, recovering the installation points to be selected, removing the next installation points to be selected until all the installation points to be selected in the first sequence are correspondingly removed and judged, and marking the rest installation points to be selected as screening points.
Further, the method for selecting the priority of the obtained combination to be selected comprises the following steps:
marking the target mounting points in the combination to be selected as j, wherein j = 1, 2, …, m, and m is a positive integer; obtaining the point location value QYj corresponding to each target installation point in the combination to be selected, and calculating the combination representative value according to the formula DB = QY1 + QY2 + … + QYm; acquiring the standard installation cost CBZ of a single camera, and calculating the corresponding combination cost according to the formula ZHC = CBZ × (β1 + β2 + … + βm), wherein βj is the installation price adjustment coefficient corresponding to the corresponding target installation point; calculating the overlap value CDZ corresponding to the combination to be selected, and calculating the priority value according to the priority formula UY = b3 × DB − b4 × ZHC − b5 × CDZ, wherein b3, b4 and b5 are all proportional coefficients with value ranges 0 < b3 ≤ 1, 0 < b4 ≤ 1 and 0 < b5 ≤ 1; and selecting the combination to be selected with the maximum priority value as the target combination.
Compared with the prior art, the invention has the following beneficial effects: through the cooperation between the layout module and the display module, and based on multiple advanced technologies such as panoramic stitching cameras, AR augmented reality, data fusion and three-dimensional live-action GIS, the system realizes, on the premise of making full use of existing conventional equipment, comprehensive campus safety management that integrates video monitoring, intelligent alarms, perimeter protection, intelligent linkage and visual management applications, greatly improving the level of refined campus management. It transforms the existing video monitoring system and solves the problems of monitoring pictures being confused, disordered, monotonous, hard to understand and hard to recognize, realizing global controllability, stereoscopic presentation, three-dimensional visualization and virtual patrol, achieving the goals of "seeing", "seeing completely" and "seeing clearly", and truly realizing comprehensive informatization and modernization of campus management.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic block diagram of the present invention.
Detailed Description
The technical solutions of the present invention will be described below clearly and completely in conjunction with the embodiments, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
As shown in fig. 1, a virtual-real combined three-dimensional visualization system includes a layout module, a display module and a server;
the layout module is used for laying cameras in a campus, and the specific method comprises the following steps:
the method comprises the steps of obtaining shooting performance of a camera, namely shooting range, definition and the like, wherein the adopted camera is a high-definition panoramic camera which is the most advanced product of the current home and abroad technology and is formed by combining a plurality of high-definition cameras, the full-coverage monitoring of a large area, a large scene and 360-degree no dead angle and no blind area can be realized through a multi-picture splicing technology, clear monitoring pictures can be displayed under the condition of insufficient illumination especially, and the method has the technical characteristics of less consumption, high efficiency and strong scene sense. The device has no mechanical moving part, is firm and durable, is maintenance-free, particularly does not work due to freezing in an ice and snow or frost environment, and solves the defects of the existing low-point monitoring device mainly provided with a gun and a ball machine; setting installation characteristics according to the acquired shooting performance of the camera, establishing a campus panoramic model, marking a to-be-selected area meeting installation requirements in the panoramic model according to the set installation characteristics, setting a corresponding to-be-selected installation point in the to-be-selected area, marking a corresponding monitoring range in the panoramic model according to area information corresponding to the to-be-selected installation point, and marking a corresponding point position value; and generating a monitoring range plane graph according to the current panoramic model, analyzing the monitoring range plane graph, eliminating redundant mounting points to be selected, obtaining target mounting points, and mounting a camera on the target mounting points.
The method for setting the installation characteristics comprises: determining, in a manual mode and according to the camera shooting performance, the corresponding installation height interval and the acquisition range interval corresponding to each installation height; then setting the characteristics that the corresponding installation positions are required to meet; integrating these into a corresponding training set; establishing a corresponding installation characteristic analysis model based on a CNN network or a DNN network and training it with the established training set; and, after successful training, obtaining the corresponding installation characteristics through the trained installation characteristic analysis model.
The campus panoramic model can be established in a manual mode, or three-dimensional oblique-photography live-action modeling can be realized by unmanned aerial vehicle aerial photography to obtain campus live-action data. At a later stage, the live-action video data and the virtual three-dimensional model can be fused and associated through a virtual-real fusion technology, wherein the live-action video truly reflects all dynamic changes in the scene, building information outside the live-action data is replaced by three-dimensional virtual model data, and the live-action video and the model data are aligned and superposed in space and time; when the live-action visual angle changes, the superposed model data is updated correspondingly, so that the real-time scene of the monitoring area can be displayed very intuitively. Compared with manual modeling, this modeling mode is lower in cost, faster, based on the actual scene, and higher in realism.
Marking areas to be selected meeting installation requirements in the panoramic model according to the set installation characteristics, namely screening according to the installation characteristics and structural characteristics such as buildings in the panoramic model, specifically, establishing a corresponding area analysis model based on a CNN network or a DNN network, establishing a corresponding training set in a manual mode for training, analyzing and marking the areas to be selected in the panoramic model by the area analysis model after successful training, wherein the specific establishing and training process is common knowledge in the field, and therefore detailed description is not needed.
The method for setting the corresponding mounting points to be selected in the areas to be selected comprises the following steps:
dividing a region to be selected into a plurality of unit regions, and acquiring region information corresponding to each unit region, wherein the region information comprises an acquisition range, an installation convenience value and a monitoring region correction coefficient; marking the unit regions as i, wherein i = 1, 2, …, n, and n is a positive integer; respectively marking the acquisition range, the installation convenience value and the monitoring region correction coefficient as CMi, AZi and α; calculating corresponding point location values according to the formula QYi = b1 × α × CMi + b2 × AZi, wherein b1 and b2 are both proportional coefficients with value ranges 0 < b1 ≤ 1 and 0 < b2 ≤ 1; and selecting the unit region with the highest point location value as the installation point to be selected.
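The point location value computation can be sketched in Python. The formula QYi = b1 × α × CMi + b2 × AZi follows the disclosure, but the coefficient values b1, b2, the correction coefficient α, and the unit-area data below are illustrative assumptions, not values from the patent.

```python
def point_location_value(cm, az, alpha, b1=0.6, b2=0.4):
    """Point location value QYi = b1 * alpha * CMi + b2 * AZi."""
    return b1 * alpha * cm + b2 * az

def select_installation_point(unit_areas, alpha, b1=0.6, b2=0.4):
    """Return (id, QY) of the unit area with the highest point location value."""
    scored = [(point_location_value(u["cm"], u["az"], alpha, b1, b2), u["id"])
              for u in unit_areas]
    best_qy, best_id = max(scored)
    return best_id, best_qy

# Hypothetical unit areas of one region to be selected:
# "cm" = acquisition range (area), "az" = installation convenience value.
areas = [
    {"id": 1, "cm": 120.0, "az": 0.8},
    {"id": 2, "cm": 150.0, "az": 0.5},
    {"id": 3, "cm": 90.0,  "az": 0.9},
]
best_id, best_qy = select_installation_point(areas, alpha=0.9)
```

With these sample values the unit area with the widest acquisition range wins despite its lower convenience value, because b1 weights the correction-adjusted range more heavily than b2 weights convenience.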
The acquisition range is the corresponding acquisition range when the camera is installed at the position according to the installation characteristic analysis of the camera, and the acquisition range is the area.
The installation convenience value is set according to the installation difficulty of the corresponding unit area, specifically according to the installation environment corresponding to the area to be selected and the position of the unit area, wherein the installation environment is any information that influences installation: for example, the installation difficulty is high if the bottom of the area to be selected faces a river, if there are eaves, or if it is inconvenient to erect a ladder. The installation convenience value can be obtained by the prior art; alternatively, a corresponding convenience analysis model can be established based on a CNN network or a DNN network, a corresponding training set is established in a manual mode for training, and the convenience analysis model after successful training analyzes and obtains the corresponding installation convenience value.
the monitoring area correction coefficient is set according to the monitoring content corresponding to the corresponding monitoring range, that is, the monitoring content is different, the corresponding monitoring area correction coefficient has difference, specifically, a corresponding monitoring area correction coefficient matching table can be established manually according to the monitoring content possibly existing in the campus, and the corresponding monitoring area correction coefficient is obtained after matching.
The method for dividing the region to be selected into a plurality of unit regions comprises the following steps:
the size, shape and the like of the unit areas are set according to the installation requirements of the camera, corresponding interval distances are set, specifically, a corresponding unit area information matching table is established manually, the corresponding unit areas and the interval distances are matched according to the corresponding camera information, the areas to be selected are sequentially marked according to the interval distances and the sequence from top to bottom.
And generating a monitoring range plane graph according to the current panoramic model, namely generating a corresponding monitoring range plane graph according to the monitoring range corresponding to each installation point to be selected and the plane graph corresponding to the panoramic model, wherein corresponding drawings can be generated according to the prior art, so detailed description is not needed.
The method for analyzing the monitoring range plan comprises the following steps:
and removing the mounting points to be selected preliminarily according to the monitoring overlapping range to obtain screening points, monitoring and combining the screening points, namely combining the screening points under the condition of meeting the monitoring requirements of schools, combining the screening points in the prior art to obtain a combination to be selected, performing priority selection on the obtained combination to be selected to obtain a target combination, and marking the corresponding screening points in the target combination as target mounting points.
The method for removing the preliminary mounting points to be selected according to the monitoring range overlapping comprises the following steps:
step SA1: removing and sorting the mounting points to be selected according to the monitoring overlapping range and the corresponding point position value to obtain a first sequence;
the method comprises the steps of eliminating and sorting installation points to be selected according to monitoring overlapping ranges and corresponding point position values, namely sorting the installation points to be selected with the overlapping monitoring ranges, and removing the installation points to be selected with obvious overlapping, specifically, establishing a corresponding sorting model based on a CNN network or a DNN network, establishing a corresponding training set for training in a manual mode, and analyzing and sorting through the sorting model after the training is successful.
Step SA2: establishing a monitoring range distinguishing model;
the monitoring range judging model is used for judging whether the current monitoring range meets the monitoring requirement or not after the installation point to be selected is removed, namely judging whether the current monitoring range meets the requirement or not; specifically, a corresponding monitoring range discrimination model is established based on a CNN network or a DNN network, a corresponding training set is established in a manual mode for training, and analysis and judgment are performed through the monitoring range discrimination model after the training is successful.
Step SA3: removing the mounting points to be selected according to the first sequence, judging through a monitoring range judging model after removing to obtain a judging result, and removing the next mounting point to be selected when the judging result meets the requirement; and when the judgment result is that the installation points to be selected do not meet the requirements, recovering the installation points to be selected, removing the next installation points to be selected until all the installation points to be selected in the first sequence are correspondingly removed and judged, and marking the remaining installation points to be selected as screening points.
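Steps SA1-SA3 can be sketched as a greedy elimination loop. The trained sorting and discrimination models of the disclosure are replaced here by simple stand-ins, which is an assumption: an overlap-based sort for SA1 and a set-coverage check for the SA2 judgment, with monitoring ranges represented as sets of grid cells.

```python
def eliminate_candidates(candidates, required_coverage):
    """Greedy elimination of redundant installation points to be selected.

    candidates: list of (point_value, cells) pairs, where cells is the set of
    grid cells the camera at that point would monitor (an assumed
    representation of the monitoring range).
    required_coverage: cells that must remain monitored; this set-coverage
    check stands in for the trained monitoring range discrimination model.
    Returns the indices of the remaining screening points.
    """
    def overlap(idx):
        # cells shared with any other candidate's monitoring range
        others = set()
        for k, (_, cells) in enumerate(candidates):
            if k != idx:
                others |= cells
        return len(candidates[idx][1] & others)

    # Step SA1: removal order - heaviest overlap first, lower point value first
    order = sorted(range(len(candidates)),
                   key=lambda k: (-overlap(k), candidates[k][0]))

    kept = set(range(len(candidates)))
    for k in order:                       # Step SA3: tentative removal
        kept.discard(k)
        covered = set().union(*(candidates[i][1] for i in kept)) if kept else set()
        if not required_coverage <= covered:
            kept.add(k)                   # SA2 judgment failed: recover the point
    return sorted(kept)

cands = [(10, {1, 2, 3}), (8, {2, 3, 4}), (12, {4, 5})]
screening = eliminate_candidates(cands, required_coverage={1, 2, 3, 4, 5})
```

In this sample the middle point is fully redundant (its cells are covered by the other two), so only the first and third points survive as screening points.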
The method for selecting the priority of the obtained combination to be selected comprises the following steps:
The target mounting points in the combination to be selected are marked as j, wherein j = 1, 2, …, m, and m is a positive integer; the point location value QYj corresponding to each target installation point in the combination to be selected is obtained, and the combination representative value is calculated according to the formula DB = QY1 + QY2 + … + QYm. The standard installation cost CBZ of a single camera is acquired, wherein the standard installation cost is the installation cost when the corresponding installation convenience value is designated as the standard value in a manual mode, and the corresponding combination cost is calculated according to the formula ZHC = CBZ × (β1 + β2 + … + βm), wherein βj is the installation price adjustment coefficient corresponding to the corresponding target installation point; specifically, βj is matched according to the corresponding installation convenience value, and the corresponding installation price adjustment coefficient matching table is established manually according to the installation convenience values. The overlap value CDZ corresponding to the combination to be selected is calculated, and the priority value is calculated according to the priority formula UY = b3 × DB − b4 × ZHC − b5 × CDZ, wherein b3, b4 and b5 are all proportional coefficients with value ranges 0 < b3 ≤ 1, 0 < b4 ≤ 1 and 0 < b5 ≤ 1; the combination to be selected with the maximum priority value is selected as the target combination.
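The priority selection can be sketched as follows. Since the formula images are unreadable in the source, this sketch assumes DB is the sum of the point location values QYj and ZHC is the standard cost CBZ scaled by the summed adjustment coefficients βj; the coefficient values and the sample combinations are illustrative.

```python
def priority_value(qy, betas, cbz, cdz, b3=1.0, b4=0.5, b5=0.5):
    """UY = b3 x DB - b4 x ZHC - b5 x CDZ for one combination to be selected."""
    db = sum(qy)             # combination representative value DB (assumed sum)
    zhc = cbz * sum(betas)   # combination cost ZHC (assumed scaled sum)
    return b3 * db - b4 * zhc - b5 * cdz

def select_target_combination(combinations, cbz, **coeffs):
    """Index of the combination to be selected with the maximum priority value."""
    scores = [priority_value(c["qy"], c["beta"], cbz, c["cdz"], **coeffs)
              for c in combinations]
    return scores.index(max(scores))

# Hypothetical combinations: per-point values QYj, adjustment coefficients
# beta_j, and a precomputed overlap value CDZ.
combos = [
    {"qy": [80.0, 70.0],       "beta": [1.0, 1.2],      "cdz": 5.0},
    {"qy": [90.0, 60.0, 40.0], "beta": [1.0, 1.0, 1.5], "cdz": 20.0},
]
target = select_target_combination(combos, cbz=10.0)
```

Here the three-camera combination wins even with its higher cost and overlap, because the b3-weighted representative value dominates the b4 and b5 penalties at these sample coefficients.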
The overlap value is calculated and set based on the overlap of each monitoring range, specifically, a corresponding overlap model is established based on a CNN network or a DNN network, a corresponding training set is established in a manual mode for training, the overlap model after the training is successful is analyzed, and the corresponding overlap value is obtained, wherein the higher the overlap rate is, the larger the overlap value is.
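As a geometric stand-in for the trained overlap model, the overlap value can be approximated by summing pairwise intersection areas of the monitoring ranges. Modelling each range as a circle is our assumption, not the disclosure's; it preserves the stated monotonicity that higher overlap gives a larger value.

```python
import math

def overlap_value(ranges):
    """Approximate overlap value CDZ: sum of pairwise intersection areas.

    Each monitoring range is modelled as a circle (cx, cy, r); more overlap
    between ranges yields a larger CDZ.
    """
    def lens_area(c1, c2):
        (x1, y1, r1), (x2, y2, r2) = c1, c2
        d = math.hypot(x2 - x1, y2 - y1)
        if d >= r1 + r2:
            return 0.0                         # disjoint circles
        if d <= abs(r1 - r2):
            return math.pi * min(r1, r2) ** 2  # one circle inside the other
        # standard circle-circle intersection (lens) area
        a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
        a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
        a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                             * (d - r1 + r2) * (d + r1 + r2))
        return a1 + a2 - a3
    return sum(lens_area(ranges[i], ranges[j])
               for i in range(len(ranges))
               for j in range(i + 1, len(ranges)))
```

Disjoint ranges contribute nothing, while coincident ranges contribute the full circle area, so CDZ grows smoothly with the overlap rate.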
The display module is used for displaying the monitoring data, and the specific method comprises the following steps:
acquiring a panoramic model, and performing corresponding monitoring association, namely performing associated display on a monitoring video of a corresponding camera at a corresponding position in the panoramic model, wherein the corresponding association can be performed through the prior art, so detailed description is not required; and performing corresponding virtual-real fusion through a virtual combination technology to obtain a corresponding monitoring display model, and performing campus monitoring through the monitoring display model.
Corresponding virtual-real fusion is carried out through a virtual combination technology, and the virtual-real fusion can be carried out through the prior art; the real-scene video data and the virtual three-dimensional model are fused in a correlation mode through a virtual-real fusion technology, wherein the real-scene video can truly reflect all dynamic changes in a scene, the real-scene video and the model data are aligned and overlapped in space and time, when a real-scene visual angle changes, the overlapped model data also generate corresponding updating changes, and the real-time scene of a monitoring area can be displayed very visually. In this way, assistance can be provided for security personnel to quickly understand the campus layout.
The three-dimensional live-action model and the monitoring point locations are displayed in an associated manner. On this basis, the three-dimensional model can be fused into the panoramic camera video pictures accessed by the system, and the point locations of the cameras within a set view range are displayed at the corresponding installation point locations of the model, so that a manager can independently select a preview picture of a camera of interest on the panoramic live-action picture. This helps security personnel preview the picture of a target point location camera in advance and quickly exclude irrelevant pictures, achieving the effect of three-dimensional associated prevention and control.
the above formulas are all calculated by removing dimensions and taking numerical values thereof, the formula is a formula which is obtained by acquiring a large amount of data and performing software simulation to obtain the closest real situation, and the preset parameters and the preset threshold value in the formula are set by the technical personnel in the field according to the actual situation or obtained by simulating a large amount of data.
Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the present invention.

Claims (6)

1. A virtual-real combined three-dimensional visualization system is characterized by comprising a layout module, a display module and a server;
the arrangement module is used for arranging cameras in a campus, acquiring the shooting performance of the cameras, setting installation characteristics according to the acquired shooting performance of the cameras, establishing a campus panoramic model, marking a to-be-selected area meeting the installation requirements in the panoramic model according to the set installation characteristics, setting a corresponding to-be-selected installation point in the to-be-selected area, marking a corresponding monitoring range in the panoramic model according to area information corresponding to the to-be-selected installation point, and marking a corresponding point position value; generating a monitoring range plan according to the current panoramic model, analyzing the monitoring range plan, eliminating redundant mounting points to be selected, obtaining target mounting points, and mounting a camera on the target mounting points;
the display module is used for displaying monitoring data, acquiring the panoramic model, carrying out corresponding monitoring association, carrying out corresponding virtual-real fusion through a virtual combination technology, acquiring a corresponding monitoring display model, and carrying out campus monitoring through the monitoring display model.
2. The virtual-real combined three-dimensional visualization system according to claim 1, wherein the method for setting the corresponding candidate installation point in the candidate area comprises the following steps:
dividing the to-be-selected area into a plurality of unit areas and acquiring area information corresponding to each unit area, wherein the area information comprises an acquisition range, an installation convenience value and a monitoring area correction coefficient; marking the unit areas as i, i = 1, 2, …, n, where n is a positive integer; marking the acquisition range, the installation convenience value and the monitoring area correction coefficient as CMi, AZi and α respectively; calculating corresponding point location values according to the formula QYi = b1 × α × CMi + b2 × AZi, wherein b1 and b2 are both proportional coefficients with value ranges 0 < b1 ≤ 1 and 0 < b2 ≤ 1; and selecting the unit area with the highest point location value as the installation point to be selected.
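The point-location scoring in claim 2 can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the symbol names (CMi, AZi, α, b1, b2) follow the claim, but the sample values and the choice of coefficients are invented for demonstration only.

```python
# Hypothetical sketch of the point-value scoring of claim 2.
# QYi = b1 * alpha * CMi + b2 * AZi, with 0 < b1, b2 <= 1.

def point_value(cm: float, az: float, alpha: float,
                b1: float = 0.6, b2: float = 0.4) -> float:
    """Score one unit area from its acquisition range (CMi),
    installation convenience value (AZi) and correction coefficient (alpha)."""
    assert 0 < b1 <= 1 and 0 < b2 <= 1
    return b1 * alpha * cm + b2 * az

# One record per unit area: (acquisition range CMi, convenience value AZi)
unit_areas = [(80.0, 50.0), (95.0, 30.0), (70.0, 70.0)]
alpha = 0.9  # monitoring-area correction coefficient (illustrative)

scores = [point_value(cm, az, alpha) for cm, az in unit_areas]
best = max(range(len(scores)), key=scores.__getitem__)
print(best, scores[best])  # index of the chosen candidate installation point
```

The unit area with the highest score becomes the candidate installation point for that to-be-selected area.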
3. The system of claim 2, wherein the method for dividing the candidate area into a plurality of unit areas comprises:
establishing a corresponding unit area information matching table; matching the corresponding unit area size and interval distance according to the corresponding camera information; and marking the corresponding unit areas in the to-be-selected area in sequence, from top to bottom, according to the interval distance.
4. The system of claim 2, wherein the method for analyzing the monitoring range plan comprises:
and removing the mounting points to be selected preliminarily according to the monitoring overlapping range to obtain screening points, monitoring and combining the screening points to obtain a combination to be selected, performing priority selection on the obtained combination to be selected to obtain a target combination, and marking the corresponding screening points in the target combination as target mounting points.
5. The virtual-real combined three-dimensional visualization system according to claim 4, wherein the method for performing preliminary candidate installation point elimination according to monitoring range overlapping comprises:
step SA1: removing and sorting the mounting points to be selected according to the monitoring overlapping range and the corresponding point position value to obtain a first sequence;
step SA2: establishing a monitoring range distinguishing model;
step SA3: removing the mounting points to be selected according to the first sequence, judging through a monitoring range judging model after removing to obtain a judging result, and removing the next mounting point to be selected when the judging result meets the requirement; and when the judgment result is that the installation points to be selected do not meet the requirements, recovering the installation points to be selected, removing the next installation points to be selected until all the installation points to be selected in the first sequence are correspondingly removed and judged, and marking the rest installation points to be selected as screening points.
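Steps SA1–SA3 describe a greedy elimination loop. The sketch below is a simplified reading, not the patented method: the "monitoring range discrimination model" of step SA2 is reduced to a plain coverage check (the remaining points must still cover every required cell), candidates are ordered by ascending point value alone, and all data structures and values are invented for illustration.

```python
# Simplified sketch of the SA1-SA3 elimination loop of claim 5.
# Assumption: each candidate point covers a set of grid cells, and the
# "discrimination model" is approximated by a full-coverage check.

def screen_points(points, required_cells):
    """points: {name: (point_value, covered_cells)}; returns the kept names."""
    # Step SA1: order candidates, here simply by ascending point value.
    order = sorted(points, key=lambda name: points[name][0])
    kept = set(points)
    for name in order:                        # Step SA3: try removing each point
        kept.discard(name)
        covered = set().union(*(points[p][1] for p in kept)) if kept else set()
        if not required_cells <= covered:     # judgment fails: restore the point
            kept.add(name)
    return kept                               # remaining points = screening points

points = {
    "A": (63.2, {1, 2, 3}),
    "B": (63.3, {3, 4}),
    "C": (65.8, {1, 2, 4, 5}),
}
print(screen_points(points, {1, 2, 3, 4, 5}))
```

In this toy run, point A is redundant (B and C together still cover everything), while removing B or C would leave a gap, so both are restored and kept as screening points.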
6. The system of claim 4, wherein the method for selecting the priority of the obtained candidate combinations comprises:
marking the target mounting points in the combination to be selected as j, j = 1, 2, …, m, where m is a positive integer; obtaining the point location value QYj corresponding to each target installation point in the combination to be selected, and calculating a combination representative value according to the formula
Figure FDA0003880648700000021
obtaining the standard installation cost CBZ of a single camera, and calculating the corresponding combination cost according to the formula
Figure FDA0003880648700000022
wherein βj is the installation price adjustment coefficient corresponding to the corresponding target installation point; calculating the overlapping value CDZ corresponding to the combination to be selected; calculating a priority value according to the priority formula UY = b3 × DB - b4 × ZHC - b5 × CDZ, wherein b3, b4 and b5 are all proportional coefficients with value ranges 0 < b3 ≤ 1, 0 < b4 ≤ 1 and 0 < b5 ≤ 1; and selecting the combination to be selected with the maximum priority value as the target combination.
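The priority selection of claim 6 can be sketched as follows. Note the two inner formulas are published only as images (FDA0003880648700000021/22), so this sketch ASSUMES plausible readings: the combination representative value DB as the sum of the point values QYj, and the combination cost ZHC as the sum of βj × CBZ over the target points. All coefficients and sample numbers are illustrative, not from the patent.

```python
# Hypothetical sketch of the priority formula of claim 6:
# UY = b3 * DB - b4 * ZHC - b5 * CDZ, with 0 < b3, b4, b5 <= 1.
# ASSUMED (image-only in the claim): DB = sum(QYj), ZHC = sum(beta_j * CBZ).

def priority(qy, betas, cbz, cdz, b3=1.0, b4=0.5, b5=0.5):
    """Score one candidate combination of target installation points."""
    db = sum(qy)                        # assumed combination representative value
    zhc = sum(b * cbz for b in betas)   # assumed combination cost
    return b3 * db - b4 * zhc - b5 * cdz

# Each candidate combination: (point values QYj, coefficients beta_j,
# standard installation cost CBZ, overlapping value CDZ)
candidates = {
    "combo1": ([63.3, 65.8], [1.0, 1.2], 10.0, 4.0),
    "combo2": ([63.2, 63.3, 65.8], [1.0, 1.0, 1.2], 10.0, 9.0),
}
target = max(candidates, key=lambda name: priority(*candidates[name]))
print(target)  # combination with the maximum priority value
```

The combination with the maximum UY becomes the target combination, and its screening points become the target installation points.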
CN202211228861.6A 2022-10-09 2022-10-09 Virtual-real combined three-dimensional visualization system Withdrawn CN115604433A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211228861.6A CN115604433A (en) 2022-10-09 2022-10-09 Virtual-real combined three-dimensional visualization system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211228861.6A CN115604433A (en) 2022-10-09 2022-10-09 Virtual-real combined three-dimensional visualization system

Publications (1)

Publication Number Publication Date
CN115604433A true CN115604433A (en) 2023-01-13

Family

ID=84847378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211228861.6A Withdrawn CN115604433A (en) 2022-10-09 2022-10-09 Virtual-real combined three-dimensional visualization system

Country Status (1)

Country Link
CN (1) CN115604433A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116307622A * 2023-03-31 2023-06-23 中国葛洲坝集团电力有限责任公司 Basic construction management and control system suitable for wind power building installation industry
CN116778129A * 2023-08-18 2023-09-19 煤炭科学研究总院有限公司 Marking method and device for coal mine three-dimensional roadway page
CN116778129B * 2023-08-18 2023-11-21 煤炭科学研究总院有限公司 Marking method and device for coal mine three-dimensional roadway page

Similar Documents

Publication Publication Date Title
CN112053446B (en) Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
CN111836012B (en) Video fusion and video linkage method based on three-dimensional scene and electronic equipment
CN115604433A (en) Virtual-real combined three-dimensional visualization system
CN105516654B (en) A kind of supervision of the cities video fusion method based on scene structure analysis
CN101859433B (en) Image mosaic device and method
CN112449093A (en) Three-dimensional panoramic video fusion monitoring platform
CN105554447A (en) Image processing technology-based coal mining face real-time video splicing system
CN103167270B (en) Personnel&#39;s head image pickup method, system and server
CN103198488A (en) PTZ surveillance camera realtime posture rapid estimation method
CN110660125B (en) Three-dimensional modeling device for power distribution network system
CN101719986A (en) PTZ tracking method and system based on multi-layered full-view modeling
CN114419231B (en) Traffic facility vector identification, extraction and analysis system based on point cloud data and AI technology
CN111586351A (en) Visual monitoring system and method for fusion of three-dimensional videos of venue
CN112383745A (en) Panoramic image-based digital presentation method for large-scale clustered construction project
CN110896462B (en) Control method, device and equipment of video monitoring cluster and storage medium
CN107809611A (en) A kind of 720 ° of Panoramic Warping method for real-time monitoring of hydraulic engineering and system
CN111222190A (en) Ancient building management system
CN115375779B (en) Method and system for camera AR live-action annotation
CN115798265B (en) Digital tower construction method based on digital twin technology and implementation system thereof
CN111683221B (en) Real-time video monitoring method and system for natural resources embedded with vector red line data
CN115393192A (en) Multi-point multi-view video fusion method and system based on general plane diagram
CN103986905A (en) Method for video space real-time roaming based on line characteristics in 3D environment
CN111083368A (en) Simulation physics cloud platform panoramic video display system based on high in clouds
CN112911221B (en) Remote live-action storage supervision system based on 5G and VR videos
CN115859689B (en) Panoramic visualization digital twin application method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20230113