CN115393192A - Multi-point multi-view video fusion method and system based on general plane diagram - Google Patents
Multi-point multi-view video fusion method and system based on general plane diagram
- Publication number
- CN115393192A CN115393192A CN202211033403.7A CN202211033403A CN115393192A CN 115393192 A CN115393192 A CN 115393192A CN 202211033403 A CN202211033403 A CN 202211033403A CN 115393192 A CN115393192 A CN 115393192A
- Authority
- CN
- China
- Prior art keywords
- monitoring
- fusion
- video
- image
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a multi-point multi-view video fusion method and system based on a general plan, relating to digital construction supervision and related fields. The method comprises the following steps: planning full-coverage monitoring point locations and deploying equipment based on the engineering project general plan; dividing monitoring areas by building space-height layers; fusing multi-angle videos within each local monitoring area; extracting key areas from the locally fused video; filling the extracted areas into the general plan used as a base map and fusing them; rapidly adding or removing camera pictures to fill blind spots or deduplicate; and performing color equalization and seam-removal fusion optimization on the edges of the multiple video regions. The method removes the dependence of multi-point multi-view video fusion on a three-dimensional model, and avoids the cost of frequently updating such a model for a complex and changeable construction-site environment; it realizes synchronous supervision and scheduling command of multiple construction working faces, roads, storage yards and the like within the managed project, improving supervision capacity and efficiency; and it has high practicability and universality.
Description
Technical Field
The invention belongs to the technical fields of digital construction, intelligent construction, safe and green construction, video security supervision and the like, particularly relates to image fusion processing, photogrammetry and augmented reality, and more particularly relates to a multi-point multi-view video fusion method and system based on a general plan.
Background
Traditional construction-site monitoring systems suffer from scattered video pictures, numerous disconnected subsystems and a lack of overall spatial context, and cannot meet the supervision requirements of modern construction production.
With the digitization of building engineering, especially in intelligent construction, big data and artificial intelligence are increasingly applied on construction sites. The multi-point multi-view surveillance video fusion currently popular on the market must rely on a static three-dimensional model, such as an oblique-photography model or a BIM model, for projection mapping, and the surveillance cameras are all deployed at static point locations. Because a construction site changes continuously, video fusion based on a three-dimensional model would require dynamic re-modeling as the building face rises or the stockpiles in a yard change; such high-frequency repeated three-dimensional modeling is costly; monitoring distribution parameters must be adjusted manually after each jacking of the tower crane carrying the camera group; later manual intervention and maintenance are needed; and real-time performance cannot be guaranteed. This technology therefore cannot cope with a construction site that changes day by day, yet without a three-dimensional model, multi-point multi-view pictures (e.g., from multiple tower cranes at different heights) cannot be stitched and fused.
Therefore, how to effectively realize overall-plan video monitoring of an intelligent digital construction site is a problem to be solved urgently.
Disclosure of Invention
Aiming at the above defects or improvement requirements of the prior art, the invention provides a multi-point multi-view video fusion method and system based on a general plan, which can effectively realize overall-plan video monitoring of an intelligent digital construction site.
To achieve the above object, according to one aspect of the present invention, there is provided a multi-point multi-view video fusion method based on a general plan, comprising:
analyzing the engineering project general plan to plan the deployment point locations of the monitoring equipment, so that the monitoring equipment fully covers the whole project construction-site area;
analyzing the spatial-height working faces of the engineering project general plan, and dividing target areas of different spatial heights into regions;
retrieving the relevant monitoring pictures of each region for image feature-point analysis, calculating the corresponding positions and transformation relations between the monitoring devices of each region, preliminarily stitching the multi-angle monitoring video pictures, and fusing the stitched images;
applying image edge-recognition cropping to each target area in the fused video monitoring picture;
filling the cropped locally fused video pictures into the corresponding areas of the engineering project general plan and fusing them;
analyzing the region-filled and fused picture, rapidly adding cameras to fill blind spots where area coverage is missing, and rapidly deleting cameras to deduplicate where pictures overlap;
and performing overall color equalization and seam-removal fusion optimization on the edges of the locally fused video regions.
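The stitching step above rests on estimating the transformation relation between cameras from matched feature points. The patent does not specify the estimation algorithm, so the following is only an illustrative sketch: a homography recovered from four or more hypothetical point matches by the standard direct linear transform (DLT).

```python
import numpy as np

def homography_dlt(src_pts, dst_pts):
    """Estimate a 3x3 homography from >= 4 matched point pairs (DLT).

    src_pts, dst_pts: (N, 2) arrays of matched feature coordinates
    (hypothetical matches from the feature-point analysis step).
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    # Solution of A h = 0 is the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H with the homogeneous divide."""
    pts = np.asarray(pts, dtype=float)
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

In practice the matches would come from a robust pipeline (feature detection plus outlier rejection such as RANSAC); the DLT shown here assumes clean correspondences.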
In some optional embodiments, planning the deployment point locations of the monitoring equipment based on analysis of the engineering project general plan comprises:
automatically planning the deployment point locations of the monitoring equipment based on analysis of the engineering project general plan, so that the monitoring equipment comprehensively covers the whole project construction-site area, and, after the monitoring equipment is erected, planning the wired and/or wireless network layout and adaptive networking.
In some optional embodiments, analyzing the spatial-height working faces of the engineering project general plan and dividing target areas of different spatial heights into regions comprises:
analyzing the spatial-height working faces of the engineering project general plan, dividing target areas of different spatial heights on the construction site (such as construction faces, roads and storage yards) into grid regions, and, for the target equipment carrying the monitoring camera groups (including tower cranes, construction elevators, lighting poles and the like), likewise dividing the main construction face and the target areas whose construction height increases into regions.
In some optional embodiments, retrieving the relevant monitoring pictures of each region for image feature-point analysis, calculating the corresponding positions and transformation relations between the monitoring devices of each region, preliminarily stitching the multi-angle monitoring video pictures, and fusing the stitched images comprises:
preliminarily stitching the multi-angle monitoring video pictures by calculating the corresponding positions and image transformation relations between monitoring devices at the same point location but different angles, and fusing the stitched images.
In some optional embodiments, performing image edge-recognition cropping on each target area in the fused video monitoring picture comprises:
performing image edge-recognition cropping on the target areas (such as construction faces, roads and storage yards) in the locally fused video monitoring picture, and temporarily storing the cropped pictures as building-block picture modules.
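The edge-recognition cropping step can be illustrated with a minimal sketch: assuming a grayscale picture, crop to the bounding box of strong gradients. A real system would use a proper edge detector or learned segmentation; this NumPy stand-in only shows the idea.

```python
import numpy as np

def edge_crop(img, grad_thresh=10.0):
    """Crop a grayscale image to the bounding box of its strong edges.

    Simple finite-difference gradients mark the region of interest;
    this stands in for the edge-recognition cropping of a target area.
    """
    img = np.asarray(img, dtype=float)
    # Absolute vertical and horizontal finite differences.
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1]))
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    mask = (gx + gy) > grad_thresh
    if not mask.any():
        return img  # no strong edges: nothing to crop
    ys, xs = np.nonzero(mask)
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

The crop keeps the pixel row/column just past each falling edge, since the finite difference registers a transition at the pixel after it.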
In some optional embodiments, filling the cropped locally fused video pictures into the corresponding areas of the engineering project general plan and fusing them comprises:
fusing the locally fused, edge-cropped video into the block corresponding to the graticule of the engineering project general plan; parameters need to be set only on first use, and unattended operation can be achieved later as long as the monitoring layout is not dismantled or changed.
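Filling a cropped tile into its graticule block can be sketched as a scale-aware paste. The cell position and the two scale factors below are illustrative assumptions (in practice they would come from the plan's scale information and the first-use parameter setting):

```python
import numpy as np

def paste_tile(base_map, tile, cell, meters_per_px_map, meters_per_px_tile):
    """Place a cropped fusion tile into its graticule cell on the base map.

    cell: (row, col) of the cell's top-left pixel in the base map.
    The tile is rescaled to the base map's scale with nearest-neighbour
    sampling, then written over the cell.
    """
    scale = meters_per_px_tile / meters_per_px_map  # tile px -> map px
    out_h = int(round(tile.shape[0] * scale))
    out_w = int(round(tile.shape[1] * scale))
    # Nearest-neighbour index maps, clamped to the tile bounds.
    ys = np.minimum((np.arange(out_h) / scale).astype(int), tile.shape[0] - 1)
    xs = np.minimum((np.arange(out_w) / scale).astype(int), tile.shape[1] - 1)
    resized = tile[np.ix_(ys, xs)]
    r, c = cell
    out = base_map.copy()
    out[r:r + out_h, c:c + out_w] = resized
    return out
```

Because the plan carries scale information, this resizing step is what keeps tiles from different cameras consistent in size without any three-dimensional model.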
In some optional embodiments, analyzing the region-filled and fused picture, rapidly adding cameras to fill blind spots where area coverage is missing, and rapidly deleting cameras to deduplicate where pictures overlap comprises:
rapidly adding a camera for the monitoring-picture part missing from a region to fill the blind spot, rapidly deleting a camera for an overlapping picture region to deduplicate, and adopting building-block stitching for dynamic supplementation and static modular rapid fusion.
In some optional embodiments, performing overall color equalization and seam-removal fusion optimization on the edges of the plurality of locally fused video regions comprises:
uniformly converting the locally fused pictures of different angles into digital orthophoto maps (DOM);
using the blocks corresponding to the graticule of the engineering general plan together with the plan's scale information, and taking the monitoring picture from the highest point of the tower crane as the reference topographic image;
extracting key frames from the locally fused picture videos of different angles with the ASIFT algorithm, and matching them against the general plan, with the tower-crane high-point monitoring picture serving as the topographic image;
and obtaining matching points between the key frames and the topographic image with the ASIFT algorithm, wherein ASIFT achieves full affine invariance by sampling changes of the longitude and latitude angles, and the matching points are pixels with the same features in the locally fused key-frame images and the general plan relative to the reference topographic image.
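The affine invariance of ASIFT mentioned above comes from simulating camera-axis tilts (latitude angle) and rotations (longitude angle) and running SIFT on every warped copy. The sketch below only enumerates that simulation grid, with sampling constants following the original ASIFT paper's recommendations; the descriptor matching itself is not shown.

```python
import numpy as np

def asift_simulations(max_tilt=4.0, tilt_step=np.sqrt(2.0)):
    """Enumerate the camera-axis simulations used by ASIFT.

    For each tilt t = 1/cos(theta) (latitude angle theta) and rotation
    phi (longitude angle), ASIFT warps the image with the 2x2 affine
    A = diag(1, 1/t) @ R(phi); covering this (theta, phi) grid is what
    yields full affine invariance.
    """
    sims = [(1.0, 0.0, np.eye(2))]  # the untransformed image itself
    t = tilt_step
    while t <= max_tilt + 1e-9:
        # Longitude step of ~72/t degrees keeps warped views overlapping.
        n_phi = max(1, int(round(180.0 / (72.0 / t))))
        for k in range(n_phi):
            phi = np.deg2rad(k * 180.0 / n_phi)
            c, s = np.cos(phi), np.sin(phi)
            A = np.diag([1.0, 1.0 / t]) @ np.array([[c, -s], [s, c]])
            sims.append((t, phi, A))
        t *= tilt_step
    return sims
```

Each returned matrix compresses the image by the factor t along one axis, mimicking the foreshortening seen from an oblique camera such as a tower-crane mount.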
According to another aspect of the present invention, there is provided a multi-point multi-view video fusion system based on a general plan, comprising:
a full-coverage point-location planning and equipment deployment module, configured to analyze the engineering project general plan and plan the deployment point locations of the monitoring equipment, so that the monitoring equipment fully covers the whole project construction-site area;
a monitoring-area division module, configured to analyze the spatial-height working faces of the engineering project general plan and divide target areas of different spatial heights into regions;
a multi-angle video fusion module, configured to retrieve the relevant monitoring pictures of each region for image feature-point analysis, calculate the corresponding positions and transformation relations between the monitoring devices of each region, preliminarily stitch the multi-angle monitoring video pictures, and fuse the stitched images;
a locally-fused-video key-area extraction module, configured to perform image edge-recognition cropping on each target area in the fused video monitoring picture;
a region filling and fusion module, configured to fill the cropped locally fused video pictures into the corresponding areas of the engineering project general plan and fuse them;
a rapid camera addition/deletion blind-filling/deduplication module, configured to analyze the region-filled and fused picture, rapidly add cameras to fill blind spots where area coverage is missing, and rapidly delete cameras to deduplicate where pictures overlap;
and a multi-region video fusion optimization module, configured to perform overall color equalization and seam-removal fusion optimization on the edges of the locally fused video regions.
In general, compared with the prior art, the above technical solution of the present invention achieves the following beneficial effects:
Using the engineering project general plan as the carrier, and combining video fusion, region division, key-area extraction from monitoring pictures and building-block picture stitching, the general-plan monitoring of the construction site is obtained dynamically in real time. This removes the dependence of multi-point multi-view video fusion on a three-dimensional model and avoids the drawbacks of the prior art: the cost of frequently re-modeling a complex and changeable construction-site environment; manual adjustment of monitoring distribution parameters after each jacking of the tower crane carrying the camera group; later manual intervention and maintenance; and the lack of real-time performance. Multi-point multi-view video fusion based on the general plan achieves low cost, rapid deployment and later unattended operation, with high reliability and practicability.
Drawings
FIG. 1 is a schematic flow chart of a method provided by an embodiment of the present invention;
FIG. 2 is a system architecture diagram according to an embodiment of the present invention;
FIG. 3 is a general plan view of a project provided by an embodiment of the present invention;
FIG. 4 is a flowchart of the rapid camera addition/deletion and video multi-layer fusion scheme of a project according to an embodiment of the present invention;
FIG. 5 illustrates multi-angle video fusion within a local monitored area of a project according to an embodiment of the present invention;
FIG. 6 illustrates key-area extraction from a project's locally fused video according to an embodiment of the present invention;
FIG. 7 illustrates region filling of a project's locally fused video pictures according to an embodiment of the present invention;
FIG. 8 illustrates the multi-region video edge color-equalization and overall fusion optimization process of a project according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention. In addition, the technical features involved in the respective embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention adopts an advanced real-time video fusion technology and, based on the region division and modular form of the general plan, performs filling fusion of a plurality of locally fused monitored working faces such as construction faces, roads and storage yards. This visualizes the construction-site plane so that the site can be overlooked as a whole; the general-plan monitoring of the construction site is obtained dynamically in real time, and through general-plan-based supervision and management, multiple construction working faces, roads, storage yards and the like within the managed project are monitored and commanded comprehensively and synchronously, sparing users from switching between monitoring pictures during inspection and improving monitoring capability and working efficiency.
The invention relates to the technical fields of image fusion processing, photogrammetry, augmented reality, digital construction, intelligent construction, safe and green construction, video security supervision and the like, and discloses a multi-point multi-view video fusion method and system based on a general plan, wherein the method specifically comprises:
(1) planning full-coverage monitoring point locations and deploying equipment based on the engineering project general plan; (2) dividing monitoring areas by building space-height layers; (3) fusing multi-angle videos within each local monitoring area; (4) extracting key areas from the locally fused video; (5) filling the extracted areas into the general plan used as a base map and fusing them; (6) rapidly adding or removing camera pictures to fill blind spots or deduplicate; and (7) performing color equalization and seam-removal fusion optimization on the edges of the multiple video regions.
The method dynamically obtains general-plan monitoring of the construction site in real time, and effectively overcomes the dependence of multi-point multi-view video fusion on a three-dimensional model and the drawbacks of the prior art: the cost of frequently re-modeling a complex and changeable construction-site environment; manual adjustment of monitoring distribution parameters after each jacking of the tower crane carrying the camera group; later manual intervention and maintenance; and the lack of real-time performance. The multi-point multi-view video fusion method and system based on the general plan achieve low cost, rapid deployment and later unattended operation, with high reliability and practicability.
Fig. 1 is a schematic flowchart of a multi-point multi-view video fusion method based on a general plan according to an embodiment of the present invention, comprising:
1. planning full-coverage monitoring point locations and deploying equipment based on the engineering project general plan;
2. dividing monitoring areas by building space-height layers;
3. fusing multi-angle videos within each local monitoring area;
4. extracting key areas from the locally fused video;
5. filling the extracted areas into the general plan used as a base map and fusing them;
6. rapidly adding or removing camera pictures to fill blind spots or deduplicate;
7. performing color equalization and seam-removal fusion optimization on the edges of the multi-region video.
Fig. 2 is an architecture diagram of a multi-point multi-view video fusion system based on a general plan according to an embodiment of the present invention, comprising:
the full-coverage point-location planning and equipment deployment module, which analyzes the engineering project general plan and plans the deployment point locations of the monitoring equipment so that the monitoring equipment fully covers the whole project construction-site area;
the monitoring-area division module, which analyzes the spatial-height working faces of the engineering project general plan and divides construction faces, roads, storage yards and the like of different spatial heights into regions;
the multi-angle video fusion module, which retrieves the relevant monitoring pictures of each region for image feature-point analysis, calculates the corresponding positions and transformation relations between the monitoring devices of each region, preliminarily stitches the multi-angle monitoring video pictures, and fuses the stitched images;
the locally-fused-video key-area extraction module, which performs image edge-recognition cropping on the construction faces, roads, storage yards and the like in the fused video monitoring picture;
the region filling and fusion module, which fills the cropped locally fused video pictures into the corresponding areas of the engineering project general plan and fuses them;
the rapid camera addition/deletion blind-filling/deduplication module, which analyzes the region-filled and fused picture, rapidly adds cameras to fill blind spots where area coverage is missing, and rapidly deletes cameras to deduplicate largely overlapping picture regions;
and the multi-region video fusion optimization module, which performs overall color equalization and seam-removal fusion optimization on the color differences, stitching seams and the like at the edges of the plurality of locally fused video regions.
FIG. 3 is a general plan of a project in an embodiment of the present invention. Important information such as the height of the building or building group, the height and placement of the tower cranes, road planning and yard arrangement is identified from the project general plan. The full-coverage point-location planning and equipment deployment module analyzes this information, plans the deployment point locations of the monitoring equipment, and, after the monitoring equipment is erected, plans the wired and wireless (Wi-Fi, 4G, 5G, etc.) network layout and adaptive networking. The deployment scheme of this embodiment is as follows: referring to the left part of FIG. 5, a tower crane serves as the mounting fulcrum, and on each of its four sides two horizontally coaxial 1080P infrared bullet cameras with 2.8 mm-12 mm focal length are installed, eight in total; as the highest monitoring point, they mainly monitor the high-rise construction working face of the building. A 15 m pole is erected in the middle of each of the four enclosure walls, with four bullet cameras on top of each pole, installed coaxially back to back in pairs, monitoring the roads, storage yards and the like. Through this arrangement of the monitoring equipment, the whole project site area is preliminarily and comprehensively covered.
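The full-coverage claim of such a layout can be checked numerically. As a rough sketch only (circular ground footprints are a simplifying assumption standing in for real view frusta; the function and its parameters are illustrative, not from the patent):

```python
import numpy as np

def coverage_ratio(site_w, site_h, cameras, grid=1.0):
    """Fraction of a rectangular site grid covered by camera footprints.

    cameras: list of (x, y, radius) ground footprints in metres.
    The point-location planning step would move cameras (or add new
    ones) until this ratio reaches 1.0.
    """
    # Sample the site at cell centres.
    xs = np.arange(grid / 2, site_w, grid)
    ys = np.arange(grid / 2, site_h, grid)
    gx, gy = np.meshgrid(xs, ys)
    covered = np.zeros(gx.shape, dtype=bool)
    for cx, cy, r in cameras:
        covered |= (gx - cx) ** 2 + (gy - cy) ** 2 <= r ** 2
    return covered.mean()
```

A ratio below 1.0 points directly at the blind-spot cells where the rapid camera-addition step of FIG. 4 would place an extra device.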
Fig. 4 is a flowchart of the rapid camera addition/deletion and video multi-layer fusion scheme of a project in an embodiment of the invention, involving two functional modules: the monitoring-area division module and the rapid camera addition/deletion blind-filling/deduplication module. According to important information such as the height of the building or building group, the placement of the tower cranes, road planning and yard layout, the monitoring-area division module analyzes the spatial-height working faces and divides construction faces, roads, storage yards and the like of different spatial heights into regions. The rapid camera addition/deletion blind-filling/deduplication module analyzes the region-filled and fused picture, rapidly adds a camera to fill the blind spot for any monitoring-picture part missing from a region, and rapidly deletes a camera to deduplicate largely overlapping picture regions.
FIG. 5 illustrates multi-angle video fusion within a local monitored area of a project in an embodiment of the present invention. The left part of FIG. 5 shows the individual multi-view pictures of the 8 bullet cameras installed on the tower crane in this embodiment, and the right part shows the local stitched picture. The multi-angle video fusion module is used in this process: the relevant monitoring pictures after area division are retrieved for image feature-point analysis, the corresponding positions and transformation relations between the monitoring devices are calculated, the multi-angle monitoring video pictures are preliminarily stitched, and the stitched images are fused.
The multi-angle video fusion module mainly uses the following techniques. Multi-channel image stitching fuses correlated monitoring pictures from multiple channels. Image edge fusion handles the seams produced by stitching. Image color equalization unifies the colors of multiple groups of monitoring pictures, resolving the color differences caused by different exposure settings and the like. Image affine and perspective transformation applies multi-angle transformations to the groups of monitoring pictures being fused. Automatic image rectification resolves the deformation produced during image transformation. For example: stitched regions show color differences, gaps and similar defects, and problems such as local edge gaps can be effectively resolved by optimizing the fusion algorithm; in global image fusion, differences in shooting time, device parameters and the like produce block-wise color differences in the final result, which can be made globally consistent through color extraction, correction and enhancement; distortion arises in shooting, fusion and view-angle conversion, and contour detection, line detection and the like can be used to build constraint terms that preserve curve and straight-line structures, constrain similarity transformations and correct the shape of the stitched image, reducing the final distortion caused by projection.
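Two of the techniques listed, seam removal and color equalization, can be sketched in a few lines: a linear feather ramp across the overlap hides the stitching seam, and matching each picture's intensity mean and standard deviation to a reference reduces block-wise color differences. Both are simplified stand-ins for the module's actual algorithms, assuming single-channel images for brevity.

```python
import numpy as np

def match_color(src, ref):
    """Shift and scale src's intensities to ref's mean and std."""
    src = np.asarray(src, dtype=float)
    s_std = src.std() or 1.0  # guard against flat images
    return (src - src.mean()) / s_std * ref.std() + ref.mean()

def feather_blend(left, right, overlap):
    """Blend two horizontally adjacent strips with a linear ramp.

    Across the `overlap` shared columns the left image's weight falls
    1 -> 0 while the right image's rises 0 -> 1, hiding the seam.
    """
    w = np.linspace(1.0, 0.0, overlap)
    seam = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])
```

For color images the same mean/std matching would be applied per channel, and production stitchers typically replace the linear ramp with multi-band blending.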
FIG. 6 illustrates key region extraction from a locally fused video of a project in an embodiment of the present invention. In this process, the local-fusion-video key region extraction module is used: the building construction working face is cropped by image edge recognition, through manual framing or deep learning, and the cropped image pictures are temporarily stored in the form of "building-block" picture modules.
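As a crude stand-in for the edge-recognition cropping step described here, the sketch below binarizes a frame and cuts the tight bounding box of the foreground, returning the "building-block" tile together with its offset in the frame. This is illustrative numpy code with hypothetical names, not the embodiment's deep-learning pipeline:

```python
import numpy as np

def crop_work_face(img, thresh):
    """Binarize the frame, find the tight bounding box of the foreground
    pixels, and return the cropped tile plus its (row, col) offset."""
    mask = img > thresh
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    r0, r1 = rows[0], rows[-1] + 1
    c0, c1 = cols[0], cols[-1] + 1
    return img[r0:r1, c0:c1], (r0, c0)
```

The stored offset lets the tile later be placed back, or mapped into the general plan, without re-running the detection.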
FIG. 7 illustrates region filling of a locally fused video picture of a project in an embodiment of the present invention. The region filling and fusion module is used in this process. The module fills the monitoring pictures of the construction working face, roads, storage yards and the like, after recognition and cropping of the locally fused video picture, into the corresponding areas of the engineering project general plan, including: fusing the multi-layered "building-block" pictures (the locally fused, edge-cropped video) into the blocks corresponding to the graticule of the engineering project general plan. Because the general plan carries scale information, this effectively solves the problems that multi-point multi-view pictures (multiple tower cranes, different heights) cannot be stitched and fused without depending on a three-dimensional model, that distortion is excessive after multi-path fusion, and that imaged objects vary in size. Parameters need to be set on first use; afterwards, unattended operation is achieved provided the monitoring layout is not dismantled or changed.
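The "building-block" filling described above can be sketched as pasting a cropped tile into the grid cell that the graticule assigns to it, resampling the tile to the cell size so the plan's scale is respected. Illustrative numpy code with hypothetical names (nearest-neighbour resampling stands in for whatever interpolation a real system would use):

```python
import numpy as np

def fill_plan_block(plan, tile, cell, cell_size):
    """Paste a cropped 'building-block' tile into grid cell (row, col) of
    the general-plan raster, resizing it to the cell with nearest-neighbour
    sampling so every tile lands at the plan's scale."""
    rows = np.arange(cell_size) * tile.shape[0] // cell_size
    cols = np.arange(cell_size) * tile.shape[1] // cell_size
    resized = tile[np.ix_(rows, cols)]
    r0, c0 = cell[0] * cell_size, cell[1] * cell_size
    plan[r0:r0 + cell_size, c0:c0 + cell_size] = resized
    return plan
```

Because each tile is written into a fixed graticule block, tiles from different cameras and heights cannot drift in size relative to one another, which is the point of anchoring the fusion to the scaled general plan.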
FIG. 8 illustrates the edge color-homogenization and global fusion optimization process for the multi-region video of a project in an embodiment of the present invention. In this process, the multi-region video fusion optimization module performs overall color equalization and seam-removal fusion optimization on the color differences, stitching seams and the like at the edges of the multiple locally fused video regions.
The multi-region video fusion optimization involves uniformly converting the locally fused pictures of different angles into a Digital Orthophoto Map (DOM). The blocks corresponding to the "graticule" of the engineering general plan and the scale information of the general plan are adopted, and the monitoring picture from the highest point of the tower crane (90 meters above the ground) serves as the reference terrain image. The ASIFT algorithm is used to extract key frames from the locally fused picture videos of different angles and match them against the general plan of the terrain image fused from the highest-point monitoring picture of the tower crane. The ASIFT algorithm yields matching points between the key frames and the terrain image; ASIFT achieves full affine invariance by simulating the changes of the camera's longitude and latitude angles, and the matching points are pixels with the same features in the locally fused monitoring key-frame images and the general plan, relative to the reference terrain image.
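After ASIFT simulates the longitude/latitude tilts, candidate matching points are typically selected by nearest-neighbour descriptor search with Lowe's ratio test. The sketch below shows that filtering step only; it is illustrative numpy code with hypothetical names, not the ASIFT implementation itself:

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.75):
    """Ratio-test matching between two descriptor sets (N1xD and N2xD).
    Keep a pair (i, j) only when the nearest descriptor in d2 is clearly
    closer than the second nearest, which suppresses ambiguous matches."""
    matches = []
    for i, d in enumerate(d1):
        dist = np.linalg.norm(d2 - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            matches.append((i, j))
    return matches
```

The surviving pairs are the matching points used to relate each key frame to the reference terrain image in the general plan.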
The coordinate transformation between the locally fused monitoring key-frame image and the reference terrain image in the general plan is given by an affine transformation matrix A applied to the pixel coordinates of corresponding points.
Any such matrix A (with positive determinant) can be decomposed as A = λ · R(ψ) · T(t) · R(φ),
where λ is a scale factor and T(t) = diag(t, 1), with the tilt t = 1/|cos θ| determined by the camera latitude angle θ; the R and T factors (Ri and Ti) represent the transformation matrix of the rotation variation and the transformation matrix of the tilt variation, respectively.
The locally fused monitoring key-frame image after affine transformation is then projected, by perspective transformation, onto the corresponding area of the general plan.
The multi-point multi-view video fusion method and system based on the general plan provided by this embodiment use the engineering project general plan as the carrier and combine video fusion, area division, key region extraction from monitoring pictures and "building-block" picture stitching to achieve dynamic, real-time general-plan monitoring of the construction site. They effectively overcome the following problems: the dependence of multi-point multi-view video fusion on a three-dimensional model, and the complexity of frequently updating that model because the construction site environment is complex and changeable; the high cost of frequent, repeated three-dimensional modeling; the need to manually adjust the monitoring point distribution parameters after jacking up the tower crane on which the camera monitoring group is erected; the need for later manual intervention and maintenance; and the inability to guarantee real-time performance. The method and system achieve low cost, rapid deployment and later unattended operation, with high reliability and practicability.
It should be noted that, according to the implementation requirement, each step/component described in the present application can be divided into more steps/components, and two or more steps/components or partial operations of the steps/components can be combined into new steps/components to achieve the purpose of the present invention.
It will be understood by those skilled in the art that the foregoing is only an exemplary embodiment of the present invention, and is not intended to limit the invention to the particular forms disclosed, since various modifications, substitutions and improvements within the spirit and scope of the invention are possible and within the scope of the appended claims.
Claims (9)
1. A multi-point multi-view video fusion method based on a general plane diagram is characterized by comprising the following steps:
analyzing and planning the layout points of the monitoring equipment based on the engineering project general plan, so that the monitoring equipment fully covers the whole project construction site area;
performing spatial height working plane analysis on the engineering project general plan, and dividing target areas of different spatial heights into regions;
calling the related monitoring pictures of each area for image feature point analysis, calculating the corresponding positions and transformation relations among the monitoring devices of each area, preliminarily stitching the multi-angle monitoring video pictures, and fusing the stitched images;
performing image edge recognition and cropping on each target area in the fused video monitoring picture;
filling the cropped, locally fused video pictures into the corresponding areas of the engineering project general plan and fusing them;
analyzing the regionally filled and fused pictures, quickly adding cameras to fill blind spots in monitoring picture parts where an area is missing, and quickly removing cameras where pictures overlap, to eliminate duplication;
and performing overall color equalization and seam-removal fusion optimization on the edges of the plurality of locally fused video regions.
2. The method of claim 1, wherein planning the placement sites of the monitoring equipment based on engineering project general plan analysis comprises:
and automatically planning the layout points of the monitoring equipment after the analysis based on the engineering project general plane diagram, so that the monitoring equipment can comprehensively cover the whole project construction site area, and planning wired and/or wireless network layout and self-adaptive networking after the monitoring and erection are completed.
3. The method according to claim 2, wherein the performing the spatial height working plane analysis on the engineering project general plan and performing the area division on the target areas with different spatial heights comprises:
and (3) analyzing a space height operation surface of the engineering project general plane diagram, carrying out region division on target regions with different construction site space heights in a form of a graticule, and carrying out region division on a main body construction surface and the target region with the increased construction space height by target equipment for erecting a monitoring camera group.
4. The method according to claim 3, wherein the retrieving of the related monitoring pictures of each region for image feature point analysis, calculating the corresponding positions and transformation relations between the monitoring devices corresponding to each region, preliminarily stitching the multi-angle monitoring video pictures, and performing fusion processing on the stitched images comprises:
and preliminarily splicing the multi-angle monitoring video pictures by calculating corresponding positions and image transformation relations among monitoring equipment with the same point position and different angles, and fusing the spliced images.
5. The method according to claim 4, wherein the clipping processing for image edge recognition is performed on each target area in the fused video surveillance image, and the clipping processing comprises:
and performing image edge recognition cutting processing on a target area in the video monitoring picture after the local fusion, and temporarily storing the cut image picture in the form of a building block picture module.
6. The method according to claim 5, wherein the filling and merging the cropped local merged video frame into the corresponding area in the engineering project general plan view comprises:
fusing the locally fused, edge-cropped video into the blocks corresponding to the graticule of the engineering project general plan, wherein parameters need to be set on first use, and unattended operation is achieved later provided the monitoring layout is not dismantled or changed.
7. The method according to claim 6, wherein analyzing the fused images, performing fast camera addition for blind compensation on the monitoring image parts with missing areas, and performing fast camera deletion for duplicate removal on the overlapping areas of the images comprises:
quickly adding a camera to fill blind spots in monitoring picture parts where an area is missing, quickly removing cameras where pictures overlap to eliminate duplication, and adopting building-block stitching for dynamic complementing and static modularized rapid fusion.
8. The method according to claim 7, wherein the performing global color-averaging, disstitching and fusion optimization on the plurality of locally fused video region edges comprises:
uniformly converting the locally fused pictures of different angles into a digital orthophoto map (DOM);
adopting the blocks corresponding to the graticule of the engineering general plan and the scale information of the general plan, and taking the monitoring picture from the highest point of the tower crane as the reference terrain image;
using the ASIFT algorithm to extract key frames from the locally fused picture videos of different angles and match them against the general plan of the terrain image from the highest-point monitoring picture of the tower crane;
and obtaining matching points between the key frames and the terrain image with the ASIFT algorithm, wherein ASIFT achieves full affine invariance by simulating the changes of the camera's longitude and latitude angles, and the matching points are pixels with the same features in the locally fused monitoring key-frame images and the general plan, relative to the reference terrain image.
9. A system for multi-point multi-view video fusion based on a global plane map, comprising:
the full-coverage point location planning and equipment layout module, used for analyzing and planning the layout points of the monitoring equipment based on the engineering project general plan so that the monitoring equipment fully covers the whole project construction site area;
the monitoring area division module, used for performing spatial height working plane analysis on the engineering project general plan and dividing target areas of different spatial heights into regions;
the multi-angle video fusion module, used for calling the related monitoring pictures of each area for image feature point analysis, calculating the corresponding positions and transformation relations among the monitoring devices of each area, preliminarily stitching the multi-angle monitoring video pictures, and fusing the stitched images;
the local-fusion-video key region extraction module, used for performing image edge recognition and cropping on each target area in the fused video monitoring picture;
the region filling and fusion module, used for filling the cropped, locally fused video pictures into the corresponding areas of the engineering project general plan and fusing them;
the quick camera addition/removal blind-complementing and de-duplication module, used for analyzing the regionally filled and fused pictures, quickly adding cameras to fill blind spots in monitoring picture parts where an area is missing, and quickly removing cameras where pictures overlap;
and the multi-region video fusion optimization module, used for performing overall color equalization and seam-removal fusion optimization on the edges of the plurality of locally fused video regions.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211033403.7A CN115393192A (en) | 2022-08-26 | 2022-08-26 | Multi-point multi-view video fusion method and system based on general plane diagram |
PCT/CN2023/071527 WO2024040863A1 (en) | 2022-08-26 | 2023-01-10 | General layout-based multi-site multi-view video fusion method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211033403.7A CN115393192A (en) | 2022-08-26 | 2022-08-26 | Multi-point multi-view video fusion method and system based on general plane diagram |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115393192A true CN115393192A (en) | 2022-11-25 |
Family
ID=84122929
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211033403.7A Pending CN115393192A (en) | 2022-08-26 | 2022-08-26 | Multi-point multi-view video fusion method and system based on general plane diagram |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115393192A (en) |
WO (1) | WO2024040863A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115589536A (en) * | 2022-12-12 | 2023-01-10 | 杭州巨岩欣成科技有限公司 | Drowning prevention multi-camera space fusion method and device for swimming pool |
WO2024040863A1 (en) * | 2022-08-26 | 2024-02-29 | 中建三局集团有限公司 | General layout-based multi-site multi-view video fusion method and system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8611691B2 (en) * | 2009-07-31 | 2013-12-17 | The United States Of America As Represented By The Secretary Of The Army | Automated video data fusion method |
CN110225315A (en) * | 2019-07-12 | 2019-09-10 | 北京派克盛宏电子科技有限公司 | Electric system screen monitored picture fusion method |
CN111586351A (en) * | 2020-04-20 | 2020-08-25 | 上海市保安服务(集团)有限公司 | Visual monitoring system and method for fusion of three-dimensional videos of venue |
CN114882201A (en) * | 2022-04-15 | 2022-08-09 | 中建三局集团有限公司 | Real-time panoramic three-dimensional digital construction site map supervision system and method |
CN115393192A (en) * | 2022-08-26 | 2022-11-25 | 中建三局集团有限公司 | Multi-point multi-view video fusion method and system based on general plane diagram |
- 2022-08-26: CN application CN202211033403.7A filed (published as CN115393192A, status: pending)
- 2023-01-10: PCT application PCT/CN2023/071527 filed (published as WO2024040863A1)
Also Published As
Publication number | Publication date |
---|---|
WO2024040863A1 (en) | 2024-02-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115393192A (en) | Multi-point multi-view video fusion method and system based on general plane diagram | |
CN108564647B (en) | A method of establishing virtual three-dimensional map | |
Han et al. | Potential of big visual data and building information modeling for construction performance analytics: An exploratory study | |
Jiang et al. | UAV-based 3D reconstruction for hoist site mapping and layout planning in petrochemical construction | |
US8818076B2 (en) | System and method for cost-effective, high-fidelity 3D-modeling of large-scale urban environments | |
US7509241B2 (en) | Method and apparatus for automatically generating a site model | |
CN110910338A (en) | Three-dimensional live-action video acquisition method, device, equipment and storage medium | |
CN109902332A (en) | A kind of power matching network system based on Three-dimension | |
CN107396046A (en) | A kind of stereoscopic monitoring system and method based on the true threedimensional model of oblique photograph | |
CN104330074A (en) | Intelligent surveying and mapping platform and realizing method thereof | |
CN110660125B (en) | Three-dimensional modeling device for power distribution network system | |
CN114419231B (en) | Traffic facility vector identification, extraction and analysis system based on point cloud data and AI technology | |
CN112383745A (en) | Panoramic image-based digital presentation method for large-scale clustered construction project | |
Busch et al. | Lumpi: The leibniz university multi-perspective intersection dataset | |
CN116210013A (en) | BIM (building information modeling) visualization system and device, visualization platform and storage medium | |
CN111522360A (en) | Banded oblique photography automatic route planning method based on electric power iron tower | |
CN116883251B (en) | Image orientation splicing and three-dimensional modeling method based on unmanned aerial vehicle video | |
CN114882201A (en) | Real-time panoramic three-dimensional digital construction site map supervision system and method | |
CN114299236A (en) | Oblique photogrammetry space-ground fusion live-action modeling method, device, product and medium | |
CN115604433A (en) | Virtual-real combined three-dimensional visualization system | |
Zhou et al. | Application of UAV oblique photography in real scene 3d modeling | |
CN110189395B (en) | Method for realizing dynamic analysis and quantitative design of landscape elevation based on human visual angle oblique photography | |
CN109035365B (en) | Mosaic processing method of high-resolution image | |
CN112529498B (en) | Warehouse logistics management method and system | |
CN114359489A (en) | Method, device and equipment for making real-scene image in pipeline construction period and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||