CN116993642A - Panoramic video fusion method, system and medium for building engineering temporary construction engineering - Google Patents
- Publication number: CN116993642A
- Application number: CN202310983115.6A
- Authority: CN (China)
- Prior art keywords: panoramic; image; information; image data; fusion
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T15/005: General purpose rendering architectures
- G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/90: Determination of colour characteristics
- G06T2207/10016: Video; Image sequence
- G06T2207/10021: Stereoscopic video; Stereoscopic image sequence
- G06T2207/10024: Color image
- G06T2207/20212: Image combination
- G06T2207/20221: Image fusion; Image merging
Abstract
The application discloses a panoramic video fusion method, system, and medium for a building engineering temporary construction project. The method comprises: performing image preprocessing on real-time video pictures of the temporary construction project, performing feature extraction and feature matching on the pictures mapped into the same plane, and determining the transformation relation between two adjacent plane mapping image data; performing color difference optimization processing, and deforming the optimized plane mapping image data according to the transformation relation to obtain the corresponding panoramic video image; obtaining flow field estimation information by combining time sequence information with spatial horizontal position information; calibrating the plane mapping image data to obtain planar panoramic image registration information; and performing panoramic video stitching and fusion based on the planar panoramic image registration information, then obtaining a three-dimensional fusion image of the panoramic video of the temporary construction project through a three-dimensional fusion technology. The application effectively overcomes the inability of the prior art to smoothly play high-resolution panoramic video images.
Description
Technical Field
The application relates to the field of image processing, and in particular to a panoramic video fusion method, system, and medium for a building engineering temporary construction project.
Background
A digital three-dimensional panoramic image is obtained by capturing image information of an entire environment scene with cameras, stitching and integrating the images with software, and processing the resulting plane image. A three-dimensional panoramic image can make a two-dimensional plane image simulate a real three-dimensional space, achieving the effect of reproducing a real environment scene. With the continuous development of computer software and hardware, intelligent wearable devices are becoming popular. A head-mounted virtual reality device displays virtual environment images in front of the user's eyes through an image display screen, creating for the user the experience of being placed inside the virtual environment. When wearing such a device, the user's field of view is restricted to the display screen, which isolates the environment outside the screen and gives the user the experience of being immersed in the virtual scene. By shooting high-quality panoramic video with multiple cameras, more image detail of the environment scene is retained, and the user obtains a more realistic sense of immersion when watching the panoramic video through virtual reality equipment. In the prior art, when panoramic video images of an environment scene are acquired by multiple cameras, the images acquired by adjacent cameras have overlapping areas, and these areas generally differ in color, brightness, and degree of deformation because of differences in shooting time and angle. When the images acquired by multiple cameras are stitched into a panoramic image, simply superimposing the images in the overlapping area seriously degrades the visual effect of the panoramic image.
Therefore, how to make the stitched and fused images smoother and more natural, with clearer transitions, and thereby effectively improve the panoramic stitching effect, is one of the technical problems to be solved in the panoramic image stitching and synthesis process.
Disclosure of Invention
The application aims to provide a panoramic video fusion method, system, and medium for a building engineering temporary construction project that realize simple and efficient image fusion and effectively overcome the inability of prior art schemes to smoothly play high-resolution panoramic video images.
In order to achieve the above object, the present application provides the following solutions:
a panoramic video fusion method for a building engineering temporary construction project, the method comprising:
acquiring real-time video pictures of a plurality of adjacent cameras with overlapping areas, and performing image preprocessing on the real-time video pictures; the real-time video picture is a real-time video picture of a building engineering temporary construction project;
projecting and mapping the preprocessed video image information into the same plane to obtain plane mapping image data;
performing feature extraction and feature matching on the planar mapping image data, determining matching feature points of two adjacent planar mapping image data, and determining a transformation relation of the two adjacent planar mapping image data according to the matching feature points;
performing color difference optimization processing on the two adjacent plane mapping image data, and deforming the plane mapping image data subjected to the color difference optimization processing according to a transformation relation to obtain a corresponding panoramic video image;
based on the time sequence information of the panoramic video image information, combining with the spatial horizontal position information, carrying out flow field estimation on the overlapping area of the plane mapping image data to obtain flow field estimation information;
calibrating the planar mapping image data through the flow field estimation information to obtain planar panoramic image registration information;
performing panoramic video stitching fusion images based on the planar panoramic image registration information to obtain panoramic fusion images;
and rendering the panoramic fusion image to a corresponding position of a preset three-dimensional model in real time through a three-dimensional fusion technology for displaying, so as to obtain a three-dimensional fusion image of the panoramic video of the building engineering temporary construction project.
Optionally, acquiring real-time video frames of a plurality of adjacent cameras with overlapping areas, and performing image preprocessing on the real-time video frames, specifically including:
carrying out weighted averaging on the values of all pixel points of the real-time video picture according to a preset two-dimensional Gaussian filter kernel;
and converting the weighted real-time video picture to grayscale to obtain a corresponding gray image.
Optionally, projection mapping is performed on the preprocessed video image information to the same plane, so as to obtain plane mapping image data, which specifically includes:
extracting images of the video images to obtain a panoramic information image set;
performing image information projection mapping based on overlapping and association of images through the panoramic information image set to obtain panoramic mapping images;
and correcting by combining the panoramic mapping image with the image acquisition characteristics to obtain plane mapping image data.
Optionally, feature extraction and feature matching are performed on the plane mapping image data, matching feature points of two adjacent plane mapping image data are determined, and a transformation relationship of the two adjacent plane mapping image data is determined according to the matching feature points, and the method specifically includes:
extracting features of the plane mapping image data picture to obtain corresponding feature points and feature descriptors;
performing coarse matching on the feature points, and determining the Hamming distance between two feature points according to their feature descriptors;
performing fine matching according to the Hamming distance to obtain matching feature points of two adjacent plane mapping image data pictures;
and determining the transformation relation of the two adjacent plane mapping image data pictures according to the matching characteristic points and a preset random sampling consistency algorithm, and storing the transformation relation.
Optionally, performing color difference optimization processing on the two adjacent plane mapping image data, and deforming the plane mapping image data after the color difference optimization processing according to a transformation relationship to obtain a corresponding panoramic video image, where the method includes:
performing color correction according to the color correction parameters between two adjacent plane mapping image data pictures and preset global adjustment parameters;
establishing panoramic images according to the number of cameras and the resolution of video pictures;
and overlapping and optimizing the overlapping areas of the two adjacent real-time video pictures in the panoramic image to obtain the panoramic image after overlapping and optimizing treatment.
Optionally, based on the time sequence information of the panoramic video image information, in combination with the spatial horizontal position information, performing flow field estimation on the overlapping area of the planar mapping image data to obtain flow field estimation information, which specifically includes:
deriving the time sequence information of the panoramic video image information from the image acquisition times combined with the image acquisition frequency;
performing association mapping through correspondence between the time sequence information and the space horizontal position information, and determining space position time sequence comparison information;
and carrying out flow field estimation on the overlapping area of the plane mapping image data based on the space position time sequence comparison information to obtain flow field estimation information.
Optionally, based on the spatial position time sequence comparison information, performing flow field estimation on the overlapping region of the planar mapping image data to obtain flow field estimation information, and the method includes:
in the stage of projection mapping of the panoramic video image information to the same plane, determining a superposition area of the plane mapping image data;
determining the displacement amount and the displacement direction of the image acquisition device;
analyzing the displacement amount and the displacement direction based on flow field estimation to obtain displacement flow field analysis information;
and performing image comparison on the overlapped area, and acquiring flow field estimation information by combining the displacement flow field analysis information.
Optionally, the panoramic fusion image is rendered to a corresponding position of a preset three-dimensional model in real time for display by a three-dimensional fusion technology, which specifically comprises the following steps:
determining a three-dimensional model of a target area and a plurality of discrete point pairs of the panoramic fusion image, wherein the discrete point pairs are composed of one three-dimensional model point coordinate and one panoramic fusion image rasterization coordinate;
and determining the mapping relation of the panoramic fusion image according to the discrete point pairs, carrying out coordinate interpolation according to the mapping relation, and carrying out panoramic video sampling according to the coordinate interpolation to obtain a three-dimensional fusion image of the panoramic video.
The application provides a panoramic video fusion system for a temporary construction project of a building project, which comprises:
the data acquisition module is used for acquiring real-time video pictures of a plurality of adjacent cameras with overlapping areas and carrying out image preprocessing on the real-time video pictures; the real-time video picture is a real-time video picture of a building engineering temporary construction project;
the mapping module is used for carrying out projection mapping on the preprocessed video image information to the same plane to obtain plane mapping image data;
the transformation relation determining module is used for carrying out feature extraction and feature matching on the plane mapping image data, determining matching feature points of two adjacent plane mapping image data, and determining the transformation relation of the two adjacent plane mapping image data according to the matching feature points;
the optimization processing module is used for carrying out color difference optimization processing on the two adjacent plane mapping image data, and deforming the plane mapping image data subjected to the color difference optimization processing according to the transformation relation to obtain a corresponding panoramic video image;
the flow field estimation module is used for carrying out flow field estimation on the overlapping area of the plane mapping image data based on the time sequence information of the panoramic video image information and combining with the space horizontal position information to obtain flow field estimation information;
the calibration module is used for calibrating the plane mapping image data through the flow field estimation information to obtain plane panoramic image registration information;
the fusion module is used for carrying out panoramic video stitching fusion images based on the planar panoramic image registration information to obtain panoramic fusion images;
and the three-dimensional fusion module is used for rendering the panoramic fusion image to the corresponding position of the preset three-dimensional model in real time through a three-dimensional fusion technology for display.
The application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the panoramic video fusion method for a building engineering temporary construction project described above.
According to the specific embodiment provided by the application, the application discloses the following technical effects:
the application provides a panoramic video fusion method, a system and a medium for a building engineering temporary construction project, which can realize simple and efficient image fusion operation by the image fusion mode of the technical scheme, effectively solve the defect that the prior art scheme can not smoothly play high-resolution panoramic video images, obviously reduce the requirement of high-capacity video real-time transmission on network bandwidth, and promote the applicability of panoramic video playing technology in different scenes.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions of the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flowchart of a panoramic video fusion method for a building engineering temporary construction project provided by an embodiment of the application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the application.
The application aims to provide a panoramic video fusion method, a system and a medium for a building engineering temporary construction project, which can realize simple and efficient image fusion operation and effectively solve the defect that the prior art scheme cannot smoothly play high-resolution panoramic video images.
In order that the above-recited objects, features and advantages of the present application will become more readily apparent, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description.
Example 1
As shown in fig. 1, the embodiment provides a panoramic video fusion method for a temporary construction project of a building engineering, which includes:
s1: acquiring real-time video pictures of a plurality of adjacent cameras with overlapping areas, and performing image preprocessing on the real-time video pictures; the real-time video picture is a real-time video picture of a building engineering temporary construction project.
The step S1 specifically includes:
(1) Carry out weighted averaging on the values of all pixel points of the real-time video picture according to a preset two-dimensional Gaussian filter kernel.
(2) Convert the weighted real-time video picture to grayscale to obtain a corresponding gray image.
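The two preprocessing sub-steps above can be sketched as follows: a normalized two-dimensional Gaussian kernel, a weighted average over a pixel's neighbourhood, and a luminance-based grayscale conversion. The 3x3 kernel size, the sigma value, and the BT.601 luminance weights are illustrative assumptions; the patent only specifies "a preset two-dimensional Gaussian filter kernel".

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel (size and sigma are assumed values)."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def smooth_pixel(img, y, x, kernel):
    """Weighted average of the neighbourhood of (y, x); edge pixels are clamped."""
    size = len(kernel)
    c = size // 2
    h, w = len(img), len(img[0])
    acc = 0.0
    for ky in range(size):
        for kx in range(size):
            yy = min(max(y + ky - c, 0), h - 1)
            xx = min(max(x + kx - c, 0), w - 1)
            acc += kernel[ky][kx] * img[yy][xx]
    return acc

def to_gray(r, g, b):
    """ITU-R BT.601 luminance: one common choice for the grayscale conversion."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Because the kernel is normalized, smoothing a uniform region leaves its value unchanged, which is the property the average-weighting step relies on.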
S2: and carrying out projection mapping on the preprocessed video image information to the same plane to obtain plane mapping image data.
The step S2 specifically includes:
(1) And extracting the video image to obtain a panoramic information image set.
(2) And performing image information projection mapping based on overlapping and association of images through the panoramic information image set to obtain panoramic mapping images.
(3) And correcting by combining the panoramic mapping image with the image acquisition characteristics to obtain plane mapping image data.
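Projection of each camera picture onto a common surface is commonly done with a cylindrical model when the cameras are arranged horizontally. The formula below is that standard cylindrical warp, offered as one plausible realization rather than the patent's exact mapping; the focal length f and principal point (cx, cy) would come from camera calibration in a real system.

```python
import math

def cylindrical_project(x, y, cx, cy, f):
    """Map pixel (x, y) of a pinhole camera image onto a cylinder of radius f.

    (cx, cy) is the principal point and f the focal length in pixels, both
    assumed to be known from calibration. Returns coordinates in pixel units.
    """
    theta = math.atan((x - cx) / f)           # horizontal angle on the cylinder
    h = (y - cy) / math.hypot(x - cx, f)      # normalized height on the cylinder
    return f * theta + cx, f * h + cy
```

Warping every camera picture with the same model puts all views into one consistent plane-mapped space, after which the correction against image acquisition characteristics (lens distortion, mounting tilt) can be applied.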
S3: and carrying out feature extraction and feature matching on the plane mapping image data, determining matching feature points of two adjacent plane mapping image data, and determining the transformation relation of the two adjacent plane mapping image data according to the matching feature points.
The step S3 specifically includes:
(1) Extract features from the plane mapping image data pictures to obtain corresponding feature points and feature descriptors.
(2) Perform coarse matching on the feature points, and determine the Hamming distance between two feature points according to their feature descriptors.
(3) Perform fine matching according to the Hamming distance to obtain the matching feature points of two adjacent plane mapping image data pictures.
(4) And determining the transformation relation of the two adjacent plane mapping image data pictures according to the matching characteristic points and a preset random sampling consistency algorithm, and storing the transformation relation.
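The Hamming-distance matching of step S3 can be sketched with toy 8-bit descriptors (real binary descriptors such as ORB's are typically 256 bits, and the distance threshold below is an assumed value):

```python
def hamming(d1, d2):
    """Number of differing bits between two binary descriptors."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(desc_a, desc_b, max_dist=2):
    """Coarse matching: nearest neighbour in desc_b for each descriptor in desc_a.
    Fine matching: keep only pairs whose Hamming distance is within max_dist."""
    matches = []
    for i, da in enumerate(desc_a):
        j, d = min(((j, hamming(da, db)) for j, db in enumerate(desc_b)),
                   key=lambda t: t[1])
        if d <= max_dist:
            matches.append((i, j, d))
    return matches
```

The coordinates of the surviving matched feature points would then feed a random sampling consistency (RANSAC) homography estimator, for example OpenCV's `cv2.findHomography(pts_a, pts_b, cv2.RANSAC)`, and the resulting 3x3 transformation is what the method stores for the later warping step.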
S4: and carrying out color difference optimization processing on the two adjacent plane mapping image data, and deforming the plane mapping image data subjected to the color difference optimization processing according to a transformation relation to obtain a corresponding panoramic video image.
Wherein, step S4 includes:
(1) And carrying out color correction according to the color correction parameters between two adjacent plane mapping image data pictures and the preset global adjustment parameters.
(2) And establishing panoramic images according to the number of cameras and the resolution of video pictures.
(3) And overlapping and optimizing the overlapping areas of the two adjacent real-time video pictures in the panoramic image to obtain the panoramic image after overlapping and optimizing treatment.
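One simple realization of the color difference optimization and overlap handling in step S4 is global gain compensation followed by linear ("feather") blending across the overlap. The gain formula and the linear blending weight below are illustrative assumptions, not the patent's stated parameters:

```python
def gain(ref_overlap, src_overlap):
    """Global gain that matches the mean intensity of the source image's
    overlap pixels to the reference image's overlap pixels."""
    ref_mean = sum(ref_overlap) / len(ref_overlap)
    src_mean = sum(src_overlap) / len(src_overlap)
    return ref_mean / src_mean

def feather_blend(left_px, right_px, t):
    """Linear blend across the overlap; t runs 0 -> 1 from the left seam
    to the right seam, so each image dominates near its own side."""
    return (1 - t) * left_px + t * right_px
```

Multiplying the source image by the gain removes the global brightness jump, and the feather weight then hides the remaining local difference across the seam, which is what makes the overlap-optimized panoramic picture look continuous.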
S5: and carrying out flow field estimation on the overlapping area of the plane mapping image data based on the time sequence information of the panoramic video image information and combining with the space horizontal position information to obtain flow field estimation information.
The step S5 specifically includes:
(1) Derive the time sequence information of the panoramic video image information from the image acquisition times combined with the image acquisition frequency.
(2) And carrying out association mapping through correspondence between the time sequence information and the space horizontal position information, and determining space position time sequence comparison information.
(3) Based on the space position time sequence comparison information, carrying out flow field estimation on the superposition area of the plane mapping image data to obtain flow field estimation information, wherein the method specifically comprises the following steps:
1) And in the stage of projection mapping of the panoramic video image information to the same plane, determining the superposition area of the plane mapping image data.
2) And determining the displacement amount and the displacement direction of the image acquisition device.
3) And analyzing the displacement and the displacement direction based on flow field estimation to obtain displacement flow field analysis information.
4) And performing image comparison on the overlapped area, and acquiring flow field estimation information by combining the displacement flow field analysis information.
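The image comparison over the overlap region in step S5 can be approximated by block matching between consecutive frames: for each block, search a small window for the displacement that minimizes the sum of absolute differences (SAD). Dense optical flow (e.g. Farneback's method) is the usual production choice; this SAD search is a deliberately minimal stand-in, with block size and search radius as assumed parameters:

```python
def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block(img, y, x, size):
    """Extract a size x size block with its top-left corner at (y, x)."""
    return [row[x:x + size] for row in img[y:y + size]]

def estimate_shift(prev, curr, y, x, size=2, search=2):
    """Displacement (dy, dx) of the block at (y, x) between prev and curr,
    found by exhaustive search in a (2*search+1)^2 window."""
    ref = block(prev, y, x, size)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > len(curr) or xx + size > len(curr[0]):
                continue
            cost = sad(ref, block(curr, yy, xx, size))
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

Collecting such per-block displacements over the overlap region, and reconciling them with the known displacement of the image acquisition device, yields the flow field estimation information used to calibrate the plane mapping image data.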
S6: and calibrating the planar mapping image data through the flow field estimation information to obtain planar panoramic image registration information.
S7: and carrying out panoramic video stitching and fusion image based on the planar panoramic image registration information to obtain a panoramic fusion image.
S8: and rendering the panoramic fusion image to a corresponding position of a preset three-dimensional model in real time through a three-dimensional fusion technology for displaying, so as to obtain a three-dimensional fusion image of the panoramic video of the building engineering temporary construction project.
The step S8 specifically includes:
(1) And determining a three-dimensional model of the target area and a plurality of discrete point pairs of the panoramic fusion image, wherein the discrete point pairs are composed of one three-dimensional model point coordinate and one panoramic fusion image rasterization coordinate.
(2) And determining the mapping relation of the panoramic fusion image according to the discrete point pairs, carrying out coordinate interpolation according to the mapping relation, and carrying out panoramic video sampling according to the coordinate interpolation to obtain a three-dimensional fusion image of the panoramic video.
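The coordinate interpolation between discrete point pairs in step S8 can be illustrated in one dimension: each anchor pairs a model coordinate with a panorama texture coordinate, and coordinates between anchors are linearly interpolated. A real renderer interpolates in two dimensions over the model surface, usually on the GPU; the function and its arguments here are illustrative only.

```python
def interp_uv(point_pairs, s):
    """point_pairs: list of (model_coord, texture_coord) anchors, sorted by
    model coordinate. Returns the panorama texture coordinate for model
    coordinate s by linear interpolation between the surrounding anchors."""
    for (s0, u0), (s1, u1) in zip(point_pairs, point_pairs[1:]):
        if s0 <= s <= s1:
            t = (s - s0) / (s1 - s0)
            return u0 + t * (u1 - u0)
    raise ValueError("model coordinate outside the anchored range")
```

Sampling the panoramic video at the interpolated texture coordinate for each rendered model point is what drapes the real-time panoramic picture over the preset three-dimensional model.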
In this embodiment, panoramic video image information is acquired and projection-mapped to the same plane to obtain plane mapping image data; the spatial horizontal position information of the image acquisition devices is acquired; flow field estimation is performed on the overlapping area of the plane mapping image data based on the time sequence information of the panoramic video image information combined with the spatial horizontal position information, obtaining flow field estimation information; the plane mapping image data are calibrated to obtain planar panoramic image registration information; and panoramic video stitching, fusion, and display are performed based on the registration information. The embodiment thus intelligently optimizes the fusion and stitching scheme of panoramic video images, generating the image fusion information of the panoramic stitching accurately and quickly while preserving its fidelity, and solves the technical problem that an unreasonable fusion and stitching scheme makes the image fusion information unreliable. In addition, the pictures of multiple adjacent cameras with overlapping areas are stitched into one complete picture by the panoramic stitching technology, and the real-time stitched picture is then rendered to the corresponding position of the three-dimensional model using computer graphics, combining geographic position with real-time panoramic video. This ensures the accuracy of the panoramic video stitching of the temporary construction project and keeps the result closer to the real situation.
Example two
The embodiment provides a building engineering temporary construction engineering panoramic video fusion system, which comprises:
the data acquisition module is used for acquiring real-time video pictures of a plurality of adjacent cameras with overlapping areas and carrying out image preprocessing on the real-time video pictures; the real-time video picture is a real-time video picture of a building engineering temporary construction project.
And the mapping module is used for carrying out projection mapping on the preprocessed video image information to the same plane to obtain plane mapping image data.
And the transformation relation determining module is used for carrying out feature extraction and feature matching on the plane mapping image data, determining matching feature points of two adjacent plane mapping image data, and determining the transformation relation of the two adjacent plane mapping image data according to the matching feature points.
And the optimization processing module is used for carrying out color difference optimization processing on the two adjacent plane mapping image data, and deforming the plane mapping image data subjected to the color difference optimization processing according to the transformation relation to obtain a corresponding panoramic video image.
And the flow field estimation module is used for carrying out flow field estimation on the overlapping area of the plane mapping image data based on the time sequence information of the panoramic video image information and combining the space horizontal position information to acquire flow field estimation information.
And the calibration module is used for calibrating the plane mapping image data through the flow field estimation information to obtain plane panoramic image registration information.
And the fusion module is used for carrying out panoramic video stitching fusion images based on the planar panoramic image registration information to obtain panoramic fusion images.
And the three-dimensional fusion module is used for rendering the panoramic fusion image to the corresponding position of the preset three-dimensional model in real time through a three-dimensional fusion technology for display.
Example III
This embodiment provides an electronic device comprising a memory and a processor. The memory stores a computer program, and the processor runs the computer program to cause the electronic device to execute the panoramic video fusion method for a building engineering temporary construction project of Embodiment I.
Alternatively, the electronic device may be a server.
In addition, an embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the panoramic video fusion method for a building engineering temporary construction project of Embodiment I.
Embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and identical or similar parts among the embodiments may be referred to one another. Since the system disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
The principles and embodiments of the present application have been described herein with reference to specific examples, the description of which is intended only to assist in understanding the method of the present application and its core ideas. Meanwhile, those of ordinary skill in the art may, in light of the ideas of the present application, make modifications to the specific embodiments and the application scope. In view of the foregoing, the contents of this specification should not be construed as limiting the application.
Claims (10)
1. A panoramic video fusion method for a building engineering temporary construction project, characterized by comprising the following steps:
acquiring real-time video pictures of a plurality of adjacent cameras having overlapping areas, and performing image preprocessing on the real-time video pictures; the real-time video pictures are real-time video pictures of a building engineering temporary construction project;
projecting and mapping the preprocessed video image information into the same plane to obtain plane mapping image data;
performing feature extraction and feature matching on the planar mapping image data, determining matching feature points of two adjacent planar mapping image data, and determining a transformation relation of the two adjacent planar mapping image data according to the matching feature points;
performing color difference optimization processing on the two adjacent plane mapping image data, and deforming the plane mapping image data subjected to the color difference optimization processing according to a transformation relation to obtain a corresponding panoramic video image;
based on the time sequence information of the panoramic video image information, combining with the spatial horizontal position information, carrying out flow field estimation on the overlapping area of the plane mapping image data to obtain flow field estimation information;
calibrating the planar mapping image data through the flow field estimation information to obtain planar panoramic image registration information;
performing panoramic video stitching and fusion based on the planar panoramic image registration information to obtain a panoramic fusion image;
and rendering the panoramic fusion image to a corresponding position of a preset three-dimensional model in real time through a three-dimensional fusion technology for displaying, so as to obtain a three-dimensional fusion image of the panoramic video of the building engineering temporary construction project.
2. The method according to claim 1, wherein acquiring real-time video pictures of a plurality of adjacent cameras having overlapping areas and performing image preprocessing on the real-time video pictures specifically comprises:
performing weighted-average processing on the values of all pixel points of the real-time video picture according to a preset two-dimensional Gaussian filter kernel function;
and performing graying processing on the weighted-averaged real-time video picture to obtain a corresponding gray image.
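The two preprocessing steps of this claim can be sketched with NumPy as follows. The kernel size, sigma, and BT.601 luma weights are illustrative choices not specified by the claim; since both operations are linear, the grayscale conversion is applied first here for brevity, which yields the same result as smoothing each colour channel before graying.

```python
import numpy as np

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    # Preset 2-D Gaussian filter kernel, normalised so the weights sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Weighted-average each pixel with the Gaussian kernel, then return
    the gray image of an H x W x 3 RGB frame."""
    gray = frame @ np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma
    k = gaussian_kernel()
    pad = k.shape[0] // 2
    padded = np.pad(gray, pad, mode="edge")  # edge padding avoids dark borders
    out = np.empty_like(gray)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out
```

A production system would use a separable or FFT-based convolution instead of the explicit double loop.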
3. The method according to claim 1, wherein projecting and mapping the preprocessed video image information onto the same plane to obtain the planar mapping image data specifically comprises:
performing image extraction on the video images to obtain a panoramic information image set;
performing image information projection mapping based on the overlap and association of the images in the panoramic information image set to obtain a panoramic mapping image;
and correcting the panoramic mapping image in combination with the image acquisition characteristics to obtain the planar mapping image data.
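The claim leaves the projection surface open; a common concrete choice for bringing adjacent camera views into one stitching plane is cylindrical projection. A sketch, where the focal length `f` is an assumed calibration input rather than a value from the patent:

```python
import math

def cylindrical_project(x: float, y: float, w: int, h: int, f: float):
    """Map pixel (x, y) of a w x h frame onto a cylinder of focal length f,
    returning the remapped coordinates in the same pixel frame."""
    xc, yc = x - w / 2, y - h / 2          # shift origin to the image centre
    theta = math.atan(xc / f)              # horizontal angle on the cylinder
    hh = yc / math.sqrt(xc * xc + f * f)   # scaled height on the cylinder
    return f * theta + w / 2, f * hh + h / 2
```

The image centre is a fixed point of the mapping, and points further from the centre are pulled inwards, which is what makes horizontally adjacent views line up after projection.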
4. The method according to claim 1, wherein performing feature extraction and feature matching on the planar mapping image data, determining the matching feature points of two adjacent pieces of planar mapping image data, and determining the transformation relation of the two adjacent pieces of planar mapping image data according to the matching feature points specifically comprises:
performing feature extraction on the planar mapping image data to obtain corresponding feature points and feature descriptors;
performing coarse matching on the feature points, and determining the Hamming distance between two feature points according to the feature descriptors;
performing fine matching according to the Hamming distance to obtain the matching feature points of the two adjacent pieces of planar mapping image data;
and determining the transformation relation of the two adjacent pieces of planar mapping image data according to the matching feature points and a preset random sample consensus algorithm, and storing the transformation relation.
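The coarse-to-fine Hamming-distance matching of this claim can be sketched in plain Python. Binary descriptors (e.g. ORB-style bit strings) are modelled as integers; the distance threshold and ratio are hypothetical tuning values, and the final RANSAC homography fit is omitted:

```python
def hamming(a: int, b: int) -> int:
    # Hamming distance between two binary feature descriptors stored as ints.
    return bin(a ^ b).count("1")

def match_descriptors(desc_a, desc_b, max_dist=64, ratio=0.8):
    """Coarse match: nearest neighbour in desc_b by Hamming distance.
    Fine match: keep a pair only if its distance is below max_dist and
    clearly better than the second-best candidate (Lowe-style ratio test)."""
    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted((hamming(da, db), j) for j, db in enumerate(desc_b))
        (best_d, best_j), (second_d, _) = ranked[0], ranked[1]
        if best_d <= max_dist and best_d < ratio * max(second_d, 1):
            matches.append((i, best_j))
    return matches
```

The surviving pairs would then feed a random sample consensus (RANSAC) estimate of the homography between the two adjacent planar images, as the claim states.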
5. The method according to claim 1, wherein performing color difference optimization processing on the two adjacent planar mapping image data, and deforming the planar mapping image data after the color difference optimization processing according to a transformation relationship, to obtain a corresponding panoramic video image, includes:
performing color correction according to the color correction parameters between two adjacent plane mapping image data pictures and preset global adjustment parameters;
establishing panoramic images according to the number of cameras and the resolution of video pictures;
and performing overlap optimization on the overlapping areas of two adjacent real-time video pictures in the panoramic image to obtain an overlap-optimized panoramic image.
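One standard way to realise the colour-correction parameters of this claim is per-channel gain compensation computed over the overlap region; the global clamp below plays the role of the preset global adjustment parameter, with an assumed limit value not taken from the patent:

```python
import numpy as np

def colour_gains(overlap_a: np.ndarray, overlap_b: np.ndarray,
                 global_limit: float = 1.5) -> np.ndarray:
    """Per-channel gains that pull image B's overlap statistics towards
    image A's, clamped by a preset global adjustment parameter so one
    badly exposed frame cannot distort the whole panorama."""
    mean_a = overlap_a.reshape(-1, 3).mean(axis=0)
    mean_b = overlap_b.reshape(-1, 3).mean(axis=0)
    gains = mean_a / np.maximum(mean_b, 1e-6)  # avoid division by zero
    return np.clip(gains, 1.0 / global_limit, global_limit)
```

Multiplying image B by these gains before blending reduces visible colour seams in the stitched panorama.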
6. The method according to claim 1, wherein performing flow field estimation on the overlapping area of the planar mapping image data based on the time sequence information of the panoramic video image information in combination with the spatial horizontal position information to obtain the flow field estimation information specifically comprises:
integrating the image acquisition time of the panoramic video image information through the image acquisition frequency to acquire the time sequence information of the panoramic video image information;
performing association mapping through correspondence between the time sequence information and the space horizontal position information, and determining space position time sequence comparison information;
and carrying out flow field estimation on the overlapping area of the plane mapping image data based on the space position time sequence comparison information to obtain flow field estimation information.
7. The method according to claim 6, wherein performing flow field estimation on the overlapping area of the planar mapping image data based on the spatial position time sequence comparison information to obtain the flow field estimation information specifically comprises:
in the stage of projection-mapping the panoramic video image information onto the same plane, determining the overlapping area of the planar mapping image data;
determining the displacement amount and the displacement direction of the image acquisition device;
analyzing the displacement amount and the displacement direction based on flow field estimation to obtain displacement flow field analysis information;
and performing image comparison on the overlapped area, and acquiring flow field estimation information by combining the displacement flow field analysis information.
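As a concrete stand-in for the displacement-based flow-field analysis of this claim, the dominant displacement between two frames of the overlap region can be estimated by brute-force translational search (pure NumPy; a real system would use a dense optical-flow method):

```python
import numpy as np

def estimate_shift(ref: np.ndarray, cur: np.ndarray, max_shift: int = 4):
    """Return the (dy, dx) displacement that minimises the sum of squared
    differences between the overlap region of a reference frame and the
    current frame, searched over +/- max_shift pixels."""
    h, w = ref.shape
    m = max_shift
    best, best_err = (0, 0), float("inf")
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            cand = cur[m + dy:h - m + dy, m + dx:w - m + dx]
            err = float(np.sum((ref[m:h - m, m:w - m] - cand) ** 2))
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

The recovered displacement amount and direction per overlap region give a coarse flow field that can drive the subsequent calibration step.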
8. The method according to claim 1, wherein rendering the panoramic fusion image in real time to the corresponding position of a preset three-dimensional model for display through a three-dimensional fusion technology specifically comprises:
determining a three-dimensional model of a target area and a plurality of discrete point pairs of the panoramic fusion image, wherein each discrete point pair consists of one three-dimensional model point coordinate and one rasterized coordinate of the panoramic fusion image;
and determining the mapping relation of the panoramic fusion image according to the discrete point pairs, carrying out coordinate interpolation according to the mapping relation, and carrying out panoramic video sampling according to the coordinate interpolation to obtain a three-dimensional fusion image of the panoramic video.
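A minimal sketch of the coordinate-interpolation step: given the sparse discrete point pairs, the image coordinate at an arbitrary model coordinate is interpolated, here with inverse-distance weighting (the patent does not fix the interpolation scheme; IDW is one simple choice):

```python
def interp_mapping(point_pairs, query):
    """Interpolate the panoramic-image coordinate at 3-D-model coordinate
    `query` from sparse pairs ((model_u, model_v), (image_x, image_y)),
    weighting each stored pair by inverse squared distance."""
    num_x = num_y = den = 0.0
    for (mu, mv), (ix, iy) in point_pairs:
        d2 = (query[0] - mu) ** 2 + (query[1] - mv) ** 2
        if d2 == 0.0:            # query hits a stored pair exactly
            return (float(ix), float(iy))
        w = 1.0 / d2
        num_x += w * ix
        num_y += w * iy
        den += w
    return (num_x / den, num_y / den)
```

Sampling the panoramic video at the interpolated coordinates for every rendered model point then yields the three-dimensional fusion image the claim describes.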
9. A panoramic video fusion system for a building engineering temporary construction project, the system comprising:
the data acquisition module is used for acquiring real-time video pictures of a plurality of adjacent cameras with overlapping areas and carrying out image preprocessing on the real-time video pictures; the real-time video picture is a real-time video picture of a building engineering temporary construction project;
the mapping module is used for carrying out projection mapping on the preprocessed video image information to the same plane to obtain plane mapping image data;
the transformation relation determining module is used for carrying out feature extraction and feature matching on the plane mapping image data, determining matching feature points of two adjacent plane mapping image data, and determining the transformation relation of the two adjacent plane mapping image data according to the matching feature points;
the optimization processing module is used for carrying out color difference optimization processing on the two adjacent plane mapping image data, and deforming the plane mapping image data subjected to the color difference optimization processing according to the transformation relation to obtain a corresponding panoramic video image;
the flow field estimation module is used for carrying out flow field estimation on the overlapping area of the plane mapping image data based on the time sequence information of the panoramic video image information and combining with the space horizontal position information to obtain flow field estimation information;
the calibration module is used for calibrating the plane mapping image data through the flow field estimation information to obtain plane panoramic image registration information;
the fusion module is used for performing panoramic video stitching and fusion based on the planar panoramic image registration information to obtain a panoramic fusion image;
and the three-dimensional fusion module is used for rendering the panoramic fusion image to the corresponding position of the preset three-dimensional model in real time through a three-dimensional fusion technology for display.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the panoramic video fusion method for a building engineering temporary construction project according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310983115.6A CN116993642A (en) | 2023-08-07 | 2023-08-07 | Panoramic video fusion method, system and medium for building engineering temporary construction engineering |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116993642A true CN116993642A (en) | 2023-11-03 |
Family
ID=88533591
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310983115.6A Pending CN116993642A (en) | 2023-08-07 | 2023-08-07 | Panoramic video fusion method, system and medium for building engineering temporary construction engineering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116993642A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11076142B2 (en) | Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene | |
CN109615703B (en) | Augmented reality image display method, device and equipment | |
US6717586B2 (en) | Apparatus, method, program code, and storage medium for image processing | |
JP4698831B2 (en) | Image conversion and coding technology | |
US9460555B2 (en) | System and method for three-dimensional visualization of geographical data | |
AU2019309552B2 (en) | Method and data-processing system for synthesizing images | |
CN112017222A (en) | Video panorama stitching and three-dimensional fusion method and device | |
WO2023207452A1 (en) | Virtual reality-based video generation method and apparatus, device, and medium | |
CN107862718B (en) | 4D holographic video capture method | |
CN113538659A (en) | Image generation method and device, storage medium and equipment | |
CN113012299A (en) | Display method and device, equipment and storage medium | |
CN112446939A (en) | Three-dimensional model dynamic rendering method and device, electronic equipment and storage medium | |
EP3057316B1 (en) | Generation of three-dimensional imagery to supplement existing content | |
CN116168076A (en) | Image processing method, device, equipment and storage medium | |
CN118196135A (en) | Image processing method, apparatus, storage medium, device, and program product | |
CN113546410B (en) | Terrain model rendering method, apparatus, electronic device and storage medium | |
CN108765582B (en) | Panoramic picture display method and device | |
CN114358112A (en) | Video fusion method, computer program product, client and storage medium | |
CN109801351B (en) | Dynamic image generation method and processing device | |
JP6799468B2 (en) | Image processing equipment, image processing methods and computer programs | |
CN116993642A (en) | Panoramic video fusion method, system and medium for building engineering temporary construction engineering | |
JP2021196870A (en) | Virtual viewpoint rendering device, method, and program | |
CN113821107B (en) | Indoor and outdoor naked eye 3D system with real-time and free viewpoint | |
WO2024152678A1 (en) | Generation method and apparatus for human body depth map, and electronic device, storage medium and computer program product | |
CN109407329B (en) | Space light field display method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||