CN103226830A - Automatic matching correction method of video texture projection in a three-dimensional virtual-real fusion environment

Info

Publication number
CN103226830A
CN103226830A
Authority
CN
China
Prior art keywords: video, texture, virtual, image, scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013101487710A
Other languages
Chinese (zh)
Other versions
CN103226830B (en)
Inventor
高鑫光
兰江
李胜
汪国平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Weishiwei Information Technology Co., Ltd.
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University
Priority to CN201310148771.0A
Publication of CN103226830A
Application granted
Publication of CN103226830B
Legal status: Active (current)
Anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention relates to an automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment, and to a method for fusing real video images with a virtual scene. The automatic matching correction method comprises the steps of constructing the virtual scene, acquiring video data, fusing video textures, and correcting the projector. Captured real video is fused with the virtual scene on complex scene surfaces such as terrain and buildings by means of texture projection, which improves the expression and display of dynamic scene information in the virtual-real environment and enhances the scene's sense of depth. A dynamic video-texture coverage effect over a large-scale virtual scene can be achieved by increasing the number of videos taken from different shooting angles, thereby realizing a dynamic, realistic virtual-real fusion of the virtual environment and the displayed scene. Color-consistency processing of the video frames in advance eliminates obvious color jumps and improves the visual effect, and an automatic correction algorithm makes the fusion of the virtual scene with the real video more precise.

Description

Automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment
Technical field
The present invention relates to virtual reality, and in particular to a method for fusing real video images with a virtual scene and correcting the fusion. It belongs to the technical fields of virtual reality, computer graphics, computer vision, and human-computer interaction.
Background technology
In virtual reality systems, static images are the most common means of depicting surface details of buildings or terrain, usually realized through texture mapping. The shortcoming of this approach is that once set, the surface texture of the scene never changes: it ignores changing elements on model surfaces, reduces the realism of the virtual environment, and cannot give users an immersive feeling. To overcome the lack of realism caused by static images, replacing pictures with video is an intuitive idea. Some existing systems do add video elements, but mostly in the form of playback windows that display the video in an ordinary video player; this achieves only a global-monitoring effect and does not truly fuse video with the scene. Some research has improved on this by constructing additional planes in space and playing video on those planes to enhance realism (see K. Kim, S. Oh, J. Lee, I. Essa, "Augmenting Aerial Earth Maps with Dynamic Information," IEEE International Symposium on Mixed and Augmented Reality, Science and Technology Proceedings, 19-22 Oct. 2009, Orlando, Florida, USA; and Y. Wang, D. Bowman, D. Krum, E. Coelho, T. Smith-Jackson, D. Bailey, S. Peck, S. Anand, T. Kennedy, and Y. Abdrazakov, "Effects of Video Placement and Spatial Context Presentation on Path Reconstruction Tasks with Contextualized Videos," IEEE Transactions on Visualization and Computer Graphics, Vol. 14, No. 6, November/December 2008). Although these methods add video to the virtual environment, their applicable settings are very limited: the video can only be attached to large building facades or flat ground. For slightly more complex scenes, such as building corners or irregular terrain, the geometry cannot be approximated by a plane, and such planar video-display methods no longer apply.
On the other hand, thanks to developments in graphics and computer vision, many mature algorithms exist, for example matching based on color, on texture, or on features (edge direction, SIFT, HOG). However, these are all methods for two-dimensional images and face major limitations in three-dimensional space. Existing projector-correction algorithms are mostly keystone-correction algorithms for the projection region in a "projector-screen" system. For example, "Multi-projector image correction method and device" (application No. 201010500209.6) restricts correction to two-dimensional space: correction parameters are obtained from the image information of the non-overlapping regions captured separately by each camera, and the video data of the corresponding camera is corrected accordingly; it only corrects overlapping images or images with small overlap regions. "Touchable true three-dimensional display method based on a multi-projector rotating-panel three-dimensional image" (application No. 200810114457.X) describes three-dimensional space by acquiring cross-sectional images at different angles, so that a hand can directly touch the stereoscopic image, while also improving its contrast; however, that application relies mainly on a rotating screen to make the three-dimensional image touchable, which differs from the application scenario of the present work.
Neither the above patent applications nor prior-art feature matching methods offer much guidance for correcting a projector in three-dimensional space.
Summary of the invention
The objective of the present invention is to fuse captured real video with the virtual scene, by means of texture projection, on complex scene surfaces such as terrain and buildings. This improves the expression and display of dynamic scene information in the virtual environment and enhances the scene's sense of depth. By increasing the number of videos taken from different shooting angles, a dynamic video texture can cover a large-scale virtual scene, achieving a dynamic, realistic virtual-real fusion of the virtual environment and the displayed scene.
To achieve this technical objective, the present invention adopts the following technical scheme:
An automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment, comprising the steps of:
1) building, from remote-sensing image data obtained in advance, a terrain model whose surface carries a static texture image, together with a virtual scene composed of multiple models having three-dimensional geometry and textures; acquiring multiple segments of real captured video streams and recording the camera pose at shooting time;
2) adding, according to the recorded camera pose, a virtual projector model to the virtual scene together with a viewing volume corresponding to the camera parameters, and setting the initial pose of the virtual projector model in the virtual scene from the camera pose information;
3) preprocessing the video frames of the real captured video streams to obtain dynamic video textures, and projecting the preprocessed video data into the virtual environment using the projective texture technique;
4) fusing the static textures on model surfaces and/or the original remote-sensing terrain texture in the virtual environment with the dynamic video texture, to obtain the final texture value covering the scene surface;
5) rendering, from the final texture values, the image seen from the virtual projector's viewpoint, matching it with the corresponding image in the real captured video stream, and constructing an energy function;
6) resetting the initial pose of the projector in the virtual scene with the optimal solution of the energy function, completing the virtual projector correction.
Further, the texture fusion method in step 4) is as follows (a sketch follows this list):
1) reset the model-view and projection matrices to convert the virtual viewpoint to the projector's viewpoint, draw the virtual scene, and obtain the depth values under the current projector viewpoint (using the Z-buffer for depth buffering);
2) reset the model-view and projection matrices back to the virtual viewpoint, redraw the virtual scene, and obtain the true depth value of each point in the scene;
3) draw the virtual scene under each projector viewpoint in turn, obtain the projective texture coordinates of each point through automatic texture-coordinate generation, and compare the true depth values from step 2) with the depth values from step 1) (using the Z-buffer);
4) if the two are equal, use the projector's video texture; if not, use the scene model's own texture; iterate by setting the texture combiner function until all projectors in the scene have been traversed, obtaining the final texture value of each point in the scene.
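To make the visibility test concrete, below is a minimal CPU-side sketch in Python/NumPy of the shadow-map-style comparison in steps 1)-4). The patent performs this on the GPU with the Z-buffer and a texture combiner; the array names and the epsilon tolerance here are illustrative assumptions.

```python
import numpy as np

def fuse_projector_texture(depth_from_projector, projector_zbuffer,
                           video_texture, model_texture, eps=1e-4):
    """Shadow-map style visibility test: a point keeps the projected
    video texture only if its depth as seen from the projector equals
    the projector's Z-buffer value, i.e. it is not occluded."""
    visible = np.abs(depth_from_projector - projector_zbuffer) < eps
    mask = visible[..., None]               # broadcast over RGB channels
    return np.where(mask, video_texture, model_texture)

def fuse_all_projectors(depths, zbuffers, video_textures, model_texture):
    # Iterate over all projectors; the patent resolves overlaps with a
    # Replace-mode texture combiner, so later projectors overwrite earlier ones.
    result = model_texture
    for d, z, v in zip(depths, zbuffers, video_textures):
        result = fuse_projector_texture(d, z, v, result)
    return result
```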
Further, for the matching with the corresponding image in the real captured video stream in step 5), the energy function taking the pose information as its independent variable is constructed as follows (a sketch of the resulting energy follows these steps):
Step 1: reset the model-view and projection matrices to move the viewpoint in the virtual scene to the projector's position, draw the scene to obtain an image of the virtual environment, segment the image with the mean-shift algorithm, and then binarize it;
Step 2: extract a key frame from the real captured video stream and binarize it using the method of step 1;
Step 3: compute the contour error within the viewing volume formed by the projector: XOR the images obtained in the first two steps pixel by pixel and count the pixels equal to 1; this count is the first term of the energy function;
Step 4: add local-information features using the SIFT consistency operator: collect matched point pairs from the images obtained in steps 1 and 2 before binarization, and obtain the error value of the matched pairs through a key-point constraint process; this error value is the second term of the energy function;
Step 5: assign different weights to the two terms of the energy function;
Step 6: solve for the optimal value of the energy function;
Step 7: replace the initial pose of the projector with the optimal solution.
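As an illustration, a minimal sketch of the resulting two-term energy, assuming the binarized images are 0/1 NumPy arrays and the SIFT matches come as pixel-coordinate pairs; the weight values are placeholders, since the patent only specifies that the contour term receives more weight:

```python
import numpy as np

def pose_energy(virtual_bin, real_bin, matches, w_contour=0.8, w_keypoint=0.2):
    """Two-term energy of the projector pose.

    virtual_bin, real_bin: 0/1 arrays (binarized virtual rendering and
    real key frame); matches: list of ((x1, y1), (x2, y2)) SIFT pairs.
    """
    # Term 1 (global): pixelwise XOR of the two silhouettes, counting 1s
    contour_error = np.count_nonzero(np.logical_xor(virtual_bin, real_bin))
    # Term 2 (local): key-point constraint -- summed distance of matched pairs
    keypoint_error = sum(np.linalg.norm(np.subtract(p, q)) for p, q in matches)
    return w_contour * contour_error + w_keypoint * keypoint_error
```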
Further, the optimal value of the energy function is solved as follows:
first apply simulated annealing to the energy function to narrow the solution space to an approximate neighborhood of the optimum, then compress the approximate solution space with the downhill simplex algorithm to obtain the optimal solution.
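A compact sketch of this two-stage search using SciPy's optimizers; the six-component pose vector (x, y, z, φ, θ, γ) follows the description, while the search bounds are an assumption:

```python
from scipy.optimize import dual_annealing, minimize

def correct_projector_pose(energy_fn, initial_pose,
                           delta=(5.0, 5.0, 5.0, 0.1, 0.1, 0.1)):
    """Stage 1: simulated annealing narrows the search to a neighborhood
    of the optimum; stage 2: downhill simplex (Nelder-Mead) refines it."""
    bounds = [(p - d, p + d) for p, d in zip(initial_pose, delta)]
    coarse = dual_annealing(energy_fn, bounds, maxiter=200)
    fine = minimize(energy_fn, coarse.x, method='Nelder-Mead')
    return fine.x   # corrected pose (x, y, z, phi, theta, gamma)
```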
Further, in steps 1 and 2 above, when segmenting the image with the mean-shift algorithm, the color characteristics of buildings and roads are exploited: pixels in non-building, non-road regions are set to white and the regions corresponding to building models or roads are preserved; the image is then binarized, with building- and road-related regions set to black.
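A sketch of this step with OpenCV's mean-shift filtering; the BGR color range used to pick out building/road regions is an assumed placeholder, since the patent does not give numeric thresholds:

```python
import cv2
import numpy as np

def binarize_buildings(image_bgr, lo=(0, 0, 0), hi=(140, 140, 140)):
    """Mean-shift segmentation followed by binarization: building/road
    regions end up black (0), everything else white (255)."""
    # Smooth the image into homogeneous color regions
    segmented = cv2.pyrMeanShiftFiltering(image_bgr, sp=20, sr=40)
    # Assumed color heuristic: building/road pixels fall in a dark BGR range
    region = cv2.inRange(segmented, np.array(lo, np.uint8), np.array(hi, np.uint8))
    binary = np.full(region.shape, 255, np.uint8)   # non-building/road -> white
    binary[region > 0] = 0                          # building/road -> black
    return binary
```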
Further, the video-frame preprocessing applied to the images of the real captured video stream is as follows:
decode the video data into individual video frame images, extract a sample frame from each video stream, find feature-point matches between the sample frames using the SIFT operator, and perform color-consistency processing.
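A sketch of the SIFT matching between two sample frames using OpenCV; the ratio-test threshold is an assumption:

```python
import cv2

def match_sample_frames(frame_a, frame_b, ratio=0.75):
    """Find SIFT feature-point matches between two sample frames,
    returning matched pixel-coordinate pairs."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(frame_a, None)
    kp_b, des_b = sift.detectAndCompute(frame_b, None)
    pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]
```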
Further, the color-consistency processing is as follows (a histogram-matching sketch follows this list):
1) extract one sample frame from each of the two videos being matched, build the color histogram of all pixels in each frame, and apply histogram equalization and specification so that the two video frames have the same color histogram distribution;
2) apply the same histogram equalization and specification as the corresponding sample frame to each frame in the same video stream, thereby completing the consistency processing for the whole video stream;
3) create a cache for the video frames, sized to hold about 50 video frames (at a frame resolution of 1920*1080);
4) load the frame data using a first-in-first-out (FIFO) list structure.
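A sketch of the histogram-specification step using scikit-image's match_histograms, one possible implementation of equalization plus specification; the patent does not name a library:

```python
from skimage.exposure import match_histograms

def make_stream_consistent(sample_frame, frames):
    """Give every frame of a stream the color histogram distribution of
    the reference sample frame, so overlapping video textures agree."""
    return [match_histograms(f, sample_frame, channel_axis=-1) for f in frames]
```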
Further, the real captured video stream is obtained over the HTTP protocol, the video data is decoded locally, and the video frames are saved in JPEG format.
Further, multiresolution processing is applied to the video frame images: frames of different resolutions are loaded for the same image depending on the situation, using pixelwise bilinear interpolation to downsample the image to one or more of 1/4, 1/16, and 1/64 of the original.
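A sketch of the multiresolution pyramid using bilinear downsampling with OpenCV; each level halves both sides, giving the stated 1/4, 1/16, 1/64 area ratios:

```python
import cv2

def build_pyramid(frame):
    """Downsample a frame to 1/4, 1/16 and 1/64 of its original area
    (halving each side per level) with bilinear interpolation."""
    levels, img = [frame], frame
    for _ in range(3):
        h, w = img.shape[:2]
        img = cv2.resize(img, (w // 2, h // 2), interpolation=cv2.INTER_LINEAR)
        levels.append(img)
    return levels   # [full, 1/4, 1/16, 1/64]
```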
Further, an alpha channel is added to the JPEG-format video images.
The present invention also proposes a method for fusing real video images with a virtual scene, the steps of which are:
1) build, from remote-sensing image data obtained in advance, models whose surfaces carry static texture images, together with a virtual scene; the spatial positions of the models in the virtual scene and the relative positions, orientations, and sizes between models are consistent with the real scene;
2) acquire multiple segments of real captured video streams and record the camera pose at shooting time;
3) the method of the invention can be implemented on a virtual-reality platform based on a digital earth, where each virtual projector carries two coordinate representations: its geolocation information and the Cartesian coordinates used by the virtual reality system; the latitude-longitude coordinates of the Earth-surface shooting location are therefore converted into the world coordinates represented by the Cartesian coordinate system of the virtual scene and combined (a conversion sketch follows this list); a virtual projector model and its corresponding viewing volume are added to the virtual scene, and the initial pose of the virtual projector model in the virtual scene is set from the camera pose information under the world coordinate system;
4) preprocess the video frames of the real captured video streams to obtain dynamic video textures, and project the preprocessed video data into the virtual environment using the projective texture technique;
5) fuse the static textures of the models and/or the original remote-sensing terrain texture in the virtual environment with the dynamic video texture;
6) apply texture fusion where different projectors among the virtual projector models have intersecting coverage regions.
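For step 3), a minimal sketch of the standard geodetic-to-Cartesian (ECEF) conversion; the WGS-84 ellipsoid is an assumption, as the patent does not name one:

```python
import math

def geodetic_to_ecef(lat_deg, lon_deg, height_m):
    """Convert latitude/longitude/height to Earth-centered Cartesian
    world coordinates (WGS-84 ellipsoid assumed)."""
    a, f = 6378137.0, 1 / 298.257223563          # semi-major axis, flattening
    e2 = f * (2 - f)                             # first eccentricity squared
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)   # prime-vertical radius
    x = (n + height_m) * math.cos(lat) * math.cos(lon)
    y = (n + height_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - e2) + height_m) * math.sin(lat)
    return x, y, z
```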
Beneficial effects of the present invention
(a) The method handles complex scene conditions and achieves true fusion of video with the virtual scene: video textures replace the original terrain remote-sensing texture and the models' inherently coarse static image textures, adding dynamic information to the virtual scene texture and improving the visual effect; the covered range can be widened by increasing the number of videos.
(b) A buffer structure is provided for the video and a data pyramid is built, improving display efficiency; data in adjacent pyramid levels can substitute for each other.
(c) An automatic correction algorithm adjusts the initial pose of the virtual projector, making the fusion of the virtual scene with the real video more accurate. Compared with the initial position, this greater accuracy is reflected in the value of the energy function: the more accurate the pose, the closer the energy value approaches zero.
(d) Color-consistency processing is applied to the video frames in advance, eliminating obvious color jumps and improving the visual effect.
Description of drawings
Fig. 1 is a schematic flowchart of a concrete implementation in an embodiment of the automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment of the present invention;
Fig. 2a and Fig. 2b are schematic views of the scene without projected texture in an embodiment of the method of the present invention;
Fig. 3a and Fig. 3b are schematic views of the scene with projected texture added in an embodiment of the method of the present invention;
Fig. 4 is a schematic view of the scene before projector correction in an embodiment of the method of the present invention;
Fig. 5 is a schematic view of the scene after projector correction in an embodiment of the method of the present invention.
Embodiment
The technical scheme in the embodiments of the present invention is described clearly and completely below with reference to the accompanying drawings. It should be understood that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
(1) Build the virtual scene. Set the terrain texture from remote-sensing images obtained in advance, and build in the virtual space models whose surfaces carry static texture images together with the virtual scene they constitute. The spatial positions of the models in the scene and the relative positions, orientations, sizes, and other elements between models should be as consistent with the real scene as possible.
(2) Acquire video data. The data source can be a fixed surveillance camera or video captured by a mobile device; the camera parameters at shooting time are acquired at the same time and used for the initial placement of the projector in the virtual space. The video streams are decomposed into single-frame images, and the images undergo multiresolution processing and color/illumination consistency processing.
(3) Video texture fusion. According to the camera latitude-longitude information obtained in step (2), add a virtual projector model and its viewing volume to the scene of step (1), and set the orientation of the virtual projector model from the camera pose information obtained in step (2). At present, owing to limitations of the implementation, at most 32 virtual projector models can be loaded simultaneously within the viewing volume of a single virtual viewpoint. The projective texture technique projects the video data into the virtual environment, and the original remote-sensing texture of the models or ground is fused with the dynamic video texture. If different projectors have intersecting coverage regions, a blending operation is also applied to those regions.
(4) Projector correction. Obtain the image under the virtual projector's viewpoint and match it with the corresponding image in the real video. With the algorithm of the present invention, compute the disparity range of buildings or roads between the virtual and real images together with the local feature differences, construct the energy function, and solve for its optimum. Reset the projector in the virtual scene with the optimal solution, completing the correction process and improving the effect.
The method of the invention is described in detail below from several aspects.
First, some concepts are clarified:
Camera: the video source in real space, used to acquire video data.
Projector: a virtual model in the virtual scene, used to project video textures into the virtual scene.
Texture fusion: one model may use textures from several separate sources, so the different texture color values at the same point must be fused to obtain the final color value.
Depth value: the value obtained after perspective transformation of any point in space, representing its distance from the virtual viewpoint along the Z direction.
Depth buffer (Z-buffer): a buffer of the same size as the color buffer, kept after the scene is rendered; each element stores a depth value of the scene, representing the depth of the object surface closest to the viewpoint in the three-dimensional scene corresponding to that element.
Projective texture technique: unlike traditional four-point texture mapping, the texture is applied to the virtual scene in the form of a projection and fused with the buildings and/or terrain in the virtual scene as their final texture.
The specific implementation of technical scheme (2) is as follows: the video data is obtained over the conventional HTTP protocol and decoded locally into individual video frames, which are saved in JPEG format. One sample frame is extracted from each video stream, and the SIFT operator is used to find feature-point matches between the sample frames (the video sources can be pre-sorted here, grouping sources that capture the same building; this shortens the matching process and improves preprocessing efficiency), after which color-consistency processing is performed.
The concrete operations of color-consistency processing are: extract one sample frame from each of the two videos being matched; build the color histogram formed by all pixels in each frame; and, through histogram equalization and specification, make the two frames share the same histogram distribution. Since all frames in a given video stream have approximately the same color histogram as their sample frame, applying the same equalization and specification to every frame of the stream completes the consistency processing for the whole stream. The purpose of this processing is to give the video textures in overlapping regions the same texture colors, improving the fusion between videos and avoiding obvious jump effects. Because the decoded video frames are large and memory is precious, a cache of 50 video frames is created. The reason for choosing 50 is that during stream acquisition and decoding a single video frame has a maximum resolution of 1920*1080 with 4 bytes per pixel, namely RGB plus an alpha channel, where alpha determines the image's translucency and ranges from 0 to 255 (0 representing opaque, 255 fully transparent); reading one video frame therefore consumes about 1 MB, and 30 video channels consume about 30 MB, so with 1 GB of memory roughly 50 frames can be cached. A FIFO list structure is adopted: if the cache is full, incoming data pauses and waits; if, because of network problems, the display runs faster than loading, the previous frame is returned until new frame data arrives. One optimization provided by the invention saves further space through multiresolution processing of the video frames: a pyramid is built for each image, and frames of different resolutions are loaded in different situations, reducing memory overhead. Concretely, pixelwise bilinear interpolation downsamples the image to 1/4, 1/16, and 1/64 of the original. In addition, as another optimization, to improve efficiency and allow video frames to be used directly in the texture projection algorithm, an alpha channel is added to the video frame images and used as the fusion parameter between videos and between video and scene.
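A minimal sketch of such a FIFO frame cache; blocking on a full cache and re-showing the previous frame on a stall follow the description above, while the class and method names are illustrative:

```python
import queue

class FrameCache:
    """FIFO cache for decoded video frames. A full cache blocks the
    producer (decoding waits); an empty cache re-serves the last frame
    until new data arrives over the network."""
    def __init__(self, capacity=50):
        self._frames = queue.Queue(maxsize=capacity)
        self._last = None

    def push(self, frame):
        self._frames.put(frame)       # blocks while the cache is full

    def next_frame(self):
        try:
            self._last = self._frames.get_nowait()
        except queue.Empty:
            pass                      # network stall: keep showing last frame
        return self._last
```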
The specific implementation of technical scheme (3) is as follows:
First, reset the model-view and projection matrices to convert the virtual viewpoint to the projector's viewpoint, clear the depth buffer of the virtual viewpoint, set the polygon offset and color mask, and render the virtual scene of technical scheme (1); this yields the depth buffer under the current projector viewpoint, from which the depth texture is built.
Second, reset the model-view and projection matrices to return the viewpoint to the virtual viewpoint, clear the color and depth buffers, and render the virtual scene of technical scheme (1) including its surface textures, thereby obtaining the true depth value of each point in the scene.
Finally, reset the model-view and projection matrices and draw the scene under each projector viewpoint in turn. Obtain the projective texture coordinates of each point through automatic texture-coordinate generation, and compare the true depth value obtained from the first two steps with the Z-buffer value to determine the final texture value of each point in the scene: if the two are equal, use the projector's video texture; if not, use the scene model's own texture. Iterate this process until all projectors in the scene have been traversed.
Fusion between different videos is realized by setting the texture combiner function. Since color-consistency correction is completed during video-frame preprocessing, the Replace mode is adopted here, i.e., the later texture fragment replaces the original value.
The specific implementation of technical scheme (4) is as follows. Correcting the projector pose means correcting the projector's three-dimensional spatial coordinates x, y, z and its three deflection angles φ, θ, γ in the virtual scene.
The pose values recorded when the video was captured serve as the initial values of the virtual projector in the virtual scene, but owing to the limited precision of the equipment these values cannot make the projected texture fuse completely with the virtual space, so an additional correction process is needed. The present invention corrects the virtual projector by constructing an energy function with the pose information as its independent variable and solving that function for its optimal value.
First, reset the model-view and projection matrices to move the viewpoint in the virtual scene to the projector's position and draw the scene to obtain an image of the virtual environment; segment the image with the mean-shift algorithm, exploiting the color characteristics of buildings and roads: set pixels in non-building, non-road regions to white and preserve the regions corresponding to building models or roads; then binarize the image, setting building- and road-related regions to black.
Second, extract a key frame from the video and, using a method similar to the first step, preserve the regions corresponding to building models or roads and binarize the image.
Third, compute the contour error within the projector's viewing volume: XOR the images obtained in the first two steps pixel by pixel, then count the pixels equal to 1; this count is the first term of the energy function.
Fourth, for buildings with rotationally symmetric appearance, contour matching alone may yield wrong results, so some local-information features must be added. Using the SIFT consistency operator, collect matched point pairs from the images obtained in steps 1 and 2 before binarization. Obtain the error value of the matched pairs through the key-point constraint process; this value forms the second term of the energy function.
Fifth, assign different weights to the two terms of the energy function; the present invention assigns more weight to the global contour error. The energy function is now complete.
Sixth, to solve for the optimal value of the energy function, first apply simulated annealing to narrow the solution space to an approximate neighborhood of the optimum, then compress the approximate solution space further with the downhill simplex algorithm to obtain the optimal solution.
Seventh, replace the initial projector pose of (3) with the optimal solution.
According to the video fusion and correction workflow, the present embodiment can be implemented in the following steps:
1 Building the virtual scene
Taking a virtual campus as an example: from existing terrain remote-sensing data, build the data pyramid and bind textures of different levels to the terrain under different viewpoints. Create landmark building models on the campus and, according to the terrain remote-sensing data, manually place the models at the corresponding positions so that the relative positions between buildings are as consistent with reality as possible. See Fig. 1, steps (2) remote-sensing terrain data transmission → (3) LOD processing → (4) terrain texture values → (6) model data → (7) spatial-position calibration → (8) model texture.
2 Acquiring video data
Obtain raw video streams from different video sources, such as campus surveillance cameras, camcorders, or mobile phones. Decompose the streams into single-frame images, apply multiresolution processing to create images of different resolutions, and add an alpha channel to each image to facilitate the subsequent fusion between videos and between video and scene. In addition, record the longitude-latitude, view angle, and orientation information of each video source for the initial placement of the projectors in the virtual space. See Fig. 1, steps (11) decompose video frames, build multiresolution, add alpha channel to the images → (13) video streams → (14) projector texture.
3 Video texture fusion
Fuse the video images with the terrain and model textures. This is realized by drawing the scene three times: the first pass draws the objects from the projector's viewpoint to obtain the corresponding Z-buffer; the second pass draws the scene from the virtual viewpoint to obtain the true depth value of each point; the third pass compares the depth values obtained from the first two passes to decide the texture value of each point in the scene. The fusion process is realized by setting different texture combiners. See Fig. 1, steps (1) automatic texture-coordinate generation → (5) comparison of Z-buffer values with true depth values → (9) multipass drawing of the virtual scene → (10) texture combiner function → (12) final image.
4 Projector correction
Place the viewpoint at the projector and draw the scene to obtain the virtual scene image. Find the projector's video stream in the real scene and extract a key frame from it. Apply image segmentation to the two images; many algorithms are available, e.g., mean-shift, normalized cut, JSEG, pixel affinity; the present invention adopts the mean-shift algorithm. Segment the image into different regions, extract the ground or parts of buildings by their color characteristics, and reject the irrelevant parts. Normalize the separated images: parts without models are set to white, parts with models to black. Then XOR the two images pixel by pixel; if the two images have different resolutions, a consistency step must be added. Count the pixels whose result is 1; this count forms the first term of the energy function. The outline is a global comparison means and needs some local feature matches as a supplement: using the SIFT feature-matching operator, select several groups of feature points and compute their key-point error value, which is added as the second term of the energy function. At this point the projector-correction problem has been turned into finding the optimum of an energy function of several independent variables. The present invention combines simulated annealing with the downhill simplex algorithm: simulated annealing finds an approximate optimum, and the downhill simplex algorithm then optimizes the projector position within a small range. Once the optimum is obtained, it replaces the pose of the virtual projector, completing the correction process. See Fig. 1, steps (15) image segmentation based on mean-shift → (16) SIFT operator extracts local feature match points → (17) virtual projector pose calibration values → (18) extraction of buildings or roads and XOR operation → (19) key-point error → (20) downhill simplex → (21) simulated annealing → (22) construction of the energy function.

Claims (11)

1. An automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment, comprising the steps of:
1) building, from remote-sensing image data obtained in advance, a terrain model whose surface carries a static texture image, together with a virtual scene; acquiring multiple segments of real captured video streams and recording the camera pose at shooting time;
2) adding, according to the recorded camera pose, a virtual projector model to the virtual scene together with a viewing volume corresponding to the camera parameters, and setting the initial pose of the virtual projector model in the virtual scene from the camera pose information;
3) preprocessing the video frames of the real captured video streams to obtain dynamic video textures, and projecting the preprocessed video data into the virtual environment using the projective texture technique;
4) fusing the static texture on the terrain model surface and/or the original remote-sensing terrain texture in the virtual environment with the dynamic video texture, to obtain the final texture value covering the scene surface;
5) rendering, from the final texture values, the image seen from the virtual projector's viewpoint, matching it with the corresponding image in the real captured video stream, and constructing an energy function;
6) resetting the initial pose of the projector in the virtual scene with the optimal solution of the energy function, completing the virtual projector correction.
2. The automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment according to claim 1, wherein the texture fusion method in step 4) is as follows:
1) resetting the model-view and projection matrices to convert the virtual viewpoint to the projector's viewpoint, drawing the virtual scene, and obtaining the depth values under the current projector viewpoint;
2) resetting the model-view and projection matrices back to the virtual viewpoint, redrawing the virtual scene, and obtaining the true depth value of each point in the scene;
3) drawing the virtual scene under each projector viewpoint in turn, obtaining the projective texture coordinates of each point through automatic texture-coordinate generation, and comparing the true depth values from step 2) with the depth values from step 1);
4) if the two are equal, using the projector's video texture; if not, using the scene model's own texture; and iterating by setting the texture combiner function until all projectors in the scene have been traversed, obtaining the final texture value of each point in the scene.
3. The automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment according to claim 1, wherein, for the matching with the corresponding image in the real captured video stream in step 5), the energy function taking the pose information as its independent variable is constructed as follows:
Step 1: resetting the model-view and projection matrices to move the viewpoint in the virtual scene to the projector's position, drawing the scene to obtain an image of the virtual environment, segmenting the image with the mean-shift algorithm, and then binarizing it;
Step 2: extracting a key frame from the real captured video stream and binarizing it using the method of step 1;
Step 3: computing the contour error within the viewing volume formed by the projector: XORing the images obtained in the first two steps pixel by pixel and counting the pixels equal to 1; this count is the first term of the energy function;
Step 4: adding local-information features using the SIFT consistency operator: collecting matched point pairs from the images obtained in steps 1 and 2 before binarization, and obtaining the error value of the matched pairs through a key-point constraint process; this error value is the second term of the energy function;
Step 5: assigning different weights to the two terms of the energy function;
Step 6: solving for the optimal value of the energy function;
Step 7: replacing the initial pose of the projector with the optimal solution.
4. The automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment according to claim 1 or 3, wherein the optimal value of the energy function is solved as follows:
first applying simulated annealing to the energy function to narrow the solution space to an approximate neighborhood of the optimum, then compressing the approximate solution space with the downhill simplex algorithm to obtain the optimal solution.
5. The automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment according to claim 3, wherein in steps 1 and 2, when the image is segmented with the mean-shift algorithm, the color characteristics of buildings and roads are exploited: the pixel values of non-building, non-road regions are set to white and the regions corresponding to building models or roads are preserved; the image is then binarized, with building- and road-related regions set to black.
6. The automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment according to claim 1, wherein the video-frame preprocessing applied to the images of the real captured video stream is as follows:
decoding the video data into individual video frame images, extracting a sample frame from each video stream, finding feature-point matches between the sample frames using the SIFT operator, and performing color-consistency processing.
7. The automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment according to claim 6, wherein the color-consistency processing is:
1) extracting one sample frame from each of the two videos being matched, building the color histogram of all pixels in each frame, and applying histogram equalization and specification so that the two video frames have the same color histogram distribution;
2) applying the same histogram equalization and specification as the corresponding sample frame to each frame in the same video stream, thereby completing the consistency processing for the whole video stream;
3) creating a cache for the video frames, sized to hold 50 video frames;
4) loading the video frame data using a first-in-first-out (FIFO) list structure.
8. The automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment according to claim 6, wherein the real captured video stream is obtained over the HTTP protocol, the video data is decoded locally, and the video frames are saved in JPEG format.
9. The automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment according to claim 6, wherein multiresolution processing is applied to the video frame images: frames of different resolutions are loaded for the same image depending on the situation, using pixelwise bilinear interpolation to downsample the image to one or more of 1/4, 1/16, and 1/64 of the original.
10. The automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment according to claim 8, wherein an alpha channel is added to the JPEG-format video images.
11. A method for fusing real video images with a virtual scene, the steps of which are:
1) building, from remote-sensing image data obtained in advance, models whose surfaces carry static texture images, together with a virtual scene; the spatial positions of the models in the virtual scene and the relative positions, orientations, and sizes between models are consistent with the real scene;
2) acquiring multiple segments of real captured video streams and recording the camera pose at shooting time;
3) converting the latitude-longitude coordinates of the Earth-surface shooting location into the world coordinates represented by the Cartesian coordinate system of the virtual scene and combining them, adding a virtual projector model and its corresponding viewing volume to the virtual scene, and setting the initial pose of the virtual projector model in the virtual scene from the camera pose information under the world coordinate system;
4) preprocessing the video frames of the real captured video streams to obtain dynamic video textures, and projecting the preprocessed video data into the virtual environment using the projective texture technique;
5) fusing the static textures of the models and/or the original remote-sensing terrain texture in the virtual environment with the dynamic video texture;
6) applying texture fusion where different projectors among the virtual projector models have intersecting coverage regions.
CN201310148771.0A 2013-04-25 2013-04-25 Automatic matching correction method of video texture projection in a three-dimensional virtual-real fusion environment Active CN103226830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310148771.0A CN103226830B (en) 2013-04-25 2013-04-25 Automatic matching correction method of video texture projection in a three-dimensional virtual-real fusion environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310148771.0A CN103226830B (en) 2013-04-25 2013-04-25 Automatic matching correction method of video texture projection in a three-dimensional virtual-real fusion environment

Publications (2)

Publication Number Publication Date
CN103226830A 2013-07-31
CN103226830B CN103226830B (en) 2016-02-10

Family

ID=48837265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310148771.0A Active Automatic matching correction method of video texture projection in a three-dimensional virtual-real fusion environment CN103226830B (en)

Country Status (1)

Country Link
CN (1) CN103226830B (en)

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103533318A (en) * 2013-10-21 2014-01-22 北京理工大学 Building outer surface projection method
CN103716586A (en) * 2013-12-12 2014-04-09 中国科学院深圳先进技术研究院 Monitoring video fusion system and monitoring video fusion method based on three-dimension space scene
CN104320616A (en) * 2014-10-21 2015-01-28 广东惠利普路桥信息工程有限公司 Video monitoring system based on three-dimensional scene modeling
CN104348892A (en) * 2013-08-09 2015-02-11 阿里巴巴集团控股有限公司 Information display method and device
CN104715479A (en) * 2015-03-06 2015-06-17 上海交通大学 Scene reproduction detection method based on augmented virtuality
CN105023294A (en) * 2015-07-13 2015-11-04 中国传媒大学 Fixed point movement augmented reality method combining sensors and Unity3D
CN105118061A (en) * 2015-08-19 2015-12-02 刘朔 Method used for registering video stream into scene in three-dimensional geographic information space
CN105357511A (en) * 2015-12-08 2016-02-24 上海图漾信息科技有限公司 Depth data detection system
CN105916022A (en) * 2015-12-28 2016-08-31 乐视致新电子科技(天津)有限公司 Video image processing method and apparatus based on virtual reality technology
CN106127743A * 2016-06-17 2016-11-16 武汉大势智慧科技有限公司 Method and system for automatically reconstructing accurate relative positioning between two-dimensional images and three-dimensional models
CN106204656A * 2016-07-21 2016-12-07 中国科学院遥感与数字地球研究所 Target positioning and tracking system and method based on video and three-dimensional spatial information
CN106406508A (en) * 2015-07-31 2017-02-15 联想(北京)有限公司 Information processing method and relay equipment
CN107251100A * 2015-02-27 2017-10-13 微软技术许可有限责任公司 Molding and anchoring physically constrained virtual environments to real-world environments
CN107507263A * 2017-07-14 2017-12-22 西安电子科技大学 Image-based texture generation method and system
CN108196679A * 2018-01-23 2018-06-22 河北中科恒运软件科技股份有限公司 Gesture capturing and texture fusion method and system based on video stream
CN108257164A * 2017-12-07 2018-07-06 中国航空工业集团公司西安航空计算技术研究所 An embedded software architecture for virtual-real scene matching and fusion
CN108536281A * 2018-02-09 2018-09-14 腾讯科技(深圳)有限公司 Weather reproduction method and apparatus in a virtual scene, storage medium, and electronic apparatus
CN108600771A (en) * 2018-05-15 2018-09-28 东北农业大学 Recorded broadcast workstation system and operating method
CN109003250A * 2017-12-20 2018-12-14 罗普特(厦门)科技集团有限公司 A method for fusing images with three-dimensional models
CN109034031A * 2018-07-17 2018-12-18 江苏实景信息科技有限公司 Surveillance video processing method, processing device, and electronic equipment
CN109087402A * 2018-07-26 2018-12-25 上海莉莉丝科技股份有限公司 Method, system, device, and medium for overlaying a specific surface morphology on a specific surface of a 3D scene
CN110555822A (en) * 2019-09-05 2019-12-10 北京大视景科技有限公司 color consistency adjusting method for real-time video fusion
CN110753265A (en) * 2019-10-28 2020-02-04 北京奇艺世纪科技有限公司 Data processing method and device and electronic equipment
CN111061421A (en) * 2019-12-19 2020-04-24 北京澜景科技有限公司 Picture projection method and device and computer storage medium
CN111064946A (en) * 2019-12-04 2020-04-24 广东康云科技有限公司 Video fusion method, system, device and storage medium based on indoor scene
CN111145362A (en) * 2020-01-02 2020-05-12 中国航空工业集团公司西安航空计算技术研究所 Virtual-real fusion display method and system for airborne comprehensive vision system
CN111445535A (en) * 2020-04-16 2020-07-24 浙江科澜信息技术有限公司 Camera calibration method, device and equipment
CN111540022A (en) * 2020-05-14 2020-08-14 深圳市艾为智能有限公司 Image uniformization method based on virtual camera
CN111582022A (en) * 2020-03-26 2020-08-25 深圳大学 Fusion method and system of mobile video and geographic scene and electronic equipment
CN111737518A (en) * 2020-06-16 2020-10-02 浙江大华技术股份有限公司 Image display method and device based on three-dimensional scene model and electronic equipment
CN112053446A (en) * 2020-07-11 2020-12-08 南京国图信息产业有限公司 Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
CN112052751A (en) * 2020-08-21 2020-12-08 上海核工程研究设计院有限公司 Containment water film coverage rate detection method
CN112437276A (en) * 2020-11-20 2021-03-02 埃洛克航空科技(北京)有限公司 WebGL-based three-dimensional video fusion method and system
CN112584120A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion method
CN112584060A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion system
CN112637582A (en) * 2020-12-09 2021-04-09 吉林大学 Three-dimensional fuzzy surface synthesis method for monocular video virtual view driven by fuzzy edge
CN114143528A (en) * 2020-09-04 2022-03-04 北京大视景科技有限公司 Multi-video stream fusion method, electronic device and storage medium
CN114220312A (en) * 2022-01-21 2022-03-22 北京京东方显示技术有限公司 Virtual training method, device and system
CN114363600A (en) * 2022-03-15 2022-04-15 视田科技(天津)有限公司 Remote rapid 3D projection method and system based on structured light scanning
CN115314690A (en) * 2022-08-09 2022-11-08 北京淳中科技股份有限公司 Image fusion band processing method and device, electronic equipment and storage medium
TWI790732B (en) * 2021-08-31 2023-01-21 宏碁股份有限公司 Image correction method and image correction device
CN115866218A (en) * 2022-11-03 2023-03-28 重庆化工职业学院 Scene image fused vehicle-mounted AR-HUD brightness self-adaptive adjusting method
CN116012564A (en) * 2023-01-17 2023-04-25 宁波艾腾湃智能科技有限公司 Equipment and method for intelligent fusion of three-dimensional model and live-action photo
CN117041511A (en) * 2023-09-28 2023-11-10 青岛欧亚丰科技发展有限公司 Video image processing method for visual interaction enhancement of exhibition hall
WO2023231793A1 (en) * 2022-05-31 2023-12-07 京东方科技集团股份有限公司 Method for virtualizing physical scene, and electronic device, computer-readable storage medium and computer program product
CN117459663A (en) * 2023-12-22 2024-01-26 北京天图万境科技有限公司 Projection light self-correction fitting and multicolor repositioning method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102142153A (en) * 2010-01-28 2011-08-03 香港科技大学 Image-based remodeling method of three-dimensional model
CN102598651A (en) * 2009-11-02 2012-07-18 索尼计算机娱乐公司 Video processing program, device and method, and imaging device mounted with video processing device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102598651A (en) * 2009-11-02 2012-07-18 索尼计算机娱乐公司 Video processing program, device and method, and imaging device mounted with video processing device
CN102142153A (en) * 2010-01-28 2011-08-03 香港科技大学 Image-based remodeling method of three-dimensional model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Christian Früh and Avideh Zakhor, "Constructing 3D City Models by Merging Aerial and Ground Views," IEEE Computer Graphics and Applications, vol. 23, no. 6, 31 December 2003 *
Jianxiong Xiao et al., "Image-based Street-side City Modeling," ACM Transactions on Graphics, vol. 28, no. 5, 31 December 2009, XP055247689, DOI: 10.1145/1618452.1618460 *
Duan Xiaojuan et al., "Research on Texture Mapping Technology in Virtual Real-Scene Space," Computer Engineering, vol. 27, no. 5, 31 May 2001 *
Wang Bangsong et al., "Research on Color Consistency Processing Algorithms for Aerial Images," Remote Sensing Information, 31 December 2011 *

Cited By (69)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104348892A (en) * 2013-08-09 2015-02-11 阿里巴巴集团控股有限公司 Information display method and device
CN103533318A (en) * 2013-10-21 2014-01-22 北京理工大学 Building outer surface projection method
CN103716586A (en) * 2013-12-12 2014-04-09 中国科学院深圳先进技术研究院 Monitoring video fusion system and monitoring video fusion method based on three-dimension space scene
CN104320616A (en) * 2014-10-21 2015-01-28 广东惠利普路桥信息工程有限公司 Video monitoring system based on three-dimensional scene modeling
CN107251100A * 2015-02-27 2017-10-13 微软技术许可有限责任公司 Molding and anchoring physically constrained virtual environments to real-world environments
CN104715479A (en) * 2015-03-06 2015-06-17 上海交通大学 Scene reproduction detection method based on augmented virtuality
CN105023294A (en) * 2015-07-13 2015-11-04 中国传媒大学 Fixed point movement augmented reality method combining sensors and Unity3D
CN105023294B * 2015-07-13 2018-01-19 中国传媒大学 Fixed-point mobile augmented reality method combining sensors and Unity3D
CN106406508A (en) * 2015-07-31 2017-02-15 联想(北京)有限公司 Information processing method and relay equipment
CN105118061A (en) * 2015-08-19 2015-12-02 刘朔 Method used for registering video stream into scene in three-dimensional geographic information space
CN105357511A (en) * 2015-12-08 2016-02-24 上海图漾信息科技有限公司 Depth data detection system
CN105357511B * 2015-12-08 2018-05-15 上海图漾信息科技有限公司 Depth data detection system
CN105916022A (en) * 2015-12-28 2016-08-31 乐视致新电子科技(天津)有限公司 Video image processing method and apparatus based on virtual reality technology
CN106127743A (en) * 2016-06-17 2016-11-16 武汉大势智慧科技有限公司 Automatic Reconstruction bidimensional image and the method and system of threedimensional model accurate relative location
CN106127743B (en) * 2016-06-17 2018-07-20 武汉大势智慧科技有限公司 The method and system of automatic Reconstruction bidimensional image and threedimensional model accurate relative location
CN106204656A (en) * 2016-07-21 2016-12-07 中国科学院遥感与数字地球研究所 Target based on video and three-dimensional spatial information location and tracking system and method
CN107507263A (en) * 2017-07-14 2017-12-22 西安电子科技大学 Texture generation method and system based on image
CN107507263B (en) * 2017-07-14 2020-11-24 西安电子科技大学 Texture generation method and system based on image
CN108257164A (en) * 2017-12-07 2018-07-06 中国航空工业集团公司西安航空计算技术研究所 Embedded software architecture for virtual-real scene matching and fusion
CN109003250A (en) * 2017-12-20 2018-12-14 罗普特(厦门)科技集团有限公司 Fusion method of image and three-dimensional model
CN109003250B (en) * 2017-12-20 2023-05-30 罗普特科技集团股份有限公司 Fusion method of image and three-dimensional model
CN108196679A (en) * 2018-01-23 2018-06-22 河北中科恒运软件科技股份有限公司 Gesture capturing and texture fusion method and system based on video stream
CN108196679B (en) * 2018-01-23 2021-10-08 河北中科恒运软件科技股份有限公司 Gesture capturing and texture fusion method and system based on video stream
CN108536281B (en) * 2018-02-09 2021-05-14 腾讯科技(深圳)有限公司 Weather reproduction method and apparatus in virtual scene, storage medium, and electronic apparatus
CN108536281A (en) * 2018-02-09 2018-09-14 腾讯科技(深圳)有限公司 Weather reproduction method and apparatus in virtual scene, storage medium, and electronic apparatus
CN108600771A (en) * 2018-05-15 2018-09-28 东北农业大学 Recorded broadcast workstation system and operating method
CN109034031A (en) * 2018-07-17 2018-12-18 江苏实景信息科技有限公司 Processing method, processing device and electronic equipment for surveillance video
CN109087402B (en) * 2018-07-26 2021-02-12 上海莉莉丝科技股份有限公司 Method, system, device and medium for overlaying a specific surface morphology on a specific surface of a 3D scene
CN109087402A (en) * 2018-07-26 2018-12-25 上海莉莉丝科技股份有限公司 Method, system, equipment and the medium of particular surface form are covered in the particular surface of 3D scene
CN110555822B (en) * 2019-09-05 2023-08-29 北京大视景科技有限公司 Color consistency adjustment method for real-time video fusion
CN110555822A (en) * 2019-09-05 2019-12-10 北京大视景科技有限公司 color consistency adjusting method for real-time video fusion
CN110753265A (en) * 2019-10-28 2020-02-04 北京奇艺世纪科技有限公司 Data processing method and device and electronic equipment
CN111064946A (en) * 2019-12-04 2020-04-24 广东康云科技有限公司 Video fusion method, system, device and storage medium based on indoor scene
CN111061421A (en) * 2019-12-19 2020-04-24 北京澜景科技有限公司 Picture projection method and device and computer storage medium
CN111061421B (en) * 2019-12-19 2021-07-20 北京澜景科技有限公司 Picture projection method and device and computer storage medium
CN111145362A (en) * 2020-01-02 2020-05-12 中国航空工业集团公司西安航空计算技术研究所 Virtual-real fusion display method and system for airborne comprehensive vision system
CN111145362B (en) * 2020-01-02 2023-05-09 中国航空工业集团公司西安航空计算技术研究所 Virtual-real fusion display method and system for airborne comprehensive vision system
CN111582022B (en) * 2020-03-26 2023-08-29 深圳大学 Fusion method and system of mobile video and geographic scene and electronic equipment
CN111582022A (en) * 2020-03-26 2020-08-25 深圳大学 Fusion method and system of mobile video and geographic scene and electronic equipment
CN111445535A (en) * 2020-04-16 2020-07-24 浙江科澜信息技术有限公司 Camera calibration method, device and equipment
CN111540022B (en) * 2020-05-14 2024-04-19 深圳市艾为智能有限公司 Image uniformization method based on virtual camera
CN111540022A (en) * 2020-05-14 2020-08-14 深圳市艾为智能有限公司 Image uniformization method based on virtual camera
CN111737518A (en) * 2020-06-16 2020-10-02 浙江大华技术股份有限公司 Image display method and device based on three-dimensional scene model and electronic equipment
CN112053446A (en) * 2020-07-11 2020-12-08 南京国图信息产业有限公司 Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
CN112053446B (en) * 2020-07-11 2024-02-02 南京国图信息产业有限公司 Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
CN112052751A (en) * 2020-08-21 2020-12-08 上海核工程研究设计院有限公司 Containment water film coverage rate detection method
CN114143528A (en) * 2020-09-04 2022-03-04 北京大视景科技有限公司 Multi-video stream fusion method, electronic device and storage medium
CN112437276A (en) * 2020-11-20 2021-03-02 埃洛克航空科技(北京)有限公司 WebGL-based three-dimensional video fusion method and system
CN112437276B (en) * 2020-11-20 2023-04-07 埃洛克航空科技(北京)有限公司 WebGL-based three-dimensional video fusion method and system
CN112637582A (en) * 2020-12-09 2021-04-09 吉林大学 Three-dimensional fuzzy surface synthesis method for monocular video virtual view driven by fuzzy edge
CN112637582B (en) * 2020-12-09 2021-10-08 吉林大学 Three-dimensional fuzzy surface synthesis method for monocular video virtual view driven by fuzzy edge
CN112584120A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion method
CN112584060A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion system
TWI790732B (en) * 2021-08-31 2023-01-21 宏碁股份有限公司 Image correction method and image correction device
CN114220312B (en) * 2022-01-21 2024-05-07 北京京东方显示技术有限公司 Virtual training method, device and system
CN114220312A (en) * 2022-01-21 2022-03-22 北京京东方显示技术有限公司 Virtual training method, device and system
CN114363600B (en) * 2022-03-15 2022-06-21 视田科技(天津)有限公司 Remote rapid 3D projection method and system based on structured light scanning
CN114363600A (en) * 2022-03-15 2022-04-15 视田科技(天津)有限公司 Remote rapid 3D projection method and system based on structured light scanning
WO2023231793A1 (en) * 2022-05-31 2023-12-07 京东方科技集团股份有限公司 Method for virtualizing physical scene, and electronic device, computer-readable storage medium and computer program product
CN115314690A (en) * 2022-08-09 2022-11-08 北京淳中科技股份有限公司 Image fusion band processing method and device, electronic equipment and storage medium
CN115314690B (en) * 2022-08-09 2023-09-26 北京淳中科技股份有限公司 Image fusion band processing method and device, electronic equipment and storage medium
CN115866218B (en) * 2022-11-03 2024-04-16 重庆化工职业学院 Vehicle-mounted AR-HUD brightness adaptive adjustment method based on scene image fusion
CN115866218A (en) * 2022-11-03 2023-03-28 重庆化工职业学院 Vehicle-mounted AR-HUD brightness adaptive adjustment method based on scene image fusion
CN116012564B (en) * 2023-01-17 2023-10-20 宁波艾腾湃智能科技有限公司 Equipment and method for intelligent fusion of three-dimensional model and live-action photo
CN116012564A (en) * 2023-01-17 2023-04-25 宁波艾腾湃智能科技有限公司 Equipment and method for intelligent fusion of three-dimensional model and live-action photo
CN117041511A (en) * 2023-09-28 2023-11-10 青岛欧亚丰科技发展有限公司 Video image processing method for visual interaction enhancement of exhibition hall
CN117041511B (en) * 2023-09-28 2024-01-02 青岛欧亚丰科技发展有限公司 Video image processing method for visual interaction enhancement of exhibition hall
CN117459663B (en) * 2023-12-22 2024-02-27 北京天图万境科技有限公司 Projection light self-correction fitting and multicolor repositioning method and device
CN117459663A (en) * 2023-12-22 2024-01-26 北京天图万境科技有限公司 Projection light self-correction fitting and multicolor repositioning method and device

Also Published As

Publication number Publication date
CN103226830B (en) 2016-02-10

Similar Documents

Publication Publication Date Title
CN103226830B (en) Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
US20200404247A1 (en) System for and method of social interaction using user-selectable novel views
CN100594519C (en) Method for real-time generation of an augmented reality environment using a spherical panoramic camera
EP3170151B1 (en) Blending between street view and earth view
KR101319805B1 (en) Photographing big things
CN104599243B (en) Virtual-real fusion method of multiple video streams and a three-dimensional scene
CN104183016B (en) Construction method for fast 2.5-dimensional building models
CN110717494B (en) Android mobile terminal indoor scene three-dimensional reconstruction and semantic segmentation method
WO2020192355A1 (en) Method and system for measuring urban mountain viewing visible range
CN108876926A (en) Navigation method and system in a panoramic scene, and AR/VR client device
CN106296783A (en) Space representation method combining a global 3D view of the space with panoramic pictures
EP3533218B1 (en) Simulating depth of field
Bradley et al. Image-based navigation in real environments using panoramas
Jian et al. Augmented virtual environment: fusion of real-time video and 3D models in the digital earth system
CN113379901A (en) Method and system for establishing live-action 3D models of houses using public self-shot panoramic data
CN110908510A (en) Application method of oblique photography modeling data in immersive display equipment
CN108769648A (en) 3D scene rendering method based on 720-degree panoramic VR
CN101334900B (en) Image-based rendering method
CN108564654B (en) Picture entry mode for large three-dimensional scenes
Alshawabkeh et al. Automatic multi-image photo texturing of complex 3D scenes
CN110035275B (en) Urban panoramic dynamic display system and method based on large-screen fusion projection
Ruzínoor et al. 3D terrain visualisation for GIS: A comparison of different techniques
Tan et al. Large scale texture mapping of building facades
Neumann et al. Visualizing reality in an augmented virtual environment
US20170228926A1 (en) Determining Two-Dimensional Images Using Three-Dimensional Models

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200724

Address after: 830-3, 8/F, No. 8, Sijiqing Road, Haidian District, Beijing 100195

Patentee after: Beijing weishiwei Information Technology Co.,Ltd.

Address before: Peking University, No. 5 Summer Palace Road, Haidian District, Beijing 100871

Patentee before: Peking University