A virtual-real fusion method for multiple video streams and a three-dimensional scene
Technical field
The present invention relates to a virtual-real fusion method for multiple video streams and a three-dimensional scene, and more specifically to fusing the content of multiple video images with the virtual objects in a three-dimensional scene. The invention belongs to the technical field of virtual reality.
Background art
Virtual environment modeling is widely used in applications such as simulation, scenic-spot exhibition, and three-dimensional maps. When the geometric information and appearance of the virtual environment are similar to, or accurately correspond to, the real environment, the overall sense of realism of the virtual environment improves to some extent. However, accurate modeling of a scene requires considerable manpower, and because model textures are static images collected in advance, a virtual environment built in this way cannot reflect dynamic changes in the real environment, such as events and activities. Making the virtual environment reflect the true, dynamic state of the scene has therefore attracted growing attention from researchers.
The augmented virtual environment is a virtual reality technique that emerged to overcome the above problems. An augmented virtual environment establishes a three-dimensional model of the environment in advance; after the cameras or projection devices are calibrated, two-dimensional video collected by devices such as cameras, or three-dimensional surface information of objects, is registered into the virtual environment in real time. The video-image-based augmented virtual environment, the variant considered here, uses video images to augment the virtual environment. Video images are convenient to collect and obtain, and they record how the environment changes over time; they can therefore be used to create a dynamic virtual environment that changes with the video, while the global spatial information contained in the virtual environment conversely improves the understanding of the video content. Before the present invention, Jinhui Hu of the University of Southern California proposed a texture-mapping-based method of augmenting a virtual environment with video images [Jinhui Hu. Integrating complementary information for photorealistic representation [D]. Los Angeles: University of Southern California, 2009]. He warps the original video image into an image aligned with the building surface direction, registers the warped image into a texture cache by feature matching, and finally updates the texture of the model surface from the content of the texture cache. This method can only use one video stream per update, its rendering efficiency is low, and the video image exhibits aliasing or blurring after warping and registration. The present invention proposes a method of augmenting the virtual environment directly with the raw video images: a position-dependent real-time multi-camera scheduling method selects the cameras whose video images are fused with the three-dimensional scene, thereby achieving the fusion of multiple video streams with the virtual scene.
Summary of the invention
The object of the present invention is to solve the problem that a static virtual scene cannot reflect dynamic changes in the environment. To this end, a virtual-real fusion method for multiple video streams and a three-dimensional scene is proposed, which schedules multiple cameras in real time and fuses their video images with the three-dimensional scene for rendering.
To achieve the above object, the method for virtual-real fusion of multiple video streams and a three-dimensional scene proposed by the present invention comprises the following steps:
(1) Collect video images of the environment with one or more cameras, and track the parameter information of each camera during collection through real-time or offline processing. The parameter information includes, but is not limited to, camera position, orientation, focal length, and timestamp.
(2) Compute each camera's view frustum in three-dimensional space from its shooting parameters; the view frustum is the region of virtual space corresponding to the geographic range captured by the camera. Then determine the set of cameras visible from the user's viewpoint, and schedule the video images of an appropriate number of visible cameras for virtual-real fusion.
(3) For each visible camera, compute the association between its video image and the virtual objects in the three-dimensional scene, and fuse the video image with the virtual objects by video projection according to this association.
(4) Visualize the fusion result in the virtual environment, providing the user with interactive walkthroughs of the virtual scene and automatic patrol of specified cameras.
In the above, video images of the environment are collected by one or more cameras, and the camera parameters during collection are tracked either by recording them in real time with sensing devices during capture or by computing them through offline image analysis.
In the above, the view frustum of each camera is computed in three-dimensional space from its shooting parameters; the view frustum represents the shooting area of the camera. The intersection of the camera view frustum with the user's viewing area is then computed from the user's current viewpoint position and direction: a camera is considered visible if its frustum lies inside or intersects the viewing area and the angle between its optical axis and the view direction does not exceed a certain threshold. Camera scheduling is managed with lists such as a visibility list, a to-be-added list, a to-be-retired list, a candidate list, and an in-use list; an appropriate number of visible cameras are loaded into the in-use list, and their video images are fused with the three-dimensional scene.
In the above, the association between the video image and virtual objects such as points, lines, surfaces, and volumes in the three-dimensional scene is computed from the camera's shooting parameters; the content of the video image is projected onto the virtual objects, so that changes in the video display the dynamics of the scene.
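By way of illustration, the following Python sketch (a minimal example assuming a simple pinhole camera model; all function and variable names are illustrative, not part of the method's specification) projects a three-dimensional scene point into a camera image to obtain the texture coordinate that associates a video pixel with a virtual object:

```python
import numpy as np

def project_to_texture(point_world, K, R, t, image_size):
    """Project a 3D scene point into a camera image and return its
    normalized texture coordinate, or None if it falls outside the image.
    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation/translation."""
    p_cam = R @ point_world + t          # world -> camera coordinates
    if p_cam[2] <= 0:                    # point is behind the camera
        return None
    p_img = K @ p_cam
    u, v = p_img[0] / p_img[2], p_img[1] / p_img[2]
    w, h = image_size
    if 0 <= u < w and 0 <= v < h:
        return (u / w, v / h)            # normalized texture coordinate
    return None

# Example: a camera at the origin looking down +Z
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
print(project_to_texture(np.array([0.1, 0.05, 2.0]), K, R, t, (640, 480)))
```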
In the above, the fusion result is visualized in the virtual environment. According to the user's needs, multiple views can be shown simultaneously, such as the original three-dimensional scene, the raw video images, the fusion effect under the user's viewpoint, and the fusion effect under a camera's shooting viewpoint. The user can walk through the virtual scene interactively, and a browse path between the target cameras can be planned on demand so that the cameras are patrolled automatically.
Compared with the prior art, the beneficial effects of the present invention are:
(1) Multiple visible cameras in the scene are scheduled automatically according to the viewpoint, so that several video streams are fused with the three-dimensional scene simultaneously;
(2) The fusion computation is updated dynamically: after the parameters of a camera change, a single recomputation from the new parameters suffices to update the existing fusion result;
(3) The method has good extensibility: it adapts to scenes of different scales, fuses video with various objects in the scene, and takes the occlusion relationships between objects into account;
(4) Multiple dispersed video streams are fused into one unified virtual environment, which strengthens the spatio-temporal correlation between the videos and helps the user understand the static and dynamic content of the multiple videos;
(5) The original video captured by the camera is used directly as the model texture, so compared with the original image the fusion result loses little of the real information carried in the video, enhancing the realism of the scene and the user experience.
Brief description of the drawings
Fig. 1 is the flow chart of the virtual-real fusion method for multiple video streams and a three-dimensional scene of the present invention;
Fig. 2 is a schematic diagram of an environment image collected by a camera;
Fig. 3 is a schematic diagram of a camera's view frustum;
Fig. 4 is a schematic diagram of the camera distribution and the user's viewpoint;
Fig. 5 is a schematic diagram of the virtual environment;
Fig. 6 is a schematic diagram of the fusion effect of video images and the virtual environment.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and examples. The flow of the proposed virtual-real fusion method for multiple video streams and a three-dimensional scene is shown in Fig. 1; its steps are as follows:
Step 1. Collect video images of the environment with one or more cameras, as shown in Fig. 2. Track the parameter information of each camera during collection; tracking can be realized by sensor-based recording, offline image analysis, and similar means. The tracked camera parameters must be transformed into the coordinate system of the virtual environment before the fusion process can be completed, as sketched below.
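By way of illustration, the following Python sketch shows this coordinate transformation, assuming the tracked pose is given in a sensor coordinate system related to the virtual environment by a known rigid transform R_sv, t_sv (all names are illustrative):

```python
import numpy as np

def to_virtual_frame(cam_pos, cam_dir, R_sv, t_sv):
    """Convert a tracked camera position and optical-axis direction from the
    sensor coordinate system to the virtual-environment coordinate system.
    R_sv, t_sv: rotation and translation taking sensor coords to virtual coords."""
    pos_v = R_sv @ cam_pos + t_sv   # positions transform affinely
    dir_v = R_sv @ cam_dir          # directions only rotate
    return pos_v, dir_v / np.linalg.norm(dir_v)

# Example: sensor frame rotated 90 degrees about the vertical axis
R_sv = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
pos, axis = to_virtual_frame(np.array([2.0, 0.0, 1.5]),
                             np.array([1.0, 0.0, 0.0]),
                             R_sv, np.array([10.0, 5.0, 0.0]))
print(pos, axis)
```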
Step 2. Build the view frustum of each camera in three-dimensional space from its parameter information, as shown in Fig. 3. In Fig. 3, the hexahedron ABCD-EFGH represents the view frustum of a camera; it is enclosed by six faces, and the ray from the center O of face ABCD to the center O1 of face EFGH is the optical axis direction of the camera. The user's viewing area is then built in the virtual environment from the user's current viewpoint position and direction. In Fig. 4, the user is at position O and AOB represents the user's viewing area. For the cameras in the scene:
(1) If the distance between a camera and the viewpoint exceeds a certain distance, the camera is considered invisible; for example, cameras C0 and C4 in Fig. 4 are farther than distance d from O and are therefore invisible. If a camera is within that distance of the viewpoint but its frustum does not intersect the user's viewing area, or the angle between its optical axis and the view direction exceeds a certain angle, the camera is likewise considered invisible: in Fig. 4, C1 and C2 are not in the user's viewing area, and C3 intersects the viewing area but the angle between its optical axis and the view direction is too large, so C1, C2, and C3 are invisible;
(2) If a camera is within the distance threshold of the viewpoint and satisfies the following conditions: its frustum intersects or lies inside the user's viewing area, and the angle between its optical axis and the view direction does not exceed the threshold, then the camera is considered visible. For example, in Fig. 4, camera C5 is inside the user's viewing area, C6 intersects it, and the optical axes of C5 and C6 are within the angular limit of the view direction, so C5 and C6 are visible (a sketch of this test follows the list).
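By way of illustration, the following Python sketch implements visibility rules (1) and (2); the frustum/viewing-area intersection is assumed to be computed elsewhere and passed in as a boolean, and all names are illustrative:

```python
import numpy as np

def camera_visible(cam_pos, cam_axis, eye_pos, view_dir,
                   frustum_intersects_view, max_dist, max_angle_deg):
    """Visibility test following rules (1)-(2): the camera must lie within
    max_dist of the viewpoint, its frustum must intersect or lie inside the
    viewing area, and its optical axis must not deviate from the view
    direction by more than max_angle_deg."""
    if np.linalg.norm(cam_pos - eye_pos) > max_dist:
        return False                     # rule (1): too far away
    if not frustum_intersects_view:
        return False                     # rule (1): outside the viewing area
    cos_angle = np.dot(cam_axis, view_dir) / (
        np.linalg.norm(cam_axis) * np.linalg.norm(view_dir))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg        # rule (2): axis within angular limit

# A camera facing against the view direction fails the angle test
print(camera_visible(np.array([3.0, 0, 0]), np.array([-1.0, 0, 0]),
                     np.zeros(3), np.array([1.0, 0, 0]),
                     frustum_intersects_view=True,
                     max_dist=50.0, max_angle_deg=60.0))
```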
After the visibility of the cameras has been computed, the in-use list U of visible cameras is updated by the following procedure:
(2.1) Save the camera visibility results of the previous computation, empty the to-be-retired list Q and the to-be-added list J, and set the flag of every camera in the visibility list V to -1. A camera flag of 1 means the camera is visible, 0 means invisible, and -1 means unknown. Go to step (2.2);
(2.2) For each camera in list V, if its position is farther than a certain distance from the current viewpoint, consider the camera invisible and set its flag to 0; otherwise go to step (2.3);
(2.3) If the camera is within the distance limit of the current viewpoint: compute the bounding box of the camera's view frustum and the user's viewing area at the current viewpoint; if the bounding box does not intersect the viewing area, consider the camera invisible and set its flag to 0. Compute the angle between the camera's optical axis and the view direction; if the angle exceeds the threshold, set the flag to 0. Otherwise go to step (2.4);
(2.4) If the bounding box of the camera's view frustum intersects or lies inside the viewing area, and the angle between the camera's optical axis and the view direction does not exceed the threshold, consider the camera visible and set its flag to 1. Compare the visibility list of this computation with that of the previous computation: a camera that is visible now but was invisible last time is sent to the to-be-added list J, and a camera that is invisible now but was visible last time is sent to the to-be-retired list Q. Go to step (2.5);
(2.5) Move the cameras in the to-be-added list J into the candidate list C. According to the user's needs, select an appropriate number of cameras from C, move them into the in-use list U, and request their video images. Remove from the candidate list C and the in-use list U any camera that also appears in the to-be-retired list Q, and release the associated resources. This update procedure is sketched below.
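By way of illustration, the following Python sketch performs one pass of the list update in steps (2.1) to (2.5), reduced to set operations on camera identifiers; video-stream requests and resource release are reduced to comments, and all names are illustrative:

```python
def update_camera_lists(V, visible_now, visible_before, U, C, max_active):
    """One scheduling pass over the visibility list V.
    visible_now / visible_before: sets of camera ids visible this/last pass.
    U: in-use list, C: candidate list, max_active: capacity of U.
    Returns the updated (U, C) plus the to-be-added list J and the
    to-be-retired list Q."""
    J = [c for c in V if c in visible_now and c not in visible_before]
    Q = [c for c in V if c not in visible_now and c in visible_before]
    C = [c for c in C + J if c not in Q]   # move J into candidates, drop retired
    U = [c for c in U if c not in Q]       # retire cameras in Q, release resources
    while C and len(U) < max_active:       # load up to max_active cameras
        cam = C.pop(0)
        U.append(cam)                      # ...request this camera's video here
    return U, C, J, Q

U, C, J, Q = update_camera_lists(
    V=["C0", "C1", "C2", "C5", "C6"],
    visible_now={"C5", "C6"}, visible_before={"C1", "C5"},
    U=["C1", "C5"], C=[], max_active=2)
print(U, C, J, Q)   # ['C5', 'C6'] [] ['C6'] ['C1']
```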
Step 3. For each camera in the in-use list U, analyze the association R between the content of its video image and virtual objects such as points, lines, surfaces, and volumes in the virtual environment; the virtual environment is illustrated in Fig. 5. Render the depth map D of the scene visible under the camera's viewpoint, and convert each pixel $C_p$ of the video image into the texture color $C_t$ of the corresponding scene object according to R and D. The computing formula is:

$$C_t = f(R, C_p, I_t), \quad \text{if } d_t = D_t$$

where $I_t$ is the texture coordinate corresponding to the object, $d_t$ is the depth value corresponding to the object, $D_t$ is the depth value stored in the depth map D, and $f$ is the function that computes the texture color from the association R, the image pixel $C_p$, and the texture coordinate $I_t$; the condition $d_t = D_t$ restricts the projection to parts of the object that are actually visible to the camera. When multiple images are fused onto the same object, the contribution of each image's pixels to the object is computed and used as a weight in the calculation of the object's texture. The computing formula is:

$$C_t = \frac{\sum_i \omega_i \, C_{ti}}{\sum_i \omega_i}$$

where $i$ is the index of an image, $C_{ti}$ is the texture color of the object computed from image $i$, and $\omega_i$ is the contribution of image $i$ to the object. Through the above computation, the fusion of the multiple video streams with each object in the three-dimensional scene is achieved.
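By way of illustration, the following Python sketch computes the fused texture color of a single texel according to the two formulas above, including the occlusion test against the depth map; the exact depth comparison is replaced by a small tolerance, and all names are illustrative:

```python
import numpy as np

def fuse_texel(samples, eps=1e-3):
    """Weighted fusion of one texel from several camera images.
    `samples` is a list of (color, weight, d_t, D_t) tuples: color is the
    value f(R, C_p, I_t) sampled from image i, weight is its contribution
    omega_i, d_t is the object's depth under that camera, and D_t is the
    depth-map value used for the occlusion test. Returns the fused color,
    or None if the texel is occluded in every camera."""
    total, weight_sum = np.zeros(3), 0.0
    for color, w, d_t, D_t in samples:
        if abs(d_t - D_t) < eps:         # object visible in this camera
            total += w * np.asarray(color, dtype=float)
            weight_sum += w
    return total / weight_sum if weight_sum > 0 else None

# Example: two cameras see the texel; a third is occluded and ignored
print(fuse_texel([((200, 80, 40), 0.7, 1.50, 1.50),
                  ((180, 90, 50), 0.3, 1.20, 1.20),
                  ((255, 0, 0), 0.9, 2.00, 1.10)]))
```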
Step 4. Visualize the fusion result of the video images and the three-dimensional scene in the virtual environment; the visualization result is shown in Fig. 6. According to the user's needs, the three-dimensional scene before fusion, the video images before fusion, the fusion result under the user's viewpoint, and the fusion result under a camera viewpoint can be displayed simultaneously. The user can walk through the virtual environment interactively. Some or all of the cameras in the scene can be selected as targets of automatic patrol according to the user's needs; a patrol path between the cameras is planned automatically, and the target cameras are patrolled along this path, as sketched below.
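By way of illustration, the following Python sketch plans a patrol path by greedy nearest-neighbour ordering; the method does not prescribe a particular planning algorithm, so this choice and all names are illustrative:

```python
import numpy as np

def plan_patrol_path(camera_positions, start):
    """Order the selected patrol targets by repeatedly visiting the nearest
    unvisited camera, starting from `start`. Returns camera indices in
    patrol order. A simple illustrative planner, not an optimal tour."""
    remaining = list(range(len(camera_positions)))
    path, current = [], np.asarray(start, dtype=float)
    while remaining:
        nxt = min(remaining, key=lambda i: np.linalg.norm(
            np.asarray(camera_positions[i]) - current))
        remaining.remove(nxt)
        path.append(nxt)
        current = np.asarray(camera_positions[nxt], dtype=float)
    return path

print(plan_patrol_path([(0, 0), (5, 1), (2, 2), (9, 9)], start=(1, 0)))
```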
The parts of the present invention that are not described in detail belong to techniques well known to those skilled in the art.
The above is only a preferred embodiment of the present invention. It should be pointed out that those skilled in the art can make several improvements and modifications without departing from the principles of the invention, and such improvements and modifications should also be regarded as falling within the scope of protection of the present invention.