CN104599243A - Virtual-real fusion method for multiple video streams and a three-dimensional scene - Google Patents

Virtual-real fusion method for multiple video streams and a three-dimensional scene

Info

Publication number
CN104599243A
Authority
CN
China
Prior art keywords
camera
video image
virtual
user
list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410769001.2A
Other languages
Chinese (zh)
Other versions
CN104599243B (en)
Inventor
周忠
刘培富
周颐
吴威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing large landscape Technology Co. Ltd.
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201410769001.2A
Publication of CN104599243A
Application granted
Publication of CN104599243B
Legal status: Active
Anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual-real fusion method for multiple video streams and a three-dimensional scene, belonging to the technical field of virtual reality. The method comprises: collecting video images of the environment with cameras and tracking the acquisition parameters of each camera; computing, from the acquisition parameters, the viewing frustum of each camera in three-dimensional space, determining which cameras are visible from the user's current viewpoint on that basis, and scheduling the video image of each visible camera; computing the association between the video image of each visible camera and the virtual objects in the three-dimensional scene and fusing the video images with the virtual objects according to this association; and visualizing the fusion result in the virtual environment, providing the user with interactive roaming and an automatic patrol service. The method schedules the multiple cameras in the scene in real time and fuses their video images with the three-dimensional scene, so that the virtual environment reflects the real dynamics of the environment.

Description

A virtual-real fusion method for multiple video streams and a three-dimensional scene
Technical field
The present invention relates to a method for fusing multiple video streams with a three-dimensional scene, and more specifically to fusing the content of multiple video images with the virtual objects in a three-dimensional scene. It belongs to the technical field of virtual reality.
Background technology
Virtual-environment modeling is widely used in applications such as simulation, scenic-spot presentation and three-dimensional maps. When the geometric information and appearance of the virtual environment are similar or accurate with respect to the real environment, the sense of realism of the virtual environment as a whole can be improved to some extent. However, modeling a scene accurately consumes considerable manpower, and because the model textures are static images collected in advance, a virtual environment built this way cannot reflect the dynamic changes of events and activities in the real environment. Reflecting the real dynamics of the scene in the virtual environment has therefore attracted increasing attention from researchers.
The augmented virtual environment is a virtual reality technique that emerged to overcome the above problems. It builds a three-dimensional model of the environment in advance and, after calibrating the cameras or projection devices, registers the two-dimensional video collected by the cameras, or the three-dimensional surface information of objects, into the virtual environment in real time. The video-image-based augmented virtual environment is one variant of this technique, in which video images are used to augment the virtual environment. Video images are convenient to collect and obtain, and they record how the environment changes over time; a dynamic virtual environment that changes with the video images can therefore be created from them, while the global spatial information contained in the virtual environment in turn promotes the understanding of the video content. Before the present invention, Jinhui Hu of the University of Southern California proposed a texture-mapping-based method of augmenting a virtual environment with video images [Jinhui Hu. Integrating complementary information for photorealistic representation [D]. Los Angeles: University of Southern California, 2009]. In that method the original video image is warped into an image aligned with the building-surface direction, the warped image is registered into a texture cache by feature matching, and the texture of the model surface is finally updated from the content of the texture cache. That method can only use one video stream per update, its rendering efficiency is low, and the video image becomes jagged or blurred after warping and registration. The present invention proposes a method that augments the virtual environment directly with the original video images: it uses a position-dependent real-time multi-camera scheduling method, fuses the video images of the scheduled cameras with the three-dimensional scene, and thereby achieves the fusion of multiple video streams with the virtual scene.
Summary of the invention
The object of the invention is to solve the problem that a static virtual scene cannot reflect the dynamic changes of the environment. To this end, a method for fusing multiple video streams with a three-dimensional scene is proposed: the method schedules multiple cameras in real time and fuses and renders their video images with the three-dimensional scene.
To achieve the above object, the proposed method for fusing multiple video streams with a three-dimensional scene comprises the following steps:
(1) Collect video images of the environment with one or more cameras, and track the parameter information of each camera during collection through real-time or offline processing; the parameter information includes, but is not limited to, the camera position, orientation, focal length and timestamp.
(2) Compute, from the parameters at the time of shooting, the viewing frustum of each camera in three-dimensional space; the frustum is the virtual-space region corresponding to the geographic range captured by the camera. Then determine the set of cameras visible from the user's viewpoint and schedule the video images of a suitable number of visible cameras for fusion.
(3) For each visible camera, compute the association between its video image and the virtual objects in the three-dimensional scene, and fuse the video image with those virtual objects by video projection according to this association.
(4) Visualize the fusion result in the virtual environment, providing the user with interactive roaming through the virtual scene and an automatic patrol service over specified cameras.
Wherein, the video images of the environment are collected with one or more cameras, and the parameters during collection are tracked either by recording the camera parameters in real time with sensing devices during capture, or by computing the camera parameters through offline image analysis.
Wherein, the viewing frustum of each camera in three-dimensional space is computed from the parameters at the time of shooting; the frustum represents the shooting area of the camera. The intersection of each camera frustum with the user's viewing region is then computed from the user's current viewpoint position and direction: a camera is considered visible if its frustum lies inside, or intersects, the viewing region and the angle between its optical axis and the viewing direction does not exceed a given threshold. Lists such as a visibility list, a to-add list, a to-retire list, a candidate list and an in-use list are defined to manage the scheduling of the cameras; a suitable number of visible cameras are loaded into the in-use list and their video images are fused with the three-dimensional scene.
Wherein, the association between the video image and the virtual objects (points, lines, surfaces and volumes) in the three-dimensional scene is computed from the parameters at the time of shooting, the content of the video image is projected onto the virtual objects in the three-dimensional scene, and the dynamics of the scene are displayed through the changes of the video image.
Wherein, the fusion result is visualized in the virtual environment; according to the user's request, several views can be displayed simultaneously, such as the original three-dimensional scene, the original video image, the fusion effect from the user's viewpoint, and the fusion effect from the camera's shooting viewpoint. The user can roam interactively in the virtual scene. A browsing path between the target cameras can be planned according to the user's request so that the cameras are patrolled automatically.
Compared with the prior art, the beneficial effects of the invention are:
(1) Multiple visible cameras in the scene are scheduled automatically according to the viewpoint, so multiple video streams can be fused with the three-dimensional scene simultaneously;
(2) The fusion computation is updated dynamically: after the parameters of a camera change, the existing fusion result can be updated by recomputing once from the new parameters;
(3) The method has good scalability: it adapts to scenes of different extents, can fuse the video with various objects in the scene, and takes the occlusion relations between objects into account;
(4) Videos scattered over multiple locations are fused into a unified virtual environment, which strengthens the spatio-temporal correlation between the videos and helps the user understand the static and dynamic content of multiple videos;
(5) The original video captured by the camera is used directly as the model texture, so the fusion result loses little of the real information carried in the video image compared with the original image, enhancing the realism of the scene and the user experience.
Brief description of the drawings
Fig. 1 is a flow chart of the method for fusing multiple video streams with a three-dimensional scene according to the present invention;
Fig. 2 is a schematic diagram of environment images collected by cameras;
Fig. 3 is a schematic diagram of a camera's viewing frustum;
Fig. 4 is a schematic diagram of the camera distribution and the user's viewpoint;
Fig. 5 is a schematic diagram of the virtual environment;
Fig. 6 is a schematic diagram of the fusion effect of video images and the virtual environment.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and an example. As shown in Fig. 1, the flow of the proposed method for fusing multiple video streams with a three-dimensional scene is as follows:
Step 1: Collect video images of the environment with one or more cameras, as shown in Fig. 2. Track the parameter information of each camera during collection; the tracking can be realized by recording with sensors, by offline image analysis, or by similar means. The tracked camera parameters must be transformed into the coordinate system of the virtual environment before the fusion process can be completed.
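By way of illustration, a minimal Python sketch of such a parameter record and of its transformation into the virtual-environment coordinate system is given below; the field names and the similarity transform (R, t, s) are assumptions made here for exposition, not something prescribed by the method.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraParams:
    """Acquisition parameters tracked for one camera (field names are illustrative)."""
    position: np.ndarray      # 3D position in the capture (world) coordinate system
    orientation: np.ndarray   # 3x3 rotation matrix describing the camera orientation
    focal_length: float       # focal length, e.g. in pixels
    timestamp: float          # capture time, used later to synchronise multiple streams

def to_virtual_coords(p: CameraParams, R: np.ndarray, t: np.ndarray, s: float) -> CameraParams:
    """Transform tracked parameters into the virtual-environment coordinate system using an
    assumed similarity transform (rotation R, translation t, scale s)."""
    return CameraParams(
        position=s * (R @ p.position) + t,
        orientation=R @ p.orientation,
        focal_length=p.focal_length,
        timestamp=p.timestamp,
    )
```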
Step 2: Build the viewing frustum of each camera in three-dimensional space from its parameter information, as shown in Fig. 3. In Fig. 3, the hexahedron ABCD-EFGH represents the frustum of a camera; it is enclosed by six faces, and the ray from the center O of face ABCD to the center O1 of face EFGH is the direction of the camera's optical axis. The user's viewing region is then built in the virtual environment from the user's current viewpoint position and direction. In Fig. 4, the user is at position O and AOB represents the user's viewing region. For the cameras in the scene:
(1) If the distance between the camera position and the viewpoint position exceeds a threshold, the camera is considered invisible; for example, cameras C0 and C4 in Fig. 4 are farther than distance d from O and are therefore invisible. If the camera position is within the distance threshold but the camera frustum does not intersect the user's viewing region, or the angle between the camera's optical axis and the viewing direction exceeds a threshold, the camera is also considered invisible; in Fig. 4, cameras C1 and C2 are not in the user's viewing region, and C3 intersects the viewing region but the angle between its optical axis and the viewing direction exceeds the threshold, so C1, C2 and C3 are invisible;
(2) If the camera position is within the distance threshold and the following conditions are met: the camera frustum intersects, or lies inside, the user's viewing region, and the angle between the camera's optical axis and the viewing direction does not exceed the threshold, then the camera is considered visible. For example, camera C5 in Fig. 4 lies inside the user's viewing region, C6 intersects it, and the angles between the optical axes of C5 and C6 and the viewing direction are within the limit, so C5 and C6 are visible.
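The following is a minimal Python sketch of this visibility test, assuming the frustum and the viewing region are each approximated by an axis-aligned bounding box; the helper names and the threshold values max_dist and max_angle_deg are illustrative assumptions.

```python
import numpy as np

def aabb_of(points):
    """Axis-aligned bounding box (min corner, max corner) of a set of 3D points."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def aabbs_intersect(a, b):
    """True if two axis-aligned bounding boxes overlap."""
    (amin, amax), (bmin, bmax) = a, b
    return bool(np.all(amax >= bmin) and np.all(bmax >= amin))

def is_camera_visible(cam_pos, cam_axis, frustum_corners,
                      view_pos, view_dir, view_region_corners,
                      max_dist=100.0, max_angle_deg=60.0):
    """Visibility rules of step 2 (sketch): distance threshold, overlap of the camera
    frustum with the user's viewing region (both approximated by bounding boxes), and
    the angle between the optical axis and the viewing direction."""
    cam_pos, view_pos = np.asarray(cam_pos, float), np.asarray(view_pos, float)

    # Rule 1: a camera farther than max_dist from the viewpoint is invisible (C0, C4 in Fig. 4).
    if np.linalg.norm(cam_pos - view_pos) > max_dist:
        return False

    # Rule 2: the frustum must overlap the viewing region (C1 and C2 in Fig. 4 fail this test).
    if not aabbs_intersect(aabb_of(frustum_corners), aabb_of(view_region_corners)):
        return False

    # Rule 3: the optical axis must not deviate too far from the viewing direction (C3 fails).
    cam_axis, view_dir = np.asarray(cam_axis, float), np.asarray(view_dir, float)
    cos_angle = np.dot(cam_axis, view_dir) / (np.linalg.norm(cam_axis) * np.linalg.norm(view_dir))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg      # e.g. C5 and C6 in Fig. 4 pass all three tests
```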
After the visibility of the cameras has been computed, the in-use visible camera list U is updated according to the following flow:
(2.1) Save the camera visibility result obtained in the previous computation, empty the to-retire list Q and the to-add list J, and set the flag of every camera in the visibility list V to -1. A flag of 1 means the camera is visible, 0 means it is invisible, and -1 means unknown. Go to step (2.2);
(2.2) For each camera in list V, if its position is farther than the distance threshold from the current viewpoint, consider the camera invisible and set its flag to 0; otherwise go to step (2.3);
(2.3) If the camera position is within the distance threshold of the current viewpoint: compute the bounding box of the camera frustum and the user's viewing region at the current viewpoint; if the bounding box does not intersect the viewing region, consider the camera invisible and set its flag to 0; compute the angle between the camera's optical axis and the viewing direction, and if it exceeds the threshold, set the flag to 0; otherwise go to step (2.4);
(2.4) If the bounding box of the camera frustum intersects, or lies inside, the viewing region and the angle between the camera's optical axis and the viewing direction does not exceed the threshold, consider the camera visible and set its flag to 1. Compare this visibility list with the one obtained in the previous computation: a camera that is visible now but was invisible last time is put into the to-add list J, and a camera that is invisible now but was visible last time is put into the to-retire list Q. Go to step (2.5);
(2.5) Move the cameras in the to-add list J into the candidate list C. According to the user's request, select a suitable number of cameras from C, put them into the in-use list U and request their video images. Remove from the candidate list C and the in-use list U any camera that also appears in the to-retire list Q, and release the associated resources.
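A minimal Python sketch of this update flow is given below; the camera attributes (id, request_video_stream, release_video_stream), the is_visible callback and the load limit max_in_use are assumptions introduced for illustration.

```python
def update_in_use_list(cameras, prev_visible_ids, in_use, is_visible, max_in_use=4):
    """Sketch of steps (2.1)-(2.5): recompute visibility, diff against the previous result,
    and move cameras between the to-add list J, to-retire list Q, candidate list C and
    in-use list U."""
    to_add, to_retire = [], []                      # (2.1) empty lists J and Q

    visible_ids = set()
    for cam in cameras:                             # (2.2)-(2.4) flag each camera visible/invisible
        if is_visible(cam):
            visible_ids.add(cam.id)
            if cam.id not in prev_visible_ids:      # newly visible -> to-add list J
                to_add.append(cam)
        elif cam.id in prev_visible_ids:            # no longer visible -> to-retire list Q
            to_retire.append(cam)

    candidates = list(to_add)                       # (2.5) move J into the candidate list C
    for cam in candidates:
        if len(in_use) < max_in_use:                # load only a suitable number of cameras
            cam.request_video_stream()
            in_use.append(cam)

    retire_ids = {cam.id for cam in to_retire}      # retire cameras that appear in Q
    for cam in [c for c in in_use if c.id in retire_ids]:
        cam.release_video_stream()                  # release the associated resources
        in_use.remove(cam)

    return visible_ids, in_use
```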
Step 3: For each camera in list U, compute the association relation R between the content of its video image and the virtual objects (points, lines, surfaces and volumes) in the virtual environment; the virtual environment is illustrated in Fig. 5. Render the depth map D of the scene visible from the camera viewpoint, and according to R and D convert each pixel $C_p$ of the video image into the corresponding texture color $C_t$ of the scene object by the formula:
$$C_t = \begin{cases} f(R, C_p, I_t) & \text{if } d_t < D_t \\ 0 & \text{if } d_t \ge D_t \end{cases}$$
where $I_t$ is the texture coordinate of the object, $d_t$ is the depth value of the object, $D_t$ is the depth value in the depth map D, and $f$ is the function that computes the texture color from the association relation R, the image pixel $C_p$ and the texture coordinate $I_t$. When multiple images are fused onto the same object, the contribution of each image's pixels to the object is computed and used as the weight for computing the object's texture:
$$C = \sum_i \omega_i C_{ti}$$
where $i$ is the index of the image, $C_{ti}$ is the texture color computed for the object from image $i$, and $\omega_i$ is the contribution of image $i$ to the object. Through the above computation, the fusion of the multiple video streams with each object in the three-dimensional scene is achieved.
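The following Python sketch illustrates the two formulas above for a single object texture, assuming the projection function $f$ has already produced the per-image colors and that the contribution weights are pre-computed scalars; the array layouts are assumptions, not part of the method.

```python
import numpy as np

def fuse_textures(projected, depths, scene_depth, weights):
    """Per-object texture fusion sketch.  projected[i] is the colour produced by projecting
    video image i through the association relation R onto the object texture (H x W x 3);
    depths[i] is the object depth d_t seen from camera i (H x W); scene_depth is the depth
    map D rendered from the camera viewpoint; weights[i] is the contribution omega_i of
    image i (assumed pre-computed and normalised)."""
    fused = np.zeros_like(np.asarray(projected[0], dtype=float))

    for color, d_t, w in zip(projected, depths, weights):
        # First formula: C_t = f(R, C_p, I_t) where d_t < D_t, and 0 where the object is occluded.
        visible = np.asarray(d_t) < np.asarray(scene_depth)
        # Second formula: C = sum_i omega_i * C_ti, accumulated image by image.
        fused += np.where(visible[..., None], w * np.asarray(color, dtype=float), 0.0)

    return fused
```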
Step 4: Visualize the fusion result of the video images and the three-dimensional scene in the virtual environment; the visualized result is shown in Fig. 6. According to the user's needs, the three-dimensional scene before fusion, the video images before fusion, the fusion result from the user's viewpoint and the fusion result from the camera viewpoint can be displayed simultaneously. The user can roam interactively in the virtual environment. According to the user's request, some or all of the cameras in the scene are selected as targets of the automatic patrol; a patrol path between the cameras is planned automatically and the target cameras are patrolled along this path.
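A minimal Python sketch of such a patrol planner is given below; the greedy nearest-neighbour ordering and the dwell time are assumptions, since the method does not prescribe a particular path-planning algorithm, and the path entries follow the (time, viewpoint position, viewpoint direction) format described in claim 5.

```python
import numpy as np

def plan_patrol(cameras, start_pos, dwell_seconds=5.0):
    """Automatic patrol sketch: order the user-selected cameras with a greedy
    nearest-neighbour tour and return a path of (time, viewpoint position,
    viewpoint direction) entries."""
    remaining = list(cameras)
    path, t = [], 0.0
    pos = np.asarray(start_pos, dtype=float)

    while remaining:
        # Visit the closest not-yet-visited camera next.
        nxt = min(remaining, key=lambda c: np.linalg.norm(np.asarray(c.position) - pos))
        remaining.remove(nxt)
        target = np.asarray(nxt.position, dtype=float)
        direction = target - pos
        direction = direction / (np.linalg.norm(direction) + 1e-8)
        path.append((t, target, direction))         # time, viewpoint position, viewpoint direction
        pos = target
        t += dwell_seconds

    return path
```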
The parts of the present invention that are not elaborated here belong to technology known to those skilled in the art.
The above is only a preferred embodiment of the present invention. It should be pointed out that those skilled in the art can make improvements and modifications without departing from the principles of the invention, and such improvements and modifications should also be regarded as falling within the scope of protection of the present invention.

Claims (5)

1. A method for fusing multiple video streams with a three-dimensional scene, in which video images of the environment are first collected with one or more cameras and the parameters during collection are tracked; the viewing frustum of each camera in three-dimensional space is then computed from the collection parameters, the visibility of the cameras is computed on this basis from the position and direction of the user's viewpoint, and the video images of the visible cameras are scheduled; the association between the video image of each visible camera and the virtual objects in the three-dimensional scene is then computed and used to fuse the video image with the virtual objects; finally the fusion result is visualized in the virtual environment, providing the user with interactive roaming and an automatic camera patrol service; characterized by the following steps:
(1) collecting video images of the environment with one or more cameras, the video images serving as the data source that represents the dynamics of the scene in real time; and tracking the parameter information of each camera during collection, the parameter information being used to schedule the video images and to fuse them correctly with the three-dimensional scene;
(2) computing, from the parameter information at the time of shooting, the viewing frustum of each camera in three-dimensional space, the frustum being an approximate or accurate representation, in the virtual environment, of the spatial extent of the real environment captured by the camera; computing the set of cameras currently visible to the user from the position and direction of the user's viewpoint in the virtual environment, and scheduling the video images of the visible cameras;
(3) for each camera in the visible set, computing from the camera parameter information the association between its video image and the virtual objects in the three-dimensional scene, and fusing the video image with the virtual objects using this association;
(4) visualizing the fusion result of the video images and the virtual objects in the virtual environment, wherein the user can roam interactively in the virtual environment and specified cameras are patrolled automatically.
2. The method according to claim 1, characterized in that: step (1) collects video images with one or more cameras, the cameras including but not limited to ordinary cameras and panoramic cameras; the tracking of the parameters during collection is an offline or real-time process that computes the parameter information at collection time by means of sensors or image analysis, the parameter information comprising position, orientation, focal length and timestamp; and the parameter information is encoded and transmitted together with the video images in real time, or saved in a local storage device.
3. The method according to claim 2, characterized in that: in step (2), for each camera in the scene, the viewing frustum of the camera in three-dimensional space is computed from its parameter information at collection time, the frustum defining the observation range of the camera in the virtual environment, which corresponds to the real geographic space captured by the camera; the intersection of each camera frustum with the user's viewing region is then computed from the position and direction of the user's current viewpoint, and the visibility of the camera is judged according to the following rules:
if the distance between the camera position and the viewpoint position exceeds a threshold, the camera is considered invisible; if the camera position is within the distance threshold but the camera frustum is not in the user's viewing region, or the angle between the camera's optical axis and the viewing direction exceeds a threshold, the camera is considered invisible;
if the camera position is within the distance threshold and the following conditions are met: the camera frustum intersects, or lies inside, the user's viewing region, and the angle between the camera's optical axis and the viewing direction does not exceed the threshold, the camera is considered visible;
the above visibility determination is accelerated by grouping the cameras according to their position distribution and searching only the groups near the user's viewpoint during computation, thereby reducing the amount of computation; after the visibility of the cameras has been computed, the in-use visible camera list U is updated according to the following flow:
(2.1) saving the camera visibility result of the previous computation, emptying the to-retire list Q and the to-add list J, and setting the flag of every camera in the visibility list V to -1, wherein a flag of 1 means the camera is visible, 0 means it is invisible, and -1 means unknown; then entering step (2.2);
(2.2) for each camera in list V, if its position is farther than the distance threshold from the current viewpoint, considering the camera invisible and setting its flag to 0; otherwise entering step (2.3);
(2.3) if the camera position is within the distance threshold of the current viewpoint: computing the bounding box of the camera frustum and the user's viewing region at the current viewpoint, and if the bounding box does not intersect the viewing region, considering the camera invisible and setting its flag to 0; computing the angle between the camera's optical axis and the viewing direction, and if it exceeds the threshold, setting the flag to 0; otherwise entering step (2.4);
(2.4) if the bounding box of the camera frustum intersects, or lies inside, the viewing region and the angle between the camera's optical axis and the viewing direction does not exceed the threshold, considering the camera visible and setting its flag to 1; comparing this visibility list with that of the previous computation, putting a camera that is visible now but was invisible last time into the to-add list J, and putting a camera that is invisible now but was visible last time into the to-retire list Q; then entering step (2.5);
(2.5) moving the cameras in the to-add list J into the candidate list C; according to the user's request, selecting a suitable number of cameras from C, putting them into the in-use list U and requesting their video images; removing from the candidate list C and the in-use list U any camera that also appears in the to-retire list Q and releasing the associated resources;
according to the application's demands, the video images of the cameras in list U are synchronized using the timestamp parameter.
4. The method according to claim 1, characterized in that: in step (3), for each visible camera in the scene, the association between the content of its video image and the virtual objects in the three-dimensional scene is computed from the parameter information at shooting time, and needs to be computed only once as long as the camera parameters do not change; the video images are then fused with the virtual objects in the three-dimensional scene by video projection according to this association, the objects comprising points, lines, surfaces and assemblies of the three; the content of several video images can be fused with the same object, the overlapping regions of the video images being handled by methods such as feature matching, transparency channels and weighted averaging; the occlusion relations between objects must be considered during fusion and are judged quickly by a depth-map method, the occluded parts using the original texture or a texture specified according to actual needs, and the unoccluded parts being fused with the video images.
5. The method according to claim 1, characterized in that: in step (4), the fusion result of the video images and the three-dimensional scene is visualized in the virtual environment, and views such as the original three-dimensional scene, the original video images, the fusion effect from the user's viewpoint and the fusion effect from the camera's shooting viewpoint can additionally be displayed on demand; the user can roam interactively in the virtual scene; some or all of the cameras in the scene are selected according to the user's demand, and these cameras are patrolled automatically by planning a browsing path between them, the path comprising time, viewpoint position and viewpoint direction information.
CN201410769001.2A 2014-12-11 2014-12-11 Virtual-real fusion method for multiple video streams and a three-dimensional scene Active CN104599243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410769001.2A CN104599243B (en) 2014-12-11 2014-12-11 Virtual-real fusion method for multiple video streams and a three-dimensional scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410769001.2A CN104599243B (en) 2014-12-11 2014-12-11 Virtual-real fusion method for multiple video streams and a three-dimensional scene

Publications (2)

Publication Number Publication Date
CN104599243A (en) 2015-05-06
CN104599243B (en) 2017-05-31

Family

ID=53124993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410769001.2A Active CN104599243B (en) 2014-12-11 2014-12-11 Virtual-real fusion method for multiple video streams and a three-dimensional scene

Country Status (1)

Country Link
CN (1) CN104599243B (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111009158B (en) * 2019-12-18 2020-09-15 华中师范大学 Virtual learning environment multi-channel fusion display method for field practice teaching


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101605211A (en) * 2009-07-23 2009-12-16 杭州镭星科技有限公司 Virtual three-dimensional building and actual environment real scene shooting video there is not the method that is stitched into
CN101951502A (en) * 2010-10-19 2011-01-19 北京硅盾安全技术有限公司 Three-dimensional intelligent video monitoring method
CN104183014A (en) * 2014-08-13 2014-12-03 浙江大学 An information labeling method having high fusion degree and oriented to city augmented reality

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周凡 (Zhou Fan): "Research on registration and rendering methods for augmenting virtual three-dimensional scenes with video images", China Doctoral Dissertations Full-text Database, Information Science and Technology Series (Monthly) *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005970B (en) * 2015-06-26 2018-02-16 广东欧珀移动通信有限公司 The implementation method and device of a kind of augmented reality
CN105005970A (en) * 2015-06-26 2015-10-28 广东欧珀移动通信有限公司 Augmented reality implementation method and apparatus
CN105389850A (en) * 2015-11-03 2016-03-09 北京大学(天津滨海)新一代信息技术研究院 Novel visibility generation method for large-scale three-dimensional scene
CN105389850B (en) * 2015-11-03 2018-05-01 北京大学(天津滨海)新一代信息技术研究院 A kind of observability generation method of extensive three-dimensional scenic
CN105979360A (en) * 2015-12-04 2016-09-28 乐视致新电子科技(天津)有限公司 Rendering image processing method and device
WO2017092332A1 (en) * 2015-12-04 2017-06-08 乐视控股(北京)有限公司 Method and device for image rendering processing
CN106354251A (en) * 2016-08-17 2017-01-25 深圳前海小橙网科技有限公司 Model system and method for fusion of virtual scene and real scene
CN106354251B (en) * 2016-08-17 2019-04-02 深圳前海小橙网科技有限公司 A kind of model system and method that virtual scene is merged with real scene
CN106340064A (en) * 2016-08-25 2017-01-18 北京大视景科技有限公司 Mixed-reality sandbox device and method
CN106340064B (en) * 2016-08-25 2019-02-01 北京大视景科技有限公司 A kind of mixed reality sand table device and method
CN107223271A (en) * 2016-12-28 2017-09-29 深圳前海达闼云端智能科技有限公司 A kind of data display processing method and device
CN110679152A (en) * 2017-05-31 2020-01-10 维里逊专利及许可公司 Method and system for generating a fused reality scene based on virtual objects and on real world objects represented from different vantage points in different video data streams
CN110679152B (en) * 2017-05-31 2022-01-04 维里逊专利及许可公司 Method and system for generating fused reality scene
CN111278519A (en) * 2017-09-08 2020-06-12 索尼互动娱乐股份有限公司 Second screen projection from space and user perception of a companion robot or device
CN107888897A (en) * 2017-11-01 2018-04-06 南京师范大学 A kind of optimization method of video source modeling scene
CN107888897B (en) * 2017-11-01 2019-11-26 南京师范大学 A kind of optimization method of video source modeling scene
CN107918948A (en) * 2017-11-02 2018-04-17 深圳市自由视像科技有限公司 4D Video Rendering methods
CN107832716A (en) * 2017-11-15 2018-03-23 中国科学技术大学 Method for detecting abnormality based on active-passive Gauss on-line study
CN108154553A (en) * 2018-01-04 2018-06-12 中测新图(北京)遥感技术有限责任公司 The seamless integration method and device of a kind of threedimensional model and monitor video
CN109963120A (en) * 2019-02-26 2019-07-02 北京大视景科技有限公司 The combined control system and method for more ptz cameras in a kind of virtual reality fusion scene
WO2020228768A1 (en) * 2019-05-14 2020-11-19 广东康云科技有限公司 3d intelligent education monitoring method and system, and storage medium
CN110675506A (en) * 2019-08-21 2020-01-10 佳都新太科技股份有限公司 System, method and equipment for realizing three-dimensional augmented reality of multi-channel video fusion
CN110675506B (en) * 2019-08-21 2021-07-09 佳都科技集团股份有限公司 System, method and equipment for realizing three-dimensional augmented reality of multi-channel video fusion
CN110659385B (en) * 2019-09-12 2020-10-09 中国测绘科学研究院 Fusion method of multi-channel video and three-dimensional GIS scene
CN110659385A (en) * 2019-09-12 2020-01-07 中国测绘科学研究院 Fusion method of multi-channel video and three-dimensional GIS scene
CN111696216A (en) * 2020-06-16 2020-09-22 浙江大华技术股份有限公司 Three-dimensional augmented reality panorama fusion method and system
CN111696216B (en) * 2020-06-16 2023-10-03 浙江大华技术股份有限公司 Three-dimensional augmented reality panorama fusion method and system
CN112651881A (en) * 2020-12-30 2021-04-13 北京百度网讯科技有限公司 Image synthesis method, apparatus, device, storage medium, and program product
CN112651881B (en) * 2020-12-30 2023-08-01 北京百度网讯科技有限公司 Image synthesizing method, apparatus, device, storage medium, and program product
CN113223130A (en) * 2021-03-17 2021-08-06 浙江大华技术股份有限公司 Path roaming method, terminal equipment and computer storage medium
CN113722644A (en) * 2021-09-03 2021-11-30 北京房江湖科技有限公司 Method and device for selecting browsing point in virtual space based on external equipment
CN113722644B (en) * 2021-09-03 2023-07-21 如你所视(北京)科技有限公司 Method and device for selecting browsing point positions in virtual space based on external equipment

Also Published As

Publication number Publication date
CN104599243B (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN104599243A (en) 2015-05-06 Virtual-real fusion method for multiple video streams and a three-dimensional scene
US20200255143A1 (en) Three-dimensional reconstruction method, system and apparatus based on aerial photography by unmanned aerial vehicle
CN112633535A (en) Photovoltaic power station intelligent inspection method and system based on unmanned aerial vehicle image
WO2020192355A1 (en) Method and system for measuring urban mountain viewing visible range
CN106777373B (en) Three-dimensional police geographical information platform and system architecture
CN107247834A (en) A kind of three dimensional environmental model reconstructing method, equipment and system based on image recognition
WO2022052239A1 (en) Dynamic interactive method for urban viewing corridor recognition and planning simulation
CN106933961B (en) The three-dimensional police geographical information platform automatically analyzed based on commanding elevation
CN103606188B (en) Geography information based on imaging point cloud acquisition method as required
CN115187742B (en) Method, system and related device for generating automatic driving simulation test scene
Kido et al. Assessing future landscapes using enhanced mixed reality with semantic segmentation by deep learning
CN105898216A (en) Method of counting number of people by using unmanned plane
CN110428501B (en) Panoramic image generation method and device, electronic equipment and readable storage medium
CN107291879A (en) The method for visualizing of three-dimensional environment map in a kind of virtual reality system
CN107067447A (en) A kind of integration video frequency monitoring method in large space region
CN110362895B (en) Land acquisition removal application management system based on BIM + GIS technology
Luo et al. Semantic Riverscapes: Perception and evaluation of linear landscapes from oblique imagery using computer vision
CN114299390A (en) Method and device for determining maintenance component demonstration video and safety helmet
CN114419231B (en) Traffic facility vector identification, extraction and analysis system based on point cloud data and AI technology
CN108388995A (en) A kind of method for building up of road asset management system and establish system
CN115082254A (en) Lean control digital twin system of transformer substation
CN112435337A (en) Landscape visual field analysis method and system
CN111783690A (en) Urban travelable area CIM information processing method based on vehicle density perception
CN103335608A (en) Airborne LiDAR three-dimensional data acquisition method for establishing three-dimensional digital power transmission and transformation grid
CN106652031A (en) 4D real dynamic display method for electric engineering design

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171222

Address after: A1004-061, 9th floor, Block A, No. 9 Shangdi Third Street, Haidian District, Beijing 100085

Patentee after: Beijing ancient Shitu Ding Technology Co. Ltd.

Address before: No. 37 Xueyuan Road, Haidian District, Beijing 100191

Patentee before: Beihang University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20190717

Address after: Room 211, 2nd floor, Building 133, Garden Road, Haidian District, Beijing 100088

Patentee after: Beijing large landscape Technology Co. Ltd.

Address before: 100085 A1004-061, 9th floor, Block A, 9th Shangdi Sanjie, Haidian District, Beijing

Patentee before: Beijing ancient Shitu Ding Technology Co. Ltd.

TR01 Transfer of patent right