CN116129043A - Universal three-dimensional model for fusing reality scene and construction method thereof - Google Patents
- Publication number: CN116129043A
- Application number: CN202211711446.6A
- Authority: CN (China)
- Prior art keywords: dimensional model, static, dynamic, target
- Prior art date: 2022-12-29
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T19/006 — Mixed reality
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/10016 — Video; image sequence
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/20221 — Image fusion; image merging
Abstract
The invention relates to the technical field of computer vision and discloses a universal three-dimensional model for fusing real scenes, together with a method for constructing it. The method acquires static-target video data in a target area of the real scene with depth cameras, derives a static three-dimensional point cloud set from that data, and builds a static three-dimensional model from the set; it then acquires dynamic-target video data with the depth cameras, derives a dynamic three-dimensional point cloud set, and builds a dynamic three-dimensional model; finally, it superimposes and fuses the static and dynamic models to construct the universal three-dimensional model of the real scene of the target area. The universal model contains multiple fusion base features that can simulate actual reference objects in different scenes. The method improves the universality of the three-dimensional model and the realism and fidelity of its fusion with real scenes.
Description
Technical Field
The invention relates to the technical field of computer vision, and in particular to a universal three-dimensional model for fusing real scenes and a method for constructing it.
Background
With advances in three-dimensional technology, three-dimensional models and scenes are now widely used across many fields because of the intuitive, immersive viewing experience they provide. A depth camera (3D camera) measures the distance from the camera to every point in the captured space. Combining each point's distance with its 2D image coordinates yields its three-dimensional spatial coordinates, from which the real scene can be reconstructed, realizing three-dimensional modeling of the scene.
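For concreteness, the back-projection described above can be sketched as follows. This is a minimal illustration assuming a standard pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) and image size are hypothetical values, not taken from the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an N x 3 point
    cloud in the camera frame, using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy  # Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example with a synthetic 480x640 depth image and hypothetical intrinsics.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```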
In current scene modeling, the three-dimensional model is built from data that multiple depth cameras installed at different positions collect from different viewing angles: the collected position coordinates are converted into point cloud data, and the point clouds are fused to obtain a three-dimensional model over consecutive instants. Although this approach can produce a three-dimensional model of a real scene, the data are collected in disjoint time segments and cannot fully reflect the characteristics of the scene, so the resulting model has many defects and is not suitable for fusion with real scenes in the general case.
Disclosure of Invention
The invention aims to provide a universal three-dimensional model for fusing real scenes, and a method for constructing it, so as to improve the universality of three-dimensional models of real scenes.
To achieve the above purpose, the invention adopts the following technical scheme: a universal three-dimensional model for fusing real scenes comprises a plurality of three-dimensional structures, each of which contains a plurality of fusion base features for fusing the real scene; these base features can simulate actual reference objects in different scenes.
The principle and advantages of the scheme are as follows. In practice, after video image data of a real scene have been acquired, the data are imported into the universal three-dimensional model, and each target object in the real scene is simulated by the fusion base features in the model, so that every constituent feature of the real scene is faithfully restored in the three-dimensional model. With this model, the objects contained in any real scene can be simulated, allowing a complete and realistic fused three-dimensional model to be constructed and merged with the real scene; this further improves the realism of the result and maximizes the universality of the three-dimensional model.
The scheme also provides a method for constructing the universal three-dimensional model for fusing real scenes, comprising the following steps:
Step S1: acquire static-target video data in a target area of the real scene with depth cameras, derive a static three-dimensional point cloud set of the real scene from the acquired data, and construct a static three-dimensional model from the set;
Step S2: acquire dynamic-target video data in the real scene with the depth cameras, derive a dynamic three-dimensional point cloud set, and construct a dynamic three-dimensional model from the set;
Step S3: superimpose and fuse the static and dynamic three-dimensional models to construct the universal three-dimensional model of the real scene of the target area.
Beneficial effects: data are collected by object type; a base static three-dimensional model is first built from the collected static point clouds, point clouds of the dynamic targets in the real scene are then acquired to build a dynamic three-dimensional model, and finally the two models are superimposed and fused into the universal three-dimensional model of the scene. Each base feature of the static model can simulate a fixed component of the real scene, while the dynamic model represents the moving targets, so the real scene is faithfully restored with higher realism; the model is also applicable to fusion with other real scenes and therefore has stronger universality.
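The three-step flow can be summarized in a minimal, hypothetical sketch. All function names are illustrative, the point clouds are plain N x 3 arrays, and fusion is approximated by concatenating clouds, which is one simple reading of "superposition"; this is not the patent's definitive implementation.

```python
import numpy as np

def step_s1(static_clouds):
    """Step S1 (sketch): merge the per-camera static point clouds
    into one static model, represented as an N x 3 array."""
    return np.concatenate(static_clouds, axis=0)

def step_s2(dynamic_clouds):
    """Step S2 (sketch): keep the dynamic point clouds as a
    time-ordered sequence, one cloud per sampled instant."""
    return list(dynamic_clouds)

def step_s3(static_model, dynamic_model):
    """Step S3 (sketch): superimpose each dynamic cloud onto the
    fixed static model, giving the fused model at each instant."""
    return [np.concatenate([static_model, d], axis=0) for d in dynamic_model]
```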
Preferably, in step S1, the static-target video data are acquired at different viewing angles by depth cameras arranged at different positions in the target area.
Preferably, as an improvement, the video data of each static target are acquired by at least three depth cameras with different viewing angles.
Preferably, as an improvement, when the static three-dimensional model is built, the depth cameras distributed around a static target scan its outer surface to obtain an outer-surface three-dimensional point cloud, from which the static three-dimensional model of that target is built.
Preferably, as an improvement, if a static target has a regular shape, its height dimension is left unconstrained when its outer-surface three-dimensional point cloud is collected.
Preferably, as an improvement, in step S2 the dynamic three-dimensional point cloud set is obtained by the following sub-steps (a code sketch follows the list):
collect dynamic point cloud packets of the dynamic targets with several depth cameras fixed in the target area;
select a fixed point in the target area as the coordinate origin and establish a spatial coordinate system;
convert all dynamic point cloud packets into that coordinate system to obtain a unified dynamic three-dimensional point cloud set of the dynamic targets.
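A sketch of the conversion into the shared coordinate system, assuming each fixed camera has known extrinsics (rotation R and translation t) calibrated against the chosen origin; the camera names and extrinsic values below are hypothetical.

```python
import numpy as np

def to_world(points_cam, R, t):
    """Transform an N x 3 cloud from a camera frame into the shared
    scene frame: p_world = R @ p_cam + t (row-vector form)."""
    return points_cam @ R.T + t

# Hypothetical extrinsics for two fixed cameras, calibrated against
# the fixed point chosen as the coordinate origin of the target area.
extrinsics = {
    "cam0": (np.eye(3), np.zeros(3)),
    "cam1": (np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 1.0]]), np.array([1.5, 0.0, 0.0])),
}

def unify(packets):
    """Merge per-camera dynamic point cloud packets (dict of camera
    name -> N x 3 array) into one unified dynamic point cloud set."""
    return np.concatenate(
        [to_world(pts, *extrinsics[cam]) for cam, pts in packets.items()],
        axis=0)
```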
Preferably, as an improvement, in step S3, when the static and dynamic three-dimensional models are superimposed and fused, the static model is fixed first, and the superimposed segments are then divided into time periods according to the motion trend of the dynamic target in the dynamic model.
Preferably, as an improvement, the segments are divided by time period as follows: the segment before the dynamic target moves out of the currently displayable picture is classed as current frames, and the segment after it as future frames.
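One possible reading of this segmentation, sketched under the assumption that "moving out of the displayable picture" can be tested against an axis-aligned bounding box of the viewable region; the bounds are hypothetical parameters.

```python
import numpy as np

def split_segments(clouds, bounds_min, bounds_max):
    """Split a time-ordered list of dynamic clouds at the first instant
    the target's centroid leaves the displayable region: everything up
    to that instant is the current-frame segment, the rest the
    future-frame segment."""
    for i, cloud in enumerate(clouds):
        centroid = cloud.mean(axis=0)
        if np.any(centroid < bounds_min) or np.any(centroid > bounds_max):
            return clouds[:i], clouds[i:]
    return clouds, []  # target never leaves the picture
```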
Preferably, as an improvement, when fusing with a real scene, each fixed object in the real scene is first compared with the static three-dimensional models in the universal model; if a model of the same type exists, it is fused directly, and if not, the structural dimensions of a static model are changed to match the size of the fixed object.
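A sketch of this comparison-and-resize step, assuming object types are plain labels and "size" means per-axis bounding-box extents; all names are illustrative.

```python
import numpy as np

def adapt_static_model(model_pts, model_type, fixed_type, fixed_size):
    """Reuse the stored static model directly when its type matches the
    scene's fixed object; otherwise rescale it per axis so its
    bounding-box extents equal the fixed object's measured size."""
    if model_type == fixed_type:
        return model_pts
    extents = model_pts.max(axis=0) - model_pts.min(axis=0)  # assumed nonzero
    scale = np.asarray(fixed_size) / extents
    return model_pts * scale
```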
Drawings
Fig. 1 is a schematic flow chart of the universal three-dimensional model construction method for fusing real scenes according to an embodiment of the invention.
Detailed Description
The following is a further detailed description of the embodiments:
Embodiment One:
This embodiment proceeds essentially as shown in Fig. 1. The method for constructing the universal three-dimensional model for fusing real scenes comprises the following steps:
Step S1: acquire static-target video data in a target area of the real scene with depth cameras, derive a static three-dimensional point cloud set of the real scene from the acquired data, and construct a static three-dimensional model from the set;
Step S2: acquire dynamic-target video data in the real scene with the depth cameras, derive a dynamic three-dimensional point cloud set, and construct a dynamic three-dimensional model from the set;
Step S3: superimpose and fuse the static and dynamic three-dimensional models to construct the universal three-dimensional model of the real scene of the target area.
When the static-target video data are collected, several depth cameras arranged at different positions in the target area record all static targets in the area from different viewing angles. Specifically, each static target is captured by at least three depth cameras with different viewing angles, so that acquisition covers the full 360 degrees around the target.
After the static-target video data have been collected, a fixed time point is chosen and the image frames of all static targets at that instant are selected; these frames are converted into three-dimensional point cloud data to obtain the static three-dimensional point cloud set of the real scene, from which the static three-dimensional model is constructed.
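This frame-selection-and-merge step might look as follows, building on the back-projection sketched earlier; `videos` is assumed to be per-camera sequences of depth frames, and `converters` per-camera functions that turn a frame into a point cloud in the shared frame (both hypothetical).

```python
import numpy as np

def static_cloud_at(videos, t, converters):
    """Take each camera's frame at the chosen instant t, convert it to
    a point cloud with that camera's converter (e.g. back-projection
    plus that camera's extrinsics), and merge the results into the
    static three-dimensional point cloud set."""
    clouds = [convert(video[t]) for video, convert in zip(videos, converters)]
    return np.concatenate(clouds, axis=0)
```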
When the video data of the dynamic targets are acquired, the several depth cameras fixed in the target area first collect dynamic point cloud packets of the dynamic targets in sequence, yielding multiple packets. A fixed point in the target area is then chosen as the coordinate origin to establish the spatial coordinate system of the real scene, all collected packets are converted into that coordinate system to obtain a unified dynamic three-dimensional point cloud set of all dynamic targets, and the dynamic three-dimensional model is finally constructed from the set.
After the static and dynamic three-dimensional models have been built, a point is selected as their common coordinate origin and the static model is fixed. The superimposed segments are then divided by time period according to the motion trend of the dynamic target in the dynamic model: the segment before the dynamic target moves out of the currently displayable picture is classed as current frames and the later segment as future frames. The dynamic models of the current and future frames are then superimposed and fused with the static model in sequence, producing the universal three-dimensional model of the real scene in the target area.
The invention also provides a universal three-dimensional model for fusing real scenes, comprising a plurality of three-dimensional structures; each structure contains a plurality of fusion base features for fusing the real scene, and these features can simulate actual reference objects in different scenes, completing the fusion of the real scene with the three-dimensional model. When a real scene is fused, each fixed object in the scene is first compared with the static three-dimensional models in the universal model; if a model of the same type exists, fusion proceeds directly, and if not, a static model is resized to match the fixed object before fusion, yielding the fused real scene.
Specifically, in this embodiment, three depth cameras are installed around each target object at an included angle of 120 degrees, completing data acquisition of the target without blind angles.
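The 120-degree arrangement can be computed as points evenly spaced on a circle around the target; the radius and mounting height below are hypothetical values for illustration only.

```python
import numpy as np

def ring_positions(center, radius, n=3):
    """Positions of n depth cameras spaced evenly on a circle around
    the target (120 degrees apart for n = 3)."""
    angles = np.arange(n) * 2.0 * np.pi / n
    return np.stack([center[0] + radius * np.cos(angles),
                     center[1] + radius * np.sin(angles),
                     np.full(n, center[2])], axis=1)

# Three cameras 2 m from a target at the origin, mounted 1.5 m high.
print(ring_positions(center=(0.0, 0.0, 1.5), radius=2.0))
```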
The specific implementation process of this embodiment is as follows:
A target area in the real scene is selected, and several depth cameras separately acquire video data of the static targets. A time point is then chosen, the image frames of all static targets at that instant are selected and converted into three-dimensional point cloud data, yielding the static three-dimensional point cloud sets of the real scene, from which the static three-dimensional model of the scene is constructed.
Data of the dynamic targets are then collected: the depth cameras arranged in the target area collect dynamic point cloud packets of the dynamic targets at successive time points, yielding multiple packets; a fixed point in the target area is selected as the coordinate origin to establish the spatial coordinate system of the real scene; all collected packets are converted into that coordinate system to obtain a unified dynamic three-dimensional point cloud set of all dynamic targets; and the dynamic three-dimensional model is constructed from the set.
Finally, a point is selected as the common coordinate origin of the two models, and the static and dynamic three-dimensional models are superimposed and fused to construct the universal three-dimensional model of the real scene of the target area.
With the development of three-dimensional technology, its products are now widely used in daily life, for example in 3D movies and 3D games, and the technology is gradually being applied to fields such as video surveillance, converting flat monitoring pictures into three-dimensional ones so that monitoring becomes more realistic and accurate. Current three-dimensional models used in surveillance are generally built by having multiple depth cameras collect three-dimensional point cloud data in separate time segments. Although this also yields a three-dimensional model, the final display is inaccurate and cannot faithfully restore the real situation of the scene, and the model's features are too limited to be suitable for fusion in all scenes.
To address these problems, the scheme develops a new construction method: static and dynamic objects of the real scene in the target area are captured separately to build a static and a dynamic three-dimensional model, which are then superimposed and fused; during fusion, the superimposed segments are divided by time period according to the motion trend of the dynamic target in the dynamic model, so the fusion effect is better and the real scene is restored with higher fidelity and realism. The universal three-dimensional model obtained in this way comprises multiple three-dimensional structures, each containing multiple fusion base features that can simulate actual reference objects in different scenes; objects in real scenes are thus simulated more faithfully, the model adapts better to fusion in all scenes, and its universality and realism are further improved.
Embodiment Two:
This embodiment is basically the same as Embodiment One, except that: when the static three-dimensional model is built, the depth cameras distributed around a static target scan its outer surface to obtain its outer-surface three-dimensional point cloud, from which the static three-dimensional model of that target is built; and if the static target has a regular shape, its height dimension is left unconstrained when the outer-surface point cloud is collected.
Scanning the outer surface of the static target with the depth cameras completes the acquisition of the three-dimensional point cloud data needed to build the model. When the static target has a regular outer surface, the cameras only need to capture the point cloud of its overall outline, and the target's height is stored in the system as an arbitrary value; when different real scenes are later matched and fused, the model can then be superimposed on the actual object more vividly and accurately, faithfully restoring the real scene and further improving the model's universality.
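A sketch of assigning an arbitrary height to a regular-shaped target at fusion time, assuming the vertical direction is the z axis of the model; the function name is illustrative.

```python
import numpy as np

def set_height(model_pts, target_height):
    """Stretch a regular-shaped static model along the vertical (z)
    axis so its height matches the fused object's, leaving the scanned
    footprint unchanged."""
    z = model_pts[:, 2]
    current = z.max() - z.min()
    assert current > 0, "model must have nonzero height"
    scaled = model_pts.copy()
    scaled[:, 2] = z.min() + (z - z.min()) * (target_height / current)
    return scaled
```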
The foregoing is merely an embodiment of the present invention. Specific structures and characteristics that are common knowledge in the art are not described here, since a person of ordinary skill knows the prior art as of the application date or priority date, can access all prior art in the field, and is able to apply the routine experimental means of that date; such a person can, in light of this application and their own abilities, complete and implement this embodiment, and certain well-known structures or methods should not be an obstacle to implementing the application. It should be noted that those skilled in the art may make modifications and improvements without departing from the structure of the invention, and these shall also fall within its scope of protection without affecting the effect of the invention's implementation or the utility of the patent. The scope of protection of this application shall be subject to the claims, and the specific embodiments in the description may be used to interpret the content of the claims.
Claims (10)
1. A universal three-dimensional model for fusing real scenes, characterized in that: the model comprises a plurality of three-dimensional structures; each three-dimensional structure contains a plurality of fusion base features for fusing real scenes, and the fusion base features can simulate actual reference objects in different scenes.
2. A method for constructing the universal three-dimensional model for fusing real scenes according to claim 1, wherein the method comprises the following steps:
Step S1: acquire static-target video data in a target area of the real scene with depth cameras, derive a static three-dimensional point cloud set of the real scene from the acquired data, and construct a static three-dimensional model from the set;
Step S2: acquire dynamic-target video data in the real scene with the depth cameras, derive a dynamic three-dimensional point cloud set, and construct a dynamic three-dimensional model from the set;
Step S3: superimpose and fuse the static and dynamic three-dimensional models to construct the universal three-dimensional model of the real scene of the target area.
3. The method for constructing a universal three-dimensional model for fusing real scenes according to claim 2, wherein: in step S1, the static-target video data are acquired at different viewing angles by depth cameras arranged at different positions in the target area.
4. The method for constructing a universal three-dimensional model for fusing real scenes according to claim 3, wherein: the video data of each static target are acquired by at least three depth cameras with different viewing angles.
5. The method for constructing a universal three-dimensional model for fusing real scenes according to claim 2, wherein: when the static three-dimensional model is built, the depth cameras distributed around a static target scan its outer surface to obtain an outer-surface three-dimensional point cloud, from which the static three-dimensional model of that target is built.
6. The method for constructing a universal three-dimensional model for fusing real scenes according to claim 5, wherein: if a static target has a regular shape, its height dimension is left unconstrained when its outer-surface three-dimensional point cloud is collected.
7. The method for constructing a universal three-dimensional model for fusing real scenes according to claim 2, wherein in step S2 the dynamic three-dimensional point cloud set is obtained by:
collecting dynamic point cloud packets of the dynamic targets with several depth cameras fixed in the target area;
selecting a fixed point in the target area as the coordinate origin and establishing a spatial coordinate system;
converting all dynamic point cloud packets into that coordinate system to obtain a unified dynamic three-dimensional point cloud set of the dynamic targets.
8. The method for constructing a universal three-dimensional model for fusing real scenes according to claim 2, wherein: in step S3, when the static and dynamic three-dimensional models are superimposed and fused, the static model is fixed first, and the superimposed segments are then divided into time periods according to the motion trend of the dynamic target in the dynamic model.
9. The method for constructing a universal three-dimensional model for fusing real scenes according to claim 8, wherein: the division of the segments by time period means that the segment before the dynamic target moves out of the currently displayable picture is classed as current frames, and the segment after it as future frames.
10. The method for constructing a universal three-dimensional model for fusing real scenes according to claim 2, wherein: when a real scene is fused, each fixed object in the real scene is first compared with the static three-dimensional models in the universal model; if a model of the same type exists, it is fused directly, and if not, the structural dimensions of a static model are changed to match the size of the fixed object.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211711446.6A | 2022-12-29 | 2022-12-29 | Universal three-dimensional model for fusing reality scene and construction method thereof |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116129043A | 2023-05-16 |
Family ID: 86303960

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211711446.6A | CN116129043A (Pending) | 2022-12-29 | 2022-12-29 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN116129043A |
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117876583A | 2023-12-27 | 2024-04-12 | 中铁四院集团南宁勘察设计院有限公司 | Three-dimensional automatic scanning data online test method and device |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |