CN111698424A - Method for complementing live-action roaming 3D information through common camera - Google Patents

Method for complementing live-action roaming 3D information through common camera

Info

Publication number
CN111698424A
CN111698424A (application CN202010571677.6A)
Authority
CN
China
Prior art keywords
panoramic image
camera
shooting
information
route
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010571677.6A
Other languages
Chinese (zh)
Inventor
韩雷 (Han Lei)
钟祥灵 (Zhong Xiangling)
徐庆 (Xu Qing)
银镭 (Yin Lei)
余驰 (Yu Chi)
任森 (Ren Sen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Yire Technology Co ltd
Original Assignee
Sichuan Yire Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Yire Technology Co ltd filed Critical Sichuan Yire Technology Co ltd
Priority to CN202010571677.6A priority Critical patent/CN111698424A/en
Publication of CN111698424A publication Critical patent/CN111698424A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method for complementing live-action roaming 3D information with a common camera, comprising the following steps: S1, setting the camera's shooting parameters and planning a shooting route; S2, generating a route map after shooting; S3, outputting a panoramic image; S4, performing color processing and orientation correction on the panoramic image; S5, reducing the picture size on the server, importing the pictures into a newly created project file, and ordering them according to the route map; S6, setting the camera height, synchronizing it to each imported panoramic image, and associating the imported panoramic images with one another; S7, constructing a 3D space structure and complementing the 3D information; S8, finding and adjusting any spatial-information errors in the construction; and S9, marking the way-points of the walking route, previewing and refining the walking effect, and finally uploading the project file to a server. The method addresses the prior-art problem that, because panoramic pictures lack depth information, generating a virtual 3D scene severely stretches and distorts the pictures in the model, giving a poor experience.

Description

Method for complementing live-action roaming 3D information through common camera
Technical Field
The invention relates to the technical field of 3D virtual and live-action roaming, and in particular to a method for complementing live-action roaming 3D information through a common camera.
Background
A 360-degree panoramic picture is warped by an algorithm onto the inside of a virtual sphere or cube, with the viewing camera at its center. In this way a virtual 3D environment simulates the atmosphere of a real scene and gives the user the feeling of being there in person. Several virtual 3D models bound to panoramic pictures are spliced together into a whole, achieving a complete simulation of the real scene.
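The warping of a panorama onto a sphere can be sketched with the standard equirectangular mapping, in which each normalized pixel coordinate corresponds to a viewing direction from the sphere's center. The snippet below is an illustrative sketch, not part of the patent; the function name and the coordinate convention (y up, z forward) are assumptions.

```python
import math

def equirect_to_direction(u, v):
    """Map normalized equirectangular coordinates u, v in [0, 1]
    to a unit direction vector seen from the sphere's center.
    u covers 360 degrees of yaw, v covers 180 degrees of pitch."""
    yaw = (u - 0.5) * 2.0 * math.pi    # -pi (left edge) .. +pi (right edge)
    pitch = (0.5 - v) * math.pi        # +pi/2 (top) .. -pi/2 (bottom)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```

The center pixel (u = v = 0.5) maps to the forward direction, which is why a viewer placed exactly at the sphere's center perceives the scene without distortion.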
In the prior art, when a virtual 3D scene is generated, several consecutive virtual 3D models are spliced together into a whole by simple comparison and 2D information is imported into them. Because the panoramic pictures lack depth information, the pictures are severely stretched and distorted in the models, giving a poor experience; and when the individual virtual 3D models are switched in the virtual scene, the missing depth information causes severe frame skipping, seriously affecting the user experience.
Disclosure of Invention
The invention provides a method for complementing live-action roaming 3D information through a common camera, aiming to solve these prior-art problems: when a virtual 3D scene is generated, several consecutive virtual 3D models are spliced together into a whole by simple comparison; because the panoramic pictures imported as 2D information lack depth, the pictures are severely stretched and distorted in the models, giving a poor experience; and switching between the individual virtual 3D models in the virtual scene causes severe frame skipping, seriously affecting the user experience.
In order to solve the above technical problems, the technical scheme adopted by the invention is as follows: a method for complementing live-action roaming 3D information through a common camera, comprising the following steps:
S1, setting the camera's shooting parameters and planning a shooting route;
S2, generating a route map after shooting is finished;
S3, uploading the shot raw data to a server, and outputting a panorama using a panorama manager;
S4, after the panoramic image is output, performing color processing and orientation correction on it, and uploading it to a server;
S5, reducing the size of the processed panoramic image on the server, importing it into a newly created project file, and ordering according to the route map;
S6, setting the camera height, synchronizing it to each imported panoramic image, associating the imported panoramic images, and connecting them into an integral scene to achieve a preliminary simulation of the real scene;
S7, after the imported panoramic images are associated, constructing a 3D space structure and complementing the 3D information according to the relation of plane and elevation obstacles, to simulate the real scene;
S8, after the 3D structure is constructed, previewing the result, finding and adjusting any spatial-information errors in the construction, and rendering the constructed project file;
S9, marking the way-points of the walking route, planning the simulated walking route to achieve the simulated-walking effect, previewing and modifying the walking effect, and finally uploading the project file to the server.
Further, in step S7, the constructed 3D space structure keeps its edges square (orthogonal).
Further, in step S7, in the three-dimensional construction software, the lower plane represents the ground, the upper plane represents the ceiling or the sky, the inner plane represents the surrounding walls, and with the ground, the ceiling and the surrounding walls, the entire three-dimensional space can be built, and the 3D information is complemented.
Further, step S1 comprises the following steps:
S11, setting the camera's shooting parameters, debugging them, and shooting with the optimal parameters;
S12, planning the shooting route, designing it according to the needs of the later simulation.
Further, step S2 comprises the following steps:
S21, after all shooting is finished, generating a route map from the actual route taken and the designed shooting route.
Further, step S4 comprises the following steps:
S41, after the panoramic image is output, performing color correction on it;
S42, after color correction is finished, performing horizontal correction;
S43, after horizontal correction is finished, performing visual-center correction;
S44, after the color and orientation of the panoramic image meet the requirements, performing supplementary processing on it;
S45, uploading the processed panoramic image to the server according to the file-filing specification.
The principle of the method of the invention is as follows:
On the basis of the traditional 2D panoramic picture, 3D information, i.e. depth information, is supplemented by adding the depth information manually, so that when the 2D picture is converted into a virtual 3D model, a virtual 3D model with depth information is formed, enabling seamless simulated walking inside it.
The specific content comprises the following three processes:
First, the shooting process
1. Set the camera's shooting parameters, debug them, and shoot with the optimal parameters.
2. Plan the shooting route, designing it according to the needs of the later simulation.
3. After all the images are shot, generate a route map from the actual route taken and the designed route.
Second, the panorama output process
1. Upload the shot raw data to a server.
2. Output a panorama from the raw data using the panorama manager.
3. After the panorama is output, apply color correction to it.
4. After color correction is finished, perform horizontal correction.
5. Correct the visual center of the panorama.
6. Once the panorama meets the requirements, apply supplementary processing to it.
7. Upload the processed panorama to the server according to the file-filing specification.
Third, the 3D construction process
1. Reduce the picture size of the processed panorama.
2. Import the resized panoramas into the newly created project file and order them according to the route map.
3. Set the camera height and synchronize it to each imported panorama.
4. Associate the imported panoramas with one another, connecting them into an integral scene and achieving a preliminary simulation of the real scene.
5. After the imported panoramas are associated, construct the 3D space structure, complementing the 3D information according to the relation of plane and elevation obstacles, to simulate the real scene.
6. The constructed structure keeps its edges square.
7. After the 3D structure is built, preview the result, find any spatial-information errors in the construction, and adjust them.
8. Render the constructed project file.
9. Mark the way-points of the walking route and plan the simulated walking route to achieve the simulated-walking effect.
10. After all marks are made, preview the walking effect, make modifications, and upload the project file.
Compared with the prior art, the invention has the following beneficial effects. The method simulates a real scene by constructing a virtual 3D scene with complemented spatial information, i.e. depth information, and achieves a continuous simulated-walking effect by setting a walking route in that scene. Without using a depth camera, 3D information is complemented onto the 2D panoramic images, so a common camera can simulate a real scene with the effect a depth camera would give, including the continuous walking simulation.
Drawings
Fig. 1 is a schematic diagram of the steps of the method for complementing live-action roaming 3D information through a common camera according to the present invention.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings; obviously, the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of these embodiments fall within the scope of the present invention. Thus, the following detailed description of the embodiments, as presented in the figures, is not intended to limit the claimed scope of the invention but merely represents selected embodiments.
The present invention will be further described with reference to the following examples, which illustrate some, but not all, of its embodiments.
Referring to fig. 1, the structure of an embodiment of the present invention is shown for illustrative purposes only and does not limit the invention.
Example one
As shown in fig. 1, a method for complementing live-action roaming 3D information through a common camera comprises the following steps:
S1, setting the camera's shooting parameters and planning a shooting route;
S2, generating a route map after shooting is finished;
S3, uploading the shot raw data to a server, and outputting a panorama using a panorama manager;
S4, after the panoramic image is output, performing color processing and orientation correction on it, and uploading it to a server;
S5, reducing the size of the processed panoramic image on the server, importing it into a newly created project file, and ordering according to the route map;
S6, setting the camera height, synchronizing it to each imported panoramic image, associating the imported panoramic images, and connecting them into an integral scene to achieve a preliminary simulation of the real scene;
S7, after the imported panoramic images are associated, constructing a 3D space structure and complementing the 3D information according to the relation of plane and elevation obstacles, to simulate the real scene;
S8, after the 3D structure is constructed, previewing the result, finding and adjusting any spatial-information errors in the construction, and rendering the constructed project file;
S9, marking the way-points of the walking route, planning the simulated walking route to achieve the simulated-walking effect, previewing and modifying the walking effect, and finally uploading the project file to the server.
Further, step S1 comprises the following steps:
S11, setting the camera's shooting parameters, debugging them, and shooting with the optimal parameters.
S12, planning the shooting route, designing it according to the needs of the later simulation.
Preferably, step S2 comprises the following steps:
S21, after all shooting is finished, generating a route map from the actual route taken and the designed shooting route.
In this embodiment, step S4 comprises the following steps:
S41, after the panoramic image is output, performing color correction on it.
S42, after color correction is finished, performing horizontal correction.
S43, after horizontal correction is finished, performing visual-center correction.
S44, after the color and orientation of the panoramic image meet the requirements, performing supplementary processing on it.
S45, uploading the processed panoramic image to the server according to the file-filing specification.
Example two
In step S7, the constructed 3D space structure keeps its edges square (orthogonal).
The principle of the method of the invention is as follows:
On the basis of the traditional 2D panoramic picture, 3D information, i.e. depth information, is supplemented by adding the depth information manually, so that when the 2D picture is converted into a virtual 3D model, a virtual 3D model with depth information is formed, enabling seamless simulated walking inside it.
The specific content comprises the following three processes:
First, the shooting process
1. Set the camera's shooting parameters, debug them, and shoot with the optimal parameters.
The shooting parameters to set are as follows.
Shooting mode: automatic / manual.
Shutter speed: 1/4000 s to 1 s (auto mode); 1/4000 s to 8 s (manual mode).
ISO: 100-1600.
HDR mode: 3 frames (-2 to +2 EV) / 6 frames (-3 to +2 EV) / 6 frames (-5 to +2 EV).
Anti-flicker: off / 50 Hz / 60 Hz.
Shutter mode: handheld / tripod.
Exposure compensation: -3 EV to +3 EV (in 1/3 EV steps).
Self-timer: off / 5 s / 10 s / 20 s.
Closest shooting distance: 0.5 m.
The parameters for later debugging are:
White balance: automatic / daylight / cloudy / incandescent.
Brightness: 0/+1/+2/+3/+4/+5.
Contrast: 0/+1/+2/+3/+4/+5.
Gamma: A/B.
Horizontal correction: off / level correction using the built-in gyroscope.
The shooting mode used is the manual mode, in which the shutter speed and ISO (sensitivity) are adjusted. ISO is set to 100, the value that gives the best image quality.
Shutter speed adjustment process:
(1) In a dark environment with insufficient light, start debugging with a shutter speed of 1/5 second, stitch the shots, and judge the lighting of the resulting panorama: if the light is insufficient, slow the shutter; if it is excessive, speed the shutter up. Set the shutter speed to the value that gives the best stitched result. (The standard for picture quality: bright parts not over-bright, dark parts retaining detail.)
(2) With sufficient light, take 1/200 second as the starting point and debug the same way as in the dark environment.
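The two-baseline debugging rule above can be summarized in a few lines of Python. This is an illustrative sketch, not the patent's procedure verbatim: the halving/doubling step and the `preview` labels are assumptions standing in for the photographer's judgment of the stitched panorama.

```python
def next_shutter(current_s, preview):
    """One iteration of the shutter debugging loop: shorten the
    exposure when the stitched preview is too bright, lengthen it
    when too dark, keep it when highlights hold and shadows show
    detail. `preview` is 'too_bright', 'too_dark', or 'ok'."""
    if preview == 'too_bright':
        return current_s / 2.0   # faster shutter, less light
    if preview == 'too_dark':
        return current_s * 2.0   # slower shutter, more light
    return current_s             # picture-quality standard met

DIM_START_S = 1 / 5     # baseline shutter speed in dark environments
LIT_START_S = 1 / 200   # baseline shutter speed in sufficient light
```

The loop terminates when the stitched panorama's highlights are not blown out and its shadows retain detail, per the standard stated above.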
2. Plan the shooting route, designing it according to the needs of the later simulation.
3. After all the images are shot, generate a route map from the actual route taken and the designed route.
Second, the panorama output process
1. Upload the shot raw data to a server.
2. Output a panorama from the raw data using the panorama manager.
3. After the panorama is output, apply color correction to it.
Color adjustment is done in image-processing software. No large pure-dark areas should appear in the picture: in dark parts the edges and structure of objects should remain faintly visible. The color saturation of the lighting should be sufficient, with no large white patches, i.e. over-exposed areas.
4. After color correction is finished, perform horizontal correction.
If a horizontal structure in the picture appears tilted, it must be leveled in the horizontal-correction software.
5. Correct the visual center of the panorama.
In the image-processing software, the road in a picture taken on a straight path must be corrected to the horizontal center of the picture.
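Because an equirectangular panorama wraps around 360 degrees, moving a chosen feature (such as the road on a straight path) to the picture's horizontal center is a circular shift of the image columns. A minimal sketch, assuming the panorama is held as a NumPy array and the target column has already been identified; the function name is illustrative:

```python
import numpy as np

def recenter_panorama(pano, target_col):
    """Circularly shift an equirectangular panorama so that column
    `target_col` lands at the horizontal center of the picture.
    Rows (pitch) are untouched; only the yaw origin is rotated."""
    width = pano.shape[1]
    shift = width // 2 - target_col
    return np.roll(pano, shift, axis=1)
```

A plain crop would discard pixels, but the circular roll loses nothing: content shifted off one edge reappears at the other, matching the panorama's 360-degree wrap.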
6. Once the panorama meets the requirements, apply supplementary processing to it.
The panorama meets the requirements when it has no obvious black or white patches, the structure and outline of objects remain faintly visible in dark areas, horizontal structures are not tilted, vertical structures are not bent, no local structure is twisted, and the overall color saturation is sufficient. The purpose of ground supplementation is to repair the spot where the tripod stood during shooting: the tripod is erased in the image-processing software so that the ground appears complete.
7. Upload the processed panorama to the server according to the file-filing specification.
The panorama file is named using the shooting time, the city, and the specific shooting location.
Third, the 3D construction process
1. Reduce the picture size of the processed panorama.
The shot pictures exceed 16000 × 8000 pixels. Pictures this large make computer processing slow, both when editing the pictures and when building the three-dimensional scene, so to improve processing efficiency while preserving picture quality, the original is reduced to 10000 × 5000, balancing file size against picture quality as far as possible.
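The downscaling step can be done with any image tool; the patent does not name the software, so the use of Pillow below is my own illustrative choice, as is the function name:

```python
from PIL import Image

def downscale_panorama(src_path, dst_path, target=(10000, 5000)):
    """Reduce an oversized stitched panorama (e.g. 16000 x 8000)
    to the working size used for 3D construction, trading pixel
    count for processing speed. Lanczos resampling preserves fine
    detail reasonably well at the smaller size."""
    img = Image.open(src_path)
    img = img.resize(target, Image.LANCZOS)
    img.save(dst_path)
```

Keeping the 2:1 aspect ratio (width twice the height) matters here: an equirectangular panorama that loses it will show distortion when mapped back onto the sphere.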
2. Import the resized panoramas into the newly created project file and order them according to the route map.
3. Set the camera height and synchronize it to each imported panorama.
The camera height corresponds to the eye level of a standing person, and is used to reproduce the view a real standing person would see. It is controlled with the tripod height, kept in the range of 130 cm to 150 cm, to match the viewing height of a person as closely as possible and keep the visual effect uniform.
4. Associate the imported panoramas with one another, connecting them into an integral scene and achieving a preliminary simulation of the real scene.
5. After the imported panoramas are associated, construct the 3D space structure, complementing the 3D information according to the relation of plane and elevation obstacles, to simulate the real scene. In the three-dimensional construction software, the lower plane represents the ground, the upper plane represents the ceiling or the sky, and the inner planes represent the surrounding walls. With the ground, the ceiling and the surrounding walls, the whole three-dimensional space can be built and the 3D information complemented.
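The ground/ceiling/walls description amounts to enclosing each shooting position in a box. A minimal sketch of such a space as vertex lists; the function name, the y-up coordinate convention, and the dictionary layout are illustrative assumptions, not the patent's data format:

```python
def box_space(width, depth, height):
    """Assemble the minimal 3D space described above: a floor plane,
    a ceiling plane, and four surrounding wall planes, each given as
    a list of (x, y, z) vertices with y pointing up. Dimensions are
    in the same unit as the camera height (e.g. metres)."""
    w, d, h = width / 2.0, depth / 2.0, height
    floor   = [(-w, 0, -d), (w, 0, -d), (w, 0, d), (-w, 0, d)]
    ceiling = [(-w, h, -d), (w, h, -d), (w, h, d), (-w, h, d)]
    walls = [
        [(-w, 0, -d), (w, 0, -d), (w, h, -d), (-w, h, -d)],  # back
        [(w, 0, -d),  (w, 0, d),  (w, h, d),  (w, h, -d)],   # right
        [(w, 0, d),   (-w, 0, d), (-w, h, d), (w, h, d)],    # front
        [(-w, 0, d),  (-w, 0, -d), (-w, h, -d), (-w, h, d)], # left
    ]
    return {'floor': floor, 'ceiling': ceiling, 'walls': walls}
```

Projecting the panorama onto these six planes instead of a sphere is what supplies the missing depth: a pixel on the floor plane is nearer or farther depending on where it lands, rather than sitting at a fixed sphere radius.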
6. The constructed structure keeps its edges square; squareness improves the simulation effect.
7. After the 3D structure is built, preview the result, find any spatial-information errors in the construction, and adjust them.
Spatial-information errors mainly show up in walls that have thickness: if the operation is wrong, the two faces that form a wall easily end up intersecting instead of parallel.
8. Render the constructed project file.
9. Mark the way-points of the walking route and plan the simulated walking route to achieve the simulated-walking effect.
10. After all marks are made, preview the walking effect, make modifications, and upload the project file.
The depth information can be obtained by blurring the picture: a blurred copy of the picture is produced, the pixels in it are evaluated by a fuzzy comprehensive evaluation method to obtain a scale factor for each pixel, and each pixel's scale factor is converted into a relative depth value according to the picture's focusing information, yielding the depth information.
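The defocus cue described above can be roughed out as follows. This sketch substitutes a simple re-blur comparison for the patent's "fuzzy comprehensive evaluation method", which the text does not specify in detail, so it should be read as an approximation of the idea rather than the actual algorithm: pixels that change little when re-blurred were already blurred in the original shot and are assigned greater relative depth.

```python
import numpy as np

def relative_depth_from_blur(gray, sigma=2.0):
    """Estimate per-pixel relative depth from a 2-D float image by
    re-blurring it and measuring how much each pixel changes: sharp
    (in-focus) pixels change the most, already-blurred pixels the
    least. Returns values in [0, 1]; larger means deeper. The box
    blur and the normalization are illustrative choices."""
    k = max(1, int(sigma * 3))
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    # separable box blur: rows first, then columns
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='same'), 1, gray)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode='same'), 0, blurred)
    change = np.abs(gray - blurred)          # per-pixel "scale factor"
    change = change / (change.max() + 1e-9)  # normalize to [0, 1]
    return 1.0 - change                      # small change -> greater depth
```

In practice the result would still need to be scaled by the picture's focusing information (focal distance, aperture) to become the relative depth values the text mentions; textureless flat regions also read as "deep" under this cue, which is one reason the patent falls back on manual depth completion.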
The above-described embodiments illustrate rather than limit the invention; variations of the example values or substitutions of equivalent elements fall within the scope of the invention.
From the above detailed description, it will be apparent to those skilled in the art that the foregoing objects and advantages of the present invention are achieved and are in accordance with the requirements of the patent laws.

Claims (3)

1. A method for complementing live-action roaming 3D information through a common camera is characterized by comprising the following steps:
S1, setting parameters of camera shooting and planning a shooting route;
S2, generating a route map after shooting is finished;
S3, uploading the shot original data to a server, and outputting a panorama by using a panorama manager;
S4, after the panoramic image is output, performing corresponding color processing and azimuth correction on the panoramic image, and uploading the panoramic image to a server;
S5, reducing the size of the processed panoramic image on the server, importing the panoramic image into a newly-built project file, and sequencing according to a route map;
S6, setting the height of a camera, synchronizing the camera height to each imported panoramic image, associating each imported panoramic image, connecting each panoramic image to form an integral scene, and realizing preliminary simulation of a real scene;
S7, after the imported panoramic image is correlated, starting to construct a 3D space structure, complementing 3D information according to the relation of plane elevation obstacles, and realizing simulation of a real scene;
S8, previewing the construction effect after the 3D structure is constructed, finding out the condition of spatial information error in construction, adjusting the condition, and rendering the constructed engineering file;
S9, marking the point positions of the walking route, planning the simulated walking route, realizing the simulated walking effect, previewing and modifying the walking effect, and finally uploading the engineering file to the server.
2. The method of claim 1, wherein the 3D space structure constructed in step S7 keeps the structural squareness.
3. The method of claim 1, wherein in step S7, the lower plane represents the ground, the upper plane represents the ceiling or sky, the inner plane represents the surrounding walls, and the entire three-dimensional space can be constructed with the ground, the ceiling and the surrounding walls, and the 3D information is supplemented.
CN202010571677.6A 2020-06-22 2020-06-22 Method for complementing live-action roaming 3D information through common camera Pending CN111698424A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010571677.6A CN111698424A (en) 2020-06-22 2020-06-22 Method for complementing live-action roaming 3D information through common camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010571677.6A CN111698424A (en) 2020-06-22 2020-06-22 Method for complementing live-action roaming 3D information through common camera

Publications (1)

Publication Number Publication Date
CN111698424A true CN111698424A (en) 2020-09-22

Family

ID=72482679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010571677.6A Pending CN111698424A (en) 2020-06-22 2020-06-22 Method for complementing live-action roaming 3D information through common camera

Country Status (1)

Country Link
CN (1) CN111698424A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915374A (en) * 2015-04-27 2015-09-16 厦门理工学院 Tourist attraction 360-degree panoramic construction system and method
CN107483840A (en) * 2017-09-29 2017-12-15 北京紫优能源科技有限公司 Industrial monitoring system figure Web methods of exhibiting and device based on panorama sketch
CN108257219A (en) * 2018-01-31 2018-07-06 广东三维家信息科技有限公司 A kind of method for realizing the roaming of panorama multiple spot
CN108961395A (en) * 2018-07-03 2018-12-07 上海亦我信息技术有限公司 A method of three dimensional spatial scene is rebuild based on taking pictures
CN110505463A (en) * 2019-08-23 2019-11-26 上海亦我信息技术有限公司 Based on the real-time automatic 3D modeling method taken pictures
CN110769240A (en) * 2019-08-23 2020-02-07 上海亦我信息技术有限公司 Photographing-based 3D modeling system and method, and automatic 3D modeling device and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
姚玉杰 (Yao Yujie): "3D影视概论" [Introduction to 3D Film and Television], 31 August 2016 *
张琪 (Zhang Qi): "三维虚拟交换设计" [Three-Dimensional Virtual Exchange Design], 31 January 2019 *

Similar Documents

Publication Publication Date Title
US7131733B2 (en) Method for creating brightness filter and virtual space creation system
US6983082B2 (en) Reality-based light environment for digital imaging in motion pictures
JP5224721B2 (en) Video projection system
US10950039B2 (en) Image processing apparatus
CN112437276A (en) WebGL-based three-dimensional video fusion method and system
CN103533318A (en) Building outer surface projection method
CN105051603B (en) For the more optical projection systems for the visual element for extending master image
US20120212477A1 (en) Fast Haze Removal and Three Dimensional Depth Calculation
CN112118435B (en) Multi-projection fusion method and system for special-shaped metal screen
CN104427230A (en) Reality enhancement method and reality enhancement system
CN111047709A (en) Binocular vision naked eye 3D image generation method
CN105578172B (en) Bore hole 3D image display methods based on Unity3D engines
EP4261784A1 (en) Image processing method and apparatus based on artificial intelligence, and electronic device, computer-readable storage medium and computer program product
KR101725024B1 (en) System for real time making of 360 degree VR video base on lookup table and Method for using the same
EP3057316B1 (en) Generation of three-dimensional imagery to supplement existing content
CN108765582B (en) Panoramic picture display method and device
Nakamae et al. Rendering of landscapes for environmental assessment
CN103903274B (en) The method that the distortion curved surface projection correction of a kind of small radius and large curvature is merged
CN110035275B (en) Urban panoramic dynamic display system and method based on large-screen fusion projection
CN111698424A (en) Method for complementing live-action roaming 3D information through common camera
US9807302B1 (en) Offset rolling shutter camera model, and applications thereof
CN113516761B (en) Method and device for manufacturing naked eye 3D content with optical illusion
CN112866507B (en) Intelligent panoramic video synthesis method and system, electronic device and medium
CN111901579A (en) Large-scene projection display splicing method
CN109474811A (en) A kind of dome ball curtain projects automatic regulating system and method more

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200922)