LU501944B1 - Method for Making Three-dimensional Reconstruction and PBR Maps Based on Close-range Photogrammetry - Google Patents

Method for Making Three-dimensional Reconstruction and PBR Maps Based on Close-range Photogrammetry

Info

Publication number
LU501944B1
Authority
LU
Luxembourg
Prior art keywords
model
image
reconstruction
pbr
close
Prior art date
Application number
LU501944A
Other languages
German (de)
Inventor
Wei Wang
Original Assignee
Univ Hunan Normal
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Hunan Normal filed Critical Univ Hunan Normal
Application granted granted Critical
Publication of LU501944B1 publication Critical patent/LU501944B1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10141 Special mode during image acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20208 High dynamic range [HDR] image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/56 Particle system, point based geometry or rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a method comprising: collecting image information by close-range photogrammetry; aligning the images and obtaining a three-dimensional point cloud of the images through point cloud computing; cleaning up the redundant 3D point cloud of the images and reconstructing the mesh to obtain a 3D data reconstruction model; texturing the 3D data reconstruction model to obtain a basic color map; carrying out PBR map processing on the basis of the 3D data reconstruction model and the basic color map to obtain the model after optimization and structural re-topology; and physically based rendering the optimized, re-topologized model several times to obtain the different maps. The invention adopts close-range photogrammetry for 3D data reconstruction and, through standardized process management adapted to the surface and material characteristics of the specific scanned object, reduces the effort and cost of 3D data collection, generation and storage.

Description

DESCRIPTION

Method for Making Three-dimensional Reconstruction and PBR Maps Based on Close-range Photogrammetry
TECHNICAL FIELD The invention belongs to the fields of intangible cultural heritage digitization, image acquisition technology and three-dimensional digital reconstruction, and in particular relates to a method for making three-dimensional reconstruction and PBR maps based on close-range photogrammetry.
BACKGROUND Intangible cultural heritage is an important part of excellent traditional Chinese culture, and the protection and inheritance of intangible heritage is of great significance for the continuation of historical context. Three-dimensional scanning technology and close-range photogrammetry can record intangible handicrafts with millimeter precision and completely capture the surface texture and material characteristics of the handicrafts. Combined with PBR (Physically Based Rendering), the 3D model can be displayed to the audience. Four kinds of 3D scanning technology are in use at home and abroad: structured light 3D scanning, 3D laser scanning, CT scanning and photogrammetry. Structured light 3D scanning is expensive and struggles with oversized objects. 3D laser scanning is expensive and has difficulty capturing the surface color information of objects, so it cannot restore intangible-heritage crafts well. CT scanning is very expensive and not suitable for capturing and scanning small and tiny objects. Photogrammetry is relatively inexpensive, can capture everything from microscopic models to urban landscapes, and is suitable for scanning intangible cultural heritage crafts. However, the scanned images still require a series of standardized 3D reconstruction steps, and it remains difficult to transform them into 3D digital display models with high precision, high fidelity and suitability for network communication.
Compared with an intuitive three-dimensional model, traditional two-dimensional media such as pictures, text and video convey intangible cultural heritage with a relatively poor sense of immersion, making it difficult for the audience to resonate and empathize. 3D digital technology based on close-range photogrammetry can reconstruct intangible cultural heritage crafts and reduces the cost of 3D reconstruction. However, the photographic techniques and conditions used when acquiring specific image information and generating point cloud data still need improvement. In 3D model reconstruction, it is necessary to improve the accuracy of the model, ensure high-fidelity, high-recovery model reconstruction, and apply it to network propagation.
SUMMARY The purpose of this invention is to propose a low-cost, high-fidelity method for 3D reconstruction and PBR map making based on close-range photogrammetry. By improving the data acquisition method, high-precision, high-dynamic-range (HDR) images can be obtained and used for 3D digital reconstruction of a large number of intangible cultural heritage objects. High-detail models and high-detail basic color maps are obtained through software point cloud computing, and various precise maps are then obtained through texturing and physically based rendering, so as to finally realize low-cost, high-quality reconstruction of 3D intangible cultural heritage and its network communication.
In order to achieve the above purpose, the present invention provides a method of 3D reconstruction and PBR mapping based on close-range photogrammetry, which includes: collecting image information, wherein the image information is collected by a close-range photogrammetry method; aligning the images, and obtaining a three-dimensional point cloud of the images through point cloud computing; cleaning up the redundant 3D point cloud of the images, and performing surface grid generation to obtain a 3D data reconstruction model; carrying out image texturing on the three-dimensional data reconstruction model to obtain a basic color map; carrying out PBR map processing based on the three-dimensional data reconstruction model and the basic color map to obtain the model after model optimization and model structure re-topology; and physically based rendering the model after optimization and re-topology several times to obtain the different maps.
Optionally, the close-range photogrammetry method comprises the following steps: when shooting an object, covering the light source with a layer of soft light cloth and adding a polarizer to the lens; before using the camera, setting underexposure parameters and extending the shutter speed; photographing the object to obtain an image in RAW format; and correcting the brightness and reflective areas of the image to obtain a flat picture.
Optionally, the process of obtaining the 3D point cloud of the image includes: firstly, checking whether the image is qualified; secondly, deleting the unqualified image; carrying out primary toning on the qualified image; constructing a 3D reconstruction model; inputting the toned image into the 3D reconstruction model; obtaining the aligned image; carrying out point cloud computing; and finally obtaining the 3D point cloud of the image.
Optionally, the redundant 3D point cloud of the image is cleaned up.
Optionally, a high-detail model and a medium-high precision model can be obtained after the surface grid generation.
Optionally, the process of obtaining the 3D data reconstruction model includes: firstly, obtaining the vertex information of the 3D point cloud and calculating the surface grid of the 3D model; after solving, a high-detail model is obtained, and the high-detail model is subjected to a surface simplification operation and a shade-smooth operation to obtain a medium-high precision model.
Optionally, the texturing process includes: UV splitting the image to obtain the UV coordinates of the image; projecting the texture information onto the three-dimensional model in the form of a map to obtain the basic color map; and solving the basic color map to obtain the map file.
Optionally, the process of model optimization and model structure re-topology includes the following steps: firstly, importing the medium-high precision model into the PBR workflow software, adjusting the model and performing shade-smooth treatment to obtain the optimized model; then subjecting the optimized model to manual topology processing: in manual topology, vertex snapping is enabled on the model, and a polygon brush is used to draw the low poly model so that the model structure is complete.
The invention provides a low-cost, high-fidelity method for 3D reconstruction and PBR map making based on close-range photogrammetry, which has the following beneficial effects compared with the prior art: based on low-cost close-range photogrammetry, HDR images with high dynamic range are obtained by improving the data collection method, which can be used for 3D digital reconstruction of a large number of intangible cultural heritage objects; a high-detail model and a high-detail basic color map are obtained through software point cloud computing; texturing and physically based rendering are then carried out to obtain a variety of precise, high-fidelity maps; and finally, low-cost, high-quality 3D intangible cultural heritage reconstruction and network communication are realized.
BRIEF DESCRIPTION OF THE FIGURES The drawings, which form a part of this application, are used to provide a further understanding of this application. The illustrative embodiments and descriptions of this application are used to explain this application, and do not constitute undue restrictions on this application. In the drawings: Fig. 1 is a schematic flow diagram of the method of 3D reconstruction and PBR mapping in the first embodiment of the present invention; Fig. 2 is a schematic structural diagram of image information collection in the first embodiment of the present invention; Fig. 3 is a structural diagram of image alignment and point cloud computing according to the first embodiment of the present invention; Fig. 4 is a structural diagram of 3D data reconstruction according to the first embodiment of the present invention; Fig. 5 is a schematic structural diagram of texturing in the first embodiment of the present invention; Fig. 6 is a structural schematic diagram of optimization processing in the first embodiment of the present invention; Fig. 7 is a structural diagram of physically based rendering in the first embodiment of the present invention.
DESCRIPTION OF THE INVENTION It should be noted that the embodiments in this application and the features in the embodiments can be combined with each other without conflict. The application will be described in detail with reference to the drawings and examples.
It should be noted that the steps shown in the flowchart of the drawings can be executed in a computer system such as a set of computer-executable instructions, and although the logical sequence is shown in the flowchart, in some cases, the steps shown or described can be executed in a different order than here.
Example 1

As shown in Fig. 1, this embodiment provides a method of 3D reconstruction and PBR mapping based on close-range photogrammetry, which includes: collecting image information, wherein the image information is collected by a close-range photogrammetry method; aligning the images, and obtaining a three-dimensional point cloud of the images through point cloud computing; cleaning up the redundant 3D point cloud of the images, and performing surface grid generation to obtain a 3D data reconstruction model; carrying out image texturing on the three-dimensional data reconstruction model to obtain a basic color map; carrying out PBR map processing based on the three-dimensional data reconstruction model and the basic color map to obtain the model after model optimization and model structure re-topology; and physically based rendering the model after optimization and re-topology to obtain the different maps.
As shown in Figure 2, image information collection includes close-range photogrammetry image information collection in artificial light environment and close-range photogrammetry image information collection in natural light environment.
Studio conditions: The portable studio is 80 cm × 80 cm × 80 cm in size; the outer shell is made of canvas and the inner lining is a white curtain, which reflects light well and ensures uniform lighting conditions around the scanned object. In order to ensure that image acquisition can cover as many angles as possible in the vertical direction, the height of the subject should not exceed 50 cm and the width should not exceed 60 cm. The scanned object is rotated by an electric turntable at its base. White soft light cloth can be chosen as the inner lining of the studio: it softens the stiff feeling of the LED light source and lets light bounce many times inside the studio, producing more uniform lighting conditions. When collecting images of bright objects, a black velvet light-absorbing cloth lining can be chosen instead; it absorbs as much stray light as possible and prevents excessive environmental reflections from affecting the accuracy of the collected image data.
Lighting conditions of the studio: The lighting is mainly provided by six LED strip lights, with the conventional setup placing two strips each on the upper side and the left and right sides. This arrangement surrounds the subject with uniform light, but its disadvantage is that the front sometimes lacks illumination. The solution is to mount an annular LED fill light on the camera for frontal fill light.
Camera conditions and color correction: In the process of 3D image data collection, camera settings must be highly unified to achieve the best reconstruction effect in later 3D reconstruction. Shutter, aperture and ISO sensitivity, the three elements of camera exposure, need to be kept consistent during data acquisition. In addition, in order to maintain the color consistency of the image data, a uniform white balance adjustment is usually adopted. If there are more precise color requirements, standard color cards or a SpyderCUBE tool can be placed in the scene at the start of shooting, and image processing software can be used for uniform color correction when the data is collated later.
Aperture adjustment: The larger the aperture, the shallower the depth of field, i.e. the areas in front of and behind the focal plane blur. In portrait and still life photography, a shallow depth of field is an effective means to highlight the subject and enhance the atmosphere, but when collecting 3D image data it should be avoided. A shallow depth of field can blur the edges of objects with greater depth, especially in this data acquisition environment, where the camera is usually within 50 cm of the subject; the shorter imaging distance makes shallow depth of field even more likely. In the later 3D reconstruction, the software compares the edge contour information of each picture to align the images and generate 3D point clouds. If the edges are blurred by shallow depth of field, the computation will probably fail, and even if it barely succeeds, the basic color map solved from the images will be blurred.
In order to eliminate shallow depth of field as much as possible, the aperture F-number must be kept high. Repeated experiments show that for an APS-C camera at an equivalent full-frame 50 mm focal length, with an aperture of F12 or above, the edges of an object within 50 cm can basically stay at a clear, usable level. For a full-frame camera, F12 needs to be multiplied by 1.5, giving an aperture of F18. At this aperture the whole object can be covered by the focal plane, and the edges are clear and sharp without excessive blurring, which is convenient for later computation and reconstruction. In addition, the optical characteristics of a lens dictate that the larger the aperture, the lower the edge sharpness of the image; stopping the aperture down appropriately can significantly improve both center and edge sharpness.
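For illustration only (this calculation is not part of the patent), the standard thin-lens depth-of-field formulas reproduce the reasoning above. The sketch below is in Python; the 33 mm focal length (roughly the 50 mm full-frame equivalent on APS-C), the 0.019 mm circle of confusion and the 500 mm subject distance are assumed example values.

```python
# Depth-of-field sketch using standard thin-lens approximations (illustrative only).
def depth_of_field(focal_mm: float, f_number: float, subject_mm: float,
                   coc_mm: float) -> tuple[float, float]:
    """Return (near_limit_mm, far_limit_mm) of acceptably sharp distance."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if hyperfocal <= subject_mm:                     # subject beyond hyperfocal distance
        return near, float("inf")
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# Assumed APS-C setup: 33 mm lens, F12, subject at 500 mm, CoC 0.019 mm
near, far = depth_of_field(33, 12, 500, 0.019)
print(f"sharp from {near:.0f} mm to {far:.0f} mm")   # ~455-554 mm: about 10 cm of sharp depth
```

At F12 the sharp zone comfortably spans a subject within 50 cm, which matches the recommendation above.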
ISO sensitivity adjustment: For ISO sensitivity, directly use the lowest sensitivity supported by the camera, here ISO 100. The higher the ISO sensitivity, the higher the exposure level of the image, but also the higher the noise level. To ensure the purity of the picture, choosing the lowest sensitivity is the safest approach. A low noise level significantly improves the solving quality of the basic color map in the later 3D reconstruction and also reduces spurious noise on the 3D mesh.
Shutter adjustment: With the aperture and ISO sensitivity fixed as above, the exposure at this point is seriously insufficient. For a normal exposure, the shutter speed is adjusted until the camera's exposure histogram shows an even distribution of highlight and shadow detail. The resulting shutter speed is very slow, so pressing the shutter button shakes the camera and seriously affects the stability of the picture during an exposure of up to one second. To solve this problem, a delayed-shutter strategy can be adopted: after the shutter is pressed, the camera waits two seconds before exposing, by which time it is stable enough.
White balance adjustment: The color temperature of the six LED strips is the 6500 K standard color temperature, at which the rendering of white is most accurate. Likewise, the color temperature of the LED ring fill light in front of the subject is 6500 K, so the picture will be neither too warm nor too cold under this lighting. In addition to uniformly controlling the color temperature of the lights, the built-in white balance setting of each camera is also set to 6500 K, which minimizes chromatic aberration when the cameras collect images. Furthermore, since cameras of different brands and models differ slightly in color space and imaging style, the colors of the output photos may differ. To solve this problem, a Datacolor SpyderCUBE stereo gray card can be used as an exposure aid. The front of the SpyderCUBE is a standard 18% gray card for setting or later adjusting the white balance. The back of the SpyderCUBE carries the three basic gray-scale tones of black, white and gray at intervals, providing a more accurate reference for exposure adjustment in post-processing. A black hole in the center of the lower black area can be treated as an absolutely black region during adjustment, and the chrome-plated ball on the top reflects the light sources so that their positions can be judged easily.
Color correction: When the first photo of each work is collected, after adjusting the camera's exposure parameters, place the SpyderCUBE into the frame and shoot it together with the work, then remove it and collect the remaining pictures normally. After data collection is completed, all the photos of the whole work are imported into Lightroom. In post-processing, first observe the highlight on the top chrome-plated ball to roughly judge the position of the light source. Then select the gray card area facing the light source, click it with the White Balance Selector in Lightroom, and an automatic, more accurate white balance setting is obtained. Applying this white balance setting to the whole photo sequence completes the first-level color calibration.
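As an illustrative sketch only (the patent performs this step with Lightroom's White Balance Selector), gray-card white balance can also be scripted. The example below uses Python with numpy and imageio, both assumptions; file names and patch coordinates are hypothetical.

```python
# Gray-card white balance sketch: derive per-channel gains from a gray patch
# in a reference frame and apply them to the whole sequence.
import numpy as np
import imageio.v3 as iio

def gains_from_gray_patch(img, y0, y1, x0, x1):
    """Per-channel gains that neutralize the selected gray patch."""
    patch = img[y0:y1, x0:x1].reshape(-1, 3).astype(np.float64)
    means = patch.mean(axis=0)              # average R, G, B of the patch
    return means.mean() / means             # scale each channel toward neutral gray

def apply_gains(img, gains):
    out = img.astype(np.float64) * gains
    return np.clip(out, 0, 255).astype(np.uint8)    # assumes 8-bit input

ref = iio.imread("work01_frame_with_cube.jpg")       # frame containing the SpyderCUBE
g = gains_from_gray_patch(ref, 100, 150, 200, 250)   # pixel box over the gray face
for name in ["work01_0002.jpg", "work01_0003.jpg"]:  # rest of the sequence
    iio.imwrite(name.replace(".jpg", "_wb.jpg"),
                apply_gains(iio.imread(name), g))
```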
Shooting angle adjustment: The accuracy of the 3D reconstruction is directly proportional to the number of images and the number of acquisition angles, so shooting must switch between different angles. Angles are switched in two directions, transverse and longitudinal. Generally, the transverse angle uses 10 degrees as a step, dividing 360 degrees into 36 equal parts, and a photo is taken at each step. The vertical angle is generally changed by adjusting the height of the tripod and the angle of the camera head. For works with an integral shape, gently undulating surface structure and few occluded parts, three vertical angles are enough; for complex shapes and uneven surface structures, two more vertical angles should be added, reaching five or more, so as to capture more details and occluded parts and better reconstruct all the modeling details of the work later (a small enumeration sketch follows below).
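A small sketch of the capture plan just described: 36 horizontal stops of 10 degrees at three or five vertical angles. The elevation values are illustrative assumptions, not values from the patent.

```python
# Enumerate (azimuth, elevation) capture positions for the turntable workflow.
from itertools import product

def capture_angles(vertical_angles=(15, 45, 75)):     # assumed elevations in degrees
    azimuths = range(0, 360, 10)                      # 36 equal 10-degree steps
    return list(product(azimuths, vertical_angles))

simple = capture_angles()                             # 3 vertical angles: 108 shots
complex_ = capture_angles((0, 20, 40, 60, 80))        # 5 vertical angles: 180 shots
print(len(simple), len(complex_))
```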
Improvement method during shooting: If the object is polished and varnished, its surface is smooth and easily produces highlights and reflections. Under the existing collection conditions this cannot be completely avoided, but some measures can minimize its influence. Because the light source is hard light, it very easily causes pure white highlights on the surface of the subject; in this case, a layer of soft light cloth can be hung in front of the light source. The soft light cloth not only softens the hard light source, but also diffuses it, making the light in the whole studio more uniform and reducing pure white highlights to a certain extent. In addition, a polarizer can be mounted on the lens; a polarizer filters the directly reflected, singly-polarized component out of the diffuse reflection. In use, manually rotate the filter until the reflective areas in the camera's viewfinder become dimmest; at that point the filter is rotated 90 degrees from its brightest orientation, and the reflections and highlights on the object's surface are greatly reduced. It should be noted that the polarizing filter blocks part of the light, reducing the exposure; with the previously set parameters the image would be underexposed. The shutter speed therefore needs to be extended further, guided by the camera's exposure meter and histogram, to ensure a correct exposure.
When the camera outputs images, be sure to output photos in RAW format. RAW is a digital negative format that records the complete exposure information of the image, including white balance, aperture, ISO sensitivity, shutter speed, and so on. The greater the camera's latitude, the more information the RAW format can record. Generally speaking, the RAW format records a dynamic range several EV beyond that of ordinary compressed formats, which leaves a much larger adjustment space in post-processing.
Post-shooting improvement method: When post-processing the images, over-bright areas and reflective areas can also be treated to some extent. After importing the RAW file sequence into the image processing software, use the Camera Raw tool for correction: the Highlights and Whites sliders can be lowered appropriately, which reduces the contrast of the picture and the degree of white and highlight clipping, so that the output is as flat a picture as possible, which helps the subsequent 3D reconstruction. If there are very dark parts in the picture, slide the Shadows and Blacks sliders to the right. Because the dynamic range of a RAW file exceeds that of an ordinary JPEG file, the details of the dark parts of the object can be recovered after adjustment, which makes it easier for the reconstruction software to compute highlight and shadow information.
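The flat-picture output can be sketched in code as well. The example below assumes the rawpy and imageio libraries (the patent only says "image processing software"), and the global contrast reduction stands in for the Highlights/Whites/Shadows/Blacks sliders; the file name is hypothetical.

```python
# RAW-to-flat-TIFF sketch: develop without auto brightening, then flatten contrast.
import numpy as np
import rawpy
import imageio.v3 as iio

with rawpy.imread("capture_0001.dng") as raw:        # hypothetical RAW file
    # 16-bit output, camera white balance, no auto-brightening: keep it flat
    rgb = raw.postprocess(use_camera_wb=True, no_auto_bright=True, output_bps=16)

img = rgb.astype(np.float64) / 65535.0
flat = 0.5 + (img - 0.5) * 0.8                       # pull highlights down, lift shadows
iio.imwrite("capture_0001_flat.tiff",
            (np.clip(flat, 0.0, 1.0) * 65535).astype(np.uint16))
```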
Selection of outdoor photography environment
Outdoor data collection differs from the multi-angle collection of an object in the studio, where the electric turntable rotates the object while the camera stays relatively fixed. Outdoors, the object is fixed and the relative position of the camera changes; the angle of each capture is adjusted by the photographer's standing position.
Photography time control: The matters needing attention in outdoor close-range photogrammetry are basically the same as in the studio. Outdoor natural light comes from the sun and is essentially uncontrollable; as time passes, changes in the sun's angle and in cloud cover introduce large variations in the natural lighting conditions. A single data collection session should therefore not last too long, and should be kept within half an hour as far as possible.
Outdoor weather selection: Outdoor data collection must avoid direct sunlight, which causes obvious light-dark boundaries and strong highlights and shadows and is not conducive to the later 3D reconstruction. The weather suitable for outdoor data collection is usually cloudy: sunlight scattered by the atmosphere and clouds provides soft, flat illumination without strong highlights and shadows. To further improve the illumination, data can be collected in the shadow of buildings, where the light is flattened even more.
Aperture adjustment: For data acquisition of large objects, the conditions of exposure control are basically the same as those of data acquisition in the studio. It should be noted that, because the size of the objects collected outdoors is usually large, if the equivalent F18 aperture in the studio is used, the edge of the objects may be blurred to a certain extent. Depending on the size of the objects, the aperture can be continuously reduced until F22. At this time, the aperture has been reduced to the minimum, and the exposure is seriously insufficient, which needs to be compensated by ISO sensitivity and shutter speed.
ISO sensitivity adjustment: The ISO sensitivity can be raised appropriately to give the picture higher exposure. Repeated experiments show that the image noise of an APS-C camera is basically controllable at ISO sensitivities up to 400, and ISO 400 gives one stop more exposure than ISO 100; the image noise of a full-frame camera at ISO 800 is basically acceptable, and ISO 800 gives two stops more exposure than ISO 100. In other words, a full-frame camera is more advantageous for outdoor data acquisition.
Shutter speed adjustment: The shutter speed needs to be extended further, because the outdoor lighting is dimmer than in the studio and the aperture is reduced further. For long exposures, a tripod and the delayed shutter must be used to ensure the stability of the picture. For objects with rich detail at the bottom that require low-angle close-ups, the object should be placed on a table about 50 cm high.
Color correction: For color correction, more attention should be paid to the consistency and accuracy of color when collecting data outdoors. Unlike the fixed 6500 K white light of the indoor lighting, the color temperature of outdoor lighting is about 5200 K-5500 K on sunny days and about 6000 K-7000 K on overcast days or cloudy afternoons, and the outdoor lighting changes with time and weather. In order to unify the tone later, the white balance of the camera and the mobile phone must be set uniformly. At the start of shooting, a color card is used to record the white balance in the first photo, and the images are imported into the image processing software for unified adjustment when the data is sorted later.
Close-up shooting: Close-up pictures also have specific requirements. Because the closest focusing distance of a close-up is very short, usually within 20 cm, a camera will normally blur the background. To solve this problem, a mobile phone can be used to collect close-up images. The phone used here is an iPhone 12, whose main camera is a 12-megapixel 24 mm F1.6 wide-angle lens. Its CMOS sensor is only 1/2.5-inch, much smaller than a camera's. These conditions mean the iPhone's camera produces essentially no shallow depth of field, and the iPhone has good imaging resolution and excellent stabilization, so it can be shot handheld. In addition, the iPhone's small size makes it very suitable for moving and rotating in narrow spaces and for capturing close-ups of details. When using the phone camera, fix the white balance, fix the exposure, and turn on HDR to keep the best latitude; take photos in RAW format if possible.
Adjustment of shooting angle: When collecting close-up data, concentrate on the complicated structures and use surrounding, multi-angle shooting. For structures with hollowed-out details and front-back occlusion, the best way is to keep the main object firmly fixed in the center of the picture, take it as the center of a circle, and take three semi-circular passes of shots with the handheld camera, so that the captured photo set completely covers the occluding and occluded parts. There is no upper limit on the number of close-up photos; if the subject is very complicated, the number of close-ups can be increased. Note, however, that the more photos there are, the greater the load on the computer during reconstruction: significantly increasing the number of close-up images also greatly increases the computer's solving time, so solving quality and solving speed must be balanced.
As shown in Figure 3, image alignment and point cloud computing includes image checking and cleaning, image import and alignment, dense point cloud generation, and information cleaning.
Image checking and cleaning: After data collection is completed, the data should be sorted and classified in time, the whole photo set should then be checked one by one, and unqualified images deleted. The image processing software is then used for unified color and white balance correction to complete the first-level color grading, after which the images can be imported into the 3D reconstruction software for the reconstruction process.
Image import: Open 3D reconstruction software, and select 1D+2D+3D mode in the upper layout mode. At this time, the window layout will change into photo sequence + individual photo+3D window, which is more suitable for observation. Then select the Workflow tab at the top, click the Inputs import button, and import the sorted image data set from the first to the last one into the software as a whole. Check the image data again. After the check is correct, switch to the Alignment tab at the top, click the Align Images button, and the 3D reconstruction software will automatically analyze the feature points of each image and try to reverse the original relative position of the camera through the image information.
Generation of dense point cloud: in the image alignment stage of 3D reconstruction software, the calculation speed is linked to the computer speed, especially the number of CPU cores; At the same time, it is also related to the number of collected images. The more images, the slower the solution speed. After the solution of 3D reconstruction software is completed, we will get a scene of 3D point clouds, and at the same time, we can see the original camera angle calculated by the software, and the number of these 3D point clouds can usually reach one million. These point clouds are the vertices of the later reconstructed model. The higher the density of 3D point clouds, the higher the accuracy that can be used for solving.
Information cleaning: Unlike images collected in the studio, which contain few environmental elements, images collected outdoors inevitably turn the surrounding environment into point cloud during reconstruction. If it is not processed, the unnecessary environment will be reconstructed together with the object in the later surface grid generation. Generating the unnecessary surroundings also wastes the limited basic color map resolution and causes unnecessary work when cleaning the model grid. Therefore, unnecessary points should be cleaned as soon as they are generated. For point cloud cleaning, switch the software view to the top view and drag the edges of the reconstruction-region box until it covers the whole object; then switch to the left, right or front view and drag the box up and down until it completely covers the object. When dragging the box, leave a small margin to prevent protruding details from being clipped.
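The box-based cleanup can be approximated outside the reconstruction software; the sketch below uses the Open3D library as an assumed stand-in, with hypothetical file names and box bounds in meters.

```python
# Crop a dense point cloud to an axis-aligned box around the object (Open3D sketch).
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene_dense.ply")             # hypothetical export
# Box sized from the top/side views, with a small margin for protruding details
box = o3d.geometry.AxisAlignedBoundingBox(min_bound=(-0.35, -0.35, 0.0),
                                          max_bound=(0.35, 0.35, 0.6))
o3d.io.write_point_cloud("scene_cleaned.ply", pcd.crop(box))
```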
As shown in Figure 4, 3D data reconstruction includes high-detail model reconstruction and medium-high precision model generation.
High-detail model reconstruction. Solving the model in the software: once the 3D point cloud has been obtained and cleaned, the next step, rebuilding the grid, can proceed. Click the Reconstruction tab at the top and select the High Detail reconstruction option in the Process column. The 3D reconstruction software then uses the previously solved 3D point cloud as vertex information to calculate the surface mesh of the 3D model.
Storage after solving: After the model is solved, a high-detail model with millions of triangular faces is obtained. At this point, click the Mesh option under Export in the upper right tab to export the model in OBJ format. After export, the model can be converted to STL format, which supports 3D printing.
Medium-high precision model generation. High-detail model inspection: Before the next operation, the generated high-detail model needs to be inspected. Click the Reconstruction tab and click Check Integrity in the Tools column; the system will check the model for overlapping triangles and vertex color information and report any errors. Then select Check Topology in the Tools column to check the topology.
Generation of the medium-high precision model: Next, surface simplification and smoothing operations are usually performed on the high-detail model to prepare for the subsequent generation of the basic color map. In the first stage, click Simplify in the Process column of the Workflow tab. This step reduces the face count and outputs a medium-high precision model; entering 100,000 in the Target triangle count yields a model with 100,000 triangular faces.
Smoothing of the medium-high precision model: Simplification reduces the number of faces and leaves the model surface uneven. Click the Smoothing tool in the Tools column of the Reconstruction tab, set Smoothing Type to Noise Removal, set Smoothing Weight to 0.5 and the number of smoothing iterations to 5, then click OK; the system performs five smoothing iterations on the model according to these settings.
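The same simplify-then-smooth step can be sketched with the Open3D library (an assumption; the patent performs it inside the reconstruction software, and Open3D's Taubin smoothing only approximates the Noise Removal tool and its 0.5 weight).

```python
# Simplify to 100,000 triangles, then run 5 smoothing iterations (Open3D sketch).
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("high_detail.obj")          # hypothetical export
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
mesh = mesh.filter_smooth_taubin(number_of_iterations=5)     # low-pass, shape-preserving
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("medium_high_precision.obj", mesh)
```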
As shown in Figure 5, the texturing process includes UV map splitting and basic color map solving.
UV map splitting. Before solving the basic color map, the UV of the model must be split. A UV map defines the coordinates of a three-dimensional object on a plane; XYZ in the three-dimensional coordinate system corresponds to UVW in the map coordinate system. Because the coordinates are planar, there is no depth axis, and only the UV coordinates are used. For mapping, a UV map with UV coordinates is needed to project the texture information onto the 3D model.
For UV splitting, first select Unwrap in the Tools column of the Reconstruction tab, set the gutter to 1, enter 16384*16384 as the maximal texture resolution, and click OK to obtain the result. Then click the Texture option in the Workflow tab and the system starts to solve the basic color map, after which the 3D model with its basic color map can be previewed in the window.
Basic color map solving and export: Select the Mesh model in the Export column of the Workflow tab, and select the Wavefront OBJ format under Format version in the export settings; to export the relative camera positions, check Export cameras as a model part in the camera settings. An OBJ model file, an MTL material library file and two map files are then exported.
As shown in Figure 6, the optimization process includes model optimization and model structure re-topology.
Model optimization. After the model and basic color map have been solved by photogrammetry in the 3D reconstruction software, they are imported into the PBR workflow software, where the PBR workflow is carried out.
Importing the model into the PBR workflow software: After starting the PBR workflow software, import the OBJ model and move the midpoint of the model to the origin of the world coordinates. Then open the N panel to call up the model properties, enter the calculated scale in the scaling properties, unify the scale of the X, Y and Z axes, then select the object and apply all transforms. This completes the calibration of the model's size and coordinate system.
Shade-smooth treatment: At this point the model is displayed shade-flat; right-click and select Shade Smooth. Then, in the object data properties on the right, open the Normals tab, check Auto Smooth and enter 30 degrees. This changes the default shading of the model: included angles of less than 30 degrees are rendered with smooth shading, while sharper angles above 30 degrees keep flat shading.
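The description of the PBR workflow software matches Blender's interface; assuming Blender's Python API (bpy, versions up to 4.0, where Auto Smooth is a mesh property), the same setup looks like this.

```python
# Shade Smooth with a 30-degree Auto Smooth threshold (bpy sketch, Blender <= 4.0).
import bpy
from math import radians

obj = bpy.context.active_object              # the imported photogrammetry model
bpy.ops.object.shade_smooth()                # right-click > Shade Smooth
obj.data.use_auto_smooth = True              # Object Data > Normals > Auto Smooth
obj.data.auto_smooth_angle = radians(30)     # smooth below 30 degrees, flat above
```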
Model re-topology. In order to suit Internet and mobile communication, as well as virtual reality and mixed reality, the face count of the model must be reduced further and the model re-topologized. The wiring of an untreated model is irregular, and the huge numbers of vertices and triangles make the model bulky. Such heavy data makes the model spread very slowly on the Internet, and the unoptimized original model places higher demands on computer performance. Therefore, the model and its maps need further optimization at this stage.
Manual topology processing: Manual topology can be adopted for the re-topology of the model; specifically, turn on vertex or face snapping and redraw a low poly model on the surface of the original model using the polygon brush, so that the structure of the model is regular and arranged entirely as quad faces.
As shown in Figure 7, the physically based rendering stage begins with UV map splitting and sorting of the re-topologized model.

After the model is re-topologized, the original UV mapping structure is destroyed, and the maps of the original model no longer fit the new model. The UV structure of the re-topologized model must therefore be split anew to suit the subsequent map baking stage.
Manual UV map splitting: To split the UV structure of the model, select edge edit mode, select the edges to be split and manually mark them as seams; when there are enough seam edges to fully unfold the model, enter the UV editing panel at the top and click Split UV to see the split UVs.
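Assuming the same Blender-style API, the seam-and-unwrap step can be sketched as below; in practice the seam edges are picked interactively in the viewport, so the selection here is a placeholder.

```python
# Mark the currently selected edges as seams, then unwrap (bpy/bmesh sketch).
import bpy
import bmesh

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bm = bmesh.from_edit_mesh(obj.data)
for edge in bm.edges:
    if edge.select:                          # edges chosen by hand in the viewport
        edge.seam = True
bmesh.update_edit_mesh(obj.data)
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.001)   # the "Split UV" step
bpy.ops.object.mode_set(mode='OBJECT')
```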
UV map finishing: In order to further improve the area utilization rate of maps, select all faces in the interface of UV Pack Master, and click the UV Pack button. At this time, the system will automatically calculate the proportion of each UV to adapt to the size of the whole canvas. At this point, the UV splitting and finishing work of the model is completed.
Baking and processing of the model's basic color map. After obtaining the UV coordinate information of the model, the next step is to bake the basic color map of the original model onto the newly generated, optimized, re-topologized model. In computer graphics, baking is a technical term: it maps three-dimensional information simulated by the computer onto a texture image, which is finally applied to a model with UV information.
Shader generation: The baking of the basic color map can be done in the PBR workflow software. First enter the shading panel at the top, then add a Principled BSDF node in the low poly model's material panel and connect the BSDF output to the Surface input of the Material Output node. The Principled BSDF node combines multiple nodes into one easy-to-use node; image textures drawn in, or baked from, 3D painting software can be linked directly to the corresponding parameters of this shader. The shader node contains multiple layers and can create a wide range of materials: the base layers are diffuse reflection, metallic, subsurface scattering and transmission, and in addition there are specular, sheen and clearcoat layers.
Basic color map baking: Click Render Properties in the PBR workflow software and switch the render engine to the Cycles render engine. Find the Bake tab below, expand it and change the Bake Type to Diffuse. In the Influence column, uncheck Direct and Indirect and keep Color. Then check Selected to Active, which determines which object is baked onto. Enter 0.3 m in the Extrusion option and 0.1 m in Max Ray Distance. Finally, select the high poly model first and then the low poly model, click the newly created texture image, and click Bake.
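Assuming Blender's Cycles baking API, the settings listed above translate into the following sketch; the selection of the high and low poly objects is done beforehand in the viewport.

```python
# Diffuse (base color) bake with Selected to Active (bpy sketch).
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
bake = scene.render.bake
bake.use_pass_direct = False          # uncheck Direct
bake.use_pass_indirect = False        # uncheck Indirect
bake.use_pass_color = True            # keep Color
bake.use_selected_to_active = True    # bake high poly onto the active low poly
bake.cage_extrusion = 0.3             # "Extrusion" 0.3 m
bake.max_ray_distance = 0.1           # "Max Ray Distance" 0.1 m

# With the high poly selected, the low poly active, and its image node selected:
bpy.ops.object.bake(type='DIFFUSE')
```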
Reflection elimination: After baking, connect the texture image to the base color input, and the low poly model will show the texture. At this point the default roughness and specular values of the Principled BSDF are 0.5; entering 1 in the roughness field and 0 in the specular field eliminates the reflections. Entering the UV editing panel at the top, you can see that the newly baked basic color map has been generated according to the split UV information. Click Image > Save As with RGBA and 16-bit color depth, which preserves the information of the baked map to the greatest extent.
Secondary processing of the basic color map: Next, the baked basic color map can be color graded a second time in image processing software. There, the highlights should be lowered appropriately, the shadows raised appropriately, the whites lowered appropriately and the blacks raised appropriately. This attempts to eliminate the contrast between highlights and cast shadows introduced during image acquisition. The ideal basic color map is a completely flat intrinsic color (albedo) map without any lighting information; all environment and lighting information should come from the lights in the rendering environment.
Processing and baking of the model's normal map. In a PBR rendering workflow, a basic color map alone is not enough; several other PBR material maps are needed to define the physical material properties of the real world. Baking a normal map can largely restore the texture of the high poly model on the surface of the low poly model while preserving model quality and saving computational resources, making it very suitable for real-time rendering and Internet communication.
In this study, four PBR maps of the photogrammetric scanning model of the object are used, namely, basic color map, normal map, roughness map and ambient occlusion map. With these four PBR maps and some parameter settings, the physical surface material characteristics of the object can be completely restored.
Normal map acquisition: There are two ways to acquire the normal map. The first is to bake the normal map from the high poly model to the low poly model; the principle is to treat the surface details of the high poly model as a map and, by baking, transfer these concave-convex details onto a texture map fitted to the low poly model. The second is to generate a normal map from the surface intrinsic color texture of the basic color map: the basic color map contains the basic diffuse reflection information of the object, and a basic color map obtained from photogrammetry retains some illumination information, which can be converted into a normal map by appropriate means. In this study, both baking and generation are adopted; after the two maps are obtained, they are superimposed to generate a new normal map, which restores the texture details of the high poly model on the low poly model to the greatest extent.
Normal map baking: In the baking step of the normal map, first add an image texture node in the low poly model's shader panel, and change the color space of the image texture from sRGB to a linear, non-color space; this color space suits normal maps, whereas the conventional sRGB color space does not suit the blue-purple linear encoding of a normal map. Then enter the render properties on the right, make sure the render engine is the Cycles renderer and the render device is set to GPU compute, open the baking settings at the bottom, change the Bake Type to Normal, and keep the default settings for the other options. If the basic color map has been baked before, Selected to Active will already be checked, and the Extrusion and Max Ray Distance settings remain consistent with the previous ones. Now click the high poly model first, then add the low poly model as the active object, click the newly created normal map in the material editing panel, and click Bake.
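The matching normal-map bake, under the same bpy assumption, differs only in the bake type, the GPU device setting and the target image's color space.

```python
# Normal map bake (bpy sketch); the target image node must use a non-color space.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'                            # "GPU calculation"
scene.render.bake.use_selected_to_active = True        # extrusion/ray distance as before
bpy.ops.object.bake(type='NORMAL')
```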
Normal map export: After the baking is finished and the normal map is obtained, the texture image is exported as a PNG picture with 16-bit color depth and temporarily stored.
Application of the normal map: Select the low poly model in the PBR workflow software, find the image texture node of the normal map in the material editing panel, and click image reload to update the normal map to the latest version. Then change the color space of the image texture node to the linear, non-color space, add a Normal Map vector node, connect the Color output of the texture node to the Color input of the Normal Map node, and connect the Normal output of the Normal Map node to the Normal input of the Principled BSDF node; the effect of the normal map can then be seen.
Drawing and processing of the model's roughness map. A roughness map restores some of the physical surface material characteristics of the object. It is an 8-bit grayscale map with 256 gray levels that defines the smooth or rough attribute of each pixel: a pure black pixel is absolutely smooth, and a pure white pixel absolutely rough. When authoring a roughness map, pure black or pure white pixels should be avoided, because no material in the real world is absolutely smooth or absolutely rough.

Roughness map drawing: A roughness map can be derived by inverting a desaturated copy of the basic color map, i.e. converting it to negative mode: click Image > Adjustments > Invert. The result is a grayscale image in negative mode whose overall brightness is on the high side, so the contrast must be reduced: add a brightness/contrast adjustment layer and lower the contrast by 50. The dark areas then need to be deepened slightly, which can be done with a levels adjustment layer. If the map has patches of uneven gray, use the dodge or burn brush with flow or opacity set to 25% and hardness set to 0%, and paint over the uneven areas until the result is satisfactory.
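The invert-and-flatten recipe above can be sketched numerically; the example below uses Python with numpy and imageio (assumed tools; the patent uses adjustment layers in image processing software) and assumes an 8-bit base color map.

```python
# First-pass roughness map: invert a desaturated base color map, flatten contrast.
import numpy as np
import imageio.v3 as iio

base = iio.imread("base_color.png").astype(np.float64) / 255.0   # assumes 8-bit PNG
gray = base[..., :3] @ np.array([0.299, 0.587, 0.114])           # desaturate
rough = 1.0 - gray                                               # negative/invert mode
rough = 0.5 + (rough - 0.5) * 0.6                                # reduce contrast
rough = np.clip(rough, 0.05, 0.95)             # avoid pure black or pure white pixels
iio.imwrite("roughness.png", (rough * 255).astype(np.uint8))
```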
Application of the roughness map: Import the processed roughness map into the PBR workflow software and load it as an image texture node in the object's material panel. The color space of the node must be changed to Non-Color, i.e. grayscale mode, and the Color output of the image texture node connected to the Roughness input of the Principled BSDF node. The model then has the roughness attribute, which can be checked by rotating the model. If anything needs modification, open the texture paint panel at the top and paint directly with the brush, choosing black or white as the brush color and applying it where modification is needed: if a reflective area is too strong, use a white brush with low opacity; if it is not reflective enough, use a black brush with low opacity.
Baking and processing of the ambient occlusion map. In the real world, a shadow appears where one object touches another, and the same happens within a surface structure: where an object's surface undulates strongly, a deep contact shadow appears at the structural turns. In a PBR rendering workflow this feature is realized by ambient occlusion.
Ambient occlusion map baking: Ambient occlusion maps can be baked with the Cycles ray-tracing engine of the PBR workflow software, or with a third-party renderer or texturing software. Here, ambient occlusion baking under the Cycles engine is selected. After selecting the model, create a new image texture node in the shader editor panel, set the resolution to 4096*4096, name it AO Map, and set the color space of the node to Non-Color, because the ambient occlusion map is a colorless grayscale map. Then select the render properties on the right, change the render engine to Cycles, change the compute device to GPU, open the baking settings, set the bake type to Ambient Occlusion (AO), uncheck Selected to Active, click Bake, and wait for it to finish.
Application of the ambient occlusion map: After the ambient occlusion map is obtained, its effect can be seen by connecting the Color output of the texture node to the base color input of the Principled BSDF node. By comparison, the model with the ambient occlusion map clearly has more depth than the model without it, and it shows contact shadows at the structural turns similar to those in the real world. In this way, even a rendering engine without screen-space ambient occlusion can show the effect of ambient occlusion.
For wiring the ambient occlusion nodes, add a MixRGB node in the shader editor of the PBR workflow software and change its mode to Multiply (A × B). Connect the Color output of the basic color map to input Color1 of the MixRGB node and the Color output of the ambient occlusion map to input Color2; then connect the output of the MixRGB node to the base color input of the Principled BSDF node and set the factor of the MixRGB node to 1. This produces the model with both the ambient occlusion effect and the basic color map.
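The node wiring can be expressed in the same assumed bpy API; the texture node names below are hypothetical, and ShaderNodeMixRGB is Blender's long-standing MixRGB node.

```python
# Multiply the AO map over the base color and feed the Principled BSDF (bpy sketch).
import bpy

mat = bpy.context.active_object.active_material
nodes, links = mat.node_tree.nodes, mat.node_tree.links

mix = nodes.new('ShaderNodeMixRGB')
mix.blend_type = 'MULTIPLY'                      # Multiply (A x B)
mix.inputs['Fac'].default_value = 1.0            # factor set to 1

base = nodes['Base Color Texture']               # hypothetical node names
ao = nodes['AO Map']
bsdf = nodes['Principled BSDF']                  # Blender's default node name

links.new(base.outputs['Color'], mix.inputs['Color1'])
links.new(ao.outputs['Color'], mix.inputs['Color2'])
links.new(mix.outputs['Color'], bsdf.inputs['Base Color'])
```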
The above is only the preferred embodiment of this application, but the scope of protection of this application is not limited to this. Any changes or substitutions that can be easily thought of by those skilled in this field within the technical scope disclosed in this application should be covered by this application. Therefore, the scope of protection of this application should be subject to the scope of protection of the claims.

Claims (8)

CLAIMS
1. A method of 3D reconstruction and PBR mapping based on close-range photogrammetry, which is characterized by including: collecting image information, wherein the image information is collected by a close-range photogrammetry method; aligning the images and obtaining a three-dimensional point cloud of the images through point cloud computing; cleaning up the redundant 3D point cloud of the images and performing surface grid generation to obtain a 3D data reconstruction model; carrying out image texturing on the three-dimensional data reconstruction model to obtain a basic color map; based on the three-dimensional data reconstruction model and the basic color map, carrying out PBR mapping processing to obtain an image after model optimization and model structure re-topology; and physically based rendering the image after model optimization and model structure re-topology to obtain different maps.
2. The method of 3D reconstruction and PBR mapping based on close-range photogrammetry according to claim 1, which is characterized in that the close-range photogrammetry method comprises the following steps: when shooting an object, covering the light source with a layer of soft diffusing cloth and adding a polarizer to the lens; before using the camera, setting underexposure parameters and extending the shutter speed; photographing the object to obtain an image in RAW format; and correcting the brightness and reflective areas of the image to obtain a flat picture.
3. The method of 3D reconstruction and PBR mapping based on close-range photogrammetry according to claim 1, which is characterized in that the process of obtaining the 3D point cloud of the images includes: firstly, checking whether the images are qualified; secondly, deleting the unqualified images; carrying out primary toning on the qualified images; constructing a 3D reconstruction model; inputting the toned images into the 3D reconstruction model; obtaining the aligned images; carrying out point cloud computing; and finally obtaining the 3D point cloud of the images.
4. The method of 3D reconstruction and PBR mapping based on close-range photogrammetry according to claim 1, which is characterized in that the redundant 3D point cloud of the images is cleaned up.
5. The method of 3D reconstruction and PBR mapping based on close-range photogrammetry according to claim 1, which is characterized in that a high-detail model and a medium-high precision model can be obtained after the surface grid generation.
6. The method of 3D reconstruction and PBR mapping based on close-range photogrammetry according to claim 5, which is characterized in that the process of obtaining the 3D data reconstruction model includes: firstly, obtaining the vertex information of the 3D point cloud and calculating the surface grid of the 3D model; after solving, a high-detail model is obtained, and the high-detail model is subjected to a surface simplification operation and a smoothing operation to obtain a medium-high precision model.
7. The method of 3D reconstruction and PBR mapping based on close-range photogrammetry according to claim 1, which is characterized in that the texturing process includes: UV splitting the image to obtain its UV coordinates; projecting the texture information onto the three-dimensional model in the form of a map to obtain the basic color map; and solving the basic color map to obtain the map file.
8. The method of 3D reconstruction and PBR mapping based on close-range photogrammetry according to claim 1, which is characterized in that the process of model optimization and model structure re-topology includes the following steps: firstly, importing the medium-high precision model into the PBR workflow software, adjusting the model and applying shade smooth treatment to obtain an optimized model; then performing manual topology processing on the optimized model; in manual topology, the vertices of the model need to be snapped to, and the polygon brush is then used to draw the low-poly model to make the model structure complete.
LU501944A 2021-12-21 2022-04-27 Method for Making Three-dimensional Reconstruction and PBR Maps Based on Close-range Photogrammetry LU501944B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111574990.6A CN114241159A (en) 2021-12-21 2021-12-21 Three-dimensional reconstruction and PBR mapping manufacturing method based on close-range photogrammetry method

Publications (1)

Publication Number Publication Date
LU501944B1 true LU501944B1 (en) 2022-10-31

Family

ID=80760744

Family Applications (1)

Application Number Title Priority Date Filing Date
LU501944A LU501944B1 (en) 2021-12-21 2022-04-27 Method for Making Three-dimensional Reconstruction and PBR Maps Based on Close-range Photogrammetry

Country Status (2)

Country Link
CN (1) CN114241159A (en)
LU (1) LU501944B1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781618A (en) * 2021-07-19 2021-12-10 中南设计集团(武汉)工程技术研究院有限公司 Method and device for lightening three-dimensional model, electronic equipment and storage medium
CN114693885A (en) * 2022-04-11 2022-07-01 北京字跳网络技术有限公司 Three-dimensional virtual object generation method, apparatus, device, medium, and program product
CN114897697A (en) * 2022-05-18 2022-08-12 北京航空航天大学 Super-resolution reconstruction method for camera imaging model
CN116233408B (en) * 2022-12-27 2023-10-13 盐城鸿石智能科技有限公司 Camera module optical test platform and test method
CN116385612B (en) * 2023-03-16 2024-02-20 如你所视(北京)科技有限公司 Global illumination representation method and device under indoor scene and storage medium

Also Published As

Publication number Publication date
CN114241159A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
LU501944B1 (en) Method for Making Three-dimensional Reconstruction and PBR Maps Based on Close-range Photogrammetry
US7477777B2 (en) Automatic compositing of 3D objects in a still frame or series of frames
Greene Environment mapping and other applications of world projections
Debevec et al. A lighting reproduction approach to live-action compositing
JP4077869B2 (en) Light source estimation device, light source estimation system, light source estimation method, image resolution increasing device, and image resolution increasing method
JP4220470B2 (en) A reality-based lighting environment for digital imaging in cinema
Debevec Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography
Loscos et al. Interactive virtual relighting of real scenes
US20110074784A1 (en) Gradient modeling toolkit for sculpting stereoscopic depth models for converting 2-d images into stereoscopic 3-d images
CN107644453B (en) Rendering method and system based on physical coloring
CN109523622B (en) Unstructured light field rendering method
CN108986195A (en) A kind of single-lens mixed reality implementation method of combining environmental mapping and global illumination rendering
US20230043787A1 (en) Lighting assembly for producing realistic photo images
TW200426708A (en) A multilevel texture processing method for mapping multiple images onto 3D models
JP4428936B2 (en) Method for obtaining Euclidean distance from point in 3D space to 3D object surface from projection distance image stored in memory having projection distance along projection direction from projection plane to 3D object surface as pixel value
Goesele et al. Building a Photo Studio for Measurement Purposes.
Kang et al. View-dependent scene appearance synthesis using inverse rendering from light fields
Ahmed et al. Projector primary-based optimization for superimposed projection mappings
Miller et al. Illumination and reflection maps
CN114902277A (en) System and method for processing shadows on portrait image frames
CN116245741B (en) Image processing method and related device
Keshmirian A physically-based approach for lens flare simulation
Lind Photogrammetry scanned objects to a Game Engine
Martos et al. Acquisition and reproduction of surface appearance in architectural orthoimages
Schenkel et al. Comparison of normalized transfer functions for fast blending-based color correction of scans acquired under natural conditions

Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20221031