CN116704102A - Automatic light distribution method based on point cloud scene and electronic equipment - Google Patents

Automatic light distribution method based on point cloud scene and electronic equipment

Info

Publication number
CN116704102A
CN116704102A (application number CN202310528664.4A)
Authority
CN
China
Prior art keywords
scene
light source
building model
point cloud
voxel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310528664.4A
Other languages
Chinese (zh)
Inventor
蓝天
施磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yipinhui Digital Technology Shanghai Co ltd
Original Assignee
Yipinhui Digital Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yipinhui Digital Technology Shanghai Co ltd filed Critical Yipinhui Digital Technology Shanghai Co ltd
Priority to CN202310528664.4A priority Critical patent/CN116704102A/en
Publication of CN116704102A publication Critical patent/CN116704102A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/06 Ray-tracing
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/557 Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention relates to the technical field of three-dimensional modeling, and in particular discloses an automatic light distribution method based on a point cloud scene, and electronic equipment. The method comprises the following steps: inputting a three-dimensional point cloud set of a building, generating global three-dimensional coordinates from the central point of the point cloud set, and constructing a voxel data set from the point cloud set; generating a building model in a model scene from the voxel data set, and applying materials to the building model; traversing the voxel data set, and calculating light source parameters and light source coordinates from the three-dimensional coordinates and RGB values of the voxels in the voxel data set; mapping the light source parameters into the building model scene through the light source coordinates; adjusting the light source parameters in the building model scene, and setting ray tracing parameters; and rendering the building model in real time based on a ray tracing algorithm. Light sources for the building model are generated automatically from the point cloud scene acquired by a depth camera, and the building model is rendered with a ray tracing algorithm, which reduces the difficulty of distributing light for the building model.

Description

Automatic light distribution method based on point cloud scene and electronic equipment
Technical Field
The invention relates to the technical field of three-dimensional modeling, in particular to an automatic light distribution method based on a point cloud scene and electronic equipment.
Background
With the rapid development of computer vision and artificial intelligence, depth cameras are increasingly used in applications such as three-dimensional scene reconstruction, target detection and environmental perception. Unlike an ordinary camera, a depth camera captures depth information of the photographed space, from which 3D information of a target can be acquired and a 3D model constructed.
To generate visible images in a three-dimensional computer graphics environment, ray tracing is a more realistic technique than ray casting or scanline rendering. It works by tracing light paths backwards from the virtual camera; because a large number of such rays traverse the scene, the visible information seen from the camera viewpoint, under the lighting conditions specified in software, can be reconstructed. Reflection, refraction and absorption are computed whenever a ray intersects an object or medium in the scene.
The popularity of ray tracing derives from its ability to simulate light more realistically than other rendering methods such as scanline rendering or ray casting. Effects such as reflection and shadows, which are difficult for other algorithms to achieve, are a natural outcome of the ray tracing algorithm. Ray tracing is straightforward to implement and visually convincing, so it is often a first project in graphics programming.
In the prior art, after a model is built from point cloud data, the building model must be rendered with lighting to improve its visual effect. In the traditional light layout workflow, lights are laid out manually in WebGL or three-dimensional software and an effect image of the scene is then rendered; different types of light sources must be added to the building model by hand, and the illumination intensity and direction adjusted. The process is complex, and the labor cost of light distribution is high.
Therefore, those skilled in the art need a new solution to the above problems.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides an automatic light distribution method based on a point cloud scene, and electronic equipment. Light sources for a building model are generated automatically from the point cloud scene acquired by a depth camera, and the building model is rendered with a ray tracing algorithm, which improves the efficiency of constructing and rendering light sources for the building model and reduces the difficulty of distributing light for it.
The invention discloses a method for automatic light distribution based on a point cloud scene, comprising the following steps:
inputting a three-dimensional point cloud set of a building, generating global three-dimensional coordinates from the central point of the point cloud set, and constructing a voxel data set from the point cloud set;
generating a building model in a model scene from the voxel data set, and applying materials to the building model;
traversing the voxel data set, and calculating light source parameters and light source coordinates from the three-dimensional coordinates and RGB values of the voxels in the voxel data set;
mapping the light source parameters into the building model scene through the light source coordinates;
adjusting the light source parameters in the building model scene, and setting ray tracing parameters;
and rendering the building model in real time based on a ray tracing algorithm.
Further, the method of constructing a voxel data set from the three-dimensional point cloud set comprises,
regularizing the coordinates of the three-dimensional point cloud and storing it as an (n, 3) array, where n is the number of acquired points and 3 corresponds to the coordinates x, y and z of each point;
transforming the three-dimensional point cloud Q = {q_1, q_2, ..., q_n} onto the xy plane by a coordinate transformation to obtain the planar point set P = {p_1, p_2, ..., p_n}, where q_i = (x_i, y_i, z_i)^T and p_i = (x_i, y_i)^T;
traversing all points in the point cloud set to obtain x_min, y_min, x_max and y_max, establishing the minimum bounding box of the point cloud set, and calculating the grid side length L;
calculating the numbers of grid cells x_num and y_num in the x and y directions respectively;
placing each data point p_i = (x_i, y_i)^T into its corresponding grid cell (u, v), so that a correspondence is established between the point and the cell, where u ∈ [0, x_num] and v ∈ [0, y_num];
dividing the cells into occupied cells and empty cells according to whether the cell (u, v) contains data points, an occupied cell containing at least one data point and an empty cell containing none; counting the empty neighbours of each occupied cell, and if at least one of its 8 neighbouring cells is empty, the current cell is a boundary cell, otherwise it is not;
and voxelizing the boundary cells so obtained to obtain the voxel data set.
Further, the method of generating a building model in the model scene from the voxel data set and adding material to the building model comprises,
traversing the voxel data set, and constructing a building model with a three-dimensional shape from it using a reverse geometric intersection algorithm;
performing principal component analysis on the building model, determining its category, and dividing it into a plurality of model components;
and manually adjusting the divided model components according to the actual situation.
Further, the method of applying materials to the building model comprises,
inputting material and texture information, and adding it to the corresponding model components;
and mapping the global coordinates into the building model scene, and determining the coordinate positions of the materials and textures of the building model.
Further, the method of traversing the voxel data set and calculating the light source parameters and light source coordinates from the three-dimensional coordinates and RGB values of the voxels in the voxel data set comprises,
dividing the voxel data set into indoor scene voxels and outdoor scene voxels, and determining the coordinates of the indoor and outdoor scene voxels;
inputting a preset light source RGB range, and calculating the ranges of indoor and outdoor scene voxels affected by the light source;
calculating the irradiated areas and irradiation intensities of the indoor and outdoor scenes from the RGB values of the indoor and outdoor scene voxels, synthesizing them into light cones and light vectors, tracing the light source coordinates and determining the light source type;
and determining the light source coordinates and light source colors from the positional relation between the light source coordinates and the coordinates of the indoor and outdoor scene voxels.
Further, the method of adjusting the light source parameters in the building model scene and setting the ray tracing parameters comprises setting rendering exposure parameters for the building model scene and correcting the positions of the light sources, wherein the ray tracing parameters comprise ambient light parameters, global illumination parameters, and reflection and refraction parameters.
An electronic device, comprising:
a memory for storing a computer program;
and a processor for implementing the above automatic light distribution method based on a point cloud scene when executing the computer program.
According to the method for automatic light distribution based on a point cloud scene described above, a building model is generated automatically from the point cloud scene acquired by a depth camera: a voxel data set is constructed from the three-dimensional point cloud set, a building model is generated in the model scene from the voxel data set, light source parameters and light source coordinates are calculated from the three-dimensional coordinates and RGB values of the voxels, and the building model is rendered with a ray tracing algorithm. This improves the efficiency of constructing and rendering light sources for the building model and thereby reduces the difficulty of distributing light for it.
Drawings
For a clearer description of embodiments of the invention or of solutions in the prior art, the drawings which are used in the description of the embodiments or of the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained from them without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart (1) of the steps of a method for automatic light distribution based on a point cloud scene according to an embodiment of the present invention;
Fig. 2 is a flowchart (2) of the steps of a method for automatic light distribution based on a point cloud scene according to an embodiment of the present invention;
Fig. 3 is a flowchart (3) of the steps of a method for automatic light distribution based on a point cloud scene according to an embodiment of the present invention;
Fig. 4 is a flowchart (4) of the steps of a method for automatic light distribution based on a point cloud scene according to an embodiment of the present invention;
Fig. 5 is a flowchart (5) of the steps of a method for automatic light distribution based on a point cloud scene according to an embodiment of the present invention;
fig. 6 is a structural composition diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
The invention discloses a method for automatically distributing light based on a point cloud scene, which comprises the following steps:
S1: inputting a three-dimensional point cloud set of a building and generating global three-dimensional coordinates from the central point of the point cloud set. The three-dimensional point cloud data in the set are filtered: a range is set on the attributes of the points, all points are filtered along the specified dimensions, and obvious outliers in the point cloud data are removed. Preferably, a statistical filter is used to remove the obvious outliers. Outliers are sparsely distributed in space; considering this characteristic, a point whose local density falls below a certain threshold can be defined as invalid. The average distance of each point to its k nearest points is calculated, and the distances of all points in the point cloud should follow a Gaussian distribution.
Given the mean and variance of that distribution, points lying outside the variance bound can be removed. The three-dimensional point cloud is then segmented: ground points are removed and the point cloud data of the target building is retained. The point cloud is projected to the ground, ground points and building points are classified according to the elevation variation and the point count statistics within each area, and the building points are retained; the influence of outliers and ground points on the calculation of the light source position is thereby eliminated. A voxel data set is then constructed from the point cloud set: from the input point cloud data, the computer calculates the differences between the maximum and minimum coordinates in the X, Y and Z directions, determines the length, width and height of an initial voxel from these three differences, and automatically establishes the initial voxel, which contains all the point cloud data. After the initial voxel is established, it is subdivided and a three-dimensional model is built: the initial voxel is decomposed into N smaller voxels, invalid voxels are removed using the Bresenham algorithm, and the remaining voxels form the voxel data set of the point cloud set;
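The statistical filter described above can be sketched as follows; the function and parameter names (`remove_outliers`, `k`, `std_ratio`) are illustrative assumptions, not part of the patent, and the threshold "mean plus one standard deviation of the mean neighbour distances" is one common reading of the Gaussian criterion:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=1.0):
    """Statistical outlier removal: drop a point when its mean distance to
    its k nearest neighbours exceeds mean + std_ratio * std over all points."""
    # full pairwise distance matrix (fine for small clouds; use a KD-tree for large ones)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)                       # column 0 is the zero self-distance
    mean_d = d[:, 1:k + 1].mean(axis=1)  # per-point mean k-NN distance
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= thresh]

rng = np.random.default_rng(0)
# a dense cluster in the unit cube plus one far-away outlier
pts = np.vstack([rng.random((100, 3)), [[50.0, 50.0, 50.0]]])
clean = remove_outliers(pts)
```

Running this removes the isolated point while keeping the cluster, since the outlier's mean neighbour distance lies far beyond the threshold.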
S2: generating a building model in the model scene from the voxel data set and applying materials to the building model. The three-dimensional body containing the voxels can be displayed by volume rendering, or by extracting a polygonal isosurface at a given threshold contour; texturing the building model and combining it with the light sources improves the rendering effect of the three-dimensional model;
S3: traversing the voxel data set, and calculating the light source parameters and light source coordinates from the three-dimensional coordinates and RGB values of the voxels in the voxel data set;
S4: mapping the light source parameters into the building model scene through the light source coordinates;
S5: adjusting the light source parameters in the building model scene, and setting the ray tracing parameters so as to achieve the expected rendering effect;
S6: rendering the building model in real time based on a ray tracing algorithm. The ray tracing algorithm works on an image formed of pixels: rays from each light source are projected into the scene, and when a ray hits the building model, a map of the model as lit by that source is calculated from the RGB values, reflectivity, diffuse reflectivity and refractive index of the model's materials and textures. The maps of the individual light sources are then superposed and blended pixel by pixel to obtain the final rendering result.
In particular, the method of constructing a voxel data set from the three-dimensional point cloud set comprises,
S101: regularizing the coordinates of the three-dimensional point cloud and storing it as an (n, 3) array, where n is the number of acquired points and 3 corresponds to the coordinates x, y and z of each point;
S102: transforming the three-dimensional point cloud Q = {q_1, q_2, ..., q_n} onto the xy plane by a coordinate transformation to obtain the planar point set P = {p_1, p_2, ..., p_n}, where q_i = (x_i, y_i, z_i)^T and p_i = (x_i, y_i)^T;
S103: traversing all points in the point cloud set to obtain x_min, y_min, x_max and y_max, establishing the minimum bounding box of the point cloud set, calculating the grid side length L, and calculating the numbers of grid cells x_num and y_num in the x and y directions respectively;
S104: placing each data point p_i = (x_i, y_i)^T into its corresponding grid cell (u, v), so that a correspondence is established between the point and the cell, where u ∈ [0, x_num] and v ∈ [0, y_num];
S105: dividing the cells into occupied cells and empty cells according to whether the cell (u, v) contains data points, an occupied cell containing at least one data point and an empty cell containing none; counting the empty neighbours of each occupied cell, and if at least one of its 8 neighbouring cells is empty, the current cell is a boundary cell, otherwise it is not;
S106: voxelizing the boundary cells so obtained to obtain the voxel data set.
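Steps S103 to S105 can be sketched as follows; this is one possible implementation offered as an illustration, with invented names (`boundary_grid`, `L`), binning planar points into square cells and flagging an occupied cell as boundary when any of its 8 neighbours is empty:

```python
import numpy as np

def boundary_grid(points_xy, L=1.0):
    """Bin 2D points into square cells of side L; an occupied cell is a
    boundary cell if at least one of its 8 neighbours is empty."""
    mn = points_xy.min(axis=0)
    uv = np.floor((points_xy - mn) / L).astype(int)   # cell index (u, v) per point
    x_num, y_num = uv.max(axis=0) + 1
    occ = np.zeros((x_num + 2, y_num + 2), dtype=bool)  # pad with one empty border
    occ[uv[:, 0] + 1, uv[:, 1] + 1] = True
    boundary = np.zeros_like(occ)
    for du in (-1, 0, 1):
        for dv in (-1, 0, 1):
            if du or dv:
                # occupied cell whose neighbour at offset (-du, -dv) is empty
                neighbour = np.roll(np.roll(occ, du, axis=0), dv, axis=1)
                boundary |= occ & ~neighbour
    return occ[1:-1, 1:-1], boundary[1:-1, 1:-1]

# a solid 3x3 block of cells: the 8 edge cells are boundary, the centre is not
pts = np.array([[i + 0.5, j + 0.5] for i in range(3) for j in range(3)])
occ, bnd = boundary_grid(pts)
```

The one-cell empty border makes the wrap-around of `np.roll` harmless for the cropped interior cells.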
In particular, the method of generating a building model in the model scene from the voxel data set and adding material to it comprises,
S201: traversing the voxel data set, and constructing a building model with a three-dimensional shape from it using a reverse geometric intersection algorithm;
S202: performing principal component analysis on the building model, determining its category, and dividing it into a plurality of model components;
S203: manually adjusting the divided model components according to the actual situation.
Specifically, the method of applying materials to the building model comprises,
S211: inputting material and texture information, which includes the reflectivity, refractive index and RGB values of the materials and textures, and adding it to the corresponding model components;
S212: mapping the global coordinates into the building model scene, determining the coordinate positions of the materials and textures of the building model, and storing the materials and textures together with the corresponding building model coordinates, so that subsequent light sources can render the building model based on them.
Specifically, the method of calculating the light source parameters and light source coordinates from the three-dimensional coordinates and RGB values of the voxels in the voxel data set comprises,
S301: detecting whether enclosed spaces exist in the voxel data set, dividing the voxel data set into indoor scene voxels and outdoor scene voxels, and determining their coordinates. The coordinate points of the bounding box formed by the voxel data set are traversed, and for each coordinate point it is judged whether several voxels surround it. If so, the coordinate point lies in an indoor scene, and the coordinates of the surrounding voxels, i.e. the indoor scene voxels, are obtained; after all coordinate points have been traversed, the indoor scene range of the indoor scene voxels is recorded. If a coordinate point is not surrounded, it lies in an outdoor scene, and after the traversal the outdoor scene range is recorded;
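The surround test of S301 can be approximated on an occupancy grid by checking whether occupied voxels exist along all six axis directions from a cell. This six-direction test is a simplifying assumption standing in for the patent's surround check, and `is_indoor` is a hypothetical name:

```python
import numpy as np

def is_indoor(grid, idx):
    """A cell counts as indoor when an occupied voxel is found in each of
    the six axis directions (+x, -x, +y, -y, +z, -z)."""
    x, y, z = idx
    return bool(grid[:x, y, z].any() and grid[x + 1:, y, z].any() and
                grid[x, :y, z].any() and grid[x, y + 1:, z].any() and
                grid[x, y, :z].any() and grid[x, y, z + 1:].any())

# a hollow 5x5x5 shell of occupied voxels, padded with one layer of empty cells
shell = np.zeros((5, 5, 5), dtype=bool)
shell[0, :, :] = shell[-1, :, :] = True
shell[:, 0, :] = shell[:, -1, :] = True
shell[:, :, 0] = shell[:, :, -1] = True
grid = np.pad(shell, 1)

inside = is_indoor(grid, (3, 3, 3))    # centre of the shell
outside = is_indoor(grid, (0, 0, 0))   # corner of the padded border
```

The shell centre is enclosed in every axis direction, while a corner cell outside the shell is not.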
S302: inputting the preset light source RGB range, and calculating the indoor and outdoor scene voxels affected by the light source. The RGB range affected by illumination is obtained from the input RGB range of the pre-generated light source; if the RGB value of a voxel lies within that range, the voxel is judged to be affected by the light source;
S303: the light source parameters comprise the light source type, direction, color and intensity. The irradiated areas and irradiation intensities of the indoor and outdoor scenes are calculated from the RGB values of the indoor and outdoor scene voxels and synthesized into light cones and light vectors; the light source coordinates are traced and the light source type is determined. The light source types comprise point light sources and parallel light: if the light vectors are nearly parallel, i.e. the inclination angle between the light cones is within a set angle threshold, the source is judged to be a parallel light source; otherwise it is judged to be a point light source;
S304: determining the light source coordinates and light source color from the positional relation between the light source coordinates and the coordinates of the indoor and outdoor scene voxels. If the source is a point light source, its coordinates are calculated from the convergence point of the light vectors. If the point light coordinates do not lie within the coordinate range of the indoor or outdoor scene, the RGB value of the light source is adjusted within the preset light source RGB range, and the light cones and light vectors are recalculated from the irradiated areas and irradiation intensities of the voxels affected by the source, until the light source coordinates lie within the indoor or outdoor scene range; the light source coordinates and light source color are thereby determined.
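Finding the convergence point of the light vectors in S304 can be posed as the least-squares point closest to a bundle of rays. The sketch below assumes each light vector is given as an origin and a direction; `converge_point` is an illustrative name, not the patent's routine:

```python
import numpy as np

def converge_point(origins, directions):
    """Least-squares point minimising the summed squared distance to a set
    of rays: solve (sum P_i) x = sum P_i o_i, with P_i = I - d_i d_i^T the
    projector orthogonal to ray i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)

# three axis-aligned rays all passing through the point (1, 2, 3)
origins = [(1, 2, 0), (1, 0, 3), (0, 2, 3)]
directions = [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
p = converge_point(origins, directions)
```

When the light vectors only nearly intersect, the same solve returns the point of best mutual approach, which suits noisy voxel-derived vectors.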
Specifically, the method of adjusting the light source parameters in the building model scene and setting the ray tracing parameters comprises setting rendering exposure parameters for the building model scene, correcting the positions of the light sources and adjusting their intensities, wherein the ray tracing parameters comprise ambient light parameters, global illumination parameters, and reflection and refraction parameters.
An electronic device, comprising:
a memory for storing a computer program;
and a processor for implementing the above automatic light distribution method based on a point cloud scene when executing the computer program.
According to the method for automatic light distribution based on a point cloud scene described above, a building model is generated automatically from the point cloud scene acquired by a depth camera: a voxel data set is constructed from the three-dimensional point cloud set, a building model is generated in the model scene from the voxel data set, light source parameters and light source coordinates are calculated from the three-dimensional coordinates and RGB values of the voxels, and the building model is rendered with a ray tracing algorithm. This improves the efficiency of constructing and rendering light sources for the building model, thereby reducing the difficulty of distributing light and improving light distribution efficiency.
The invention has been further described above with reference to specific embodiments, but the detailed description should not be construed as limiting the scope of the invention; various modifications made to the described embodiments by those skilled in the art in light of this disclosure fall within the scope of protection of the invention.

Claims (7)

1. A method for automatic light distribution based on a point cloud scene, the method comprising:
S1: inputting a three-dimensional point cloud set of a building, generating global three-dimensional coordinates from the central point of the point cloud set, and constructing a voxel data set from the point cloud set;
S2: generating a building model in a model scene from the voxel data set, and applying materials to the building model;
S3: traversing the voxel data set, and calculating light source parameters and light source coordinates from the three-dimensional coordinates and RGB values of the voxels in the voxel data set;
S4: mapping the light source parameters into the building model scene through the light source coordinates;
S5: adjusting the light source parameters in the building model scene, and setting ray tracing parameters;
S6: and rendering the building model in real time based on a ray tracing algorithm.
2. The method of automatic light distribution based on point cloud scenes according to claim 1, wherein the method of constructing a voxel dataset from a three-dimensional point cloud comprises,
S101: normalize the coordinates of the three-dimensional point cloud and store it as a vector of shape (n, 3), where n is the number of acquired points and 3 corresponds to the x, y and z coordinates of each point;
S102: project the three-dimensional point cloud Q = {q_1, q_2, ..., q_n} onto the xy-plane by a coordinate transformation to obtain the planar point set P = {p_1, p_2, ..., p_n}, where q_i = (x_i, y_i, z_i)^T and p_i = (x_i, y_i)^T;
S103: traverse all points in the point cloud to obtain x_min, y_min, x_max and y_max, establish the minimum bounding box of the point cloud, and calculate the grid side length L; then calculate the number of grid cells x_num and y_num in the x and y directions respectively;
S104: place each data point p_i = (x_i, y_i)^T into its corresponding grid cell (u, v), establishing a correspondence between p_i and (u, v), where u ∈ [0, x_num] and v ∈ [0, y_num];
S105: classify each grid cell (u, v) as occupied or empty according to whether it contains data points, an occupied cell containing at least one data point and an empty cell containing none; for each occupied cell, examine its 8 adjacent cells, and if at least one of them is empty, the current cell is a boundary cell, otherwise it is not;
S106: voxelize the boundary cells thus obtained to yield the voxel data set.
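Steps S102–S105 above amount to projecting the cloud onto the xy-plane, gridding it, and keeping the occupied cells that touch an empty neighbour. A minimal sketch of that reduction, assuming the grid side length L is given (the claim does not specify how L is chosen):

```python
import numpy as np

def boundary_voxels(points, L):
    """Project a point cloud to the xy-plane, grid it with cell side L,
    and return the occupied cells that border at least one empty cell
    (an illustrative reading of steps S102-S105 of claim 2)."""
    xy = points[:, :2]                              # S102: drop z, keep (x, y)
    mins, maxs = xy.min(axis=0), xy.max(axis=0)     # S103: minimum bounding box
    x_num, y_num = (np.ceil((maxs - mins) / L).astype(int) + 1)
    # S104: map each point to its grid cell (u, v)
    uv = np.floor((xy - mins) / L).astype(int)
    occupied = np.zeros((x_num, y_num), dtype=bool)
    occupied[uv[:, 0], uv[:, 1]] = True
    # S105: an occupied cell with any empty 8-neighbour is a boundary cell
    padded = np.pad(occupied, 1, constant_values=False)
    boundary = []
    for u, v in np.argwhere(occupied):
        window = padded[u:u + 3, v:v + 3]           # 3x3 neighbourhood
        if not window.all():                        # at least one empty neighbour
            boundary.append((u, v))
    return boundary
```

For a fully occupied 3×3 patch, only the centre cell has all 8 neighbours occupied, so the eight outer cells come back as boundary cells.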
3. The method of claim 1, wherein generating a building model in the model scene from the voxel data set and adding material to the building model comprises:
S201: traverse the voxel data set and construct a three-dimensional building model from it using an inverse geometric intersection algorithm;
S202: perform principal component analysis on the building model, determine the category of the building model, and divide the building model into a plurality of model components;
S203: manually adjust the divided model components according to the actual situation.
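The principal component analysis in S202 can be read as extracting the dominant geometric axes of the model's vertices; how those axes map to a model "category" is not detailed in the claim. A sketch under that reading:

```python
import numpy as np

def principal_axes(vertices):
    """Illustrative sketch of S202: principal component analysis of a
    model's vertex cloud, returning eigenvalues (variance along each
    principal axis, strongest first) and the corresponding axes."""
    centered = vertices - vertices.mean(axis=0)
    cov = np.cov(centered.T)                 # 3x3 covariance of x, y, z
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # strongest axis first
    return eigvals[order], eigvecs[:, order]
```

For a building elongated along x, the first returned axis aligns with x, which is the kind of cue a downstream classifier could use to pick the model category.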
4. The automatic light distribution method based on a point cloud scene according to claim 3, wherein adding material to the building model comprises:
S211: input material and texture information and add it to the corresponding model components;
S212: map to the building model scene according to the global coordinates, determining the coordinate positions of the building model's materials and textures.
5. The method of claim 1, wherein traversing the voxel data set and calculating the light source parameters and light source coordinates from the three-dimensional coordinates and RGB values of the voxels comprises:
S301: divide the voxel data set into indoor scene voxels and outdoor scene voxels, and determine their coordinates;
S302: input a preset light source RGB selection range, and calculate the range of indoor and outdoor scene voxels affected by the light source;
S303: calculate the illuminated areas and illumination intensities of the indoor and outdoor scenes from the RGB values of the indoor and outdoor scene voxels, synthesize them into light cones and light vectors, trace the light source coordinates, and determine the light source type;
S304: determine the light source coordinates and light source colors from the positional relationship between the light source coordinates and the coordinates of the indoor and outdoor scene voxels.
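Steps S302–S304 can be reduced, in the simplest case, to thresholding voxels against the preset RGB range and aggregating the survivors. The claim's light-cone and light-vector synthesis is not specified in enough detail to reproduce, so the following is only an illustrative reduction with a centroid standing in for the traced light position:

```python
import numpy as np

def estimate_light(voxel_coords, voxel_rgb, rgb_min, rgb_max):
    """Illustrative sketch of S302-S304: select the voxels whose RGB
    values fall inside the preset light-source range, then take their
    centroid as the light position and their mean colour as the light
    colour. Returns (None, None) when no voxel matches."""
    mask = np.all((voxel_rgb >= rgb_min) & (voxel_rgb <= rgb_max), axis=1)
    lit = voxel_coords[mask]
    if lit.size == 0:
        return None, None
    position = lit.mean(axis=0)           # stand-in for the traced coordinates
    color = voxel_rgb[mask].mean(axis=0)  # S304: light source colour
    return position, color
```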
6. The method of claim 1, wherein adjusting the light source parameters in the building model scene and setting the ray tracing parameters includes setting rendering exposure parameters for the building model scene and correcting the light source positions, the ray tracing parameters including ambient light parameters, global illumination parameters, and reflection and refraction parameters.
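The parameter set named in claim 6 could be bundled into a single configuration object; the field names and defaults below are hypothetical, taken neither from the patent nor from any particular renderer:

```python
from dataclasses import dataclass

@dataclass
class RayTracingConfig:
    """Hypothetical parameter bundle for claim 6 / step S5; all field
    names and default values are illustrative only."""
    exposure: float = 1.0           # rendering exposure for the scene
    ambient_intensity: float = 0.2  # ambient light parameter
    gi_bounces: int = 2             # global illumination bounce count
    max_reflections: int = 4        # reflection recursion depth
    max_refractions: int = 4        # refraction recursion depth
```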
7. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method for automatically distributing light based on a point cloud scene according to any one of claims 1 to 6 when executing the computer program.
CN202310528664.4A 2023-05-11 2023-05-11 Automatic light distribution method based on point cloud scene and electronic equipment Pending CN116704102A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310528664.4A CN116704102A (en) 2023-05-11 2023-05-11 Automatic light distribution method based on point cloud scene and electronic equipment


Publications (1)

Publication Number Publication Date
CN116704102A true CN116704102A (en) 2023-09-05

Family

ID=87834826



Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117440554A (en) * 2023-12-20 2024-01-23 深圳市正远科技有限公司 Scene modeling method and system for realizing LED light source based on digitization
CN117440554B (en) * 2023-12-20 2024-04-02 深圳市正远科技有限公司 Scene modeling method and system for realizing LED light source based on digitization
CN118070403A (en) * 2024-04-17 2024-05-24 四川省建筑设计研究院有限公司 BIM-based method and system for automatically generating lamp loop influence area space

Similar Documents

Publication Publication Date Title
CN112150575B (en) Scene data acquisition method, model training method and device and computer equipment
US10467805B2 (en) Image rendering of laser scan data
CN111968215B (en) Volume light rendering method and device, electronic equipment and storage medium
CN116704102A (en) Automatic light distribution method based on point cloud scene and electronic equipment
US6529192B1 (en) Method and apparatus for generating mesh models of 3D objects
US7199793B2 (en) Image-based modeling and photo editing
US7474803B2 (en) System and method of three-dimensional image capture and modeling
CN112633657B (en) Construction quality management method, device, equipment and storage medium
CN114419240B (en) Illumination rendering method and device, computer equipment and storage medium
CN113674389B (en) Scene rendering method and device, electronic equipment and storage medium
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
CN115205492A (en) Method and device for real-time mapping of laser beam on three-dimensional model
Favorskaya et al. Realistic 3D-modeling of forest growth with natural effect
CN116664752B (en) Method, system and storage medium for realizing panoramic display based on patterned illumination
US20220392121A1 (en) Method for Improved Handling of Texture Data For Texturing and Other Image Processing Tasks
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
CN114139249A (en) Automatic light distribution method and device based on illusion engine and electronic equipment
Popovski et al. Comparison of rendering processes on 3D model
CN112927352A (en) Three-dimensional scene local area dynamic flattening method and device based on flattening polygon
CN117333598B (en) 3D model rendering system and method based on digital scene
CN116824082B (en) Virtual terrain rendering method, device, equipment, storage medium and program product
CN118015197B (en) Live-action three-dimensional logic singulation method and device and electronic equipment
Guruprasad Object 3D Effect With Photometric Lighting With Real-Time View
US20240153207A1 (en) Systems, methods, and media for filtering points of a point cloud utilizing visibility factors to generate a model of a scene
US20240193864A1 (en) Method for 3d visualization of sensor data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination