CN116563476B - Cloud image display method and device, electronic equipment and computer readable storage medium - Google Patents
Cloud image display method and device, electronic equipment and computer readable storage medium
- Publication number
- CN116563476B CN116563476B CN202310837894.9A CN202310837894A CN116563476B CN 116563476 B CN116563476 B CN 116563476B CN 202310837894 A CN202310837894 A CN 202310837894A CN 116563476 B CN116563476 B CN 116563476B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
- G06T3/067—Reshaping or unfolding 3D tree structures onto 2D planes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The application provides a cloud image display method and device, an electronic device, and a computer readable storage medium. The area of each region on a three-dimensional model is obtained based on a two-dimensional model produced by mapping the three-dimensional model to a two-dimensional space, and the number of sampling points in each region is obtained based on its area: the larger the area of a region, the greater the number of sampling points in it, and the smaller the area, the fewer the sampling points. Field variable sampling points in each region of the three-dimensional model are then sampled from the field variables of the three-dimensional model according to the number of sampling points in the region, and a field variable cloud image of the three-dimensional model is displayed by rendering the field variable sampling points. As a result, the field variable sampling points are distributed uniformly across the regions of the displayed cloud image, achieving the aim of improving the cloud image display function for field variables.
Description
Technical Field
The present application relates to the field of machine vision, and in particular, to a cloud image display method, a cloud image display device, an electronic device, and a computer readable storage medium.
Background
In some scenarios, in order to visualize the field variables of a virtual three-dimensional model, a cloud image is typically used to display the field variables of individual points in the three-dimensional model.
However, the cloud image display function for field variables in the prior art remains to be improved.
Disclosure of Invention
The application provides a cloud picture display method, a cloud picture display device, electronic equipment and a computer readable storage medium, and aims to solve the problem of how to improve the cloud picture display function of field variables.
In order to achieve the above object, the present application provides the following technical solutions:
the first aspect of the present application provides a cloud image display method, including:
acquiring the area of each region on the three-dimensional model based on a two-dimensional model obtained by mapping the three-dimensional model to a two-dimensional space;
obtaining the number of sampling points in each region based on the area of the region, the number being positively correlated with the area;
sampling field variable sampling points in each region of the three-dimensional model from field variables of the three-dimensional model based on the number of sampling points in the region;
and displaying the field variable cloud picture of the three-dimensional model by rendering the field variable sampling points.
In some implementations, the obtaining the area of each region on the three-dimensional model based on the two-dimensional model obtained by mapping the three-dimensional model to the two-dimensional space includes:
obtaining mapping parameters from the three-dimensional model to the two-dimensional space, wherein the mapping parameters comprise two-dimensional point data, regions formed by the two-dimensional point data, and a mapping ratio; the two-dimensional point data are the data obtained by mapping the three-dimensional point data on the three-dimensional model to the two-dimensional space, and the mapping ratio is the ratio of the distance between any two three-dimensional point data to the distance between the corresponding two-dimensional point data;
and calculating the area of the region on the three-dimensional model based on the mapping proportion and the two-dimensional point data in the region.
In some implementations, the three-dimensional model includes a Mesh model;
the obtaining the mapping parameters of the three-dimensional model to the two-dimensional space comprises the following steps:
dividing the Mesh model into patches;
merging the patches that meet a preset merging condition to obtain the three-dimensional model with divided regions;
and extracting mapping parameters of the three-dimensional model to a two-dimensional space.
In some implementations, the merging the patches that meet a preset merging condition to obtain the three-dimensional model of the divided region includes:
merging adjacent patches whose included angle is smaller than a first preset threshold into one patch to obtain secondarily divided regions, wherein any one secondarily divided region serves as a seed;
searching for secondarily divided patches that are adjacent to the seed and whose included angle with it is smaller than a second preset threshold, as target patches;
and merging the target patches with the seed to obtain tertiary divided regions.
In some implementations, the extracting mapping parameters of the three-dimensional model to two-dimensional space includes:
calculating a normal vector of a target region, wherein the target region is any one of the tertiary divided regions;
constructing a plane containing the normal vector;
projecting all patches in the target region onto the plane;
calculating the bounding box of the projection on the plane as the two-dimensional space;
and projecting each three-dimensional point datum on the patches in the target region into the two-dimensional space to obtain the projected two-dimensional point data of the target region.
In some implementations, the method further comprises:
and taking the ratio of the two-dimensional point data to the size of the bounding box as the mapping ratio.
In some implementations, the three-dimensional model includes a BRep model;
the obtaining the mapping parameters of the three-dimensional model to the two-dimensional space comprises the following steps:
extracting the three-dimensional point data, the two-dimensional point data and the corresponding relation between the three-dimensional point data and the two-dimensional point data from model data of the BRep model;
based on the extracted data, the mapping parameters are calculated.
A second aspect of the present application provides a display device of a cloud image, including:
the first acquisition module is used for acquiring the area of each region on the three-dimensional model based on a two-dimensional model obtained by mapping the three-dimensional model to a two-dimensional space;
a second acquisition module, configured to acquire the number of sampling points in each region based on the area of the region, the number being positively correlated with the area;
a sampling module for sampling field variable sampling points in each region of the three-dimensional model from field variables of the three-dimensional model based on the number of sampling points in the region;
and the display module is used for displaying the field variable cloud picture of the three-dimensional model by rendering the field variable sampling points.
A third aspect of the present application provides an electronic apparatus comprising:
a memory and a processor;
the memory is used for storing an application program, and the processor is used for running the application program to execute the cloud image display method provided by the first aspect of the application.
A fourth aspect of the application provides a computer readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the cloud image display method provided by the first aspect of the application.
A fifth aspect of the application provides a computer program product comprising computer programs/instructions which when executed by a processor implement the method of displaying a cloud image provided by the first aspect of the application.
According to the cloud image display method and device, electronic device, and computer readable storage medium provided by the application, the area of each region on the three-dimensional model is obtained based on the two-dimensional model produced by mapping the three-dimensional model to the two-dimensional space, and the number of sampling points in each region is obtained based on its area, so that the larger the area of a region, the greater the number of sampling points in it, and the smaller the area, the fewer the sampling points. Field variable sampling points in each region of the three-dimensional model are sampled from the field variables of the three-dimensional model based on the number of sampling points in the region, and the field variable cloud image of the three-dimensional model is displayed by rendering the field variable sampling points.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the application, and that a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a cloud image display method according to an embodiment of the present application;
FIG. 2 is an exemplary diagram of a BRep model mapped to a two-dimensional space;
fig. 3 is an exemplary diagram of an extracted Mesh model;
fig. 4 is an exemplary diagram of the secondarily divided regions of the extracted Mesh model;
fig. 5 is an exemplary diagram of the tertiary divided regions of the extracted Mesh model;
fig. 6 is a flowchart for extracting mapping parameters of a Mesh model;
fig. 7 is a schematic diagram of a cloud image display device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. The terminology used in the following examples is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of the application and the appended claims, the singular forms "a," "an," and "the" are intended to include expressions such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in embodiments of the present application, "one or more" means one, two, or more than two. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In the embodiments of the present application, "a plurality of" means two or more. It should be noted that, in the description of the embodiments of the present application, the terms "first," "second," and the like are used only for distinguishing between descriptions and are not to be understood as indicating or implying relative importance or a sequential order.
Field variable simulation of a virtual model can be understood as obtaining the field variable at each point of the virtual model by means of simulation. To display the field variables of the virtual model intuitively, they are typically displayed in the form of a cloud image. That is, the field variable at each point of the virtual model is treated as one point in a point cloud (simply referred to as a field variable point), and the field variables of all points constitute the cloud image.
In some implementations, a virtual model is first geometrically parsed, then field variables are sampled based on the geometry parsing results, and sampled field variable sampling points are displayed as a cloud image.
The inventors found in the course of research that, because the geometric analysis result is obtained from the virtual model itself, for an irregular virtual model the density of the patches into which the planes of different areas are divided differs in the geometric analysis result. For example, in one virtual model, a first plane may be divided into a first number of triangles (one example of a patch), while a second plane of substantially equal area is divided into a second number of triangles, and the first number may differ greatly from the second number. In this case, the sampling density of the field variables also differs; that is, the field variables are sampled non-uniformly across different areas of the virtual model. As a result, the field variable points in the displayed cloud image are distributed non-uniformly, so the cloud image cannot truly reflect the field variable distribution of the virtual model.
To solve the problem of non-uniform sampling of field variables across different areas of a virtual model, cloud image display of field variables can be realized by reconstructing the virtual model: after geometric analysis, the virtual model is re-divided into triangular patches, the patches of the original model are subdivided into more uniform patches, and the field variables corresponding to the vertices of the new model are then obtained from the field variables. However, the inventors found that with this approach the number of patches is difficult to control, which easily leads to a sharp increase in the data amount of the cloud image.
It can be seen that the cloud image display function for field variables in the prior art remains to be improved.
The embodiment of the application provides a cloud image display method, which aims to realize uniform sampling of field variables, so that the cloud image displayed based on the sampled field variables is closer to the actual distribution of the field variables, while reducing the data amount of the displayed cloud image.
In the following embodiments of the present application, a virtual model may be understood as a virtual three-dimensional model, and may also be referred to simply as a three-dimensional model. One example of a virtual model is a surface model, which is used for illustration below.
Fig. 1 shows a cloud image display method provided by an embodiment of the present application, which includes the following steps:
s11, obtaining mapping parameters of the virtual model to the two-dimensional plane.
The mapping parameters comprise two-dimensional point data, an area formed by the two-dimensional point data and a mapping proportion.
Each point on the virtual model is referred to as a three-dimensional point, and data corresponding to the three-dimensional point, such as position data of the three-dimensional point, is referred to as three-dimensional point data. The data after the three-dimensional point data is mapped to the two-dimensional space is referred to as two-dimensional point data.
The relationship between the three-dimensional point data and the virtual model is called the spatial relationship, the relationship between the mapped two-dimensional point data and the two-dimensional space is called the mapping relationship, and the ratio between the spatial relationship and the mapping relationship is called the mapping ratio. In some implementations, the mapping ratio is the ratio of the distance between any two three-dimensional point data to the distance between the corresponding two-dimensional point data; in other implementations, it is the inverse, i.e., the ratio of the distance between any two two-dimensional point data to the distance between the corresponding three-dimensional point data.
The virtual model is divided into regions, and on the principle that three-dimensional points are mapped to a two-dimensional space to obtain two-dimensional point data, the regions on the virtual model are mapped to the two-dimensional space to obtain regions formed by two-dimensional point data. That is, assuming a first region is any region on the virtual model, the second region obtained by mapping the first region to the two-dimensional space is constituted by the two-dimensional point data obtained by mapping the three-dimensional point data in the first region.
In some implementations, the virtual model is a boundary representation (BRep) model. It is understood that the model data of the BRep model includes the three-dimensional point data and two-dimensional point data of each divided region in the BRep model.
Taking fig. 2 as an example, surface model A is a BRep model; a point (x, y, z) on surface model A is mapped to the point (u, v) in the two-dimensional space. Both (x, y, z) and (u, v), together with their correspondence, are included in the model data.
In some implementations, to facilitate subsequent calculations, both the two-dimensional point data and the three-dimensional point data in the model data are normalized to within a predetermined interval range, e.g., (0, 1).
It is understood that, taking the data as position data for example, the mapping ratio is the ratio of the distance between any two three-dimensional point data to the distance between the corresponding two-dimensional point data. Again taking fig. 2 as an example, the mapping ratio is the ratio of the first distance between (x1, y1, z1) and (x2, y2, z2) to the second distance between (u1, v1) and (u2, v2).
The mapping ratio may or may not be included in the model data of the BRep model; when it is not included, it can be calculated as in the above example.
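When the mapping ratio is not stored in the model data, it can be computed from a corresponding pair of points as described above. The following is a minimal Python sketch under that assumption; the point pairs are hypothetical and the function name is not from the patent:

```python
import math

def mapping_ratio(p3_a, p3_b, p2_a, p2_b):
    """Ratio of the distance between two 3D points (x, y, z) to the
    distance between their mapped 2D counterparts (u, v)."""
    return math.dist(p3_a, p3_b) / math.dist(p2_a, p2_b)

# Hypothetical pair: the 3D distance is 5.0 and the mapped 2D distance is 0.5,
# so the mapping ratio is 10.0.
ratio = mapping_ratio((0, 0, 0), (3, 4, 0), (0.0, 0.0), (0.5, 0.0))
```

With this convention, multiplying a 2D distance by the ratio recovers the corresponding 3D distance.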
In other implementations, the virtual model is a Mesh model. In this case, the Mesh model is first divided into regions, and the mapping parameters are then obtained based on the divided regions. Taking the Mesh model shown in fig. 3 (a model without divided regions) as an example, the specific flow is shown in fig. 6 and includes the following steps:
s111, acquiring each patch in the Mesh model.
It can be understood that the patches can be regarded as the initially divided regions. The patches may be obtained by dividing in advance and read in this step, or they may be obtained by dividing the Mesh model in this step. The division into patches is referred to as the first division.
S112, merging adjacent patches whose included angle is smaller than a first preset threshold into one patch to obtain secondarily divided regions.
An example of the first preset threshold is 30 degrees; the secondarily divided regions obtained using this threshold are shown in fig. 4.
The manner in which the included angle between adjacent patches is calculated is not described in detail herein.
S113, performing a third division on the secondarily divided regions.
The specific flow of the third division is as follows: any one patch is selected as a seed; patches that are adjacent to the seed and whose included angle with it is smaller than a second preset threshold are searched for according to the topological relation, and the found patches are called target patches. The target patches and the seed are merged to form a new seed, and the process repeats until the seed no longer grows; the seed at that point serves as a tertiary divided region. Patches that have neither served as seeds nor been merged are then taken as new seeds for further searching and merging of target patches, until no patches remain, yielding the tertiary divided regions. An example of the second preset threshold is 30 degrees.
It is understood that the first preset threshold value and the second preset threshold value may be the same or different.
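The seed-growing flow above can be sketched as follows. This is an illustrative Python sketch only: the patch normals, adjacency structure, and greedy growth order are assumptions for demonstration, not the patent's actual implementation.

```python
import math

def angle_deg(n1, n2):
    """Included angle between two unit normal vectors, in degrees."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

def grow_regions(normals, adjacency, threshold_deg=30.0):
    """Greedy seed growing: start from an unassigned patch and repeatedly
    absorb adjacent patches whose normal deviates by less than the
    threshold, until the seed no longer grows; repeat for leftover patches."""
    region_of = {}
    regions = []
    for start in range(len(normals)):
        if start in region_of:
            continue
        seed = [start]
        region_of[start] = len(regions)
        frontier = [start]
        while frontier:
            patch = frontier.pop()
            for nb in adjacency.get(patch, ()):
                if nb not in region_of and angle_deg(normals[patch], normals[nb]) < threshold_deg:
                    region_of[nb] = len(regions)
                    seed.append(nb)
                    frontier.append(nb)
        regions.append(sorted(seed))
    return regions

# Two coplanar patches plus one at 90 degrees yield two regions.
normals = [(0, 0, 1), (0, 0, 1), (1, 0, 0)]
adjacency = {0: [1], 1: [0, 2], 2: [1]}
regions = grow_regions(normals, adjacency)  # [[0, 1], [2]]
```

The same routine covers both the second and third divisions, since each merges patches against an angle threshold.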
The tertiary divided regions obtained from the example of fig. 4 are shown in fig. 5. For convenience of description, any one of the tertiary divided regions is referred to as a target region.
The secondary division yields regions of consistent, continuous patches, and the tertiary division yields regions that are easy to parameterize in a plane, so regions suitable for planar parameterization can be obtained quickly based on this division scheme.
It is understood that the second division and the third division are each aimed at merging the patches satisfying a preset condition, and splitting the process into a second and a third division is merely an example and is not limiting; for example, only the second division or only the third division may be performed.
S114, after the tertiary divided regions of the virtual model are obtained, extracting the mapping parameters of the virtual model to the two-dimensional space.
Specifically, a normal vector of the target region is calculated, and a plane containing the normal vector is constructed. All patches in the target region are projected onto the plane, and the bounding box of the projection on the plane is calculated. The bounding box is taken as the two-dimensional space, and each three-dimensional point datum on the patches in the target region is projected into the two-dimensional space to obtain the projected two-dimensional point data of the target region. The ratio of the two-dimensional point data to the size of the bounding box is taken as the mapping ratio.
After the two-dimensional point data are obtained, normalization processing is performed.
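The projection and normalization steps above can be sketched with NumPy. This is a hedged sketch: the patent does not fix how the in-plane axes are constructed or how normalization is done, so the axis construction and the bounding-box normalization convention below are assumptions.

```python
import numpy as np

def project_region(points3d, normal):
    """Project a region's 3D points onto a plane derived from the region's
    normal vector, then normalize into the projected bounding box to obtain
    (u, v) coordinates in [0, 1]."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    # Build two in-plane axes orthogonal to the normal (assumed convention).
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u_axis = np.cross(n, helper)
    u_axis /= np.linalg.norm(u_axis)
    v_axis = np.cross(n, u_axis)
    pts = np.asarray(points3d, dtype=float)
    uv = np.stack([pts @ u_axis, pts @ v_axis], axis=1)
    lo, hi = uv.min(axis=0), uv.max(axis=0)          # projected bounding box
    size = np.where(hi - lo > 0, hi - lo, 1.0)       # guard degenerate extents
    return (uv - lo) / size, (lo, hi)

# A 2x2 square in the z=0 plane with normal (0, 0, 1) maps onto the unit square.
uv, box = project_region([(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)], (0, 0, 1))
```

Dividing by the bounding-box size here plays the role of the normalization step described above.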
S12, calculating the area of each region on the virtual model based on the mapping ratio and the two-dimensional point data in each region.
It is understood that, for any region, the area of the region in the two-dimensional space can be obtained based on the two-dimensional point data in the region, and the area of the corresponding region on the virtual model can then be obtained based on the mapping ratio.
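Since the mapping ratio relates 3D distances to 2D distances, areas scale by its square. A minimal sketch under the assumption that the region's boundary is available as a simple 2D polygon (the shoelace formula is one way to obtain the 2D area; the patent does not prescribe it):

```python
def polygon_area_2d(uv_points):
    """Shoelace area of a simple polygon given by its (u, v) boundary points."""
    area = 0.0
    n = len(uv_points)
    for i in range(n):
        x1, y1 = uv_points[i]
        x2, y2 = uv_points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def region_area_3d(uv_points, mapping_ratio):
    """Area of the corresponding region on the model: the mapping ratio
    scales distances, so areas scale by the ratio squared."""
    return polygon_area_2d(uv_points) * mapping_ratio ** 2

# Unit square in (u, v) with mapping ratio 10 -> 100 units of 3D area.
area = region_area_3d([(0, 0), (1, 0), (1, 1), (0, 1)], 10.0)
```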
S13, obtaining the number of sampling points in each region based on the area of each region on the virtual model.
In some implementations, the number of sampling points is calculated from the area and the number of samples per unit area.
The number of sampling points is obtained based on the area of each region, so that more points are sampled in regions with larger areas and fewer points in regions with smaller areas; that is, the number is positively correlated with the area.
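The area-to-count rule can be as simple as multiplying the area by a configured sampling density. The linear rule and the floor of one point per region below are hypothetical choices for illustration:

```python
def sample_count(area, samples_per_unit_area, minimum=1):
    """Number of sampling points for a region: positively correlated with
    its area (a hypothetical linear rule with a floor of one point)."""
    return max(minimum, round(area * samples_per_unit_area))

# Larger regions get more points; tiny regions still get at least one.
counts = [sample_count(a, 0.5) for a in (100.0, 8.0, 0.5)]  # [50, 4, 1]
```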
S14, sampling the field variable sampling points in each region of the virtual model from the field variables of the virtual model based on the number of the sampling points in each region.
In some implementations, the field variables of the virtual model are pre-computed. In this step, field variable data points are sampled from the field variables.
It will be appreciated that each point on the virtual model corresponds to a field variable, and, given the number of sampling points for a region, that number of field variable points is uniformly sampled in the region as the field variable sampling points of that region.
In some implementations, the field variable sampling points are stored by the region to which they belong, with each region stored as one data lattice.
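Sampling a region's field variables down to its allotted count can be sketched as follows. The representation of a region's field as a list of (position, value) pairs and the use of pseudo-random uniform sampling are assumptions; the patent only requires that the sampling within a region be uniform.

```python
import random

def sample_field_points(field, count, rng=None):
    """Uniformly sample `count` field-variable points from a region.
    `field` is assumed to be a list of ((u, v), value) pairs for the region."""
    rng = rng or random.Random(0)
    if count >= len(field):
        return list(field)
    return rng.sample(field, count)

# A hypothetical 10x10 grid of field values for one region, sampled to 25 points.
region_field = [((u / 10, v / 10), u + v) for u in range(10) for v in range(10)]
lattice = sample_field_points(region_field, 25)   # the region's data lattice
```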
And S15, displaying a field variable cloud picture of the virtual model by rendering field variable sampling points.
In some implementations, the field variable data lattice of each region is formed into a texture unit; when the model is rendered, the texture unit is bound to the corresponding region of the model, and the simulated cloud image is finally rendered.
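Binding a texture unit during rendering is renderer-specific; the sketch below only shows the preparatory step of rasterizing a region's data lattice into a normalized value grid that could serve as such a texture. The resolution and min-max normalization are assumptions, not the patent's prescription.

```python
import numpy as np

def lattice_to_texture(lattice, resolution=64):
    """Rasterize a region's field-variable lattice (list of ((u, v), value)
    pairs with u, v in [0, 1]) into a square texture of normalized values,
    suitable for a color-mapped cloud image."""
    tex = np.zeros((resolution, resolution), dtype=np.float32)
    values = [val for _, val in lattice]
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0
    for (u, v), val in lattice:
        i = min(int(v * resolution), resolution - 1)
        j = min(int(u * resolution), resolution - 1)
        tex[i, j] = (val - vmin) / span   # normalized for a color map
    return tex

tex = lattice_to_texture([((0.0, 0.0), 0.0), ((0.99, 0.99), 10.0)])
```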
As can be seen from the flow shown in fig. 1, the virtual model is mapped to the two-dimensional space, the area of each region in the model is obtained based on the mapping result, and the number of samples for each region is obtained based on the area, so that the distribution of sampling points on the virtual model can be more uniform. The field variable cloud image displayed based on these sampling points is thus closer to the actual distribution of the field variables on the virtual model.
Moreover, in this area-based sampling of field variables, the relation between the number of sampling points and the area can be preconfigured; that is, the sampling quantity is controllable and no excessive data amount is introduced, so the data amount of the cloud image can be kept at a low level while a field variable distribution closer to the actual one is obtained.
It is to be understood that S11-S12 are only one convenient implementation of obtaining the areas of the regions on the three-dimensional model based on the two-dimensional model obtained by mapping the three-dimensional model to the two-dimensional space; the areas may also be obtained in other ways, for example based on the topological structure of the two-dimensional model.
The embodiment of the application also discloses a cloud image display device, as shown in fig. 7, comprising: the device comprises a first acquisition module, a second acquisition module, a sampling module and a display module.
The first acquisition module is configured to acquire the area of each region on the three-dimensional model based on a two-dimensional model obtained by mapping the three-dimensional model to a two-dimensional space. The second acquisition module is configured to acquire the number of sampling points in each region based on the area of the region, the number being positively correlated with the area.
The sampling module is used for sampling the field variable sampling points in each area of the three-dimensional model from the field variable of the three-dimensional model based on the number of the sampling points in the area. The display module is used for displaying the field variable cloud picture of the three-dimensional model by rendering the field variable sampling points.
In some implementations, the first acquisition module obtains the area of each region on the three-dimensional model, based on the two-dimensional model obtained by mapping the three-dimensional model to the two-dimensional space, in the following manner: obtaining mapping parameters from the three-dimensional model to the two-dimensional space, wherein the mapping parameters comprise two-dimensional point data, regions formed by the two-dimensional point data, and a mapping ratio, the two-dimensional point data being the data obtained by mapping the three-dimensional point data on the three-dimensional model to the two-dimensional space, and the mapping ratio being the ratio of the distance between any two three-dimensional point data to the distance between the corresponding two-dimensional point data; and calculating the area of each region on the virtual model based on the mapping ratio and the two-dimensional point data in the region.
In some implementations, the three-dimensional model includes a Mesh model. In this case, the first acquisition module obtains the mapping parameters of the three-dimensional model to the two-dimensional space by: dividing the Mesh model into patches, merging the patches that meet a preset merging condition to obtain the three-dimensional model with divided regions, and extracting the mapping parameters of the three-dimensional model to the two-dimensional space.
In some implementations, the first acquisition module merges the patches that satisfy the preset merging condition to obtain the three-dimensional model with divided regions as follows: grouping adjacent patches whose included angle is smaller than a first preset threshold into one patch to obtain secondarily divided regions, any one of which serves as a seed; searching for secondarily divided patches that are adjacent to the seed and whose included angle with the seed is smaller than a second preset threshold, as target patches; and merging the target patches with the seed to obtain tertiary divided regions.
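The seed-and-merge step above resembles classic region growing over mesh patches. The following sketch assumes each patch carries a unit normal and an adjacency list; the data layout and function names are illustrative assumptions, not the patent's implementation.

```python
import math

def normal_angle(n1, n2):
    """Angle in radians between two unit normal vectors."""
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp against rounding error

def grow_region(seed, normals, adjacency, max_angle_deg):
    """Grow a region from a seed patch, absorbing adjacent patches whose
    normals deviate by less than the angle threshold."""
    limit = math.radians(max_angle_deg)
    region, frontier = {seed}, [seed]
    while frontier:
        patch = frontier.pop()
        for neighbor in adjacency[patch]:
            if neighbor not in region and \
               normal_angle(normals[patch], normals[neighbor]) < limit:
                region.add(neighbor)
                frontier.append(neighbor)
    return region
```

With a tight threshold the region stops at sharp creases (large normal deviation); loosening the threshold lets it absorb more of the surface, mirroring the role of the two preset thresholds in the text.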
In some implementations, the first acquisition module extracts the mapping parameters for mapping the three-dimensional model to the two-dimensional space as follows: calculating a normal vector of a target region, the target region being any one of the tertiary divided regions; constructing a plane containing the normal vector; projecting all patches in the target region onto the plane; calculating the bounding box of the projection on the plane as the two-dimensional space; and projecting each three-dimensional point on the patches in the target region into the two-dimensional space to obtain the projected two-dimensional point data of the target region.
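One way to realize the projection step is sketched below, under the assumption that the projection plane is taken perpendicular to the region's normal (an interpretation of the translated text; the basis construction and function names are assumptions).

```python
import math

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def project_region(points_3d, normal):
    """Project 3D points onto a plane perpendicular to `normal`; return the
    2D coordinates and their bounding box as (minx, miny, maxx, maxy)."""
    # Pick a helper axis not parallel to the normal, then build an
    # orthonormal in-plane basis (u, v).
    helper = (1.0, 0.0, 0.0) if abs(normal[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = _normalize(_cross(helper, normal))
    v = _cross(normal, u)  # already unit length
    points_2d = [(sum(p[i] * u[i] for i in range(3)),
                  sum(p[i] * v[i] for i in range(3))) for p in points_3d]
    xs, ys = zip(*points_2d)
    return points_2d, (min(xs), min(ys), max(xs), max(ys))
```

The bounding box then plays the role of the two-dimensional space into which each three-dimensional point of the region is mapped.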
In some implementations, extracting the mapping parameters for mapping the three-dimensional model to the two-dimensional space further includes: taking the ratio of the two-dimensional point data to the size of the bounding box as the mapping ratio.
In some implementations, the three-dimensional model includes a boundary representation (BRep) model, and the first acquisition module acquires the mapping parameters for mapping the three-dimensional model to the two-dimensional space as follows: extracting the three-dimensional point data, the two-dimensional point data, and the correspondence between them from the model data of the BRep model; and calculating the mapping parameters based on the extracted data.
The cloud image display device shown in Fig. 7 can display a cloud image whose field variable distribution is closer to the actual field variable distribution of the virtual model, with a smaller data volume, which helps save computing and display resources.
The present application provides an electronic device including a memory and a processor. The memory is configured to store an application program, and the processor is configured to run the application program so as to execute the cloud image display method provided by the present application.
The present application provides a computer readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the cloud image display method provided by the present application.
The present application provides a computer program product comprising computer programs/instructions which, when executed by a processor, implement the cloud image display method provided by the present application. If implemented in the form of software functional units and sold or used as a stand-alone product, the functions of the methods of the embodiments of the present application may be stored on a storage medium readable by a computing device. Based on such understanding, the part of the present application that contributes over the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
In this specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others; for the same or similar parts, the embodiments may be referred to one another.
Claims (8)
1. A cloud image display method, characterized by comprising:
acquiring the area of each region on the three-dimensional model based on a two-dimensional model obtained by mapping the three-dimensional model to a two-dimensional space;
obtaining a number of sampling points in the area based on the area, the number being positively correlated with the area;
sampling field variable sampling points in each region of the three-dimensional model from field variables of the three-dimensional model based on the number of sampling points in the region;
displaying a field variable cloud image of the three-dimensional model by rendering the field variable sampling points;
the obtaining the area of each region on the three-dimensional model based on the two-dimensional model obtained by mapping the three-dimensional model to the two-dimensional space comprises the following steps:
obtaining mapping parameters from a three-dimensional model to a two-dimensional space, wherein the mapping parameters comprise two-dimensional point data, regions formed by the two-dimensional point data, and a mapping ratio, the two-dimensional point data being the data obtained by mapping three-dimensional point data on the three-dimensional model to the two-dimensional space, and the mapping ratio being the ratio of the distance between any two three-dimensional points to the distance between the corresponding two-dimensional points;
calculating the area of each region on the three-dimensional model based on the mapping ratio and the two-dimensional point data in the region;
the three-dimensional model comprises a patch (Mesh) model;
the obtaining the mapping parameters of the three-dimensional model to the two-dimensional space comprises the following steps:
dividing the Mesh model into patches;
merging the patches that satisfy a preset merging condition to obtain a three-dimensional model with divided regions;
and extracting mapping parameters of the three-dimensional model to a two-dimensional space.
2. The method according to claim 1, wherein the merging the patches satisfying a preset merging condition to obtain the three-dimensional model of the divided region includes:
grouping adjacent patches whose included angle is smaller than a first preset threshold into one patch to obtain secondarily divided regions, wherein any one of the secondarily divided regions serves as a seed;
searching for secondarily divided patches that are adjacent to the seed and whose included angle with the seed is smaller than a second preset threshold, as target patches;
and merging the target patches with the seed to obtain tertiary divided regions.
3. The method of claim 2, wherein the extracting mapping parameters of the three-dimensional model to two-dimensional space comprises:
calculating a normal vector of a target region, wherein the target region is any one of the tertiary divided regions;
constructing a plane containing the normal vector;
projecting all patches in the target area to the plane;
calculating a projected bounding box on the plane as the two-dimensional space;
and projecting each three-dimensional point on the patches in the target region into the two-dimensional space to obtain the projected two-dimensional point data of the target region.
4. A method according to claim 3, further comprising:
and taking the ratio of the two-dimensional point data to the size of the bounding box as the mapping ratio.
5. The method of claim 1, wherein the three-dimensional model comprises a boundary representation BRep model;
the obtaining the mapping parameters of the three-dimensional model to the two-dimensional space comprises the following steps:
extracting the three-dimensional point data, the two-dimensional point data and the corresponding relation between the three-dimensional point data and the two-dimensional point data from model data of the BRep model;
based on the extracted data, the mapping parameters are calculated.
6. A cloud image display device, comprising:
the first acquisition module is used for acquiring the area of each region on the three-dimensional model based on a two-dimensional model obtained by mapping the three-dimensional model to a two-dimensional space;
a second acquisition module for acquiring a number of sampling points in the area based on the area, the number being positively correlated with the area;
a sampling module for sampling field variable sampling points in each region of the three-dimensional model from field variables of the three-dimensional model based on the number of sampling points in the region;
the display module is used for displaying a field variable cloud image of the three-dimensional model by rendering the field variable sampling points;
the first obtaining module is specifically configured to:
obtaining mapping parameters from a three-dimensional model to a two-dimensional space, wherein the mapping parameters comprise two-dimensional point data, regions formed by the two-dimensional point data, and a mapping ratio, the two-dimensional point data being the data obtained by mapping three-dimensional point data on the three-dimensional model to the two-dimensional space, and the mapping ratio being the ratio of the distance between any two three-dimensional points to the distance between the corresponding two-dimensional points;
calculating the area of each region on the three-dimensional model based on the mapping ratio and the two-dimensional point data in the region;
the three-dimensional model comprises a patch (Mesh) model;
the first acquisition module acquires the mapping parameters for mapping the three-dimensional model to a two-dimensional space by:
dividing the Mesh model into patches;
merging the patches that satisfy a preset merging condition to obtain a three-dimensional model with divided regions;
and extracting mapping parameters for mapping the three-dimensional model to a two-dimensional space.
7. An electronic device, comprising:
a memory and a processor;
the memory is used for storing an application program, and the processor is used for running the application program to execute the cloud image display method according to any one of claims 1 to 5.
8. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of displaying a cloud image according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310837894.9A CN116563476B (en) | 2023-07-10 | 2023-07-10 | Cloud image display method and device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116563476A CN116563476A (en) | 2023-08-08 |
CN116563476B true CN116563476B (en) | 2023-09-12 |
Family
ID=87496911
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310837894.9A Active CN116563476B (en) | 2023-07-10 | 2023-07-10 | Cloud image display method and device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116563476B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103605988A (en) * | 2013-12-06 | 2014-02-26 | 康江科技(北京)有限责任公司 | Foundation cloud atlas classification method based on spatial pyramid random mapping |
CN109325500A (en) * | 2018-08-02 | 2019-02-12 | 佛山科学技术学院 | A kind of method for extracting characteristics of three-dimensional model and device based on Area-weighted |
CN110442925A (en) * | 2019-07-16 | 2019-11-12 | 中南大学 | A kind of three-dimensional visualization method and system based on the reconstruct of real-time dynamic partition |
CN114359226A (en) * | 2022-01-05 | 2022-04-15 | 南京邮电大学 | Three-dimensional model set visual area extraction method based on hierarchical superposition and region growth |
CN116205978A (en) * | 2023-02-22 | 2023-06-02 | 中冶赛迪信息技术(重庆)有限公司 | Method, device, equipment and storage medium for determining mapping image of three-dimensional target object |
CN116385622A (en) * | 2023-05-26 | 2023-07-04 | 腾讯科技(深圳)有限公司 | Cloud image processing method, cloud image processing device, computer and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN116563476A (en) | 2023-08-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||