CN116366827B - High-precision large-scene image processing and transmitting method and device facing web end - Google Patents
- Publication number
- CN116366827B (application CN202310077909.6A)
- Authority
- CN
- China
- Prior art keywords
- bounding volume
- view cone
- error
- nodes
- screen space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/194—Transmission of image signals
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application discloses a web-end-oriented high-precision large-scene image processing and transmission method and device. The method comprises the following steps. Step S1: perform kd-tree-based spatial partitioning on the model image generated by aerial photography to obtain a partition map. Step S2: simplify the model for each block in the partition map to obtain a simplified model map. Step S3: when the simplified model map is loaded on the web side, transmit it after deferred loading. Compared with the prior art, this kd-tree-based scene partitioning not only preserves the geometric characteristics within each partition, but also reduces concurrent resource use and provides users with random access to the mesh stream. The method achieves reasonable partitioning of large-scale scenes, accelerates large-scene loading, improves the model-simplification result, and facilitates subsequent transmission.
Description
Technical Field
The application relates to the technical field of image processing and transmission, and in particular to a web-end-oriented high-precision large-scene image processing and transmission method and a web-end-oriented high-precision large-scene image processing and transmission device.
Background
With the popularization of the internet, three-dimensional interaction technology is used to display three-dimensional models and scenes, playing an important role in enhancing the realism and immersion of the user experience. With the rapid development of three-dimensional data acquisition and modeling technology, the data volume of three-dimensional models represented by polygonal meshes keeps growing, which creates many difficulties for the digital storage, transmission and rendering of exhibits.
In fact, most existing systems suffer from delays caused by overly long data-download times and a lack of adaptability to networks and client devices, and from the inability to preserve model topology and attribute characteristics or to generate multi-level-of-detail simplified models for three-dimensional scenes; these defects seriously degrade the quality of the user experience. The root cause is that three-dimensional geometric models are transmitted as whole meshes, while the transmission speed of the network cannot keep up with user demand.
Thus, how to efficiently transmit the three-dimensional geometric information a user needs over the network, in real time, has long been an important issue in graphics research.
In recent years, with the development of technology, three-dimensional models have become easier to acquire and more widely applied. Three-dimensional model interaction, as a new mode of information interaction, provides a more intuitive means of information representation than two-dimensional interaction. However, such three-dimensional interactive applications traditionally require users to download and install native programs, whose limits on portability, cross-platform operation and information interoperability severely hamper large-scale cross-platform deployment and popularization. One solution is to develop web-oriented three-dimensional model interactive applications: applications based on the web browser are portable and universal, providing a basis for sharing three-dimensional model information across platforms.
Three-dimensional model rendering is the key supporting technology for web-oriented interactive three-dimensional applications. However, efficient rendering and dynamic interaction of web-oriented three-dimensional models are challenging. On the one hand, because JavaScript is interpreted, web browser platforms deliver lower performance than native applications, so web applications cannot efficiently handle large data volumes and complex computation. On the other hand, three-dimensional models contain more data than two-dimensional ones; as business requirements grow more complex, models also carry a large amount of attribute data such as normals and lighting, and complex interaction requirements further increase the data volume used for 3D rendering in the browser. For these two reasons, the computing power of the web platform and the data size of the 3D model together determine the user experience of a web application.
Since the advent of WebGL in 2011, third-party WebGL-based JavaScript 3D rendering engines such as Three.js and Babylon.js have emerged; by default they load an entire model file at once.
But this mechanism has the following problems in Web3D applications:
Firstly, the one-time model data loading and rendering mechanism, based on a synchronous communication mode, introduces significant loading and rendering delay: in synchronous data communication, the client can only load and render after all model data has been transmitted. Insufficient or unstable bandwidth on a mobile wireless network can therefore delay model loading.
Furthermore, for web platforms with weak computing power, rendering a large 3D model in one pass incurs considerable delay, and the browser page may even freeze.
Traditionally, large-data delay in three-dimensional applications has been addressed with progressive meshes. However, the complex model decompression they require puts heavy computational pressure on the web platform, increasing user response delay, so the approach has been abandoned in practice: end users cannot afford large-scale, complex interactive 3D applications built this way.
Therefore, an asynchronous, decentralized three-dimensional model transmission method is needed to solve the network congestion caused by one-time loading. The computational redundancy of the one-time loading-and-rendering mechanism must also be considered: beyond topology data, a three-dimensional model carries a large amount of data serving the user's dynamic interaction requirements; this data is not needed at initial rendering but only during subsequent interaction. Loading and rendering all model structure data at once therefore imposes extra loading and computing pressure on a web platform with weak computing capacity and lengthens the service's response delay.
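The division argued for above — topology needed at first render versus attribute data needed only during later interaction — can be sketched with asynchronous loading as follows (a minimal Python illustration; the `fetch` stand-in and the part names are assumptions, not part of the application):

```python
import asyncio

async def fetch(part: str) -> str:
    """Stand-in for an asynchronous network request (assumed, for illustration)."""
    await asyncio.sleep(0)  # yields control, simulating non-blocking I/O
    return f"{part}-data"

async def load_model():
    # Topology is required before the first frame, so await it first.
    topology = await fetch("topology")
    render_started = topology is not None  # initial render can begin here

    # Attribute data (normals, lighting, ...) serves later interaction only,
    # so start it in the background instead of blocking the initial render.
    task = asyncio.create_task(fetch("attributes"))
    # ... the initial render would run here, without waiting for attributes ...
    attributes = await task
    return render_started, topology, attributes

started, topo, attrs = asyncio.run(load_model())
```

The point of the sketch is the ordering: the client renders as soon as topology arrives, while the interaction-only data streams in behind it.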
In the conventional cloud-to-web transmission method, complex interaction and the rendering computation of large three-dimensional models increase communication cost and delay user service because the terminal lacks computing power. Accordingly, an efficient method of deferring model initialization and data rendering is needed to relieve the computational pressure on web terminals.
Based on the two issues discussed above, building more complex Web3D applications on a one-time model-loading and rendering mechanism can lead to serious service delay or even interruption. It is therefore important to study a three-dimensional model loading and rendering mechanism suited to an interactive web platform environment.
Disclosure of Invention
In view of the above technical problems, the application provides a web-end-oriented high-precision large-scene image processing and transmission method and a corresponding device, effectively solving the problem of streaming high-precision large scenes to the web end.
The application provides a web-end-oriented high-precision large-scene image processing and transmitting method, which comprises the following steps:
step S1: performing space division based on kd tree on the model image generated by aerial photography to obtain a division map;
step S2: model simplification is carried out on each divided block in the division map, and a simplified model map is obtained;
step S3: when the simplified model diagram is loaded on the web side, transmitting it after deferred loading;
wherein, step S2 includes the following steps:
step S21: judging whether each visible region block in the partition map has been recursively traversed down to the leaf nodes of the partition map:
step S211: if not, checking the positional relationship between the bounding volume and the view cone in the partition map; if the bounding volume lies outside the view cone, setting all of its nodes to invisible, and likewise setting its child nodes outside the view cone to invisible, thereby culling every bounding-volume node outside the view cone;
step S212: if the bounding volume intersects the view cone, continuing to recursively check the positional relationship between its child nodes and the view cone; if a child node lies inside the view cone, judging whether the error of the current node is greater than the maximum screen space error; if so, loading the highest level-of-detail model; if not, selecting an appropriate level according to the screen space error and loading the corresponding partition-map region block;
wherein the maximum screen space error is calculated as:
sse = (g / d) × k, with perspective scaling factor k = height / (2 × tan(fov / 2));
wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; k is the perspective scaling factor; height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRatio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error;
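The variable list above matches the screen-space-error formula commonly used by tiled level-of-detail renderers; the sketch below assumes that standard form (the equation image itself is absent from this text, so the exact expression is an assumption):

```python
import math

def max_screen_space_error(g: float, d: float,
                           height: float = 1080.0,
                           fov: float = math.radians(60.0)) -> float:
    """sse = (g / d) * k, with perspective scaling factor
    k = height / (2 * tan(fov / 2)).

    g      -- geometric error of the block (read from the index file)
    d      -- nearest distance from the viewpoint to the block
    height -- screen height in pixels
    fov    -- angle between the upper and lower clipping planes (radians)
    """
    k = height / (2.0 * math.tan(fov / 2.0))
    return (g / d) * k
```

Doubling the distance d halves the error, which is what pushes far-away blocks toward coarser levels of detail.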
if a child node of the bounding volume does not intersect the view cone and lies outside it, the child node is set to be invisible;
step S221: if the determination in step S21 is yes, the positional relationship of the bounding volume and the view cone is checked:
step S222: if the bounding volume is located outside the view cone, all nodes of the bounding volume are set to be invisible, and if the child nodes of the bounding volume are also outside the view cone, all nodes of the bounding volume outside the view cone are eliminated;
step S23: if the bounding volume is positioned in the view cone, judging whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if not, selecting an appropriate level for loading according to the screen space error.
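Steps S21 to S23 describe a cull-then-refine traversal; the following is a minimal sketch under stated assumptions (the node layout, the `classify` callback standing in for the bounding-volume/view-cone test, the maxSSE threshold of 16 px, and refinement by descending into children are illustrative choices, not the application's exact procedure):

```python
import math
from dataclasses import dataclass, field

MAX_SSE = 16.0  # assumed maxSSE threshold, in pixels

@dataclass
class Node:
    geometric_error: float
    distance: float                 # nearest distance from viewpoint to block
    children: list = field(default_factory=list)
    visible: bool = True

def sse(node: Node, height: float = 1080.0,
        fov: float = math.radians(60.0)) -> float:
    # Screen space error of the node, following the formula's variable list.
    return (node.geometric_error / node.distance) * (
        height / (2.0 * math.tan(fov / 2.0)))

def select(node: Node, classify, loaded: list) -> None:
    """classify(node) -> 'outside' | 'intersect' | 'inside' the view cone."""
    if classify(node) == 'outside':
        node.visible = False        # cull the node (and implicitly its subtree)
        return
    if sse(node) > MAX_SSE and node.children:
        for child in node.children:  # error too large: refine recursively
            select(child, classify, loaded)
    else:
        loaded.append(node)          # error acceptable: load this level
```

A coarse root whose error exceeds the threshold is refined into its children; children outside the view cone are marked invisible and never loaded.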
Preferably, step S1 comprises the steps of:
step S11: acquiring the mesh M and all vertices VM of the bounding volume for each object in the model image;
step S12: sequentially determining the longest axis AM among the three axes of each bounding volume;
step S13: obtaining the midpoint MVM of the vertices VM sorted along each longest axis AM;
step S14: splitting the vertices VM at the midpoint MVM into two partitions: partition ML and partition MR;
step S15: if the vertex count VML of partition ML or VMR of partition MR is greater than N, where N is the user-specified number of vertices per partition, repeating steps S11 to S14; once the operations above are complete, the partition map with a tree data structure is obtained.
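Steps S11 to S15 amount to a recursive median split along the longest bounding-box axis; a sketch in pure Python, with `n_max` playing the role of N (the sample point set is illustrative):

```python
def kd_partition(vertices, n_max):
    """Recursively split a list of (x, y, z) vertices along the longest
    axis of their bounding box, at the median, until each partition holds
    at most n_max vertices (steps S11-S15 in outline)."""
    if len(vertices) <= n_max:               # S15: small enough, stop
        return [vertices]
    extents = [max(v[a] for v in vertices) - min(v[a] for v in vertices)
               for a in range(3)]
    axis = extents.index(max(extents))       # S12: longest axis AM
    ordered = sorted(vertices, key=lambda v: v[axis])  # S13: sort along AM
    mid = len(ordered) // 2                  # S13: midpoint MVM
    # S14: split into partitions ML and MR, then recurse (S15)
    return kd_partition(ordered[:mid], n_max) + kd_partition(ordered[mid:], n_max)

# Illustrative 10 x 10 grid of 100 vertices, limited to 16 vertices per leaf.
parts = kd_partition(
    [(x * 0.1, y * 0.2, 0.0) for x in range(10) for y in range(10)], 16)
```

The returned list of leaf partitions corresponds to the blocks of the partition map; an actual implementation would keep the intermediate splits as the tree data structure.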
The application also provides a device for the above web-end-oriented high-precision large-scene image processing method, comprising: the division module, used for performing kd-tree-based spatial partitioning on the model image generated by aerial photography to obtain a division map;
the simplified model module is used for carrying out model simplification on each divided block in the division map to obtain a simplified model map;
and the transmission module is used for transmitting after the simplified model diagram is loaded on the web terminal and the loading is delayed.
Preferably, the simplified model module comprises:
the first judging module, used for judging whether each visible region block in the partition map has been recursively traversed down to the leaf nodes of the partition map;
the second judging module, configured to check the positional relationship between the bounding volume and the view cone if the judgment result of the first judging module is yes:
the loading module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting an appropriate level for loading according to the screen space error.
Preferably, the first judging module includes:
the first setting module is used for checking the position relation between the bounding volume and the view cone in the division graph if the judgment result is negative, setting all nodes of the bounding volume to be invisible if the bounding volume is positioned outside the view cone, setting sub-nodes of the bounding volume outside the view cone to be invisible, and eliminating all bounding volume nodes outside the view cone;
the first error calculation module is used for continuing recursively checking the position relation between the child node of the bounding volume and the view cone if the bounding volume intersects the view cone, continuing to judge whether the error of the current node is larger than the maximum screen space error if the child node of the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if the judgment result is negative, selecting a proper level to load the corresponding partition map region block according to the screen space error;
wherein the maximum screen space error is calculated as:
sse = (g / d) × k, with perspective scaling factor k = height / (2 × tan(fov / 2));
wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; k is the perspective scaling factor; height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRatio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error;
a child node of the bounding volume is set to be invisible if it does not intersect the view cone and lies outside it.
Preferably, the second judging module includes:
the second setting module is used for setting all nodes of the bounding volume to be invisible if the bounding volume is positioned outside the view cone, and setting all nodes of the bounding volume outside the view cone to be invisible if the child nodes of the bounding volume are also positioned outside the view cone, so that all nodes of the bounding volume outside the view cone are eliminated;
the second error calculation module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting an appropriate level for loading according to the screen space error.
The beneficial effects that this application can produce include:
1) Compared with the prior art, the web-end-oriented high-precision large-scene image processing and transmission method provided by the application is a kd-tree-based scene partitioning method; the partitioning not only preserves the geometric characteristics within each partition, but also reduces concurrent resource use and provides users with random access to the mesh stream. The method achieves reasonable partitioning of large-scale scenes, accelerates large-scene loading, improves the model-simplification result, and facilitates subsequent transmission.
2) According to the web-end-oriented high-precision large-scene image processing and transmission method, the screen space error is used for the judgment and a reasonable maxSSE parameter is set, so that blocks closer to the center of the screen are loaded at higher resolution; compared with other algorithms, the user's region of interest can be loaded more accurately.
Drawings
FIG. 1 is a schematic diagram of a result obtained after performing kd-Tree algorithm space division by a processing example in an embodiment of the present application;
FIG. 2 is a network loading time chart of a simplified model obtained by processing the results shown in FIG. 1 by the methods of the example and the comparative example provided in the present application; wherein a) is the loading time chart obtained by the comparative example method, and b) is the loading time chart obtained by the method provided in the present application;
FIG. 3 is a picture to be processed by an embodiment of the present application (Hewan village), where the red circle marks the viewpoint center of the picture;
fig. 4 is a graph showing the result of processing fig. 3 by the method provided in the embodiment of the present application, where the red circle marks the viewpoint center range; the definition of the picture within that range is significantly higher than in fig. 3;
fig. 5 is a schematic flow chart of a web-end-oriented high-precision large-scene image processing and transmitting method provided by the application;
fig. 6 is a schematic diagram of a high-precision large-scene image processing and transmitting device facing to a web end.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, based on the embodiments of the invention, which are apparent to those of ordinary skill in the art without inventive faculty, are intended to be within the scope of the invention.
Technical features not used for solving the technical problems of the present application are set or installed according to methods commonly used in the prior art and are not described herein.
Referring to fig. 5, the high-precision large scene image processing and transmitting method facing to the web end provided by the application comprises the following steps:
step S1: performing space division based on kd tree on the model image generated by aerial photography to obtain a division map;
step S2: model simplification is carried out on each divided block in the division map, and a simplified model map is obtained;
step S3: when the simplified model diagram is loaded on the web side, transmitting it after deferred loading;
wherein, step S2 includes the following steps:
step S21: judging whether each visible region block in the partition map has been recursively traversed down to the leaf nodes of the partition map:
step S211: if not, checking the positional relationship between the bounding volume and the view cone in the partition map; if the bounding volume lies outside the view cone, setting all of its nodes to invisible, and likewise setting its child nodes outside the view cone to invisible, thereby culling every bounding-volume node outside the view cone;
step S212: if the bounding volume intersects the view cone, continuing to recursively check the positional relationship between its child nodes and the view cone; if a child node lies inside the view cone, judging whether the error of the current node is greater than the maximum screen space error; if so, loading the highest level-of-detail model; if not, selecting an appropriate level according to the screen space error and loading the corresponding partition-map region block;
wherein the maximum screen space error is calculated as:
sse = (g / d) × k, with perspective scaling factor k = height / (2 × tan(fov / 2));
wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; k is the perspective scaling factor; height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRatio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error;
if a child node of the bounding volume does not intersect the view cone and lies outside it, the child node is set to be invisible;
step S221: if the determination in step S21 is yes, the positional relationship of the bounding volume and the view cone is checked:
step S222: if the bounding volume is located outside the view cone, all nodes of the bounding volume are set to be invisible, and if the child nodes of the bounding volume are also outside the view cone, all nodes of the bounding volume outside the view cone are eliminated;
step S23: if the bounding volume is positioned in the view cone, judging whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if not, selecting an appropriate level for loading according to the screen space error.
According to the characteristic that the viewpoint center is the focus of human attention, the method first loads the part close to the center and then loads the content at the edges. This improves the update and transmission speed of the scene recorded in the picture, reduces the time needed to render a clear picture, and presents the user with a clear image at the viewpoint center first, easing the discomfort of waiting while the picture renders, helping the user obtain picture information quickly, shortening the waiting time, and effectively improving the user experience of image rendering and transmission.
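In the simplest case, the center-first order described above reduces to a priority sort of the visible blocks by their screen distance from the viewpoint center; a small sketch (the tile ids and the normalized [0, 1] screen coordinates are illustrative):

```python
def load_order(tiles, center=(0.5, 0.5)):
    """Return tile ids sorted so blocks nearest the viewpoint center are
    loaded (and thus rendered sharply) first; each tile is given as
    (tile_id, screen_x, screen_y)."""
    def dist_sq(tile):
        _, x, y = tile
        return (x - center[0]) ** 2 + (y - center[1]) ** 2
    return [tile_id for tile_id, _, _ in sorted(tiles, key=dist_sq)]

order = load_order([("edge", 0.95, 0.90), ("center", 0.50, 0.52),
                    ("mid", 0.70, 0.50)])
```

Combined with the maxSSE selection above, this ordering is what makes the region around the viewpoint center sharpen before the edges do.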
Preferably, step S1 comprises the steps of:
step S11: acquiring the mesh M and all vertices VM of the bounding volume for each object in the model image;
step S12: sequentially determining the longest axis AM among the three axes of each bounding volume;
step S13: obtaining the midpoint MVM of the vertices VM sorted along each longest axis AM;
step S14: splitting the vertices VM at the midpoint MVM into two partitions: partition ML and partition MR;
step S15: if the vertex count VML of partition ML or VMR of partition MR is greater than N, where N is the user-specified number of vertices per partition, repeating steps S11 to S14; once the operations above are complete, the partition map with a tree data structure is obtained.
Adopting the kd-tree method for partitioning improves rendering efficiency and enables the viewpoint center region of the picture to be extracted accurately.
Another aspect of the present application also provides an apparatus according to the above method, including:
the division module is used for carrying out space division based on kd tree on the model image generated by aerial photography to obtain a division graph;
the simplified model module is used for carrying out model simplification on each divided block in the division map to obtain a simplified model map;
and the transmission module is used for transmitting after the simplified model diagram is loaded on the web terminal and the loading is delayed.
Preferably, the simplified model module comprises:
the first judging module, used for judging whether each visible region block in the partition map has been recursively traversed down to the leaf nodes of the partition map;
the second judging module, configured to check the positional relationship between the bounding volume and the view cone if the judgment result of the first judging module is yes:
the loading module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting an appropriate level for loading according to the screen space error.
Preferably, the first judging module includes:
the first setting module is used for checking the position relation between the bounding volume and the view cone in the division graph if the judgment result is negative, setting all nodes of the bounding volume to be invisible if the bounding volume is positioned outside the view cone, setting sub-nodes of the bounding volume outside the view cone to be invisible, and eliminating all bounding volume nodes outside the view cone;
the first error calculation module is used for continuing recursively checking the position relation between the child node of the bounding volume and the view cone if the bounding volume intersects the view cone, continuing to judge whether the error of the current node is larger than the maximum screen space error if the child node of the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if the judgment result is negative, selecting a proper level to load the corresponding partition map region block according to the screen space error;
wherein the maximum screen space error is calculated as:
sse = (g / d) × k, with perspective scaling factor k = height / (2 × tan(fov / 2));
wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; k is the perspective scaling factor; height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRatio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error;
the child nodes of the bounding volume are set to be invisible if they do not intersect the view cone and are outside the view cone.
Preferably, the second judging module includes:
the second setting module is used for setting all nodes of the bounding volume to be invisible if the bounding volume is positioned outside the view cone, and setting all nodes of the bounding volume outside the view cone to be invisible if the child nodes of the bounding volume are also positioned outside the view cone, so that all nodes of the bounding volume outside the view cone are eliminated;
the second error calculation module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting an appropriate level for loading according to the screen space error.
Examples
The embodiment specifically comprises the following steps:
step S1: performing space division based on kd tree on the model image generated by aerial photography to obtain a division map, wherein the obtained result is shown in fig. 1;
wherein step S1 specifically comprises the following steps:
step S11: acquiring grids M of all vertexes VM of the bounding volume for each object in the image in the model image;
step S12: determining the longest axis AM among the three axes of each bounding volume;
step S13: the vertices VM are sorted along the longest axis AM and the midpoint MVM is found.
Step S14: the vertex VM is divided into two half areas from the midpoint MVM: partition ML and partition MR.
Step S15: if the number of vertices VML in partition ML or the number of vertices VMR in partition MR is greater than N (N being the number of vertices specified by the user for one partition), repeating steps S11 to S14; after the operation is finished according to the above steps, the division diagram of the tree data structure is obtained;
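The kd-tree partition of steps S11–S15 can be sketched as follows. This is a minimal JavaScript illustration (JS being the development language named later in the embodiment); the names `buildKdTree` and `maxVerts` are assumptions, not from the patent, and real meshes would carry per-vertex attributes beyond the bare coordinates used here.

```javascript
// Hypothetical sketch of steps S11–S15: recursively partition a vertex set,
// splitting at the midpoint along the longest bounding-volume axis.
function buildKdTree(verts, maxVerts) {
  if (verts.length <= maxVerts) return { leaf: true, verts };
  // S12: determine the longest axis of the bounding volume.
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (const v of verts) for (let a = 0; a < 3; a++) {
    min[a] = Math.min(min[a], v[a]);
    max[a] = Math.max(max[a], v[a]);
  }
  let axis = 0;
  for (let a = 1; a < 3; a++) if (max[a] - min[a] > max[axis] - min[axis]) axis = a;
  // S13–S14: order the vertices along that axis and split at the midpoint.
  const sorted = [...verts].sort((p, q) => p[axis] - q[axis]);
  const mid = sorted.length >> 1;
  // S15: recurse while either half still exceeds the user-specified limit N.
  return {
    leaf: false,
    axis,
    left: buildKdTree(sorted.slice(0, mid), maxVerts),
    right: buildKdTree(sorted.slice(mid), maxVerts),
  };
}
```

Splitting at the midpoint of the longest axis keeps the two halves comparable in size, which helps keep the later per-block simplification and LOD selection balanced.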
step S2: model simplification is carried out on each divided block in the division map, and a simplified model map is obtained;
wherein, step S2: the method comprises the following steps:
step S21: judging whether the visible area block is recursively transmitted to leaf nodes of the partition map or not:
step S211: if the judgment result is that the recursion to the leaf node is not carried out, checking the position relation between the bounding volume and the view cone:
step S212: if the bounding volume is outside the view cone, then all nodes of the bounding volume are not visible, and the child nodes of the bounding volume outside the view cone are also not visible, so all bounding volume nodes outside the view cone are culled.
Step S213: if the bounding volume intersects the view cone, continuing to recursively check the positional relationship between the child nodes of the bounding volume and the view cone;
step S214: if the child node of the bounding volume is positioned in the view cone, judging whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if the judgment result is no, selecting a proper level to load the corresponding block according to the screen space error. For example, in this embodiment four levels are selected according to the sse value: sse=0 to 0.25 is L0, sse=0.25 to 0.5 is L1, sse=0.5 to 1 is L2, and sse=1 to 2 is L3.
The screen space error formula is as follows:

sse = (g / d) × K, K = height / (2 × tan(fov / 2)), fov, aspectRatio ≤ 1

wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; K is the perspective scaling factor, where height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRatio is the aspect ratio of the near clipping plane of the perspective frustum.
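The screen-space-error test and the four-level selection of this embodiment can be sketched in JavaScript as follows, assuming the common LOD formulation sse = (g / d) · K with K = height / (2 · tan(fov / 2)); the function names are illustrative, not from the patent.

```javascript
// Hedged sketch of the screen-space-error computation. Assumed formulation:
// K = height / (2 * tan(fov / 2)), sse = K * g / d, with g the block's
// geometric error, d the viewpoint-to-block distance, height the screen
// height in pixels, and fov the vertical field-of-view angle in radians.
function screenSpaceError(g, d, heightPx, fovRad) {
  const K = heightPx / (2 * Math.tan(fovRad / 2)); // perspective scaling factor
  return (K * g) / d;
}

// Four-level mapping taken from the embodiment's sse ranges (L0..L3 over 0..2).
function pickLevel(sse) {
  if (sse < 0.25) return 'L0';
  if (sse < 0.5) return 'L1';
  if (sse < 1) return 'L2';
  return 'L3';
}
```

As the viewpoint moves away (d grows), sse falls and a coarser level is selected; moving closer raises sse until the highest-detail model is requested.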
Step S221: if the judgment result is recursion to the leaf nodes, checking the position relation of the bounding volume and the view cone:
step S222: if the bounding volume is outside the view cone, the node is not visible, and the child nodes of the bounding volume, also being outside the view cone, are not visible; all nodes of the bounding volume outside the view cone are eliminated.
Step S23: if the bounding volume is positioned in the view cone, judging whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if not, selecting an appropriate level for loading according to the screen space error.
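The visibility traversal of steps S21–S23 can be sketched as follows. The node shape (a `classify(frustum)` test returning 'outside', 'intersect', or 'inside'), the output list of load requests, and all names are assumptions for illustration only.

```javascript
// Sketch of the traversal in steps S21–S23. Each node is assumed to expose
// classify(frustum) -> 'outside' | 'intersect' | 'inside', a precomputed
// screen-space error, and a children array; `out` collects load requests.
function traverse(node, frustum, maxSse, out) {
  const rel = node.classify(frustum);
  if (rel === 'outside') {              // S212/S222: cull the whole subtree
    node.visible = false;
    return;
  }
  if (rel === 'intersect' && node.children.length > 0) {
    for (const c of node.children) traverse(c, frustum, maxSse, out); // S213
    return;
  }
  node.visible = true;                  // S214/S23: inside, or intersecting leaf
  if (node.error > maxSse) {
    out.push({ node, level: 'highest' });           // highest-level detail model
  } else {
    out.push({ node, level: pickLevelFor(node.error) });
  }
}

// Illustrative four-level mapping taken from the embodiment's sse ranges.
function pickLevelFor(sse) {
  return sse < 0.25 ? 'L0' : sse < 0.5 ? 'L1' : sse < 1 ? 'L2' : 'L3';
}
```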
Step S3: after the simplified model diagram is loaded on the web side, loading is delayed and then transmission is performed.
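Step S3's delayed loading on the web side can be sketched as deferring each block's transfer until it is actually requested; the cache-based approach and the names (`DeferredLoader`, `fetchBlock`) are assumptions, not from the patent.

```javascript
// Hypothetical sketch of delayed (lazy) loading for step S3: blocks are only
// fetched on first demand instead of being transmitted up front, and repeat
// requests are served from a cache. `fetchBlock` stands in for the real
// network transfer.
class DeferredLoader {
  constructor(fetchBlock) {
    this.fetchBlock = fetchBlock; // async (id) => block data
    this.cache = new Map();       // already-transmitted blocks
  }
  async load(id) {
    if (!this.cache.has(id)) {
      // The transfer happens only when the block is first needed.
      this.cache.set(id, await this.fetchBlock(id));
    }
    return this.cache.get(id);
  }
}
```

Deferring transfers this way keeps the initial page load small; only the blocks that the traversal marks visible ever cross the network.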
Comparative example
The same objects as in the embodiment are processed according to the octree partitioning algorithm disclosed in: Shen Yongzeng, Liu Dongyue, Xu Jun. Design and implementation of an octree-based virtual scene manager [J]. Computer Systems & Applications, 2012, 21(03): 147-150+45.
Execution environment of the above steps in this embodiment: a 3.60 GHz Intel(R) Core(TM) i9-9900KF CPU, 32.0 GB of memory, a Windows 10 64-bit operating system, and an NVIDIA GTX 1080 Ti graphics card. The application software comprises: the Three.js library.
The development software comprises: NeuronDataReader SDK, Prepar3D SDK, 3DMAX, Visual Studio 2017; the development language is JS.
Fig. 1 was divided using both the method of the embodiment and the method provided in the comparative example; the time required for each division is shown in fig. 2 (a chart of the time required for loading over the network). As fig. 2 shows, the kd-tree method adopted in the application completes the space division in only 5000 ms, whereas the octree algorithm adopted in the comparative example requires 7000 ms; the result in fig. 2 shows that the method provided by the application yields higher loading efficiency. The model of the corresponding scene generated with the kd-tree algorithm was found to load about 17% faster than the model generated with the octree algorithm, improving loading efficiency.
The result obtained by processing fig. 3 with the method provided in the embodiment is shown in fig. 4; the red frames in figs. 3 and 4 all mark the view center. After the simplification processing, the display definition of the view-center area in fig. 4 is improved compared with fig. 3, and the time required for overall transmission and loading is shortened. With the processed image shown in fig. 4, a user can acquire view-center information faster, the image transmission time is shortened, and the user experience is improved.
Since the algorithm is affected by the distance between a block and the center point of the screen, at the same viewing distance the closer a block is to the center of the screen, the higher the resolution at which it is loaded.
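The center-weighting behaviour described above can be illustrated with a simple sketch; the weighting function itself is an assumption, since the text only states that resolution rises toward the screen center at equal viewing distance.

```javascript
// Hypothetical center weight: 1 for a block projected at the screen center,
// falling linearly to 0 at the screen corners. A renderer could scale each
// block's effective resolution (or sse budget) by this weight.
function centerWeight(blockScreenPos, screenW, screenH) {
  const dx = blockScreenPos.x - screenW / 2;
  const dy = blockScreenPos.y - screenH / 2;
  const maxR = Math.hypot(screenW / 2, screenH / 2); // center-to-corner radius
  return 1 - Math.hypot(dx, dy) / maxR;
}
```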
Although the present invention has been described with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments described, or equivalents may be substituted for elements thereof, and any modifications, equivalents, improvements and changes may be made without departing from the spirit and principles of the present invention.
Claims (3)
1. The web-end-oriented high-precision large-scene image processing method is characterized by comprising the following steps of:
step S1: performing space division based on kd tree on the model image generated by aerial photography to obtain a division map;
step S2: model simplification is carried out on each divided block in the division map, and a simplified model map is obtained;
step S3: after the simplified model diagram is loaded on the web side, delaying loading and then transmitting;
wherein, step S2 includes the following steps:
step S21: judging whether each visible area block in the partition map is recursively transmitted to leaf nodes of the partition map or not:
step S211: if the judgment result is negative, checking the position relation between the bounding volume and the view cone in the partitioning graph, and if the bounding volume is positioned outside the view cone, setting all nodes of the bounding volume to be invisible, setting sub-nodes of the bounding volume outside the view cone to be invisible, so as to eliminate all nodes of the bounding volume outside the view cone;
step S212: if the bounding volume intersects with the view cone, continuing recursively checking the position relation between the child nodes of the bounding volume and the view cone, if the child nodes of the bounding volume are positioned in the view cone, continuing to judge whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if the judgment result is negative, selecting a proper level to load the corresponding partition map region block according to the screen space error;
wherein the maximum screen space error is calculated as:

sse = (g / d) × K, K = height / (2 × tan(fov / 2)), fov, aspectRatio ≤ 1

wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; K is the perspective scaling factor and height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRatio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error;

if the child nodes of the bounding volume do not intersect the view cone and lie outside the view cone, they are set to be invisible;
step S221: if the judgment result in the step S21 is yes, checking the position relationship between the bounding volume and the view cone;
step S222: if the bounding volume is located outside the view cone, all nodes of the bounding volume are set to be invisible, and if the child nodes of the bounding volume are also outside the view cone, all nodes of the bounding volume outside the view cone are eliminated;
step S23: if the bounding volume is positioned in the view cone, judging whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if not, selecting an appropriate level for loading according to the screen space error.
2. The web-side-oriented high-precision large-scene image processing method according to claim 1, wherein the step S1 comprises the steps of:
step S11: acquiring grids M of all vertexes VM of the bounding volume for each object in the image in the model image;
step S12: sequentially determining the longest axis AM among three axes of each bounding volume;
step S13: sorting the vertices VM along each longest axis AM and obtaining the midpoint MVM;
step S14: dividing the vertices VM from the midpoint MVM into two half areas: partition ML and partition MR;
step S15: if the number of vertices VML in partition ML or the number of vertices VMR in partition MR is greater than N, N being the number of vertices specified by the user in one partition, repeating steps S11 to S14; after the operation is finished according to the above steps, the division diagram of the tree data structure is obtained.
3. A device for a web-side-oriented high-precision large-scene image processing method according to claim 1 or 2, comprising: the division module is used for carrying out space division based on kd tree on the model image generated by aerial photography to obtain a division graph;
the simplified model module is used for carrying out model simplification on each divided block in the division map to obtain a simplified model map;
the transmission module is used for transmitting after delaying loading when the web end loads the simplified model diagram; the simplified model module includes:
the first judging module is used for judging whether each visible area block in the partition map is recursively transmitted to leaf nodes of the partition map;
a second judging module, configured to check the positional relationship between the bounding volume and the view cone if the judgment result in step S21 is yes:
the loading module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting a proper level for loading according to the screen space error;
the first judging module includes:
the first setting module is used for checking the position relation between the bounding volume and the view cone in the division graph if the judgment result is negative, setting all nodes of the bounding volume to be invisible if the bounding volume is positioned outside the view cone, setting sub-nodes of the bounding volume outside the view cone to be invisible, and eliminating all bounding volume nodes outside the view cone;
the first error calculation module is used for continuing recursively checking the position relation between the child node of the bounding volume and the view cone if the bounding volume intersects the view cone, continuing to judge whether the error of the current node is larger than the maximum screen space error if the child node of the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if the judgment result is negative, selecting a proper level to load the corresponding partition map region block according to the screen space error;
wherein the maximum screen space error is calculated as:

sse = (g / d) × K, K = height / (2 × tan(fov / 2)), fov, aspectRatio ≤ 1

wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; K is the perspective scaling factor and height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRatio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error;

if the child nodes of the bounding volume do not intersect the view cone and lie outside the view cone, they are set to be invisible;
the second judging module includes:
the second setting module is used for setting all nodes of the bounding volume to be invisible if the bounding volume is positioned outside the view cone, and setting all nodes of the bounding volume outside the view cone to be invisible if the child nodes of the bounding volume are also positioned outside the view cone, so that all nodes of the bounding volume outside the view cone are eliminated;
the second error calculation module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting an appropriate level for loading according to the screen space error.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310077909.6A CN116366827B (en) | 2023-01-13 | 2023-01-13 | High-precision large-scene image processing and transmitting method and device facing web end |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116366827A CN116366827A (en) | 2023-06-30 |
CN116366827B true CN116366827B (en) | 2024-02-06 |
Family
ID=86940221
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310077909.6A Active CN116366827B (en) | 2023-01-13 | 2023-01-13 | High-precision large-scene image processing and transmitting method and device facing web end |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116366827B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106599493A (en) * | 2016-12-19 | 2017-04-26 | 重庆市勘测院 | Visual implementation method of BIM model in three-dimensional large scene |
CN110990737A (en) * | 2019-12-09 | 2020-04-10 | 江苏艾佳家居用品有限公司 | LOD-based lightweight loading method for indoor scene of web end |
CN111968212A (en) * | 2020-09-24 | 2020-11-20 | 中国测绘科学研究院 | Viewpoint-based dynamic scheduling method for three-dimensional urban scene data |
WO2021174659A1 (en) * | 2020-03-04 | 2021-09-10 | 杭州群核信息技术有限公司 | Webgl-based progressive real-time rendering method for editable large scene |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012083508A1 (en) * | 2010-12-24 | 2012-06-28 | 中国科学院自动化研究所 | Fast rendering method of third dimension of complex scenes in internet |
EP3346449B1 (en) * | 2017-01-05 | 2019-06-26 | Bricsys NV | Point cloud preprocessing and rendering |
- 2023-01-13: CN application CN202310077909.6A, patent CN116366827B — status Active
Also Published As
Publication number | Publication date |
---|---|
CN116366827A (en) | 2023-06-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
CB03 | Change of inventor or designer information | Inventor after: Hu Yong; Geng Chenming; Shen Xukun. Inventor before: Geng Chenming; Shen Xukun; Hu Yong |
GR01 | Patent grant ||