WO2023066122A1 - Method for processing three-dimensional model data, method for generating three-dimensional model data, and related apparatuses - Google Patents


Info

Publication number
WO2023066122A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
rendering
patch
electronic device
rendering data
Prior art date
Application number
PCT/CN2022/125053
Other languages
English (en)
Chinese (zh)
Inventor
张培
王中翔
付振寰
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2023066122A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • the present application relates to the field of media technology, and in particular to a processing method, a generating method and a related device for 3D model data.
  • Three-dimensional (3D) applications render increasingly large and detailed 3D scenes, so the amount of 3D data required for rendering keeps growing. Because computing power and storage space on terminal devices are limited, the data volume of many 3D scenes has reached the rendering limit of the terminal device. There is therefore a need for a way to render large 3D scenes on devices with limited computing power and storage space.
  • a large 3D scene is divided into multiple smaller 3D scenes on a server, and 3D resource files corresponding to each smaller 3D scene are generated.
  • the terminal device downloads the corresponding 3D resource file from the server and executes the rendering of the current scene.
  • the terminal device downloads the 3D resource file corresponding to the new scene from the server, and deletes the previously downloaded 3D resource file.
  • the present application provides a method for processing 3D model data, which can ensure the efficiency of a device when rendering a large-scale scene without causing the 3D application on the device to freeze.
  • the first aspect of the present application provides a method for processing three-dimensional model data, which is applied to a first electronic device.
  • the method includes: the first electronic device receives a first data request from the second electronic device, where the first data request includes the position of a first observation point and is used to request corresponding three-dimensional model data from the first electronic device, so as to implement rendering of the 3D model data on the second electronic device.
  • the first electronic device determines first target data from the data of the three-dimensional model according to the first data request, where the first target data is related to the position of the first observation point in the three-dimensional model running in the second electronic device. That is, once the first electronic device determines the position of the first observation point in the three-dimensional model, it can determine the objects that need to be rendered, namely the objects observable from the position of the first observation point, and can thereby determine the first target data used for rendering.
  • the first target data includes rendering data for performing 3D model rendering
  • the rendering data includes rendering data of the first object and rendering data of the second object
  • the fineness of the rendering data of the first object is higher than the fineness of the rendering data of the second object, and the distance between the first object and the position of the first observation point is smaller than the distance between the second object and the position of the first observation point.
  • both the first object and the second object are objects that can be observed from the position of the first observation point.
  • both the first object and the second object can be a complete object, and the first object and the second object can also be different parts of an object.
  • both the first object and the second object are composed of multiple patches, so the rendering data of the first object and of the second object can include vertices, patches, positions, shapes, surface textures, colors, and other parameters used to render the scene.
  • the first electronic device sends the first target data to the second electronic device.
  • the first electronic device can obtain, in real time, the data request sent by the second electronic device, determine from the large amount of 3D model data the part of the data related to the observation point position of the 3D model in the second electronic device, and return that part of the data to the second electronic device, ensuring that the second electronic device can render the three-dimensional model in real time. Moreover, since the data sent to the second electronic device each time is a small, observation-point-related subset, the transmission of the 3D data does not take too long, which keeps the 3D application running smoothly.
  • in this way, the amount of data to be transmitted is reduced, the pressure on bandwidth is lowered, and the transmission speed of the 3D model data is improved, ensuring the rendering efficiency of 3D applications and avoiding freezing of the 3D application.
  • the fineness of the rendering data of different objects can be judged by comparing the maximum error of the rendering data of the different objects before and after lightweight processing. For example, for the above-mentioned first object and second object, first determine the maximum error of the model rendered from each object's rendering data before and after lightweight processing; then compare the maximum error corresponding to the first object with the maximum error corresponding to the second object to determine the relative fineness of the rendering data of the two objects.
  • if the maximum error corresponding to the first object is smaller than the maximum error corresponding to the second object, the fineness of the rendering data of the first object is higher than that of the second object; conversely, if the maximum error corresponding to the first object is greater than that of the second object, the fineness of the rendering data of the first object is lower than that of the second object.
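The max-error comparison above can be sketched as follows. The vertex correspondence assumption and all values are illustrative only; a real pipeline would measure error with a nearest-point search over the decimated surface.

```python
# Sketch of the max-error fineness comparison described above.
# Assumes each simplified vertex still has a corresponding original
# vertex, which real decimation does not guarantee.

def max_vertex_error(original, simplified):
    """Largest Euclidean deviation between corresponding vertices."""
    return max(
        sum((a - b) ** 2 for a, b in zip(vo, vs)) ** 0.5
        for vo, vs in zip(original, simplified)
    )

def finer_object(err_first, err_second):
    """Smaller post-lightweighting max error means higher fineness."""
    return "first" if err_first < err_second else "second"

orig_verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
simp_verts = [(0.0, 0.0, 0.0), (1.0, 0.3, 0.0)]
err = max_vertex_error(orig_verts, simp_verts)   # 0.3
```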
  • the lightweight processing refers to simplifying the 3D model, and reducing the amount of data of the 3D model at the cost of reducing the accuracy of the rendered 3D model.
  • the maximum error of an object's rendering data before and after lightweight processing refers to the vertex-position error between the 3D model rendered from the rendering data after lightweight processing and the 3D model rendered from the rendering data before lightweight processing. The higher the lightweight processing level of the 3D model, the larger the error between the 3D model after lightweight processing and the 3D model before it.
  • the fineness of the rendering data of different objects can also be judged by comparing the data-volume ratio of the rendering data before and after lightweight processing. For any object, the higher the ratio of the object's data volume after lightweight processing to its data volume before lightweight processing, the lower the degree of lightweighting, and the higher the object's fineness after lightweight processing.
  • for example, if the data volume of the first object after lightweighting accounts for 80% of its data volume before lightweighting, while the data volume of the second object after lightweighting accounts for 50% of its data volume before lightweighting, then the degree of lightweighting of the second object is higher than that of the first object, that is, the fineness of the second object is lower than that of the first object.
  • similarly, the fineness of the rendering data of different objects can be judged by comparing the ratio of the number of patches in the rendering data before and after lightweight processing.
  • for any object, the higher the ratio of the number of patches included after lightweight processing to the number of patches included before lightweight processing, the lower the degree of lightweighting, and the higher the object's fineness after lightweight processing.
  • if the patch-count ratio corresponding to the first object is greater than the patch-count ratio corresponding to the second object, it can be determined that the rendering data of the first object is less lightweighted than the rendering data of the second object, so after lightweight processing the fineness of the rendering data of the first object is higher than that of the second object.
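The two ratio tests above reduce to the same arithmetic; this sketch uses the 80%/50% example from the text, with the patch counts invented for illustration.

```python
# Ratio-based fineness comparison: the object that retains a larger
# share of its data (or patches) after lightweighting is finer.

def retention_ratio(after, before):
    return after / before

# Data-volume example from the text: 80% vs 50% retained.
r_first = retention_ratio(80, 100)    # 0.8
r_second = retention_ratio(50, 100)   # 0.5
finer = "first" if r_first > r_second else "second"

# The same comparison works on patch counts (counts invented here).
p_first = retention_ratio(600, 1000)
p_second = retention_ratio(300, 1000)
```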
  • the data of the three-dimensional model includes multiple pieces of rendering data corresponding to the first object, and the multiple pieces of rendering data have different finenesses.
  • the first electronic device may determine the distance between the first object in the three-dimensional model and the first viewpoint position according to the first viewpoint position in the first data request; then, the first electronic device selects one piece of rendering data from the multiple pieces of rendering data as the rendering data of the first object, according to that distance and the fineness of the multiple pieces of rendering data.
  • the fineness of the rendering data of the first object has a negative correlation with the distance, that is, the greater the distance, the lower the fineness of the rendering data of the first object; the smaller the distance, The higher the fineness of the rendering data of the first object is.
  • the first electronic device selects one piece of rendering data from multiple pieces of rendering data of different fineness as the rendering data of the first object, according to the distance between the first object and the position of the first observation point, so that the fineness of the selected rendering data does not degrade the final imaging quality of the first object while the data volume of the rendering data is reduced as much as possible.
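The negative correlation between distance and fineness can be sketched as a lookup table; the fineness levels and distance thresholds below are illustrative assumptions, not values from the patent.

```python
# Hypothetical distance-to-fineness mapping: fineness falls as the
# object-to-viewpoint distance grows. Thresholds are invented.

LOD_TABLE = [  # (fineness_level, max_distance); higher level = finer
    (3, 10.0),
    (2, 50.0),
    (1, 200.0),
    (0, float("inf")),
]

def select_fineness(distance):
    """Return the fineness level for the first matching distance band."""
    for level, max_dist in LOD_TABLE:
        if distance <= max_dist:
            return level
```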
  • the first piece of rendering data among the multiple pieces of rendering data includes multiple sets of rendering data, each set of which is used to render the first object; the multiple sets of rendering data have different fineness, and the first set of rendering data is any one of the multiple sets of rendering data.
  • the multiple pieces of rendering data are obtained after multiple mesh-reduction processes are performed on the original rendering data of the first object, one piece being obtained each time the mesh-reduction processing is performed.
  • the multiple sets of rendering data included in the first piece of rendering data are obtained by performing the same number of mesh-reduction passes on the original rendering data of the first object but with different reduction multiples.
  • that is, the multiple pieces of rendering data corresponding to the first object are obtained through different numbers of mesh-reduction passes, and the multiple sets of rendering data within each piece are obtained based on different reduction multiples.
  • thus the fineness differs between the pieces of rendering data, and also differs between the multiple sets within the same piece of rendering data.
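One way to picture this two-axis organisation: the number of reduction passes selects the "piece", and the reduction multiple selects the "set" within it. The base patch count, pass count, and multiples below are invented for illustration.

```python
# Sketch of the two-axis LOD organisation: rows are pieces (number of
# mesh-reduction passes applied), columns are sets within a piece
# (reduction multiple). Patch counts are purely illustrative.

def lod_patch_counts(base_patches, passes, multiples):
    grid = []
    for p in range(passes + 1):
        # Each pass divides the patch count by the chosen multiple.
        grid.append([base_patches // (m ** p) for m in multiples])
    return grid

grid = lod_patch_counts(1000, 2, [2, 4])
# grid[0] holds the original count; deeper rows are coarser.
```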
  • the first electronic device may first select one piece of rendering data from the plurality of pieces of rendering data; then, according to the performance index and/or network status of the second electronic device, the first electronic device selects one set of rendering data from the multiple sets in the selected piece as the rendering data of the first object.
  • in other words, the first electronic device can determine one layer from the rendering data of multiple layers according to the distance between the first object and the first observation point, and then, based on the performance index and/or network status of the second electronic device, further determine the rendering data of one lightweight level from the multiple lightweight levels within that layer, so as to obtain the rendering data that finally needs to be sent to the second electronic device.
  • the performance index includes one or more of the frame rate, rendering delay, and temperature parameters of the second electronic device; the network status includes bandwidth and One or more of network delays.
  • for example, when the frame rate of the second electronic device is high, the performance of the second electronic device is high, and rendering data with a lower lightweight level, that is, rendering data of higher fineness, can be selected from the multiple sets of rendering data.
  • for another example, when the temperature of the second electronic device is high, the power consumption of the second electronic device is high, and rendering data with a higher lightweight level, that is, rendering data of lower fineness, can be selected from the multiple sets of rendering data, so as to reduce the rendering load on the second electronic device.
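A minimal sketch of that device-state heuristic; the frame-rate and temperature thresholds are assumptions for illustration, not values from the patent.

```python
# Illustrative heuristic for picking a lightweight level within an
# already-chosen LOD, based on device performance indicators.

def select_variant(variants, frame_rate, temperature_c):
    """variants is ordered finest -> coarsest."""
    if temperature_c > 45.0:        # running hot: minimise rendering load
        return variants[-1]
    if frame_rate >= 60:            # performance headroom: finest data
        return variants[0]
    return variants[len(variants) // 2]
```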
  • the first target data further includes buffer data; the buffer data includes N pieces of rendering data corresponding to the first object, where N is an integer greater than 1, the N pieces of rendering data are all used to execute rendering of the first object, and the N pieces of rendering data have different fineness.
  • the second electronic device can use the received buffer data to perform rendering when the position of the observation point in the three-dimensional model changes or the device state changes, avoiding re-obtaining new data from the first electronic device and thereby improving rendering speed.
  • the data of the three-dimensional model includes M pieces of rendering data corresponding to the first object, and the fineness of the M pieces of rendering data is different, and the M is an integer greater than the N .
  • the first electronic device determines the distance between the first object in the three-dimensional model and the position of the first observation point according to the first data request, which includes the first observation point position; according to the distance, it determines one piece of rendering data from the M pieces as the target rendering data of the first object, where the fineness of the target rendering data meets the fineness requirement of the first object and that requirement is related to the distance; it then selects N pieces of rendering data from the M pieces as the buffer data, the fineness of each of the N pieces being less than or equal to the fineness of the target rendering data.
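The buffer-selection rule above (N of M pieces, none finer than the target) can be sketched as a slice over a fineness-ordered list; the list contents and indices are illustrative.

```python
# Sketch of buffer-data selection: lods is ordered finest -> coarsest,
# target_index is the piece chosen for the current distance, and the
# buffer holds up to n pieces whose fineness is <= the target's.

def select_buffer(lods, target_index, n):
    return lods[target_index:target_index + n]

lods = ["lod0", "lod1", "lod2", "lod3", "lod4"]   # finest -> coarsest
buffer_data = select_buffer(lods, 1, 3)           # lod1 is the target
```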
  • the method further includes: the first electronic device receives a second data request from the second electronic device; according to the second data request, the first electronic device determines second target data from the data of the three-dimensional model, the second target data being related to the position of a second observation point in the three-dimensional model running in the second electronic device; the first electronic device determines third target data according to the first target data and the second target data and generates first information, where the first target data does not include the third target data, the second target data includes the third target data, the first information is used to indicate fourth target data to be deleted, the first target data includes the fourth target data, and the second target data does not include the fourth target data; the first electronic device sends the third target data and the first information to the second electronic device.
  • the first electronic device determines the data required by the second electronic device based on the new data request, compares it with the data previously sent to the second electronic device, and identifies the newly added data and the data that is no longer needed, so as to send only the new data to the second electronic device and instruct it to delete the data that is no longer needed. This enables reuse of the data already on the second electronic device and avoids sending a large amount of repeated data.
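The incremental update above amounts to two set differences; the tile identifiers here are hypothetical names for pieces of target data.

```python
# Set-difference sketch of the incremental update: data in the new
# request but not the old one is sent ("third target data"); data in
# the old one but not the new one is flagged for deletion ("fourth").

def diff_target_data(prev_ids, new_ids):
    to_send = new_ids - prev_ids      # third target data
    to_delete = prev_ids - new_ids    # fourth target data
    return to_send, to_delete

prev = {"tile_a", "tile_b", "tile_c"}
new = {"tile_b", "tile_c", "tile_d"}
send, delete = diff_target_data(prev, new)
```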
  • a second aspect of the present application provides a method for generating 3D model data, including: acquiring a 3D model file, where the 3D model file includes a plurality of meshes for constituting a 3D model.
  • the 3D model file may include data of multiple meshes, and the shape of each object in the 3D model is defined by the data of multiple meshes.
  • each mesh is composed of a set of vertices, edges, and patches.
  • the edge information can be defined through the vertex information, and then the edge information is used to form the patch information, and finally the mesh is obtained by combining each patch.
  • the objects in the 3D model can thus be obtained from the mesh definition formed by combining multiple patches.
  • each first patch set in the plurality of first patch sets includes a plurality of connected patches, and the number of patches in each first patch set is the same. For example, 10,000 patches are divided into 100 patch sets, each patch set including 100 connected patches.
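The fixed-size partition in the 10,000-patch example can be sketched as a simple slicing helper; a real implementation would group *connected* patches using mesh adjacency, which this toy version ignores.

```python
# Sketch of splitting a flat patch list into fixed-size patch sets.
# Connectivity-aware grouping needs adjacency data; omitted here.

def partition_patches(patches, set_size):
    return [patches[i:i + set_size]
            for i in range(0, len(patches), set_size)]

patch_sets = partition_patches(list(range(10_000)), 100)
# 100 patch sets, each containing 100 patches
```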
  • each first patch set group in the plurality of first patch set groups includes a plurality of the first patch sets, and the plurality of first patch sets are used to form the first three-dimensional model. For example, 100 patch sets are divided into 25 patch set groups, each patch set group includes 4 patch sets, and each patch set includes 100 patches.
  • the plurality of first patch set groups are used to form the first three-dimensional model (i.e., the original three-dimensional model), and the plurality of second patch set groups are used to form the second three-dimensional model. Because the second three-dimensional model is composed of the mesh-reduced second patch sets, the fineness of the second 3D model is lower than that of the original first 3D model, but its data volume is also smaller.
  • the number of patches in each second patch set in the plurality of second patch sets is the same as the number of patches in each first patch set in the plurality of first patch sets.
  • to achieve this, the multiple second patch set groups obtained after the patch reduction process can be split apart and recombined into multiple second patch sets, so as to ensure that the number of patches in each patch set is fixed.
  • the above-mentioned steps can be executed cyclically, that is, dividing the patch sets into patch set groups, performing patch reduction processing on the patch set groups, and splitting the patch set groups after patch reduction processing, so as to obtain patch sets that have undergone different numbers of patch reduction passes.
  • the fineness of the three-dimensional model formed by the patch sets differs with the number of patch reduction passes performed, so that three-dimensional model data of different fineness can be obtained.
  • 3D model data with different fineness and data volume can be obtained, so as to meet model rendering requirements in different scenarios.
  • the electronic device can select 3D model data of the corresponding fineness according to the position of the observation point in the terminal device or the performance of the terminal device, so as to meet the requirements of different terminal devices while minimizing the amount of model data that needs to be transmitted and ensuring real-time rendering of the 3D model.
  • the method further includes: performing patch reduction processing on each first patch set group in the plurality of first patch set groups to obtain a plurality of third patch set groups with reduced patch counts, the plurality of third patch set groups being used to form a third three-dimensional model, where the number of patches of each third patch set group differs from the number of patches of each second patch set group.
  • for example, each of the plurality of first patch set groups includes 400 patches. After performing one patch reduction process on each first patch set group, each of the resulting second patch set groups includes 200 patches, that is, 200 patches have been removed; after performing another patch reduction process on each first patch set group, each of the resulting third patch set groups includes 100 patches, that is, 300 patches have been removed.
  • in this way, multiple levels of 3D model data can be obtained, each level having a different fineness; during the process of generating the multiple levels, different mesh-reduction multiples can be applied, so that 3D model data of different lightweight levels can further be generated within each level, and 3D model data of the same level but different lightweight levels differs in fineness.
  • the number of patches in each of the second patch sets is 1/N of the number of patches in each of the first patch sets, where N is an integer greater than 1;
  • the number of patches in each of the third patch sets is 1/M of the number of patches in each of the first patch sets, where M is an integer greater than N.
  • the method further includes: generating a tree structure according to the multiple first patch sets and the multiple second patch sets; the tree structure includes multiple first nodes and multiple second nodes, the multiple first nodes respectively correspond to the multiple first patch sets, the multiple second nodes respectively correspond to the multiple second patch sets, and each node of the multiple second nodes is connected to multiple nodes of the multiple first nodes.
  • one patch set in the multiple second patch sets corresponds to multiple patch sets in the plurality of first patch sets; that is, in the tree structure, each node in the plurality of second nodes is connected to a plurality of nodes in the plurality of first nodes.
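That parent-to-children relation can be sketched as a plain mapping from coarse nodes to fine nodes; the node names and fan-out of 4 are illustrative assumptions.

```python
# Toy tree structure: each coarse node (second patch set) records the
# fine nodes (first patch sets) it covers. Names are illustrative.

tree = {
    "coarse_0": ["fine_0", "fine_1", "fine_2", "fine_3"],
    "coarse_1": ["fine_4", "fine_5", "fine_6", "fine_7"],
}

def children(node):
    """Fine patch sets covered by a coarse node (empty if unknown)."""
    return tree.get(node, [])
```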
  • each patch in the plurality of patches is composed of a plurality of vertices; performing patch reduction processing on each patch set in the plurality of first patch sets includes: determining a plurality of target vertices at edge positions in a first patch set, where the first patch set is any patch set in the plurality of first patch sets; and locking the multiple target vertices and regenerating multiple patches to obtain a second patch set, the number of patches in the second patch set being smaller than the number of patches in the first patch set, and the second patch set being a patch set in the plurality of second patch sets.
  • in this way, the vertices at the edge positions of the patch set are kept unchanged and only the vertices inside the patch set are reduced, which realizes the patch reduction process while ensuring that, after the reduction, the edges of the patch set do not appear broken or cracked.
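One common way to find the edge-position vertices to lock: an edge used by exactly one triangle lies on the patch set's border. This helper is an illustrative sketch, not the patent's algorithm; triangles are vertex-index triples.

```python
# Find the locked (border) vertices of a patch set: a triangle edge
# used by only one triangle is a border edge, so its vertices must
# stay fixed during reduction to avoid cracks between patch sets.
from collections import Counter

def boundary_vertices(triangles):
    edge_count = Counter()
    for tri in triangles:
        for i in range(3):
            edge = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            edge_count[edge] += 1
    locked = set()
    for edge, count in edge_count.items():
        if count == 1:               # border edge of the patch set
            locked.update(edge)
    return locked

# Quad fan around centre vertex 0: only vertex 0 is interior.
fan = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1)]
```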
  • the third aspect of the present application provides a method for processing three-dimensional model data, which is applied to a second electronic device.
  • the method includes: the second electronic device acquires the position of the first observation point in the three-dimensional model; the second electronic device determines the first target data from the data of the three-dimensional model according to the position of the first observation point, and the first target The data is related to the position of the first observation point in the three-dimensional model running in the second electronic device; the second electronic device sends the index of the first target data to the first electronic device, and the first target data The index of is used to indicate the first target data.
  • the second electronic device determines the target data required for rendering according to the position of the observation point and sends the index of the target data to the first electronic device storing the 3D model data, so as to instruct the first electronic device to return the target data to the second electronic device, enabling real-time rendering of the 3D model.
  • the first target data includes rendering data for performing 3D model rendering
  • the rendering data includes rendering data of a first object and rendering data of a second object, where the fineness of the rendering data of the first object is higher than the fineness of the rendering data of the second object, and the distance between the first object and the position of the first observation point is smaller than the distance between the second object and the position of the first observation point.
  • the data of the three-dimensional model includes multiple pieces of rendering data corresponding to the first object, and the multiple pieces of rendering data have different fineness;
  • the second electronic device determining the rendering data of the first object in the first target data from the data of the three-dimensional model according to the position of the first observation point includes: the second electronic device determines the distance between the first object in the three-dimensional model and the position of the first observation point according to the position of the first observation point; the second electronic device selects one piece of rendering data from the multiple pieces of rendering data as the rendering data of the first object according to the distance and the fineness of the multiple pieces, where the fineness of the rendering data of the first object has a negative correlation with the distance.
  • the first piece of rendering data in the multiple pieces of rendering data includes multiple sets of rendering data, and each set of rendering data in the multiple sets of rendering data is used for rendering to obtain the first object , and the fineness of the multiple sets of rendering data is different, and the first set of rendering data is any one of the multiple sets of rendering data.
  • the multiple pieces of rendering data are obtained after multiple mesh-reduction processes are performed on the original rendering data of the first object, one piece being obtained each time the mesh-reduction processing is performed.
  • the multiple sets of rendering data included in the first rendering data are obtained by performing the same number of times of mesh reduction processing and different multiples of the original rendering data of the first object.
  • the selecting one piece of rendering data from the multiple pieces of rendering data as the rendering data of the first object according to the distance and the fineness of the multiple pieces includes: selecting one piece of rendering data from the multiple pieces according to the distance and the fineness of the multiple pieces; and selecting, according to the performance index and/or network status of the second electronic device, one set of rendering data from the multiple sets in the selected piece as the rendering data of the first object.
  • the performance index includes one or more of the frame rate, rendering delay, and temperature parameters of the second electronic device; the network status includes bandwidth and One or more of network delays.
  • the first target data further includes buffer data; the buffer data includes N pieces of rendering data corresponding to the first object, where N is an integer greater than 1, the N pieces of rendering data are all used to execute rendering of the first object, and the N pieces of rendering data have different fineness.
  • the data of the three-dimensional model includes M pieces of rendering data corresponding to the first object, and the fineness of the M pieces of rendering data is different, and the M is an integer greater than the N ;
  • determining the buffer data in the first target data from the data of the three-dimensional model according to the position of the first observation point includes: determining the distance between the first object in the three-dimensional model and the position of the first observation point according to the position of the first observation point; according to the distance, determining one piece of rendering data from the M pieces as the target rendering data of the first object, where the fineness of the target rendering data satisfies the fineness requirement of the first object and that requirement is related to the distance; and selecting N pieces of rendering data from the M pieces as the buffer data, the fineness of each of the N pieces being less than or equal to the fineness of the target rendering data.
• the method further includes: determining, by the second electronic device, second target data from the data of the three-dimensional model according to the position of a second observation point, where the second target data is related to the position of the second observation point; determining, by the second electronic device, third target data according to the first target data and the second target data, and generating first information, where the first target data does not include the third target data, the second target data includes the third target data, the first information is used to indicate fourth target data to be deleted, the first target data includes the fourth target data, and the second target data does not include the fourth target data; and sending, by the second electronic device, an index of the third target data and the first information to the first electronic device.
• the fourth aspect of the present application provides a data processing device, including: a receiving unit, a processing unit, and a sending unit; the receiving unit is configured to receive a first data request from a second electronic device, where the first data request includes the position of a first observation point; the processing unit is configured to determine first target data from the data of a three-dimensional model according to the first data request, where the first target data includes rendering data for performing rendering of the three-dimensional model, the rendering data includes rendering data of a first object and rendering data of a second object, the fineness of the rendering data of the first object is higher than that of the rendering data of the second object, and the distance between the first object and the position of the first observation point is smaller than the distance between the second object and the position of the first observation point; and the sending unit is configured to send the first target data to the second electronic device.
• the data of the three-dimensional model includes multiple pieces of rendering data corresponding to the first object, and the multiple pieces of rendering data have different finenesses; the processing unit is further configured to: determine, according to the first data request, the distance between the first object in the three-dimensional model and the position of the first observation point; and select, according to the distance and the finenesses of the multiple pieces of rendering data, one piece of rendering data from the multiple pieces of rendering data as the rendering data of the first object, where the fineness of the rendering data of the first object has a negative correlation with the distance.
• the first piece of rendering data in the multiple pieces of rendering data includes multiple sets of rendering data, each set of rendering data in the multiple sets of rendering data is used for rendering to obtain the first object, and the finenesses of the multiple sets of rendering data are different; the first piece of rendering data is any one of the multiple pieces of rendering data.
• the multiple pieces of rendering data are obtained by performing mesh reduction processing multiple times on the original rendering data of the first object, and one piece of rendering data is obtained each time the mesh reduction processing is performed.
• the multiple sets of rendering data included in the first piece of rendering data are obtained by performing mesh reduction processing on the original rendering data of the first object the same number of times but with different reduction multiples.
• the processing unit is further configured to: select one piece of rendering data from the multiple pieces of rendering data according to the distance and the finenesses of the multiple pieces of rendering data; and select, according to the performance index and/or network status of the second electronic device, one set of rendering data from the multiple sets of rendering data in the selected piece of rendering data as the rendering data of the first object.
• the performance index includes one or more of the frame rate, rendering delay, and temperature parameters of the second electronic device; the network status includes one or more of bandwidth and network delay.
• the first target data further includes buffer data, where the buffer data includes N pieces of rendering data corresponding to the first object, N is an integer greater than 1, the N pieces of rendering data are all used to execute the rendering of the first object, and the finenesses of the N pieces of rendering data are different.
• the data of the three-dimensional model includes M pieces of rendering data corresponding to the first object, the finenesses of the M pieces of rendering data are different, and M is an integer greater than N;
• the processing unit is further configured to: determine, according to the first data request, the distance between the first object in the three-dimensional model and the position of the first observation point; determine, according to the distance, one piece of rendering data from the M pieces of rendering data as the target rendering data of the first object, where the fineness of the target rendering data satisfies the fineness requirement of the first object, and the fineness requirement of the first object is related to the distance; and select N pieces of rendering data from the M pieces of rendering data as the buffer data, where the finenesses of the N pieces of rendering data are less than or equal to the fineness of the target rendering data.
• the first data request includes a data index, where the data index is used to indicate the first target data, and the data index is determined by the second electronic device according to the position of the first observation point.
  • the receiving unit is further configured to receive a second data request from the second electronic device;
• the processing unit is further configured to determine second target data from the data of the three-dimensional model according to the second data request, where the second target data is related to the position of a second observation point in the three-dimensional model running in the second electronic device;
• the processing unit is further configured to determine third target data according to the first target data and the second target data, and generate first information, where the first target data does not include the third target data, the second target data includes the third target data, the first information is used to indicate fourth target data to be deleted, the first target data includes the fourth target data, and the second target data does not include the fourth target data;
  • the sending unit is further configured to send the third target data and the first information to the second electronic device.
  • the fifth aspect of the present application provides a data generation device, including: an acquisition unit and a processing unit; the acquisition unit is used to acquire a 3D model file, and the 3D model file includes a plurality of patches used to form a 3D model;
• the processing unit is configured to: divide the plurality of patches into a plurality of first patch sets, where each first patch set in the plurality of first patch sets includes a plurality of connected patches, and the number of patches in each first patch set is the same; divide the plurality of first patch sets into a plurality of first patch set groups, where each first patch set group in the plurality of first patch set groups includes a plurality of the first patch sets, and the plurality of first patch set groups are used to form a first three-dimensional model; perform patch reduction processing on each first patch set group in the plurality of first patch set groups to obtain a plurality of second patch set groups with a reduced number of patches, where the plurality of second patch set groups are used to form a second three-dimensional model, and the fineness of the first three-dimensional model is higher than that of the second three-dimensional model; and split the plurality of second patch set groups to obtain a plurality of second patch sets, where the number of patches in each second patch set in the plurality of second patch sets is the same as the number of patches in each first patch set in the plurality of first patch sets.
• the processing unit is further configured to: perform patch reduction processing on each first patch set group in the plurality of first patch set groups to obtain a plurality of third patch set groups with a reduced number of patches, where the plurality of third patch set groups are used to form a third three-dimensional model, and the number of patches in each third patch set group in the plurality of third patch set groups is different from the number of patches in each second patch set group in the plurality of second patch set groups.
  • the number of patches in each of the second patch sets is 1/N of the number of patches in each of the first patch sets, where N is an integer greater than 1;
  • the number of patches in each of the third patch sets is 1/M of the number of patches in each of the first patch sets, where M is an integer greater than N.
• the processing unit is further configured to: generate a tree structure according to the plurality of first patch sets and the plurality of second patch sets, where the tree structure includes a plurality of first nodes and a plurality of second nodes, the plurality of first nodes respectively correspond to the plurality of first patch sets, the plurality of second nodes respectively correspond to the plurality of second patch sets, and each node in the plurality of second nodes is connected to a plurality of nodes in the plurality of first nodes.
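A minimal sketch of such a tree, assuming (purely for illustration) that every coarse second node is the parent of the group of fine first patch sets it was reduced from; the node naming is hypothetical:

```python
def build_tree(first_sets, group_size):
    """Map each coarse (second) node to the fine (first) patch sets
    in the group it was produced from."""
    tree = {}
    for i in range(0, len(first_sets), group_size):
        coarse = f"L1-{i // group_size}"  # hypothetical node name
        tree[coarse] = first_sets[i:i + group_size]
    return tree
```

Eight fine sets grouped four at a time yield two coarse nodes, each connected to four fine nodes, matching the one-to-many connection described above.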
• each patch in the plurality of patches is composed of a plurality of vertices; the processing unit is specifically configured to: determine a plurality of target vertices in a first patch set group, where the first patch set group is any one of the plurality of first patch set groups; and lock the plurality of target vertices and regenerate a plurality of patches to obtain a second patch set group, where the number of patches in the second patch set group is smaller than the number of patches in the first patch set group, and the second patch set group is one of the plurality of second patch set groups.
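One way to picture vertex locking during patch reduction is a toy edge collapse that never removes boundary vertices. The fan mesh and the collapse policy below are illustrative assumptions, not the application's algorithm:

```python
from collections import Counter

def boundary_vertices(tris):
    """Vertices on edges used by only one triangle (the mesh border)."""
    edge_count = Counter()
    for a, b, c in tris:
        for e in ((a, b), (b, c), (c, a)):
            edge_count[tuple(sorted(e))] += 1
    locked = set()
    for (u, v), n in edge_count.items():
        if n == 1:
            locked.update((u, v))
    return locked

def collapse_once(tris, locked):
    """Merge one unlocked vertex into a neighbour (edge collapse);
    locked vertices are never removed, so the border stays intact."""
    for t in tris:
        for u, v in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
            if v not in locked:
                merged = []
                for tri in tris:
                    tri2 = tuple(u if x == v else x for x in tri)
                    if len(set(tri2)) == 3:  # drop degenerate triangles
                        merged.append(tri2)
                return merged
    return tris  # nothing left to collapse
```

Collapsing the unlocked centre vertex of an 8-triangle fan leaves 6 triangles while every border vertex survives, which is the cracked-edge problem the locking step is meant to avoid.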
• the sixth aspect of the present application provides an electronic device, which may include a processor, the processor is coupled to a memory, the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the method described in the first to third aspects is implemented. For the steps in each possible implementation of the first to third aspects executed by the processor, reference may be made to the first to third aspects for details, which are not repeated here.
• the seventh aspect of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when it runs on a computer, the computer is caused to execute the method described in any implementation of the first to third aspects.
  • An eighth aspect of the present application provides a circuit system, where the circuit system includes a processing circuit configured to execute the method described in any one of the above first to third aspects.
  • a ninth aspect of the present application provides a computer program product, which, when run on a computer, causes the computer to execute the method described in any one of the above-mentioned first to third aspects.
  • FIG. 1 is a schematic diagram of downloading a resource file of a three-dimensional model in the related art
  • FIG. 2 is a schematic structural diagram of an electronic device 101 provided in an embodiment of the present application.
  • Fig. 3 is a schematic flowchart of the generation and processing of a 3D model data provided by the embodiment of the present application;
  • FIG. 4 is a schematic flowchart of a method 400 for generating three-dimensional model data provided by an embodiment of the present application
  • Fig. 5 is a schematic diagram of the data structure of a three-dimensional model file provided by the embodiment of the present application.
  • FIG. 6 is a schematic diagram of an object in a three-dimensional model provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a patch reduction process provided by an embodiment of the present application.
  • Fig. 8 is a schematic diagram of a generation process of three-dimensional model data provided by the embodiment of the present application.
  • FIG. 9 is a schematic diagram of generating an undirected graph based on patches in a three-dimensional model provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of generating multi-level three-dimensional model data provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a tree structure provided by an embodiment of the present application.
  • FIG. 12A is a schematic diagram of the linked list structure provided by the embodiment of the present application.
  • FIG. 12B is a schematic diagram of a three-dimensional model rendered based on three-dimensional model data of different lightweight levels provided by the embodiment of the present application;
  • FIG. 13 is a schematic flowchart of a method 1300 for processing 3D model data provided by an embodiment of the present application.
  • FIG. 14 is a schematic flow chart of determining three-dimensional model data provided by the embodiment of the present application.
  • Fig. 15 is a schematic diagram of a rendering ball and a buffer ball provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a multi-layer rendering ball and a multi-layer buffer ball provided by an embodiment of the present application.
  • Fig. 17 is a rendering schematic diagram when the position of the observation point changes according to the embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a data processing device provided in an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a data generation device provided by an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • a 3D model is a polygonal representation of an object, usually displayed on a computer or other video device. Objects shown may be real-world entities or fictional objects. Anything that exists in physical nature can be represented by a three-dimensional model.
• a 3D model is composed of vertices and triangular patches: edge information is defined through vertex information, triangular patch information is then formed through the edge information, and the final 3D model is composed of the individual triangular patches.
  • Mesh is a proper term in the field of 3D computer graphics and solid modeling.
  • a Mesh consists of a set of vertices, edges, and patches that define the shape of a three-dimensional object.
• Simple shapes such as cubes, spheres, and ellipsoids, as well as complex shapes such as rocks, trees, and lakes, can be assembled from groups of meshes.
• Mesh grouping is the splitting of a large complex mesh into small simple meshes, such as splitting rocks out of a mountain or bricks out of a building.
• 3D data format: the organization method and data standard of the rendering data used by 3D applications at runtime is called the data format; common data formats include glb, fbx, and obj.
• 3D streaming media data: after the 3D data used for rendering is specially processed, it is transmitted over the network in segments and on demand, without downloading all the data at once. This technique solves the problem of insufficient storage space on the terminal device.
• Levels of detail (LOD): rendering resources are allocated according to the position and importance of object model nodes in the display environment, and the face count and detail of unimportant objects are reduced to achieve efficient rendering. If the camera is far from an object, a low-precision model is used; as the camera approaches the object, the object model is replaced from low precision to medium precision and then to high precision.
• N-level lightweighting: in the field of 3D models, lightweighting refers to simplifying a complex model to improve rendering performance at the cost of reduced rendering quality. N-level lightweighting refers to processing the original model into N model copies of different fineness. For example, level-1 lightweighting reduces the number of patches of the 3D model to 1/2, level-2 lightweighting reduces it to 1/4, and so on.
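The halving scheme in the example can be written down directly; a minimal sketch:

```python
def patches_at_level(original_patches, level):
    """Level-n lightweighting keeps 1/2**n of the original patch count,
    following the 1/2, 1/4, ... example above."""
    return original_patches // (2 ** level)
```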
• Edge locking ("overlock"): during model lightweighting, vertices on the edges of the model are skipped; simply put, if the current vertex lies on an edge of the model, it is not simplified. This operation mainly prevents cracked or broken edges after the model is lightweighted.
• Rendering ball: a data buffer composed of the 3D streaming media data directly used for rendering. The size of the buffer is affected by the performance of the terminal device and the network, and the increase or decrease of its data can be managed dynamically through a certain algorithm.
• Buffer ball: a data buffer composed of 3D streaming media data held as rendering candidates. This buffer is used to improve the hit rate of rendering targets. Its size is likewise affected by device and network performance; its original data is downloaded from the server, and the increase or decrease of its data can also be managed dynamically through a certain algorithm.
• Rendering culling: removing certain models through certain algorithms and strategies, improving rendering efficiency and performance by reducing the number of objects that need to be rendered. For example, objects behind the field of view, occluded objects, and objects that are too far away can be removed from the rendering buffer.
• Cascade coordinate system: multiple levels of coordinate systems cascaded together, where the coordinates of an upper-level node record only the value relative to the lower-level coordinate system rather than the complete absolute value. For example, for a node in the lower coordinate system with coordinates (6666.6666, 6666.6666, 6666.6666), if the upper-level node is offset from it by only 1.1111 on each of the x-, y-, and z-axes, the coordinates of the upper-level node can be recorded as (1.1111, 1.1111, 1.1111) instead of (6667.7777, 6667.7777, 6667.7777).
  • the cascaded coordinate system can greatly reduce the space occupied by data storage, and solve the error problem caused by insufficient data representation digits in ultra-large-scale scenarios.
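The example above amounts to storing an offset and reconstructing the absolute position on demand; a minimal sketch:

```python
def absolute(lower, offset):
    """Absolute coordinates of an upper node = lower-node coordinates
    plus the stored relative offset (rounded to 4 decimals, as in the example)."""
    return tuple(round(l + o, 4) for l, o in zip(lower, offset))
```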
  • a large 3D scene is divided into multiple smaller 3D scenes on a server, and 3D resource files corresponding to each smaller 3D scene are generated.
  • the terminal device downloads the corresponding 3D resource file from the server and executes the rendering of the current scene.
  • the terminal device downloads the 3D resource file corresponding to the new scene from the server, and deletes the previously downloaded 3D resource file.
• FIG. 1 is a schematic diagram of downloading a resource file of a 3D model in the related art. For some level-based games, each level in the game can be packaged as an independent resource file; when the player enters the corresponding level, the resource file corresponding to the current level is downloaded from the server, and the resource file corresponding to the previous level is deleted.
  • the embodiment of the present application provides a method for generating and processing 3D model data.
• the large 3D data files are split into a large number of small 3D data files and stored in the database.
• the first electronic device can obtain the data request sent by the second electronic device in real time, determine, from a large amount of three-dimensional model data according to the data request, the part of the data related to the position of the observation point of the three-dimensional model in the second electronic device, and then return the determined part of the data to the second electronic device, ensuring that the second electronic device can render the 3D model in real time.
  • the first electronic device sends 3D model data to the second electronic device in the form of 3D streaming media data, that is, each time the data sent to the second electronic device is a small part of data related to the position of the observation point, therefore It does not need to take too long to realize the transmission of 3D data, which ensures the smooth running of 3D applications.
• the first electronic device and the second electronic device in the embodiments of the present application may be a server, a smart phone (mobile phone), a personal computer (PC), a notebook computer, a tablet computer, a smart TV, a mobile internet device (MID), a wearable device (such as a smart watch, smart glasses, or a smart helmet), a virtual reality (VR) device, an augmented reality (AR) device, or a wireless electronic device in industrial control, self driving, remote medical surgery, smart grid, transportation safety, smart city, smart home, and the like.
  • the following embodiments do not specifically limit the specific form of the electronic device.
  • the first electronic device may be, for example, a server
  • the second electronic device may be, for example, a mobile device such as a smart phone, a notebook computer, or a tablet computer.
  • FIG. 2 is a schematic structural diagram of an electronic device 101 provided in an embodiment of the present application.
  • the electronic device 101 includes a processor 103 , and the processor 103 is coupled to a system bus 105 .
  • the processor 103 may be one or more processors, each of which may include one or more processor cores.
• the electronic device 101 further includes a display adapter (video adapter) 107, which can drive a display 109, and the display 109 is coupled to the system bus 105.
  • the system bus 105 is coupled to an input-output (I/O) bus through a bus bridge 111 .
  • the I/O interface 115 is coupled to the I/O bus.
• the I/O interface 115 communicates with a variety of I/O devices, such as an input device 117 (such as a touch screen), an external memory 121 (such as a hard disk, a floppy disk, a CD, or a flash drive), a multimedia interface, a transceiver 123 (which can send and/or receive radio communication signals), a camera 155 (which can capture still and moving digital video images), and an external USB port 125.
  • the interface connected to the I/O interface 115 may be a USB interface.
  • the processor 103 may be any conventional processor, including a reduced instruction set computing (reduced instruction set computing, RISC) processor, a complex instruction set computing (complex instruction set computing, CISC) processor or a combination of the above.
  • the processor may be a special purpose device such as an ASIC.
  • the electronic device 101 can communicate with the software deployment server 149 through the network interface 129 .
  • the network interface 129 is a hardware network interface, such as a network card.
  • the network 127 may be an external network, such as the Internet, or an internal network, such as Ethernet or a virtual private network (virtual private network, VPN).
  • the network 127 may also be a wireless network, such as a WiFi network, a cellular network, and the like.
  • Hard disk drive interface 131 is coupled to system bus 105 .
• the hard disk drive interface 131 is connected to the hard disk drive 133.
  • Internal memory 135 is coupled to system bus 105 .
• data running on the internal memory 135 may include an operating system (OS) 137 of the electronic device 101, application programs 143, and a scheduler.
  • the processor 103 can communicate with the internal memory 135 through the system bus 105, and fetches instructions and data in the application program 143 from the internal memory 135, thereby implementing program execution.
  • the operating system includes a Shell 139 and a kernel (kernel) 141.
  • Shell 139 is an interface between the user and the kernel of the operating system.
  • Shell 139 is the outermost layer of the operating system.
  • Shell 139 manages the interaction between the user and the operating system: waiting for user input, interpreting user input to the operating system, and processing various operating system output.
  • Kernel 141 consists of those parts of the operating system that manage memory, files, peripherals, and system resources.
  • the kernel 141 directly interacts with the hardware.
• the operating system kernel usually runs processes and provides inter-process communication, CPU time-slice management, interrupt handling, memory management, I/O management, and the like.
  • the application program 143 includes programs related to instant messaging.
  • the electronic device 101 can download the application program 143 from the software deployment server 149 .
  • FIG. 3 is a schematic flow chart of generating and processing 3D model data provided by an embodiment of the present application.
  • the user can upload the original file of the 3D model to the cloud in advance, and the server in the cloud processes the original file of the 3D model to generate 3D model data that can be used later.
  • the server can split the mesh in the original file of the 3D model, and recombine to obtain multiple mesh sets. Then, the server performs mesh reduction processing, that is, lightweight processing, on the reorganized multiple mesh sets, so as to obtain a model copy with reduced fineness. Further, the server may merge the mesh sets after the mesh reduction processing is performed, and perform mesh reduction processing on the merged mesh set, so as to obtain a model copy with a further reduced fineness. Multi-level data can be obtained by cyclically performing the combined patch collection and patch reduction processing operations, and the data at each level can be used to render the same object in the model, but the fineness of the data at different levels is different.
  • the server can perform N-level lightweighting on the 3D model, that is, when generating multi-level data, use different lightweight levels to perform mesh reduction processing on the mesh set, thereby generating multi-level, Multi-level model data. Different levels and different levels of model data correspond to different degrees of precision.
  • the server generates relevant index data for the generated 3D model data, so that the required data can be quickly located.
  • the 3D model data generated by the server may be stored in a 3D cloud database, so as to facilitate the server to search and acquire.
• the server can determine and obtain the data required for rendering from the 3D cloud database according to the observation point position in the 3D model fed back by the terminal device. Then, the server can assemble the three-dimensional streaming media data according to the acquired data, and remove the data related to occluded objects, so as to reduce the transmitted data. Finally, the server generates the frame data related to the buffer ball and the rendering ball, and sends the generated frame data to the terminal device.
  • the frame data corresponding to the buffer ball is 3D model data for buffering, and the frame data corresponding to the rendering ball is data for directly rendering the 3D model. In this way, after receiving the frame data sent by the server, the terminal device can perform rendering of the 3D model according to the received frame data.
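The split between rendering-ball data, buffer-ball data, and culled data can be pictured as two concentric spheres around the observation point; the radii below are assumed tunables, not values from the application:

```python
def classify(obj_pos, viewpoint, render_radius, buffer_radius):
    """Inside the rendering ball -> render now; inside the buffer ball
    -> keep as a rendering candidate; outside both -> cull."""
    d = sum((a - b) ** 2 for a, b in zip(obj_pos, viewpoint)) ** 0.5
    if d <= render_radius:
        return "render"
    if d <= buffer_radius:
        return "buffer"
    return "cull"
```

This matches the description that buffer-ball data is held for buffering while rendering-ball data is used directly for rendering.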
  • FIG. 4 is a schematic flowchart of a method 400 for generating 3D model data provided by an embodiment of the present application. As shown in FIG. 4, the method 400 includes the following steps 401-405.
• Step 401: a 3D model file is acquired, and the 3D model file includes a plurality of patches used to form a 3D model.
  • the electronic device for generating 3D model data may obtain a 3D model file to be processed, and the 3D model file may be uploaded to the electronic device by a user (such as a designer of the 3D model).
  • the 3D model file may include data of multiple meshes, and the shape of each object in the 3D model is defined by the data of multiple meshes.
  • FIG. 5 is a schematic diagram of a data structure of a 3D model file provided by an embodiment of the present application.
• the data structure of the 3D model file includes multiple hierarchical levels, namely: scene, node, mesh, and material.
  • scene level is the entry point of the entire scene in the 3D model, and the scene level includes the rendering related parameters of the entire scene in the 3D model;
  • node level is a node in the scene level, and the node level is the organizer of the scene in the 3D model.
  • the mesh level is located under the node level, and is used to describe the geometric objects appearing in the scene
  • the material level is located under the mesh level
  • the material level includes parameters used to define the appearance of the model of the geometric object.
• the material level includes material attribute parameters, which may include parameters such as base color, normal, highlight, roughness, and metallicity.
  • each mesh is composed of a set of vertices, edges, and patches.
  • the edge information can be defined through the vertex information, and then the edge information is used to form the patch information, and finally the mesh is obtained by combining each patch.
• the objects in the 3D model can be defined by meshes obtained by combining multiple patches. For example, refer to FIG. 6, which is a schematic diagram of an object in a three-dimensional model provided by an embodiment of the present application.
• the human head model can be obtained by connecting a plurality of vertices located at different positions to obtain multiple edges; the interconnected edges form a plurality of patches, and the plurality of patches can then be used to describe the shape of the entire human head model.
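The vertex → edge → patch relationship can be made concrete: given triangular patches as vertex-index triples, the edge set falls out directly. A small sketch:

```python
def edges_of(tris):
    """Derive the undirected edge set from triangular patches."""
    edges = set()
    for a, b, c in tris:
        edges.update(tuple(sorted(e)) for e in ((a, b), (b, c), (c, a)))
    return edges
```

Two triangles sharing an edge yield five distinct edges, since the shared edge is counted once.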
• Step 402: divide the plurality of patches into a plurality of first patch sets, where each first patch set in the plurality of first patch sets includes a plurality of connected patches, and the number of patches in each first patch set is the same.
  • the patches included in each of the plurality of first patch sets are connected to each other, and there is no spatially separated patch.
  • 10000 patches are divided into 100 patch sets, and each patch set includes 100 connected patches.
• that is, the number of patches in each patch set obtained by the division is fixed.
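One plausible way to form equal-size sets of connected patches is a greedy breadth-first walk over patch adjacency (patches sharing an edge). This is an illustrative sketch, not the application's algorithm, and it assumes the patch count divides evenly into the set size:

```python
from collections import deque

def partition(tris, size):
    """Greedily group `size` connected patches at a time."""
    # Adjacency: two patches are neighbours if they share an edge.
    by_edge = {}
    for i, (a, b, c) in enumerate(tris):
        for e in ((a, b), (b, c), (c, a)):
            by_edge.setdefault(tuple(sorted(e)), []).append(i)
    adj = {i: set() for i in range(len(tris))}
    for ids in by_edge.values():
        for i in ids:
            adj[i].update(j for j in ids if j != i)
    unassigned, groups = set(adj), []
    while unassigned:
        seed = min(unassigned)
        group, q = [], deque([seed])
        while q and len(group) < size:
            i = q.popleft()
            if i in unassigned:
                unassigned.remove(i)
                group.append(i)
                q.extend(sorted(adj[i] & unassigned))
        # note: if BFS exhausts a component early, the group may be short
        groups.append(group)
    return groups
```

On a strip of four triangles, grouping by twos yields two sets of connected patches.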
• Step 403: divide the multiple first patch sets into multiple first patch set groups, where each first patch set group in the multiple first patch set groups includes multiple first patch sets, and the multiple first patch set groups are used to form the first three-dimensional model.
• it should be understood that each mesh includes multiple patches used to define the shape of an object, and the size and number of patches in each mesh can be different.
• therefore, the electronic device divides the multiple first patch sets in the 3D model file into multiple first patch set groups, and each first patch set group among the multiple first patch set groups includes multiple first patch sets.
• assume that the mesh includes a total of A patches.
  • Step 404 Perform patch reduction processing on each of the plurality of first patch set groups to obtain a plurality of second patch set groups with reduced number of patches.
• the plurality of second patch set groups are used to form a second three-dimensional model, and the fineness of the first three-dimensional model is higher than that of the second three-dimensional model.
  • mesh A composed of a plurality of patches can be converted into another mesh B by performing patch reduction processing.
• compared with mesh A, mesh B has fewer vertices, edges, and patches. Therefore, performing patch reduction processing on each of the plurality of first patch set groups can reduce the patches in each patch set group, thereby obtaining a plurality of second patch set groups with reduced numbers of patches.
• the plurality of first patch set groups are used to form the original first three-dimensional model, and the plurality of second patch set groups are used to form the second three-dimensional model.
• since the second 3D model is composed of patch sets after patch reduction, the fineness of the second 3D model is lower than that of the original first 3D model, but the data volume of the second 3D model is also smaller than that of the original first 3D model.
• for example, in the second patch set groups, the number of patches in each patch set group after the patch reduction process is 1/2 of that before the patch reduction process; that is, each patch set group among the multiple first patch set groups is reduced by 1/2 of its patches.
  • the mesh reduction process may be performed after the edge locking is performed on the mesh set.
• that is, the vertices at the edge positions of the patch set are kept unchanged, and only the vertices inside the patch set are reduced, so as to realize the patch reduction process.
• specifically, the electronic device may determine a plurality of target vertices at edge positions in the first patch set group, where the first patch set group is any one of the plurality of first patch set groups. Since the multiple patches in the first patch set group are connected to each other, the multiple target vertices at the edge positions of the first patch set group are actually the outermost vertices of the multiple patches in the first patch set group.
• then, the electronic device locks the plurality of target vertices and regenerates a plurality of patches to obtain a second patch set group, where the number of patches in the second patch set group is smaller than the number of patches in the first patch set group, and the second patch set group is one of the plurality of second patch set groups.
• that is, the electronic device may keep the positions of the plurality of target vertices unchanged, remove a part of the vertices other than the target vertices, and reconnect the remaining vertices, thereby regenerating fewer patches to obtain a second patch set group that also includes a plurality of connected patches.
  • the patch set 1 on the left includes 7 vertices and 6 patches, among which there are 6 vertices at edge positions.
• the vertex in the middle of patch set 1 is removed, and the remaining 6 vertices are reconnected to obtain patch set 2 on the right; the vertices at edge positions in patch set 2 remain unchanged.
  • the patch set 2 includes 6 vertices and 4 patches, and the number of patches is reduced from 6 to 4.
  • the patch set 3 includes 10 vertices and 10 patches, and there are 8 vertices at the edge positions.
• remove the two vertices in the middle of patch set 3, regenerate a new vertex in the middle, and then connect the remaining 9 vertices to obtain patch set 4 on the right; the vertices at edge positions in patch set 4 remain unchanged.
  • the patch set 4 includes 9 vertices and 8 patches, and the number of patches is reduced from 10 to 8.
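The two worked examples above (6 patches reduced to 4, 10 reduced to 8) both lock the boundary vertices and re-triangulate only the interior. A minimal Python sketch of the first case follows, assuming the interior collapses to a triangle fan over the boundary loop (the function name and fan strategy are illustrative, not the embodiment's actual algorithm):

```python
# Edge-locked reduction sketch: the boundary (target) vertices are kept
# unchanged; interior vertices are dropped and the region is re-triangulated
# as a triangle fan anchored at the first boundary vertex.
def reduce_fan(boundary_loop):
    """boundary_loop: ordered ids of the locked boundary vertices."""
    anchor = boundary_loop[0]
    return [(anchor, boundary_loop[i], boundary_loop[i + 1])
            for i in range(1, len(boundary_loop) - 1)]

# Patch set 1 of the example: a hexagonal boundary (6 locked vertices) with
# one interior vertex and 6 patches; removing the centre leaves 4 patches.
new_patches = reduce_fan([0, 1, 2, 3, 4, 5])
```

Because only the six boundary vertices appear in the output, the edges shared with neighbouring patch sets are preserved exactly, which is the point of the edge locking.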
• Step 405: split the multiple second patch set groups to obtain multiple second patch sets, where the number of patches in each second patch set among the multiple second patch sets is the same as the number of patches in each first patch set among the plurality of first patch sets.
• that is, the multiple second patch set groups obtained after the patch reduction process can be split and recomposed into multiple second patch sets, so as to ensure that the number of patches in each patch set is fixed.
• the above-mentioned steps 402-405 can be executed cyclically; that is, the patch sets are divided into patch set groups, patch reduction processing and splitting are performed on the patch set groups, and the patch set groups after each round of patch reduction are used to obtain patch sets that have undergone different numbers of patch reduction passes.
  • the accuracy of the three-dimensional model formed by the mesh set after performing different times of mesh reduction processing is different, so that the three-dimensional model data with different accuracy can be obtained.
• that is, each time the electronic device re-divides the patch sets into patch set groups and performs patch reduction processing on them, a new piece of three-dimensional model data is obtained.
• the multiple pieces of 3D model data with different fineness obtained by the electronic device can form multi-layer 3D model data; the fineness of the 3D model data at different layers is different, and the more patch reduction passes a layer has undergone, the lower its fineness.
• for example, after each patch set group is reduced by 1/2 of its patches, the total number of patches becomes 1/2 of the original number.
• all the patches obtained after the patch reduction process are re-split into multiple new patch sets (that is, multiple second patch sets) with a fixed number of patches per set, so the number of second patch sets is 1/2 of the number of first patch sets.
• the plurality of second patch sets are then divided into multiple patch set groups; the number of these patch set groups is 1/2 of the number of first patch set groups, and these patch set groups can be used again to perform patch reduction processing.
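Using the running numbers from this embodiment (10000 patches, 100 patches per set, 2 sets per group, reduction factor 2), the cyclic execution of steps 402-405 can be tabulated with a short bookkeeping sketch (no geometry is processed; all names and numbers are illustrative):

```python
# Each cycle: halve the total patch count (factor-2 reduction), re-split into
# fixed-size patch sets, then regroup; stop once only one group remains.
def iterate_levels(total_patches, patches_per_set, sets_per_group, factor):
    levels = []
    while True:
        total_patches //= factor                    # patch reduction pass
        n_sets = total_patches // patches_per_set   # re-split, fixed set size
        n_groups = n_sets // sets_per_group         # regroup for next pass
        levels.append((total_patches, n_sets, n_groups))
        if n_groups <= 1:
            break
    return levels

levels = iterate_levels(10000, 100, 2, 2)
# first cycle: 5000 patches, 50 sets, 25 groups
```

Each tuple in `levels` corresponds to one layer of 3D model data with progressively lower fineness.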
  • 3D model data with different fineness and data volume can be obtained, so as to meet model rendering requirements in different scenarios.
• the electronic device can select 3D model data of the corresponding fineness according to the observation point position in the terminal device or the performance of the terminal device, so as to meet the requirements of different terminal devices while minimizing the amount of model data that needs to be transmitted and ensuring real-time rendering of the 3D model.
• in addition to obtaining 3D model data with different fineness through multiple rounds of patch reduction, this embodiment can also use different patch reduction factors in the patch reduction processing stage to further obtain 3D model data with different fineness.
• that is, one piece of 3D model data with different fineness can be obtained by performing patch reduction processing on the patch set groups in the 3D model data; and if the patch reduction process is performed on the 3D model data with different patch reduction factors, multiple pieces of 3D model data with different fineness can further be obtained.
• specifically, the above-mentioned method 400 may further include: performing a patch reduction process on each first patch set group in the plurality of first patch set groups to obtain a plurality of third patch set groups with a reduced number of patches, where the plurality of third patch set groups are used to form a third three-dimensional model, and the number of patches in each third patch set group among the plurality of third patch set groups is different from the number of patches in each second patch set group among the plurality of second patch set groups.
  • This step is similar to the above-mentioned step 404, and the patch reduction process is performed on each of the multiple first patch set groups. The difference is that the mesh reduction amount of the mesh reduction process executed in this step is greater than the mesh reduction amount of the mesh reduction process executed in step 404 . Therefore, the reduced number of patches in the plurality of third patch sets obtained in this step is greater than the reduced number of patches in the plurality of second patch sets obtained in step 404 .
• for example, assume that the patch reduction process in step 404 is based on a patch reduction factor of 2, and that in the multiple second patch set groups obtained after the patch reduction process, the number of patches in each patch set group is 200.
  • the number of patches in each of the multiple third patch set groups obtained after performing the patch reduction process based on the patch reduction factor of 4 is 100.
• in this case, the patch reduction amount of each patch set group in the third patch set groups is 300, which is greater than the patch reduction amount (200) of each patch set group in the second patch set groups.
• both the third 3D model and the second 3D model are obtained after one patch reduction pass, that is, the model data corresponding to the third 3D model and the second 3D model belong to the same layer of model data; however, the third 3D model and the second 3D model are obtained by performing patch reduction processing with different patch reduction factors, so the model data corresponding to the third 3D model and the second 3D model belong to different lightweight levels of model data.
• moreover, the patch reduction factor corresponding to the third 3D model is higher, that is, the lightweight level of the third 3D model is higher, so the lightweight level of the model data corresponding to the third 3D model is higher than that of the model data corresponding to the second 3D model.
  • each patch set in the first patch set may include the same number of patches.
  • the patch reduction process may be performed on the patch set based on different patch reduction factors, so that the number of patches in the patch set after the patch reduction has a multiple relationship with that before the patch reduction.
• the number of patches in each of the second patch sets is 1/N of the number of patches in each of the first patch sets, where N is an integer greater than 1; the number of patches in each of the third patch sets is 1/M of the number of patches in each of the first patch sets, where M is an integer greater than N.
  • N can be 2, and M can be 3.
• in order to facilitate indexing 3D model data at a specific fineness, a corresponding tree structure can be generated in this embodiment, so that the corresponding 3D model data can be indexed based on the layer and level structure of the tree.
• specifically, the above method 400 may further include: generating a tree structure according to the plurality of first patch sets and the plurality of second patch sets, where the tree structure includes a plurality of first nodes and a plurality of second nodes; the plurality of first nodes respectively correspond to the plurality of first patch sets, that is, each patch set in the plurality of first patch sets is represented by a first node; and the plurality of second nodes respectively correspond to the plurality of second patch sets, that is, each patch set in the plurality of second patch sets is represented by a second node.
• in addition, each patch set in the plurality of second patch sets corresponds to multiple patch sets in the plurality of first patch sets; that is, in the tree structure, each node in the plurality of second nodes is connected to multiple nodes in the plurality of first nodes.
• in addition, each of the above-mentioned multiple first nodes and multiple second nodes may represent patch sets under multiple different lightweight levels.
• for example, a node may represent both one patch set in the plurality of second patch sets and one patch set in the plurality of fifth patch sets; the different patch sets represented by this node can be distinguished by their different levels.
  • FIG. 8 is a schematic diagram of a process of generating 3D model data provided by an embodiment of the present application. As shown in FIG. 8, the process of generating three-dimensional model data includes the following steps 801-808.
  • Step 801 Construct an undirected graph by using the patches in the original 3D model data as vertices and the connections between the patches as edges.
  • an undirected graph can be constructed with the patches as vertices and the connections between the patches as edges.
  • an undirected graph refers to a graph whose edges have no direction. Assume that the number of patches used to construct the undirected graph in this embodiment is s.
  • FIG. 9 is a schematic diagram of generating an undirected graph based on patches in a 3D model according to an embodiment of the present application.
• by taking the patches as vertices and the connection relationships between the patches as the edges between the vertices, the connection relationship among the various patches can be clearly displayed.
  • Step 802 Based on the graph segmentation algorithm, segment the undirected graph to obtain b patch set groups.
• the graph segmentation algorithm divides the vertices of a network into non-overlapping groups of a specified size and number while minimizing the number of edges between the groups. Since patches are represented as vertices in the undirected graph, the vertices in the undirected graph can be segmented through the graph segmentation algorithm to obtain b vertex groups; each vertex group actually corresponds to multiple patch sets, that is, b patch set groups are obtained.
  • Step 803 Lock the edges of the b mesh set groups, and perform mesh reduction processing on the b mesh set groups using G lightweight levels respectively, so as to obtain G pieces of 3D model data.
  • each piece of 3D model data includes b mesh set sets after mesh reduction, and the number of mesh reductions is different among different pieces of 3D model data.
  • the mesh reduction factors corresponding to the three lightweight levels are 2, 3, and 5 respectively.
• for example, based on the lightweight level with a patch reduction factor of 2, the patch reduction process is performed on the b patch set groups to obtain b patch set groups in which the number of patches is c/2; based on the lightweight level with a patch reduction factor of 3, the patch reduction process is performed on the b patch set groups to obtain b patch set groups in which the number of patches is c/3; and based on the lightweight level with a patch reduction factor of 5, the patch reduction process is performed on the b patch set groups to obtain b patch set groups in which the number of patches is c/5.
  • each patch in the original 3D model data has corresponding normal data, surface texture data, texture data and other relevant data for rendering the model.
• therefore, after the patch reduction process, the normal data, surface texture data, texture map data, and other data used to render the model can be regenerated for the patch sets in each piece of 3D model data.
• Step 805: for the plurality of patch set groups after the patch reduction process, merge every N patch set groups into a new patch set group, so that the number of patches in the new patch set group is the same as in a patch set group before the patch reduction process, that is, keep the number of patches in a patch set group unchanged.
  • N can be understood as the merging rate of the patch set, which can be determined according to the actual situation.
  • the N value corresponding to the piece of 3D model data can be 2, that is, every two mesh sets are merged into a new mesh set.
• for different pieces of 3D model data, the N values may also be different.
• Step 806: G lightweight levels are used to perform the patch reduction process on the merged patch set groups, so as to obtain G pieces of 3D model data.
• Step 808: judge whether the number of remaining patch set groups is greater than 1.
• if the number of remaining patch set groups is greater than 1, go to step 805 above; if the number of remaining patch set groups is not greater than 1, stop processing the patch set groups.
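The loop of steps 805-808 can be sketched as pure bookkeeping: each pass emits G pieces of model data (one per lightweight level), merges every N reduced groups into one, and stops when a single group remains. The function name and dictionary keys are illustrative assumptions; no geometry is processed here.

```python
# Schedule of the generation loop: track how many patch set groups exist at
# each layer and how many pieces of model data (one per lightweight level,
# i.e. per reduction factor) are emitted for that layer.
def generation_schedule(initial_groups, merge_rate, factors):
    schedule, groups = [], initial_groups
    while True:
        # one piece of 3D model data per lightweight level at this layer
        schedule.append({"groups": groups, "pieces": len(factors)})
        if groups <= 1:
            break
        groups = max(1, groups // merge_rate)   # step 805: merge every N
    return schedule

# b = 8 groups, merge rate N = 2, lightweight factors 2, 3 and 5 (G = 3).
plan = generation_schedule(8, 2, [2, 3, 5])
```

For these numbers the schedule has four layers (8, 4, 2, 1 groups), each with three lightweight variants, which matches the tree of FIG. 11 described below.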
  • multiple mesh reduction processes can be performed on multiple mesh sets, and the three-dimensional model data obtained by each mesh reduction process is regarded as one level of three-dimensional model data.
• within the same layer, by using G lightweight levels to perform the patch reduction process on the patch sets, G pieces of 3D model data with different lightweight levels can be obtained.
  • FIG. 10 is a schematic diagram of generating multi-level 3D model data provided by an embodiment of the present application.
• for example, the 256 patches can be divided into 64 patch set groups, and each patch set group includes 4 patches.
• these 64 patch set groups are the 3D model data of the 0th layer.
• assume that the patch reduction factor corresponding to this lightweight level is 4; after performing the patch reduction process on the above 64 patch set groups, the patches of each of the 64 patch set groups are reduced to 1.
• then, every 4 patch set groups are re-merged into a new patch set group to obtain 16 new patch set groups, and each of the 16 patch set groups includes 4 patches.
• the 16 patch set groups are the 3D model data of the first layer.
  • the number of patches in each of the 16 patch set groups is reduced to one.
• by performing the merging operation on the 16 reduced patch set groups, every 4 patch set groups are re-merged into a new patch set group to obtain 4 new patch set groups, and each of the 4 patch set groups includes 4 patches.
• the 4 patch set groups are the 3D model data of the second layer.
  • the number of patches in each of the four patch set groups is reduced to one.
• finally, every 4 patch set groups are re-merged into a new patch set group to obtain one new patch set group, and this patch set group consists of 4 patches.
• this one patch set group is the 3D model data of the third layer.
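The four layers of this 256-patch example can be reproduced arithmetically: each layer merges every 4 patch set groups (restoring 4 patches per group) while the factor-4 reduction leaves 1 patch per group. The helper below is an illustrative sketch of that count, not part of the embodiment.

```python
# Count the patch set groups at each layer: start from total/patches_per_set
# groups, then divide by the merge rate until a single group remains.
def layer_set_counts(total_patches, patches_per_set, merge_rate):
    counts, sets = [], total_patches // patches_per_set
    while sets >= 1:
        counts.append(sets)
        if sets == 1:
            break
        sets //= merge_rate
    return counts

counts = layer_set_counts(256, 4, 4)   # -> [64, 16, 4, 1]
```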
• based on the multi-layer 3D model data, the electronic device can generate a tree structure in which each node represents a patch set at a certain layer, thereby realizing the indexing of the 3D model data.
  • FIG. 11 is a schematic diagram of a tree structure provided by an embodiment of the present application.
  • the 0th layer of the tree structure includes a total of 8 nodes, and the 8 nodes are respectively used to represent the 8 patch sets of the 0th layer.
  • the first layer of the tree structure includes 4 nodes in total, and the 4 nodes are respectively used to represent the 4 patch sets of the first layer.
• that is, the patch reduction process is performed on the 8 patch sets of the 0th layer, and the reduced patch sets are merged, that is, every two patch sets are merged into one new patch set, thus obtaining the 4 patch sets of the first layer.
  • the second layer of the tree structure includes 2 nodes in total, and the 2 nodes are respectively used to represent the 2 patch sets of the second layer.
• that is, the patch reduction process is performed on the 4 patch sets of the first layer, and the reduced patch sets are merged, that is, every two patch sets are merged into one new patch set, thus obtaining the 2 patch sets of the second layer.
  • the one node is used to represent a collection of patches in the third layer.
• that is, the patch reduction process is performed on the 2 patch sets of the second layer, and the reduced patch sets are merged, that is, every two patch sets are merged into one new patch set, thus obtaining the one patch set of the third layer.
  • multi-level three-dimensional model data can be indexed through each node in the tree structure.
• since the patch set represented by an upper node is obtained by merging the patch sets represented by the multiple lower nodes connected to it, the 3D model data indicated by the upper node and the 3D model data indicated by the multiple lower nodes are used to represent the same objects in the 3D model.
  • each node of each level may also represent 3D model data at multiple lightweight levels.
  • FIG. 12A is a schematic diagram of a linked list structure provided by an embodiment of the present application.
• that is, each node can be represented as a linked list structure as shown in FIG. 12A; the linked list structure includes multiple items, and each item represents the 3D model data of the patch set at a certain lightweight level.
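The index structure described above can be sketched as follows: each tree node stands for a patch set at one layer, its children link the finer-layer sets that were merged into it, and a per-node mapping plays the role of the linked list holding the variants at each lightweight level. The class and attribute names are illustrative assumptions.

```python
# Tree node for indexing multi-layer, multi-level 3D model data.
class PatchSetNode:
    def __init__(self, layer, children=None):
        self.layer = layer
        self.children = children or []   # finer-layer nodes merged into this one
        self.variants = {}               # lightweight level -> model data blob

    def add_variant(self, level, data):
        self.variants[level] = data

# A layer-1 node built by merging two layer-0 nodes; three lightweight
# variants (factors 2, 3, 5) are attached to it, one per level.
leaf_a, leaf_b = PatchSetNode(0), PatchSetNode(0)
root = PatchSetNode(1, children=[leaf_a, leaf_b])
for lvl, factor in enumerate((2, 3, 5)):
    root.add_variant(lvl, f"reduced by factor {factor}")
```

Walking the tree selects a layer; indexing `variants` then selects the lightweight level within that layer, mirroring the two-step lookup the text describes.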
  • FIG. 12B is a schematic diagram of a three-dimensional model rendered based on three-dimensional model data of different lightweight levels provided by an embodiment of the present application.
• as the lightweight level increases, the fineness of the 3D model decreases gradually, but the data volume of the 3D model data corresponding to the 3D model also decreases gradually.
  • the above describes a method for generating 3D model data provided by the embodiment of the present application.
• the following will introduce in detail the processing method of the 3D model data provided by the embodiment of the present application, that is, how to send the generated 3D model data to the electronic device that renders the model.
  • FIG. 13 is a schematic flowchart of a method 1300 for processing 3D model data provided by an embodiment of the present application. As shown in FIG. 13, the method 1300 includes the following steps 1301-1303.
  • Step 1301 the first electronic device receives a first data request from a second electronic device, and the first data request includes the position of a first observation point.
• the first electronic device used to execute the method for processing 3D model data in this embodiment and the above-mentioned electronic device used to execute the method for generating 3D model data may be the same device or different devices; this embodiment does not specifically limit this.
• for example, the first electronic device in this embodiment and the electronic device executing the method for generating 3D model data in the above embodiments are the same server; after generating the corresponding 3D model data, the server can receive the data request of the second electronic device, so as to send the corresponding 3D model data to the second electronic device.
• for another example, the first electronic device in this embodiment and the electronic device executing the method for generating 3D model data in the above embodiments may be different servers; the server for generating 3D model data generates the corresponding 3D model data based on the 3D model file uploaded by the user, and stores the generated 3D model data in a cloud database or sends it to another server that performs the 3D model data processing in this embodiment, and that other server executes the processing of the 3D model data based on the data request of the second electronic device.
  • the first data request of the second electronic device is used to request corresponding 3D model data from the first electronic device, so as to implement rendering of the 3D model data on the second electronic device.
  • the first data request may include a first viewpoint position, and the first viewpoint position is a position in the three-dimensional model running in the second electronic device.
• it can be understood that the rendering of a 3D model is actually based on observing the three-dimensional model from a certain position, and displaying the observed scene on a display device in the form of a 2D image.
  • the rendering of a 3D model can be understood as a 2D image captured by a camera at a certain position. Therefore, after the position of the observation point in the 3D model is determined, the object to be rendered in the 3D model can be determined, so that the 3D model data used for rendering can be determined.
• Step 1302: the first electronic device determines first target data from the data of the 3D model according to the first data request, where the first target data is related to the first observation point position in the 3D model running in the second electronic device.
• after the first electronic device determines the position of the first observation point in the 3D model according to the first data request, it can determine the objects to be rendered in the 3D model (that is, the objects that can be observed from the first observation point position), so that the first target data for rendering can be determined.
  • the first target data includes rendering data for performing three-dimensional model rendering, and the rendering data includes rendering data of the first object and rendering data of the second object.
  • both the first object and the second object are objects that can be observed from the position of the first observation point.
• both the first object and the second object can be a complete object, for example, the first object is a rock and the second object is a clump of grass; the first object and the second object can also each be a part of an object, for example, the first object is a certain side of a rock and the second object is a leaf on a tree.
• both the first object and the second object are composed of multiple patches, so the rendering data of the first object and the rendering data of the second object can be parameters such as vertices, patches, positions, shapes, surface textures, and colors used for rendering the scene.
• the fineness of the rendering data of the first object is higher than the fineness of the rendering data of the second object, and the distance between the first object and the first observation point position is smaller than the distance between the second object and the first observation point position.
• in this way, the amount of data to be transmitted can be reduced, the pressure on bandwidth can be reduced, and the transmission speed of the 3D model data can be improved, so as to ensure the rendering efficiency of 3D applications and avoid 3D application freezing.
• based on the multi-layer 3D model data described above, the rendering data of the first object may be lower-layer 3D model data, while the rendering data of the second object may be higher-layer 3D model data.
• for example, the rendering data of the first object is the 3D model data of the first layer, and the rendering data of the second object is the 3D model data of the second layer, so the fineness of the rendering data of the first object is higher than the fineness of the rendering data of the second object.
  • the data of the three-dimensional model includes multiple pieces of rendering data corresponding to the first object, and the multiple pieces of rendering data have different finenesses.
• specifically, the first electronic device determines the distance between the first object in the three-dimensional model and the first observation point position according to the first observation point position in the first data request; then, the first electronic device selects one piece of rendering data from the multiple pieces of rendering data as the rendering data of the first object according to the distance and the fineness of the multiple pieces of rendering data.
• the fineness of the rendering data of the first object is negatively correlated with the distance; that is, the greater the distance, the lower the fineness of the rendering data of the first object, and the smaller the distance, the higher the fineness of the rendering data of the first object.
• in short, the first electronic device selects one piece of rendering data from multiple pieces of rendering data with different fineness as the rendering data of the first object according to the distance between the first object and the first observation point position, so that the fineness of the selected rendering data does not affect the final imaging quality of the first object while the data volume of the rendering data of the first object is reduced as much as possible.
  • the first electronic device may calculate the roughness value of each piece of rendering data, and determine the selected rendering data by comparing the roughness value of each piece of rendering data with a preset threshold.
  • the roughness value of the rendering data can be used to represent the roughness of the two-dimensional image rendered based on the rendering data.
• specifically, the roughness value f of each piece of rendering data can be calculated and compared with the preset threshold, and finally the piece of rendering data with the lowest fineness whose roughness value f is less than or equal to the preset threshold is selected as the rendering data of the first object.
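The selection rule just described can be sketched in a few lines. The roughness values and the fallback behaviour when nothing passes the threshold are illustrative assumptions; the source only specifies "lowest fineness with f at or below the threshold".

```python
# Among the candidate pieces of rendering data, pick the one with the lowest
# fineness whose roughness value f stays at or below the preset threshold.
def select_rendering_data(candidates, threshold):
    """candidates: list of (fineness, roughness_f); returns chosen fineness."""
    eligible = [(fineness, f) for fineness, f in candidates if f <= threshold]
    if not eligible:
        return max(candidates)[0]        # fall back to the finest data
    return min(eligible)[0]              # lowest fineness that still passes

# Fineness 1-4; coarser data renders rougher. Threshold 0.3 admits 3 and 4,
# and the lower fineness (3) is kept to minimize the transmitted data volume.
choice = select_rendering_data([(1, 0.8), (2, 0.5), (3, 0.3), (4, 0.1)], 0.3)
```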
• in other words, the multiple pieces of rendering data corresponding to the first object may be 3D model data at multiple layers, and the fineness of the 3D model data at different layers is not the same.
• optionally, the first piece of rendering data among the multiple pieces of rendering data corresponding to the first object includes multiple sets of rendering data, each set of rendering data is used for rendering to obtain the first object, and the fineness of the multiple sets of rendering data is different; the first rendering data is any piece of rendering data among the multiple pieces of rendering data.
• that is, the multiple pieces of rendering data corresponding to the first object may be 3D model data at multiple layers, and the multiple sets of rendering data included in each piece of rendering data may be 3D model data of different lightweight levels under the same layer.
• optionally, the multiple pieces of rendering data are obtained after multiple patch reduction passes are performed on the original rendering data of the first object, where one piece of rendering data is obtained each time a patch reduction pass is performed; the multiple sets of rendering data included in the first rendering data are obtained by performing the same number of patch reduction passes with different factors on the original rendering data of the first object.
• that is, the multiple pieces of rendering data corresponding to the first object are obtained after different numbers of patch reduction passes, and the multiple sets of rendering data in each piece of rendering data are obtained based on patch reduction passes with different factors. In this way, the fineness of each piece among the multiple pieces of rendering data is different, and the fineness of the multiple sets of rendering data within the same piece is also different.
• in this case, the first electronic device may select one piece of rendering data from the multiple pieces of rendering data according to the distance and the fineness of the multiple pieces of rendering data; then, the first electronic device selects one set of rendering data from the multiple sets of rendering data in the selected piece according to the performance index and/or network state of the second electronic device as the rendering data of the first object.
• that is, the first electronic device can determine one of the layers from the rendering data of multiple layers according to the distance between the first object and the first observation point, and then, based on the performance index and/or network state of the second electronic device, further determine the rendering data of one of the lightweight levels from the multiple lightweight levels in that layer, so as to finally obtain the rendering data that needs to be sent to the second electronic device.
  • the performance index of the second electronic device may include one or more of frame rate, rendering delay and temperature parameters of the second electronic device.
• for example, when the frame rate of the second electronic device is high, it means that the performance of the second electronic device is high, and rendering data with a lower lightweight level can be selected, that is, rendering data with higher fineness can be selected from the multiple sets of rendering data.
• for another example, when the temperature of the second electronic device is high, it means that the power consumption of the second electronic device is high, and rendering data with a higher lightweight level can be selected, that is, rendering data with lower fineness can be selected from the multiple sets of rendering data, so as to reduce the rendering consumption of the second electronic device.
• the network state of the second electronic device may include one or more of bandwidth and network delay of the second electronic device. For example, when the bandwidth of the second electronic device is high, it means that the second electronic device can receive more data per unit time, so rendering data with a lower lightweight level can be selected, that is, rendering data with higher fineness can be selected from the multiple sets of rendering data.
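As a hedged sketch of this two-stage selection, the snippet below first maps distance to a layer index and then uses frame rate and bandwidth to pick a lightweight level within that layer. The distance bucketing (100 units per layer), the thresholds (60 fps, 50 Mbps), and the data layout are illustrative assumptions; the patent only states that a piece is chosen by distance and a set by performance and/or network state.

```python
# Hypothetical two-stage selection: pick a layer (piece) by distance, then a
# lightweight level (set) by device performance and bandwidth.

def select_rendering_data(pieces, distance, frame_rate, bandwidth_mbps):
    # Stage 1: farther objects get coarser pieces; pieces[0] is the finest.
    idx = min(int(distance // 100), len(pieces) - 1)
    piece = pieces[idx]  # piece is a list of sets, finest first
    # Stage 2: a capable device and link get the finer set; otherwise the
    # coarser set is chosen to cut rendering and transfer load.
    if frame_rate >= 60 and bandwidth_mbps >= 50:
        return piece[0]   # higher fineness (lower lightweight level)
    return piece[-1]      # lower fineness (higher lightweight level)

pieces = [["L0-fine", "L0-coarse"], ["L1-fine", "L1-coarse"], ["L2-fine", "L2-coarse"]]
print(select_rendering_data(pieces, distance=150, frame_rate=60, bandwidth_mbps=100))
# distance 150 -> layer index 1; capable device -> "L1-fine"
```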
  • Step 1303 the first electronic device sends the first target data to the second electronic device.
• after the first electronic device determines the first target data according to the first data request, it sends the first target data to the second electronic device, so that the second electronic device performs the rendering of the 3D model based on the first target data.
  • the first target data determined by the first electronic device includes data used by the second electronic device for real-time rendering.
  • the first target data determined by the first electronic device may also include buffer data.
• the second electronic device can use the received buffer data to perform rendering when the position of the observation point in the three-dimensional model changes or the state of the device changes, so as to avoid re-obtaining new data from the first electronic device, thereby improving the rendering speed.
  • the first target data further includes buffer data
• the buffer data includes N pieces of rendering data corresponding to the first object, where N is an integer greater than 1; the N pieces of rendering data are all used to perform the rendering of the first object, and the finenesses of the N pieces of rendering data are different. That is to say, the first target data includes both rendering data and buffer data: the rendering data is one piece of rendering data corresponding to the first object, the buffer data is N pieces of rendering data corresponding to the first object, and both the rendering data and the buffer data can be used to render the same object.
• after receiving the first target data, the second electronic device renders the 3D model in real time based on the rendering data in the first target data, and the buffer data in the first target data is used to perform the rendering of the three-dimensional model when the position of the observation point changes or the state of the second electronic device changes.
• specifically, the first electronic device may select N pieces of rendering data from the multiple pieces of rendering data with different finenesses as the buffer data to be sent to the second electronic device.
  • the data of the three-dimensional model includes M pieces of rendering data corresponding to the first object, the M pieces of rendering data have different finenesses, and the M is an integer greater than the N.
  • the first electronic device may determine the distance between the first object in the three-dimensional model and the position of the first observation point according to the first data request, and the first data request includes the first An observation point position.
• the first electronic device determines one piece of rendering data from the multiple pieces of rendering data as the target rendering data; the fineness of the target rendering data meets the fineness requirement of the first object.
  • the fineness requirement of the first object is related to the distance. Specifically, the farther the distance between the first object and the first observation point is, the lower the fineness requirement of the first object is, and the lower the fineness of the determined target rendering data is; The closer the distance between the first object and the first observation point is, the higher the fineness requirement of the first object is, and the higher the fineness of the determined target rendering data is.
  • the first electronic device selects N pieces of rendering data from the M pieces of rendering data as the buffer data, and the fineness of the N pieces of rendering data is less than or equal to the fineness of the target rendering data. That is to say, after the target rendering data is determined, rendering data whose fineness is smaller than or equal to the fineness of the target rendering data may be selected from the M pieces of rendering data as buffer data.
• in this way, when the position of the observation point changes (for example, the observation point moves farther and farther away from the object), the second electronic device can quickly select data of the corresponding fineness from the buffer data as the data for real-time rendering, which avoids re-obtaining new data from the first electronic device, thereby improving the rendering speed.
• when the second electronic device's own device state changes, for example, when its temperature rises, it can select data with lower fineness from the buffered data as the data for real-time rendering, thereby reducing the rendering power consumption of the second electronic device and ensuring that the temperature of the second electronic device stays within a reasonable range.
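The target-then-buffer selection above can be sketched as follows, with each piece of rendering data tagged by an illustrative fineness number. The mapping from distance to the fineness requirement, the tie-breaking rules, and the piece names are assumptions for illustration only.

```python
# Hedged sketch: from M pieces of rendering data, pick the target piece whose
# fineness just meets the distance-based requirement, then buffer up to N
# pieces whose fineness is less than or equal to the target's fineness.

def pick_target_and_buffer(pieces, distance, n):
    """pieces: list of (name, fineness) tuples, finest first."""
    required = max(1, 100 // max(distance, 1))  # farther -> lower requirement
    candidates = [p for p in pieces if p[1] >= required]
    # Target = coarsest piece that still meets the requirement, falling back
    # to the finest available piece if nothing qualifies.
    target = (min(candidates, key=lambda p: p[1]) if candidates
              else max(pieces, key=lambda p: p[1]))
    # Buffer data: fineness <= target fineness (which includes the target).
    buffered = [p for p in pieces if p[1] <= target[1]][:n]
    return target, buffered

pieces = [("p100", 100), ("p50", 50), ("p25", 25), ("p10", 10)]
target, buffered = pick_target_and_buffer(pieces, distance=2, n=2)
print(target, buffered)  # ('p50', 50) [('p50', 50), ('p25', 25)]
```

With the viewpoint this close (distance 2), the requirement works out to 50, so the coarsest qualifying piece is chosen and only equal-or-coarser pieces are buffered.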
• the above describes the process of the first electronic device sending rendering data and buffer data to the second electronic device based on a data request. It can be understood that, since the position of the observation point in the 3D model run by the second electronic device may change in real time, the second electronic device often needs to send a new data request to the first electronic device in real time to request new data for rendering the 3D model. However, in most cases, the position of the observation point in the 3D model run by the second electronic device changes slowly, so part of the data previously received by the second electronic device can still continue to be used for the rendering of the 3D model. In fact, the first electronic device only needs to send the part of the data that the second electronic device has not yet received.
  • the above-mentioned method 1300 may further include the following multiple steps.
• after sending the first target data to the second electronic device, the first electronic device receives a second data request from the second electronic device; the second data request may include a second observation point position.
  • the position of the second viewing point is a position in the 3D model running in the second electronic device.
  • the position of the second observation point is different from the position of the first observation point above, that is, the position of the observation point of the three-dimensional model running in the second electronic device has changed.
  • the first electronic device determines second target data from the data of the three-dimensional model according to the second data request, and the second target data is related to the position of the second observation point. Similarly, the first electronic device may determine the second target data in the data of the three-dimensional model based on the second viewpoint position.
  • the process of determining the second target data by the first electronic device is similar to the process of determining the first target data, for details, reference may be made to the above-mentioned step 1302 , which will not be repeated here.
  • the first electronic device determines third target data according to the first target data and the second target data, and generates first information.
  • the first target data does not include the third target data
  • the second target data includes the third target data, that is, the third target data is newly determined data based on the position of the second observation point.
  • the first information is used to indicate the fourth target data to be deleted, the first target data includes the fourth target data, and the second target data does not include the fourth target data, that is, the fourth target Data is data that is no longer needed.
  • the first electronic device sends the third target data and the first information to the second electronic device.
• in this way, the second electronic device can receive the data newly determined based on the position of the second observation point (that is, the third target data); and, according to the first information, the second electronic device can delete the previously received fourth target data.
• the first electronic device determines the data required by the second electronic device based on the new data request, compares the determined data with the data previously sent to the second electronic device, and selects the newly added data and the data that is no longer needed, so as to send only the new data to the second electronic device and instruct the second electronic device to delete the data that is no longer needed. This enables the multiplexing of data in the second electronic device and avoids sending a large amount of repeated data to the second electronic device.
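Computed over sets of data identifiers, the incremental update in the steps above reduces to two set differences. A minimal sketch, with hypothetical tile names standing in for rendering data identifiers:

```python
# The third target data is what the new request needs but was not sent before;
# the fourth target data is what was sent before but is no longer needed
# (indicated for deletion via the first information).

def diff_target_data(first_target: set, second_target: set):
    third = second_target - first_target   # newly required -> send
    fourth = first_target - second_target  # obsolete -> instruct deletion
    return third, fourth

sent_before = {"tileA", "tileB", "tileC"}   # first target data
needed_now = {"tileB", "tileC", "tileD"}    # second target data
to_send, to_delete = diff_target_data(sent_before, needed_now)
print(sorted(to_send), sorted(to_delete))   # ['tileD'] ['tileA']
```

Only `tileD` crosses the network; `tileB` and `tileC` are reused in place, which is exactly the multiplexing described above.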
  • the second electronic device carries the position of the observation point in the data request sent to the first electronic device, so that the first electronic device determines the data to be sent to the second electronic device based on the position of the observation point.
  • the second electronic device may also determine the data to be used based on the position of the observation point in the three-dimensional model, and send a data index to the first electronic device to request the data to be used. That is, the process of determining the target data based on the position of the observation point is performed in the second electronic device.
  • the first data request may include a data index
  • the data index is used to indicate the first target data
• the data index is determined by the second electronic device according to the position of the first observation point. Specifically, after the second electronic device determines the first target data according to the position of the first observation point, it can obtain the data index of the first target data and carry the data index in the first data request, so as to request the first electronic device to send the first target data.
• when the first electronic device obtains the multi-layer, multi-level three-dimensional model data and the tree structure generated based on the above-mentioned method for generating three-dimensional model data, after the first electronic device obtains the data request, it can determine the target data corresponding to the data request by traversing each node in the tree structure. Specifically, the first electronic device may calculate the distance between the position of the observation point carried in the data request and the object represented by each node, and combine this with the fineness of the 3D model data corresponding to each node to determine whether the data corresponding to each node can be used as target data.
• a rendering ball and a buffering ball can be introduced into the second electronic device, wherein the rendering ball is a data buffer for storing 3D model data directly used for rendering in the second electronic device, and the buffering ball is a data buffer for storing 3D model data available for optional rendering in the second electronic device.
  • the rendering ball is used to store the rendering data described in the above embodiments
  • the buffer ball is used to store the buffer data described in the above embodiments.
  • FIG. 14 is a schematic flowchart of determining three-dimensional model data provided by an embodiment of the present application. As shown in FIG. 14 , the process of determining three-dimensional model data includes the following steps 1401-1413.
  • Step 1401 based on device parameters/user-defined parameters, determine the rendering ball/buffer ball level, and the lightweight level.
  • the first electronic device may obtain device parameters fed back by the second electronic device or user-defined parameters.
  • the device parameters may include performance parameters such as frame rate, rendering delay, and temperature, and network parameters such as bandwidth and network delay.
  • the first electronic device may determine the level of the rendering ball and the buffering ball in the second electronic device, as well as the lightweight level of the data to be sent.
• for example, the first electronic device may determine according to the device parameters that the levels of the rendering ball and the buffering ball in the second electronic device are both three layers, that is, the second electronic device includes three layers of rendering balls and three layers of buffering balls, where different layers of the rendering ball and the buffering ball can be used to store data of different lightweight levels. Since the 3D model data acquired by the first electronic device is multi-layer and multi-level, the first electronic device can determine three of the multiple lightweight levels of the 3D model data as the lightweight levels of the data to be sent.
  • the first electronic device may determine according to the device parameters that the levels of the rendering ball and the buffering ball in the second electronic device are both 1 layer, that is, the second electronic device only includes 1 layer of rendering ball and 1 layer of buffering ball. Therefore, the first electronic device may determine one of the multiple lightweight levels of the three-dimensional model data as the lightweight level of the data to be sent.
  • the user-defined parameters can also directly specify the level of the rendering ball and the buffering ball, and the lightweight level.
• in this case, the first electronic device can determine the level of the rendering ball and the buffering ball, as well as the lightweight level, by analyzing the user-defined parameters.
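A hedged sketch of step 1401: device parameters (or user-defined parameters) decide how many rendering/buffering ball layers the second device gets and which lightweight levels will be served. The frame-rate and bandwidth thresholds, the one-layer fallback, and the "finest first" ordering are all assumptions for illustration.

```python
# Hypothetical mapping from device parameters to the number of ball layers
# and the lightweight levels chosen from those the model data provides.

def plan_levels(frame_rate, bandwidth_mbps, available_levels):
    # A capable device and link get three layers of balls; otherwise one.
    layers = 3 if frame_rate >= 60 and bandwidth_mbps >= 50 else 1
    # Serve one lightweight level per ball layer, finest first.
    return layers, available_levels[:layers]

print(plan_levels(60, 100, ["lvl0", "lvl1", "lvl2", "lvl3"]))
# (3, ['lvl0', 'lvl1', 'lvl2'])
print(plan_levels(30, 100, ["lvl0", "lvl1"]))
# (1, ['lvl0'])
```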
  • Step 1402 obtaining the position of the observation point in the 3D model.
  • the first electronic device may acquire the position of the observation point in the three-dimensional model based on the data request sent by the second electronic device.
  • Step 1403 find the next node i in the tree structure.
  • the tree structure is used for indexing the generated multi-level and multi-level 3D model data, and each node in the tree structure can represent a set of patches of each level in the 3D model data.
  • Step 1404 judge whether the rough value f(i, p) corresponding to node i is less than or equal to threshold 1.
• the rough value f(i, p) corresponding to node i may refer to the coarseness of the 3D model data indicated by node i: the larger f(i, p) is, the coarser the result obtained by rendering the 3D model data corresponding to node i.
• E represents the rendering error of the 3D model data corresponding to node i.
• the higher the fineness of the 3D model data corresponding to node i, that is, the lower the level in the tree structure at which node i is located, the smaller E is.
• R is related to the distance between the object represented by the 3D model data corresponding to node i and the position of the observation point; the farther the distance is, the larger R is. That is to say, the lower the level of node i is, and the farther the distance between the object represented by the 3D model data corresponding to node i and the position of the observation point is, the smaller f(i, p) is.
  • the threshold 1 can be expressed as ⁇ 1 , that is, it is judged whether f(i, p) satisfies f(i, p) ⁇ 1 .
  • ⁇ 1 can be understood as the radius of the buffer ball, and ⁇ 1 can be determined according to the performance of the second electronic device. For example, when the performance of the second electronic device is high, the value of ⁇ 1 may be larger; when the performance of the second electronic device is poor, the value of ⁇ 1 may be small.
  • R may also be related to the line-of-sight range of the observation point position, and the closer the object is located in the line-of-sight range, the greater R is.
• that is, the above R can be determined based on both the distance between the object and the observation point and the relationship between the object and the sight range of the observation point.
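The text above only fixes the monotonic behavior of f(i, p): it grows with the rendering error E (coarser data) and shrinks as the distance-related factor R grows. One reading consistent with those statements is f = E / R, used in the sketch below purely as an assumption; the actual formula is not specified here, and the linear form of R is likewise illustrative.

```python
# Hedged sketch of the rough value: E(i) is the rendering error (smaller for
# finer data) and R(i, p) grows with the distance from viewpoint p, so
# f = E / R shrinks as data gets finer or the object gets farther away.

def rough_value(render_error: float, distance: float) -> float:
    R = 1.0 + distance          # farther -> larger R (assumed form)
    return render_error / R     # finer data or farther object -> smaller f

f_near_coarse = rough_value(render_error=8.0, distance=1.0)  # 4.0
f_near_fine = rough_value(render_error=2.0, distance=1.0)    # 1.0
f_far_coarse = rough_value(render_error=8.0, distance=7.0)   # 1.0
# Both finer data and greater distance lower f, matching the description.
assert f_near_fine < f_near_coarse and f_far_coarse < f_near_coarse
```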
  • Step 1405 if the rough value f(i, p) corresponding to node i is less than or equal to ⁇ 1 , add the 3D model data corresponding to node i into the buffer data corresponding to the buffer ball.
  • the 3D model data corresponding to node i may be regarded as buffer data that needs to be sent to the second electronic device.
  • Step 1406 if the rough value f(i, p) corresponding to node i is greater than ⁇ 1 , remove the 3D model data corresponding to node i from the buffer data corresponding to the buffer ball.
  • the 3D model data corresponding to node i can be regarded as data that does not need to be stored in the buffer ball of the second electronic device.
  • Step 1407 judging whether the tree structure has been traversed.
• if the tree structure has not been fully traversed, go to step 1403, that is, find the next node in the tree structure; if the tree structure has been fully traversed, go to step 1413, that is, return the change amount of the rendering ball and the change amount of the buffer ball.
  • Step 1408 find the next node i in the tree structure.
  • Step 1409 judge whether the rough value f(i, p) corresponding to node i is less than or equal to threshold 2.
  • the threshold 2 can be expressed as ⁇ 2 , that is, it is judged whether f(i, p) satisfies f(i, p) ⁇ 2 .
  • ⁇ 2 can be understood as the radius of the rendering sphere, ⁇ 2 is smaller than ⁇ 1 , that is, the radius of the rendering sphere is smaller than that of the buffer sphere, that is, the fineness of the rendering data is higher than that of the buffering data.
  • ⁇ 2 may be determined according to the performance of the second electronic device. For example, when the performance of the second electronic device is high, the value of ⁇ 2 may be larger; when the performance of the second electronic device is poor, the value of ⁇ 2 may be small.
  • Step 1410 if the rough value f(i, p) corresponding to node i is less than or equal to ⁇ 2 , add the 3D model data corresponding to node i to the rendering data corresponding to the rendering sphere.
  • the 3D model data corresponding to node i can be regarded as rendering data that needs to be sent to the second electronic device.
• specifically, the first electronic device may further determine whether the 3D model data corresponding to node i has been added to the buffer data corresponding to the buffer ball, and may determine whether the rough value corresponding to the parent node of node i is greater than threshold 2. If node i satisfies the above three conditions, the 3D model data corresponding to node i can be added to the rendering data corresponding to the rendering ball.
  • Step 1411 if the roughness value f(i, p) corresponding to node i is greater than ⁇ 2 , remove the 3D model data corresponding to node i from the rendering data corresponding to the rendering sphere.
  • the 3D model data corresponding to node i can be regarded as data that does not need to be stored in the rendering sphere of the second electronic device.
  • Step 1412 judging whether the tree structure has been traversed.
• if the tree structure has not been fully traversed, go to step 1408, that is, find the next node in the tree structure; if the tree structure has been fully traversed, proceed to step 1413, that is, return the change amount of the rendering ball and the change amount of the buffer ball.
  • steps 1408-1412 may be executed synchronously with the above-mentioned steps 1403-1407, that is, the rendering data corresponding to the rendering ball and the buffering data corresponding to the buffering ball are determined synchronously.
  • Step 1413 return the change amount of the rendering ball and the change amount of the buffer ball.
• based on the above process, the first electronic device can determine the 3D model data that needs to be added to the rendering data and the rendering data that needs to be removed from the rendering sphere, so the first electronic device can return the newly added rendering data and the old rendering data to be deleted to the second electronic device.
• similarly, the first electronic device can determine the 3D model data that needs to be added to the buffer data and the buffer data that needs to be removed from the buffer ball, so the first electronic device can return the newly added buffer data and the old buffer data to be deleted to the second electronic device.
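Steps 1403-1413 can be sketched as a single pass over the nodes that compares each rough value against the buffer radius tau1 and the render radius tau2 (tau2 < tau1) and accumulates the change amounts of both balls. The node representation, the `f` callback, and handling both balls in one loop (rather than the two parallel traversals of the figure) are simplifying assumptions.

```python
# Hedged sketch of the traversal: returns what to add to / remove from the
# buffer ball and the rendering ball, given the current contents of each.

def update_balls(nodes, f, tau1, tau2, buffer_ball: set, render_ball: set):
    buf_add, buf_remove, ren_add, ren_remove = set(), set(), set(), set()
    for i in nodes:
        fi = f(i)
        # Buffer ball (steps 1404-1406): looser threshold tau1.
        if fi <= tau1:
            if i not in buffer_ball:
                buf_add.add(i)
        elif i in buffer_ball:
            buf_remove.add(i)
        # Rendering ball (steps 1409-1411): tighter threshold tau2 < tau1.
        if fi <= tau2:
            if i not in render_ball:
                ren_add.add(i)
        elif i in render_ball:
            ren_remove.add(i)
    return buf_add, buf_remove, ren_add, ren_remove

rough = lambda i: {"a": 0.2, "b": 0.6, "c": 1.5}[i]
changes = update_balls(["a", "b", "c"], rough, tau1=1.0, tau2=0.4,
                       buffer_ball={"c"}, render_ball={"b"})
# adds a and b to the buffer ball, evicts c; adds a to the rendering ball,
# evicts b (its rough value exceeds the render radius).
```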
• after the second electronic device receives the above-mentioned buffer data and rendering data, the second electronic device traverses the rendering data, and uses a cascade coordinate transformation algorithm to convert the three-dimensional coordinates of the nodes in the rendering data to the corresponding dimensional space, so as to perform the corresponding rendering operation.
  • FIG. 15 is a schematic diagram of a rendering ball and a buffering ball provided in an embodiment of the present application.
  • the position of the observer is located at the center of the rendering sphere and the buffer sphere
  • the radius of the buffer sphere is ⁇ 1
  • the radius of the rendering sphere is ⁇ 2
  • ⁇ 1 is greater than ⁇ 2 .
  • the electronic device first calculates the rough value based on the fineness of the 3D model data and the distance between the object represented by the 3D model data and the position of the observation point, and then judges the relationship between the rough value and ⁇ 1 , ⁇ 2 to determine whether to add the 3D model data to the buffer ball or the rendering ball.
• the 3D model data included in the buffer sphere is more than the 3D model data in the rendering sphere, and when the position of the observation point shifts, the second electronic device can preferentially select the part of the data that meets the requirements from the buffer sphere and fill it into the rendering sphere, thereby avoiding re-obtaining new data from the first electronic device and improving the rendering speed.
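The refill described above might look like the following sketch: on a viewpoint shift, the second device first promotes qualifying data from the buffer sphere into the rendering sphere, and only data it cannot find locally would trigger a new request to the first device. The names and the rough-value lookup are assumptions.

```python
# Hedged sketch: promote buffer-sphere data whose rough value now satisfies
# the render radius tau2 into the rendering sphere.

def refill_render_sphere(buffer_sphere, render_sphere, rough_values, tau2):
    fetched_locally = []
    for item in list(buffer_sphere):
        if rough_values[item] <= tau2 and item not in render_sphere:
            render_sphere.add(item)
            fetched_locally.append(item)
    return fetched_locally  # anything missing would need a new data request

buffer_sphere = {"x", "y"}   # "y" is still too coarse for the render radius
render_sphere = set()
fetched = refill_render_sphere(buffer_sphere, render_sphere,
                               {"x": 0.1, "y": 0.9}, tau2=0.5)
print(fetched, render_sphere)  # ['x'] {'x'}
```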
  • the second electronic device may include multi-level rendering balls and multi-level buffering balls, and different levels of rendering balls and buffering balls may be used to store 3D model data of different lightweight levels.
• a node in the tree structure can also be used to represent 3D model data of different lightweight levels. Therefore, after the nodes to be added to the buffer ball and the rendering ball are determined through the above-mentioned embodiment, 3D model data of different lightweight levels can be stored through different levels of buffer balls and rendering balls.
  • Fig. 17 is a rendering schematic diagram when the position of the observation point changes according to the embodiment of the present application.
• as shown in Figure 17, in the process of rendering a complete earth on the second electronic device, when the observation point is located at a certain point on the earth, a certain piece of rock on a certain mountain on the earth can be rendered based on the data in the rendering sphere; when the position of the observation point gradually changes, the data in the buffer ball is gradually filled into the rendering ball, so that more scenery can be seen, until the whole picture of the entire earth's surface can be seen.
  • the frame rate can be kept stable without stuttering.
  • the rendering data is transmitted through network streaming, without occupying the local storage space of the rendering device.
• the data processing apparatus includes: a receiving unit 1801, a processing unit 1802, and a sending unit 1803; the receiving unit 1801 is configured to receive a first data request from a second electronic device, and the first data request includes the position of the first observation point; the processing unit 1802 is configured to determine the first target data from the data of the 3D model according to the first data request, and the first target data includes rendering data, where the rendering data includes rendering data of a first object and rendering data of a second object, wherein the fineness of the rendering data of the first object is higher than the fineness of the rendering data of the second object, and the distance between the first object and the first observation point is smaller than the distance between the second object and the first observation point; the sending unit 1803 is configured to send the first target data to the second electronic device.
  • the first target data includes rendering data for performing 3D model rendering
• the rendering data includes rendering data of a first object and rendering data of a second object, wherein the fineness of the rendering data of the first object is higher than the fineness of the rendering data of the second object, and the distance between the first object and the position of the first observation point is smaller than the distance between the second object and the position of the first observation point.
  • the data of the 3D model includes multiple pieces of rendering data corresponding to the first object, and the multiple pieces of rendering data have different fineness;
• the processing unit 1802 is further configured to: according to the first data request, determine the distance between the first object in the three-dimensional model and the position of the first viewpoint; and, according to the distance and the fineness of the multiple pieces of rendering data, select a piece of rendering data from the multiple pieces of rendering data as the rendering data of the first object, wherein the fineness of the rendering data of the first object has a negative correlation with the distance.
  • the first piece of rendering data in the multiple pieces of rendering data includes multiple sets of rendering data, and each set of rendering data in the multiple sets of rendering data is used for rendering to obtain the first object , and the fineness of the multiple sets of rendering data is different, and the first set of rendering data is any one of the multiple sets of rendering data.
• the multiple pieces of rendering data are obtained after multiple mesh reduction processes are performed on the original rendering data of the first object, and one piece of rendering data is obtained each time the mesh reduction process is performed.
• the multiple sets of rendering data included in the first piece of rendering data are obtained by performing the same number of mesh reduction passes, with different reduction ratios, on the original rendering data of the first object.
  • the processing unit 1802 is further configured to: select one piece of rendering data from the multiple pieces of rendering data according to the distance and the fineness of the multiple pieces of rendering data; The performance index and/or network state of the second electronic device, and select a set of rendering data from multiple sets of rendering data of the selected rendering data as the rendering data of the first object.
  • the performance index includes one or more of the frame rate, rendering delay, and temperature parameters of the second electronic device; the network status includes bandwidth and One or more of network delays.
  • the first target data further includes buffer data
• the buffer data includes N pieces of rendering data corresponding to the first object, where N is an integer greater than 1; the N pieces of rendering data are all used to perform the rendering of the first object, and the N pieces of rendering data have different finenesses.
  • the data of the three-dimensional model includes M pieces of rendering data corresponding to the first object, and the fineness of the M pieces of rendering data is different, and the M is an integer greater than the N
  • the processing unit 1802 is further configured to: according to the first data request, determine the distance between the first object in the three-dimensional model and the position of the first observation point; according to the distance, from the Determining one piece of rendering data among multiple pieces of rendering data as the target rendering data of the first object, the fineness of the target rendering data meets the fineness requirement of the first object, and the fineness requirement of the first object is consistent with The distance is related; select N pieces of rendering data from the M pieces of rendering data as the buffer data, and the fineness of the N pieces of rendering data is less than or equal to the fineness of the target rendering data.
  • the first data request includes a data index
  • the data index is used to indicate the first target data
• the data index is determined by the second electronic device according to the position of the first observation point.
  • the receiving unit 1801 is further configured to receive a second data request from the second electronic device; the processing unit 1802 is further configured to, according to the second data request, Determine the second target data from the data of the three-dimensional model, the second target data is related to the position of the second observation point in the three-dimensional model running in the second electronic device; the processing unit 1802 is further configured to The first target data and the second target data determine third target data, and generate first information, wherein the first target data does not include the third target data, and the second target data includes the The third target data, the first information is used to indicate the fourth target data to be deleted, the first target data includes the fourth target data, and the second target data does not include the fourth target data Data; the sending unit 1803 is further configured to send the third target data and the first information to the second electronic device.
  • the data generation device includes: an acquisition unit 1901 and a processing unit 1902; the acquisition unit 1901 is configured to acquire a 3D model file, and the 3D model file includes a plurality of meshes for constituting a 3D model;
  • the processing unit 1902 is configured to: divide the plurality of patches into a plurality of first patch sets, where each of the plurality of first patch sets includes a plurality of connected patches and the number of patches in each first patch set is the same; divide the plurality of first patch sets into a plurality of first patch set groups, where each first patch set group includes a plurality of first patch sets and the plurality of first patch set groups are used to form a first three-dimensional model; and perform patch reduction processing on each first patch set group in the plurality of first patch set groups to obtain a plurality of second patch set groups with a reduced number of patches, where the plurality of second patch set groups are used to form a second three-dimensional model.
  • the processing unit 1902 is further configured to perform patch reduction processing on each first patch set group in the plurality of first patch set groups to obtain a plurality of third patch set groups with a reduced number of patches, where the plurality of third patch set groups are used to form a third three-dimensional model, and the number of patches in each third patch set in the plurality of third patch sets is different from the number of patches in each second patch set in the plurality of second patch sets.
  • the number of patches in each of the second patch sets is 1/N of the number of patches in each of the first patch sets, where N is an integer greater than 1;
  • the number of patches in each of the third patch sets is 1/M of the number of patches in each of the first patch sets, where M is an integer greater than N.
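The partitioning and the 1/N, 1/M reduction ratios described above can be sketched as follows. A faithful implementation would keep each set connected on the mesh surface; this sketch only reproduces the counts, and `set_size`, `sets_per_group`, `n`, and `m` are illustrative parameters:

```python
def partition_patches(patches, set_size, sets_per_group):
    """Split a flat patch list into equal-size sets, then group the sets.

    Connectivity is ignored here; only the set and group sizes described
    in the text are reproduced.
    """
    sets = [patches[i:i + set_size]
            for i in range(0, len(patches), set_size)]
    groups = [sets[i:i + sets_per_group]
              for i in range(0, len(sets), sets_per_group)]
    return groups

def lod_patch_counts(patches_per_set, n, m):
    """Patch counts per set at the three levels: full, 1/N, and 1/M (M > N)."""
    return [patches_per_set, patches_per_set // n, patches_per_set // m]
```

For example, 12 patches with `set_size=3` and `sets_per_group=2` yield 4 sets in 2 groups, and `lod_patch_counts(1200, 4, 16)` gives per-set counts of 1200, 300, and 75 for the first, second, and third patch sets.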
  • the processing unit 1902 is further configured to generate a tree structure according to the plurality of first patch sets and the plurality of second patch sets, where the tree structure includes a plurality of first nodes and a plurality of second nodes, the plurality of first nodes respectively correspond to the plurality of first patch sets, the plurality of second nodes respectively correspond to the plurality of second patch sets, and each node in the plurality of second nodes is connected to a plurality of nodes in the plurality of first nodes.
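A minimal sketch of that tree, linking each second node (coarse patch set) to the first nodes (fine patch sets) it stands in for. The fixed fan-out and the set names are assumptions for illustration:

```python
def build_lod_tree(first_sets, second_sets, children_per_parent):
    """Map each coarse (second) patch set to the fine (first) patch sets
    it replaces, giving the parent-child links of the tree structure."""
    tree = {}
    for i, coarse in enumerate(second_sets):
        start = i * children_per_parent
        tree[coarse] = first_sets[start:start + children_per_parent]
    return tree
```

With four fine sets, two coarse sets, and a fan-out of 2, each coarse node gets two fine children; at render time a coarse node can be swapped for its children when more detail is required.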
  • each of the plurality of patches is composed of a plurality of vertices; the processing unit 1902 is specifically configured to: determine a plurality of target vertices in a first patch set group, where the first patch set group is any patch set group in the plurality of first patch set groups; and lock the plurality of target vertices and regenerate a plurality of patches to obtain a second patch set, where the number of patches in the second patch set is smaller than the number of patches in the first patch set, and the second patch set is a patch set in the plurality of second patch sets.
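The lock-and-regenerate step can be illustrated with a single pass of edge collapse that never moves a locked vertex. This is a simplified stand-in for the patent's re-meshing: a real patch reducer would choose collapses by a geometric error metric, whereas here the first legal edge wins.

```python
def decimate_once(triangles, locked):
    """One edge-collapse pass that preserves locked vertices.

    `triangles` is a list of 3-tuples of vertex ids; `locked` is the set of
    locked (e.g. set-boundary) vertex ids that must survive unchanged.
    """
    remap, used = {}, set()
    for tri in triangles:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            # Collapse unlocked vertex a onto b; `used` blocks chained collapses.
            if a not in locked and a not in used and b not in used:
                remap[a] = b
                used.update((a, b))
                break
    reduced = []
    for tri in triangles:
        t = tuple(remap.get(v, v) for v in tri)
        if len(set(t)) == 3:  # drop triangles made degenerate by a collapse
            reduced.append(t)
    return reduced
```

For example, a fan of three triangles around unlocked vertex 0, with the boundary vertices {1, 2, 3, 4} locked, collapses to two triangles whose vertices are all locked ones.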
  • FIG. 20 is a schematic structural diagram of the electronic device provided by the embodiment of the present application; the electronic device may be, for example, a smart wearable device or a server, which is not limited here.
  • the data processing apparatus described in the embodiment corresponding to FIG. 11 may be deployed on the electronic device 2000 to implement the data processing functions of the embodiment corresponding to FIG. 11.
  • the electronic device 2000 includes a receiver 2001, a transmitter 2002, a processor 2003, and a memory 2004 (the electronic device 2000 may include one or more processors 2003; one processor is taken as an example in FIG. 20), where the processor 2003 may include an application processor 20031 and a communication processor 20032.
  • the receiver 2001 , the transmitter 2002 , the processor 2003 and the memory 2004 may be connected through a bus or in other ways.
  • the memory 2004 may include read-only memory and random-access memory, and provides instructions and data to the processor 2003 .
  • a part of the memory 2004 may also include a non-volatile random access memory (non-volatile random access memory, NVRAM).
  • the memory 2004 stores operating instructions, executable modules, or data structures, or subsets or extended sets thereof, where the operating instructions may include various operating instructions for implementing various operations.
  • the processor 2003 controls the operation of the electronic device.
  • various components of an electronic device are coupled together through a bus system, where the bus system may include a power bus, a control bus, and a status signal bus in addition to a data bus.
  • for clarity of illustration, the various buses are referred to as the bus system in the figure.
  • the methods disclosed in the foregoing embodiments of the present application may be applied to the processor 2003 or implemented by the processor 2003 .
  • the processor 2003 may be an integrated circuit chip, which has a signal processing capability. In the implementation process, each step of the above method can be completed by an integrated logic circuit of hardware in the processor 2003 or instructions in the form of software.
  • the above-mentioned processor 2003 may be a general-purpose processor, a digital signal processor (DSP), a microprocessor, or a microcontroller, and may further include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor 2003 may implement or execute various methods, steps, and logic block diagrams disclosed in the embodiments of the present application.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium mature in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or register.
  • the storage medium is located in the memory 2004, and the processor 2003 reads the information in the memory 2004, and completes the steps of the above method in combination with its hardware.
  • the receiver 2001 may be used to receive input digital or character information and to generate signal inputs related to the settings and function control of the electronic device.
  • the transmitter 2002 may be used to output digital or character information through a first interface; the transmitter 2002 may also be used to send instructions to a disk group through the first interface to modify data in the disk group; the transmitter 2002 may also include a display device such as a display screen.
  • the embodiment of the present application also provides a computer program product, which, when running on a computer, causes the computer to perform the steps performed by the aforementioned execution device, or enables the computer to perform the steps performed by the aforementioned training device.
  • An embodiment of the present application also provides a computer-readable storage medium storing a program for signal processing, which, when run on a computer, causes the computer to perform the steps performed by the aforementioned execution device, or causes the computer to perform the steps performed by the aforementioned training device.
  • the execution device or terminal device provided in the embodiment of the present application may specifically be a chip.
  • the chip includes: a processing unit and a communication unit.
  • the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin, or a circuit.
  • the processing unit may execute the computer-executable instructions stored in the storage unit, so that the chip in the execution device executes the method described in the above-mentioned embodiments.
  • the storage unit is a storage unit in the chip, such as a register, a cache, etc.
  • the storage unit may also be a storage unit located outside the chip in the wireless access device, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM).
  • the device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • the connection relationship between the modules indicates that they have communication connections, which can be specifically implemented as one or more communication buses or signal lines.
  • the essence of the technical solution of this application, or the part that contributes to the prior art, may be embodied in the form of a software product; the computer software product is stored in a readable storage medium, such as a computer floppy disk, USB flash drive, removable hard disk, ROM, RAM, magnetic disk, or optical disc, and includes several instructions to cause a computer device (which may be a personal computer, training device, network device, etc.) to execute the methods described in the embodiments of this application.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • When implemented using software, the embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, training device, or data center to another website, computer, training device, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a training device or data center integrated with one or more available media.
  • the available medium may be a magnetic medium (such as a floppy disk, hard disk, or magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

Disclosed is a three-dimensional model data processing method, applied to the technical field of three-dimensional model rendering. The method comprises the following steps: a first electronic device receives a first data request from a second electronic device, the first data request comprising a first observation point position (1301); the first electronic device determines first target data from data of a three-dimensional model according to the first data request (1302), the first target data being related to the first observation point position in the three-dimensional model running in the second electronic device; and the first electronic device sends the first target data to the second electronic device (1303). On the basis of this method, the efficiency of devices in rendering a large-scale scene can be ensured, and stuttering of the three-dimensional application on the devices does not occur.
PCT/CN2022/125053 2021-10-22 2022-10-13 Three-dimensional model data processing method, three-dimensional model data generation method, and related apparatuses WO2023066122A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111236216.4A CN116012506A (zh) 2021-10-22 2021-10-22 Three-dimensional model data processing method, generation method, and related apparatuses
CN202111236216.4 2021-10-22

Publications (1)

Publication Number Publication Date
WO2023066122A1 true WO2023066122A1 (fr) 2023-04-27

Family

ID=86021726

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/125053 WO2023066122A1 (fr) 2021-10-22 2022-10-13 Three-dimensional model data processing method, three-dimensional model data generation method, and related apparatuses

Country Status (2)

Country Link
CN (1) CN116012506A (fr)
WO (1) WO2023066122A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013097210A1 * 2011-12-31 2013-07-04 华为技术有限公司 Online rendering method, offline rendering method, and related device based on a cloud application
CN107590858A * 2017-08-21 2018-01-16 上海妙影医疗科技有限公司 Medical sample display method based on AR technology, computer device, and storage medium
CN108109191A * 2017-12-26 2018-06-01 深圳创维新世界科技有限公司 Rendering method and system
CN110443893A * 2019-08-02 2019-11-12 广联达科技股份有限公司 Large-scale building scene rendering acceleration method, system, apparatus, and storage medium
CN110738721A * 2019-10-12 2020-01-31 四川航天神坤科技有限公司 Three-dimensional scene rendering acceleration method and system based on video geometric analysis

Also Published As

Publication number Publication date
CN116012506A (zh) 2023-04-25

Similar Documents

Publication Publication Date Title
KR101692193B1 Crowdsourced video rendering system
US11288857B2 (en) Neural rerendering from 3D models
EP2807587B1 (fr) Détermination d'informations de modèle 3d à partir d'images stockées
CN110706341B (zh) 一种城市信息模型的高性能渲染方法、装置及存储介质
KR101130407B1 (ko) 향상된 그래픽 파이프라인을 제공하는 시스템 및 방법
CN107533771B (zh) 通过3d模型重建进行网格简化
JP2017199354A Rendering global illumination of a 3D scene
US11887241B2 (en) Learning 2D texture mapping in volumetric neural rendering
US11418769B1 (en) Viewport adaptive volumetric content streaming and/or rendering
CN114100118A (zh) 基于网络状况的动态图像平滑
WO2023066122A1 (fr) Three-dimensional model data processing method, three-dimensional model data generation method, and related apparatuses
US11948338B1 (en) 3D volumetric content encoding using 2D videos and simplified 3D meshes
US20240177399A1 (en) Learning 2d texture mapping in volumetric neural rendering
JP7419522B2 Volume data reduction while preserving visual fidelity
US20230267650A1 (en) Data compression
CN116012532A Real-scene three-dimensional model lightweighting method and system
CN114429512A Fusion display method and apparatus for a coal preparation plant BIM model and a real-scene three-dimensional model
Terrace Content Conditioning and Distribution for Dynamic Virtual Worlds
Azim Simplification Algorithms for Large Virtual Worlds
Shegeda A GPU-based Framework for Real-time Free Viewpoint Television

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22882732

Country of ref document: EP

Kind code of ref document: A1