WO2021146930A1 - Display processing method, display processing apparatus, electronic device and storage medium
- Publication number
- WO2021146930A1 (PCT/CN2020/073556)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- display
- sub
- model
- models
- displayed
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/30—Polynomial surface description
- G06T19/00—Manipulating 3D models or images for computer graphics
Definitions
- The embodiments of the present disclosure relate to a display processing method, a display processing apparatus, an electronic device, and a storage medium.
- Health has always been an important topic. With the development of computer and communication technology, people hope to be able to monitor their health at any time, quickly determine which part of the body may have a problem, and take preventive measures as early as possible. Since the human body is a complex, integrated organic system, people first need a clear and accurate understanding of it; for example, existing anatomical knowledge and existing display technology can be used to help people understand the structure of the whole human body and of its various parts, thereby enabling monitoring of their own health status.
- At least one embodiment of the present disclosure provides a display processing method, including: obtaining multiple sub-models of a basic model of an object to be displayed, each sub-model including multiple faces; obtaining display environment parameters of the object to be displayed; based on the display environment parameters, for each sub-model of the multiple sub-models, determining the display detail level of the sub-model, and determining the number of display surfaces of the sub-model based on the determined display detail level of the sub-model; and, for each sub-model of the multiple sub-models, determining the display sub-model of the sub-model based on the determined number of display surfaces of the sub-model.
- For example, in the display processing method provided by some embodiments of the present disclosure, determining the display detail level of the sub-model, and determining the number of display surfaces of the sub-model based on the determined display detail level of the sub-model, includes: for each sub-model of the multiple sub-models, dividing the display detail level of the sub-model into a first display level and a second display level, where the second display level is greater than the first display level; when the sub-model is displayed at a first distance, using the first display level to determine the number of display surfaces of the sub-model; and when the sub-model is displayed at a second distance, using the second display level to determine the number of display surfaces of the sub-model; the first distance and the second distance represent the distance from the sub-model to the display screen, and the first distance is greater than the second distance.
- For example, in the display processing method provided by some embodiments of the present disclosure, the number of display surfaces of the sub-model determined using the first display level is less than the number of display surfaces of the sub-model determined using the second display level.
- The display processing method provided by at least one embodiment of the present disclosure further includes: importing the basic model of the object to be displayed into an image rendering engine, and splitting the basic model of the object to be displayed into the multiple sub-models through the image rendering engine.
- For example, in the display processing method provided by some embodiments of the present disclosure, the image rendering engine includes the Unreal 4 engine, and the display processing method further includes: obtaining the multiple sub-models of the object to be displayed from the Unreal 4 engine through a plug-in, and determining the display sub-models of the multiple sub-models respectively through the plug-in.
- For example, in the display processing method provided by some embodiments of the present disclosure, the sub-model includes a plurality of pieces of sub-texture mapping information, and the display processing method further includes: retaining the sub-texture mapping information of the sub-model that is consistent with the texture mapping information of the basic model, and deleting the other sub-texture mapping information except the retained sub-texture mapping information.
- The display processing method provided by at least one embodiment of the present disclosure further includes: for each sub-model of the multiple sub-models, modifying the name of the sub-model to be consistent with the name of the basic model corresponding to the sub-model.
- the display processing method provided by at least one embodiment of the present disclosure further includes: using the display sub-models of the multiple sub-models to display the to-be-displayed object.
- For example, in the display processing method provided by some embodiments of the present disclosure, the display sub-models of the multiple sub-models are imported into three-dimensional software; the display sub-models of the multiple sub-models are combined in the three-dimensional software to obtain the display model of the basic model; and the display model is displayed to display the object to be displayed.
- For example, in the display processing method provided by some embodiments of the present disclosure, the object to be displayed is a human body, and the basic model is a three-dimensional human body model.
- At least one embodiment of the present disclosure further provides a display processing device, including: a first obtaining unit configured to obtain multiple sub-models of a basic model of an object to be displayed, each sub-model including multiple faces; a second obtaining unit configured to obtain the display environment parameters of the object to be displayed; a display surface number determining unit configured to determine, based on the display environment parameters, the display detail level of each sub-model of the multiple sub-models, and to determine the number of display surfaces of the sub-model based on the determined display detail level of the sub-model; and a display sub-model determining unit configured to determine, for each sub-model of the multiple sub-models, the display sub-model of the sub-model based on the determined number of display surfaces of the sub-model.
- For example, in the display processing device provided by some embodiments of the present disclosure, the display surface number determining unit is further configured to: for each sub-model of the multiple sub-models, divide the display detail level of the sub-model into a first display level and a second display level, the second display level being greater than the first display level; when the sub-model is displayed at a first distance, use the first display level to determine the number of display surfaces of the sub-model; and when the sub-model is displayed at a second distance, use the second display level to determine the number of display surfaces of the sub-model; the first distance and the second distance represent the distance from the sub-model to the display screen, and the first distance is greater than the second distance.
- For example, in the display processing device provided by some embodiments of the present disclosure, the sub-model includes a plurality of pieces of sub-texture mapping information, and the display processing device further includes a texture mapping information determining unit configured to retain the sub-texture mapping information of the sub-model that is consistent with the texture mapping information of the basic model, and to delete the other sub-texture mapping information except the retained sub-texture mapping information.
- The display processing device provided by at least one embodiment of the present disclosure further includes a name determining unit configured to modify the name of each sub-model of the multiple sub-models to be consistent with the name of the basic model corresponding to the sub-model.
- the display processing device provided by at least one embodiment of the present disclosure further includes: a display unit configured to use the display sub-models of the multiple sub-models to display the to-be-displayed object.
- At least one embodiment of the present disclosure further provides a display processing device, including: a processor; a memory; and one or more computer program modules, where the one or more computer program modules are stored in the memory and configured to be executed by the processor, and the one or more computer program modules include instructions for executing the display processing method provided by any embodiment of the present disclosure.
- At least one embodiment of the present disclosure further provides an electronic device, including the display processing device provided by any embodiment of the present disclosure and a display screen; when an instruction to display the object to be displayed is received, the display screen is configured to receive the display sub-models of the multiple sub-models from the display processing device and to display them, so as to display the object to be displayed.
- At least one embodiment of the present disclosure further provides a storage medium that non-transitorily stores computer-readable instructions; when the computer-readable instructions are executed by a computer, the display processing method provided by any embodiment of the present disclosure can be executed.
- Figure 1A is an effect diagram of a display model after surface reduction;
- FIG. 1B is a flowchart of an example of a display processing method provided by at least one embodiment of the present disclosure
- FIG. 2A is a schematic diagram of a three-dimensional human body model provided by at least one embodiment of the present disclosure
- FIG. 2B is a flowchart of a process for determining the number of display surfaces of a sub-model provided by at least one embodiment of the present disclosure;
- Fig. 2C is a schematic diagram of an original model of a three-dimensional human body model
- FIG. 2D is a display model obtained by performing surface reduction on the three-dimensional human body model shown in FIG. 2C using the display processing method provided by at least one embodiment of the present disclosure;
- Figure 2E is a schematic diagram of the original model of the sphenoid bone of the human body
- FIG. 2F is a schematic diagram of the sphenoid bone shown in FIG. 2E after surface reduction processing using the display processing method provided by at least one embodiment of the present disclosure;
- FIG. 3 is a flowchart of another display processing method provided by at least one embodiment of the present disclosure.
- FIG. 4 is a flow chart of a method for displaying objects to be displayed according to at least one embodiment of the present disclosure
- FIG. 5A is a system flowchart of a display processing method provided by at least one embodiment of the present disclosure;
- FIG. 5B is a system flowchart of a specific implementation example of the display processing method shown in FIG. 5A;
- FIG. 6 is a schematic block diagram of a display processing apparatus provided by at least one embodiment of the present disclosure.
- FIG. 7 is a schematic block diagram of another display processing apparatus provided by at least one embodiment of the present disclosure.
- FIG. 8 is a schematic block diagram of still another display processing apparatus provided by at least one embodiment of the present disclosure.
- FIG. 9 is a schematic structural diagram of an electronic device provided by at least one embodiment of the present disclosure.
- FIG. 10 is a schematic diagram of a storage medium provided by at least one embodiment of the present disclosure.
- A three-dimensional human body model is usually created from polygons (for example, triangles), so the surface of the three-dimensional human body model created in this way includes multiple polygons. The more polygons the model contains, the more faces it has, the greater the amount of data, and the slower the processing speed of the system.
- To reduce the number of faces, the first method is to manually optimize the three-dimensional human body model directly in the three-dimensional software to achieve surface reduction; the second method is to set a preset value for the number of display surfaces of the three-dimensional human body model to realize automatic surface reduction in the three-dimensional software.
- Figure 1A is an effect diagram of a display model after surface reduction. If the number of faces of the three-dimensional human body model is reduced too much, the situation shown in Figure 1A appears: the parts of the three-dimensional human body model that are connected but not welded show gaps of different sizes.
- At least one embodiment of the present disclosure provides a display processing method, including: obtaining multiple sub-models of a basic model of an object to be displayed, each sub-model including multiple faces; obtaining display environment parameters of the object to be displayed; based on the display environment parameters, for each sub-model of the multiple sub-models, determining the display detail level of the sub-model, and determining the number of display surfaces of the sub-model based on the determined display detail level of the sub-model; and, for each sub-model of the multiple sub-models, determining the display sub-model of the sub-model based on the determined number of display surfaces of the sub-model.
- Some embodiments of the present disclosure also provide a display processing device and a storage medium corresponding to the above-mentioned display processing method.
- The display processing method provided by the above embodiments of the present disclosure can determine the number of display surfaces of each sub-model based on the display environment and the display detail level, so that while the number of display surfaces of each sub-model is reduced, the controllable data of each sub-model (for example, the display detail level) is increased, which improves the display effect of the object to be displayed, reduces the amount of data processed by the system, and reduces resource consumption.
- FIG. 1B is a flowchart of an example of a display processing method provided by at least one embodiment of the present disclosure.
- For example, the display processing method can be implemented in the form of software, hardware, firmware, or any combination thereof, and loaded and executed by a processor in a device such as a mobile phone, tablet computer, notebook computer, desktop computer, or network server, so that the number of display surfaces of the three-dimensional human body model is reduced while its controllable data is increased, thereby improving the display effect of the three-dimensional human body model.
- the display processing method is applicable to a computing device.
- For example, the computing device includes any electronic device with computing functions, such as a mobile phone, notebook computer, tablet computer, desktop computer, or network server, which can load and execute the display processing method; the embodiments of the present disclosure do not limit this.
- the computing device may include a central processing unit (CPU) or a graphics processing unit (Graphics Processing Unit, GPU) and other forms of processing units, storage units, etc. that have data processing capabilities and/or instruction execution capabilities.
- the computing device is also installed with an operating system, an application programming interface (for example, OpenGL (Open Graphics Library), Metal, etc.), etc., and the display processing method provided by the embodiment of the present disclosure is implemented by running code or instructions.
- For example, the computing device may also include a display component, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a quantum dot light-emitting diode (QLED) display, a projection component, or a VR head-mounted display device (for example, a VR helmet or VR glasses), which is not limited in the embodiments of the present disclosure.
- the display part can display the object to be displayed.
- the display processing method includes step S110 to step S140.
- Step S110 Obtain multiple sub-models of the basic model of the object to be displayed, and each sub-model includes multiple faces.
- Step S120 Obtain the display environment parameters of the object to be displayed.
- Step S130 Based on the display environment parameters, for each of the multiple sub-models, determine the display detail level of the sub-model, and determine the display surface number of the sub-model based on the determined display detail level of the sub-model.
- Step S140 For each sub-model of the plurality of sub-models, a display sub-model of the sub-model is determined based on the determined number of display surfaces of the sub-model.
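- As a rough illustration of steps S110-S140, the following Python sketch walks through the per-sub-model processing. The mesh representation, the `decimate` helper, the threshold, and the preset display levels are hypothetical placeholders used for illustration only, not the actual data structures or API of the disclosure.

```python
from dataclasses import dataclass

# Hypothetical, simplified mesh: a sub-model is just a named list of triangular faces.
@dataclass
class SubModel:
    name: str
    faces: list  # e.g. [(p0, p1, p2), ...]

# Preset display levels as fractions of the original number of surfaces; the example
# values (10% and 40%) are the ones quoted later in the description.
LOD1 = 0.10  # first display level, used for far-away (reduced) display
LOD2 = 0.40  # second display level, used for close-up (enlarged) display

def choose_display_level(distance_to_screen: float, threshold: float) -> float:
    """Step S130: pick the display detail level from the display environment parameters."""
    # First distance (far, greater than the threshold) -> lower level; second distance -> higher level.
    return LOD1 if distance_to_screen > threshold else LOD2

def decimate(faces: list, target_count: int) -> list:
    """Placeholder for a real mesh-decimation routine; truncation keeps the sketch runnable."""
    return faces[:target_count]

def build_display_sub_models(sub_models: list, distance_to_screen: float,
                             threshold: float = 100.0) -> list:
    display_sub_models = []
    for sm in sub_models:                                             # S110: sub-models of the basic model
        level = choose_display_level(distance_to_screen, threshold)   # S120/S130: environment -> level
        display_surface_count = max(1, round(level * len(sm.faces)))  # S130: number of display surfaces
        display_sub_models.append(                                    # S140: display sub-model
            SubModel(name=sm.name, faces=decimate(sm.faces, display_surface_count)))
    return display_sub_models
```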
- For example, the object to be displayed is a human body, and the basic model is a three-dimensional human body model (as shown in FIG. 2A).
- the object to be displayed may also be other animals, plants, objects, etc. that can be displayed, which is not limited in the embodiment of the present disclosure.
- the object to be displayed is a human body and the basic model is a three-dimensional model of the human body as an example for description, which is not limited in the embodiment of the present disclosure.
- For example, after the electronic device is turned on and enters the display interface for displaying human body information (for example, after the digital human body APP is opened), the optimized three-dimensional human body model (i.e., the display model) is displayed on the electronic device.
- For example, the basic model of the object to be displayed is imported into the image rendering engine (that is, the renderer, which includes, for example, a two-dimensional or three-dimensional image engine, such as one running on an image processor (GPU)), and the basic model of the object to be displayed is split into multiple sub-models through the image rendering engine.
- the image rendering engine provides the multiple sub-models to the first acquiring unit described below, so as to process the multiple sub-models in subsequent steps to determine the display sub-models of the multiple sub-models.
- the image rendering engine includes Unreal Engine 4 (UE4 for short). The following takes the image rendering engine as UE4 as an example for description, which is not limited in the embodiment of the present disclosure.
- the basic model of the object to be displayed can also be imported into other types of engines such as physics engine, script engine, or network engine for split processing according to the functions that need to be implemented, which is not limited in the embodiments of the present disclosure.
- For example, the basic model is a three-dimensional human body model; the three-dimensional human body model includes human body organ category information, human body system category information, or human body part category information.
- For example, when classified by organ, the three-dimensional human body model can include three-dimensional models of organs such as the heart, liver, spleen, lungs, and kidneys; when classified by human body system, it can include three-dimensional models of systems such as the circulatory, digestive, respiratory, reproductive, and immune systems; and when classified by body part, it can include three-dimensional models of local parts such as the head, chest, and upper and lower limbs.
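- Purely as an illustration of how such category information might be organized (the grouping and names below are examples based on the categories just listed, not data from the disclosure), a catalogue of sub-model groups could look like the following:

```python
# Hypothetical catalogue of three-dimensional human body sub-model groups,
# keyed by classification axis (organ / system / body part).
HUMAN_BODY_CATALOGUE = {
    "organ": ["heart", "liver", "spleen", "lung", "kidney"],
    "system": ["circulatory", "digestive", "respiratory", "reproductive", "immune"],
    "part": ["head", "chest", "upper_limb", "lower_limb"],
}
```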
- For example, the multiple sub-models of the three-dimensional model of the organ (for example, the liver) correspond one-to-one to the multiple elements into which it is split, that is, each split element forms one sub-model.
- each submodel includes multiple faces.
- As described above, the three-dimensional human body model is created from polygons (for example, triangles), so its surface includes multiple polygons. Therefore, the surface of each of the split sub-models also includes multiple polygons, that is, each sub-model includes multiple faces.
- For example, each sub-model is optimized and surface-reduced, that is, the number of display surfaces of each sub-model is determined in order to determine the display sub-model of each sub-model, thereby reducing the number of display surfaces of the final three-dimensional human body model.
- For example, a first acquisition unit can be provided, and the multiple sub-models of the basic model of the object to be displayed can be acquired through the first acquisition unit, for example, acquired from the image rendering engine; the first acquisition unit can also be implemented by a central processing unit (CPU), an image processor (GPU), a tensor processor (TPU), a field programmable logic gate array (FPGA), or another form of processing unit with data processing capabilities and/or instruction execution capabilities, together with corresponding computer instructions.
- the first acquisition unit may be a general-purpose processor or a special-purpose processor, and may be a processor based on the X86 or ARM architecture.
- For example, the plug-in obtains the multiple sub-models of the object to be displayed from the image rendering engine (for example, UE4), and the display sub-models of the multiple sub-models are respectively determined through the plug-in, that is, the corresponding instructions are executed in the plug-in to implement the subsequent steps S120-S140.
- the plug-in may be developed for the image rendering engine to implement corresponding functions.
- For example, the display environment parameters of the object to be displayed include the position and angle at which the display model of the object to be displayed needs to be presented in the display interface, and its distance from the display screen.
- the display screen is equivalent to a virtual camera, and the distance of the display model of the object to be displayed from the display screen is the distance of the display model of the object to be displayed from the virtual camera in a 3-dimensional space.
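- In other words, if the display model is placed at position $(x_m, y_m, z_m)$ in the three-dimensional scene and the virtual camera (the display screen) is at $(x_c, y_c, z_c)$, the display distance is simply the Euclidean distance between the two points (the coordinate symbols here are illustrative, not notation from the disclosure):

$$d = \sqrt{(x_m - x_c)^2 + (y_m - y_c)^2 + (z_m - z_c)^2}$$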
- For example, the display environment parameters of the object to be displayed include the following: when the sub-model is displayed at a close distance, that is, when the sub-model is enlarged, the display model of the object to be displayed needs to be presented close to the display screen in the display interface, that is, the distance between the display model of the object to be displayed and the display screen is small, which is recorded, for example, as the second distance; when the sub-model is displayed at a long distance, that is, when the sub-model is zoomed out, the display model of the object to be displayed needs to be presented far away from the display screen in the display interface, that is, the distance between the display model of the object to be displayed and the display screen is larger, which is recorded, for example, as the first distance (for example, the first distance is greater than the second distance).
- For example, a second acquisition unit may be provided, and the display environment parameters of the object to be displayed may be acquired through the second acquisition unit; the second acquisition unit may also be implemented by a central processing unit (CPU), an image processor (GPU), a tensor processor (TPU), a field programmable logic gate array (FPGA), or another form of processing unit with data processing capabilities and/or instruction execution capabilities, together with corresponding computer instructions.
- For example, the display detail level can be obtained through LOD (Levels of Detail) technology. LOD technology refers to determining the resource allocation for rendering an object according to the position and importance of the nodes of the object model in the display environment, reducing the number of faces and the detail of non-important objects and increasing the number of faces and the detail of important objects, so as to obtain efficient rendering.
- For example, LOD1 is the first display level, LOD2 is the second display level, LOD3 is the third display level, and so on.
- For example, based on the display environment parameters, the display detail level of the sub-model is determined.
- FIG. 2B is a flowchart of a method for determining the number of display sides of a sub-model provided by at least one embodiment of the present disclosure. That is, FIG. 2B is a flowchart of some examples of step S130 shown in FIG. 1B. For example, in the example shown in FIG. 2B, the determining method includes step S131 to step S133.
- the display processing method provided by at least one embodiment of the present disclosure will be introduced in detail with reference to FIG. 2B.
- Step S131 For each of the multiple sub-models, the display detail level of the sub-model is divided into a first display level and a second display level.
- Step S132 When the sub-model is displayed using the first distance, the first display level is used to determine the number of display sides of the sub-model.
- Step S133 When the sub-model is displayed using the second distance, the second display level is used to determine the number of display surfaces of the sub-model.
- the first distance and the second distance represent the distance from the sub-model to the display screen, and the first distance is greater than the second distance.
- For example, the first display level and the second display level each represent the percentage of the number of display surfaces of the sub-model (that is, the number of surfaces of the displayed sub-model) relative to the original number of surfaces of the sub-model.
- For example, the second display level is greater than the first display level, that is, the number of display surfaces of the sub-model corresponding to the second display level is greater than the number of display surfaces of the sub-model corresponding to the first display level. Therefore, the display sub-model determined using the second display level has a higher level of detail and is more suitable for close-up display (for example, enlarged display), while the display sub-model determined using the first display level has a lower level of detail and is therefore more suitable for long-distance display (for example, reduced display).
- For example, the second display level is 40% (for example, if the sub-model has 1000 faces, the number of display surfaces of the sub-model, that is, the number of faces of the displayed sub-model, is 400), and the first display level is 10% (for example, if the sub-model has 1000 faces, the number of display surfaces of the sub-model is 100).
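- In other words, the number of display surfaces is the display level (taken as a fraction) multiplied by the original number of faces; with the example figures above:

$$N_{\text{display}} = L \cdot N_{\text{original}}, \qquad 0.40 \times 1000 = 400, \qquad 0.10 \times 1000 = 100$$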
- the display detail level can also be continuously divided into a third display level and a fourth display level, etc., which is not limited in the embodiment of the present disclosure.
- In step S132, for example, when the sub-model is displayed at the first distance, that is, when the sub-model is displayed at a long distance, a lower display level (for example, the first display level) is used to determine the number of display surfaces of the sub-model, so that a display sub-model with fewer display surfaces can be determined.
- the preset value of the first display level is set in the system, and when the sub-model is displayed at the first distance, the preset value of the first display level is called to set the number of display surfaces of the sub-model.
- In step S133, for example, when the sub-model is displayed at the second distance, that is, when the sub-model is displayed at a close distance, a higher display level (for example, the second display level) is used to determine the number of display surfaces of the sub-model, so that a display sub-model with a larger number of display surfaces can be determined; that is, the number of display surfaces of the sub-model determined using the first display level is less than the number of display surfaces of the sub-model determined using the second display level.
- the preset value of the second display level is set in the system, and when the sub-model is displayed at the second distance, the preset value of the second display level is called to set the display surface number of the sub-model.
- For example, when an instruction to enlarge the display of the sub-model is received, for example, when a finger or mouse clicks the button for enlarging the sub-model on the display interface of the electronic device (for example, the interface of the digital human body APP), the display sub-model determined by the number of display surfaces obtained in step S133 is displayed; when an instruction to zoom out the sub-model is received, for example, when a finger or mouse clicks the button for zooming out the sub-model on the display interface of the electronic device (for example, the interface of the digital human body APP), the display sub-model determined by the number of display surfaces obtained in step S132 is displayed.
- For example, a preset distance threshold can be set according to the actual situation; when the display distance (the distance between the sub-model and the display screen) is greater than the preset threshold, the first distance is considered to be used to display the sub-model, and when the display distance is less than the preset threshold, the second distance is considered to be used to display the sub-model, which is not limited in the embodiments of the present disclosure.
- In this way, the display detail level is determined according to the display environment parameters, for example, the distance from the display position of the sub-model to the display screen. For example, when the sub-model is enlarged (the second distance is used to display the sub-model), the second display level with a larger value is used to determine the number of display surfaces of the sub-model, so that the sub-model can be displayed in more detail and the display effect is improved; when the sub-model is reduced (the first distance is used to display the sub-model), the first display level with a smaller value is used to determine the number of display surfaces of the sub-model, which reduces the number of faces and the detail of non-important models, thereby reducing the amount of data processed by the system, reducing resource consumption, and improving rendering efficiency.
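- A minimal sketch of this runtime switch is given below; it assumes the two display sub-models have already been prepared (as in the earlier sketch), and the threshold value and function name are hypothetical:

```python
def select_display_sub_model(distance_to_screen: float,
                             far_sub_model,    # built with the first display level (e.g. LOD1)
                             near_sub_model,   # built with the second display level (e.g. LOD2)
                             threshold: float = 100.0):
    """Return the display sub-model to show for the current display distance.

    Zooming out (distance greater than the preset threshold, i.e. the first distance)
    selects the coarser sub-model; zooming in (the second distance) selects the finer one.
    """
    return far_sub_model if distance_to_screen > threshold else near_sub_model
```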
- For example, a display surface number determining unit may be provided, and the number of display surfaces of the sub-model may be determined by the display surface number determining unit; the display surface number determining unit may also be implemented by a central processing unit (CPU), an image processor (GPU), a tensor processor (TPU), a field programmable logic gate array (FPGA), or another form of processing unit with data processing capabilities and/or instruction execution capabilities, together with corresponding computer instructions.
- In step S140, for example, for each sub-model of the multiple sub-models, a modeling method known in the field is used to determine the display sub-model of the sub-model based on the determined number of display surfaces of the sub-model.
- the display sub-model is the optimized model after surface reduction, for example, it is also a three-dimensional model.
- Using the display sub-models for display on the display interface of the electronic device can reduce the number of display surfaces of each sub-model, improve the display effect of the object to be displayed, reduce the amount of data processed by the system, and reduce resource consumption.
- For example, a display sub-model determining unit may be provided, and the display sub-model of the sub-model may be determined by the display sub-model determining unit; the display sub-model determining unit may also be implemented by a central processing unit (CPU), an image processor (GPU), a tensor processor (TPU), a field programmable logic gate array (FPGA), or another form of processing unit with data processing capabilities and/or instruction execution capabilities, together with corresponding computer instructions.
- FIG. 3 is a flowchart of another display processing method provided by at least one embodiment of the present disclosure. As shown in FIG. 3, the display processing method further includes step S150-step S180.
- Step S150 Retain the sub-texture mapping information of the sub-model that is consistent with the texture mapping information of the base model.
- the sub-model includes multiple sub-texture mapping information UV0, UV1, UV2.
- the texture of the sub-model includes information such as the color and brightness of the sub-model
- For example, the sub-texture mapping information includes the coordinates, on the texture picture, of the texture of each point of the sub-model, so that the texture of each point on the picture can be accurately mapped onto the surface of the three-dimensional model according to the coordinates, and the texture of the three-dimensional model can be displayed correctly.
- For example, the serial numbers of the six faces of a sub-model after unwrapping can be arranged as 123456 or 612345, as long as the texture mapping information of each face corresponds to the relative position of that face of the sub-model in three-dimensional space (that is, the texture mapping information of face 1 corresponds to face 1 on the three-dimensional model, the texture mapping information of face 2 corresponds to face 2 on the three-dimensional model, ..., and the texture mapping information of face 6 corresponds to face 6 on the three-dimensional model). Different unwrapping methods (and the corresponding different serial number arrangements) therefore correspond to different texture mapping coordinates, which is why the sub-model includes multiple pieces of sub-texture mapping information.
- For example, the sub-texture mapping information UV0 is consistent with the texture mapping information of the base model, while the rest may be inconsistent. Therefore, the sub-texture mapping information UV0 of the sub-model, which is the same as the texture mapping information of the base model, is retained for the subsequent display of the display sub-model.
- Step S160 Delete other sub-texture map information except the reserved sub-texture map information.
- Step S170 For each sub-model of the multiple sub-models, the name of the sub-model is modified to be consistent with the name of the basic model corresponding to the sub-model.
- For example, when the basic model is split in the engine, the name of each sub-model may be changed; this would prevent other software from matching the names after the multiple sub-models are exported from the engine, so the subsequent operations could not be performed.
- the name of the sub-model is modified to be consistent with the name of the base model corresponding to the sub-model.
- For example, if the name of the corresponding basic model is "liver", the name of the sub-model should also be "liver".
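- The UV clean-up and renaming of steps S150-S170 can be pictured with the following sketch; the dictionary-based representation of the texture mapping sets and the equality test against the base model's UV set are assumptions made for illustration, not the plug-in's real data structures.

```python
def clean_up_sub_model(sub_model_name: str,
                       sub_uv_sets: dict,   # e.g. {"UV0": ..., "UV1": ..., "UV2": ...}
                       base_model_name: str,
                       base_uv_set):
    """Steps S150-S170: keep only the UV set matching the base model, then rename."""
    # S150: retain the sub-texture mapping information consistent with the base model.
    retained = {name: uv for name, uv in sub_uv_sets.items() if uv == base_uv_set}
    # S160: everything else (e.g. UV1, UV2 added during the split) is discarded implicitly.
    # S170: the sub-model name is reset to the base model's name (e.g. "liver"),
    #       so that other software can still match it after export.
    return base_model_name, retained

# Example: only UV0 matches the base model's texture mapping information.
name, uvs = clean_up_sub_model("liver_split_03",
                               {"UV0": "base_coords", "UV1": "auto_1", "UV2": "auto_2"},
                               "liver", "base_coords")
# name == "liver"; uvs == {"UV0": "base_coords"}
```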
- Step S180 Use the display sub-models of the multiple sub-models to display the object to be displayed.
- For example, the display sub-model is the optimized, surface-reduced model. Displaying the display sub-models on the display interface of the electronic device can reduce the number of display surfaces of each sub-model, improve the display effect of the object to be displayed, reduce the amount of data processed by the system, and reduce resource consumption.
- For example, the display processing of the individual sub-models can be performed in parallel, so as to increase the speed of the display processing method, reduce the time consumed, and improve display processing efficiency.
- FIG. 4 is a flowchart of a method for displaying an object to be displayed according to at least one embodiment of the present disclosure. That is, FIG. 4 is a flowchart of some examples of step S180 shown in FIG. 3. For example, in the example shown in FIG. 4, the determining method includes step S181 to step S183.
- a method for displaying an object to be displayed provided by at least one embodiment of the present disclosure will be introduced in detail with reference to FIG. 4.
- Step S181 Import the display sub-models of the multiple sub-models into the three-dimensional software.
- For example, after step S170, the final display sub-models are obtained, and the display sub-models are output to the three-dimensional software for merging.
- the three-dimensional software may include 3ds max, Maya, Cinema 4D, zbrush, etc.
- Step S182 Combine the display sub-models of the multiple sub-models in the three-dimensional software to obtain the display model of the basic model.
- For example, the display sub-models of the sub-models corresponding to the multiple elements are combined in the three-dimensional software to obtain the display model of the basic model, for example, to obtain the display model of an organ (for example, the liver), or to obtain the display model of the human body.
- the display model is also a three-dimensional model.
- For example, the display sub-models of the sub-models can be merged using a merging method known in the field, which is not limited in the embodiments of the present disclosure and will not be repeated here.
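- Conceptually, the merge of step S182 simply combines the geometry of all display sub-models into a single display model. The sketch below represents each display sub-model as a plain list of faces rather than using the API of any particular three-dimensional software:

```python
def merge_display_sub_models(display_sub_models: list) -> list:
    """Combine the display sub-models into one display model (cf. step S182).

    Each display sub-model is represented as a list of faces; the returned list
    is the face list of the merged display model.
    """
    merged_faces = []
    for faces in display_sub_models:
        merged_faces.extend(faces)  # collect every sub-model's display surfaces
    return merged_faces

# Example: two surface-reduced sub-models merged into one display model.
display_model_faces = merge_display_sub_models([[("a", "b", "c")],
                                                [("d", "e", "f"), ("g", "h", "i")]])
```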
- For example, the basic model has a large number of faces, while the display model is the three-dimensional human body model of the object to be displayed after optimized surface reduction; its number of display faces is much smaller than that of the basic model, which can reduce the amount of data processed by the system and reduce resource consumption.
- Step S183 Display the display model to display the object to be displayed.
- displaying the display model on the display interface of the electronic device can reduce the number of display surfaces of each sub-model, improve the display effect of the object to be displayed, reduce the amount of system data processing, and reduce resource consumption.
- the flow of the display processing method provided by the foregoing various embodiments of the present disclosure may include more or fewer operations, and these operations may be performed sequentially or in parallel.
- the flow of the display processing method described above includes multiple operations appearing in a specific order, it should be clearly understood that the order of the multiple operations is not limited.
- the display processing method described above may be executed once, or may be executed multiple times according to predetermined conditions.
- The display processing method provided by the above embodiments of the present disclosure can determine the number of display surfaces of each sub-model based on the display environment and the display detail level, so that while the number of display surfaces of each sub-model is reduced, the controllable data of each sub-model is increased; the display effect of the object to be displayed is improved, the amount of data processed by the system is reduced, and resource consumption is reduced.
- FIG. 2C is a schematic diagram of the original model of a three-dimensional human body model (ie, a model without surface reduction);
- FIG. 2D is a display model obtained after surface reduction is performed on the three-dimensional human body model shown in FIG. 2C using the display processing method provided by at least one embodiment of the present disclosure;
- FIG. 2E is a schematic diagram of the original model of the sphenoid bone of the human body;
- FIG. 2F is a schematic diagram of the sphenoid bone shown in FIG. 2E after surface reduction processing is performed using the display processing method provided by at least one embodiment of the present disclosure.
- For example, the unreduced three-dimensional human body model includes 4,916,042 triangular faces, while the display model after surface reduction includes 2,800,316 triangular faces; for example, areas such as the connections between muscles and bones in the three-dimensional human body model have been surface-reduced, while the remaining parts have not been subjected to the above display processing (that is, have not been surface-reduced).
- For example, the original model of the sphenoid bone includes 10,189 triangular faces.
- For example, the surface-reduced sphenoid bone model includes 1,527 triangular faces, which is about 15% of the original model and basically reaches the limit of surface reduction.
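- The reduction ratios quoted above can be checked directly from the face counts (the whole-body percentage is simply the corresponding quotient):

$$\frac{1527}{10189} \approx 0.15 \;(\text{about } 15\%), \qquad \frac{2\,800\,316}{4\,916\,042} \approx 0.57 \;(\text{about } 57\%\text{ of the original faces retained})$$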
- Therefore, using the display processing method provided by at least one embodiment of the present disclosure can not only achieve maximum surface reduction to reduce the amount of data processed by the system, but also does so without impairing the display effect of the display model, improving the display effect of the object to be displayed.
- FIG. 5A is a system flowchart of a display processing method provided by at least one embodiment of the present disclosure
- FIG. 5B is a system flowchart of a specific implementation example of the display processing method shown in FIG. 5A.
- the display processing method provided by at least one embodiment of the present disclosure will be described in detail below with reference to FIGS. 5A and 5B.
- the engine can be the UE4 engine.
- For example, the basic model is split into multiple sub-models in the UE4 engine; in this process, the names of the multiple sub-models may be changed and the number of UVs (pieces of sub-texture mapping information) may be increased. The split sub-models are then imported into the plug-in developed for UE4.
- In the plug-in, the preset values controlling the number of display surfaces of the sub-model are set, for example, the first display level LOD1 and the second display level LOD2 are set, and LOD levels are added; for example, when the sub-model is displayed at the first distance, the first display level is used to determine the number of display surfaces of the sub-model, and when the sub-model is displayed at the second distance, the second display level is used to determine the number of display surfaces of the sub-model, so as to reduce the number of display surfaces of the sub-model. Then, the display sub-model of the sub-model is determined based on the number of display surfaces of the sub-model.
- For example, the plug-in also sets the name of each sub-model and the number of UVs (pieces of sub-texture mapping information): the sub-texture mapping information of the sub-model that is consistent with the texture mapping information of the base model is retained, the other sub-texture mapping information except the retained sub-texture mapping information is deleted, and, for each sub-model of the multiple sub-models, the name of the sub-model is modified to be consistent with the name of the base model corresponding to the sub-model. For details, refer to the related descriptions of steps S150-S170, which will not be repeated here.
- For example, the display sub-models output by the plug-in are then imported into the three-dimensional software and merged; for details, please refer to the relevant descriptions of steps S181-S183. Finally, the display model is output and displayed on the display interface of the electronic device to display the object to be displayed.
- FIG. 6 is a schematic block diagram of a display processing apparatus provided by at least one embodiment of the present disclosure.
- For example, the display processing device 100 includes a first acquisition unit 110, a second acquisition unit 120, a display surface number determining unit 130, and a display sub-model determining unit 140.
- these units may be implemented by hardware (for example, circuit) modules or software modules, etc.
- the following embodiments are the same as this, and will not be repeated.
- For example, these units may be implemented by a central processing unit (CPU), an image processor (GPU), a tensor processor (TPU), a field programmable logic gate array (FPGA), or other forms of processing units with data processing capabilities and/or instruction execution capabilities, together with corresponding computer instructions.
- the first obtaining unit 110 is configured to obtain multiple sub-models of the basic model of the object to be displayed. For example, each submodel includes multiple faces.
- the first acquiring unit 110 may implement step S110, and the specific implementation method can refer to the related description of step S110, which will not be repeated here.
- the second acquiring unit 120 is configured to acquire the display environment parameters of the object to be displayed.
- the second acquiring unit 120 may implement step S120, and its specific implementation method can refer to the related description of step S120, which will not be repeated here.
- the display surface number determining unit 130 is configured to determine the display detail level of the sub-model for each of the multiple sub-models based on the display environment parameters, and determine the sub-model based on the determined display detail level of the sub-model The number of display sides.
- the display surface number determining unit 130 can implement step S130, and the specific implementation method can refer to the related description of step S130, which will not be repeated here.
- the display sub-model determining unit 140 is configured to determine the display sub-model of the sub-model based on the determined number of display surfaces of the sub-model for each sub-model of the plurality of sub-models.
- the display sub-model determining unit 140 can implement step S140, and the specific implementation method can refer to the related description of step S140, which will not be repeated here.
- the display surface number determining unit 130 is further configured to: for each of the multiple sub-models, divide the display detail level of the sub-model into a first display level and a second display level; When a sub-model is displayed at a distance, the first display level is used to determine the number of display surfaces of the sub-model; when a second distance is used to display the sub-model, the second display level is used to determine the number of display surfaces of the sub-model.
- the second display level is greater than the first display level
- the first distance and the second distance represent the distance from the sub-model to the display screen, and the first distance is greater than the second distance.
- FIG. 7 is a schematic diagram of another display processing device provided by at least one embodiment of the present disclosure.
- For example, the display processing device 100 further includes a texture mapping information determining unit 150, a name determining unit 160, and a display unit 170.
- the texture mapping information determining unit 150 is configured to retain the sub-texture mapping information of the sub-model that is consistent with the texture mapping information of the base model; delete other sub-texture mapping information except for the retained sub-texture mapping information .
- the texture mapping information determining unit 150 can implement steps S150 and S160, and the specific implementation method can refer to related descriptions of steps S150 and S160, which will not be repeated here.
- the name determining unit 160 is configured to modify the name of each sub-model of the plurality of sub-models to be consistent with the name of the base model corresponding to the sub-model.
- the name determining unit 160 can implement step S170, and the specific implementation method can refer to the related description of step S170, which will not be repeated here.
- the display unit 170 is configured to use display sub-models of multiple sub-models to display the object to be displayed.
- the display unit 170 can implement step S180, and the specific implementation method can refer to the related description of step S180, which will not be repeated here.
- the display unit 170 may be a display screen in an electronic device, such as a liquid crystal display screen or an organic light emitting diode display screen, etc., which is not limited in the embodiments of the present disclosure.
- For example, the display processing device 100 may include more or fewer circuits or units, and the connection relationships between the various circuits or units are not limited and can be determined according to actual needs.
- the specific structure of each circuit is not limited, and may be composed of analog devices according to the circuit principle, or may be composed of digital chips, or be composed in other suitable manners.
- FIG. 8 is a schematic block diagram of still another display processing apparatus provided by at least one embodiment of the present disclosure.
- the display processing apparatus 200 includes a processor 210, a memory 220, and one or more computer program modules 221.
- the processor 210 and the memory 220 are connected through a bus system 230.
- one or more computer program modules 221 are stored in the memory 220.
- one or more computer program modules 221 include instructions for executing the display processing method provided by any embodiment of the present disclosure.
- instructions in one or more computer program modules 221 may be executed by the processor 210.
- the bus system 230 may be a commonly used serial or parallel communication bus, etc., which is not limited in the embodiments of the present disclosure.
- For example, the processor 210 may be a central processing unit (CPU), a digital signal processor (DSP), an image processor (GPU), or another form of processing unit with data processing capabilities and/or instruction execution capabilities; it may be a general-purpose processor or a special-purpose processor.
- the memory 220 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
- the volatile memory may include random access memory (RAM) and/or cache memory (cache), for example.
- the non-volatile memory may include read-only memory (ROM), hard disk, flash memory, etc., for example.
- One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 210 may run the program instructions to implement the functions of the embodiments of the present disclosure (as implemented by the processor 210) and/or other desired functions, for example, the display processing method.
- Various application programs and various data such as display environment parameters, display detail levels, and various data used and/or generated by the application programs, can also be stored in the computer-readable storage medium.
- It should be noted that not all constituent units of the display processing apparatus 200 are shown in the embodiments of the present disclosure.
- those skilled in the art may provide and set other unshown component units according to specific needs, and the embodiments of the present disclosure do not limit this.
- FIG. 9 is a schematic structural diagram of an electronic device provided by at least one embodiment of the present disclosure.
- Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), vehicle-mounted terminals (e.g. Mobile terminals such as car navigation terminals) and fixed terminals such as digital TVs, desktop computers, etc.
- the electronic device shown in FIG. 9 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
- the electronic device includes the display processing device 100/200 provided by any embodiment of the present disclosure and a display screen (for example, the output device 307 shown in FIG. 9); upon receiving an instruction to display the object to be displayed, the display screen It is configured to receive and display display sub-models of multiple sub-models from the display processing device to display the object to be displayed. For example, import the display sub-models of multiple sub-models into the 3D software, and import the display sub-models of the multiple sub-models into the 3D software.
- the display sub-models of the sub-models corresponding to multiple elements are combined in the three-dimensional software to obtain the display model of the basic model, for example, to obtain the display model of the organ and liver, or to obtain the display model of the human body.
- the display model is also a three-dimensional model.
- the display screen receiving the display sub-models of the multiple sub-models from the display processing device and displaying them includes receiving the combined display model from the display processing device, so as to display the final display model of the object to be displayed on the display screen.
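As an informal illustration of the combining step described above (not the disclosure's own implementation), the following Python sketch merges several display sub-models, each given as vertex and face lists, into a single display model by offsetting face indices; the data layout and the function name `merge_display_sub_models` are assumptions made for this example.

```python
# Hypothetical sketch: combine display sub-models (vertex/face lists) into
# one display model by concatenating vertices and re-indexing faces.
from typing import List, Tuple

Vertex = Tuple[float, float, float]
Face = Tuple[int, int, int]


def merge_display_sub_models(
    sub_models: List[Tuple[List[Vertex], List[Face]]]
) -> Tuple[List[Vertex], List[Face]]:
    merged_vertices: List[Vertex] = []
    merged_faces: List[Face] = []
    for vertices, faces in sub_models:
        offset = len(merged_vertices)  # keeps earlier sub-models' indices valid
        merged_vertices.extend(vertices)
        merged_faces.extend((a + offset, b + offset, c + offset) for a, b, c in faces)
    return merged_vertices, merged_faces


# Example: two single-triangle display sub-models become one display model.
tri1 = ([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], [(0, 1, 2)])
tri2 = ([(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)], [(0, 1, 2)])
model = merge_display_sub_models([tri1, tri2])
assert len(model[0]) == 6 and model[1][1] == (3, 4, 5)
```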
- the electronic device 300 includes a processing device (such as a central processing unit, a graphics processor, etc.) 301, which can perform various appropriate actions and processing based on a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303.
- In the RAM 303, various programs and data required for the operation of the computer system are also stored.
- the processing device 301, the ROM 302, and the RAM 303 are connected to each other through the bus 304.
- An input/output (I/O) interface 305 is also connected to the bus 304.
- the following components can be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 307 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 308 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 309 including a network interface card such as a LAN card, a modem, and the like.
- the communication device 309 may allow the electronic device 300 to perform wireless or wired communication with other devices to exchange data, and perform communication processing via a network such as the Internet.
- the drive 310 is also connected to the I/O interface 305 as needed.
- Although FIG. 9 shows the electronic device 300 including various devices, it should be understood that it is not required to implement or include all of the illustrated devices; more or fewer devices may alternatively be implemented or included.
- the electronic device 300 may further include a peripheral interface (not shown in the figure) and the like.
- the peripheral interface can be various types of interfaces, such as a USB interface, a lightning interface, and the like.
- the communication device 309 can communicate with networks and other devices through wireless communication; the networks are, for example, the Internet, an intranet, and/or a wireless network such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN).
- Wireless communication can use any of a variety of communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wi-Fi (e.g., based on the IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n standards), Voice over Internet Protocol (VoIP), Wi-MAX, protocols used for e-mail, instant messaging, and/or short message service (SMS), or any other suitable communication protocol.
- the electronic device can be any device such as a mobile phone, a tablet computer, a notebook computer, an e-book, a game console, a television, a digital photo frame, a navigator, etc., or can be any combination of an electronic device and hardware, which is not limited in the embodiments of the present disclosure.
- the process described above with reference to the flowchart may be implemented as a computer software program.
- the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for executing the method shown in the flowchart.
- the computer program may be downloaded and installed from the network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302.
- When the computer program is executed by the processing device 301, the above-mentioned display processing function defined in the method of the embodiments of the present disclosure is executed.
- the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
- the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
- Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, in which computer-readable program code is carried.
- This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
- the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
- the client and the server can communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network).
- Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and end-to-end networks (for example, ad hoc end-to-end networks), as well as any currently known or future developed networks.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs.
- when the one or more programs are executed by the electronic device, the electronic device: obtains at least two Internet Protocol addresses; sends a node evaluation request including the at least two Internet Protocol addresses to a node evaluation device, where the node evaluation device selects an Internet Protocol address from the at least two Internet Protocol addresses and returns it; and receives the Internet Protocol address returned by the node evaluation device; where the obtained Internet Protocol address indicates an edge node in the content distribution network.
- the aforementioned computer-readable medium carries one or more programs, and when the aforementioned one or more programs are executed by the electronic device, the electronic device: receives a node evaluation request including at least two Internet Protocol addresses; selects an Internet Protocol address from the at least two Internet Protocol addresses; and returns the selected Internet Protocol address; where the received Internet Protocol address indicates an edge node in the content distribution network.
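Purely as an assumption-laden example (the selection policy and all function names here are invented for illustration, not taken from this disclosure), the node evaluation exchange described in the two paragraphs above can be sketched as:

```python
# Hypothetical sketch of the node evaluation exchange: the client sends a
# node evaluation request with candidate IP addresses, the evaluation side
# picks one, and the returned address identifies an edge node in the CDN.
from typing import List


def evaluate_nodes(candidate_ips: List[str]) -> str:
    """Node evaluation side: pick one address from the request.

    The policy here (lexicographically lowest address wins) is only a
    placeholder; a real deployment would rank nodes by latency, load, etc.
    """
    if len(candidate_ips) < 2:
        raise ValueError("a node evaluation request carries at least two IP addresses")
    return min(candidate_ips)


def resolve_edge_node(candidate_ips: List[str]) -> str:
    """Client side: send the request and use the returned address as the edge node."""
    return evaluate_nodes(candidate_ips)


print(resolve_edge_node(["203.0.113.7", "198.51.100.3"]))  # -> "198.51.100.3"
```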
- the computer program code used to perform the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
- the above-mentioned programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- exemplary types of hardware logic components include: Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), Application Specific Standard Product (ASSP), System on Chip (SOC), Complex Programmable Logic Device (CPLD), and so on.
- a machine-readable medium may be a tangible medium, which may contain or store a program for use by the instruction execution system, apparatus, or device or in combination with the instruction execution system, apparatus, or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- the machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- machine-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the foregoing.
- FIG. 10 is a schematic diagram of a storage medium provided by at least one embodiment of the present disclosure.
- the storage medium 400 non-transitorily stores computer-readable instructions 401, and when the non-transitory computer-readable instructions are executed by a computer (including a processor), the display processing method provided by any embodiment of the present disclosure can be executed.
- the storage medium may be any combination of one or more computer-readable storage media.
- one computer-readable storage medium contains computer-readable program code for determining the number of display faces of the sub-model, and another computer-readable storage medium contains computer-readable program code for determining the display sub-model of the sub-model.
- the computer can execute the program code stored in the computer storage medium, and execute, for example, the display processing method provided in any embodiment of the present disclosure.
- the storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disk read-only memory (CD-ROM), flash memory, or any combination of the foregoing storage media, and may also be other suitable storage media.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Physics (AREA)
- Mathematical Optimization (AREA)
- Geometry (AREA)
- Mathematical Analysis (AREA)
- Algebra (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
Claims (18)
- A display processing method, comprising: acquiring a plurality of sub-models of a basic model of an object to be displayed, wherein each sub-model comprises a plurality of faces; acquiring a display environment parameter of the object to be displayed; based on the display environment parameter, for each sub-model of the plurality of sub-models, determining a display detail level of the sub-model, and determining a number of display faces of the sub-model based on the determined display detail level of the sub-model; and for each sub-model of the plurality of sub-models, determining a display sub-model of the sub-model based on the determined number of display faces of the sub-model.
- The display processing method according to claim 1, wherein, based on the display environment parameter, for each sub-model of the plurality of sub-models, determining the display detail level of the sub-model and determining the number of display faces of the sub-model based on the determined display detail level of the sub-model comprises: for each sub-model of the plurality of sub-models, dividing the display detail level of the sub-model into a first display level and a second display level, wherein the second display level is greater than the first display level; in a case where the sub-model is displayed at a first distance, determining the number of display faces of the sub-model using the first display level; and in a case where the sub-model is displayed at a second distance, determining the number of display faces of the sub-model using the second display level; wherein the first distance and the second distance represent distances from the sub-model to a display screen, and the first distance is greater than the second distance.
- The display processing method according to claim 2, wherein the number of display faces of the sub-model determined using the first display level is less than the number of display faces of the sub-model determined using the second display level.
- The display processing method according to any one of claims 1-3, further comprising: importing the basic model of the object to be displayed into an image rendering engine, and splitting the basic model of the object to be displayed into the plurality of sub-models through the image rendering engine.
- The display processing method according to claim 4, wherein the image rendering engine comprises an Unreal Engine 4, and the display processing method further comprises: acquiring the plurality of sub-models of the object to be displayed from the Unreal Engine 4 through a plug-in, and respectively determining the display sub-models of the plurality of sub-models through the plug-in.
- The display processing method according to any one of claims 1-5, wherein, for each sub-model of the plurality of sub-models, the sub-model comprises a plurality of pieces of sub-texture mapping information, and the display processing method further comprises: retaining the sub-texture mapping information of the sub-model that is consistent with texture mapping information of the basic model; and deleting the sub-texture mapping information other than the retained sub-texture mapping information.
- The display processing method according to any one of claims 1-6, further comprising: for each sub-model of the plurality of sub-models, modifying a name of the sub-model to be consistent with a name of the basic model corresponding to the sub-model.
- The display processing method according to any one of claims 1-7, further comprising: displaying the object to be displayed by using the display sub-models of the plurality of sub-models.
- The display processing method according to claim 8, wherein the display sub-models of the plurality of sub-models are imported into three-dimensional software; the display sub-models of the plurality of sub-models are combined in the three-dimensional software to obtain a display model of the basic model; and the display model is displayed to display the object to be displayed.
- The display processing method according to any one of claims 1-9, wherein the object to be displayed is a human body, and the basic model is a three-dimensional model of the human body.
- A display processing apparatus, comprising: a first acquisition unit configured to acquire a plurality of sub-models of a basic model of an object to be displayed, wherein each sub-model comprises a plurality of faces; a second acquisition unit configured to acquire a display environment parameter of the object to be displayed; a display face number determination unit configured to, based on the display environment parameter, for each sub-model of the plurality of sub-models, determine a display detail level of the sub-model and determine a number of display faces of the sub-model based on the determined display detail level of the sub-model; and a display sub-model determination unit configured to, for each sub-model of the plurality of sub-models, determine a display sub-model of the sub-model based on the determined number of display faces of the sub-model.
- The display processing apparatus according to claim 11, wherein the display face number determination unit is further configured to: for each sub-model of the plurality of sub-models, divide the display detail level of the sub-model into a first display level and a second display level, wherein the second display level is greater than the first display level; in a case where the sub-model is displayed at a first distance, determine the number of display faces of the sub-model using the first display level; and in a case where the sub-model is displayed at a second distance, determine the number of display faces of the sub-model using the second display level; wherein the first distance and the second distance represent distances from the sub-model to a display screen, and the first distance is greater than the second distance.
- The display processing apparatus according to claim 11 or 12, wherein, for each sub-model of the plurality of sub-models, the sub-model comprises a plurality of pieces of sub-texture mapping information, and the display processing apparatus further comprises: a texture mapping information determination unit configured to retain the sub-texture mapping information of the sub-model that is consistent with texture mapping information of the basic model, and delete the sub-texture mapping information other than the retained sub-texture mapping information.
- The display processing apparatus according to any one of claims 11-13, further comprising: a name determination unit configured to, for each sub-model of the plurality of sub-models, modify a name of the sub-model to be consistent with a name of the basic model corresponding to the sub-model.
- The display processing apparatus according to claim 14, further comprising: a display unit configured to display the object to be displayed by using the display sub-models of the plurality of sub-models.
- A display processing apparatus, comprising: a processor; a memory; and one or more computer program modules, wherein the one or more computer program modules are stored in the memory and configured to be executed by the processor, and the one or more computer program modules comprise instructions for implementing the display processing method according to any one of claims 1-10.
- An electronic device, comprising: the display processing apparatus according to any one of claims 11-16 and a display screen; wherein, upon receiving an instruction to display the object to be displayed, the display screen is configured to receive the display sub-models of the plurality of sub-models from the display processing apparatus and display them, so as to display the object to be displayed.
- A storage medium, non-transitorily storing computer-readable instructions, wherein when the computer-readable instructions are executed by a computer, the display processing method according to any one of claims 1-10 can be executed.
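As an illustration only, and not as a definition of the claimed subject matter, the flow recited in claims 1-3 and 6 can be sketched in Python as follows; the distance threshold, the level-to-face-count ratios, the truncation-based reduction, and all identifiers are assumptions introduced for this example.

```python
# Hypothetical sketch of the claimed flow: pick a display detail level from a
# display environment parameter (distance to the display screen), turn that
# level into a number of display faces, derive a display sub-model, and keep
# only sub-texture mapping info that matches the basic model's texture info.
from dataclasses import dataclass
from typing import Dict, List, Tuple

Face = Tuple[int, int, int]

FIRST_LEVEL, SECOND_LEVEL = 1, 2                           # second level > first level
LEVEL_FACE_RATIO = {FIRST_LEVEL: 0.2, SECOND_LEVEL: 0.8}   # assumed ratios
DISTANCE_THRESHOLD = 10.0                                  # assumed far/near split


@dataclass
class SubModel:
    name: str
    faces: List[Face]
    sub_textures: Dict[str, bytes]  # sub-texture mapping information


def display_detail_level(distance_to_screen: float) -> int:
    """Far sub-models get the first (coarser) level, near ones the second."""
    return FIRST_LEVEL if distance_to_screen > DISTANCE_THRESHOLD else SECOND_LEVEL


def number_of_display_faces(sub_model: SubModel, level: int) -> int:
    return max(1, int(len(sub_model.faces) * LEVEL_FACE_RATIO[level]))


def display_sub_model(sub_model: SubModel, face_count: int) -> SubModel:
    """Placeholder reduction: keep the first `face_count` faces.

    A real implementation would use a mesh simplification algorithm
    (e.g. edge collapse) rather than truncation.
    """
    return SubModel(sub_model.name, sub_model.faces[:face_count], dict(sub_model.sub_textures))


def retain_matching_textures(sub_model: SubModel, basic_textures: Dict[str, bytes]) -> None:
    """Keep only sub-texture mapping info consistent with the basic model."""
    for key in list(sub_model.sub_textures):
        if sub_model.sub_textures[key] != basic_textures.get(key):
            del sub_model.sub_textures[key]


# Usage under the assumptions above: a 1000-face sub-model viewed from far
# away gets the first display level and roughly 20% of its original faces.
liver = SubModel("liver", [(i, i + 1, i + 2) for i in range(1000)], {"albedo": b"\x00"})
level = display_detail_level(distance_to_screen=12.0)   # -> FIRST_LEVEL
faces = number_of_display_faces(liver, level)            # -> 200
reduced = display_sub_model(liver, faces)
```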
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202080000062.6A CN113498532B (zh) | 2020-01-21 | 2020-01-21 | 显示处理方法、显示处理装置、电子设备及存储介质 |
PCT/CN2020/073556 WO2021146930A1 (zh) | 2020-01-21 | 2020-01-21 | 显示处理方法、显示处理装置、电子设备及存储介质 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/073556 WO2021146930A1 (zh) | 2020-01-21 | 2020-01-21 | 显示处理方法、显示处理装置、电子设备及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021146930A1 true WO2021146930A1 (zh) | 2021-07-29 |
Family
ID=76992797
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/073556 WO2021146930A1 (zh) | 2020-01-21 | 2020-01-21 | 显示处理方法、显示处理装置、电子设备及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113498532B (zh) |
WO (1) | WO2021146930A1 (zh) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114153516B (zh) * | 2021-10-18 | 2022-12-09 | 深圳追一科技有限公司 | 数字人显示面板配置方法、装置、电子设备及存储介质 |
CN114022616B (zh) * | 2021-11-16 | 2023-07-07 | 北京城市网邻信息技术有限公司 | 模型处理方法及装置、电子设备及存储介质 |
CN113963127B (zh) * | 2021-12-22 | 2022-03-15 | 深圳爱莫科技有限公司 | 一种基于仿真引擎的模型自动生成方法及处理设备 |
CN114470766A (zh) * | 2022-02-14 | 2022-05-13 | 网易(杭州)网络有限公司 | 模型防穿插方法及装置、电子设备、存储介质 |
CN116188686B (zh) * | 2023-02-08 | 2023-09-08 | 北京鲜衣怒马文化传媒有限公司 | 通过局部减面组合成角色低面模型的方法、系统和介质 |
CN116414316B (zh) * | 2023-06-08 | 2023-12-22 | 北京掌舵互动科技有限公司 | 基于数字城市中的bim模型的虚幻引擎渲染方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102289839A (zh) * | 2011-08-04 | 2011-12-21 | 天津中科遥感信息技术有限公司 | 一种面向三维数字城市的高效多细节层次渲染方法 |
US20130194260A1 (en) * | 2011-08-01 | 2013-08-01 | Peter Kunath | System for visualizing three dimensional objects or terrain |
CN103914877A (zh) * | 2013-01-09 | 2014-07-09 | 南京理工大学 | 一种基于扩展合并的三维模型多细节层次结构 |
CN107590858A (zh) * | 2017-08-21 | 2018-01-16 | 上海妙影医疗科技有限公司 | 基于ar技术的医学样品展示方法和计算机设备、存储介质 |
CN110427532A (zh) * | 2019-07-23 | 2019-11-08 | 中南民族大学 | 温室三维可视化方法、装置、设备及存储介质 |
-
2020
- 2020-01-21 CN CN202080000062.6A patent/CN113498532B/zh active Active
- 2020-01-21 WO PCT/CN2020/073556 patent/WO2021146930A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130194260A1 (en) * | 2011-08-01 | 2013-08-01 | Peter Kunath | System for visualizing three dimensional objects or terrain |
CN102289839A (zh) * | 2011-08-04 | 2011-12-21 | 天津中科遥感信息技术有限公司 | 一种面向三维数字城市的高效多细节层次渲染方法 |
CN103914877A (zh) * | 2013-01-09 | 2014-07-09 | 南京理工大学 | 一种基于扩展合并的三维模型多细节层次结构 |
CN107590858A (zh) * | 2017-08-21 | 2018-01-16 | 上海妙影医疗科技有限公司 | 基于ar技术的医学样品展示方法和计算机设备、存储介质 |
CN110427532A (zh) * | 2019-07-23 | 2019-11-08 | 中南民族大学 | 温室三维可视化方法、装置、设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN113498532A (zh) | 2021-10-12 |
CN113498532B (zh) | 2024-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021146930A1 (zh) | 显示处理方法、显示处理装置、电子设备及存储介质 | |
US11344806B2 (en) | Method for rendering game, and method, apparatus and device for generating game resource file | |
KR102663617B1 (ko) | 증강 현실 객체의 조건부 수정 | |
JP5960368B2 (ja) | ビジビリティ情報を用いたグラフィックスデータのレンダリング | |
WO2021008627A1 (zh) | 游戏角色渲染方法、装置、电子设备及计算机可读介质 | |
CN114820905B (zh) | 虚拟形象生成方法、装置、电子设备及可读存储介质 | |
KR20140139553A (ko) | 그래픽 프로세싱 유닛들에서 가시성 기반 상태 업데이트들 | |
CN111882631B (zh) | 一种模型渲染方法、装置、设备及存储介质 | |
JP2022505118A (ja) | 画像処理方法、装置、ハードウェア装置 | |
CN110211017B (zh) | 图像处理方法、装置及电子设备 | |
US12013844B2 (en) | Concurrent hash map updates | |
CN109766319B (zh) | 压缩任务处理方法、装置、存储介质及电子设备 | |
WO2022095526A1 (zh) | 图形引擎和适用于播放器的图形处理方法 | |
WO2017129105A1 (zh) | 一种图形界面更新方法和装置 | |
CN117523062B (zh) | 光照效果的预览方法、装置、设备及存储介质 | |
CN114049403A (zh) | 一种多角度三维人脸重建方法、装置及存储介质 | |
CN112807695A (zh) | 游戏场景生成方法和装置、可读存储介质、电子设备 | |
CN113744379B (zh) | 图像生成方法、装置和电子设备 | |
CN117557712A (zh) | 渲染方法、装置、设备及存储介质 | |
CN114898029A (zh) | 对象渲染方法及装置、存储介质、电子装置 | |
WO2023185476A1 (zh) | 对象渲染方法、装置、电子设备、存储介质及程序产品 | |
CN115953553B (zh) | 虚拟形象生成方法、装置、电子设备以及存储介质 | |
CN113487708B (zh) | 基于图形学的流动动画实现方法、存储介质及终端设备 | |
RU2810701C2 (ru) | Гибридный рендеринг | |
WO2021043128A1 (zh) | 粒子计算方法、装置、电子设备及计算机可读存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20915024 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20915024 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20915024 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.03.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20915024 Country of ref document: EP Kind code of ref document: A1 |