WO2021146930A1 - Display processing method, display processing apparatus, electronic device, and storage medium - Google Patents

Display processing method, display processing apparatus, electronic device, and storage medium

Info

Publication number
WO2021146930A1
WO2021146930A1 (PCT/CN2020/073556)
Authority
WO
WIPO (PCT)
Prior art keywords: display, sub-model, models, displayed
Prior art date
Application number
PCT/CN2020/073556
Other languages
English (en)
French (fr)
Inventor
白光 (Bai Guang)
白桦 (Bai Hua)
王秉东 (Wang Bingdong)
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority to CN202080000062.6A (CN113498532B)
Priority to PCT/CN2020/073556
Publication of WO2021146930A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/30 - Polynomial surface description
    • G06T19/00 - Manipulating 3D models or images for computer graphics

Definitions

  • Embodiments of the present disclosure relate to a display processing method, a display processing apparatus, an electronic device, and a storage medium.
  • Health is always an important topic of people's attention. With the development of computer technology and communication technology, people hope to monitor their health at any time, quickly determine which part of the body may have a problem, and take preventive action as soon as possible. Since the human body is a complex, integrated organic system, people first need a clear and accurate understanding of it. For example, existing anatomical knowledge and existing display technology can help people understand the structure of the whole body and its various parts, thereby enabling monitoring of their own health status.
  • At least one embodiment of the present disclosure provides a display processing method, including: obtaining multiple sub-models of a basic model of an object to be displayed, each sub-model including multiple faces; obtaining display environment parameters of the object to be displayed; determining, for each of the multiple sub-models and based on the display environment parameters, the display detail level of the sub-model, and determining the number of display faces of the sub-model based on the determined display detail level; and determining, for each of the multiple sub-models, the display sub-model of the sub-model based on the determined number of display faces.
  • For example, determining the display detail level of the sub-model and the number of display faces of the sub-model includes: for each of the multiple sub-models, dividing the display detail level of the sub-model into a first display level and a second display level, where the second display level is greater than the first display level; when the sub-model is displayed at a first distance, using the first display level to determine the number of display faces of the sub-model; and when the sub-model is displayed at a second distance, using the second display level to determine the number of display faces of the sub-model. The first distance and the second distance represent the distance from the sub-model to the display screen, and the first distance is greater than the second distance.
  • For example, the number of display faces of the sub-model determined using the first display level is less than the number of display faces determined using the second display level.
  • For example, the display processing method provided by at least one embodiment of the present disclosure further includes: importing the basic model of the object to be displayed into an image rendering engine, and splitting the basic model into the multiple sub-models through the image rendering engine.
  • For example, the image rendering engine includes the Unreal Engine 4, and the display processing method further includes: obtaining the multiple sub-models of the object to be displayed from the Unreal Engine 4 through a plug-in, and determining the display sub-models of the multiple sub-models respectively through the plug-in.
  • For example, each sub-model includes multiple pieces of sub-texture mapping information, and the display processing method further includes: retaining the sub-texture mapping information of the sub-model that is consistent with the texture mapping information of the basic model, and deleting the other sub-texture mapping information apart from the retained sub-texture mapping information.
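A minimal sketch of this retain-and-delete step, assuming (purely for illustration, not part of the disclosure) that texture mapping information is stored as dictionaries keyed by texture name:

```python
def filter_sub_textures(sub_textures, base_textures):
    """Keep only the sub-texture entries whose keys also appear in the base
    model's texture mapping information; every other entry is deleted."""
    return {name: info for name, info in sub_textures.items()
            if name in base_textures}

base = {"skin_diffuse": "uv0", "skin_normal": "uv0"}
sub = {"skin_diffuse": "uv0", "stray_uv_set": "uv1", "skin_normal": "uv0"}
print(filter_sub_textures(sub, base))  # the stray entry is dropped
```

The actual data layout inside an engine like UE4 differs; the dictionary form only illustrates the keep-if-consistent rule.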
  • For example, the display processing method provided by at least one embodiment of the present disclosure further includes: for each of the multiple sub-models, modifying the name of the sub-model to keep it consistent with the name of the corresponding part of the basic model.
  • the display processing method provided by at least one embodiment of the present disclosure further includes: using the display sub-models of the multiple sub-models to display the to-be-displayed object.
  • For example, the display sub-models of the multiple sub-models are imported into three-dimensional software; the display sub-models of the multiple sub-models are combined in the three-dimensional software to obtain the display model of the basic model; and the display model is displayed to present the object to be displayed.
  • the object to be displayed is a human body
  • the basic model is a three-dimensional human body model.
  • At least one embodiment of the present disclosure further provides a display processing device, including: a first acquisition unit configured to obtain multiple sub-models of a basic model of an object to be displayed, each sub-model including multiple faces; a second acquisition unit configured to obtain the display environment parameters of the object to be displayed; a display face number determination unit configured to determine, for each of the multiple sub-models and based on the display environment parameters, the display detail level of the sub-model, and to determine the number of display faces of the sub-model based on the determined display detail level; and a display sub-model determination unit configured to determine, for each of the multiple sub-models, the display sub-model of the sub-model based on the determined number of display faces.
  • For example, the display face number determination unit is further configured to: for each of the multiple sub-models, divide the display detail level of the sub-model into a first display level and a second display level, where the second display level is greater than the first display level; when the sub-model is displayed at the first distance, use the first display level to determine the number of display faces of the sub-model; and when the sub-model is displayed at the second distance, use the second display level to determine the number of display faces of the sub-model. The first distance and the second distance represent the distance from the sub-model to the display screen, and the first distance is greater than the second distance.
  • For example, each sub-model includes multiple pieces of sub-texture mapping information, and the display processing device further includes: a texture mapping information determination unit configured to retain the sub-texture mapping information of the sub-model that is consistent with the texture mapping information of the basic model, and to delete the other sub-texture mapping information apart from the retained sub-texture mapping information.
  • For example, the display processing device provided by at least one embodiment of the present disclosure further includes: a name determination unit configured to modify the name of each of the multiple sub-models to keep it consistent with the name of the corresponding part of the basic model.
  • the display processing device provided by at least one embodiment of the present disclosure further includes: a display unit configured to use the display sub-models of the multiple sub-models to display the to-be-displayed object.
  • At least one embodiment of the present disclosure further provides a display processing device, including: a processor; a memory; and one or more computer program modules, where the one or more computer program modules are stored in the memory and configured to be executed by the processor, and include instructions for executing the display processing method provided by any embodiment of the present disclosure.
  • At least one embodiment of the present disclosure further provides an electronic device, including the display processing device provided by any embodiment of the present disclosure and a display screen; when an instruction to display the object to be displayed is received, the display screen is configured to receive the display sub-models of the multiple sub-models from the display processing device and display them to present the object to be displayed.
  • At least one embodiment of the present disclosure further provides a storage medium that non-transitorily stores computer-readable instructions; when the computer-readable instructions are executed by a computer, the display processing method provided by any embodiment of the present disclosure can be executed.
  • FIG. 1A is an effect diagram of a display model after surface reduction;
  • FIG. 1B is a flowchart of an example of a display processing method provided by at least one embodiment of the present disclosure
  • FIG. 2A is a schematic diagram of a three-dimensional human body model provided by at least one embodiment of the present disclosure
  • FIG. 2B is a flowchart of a process for determining the number of display faces of a sub-model provided by at least one embodiment of the present disclosure;
  • Fig. 2C is a schematic diagram of an original model of a three-dimensional human body model
  • FIG. 2D is a display model obtained by surface reduction of the three-dimensional human body model shown in FIG. 2C using the display processing method provided by at least one embodiment of the present disclosure;
  • Figure 2E is a schematic diagram of the original model of the sphenoid bone of the human body
  • FIG. 2F is a schematic diagram of the sphenoid bone shown in FIG. 2E after surface reduction using the display processing method provided by at least one embodiment of the present disclosure;
  • FIG. 3 is a flowchart of another display processing method provided by at least one embodiment of the present disclosure.
  • FIG. 4 is a flow chart of a method for displaying objects to be displayed according to at least one embodiment of the present disclosure
  • FIG. 5A is a system flowchart of a display processing method provided by at least one embodiment of the present disclosure;
  • FIG. 5B is a system flowchart of a specific implementation example of the display processing method shown in FIG. 5A;
  • FIG. 6 is a schematic block diagram of a display processing apparatus provided by at least one embodiment of the present disclosure.
  • FIG. 7 is a schematic block diagram of another display processing apparatus provided by at least one embodiment of the present disclosure.
  • FIG. 8 is a schematic block diagram of still another display processing apparatus provided by at least one embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by at least one embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a storage medium provided by at least one embodiment of the present disclosure.
  • For example, the surface of a 3D human body model created by this method consists of multiple polygons (for example, triangles). The more polygons the model has, the greater its number of faces, the larger the amount of data, and the slower the processing speed of the system.
  • the first method is to manually optimize the human body 3D model directly in the 3D software to achieve surface reduction;
  • the second method is to set the preset value of the display surface number of the three-dimensional human body model to realize automatic surface reduction in the three-dimensional software.
  • Figure 1A is an effect diagram of a display model after surface reduction.
  • For example, if the number of faces of the 3D human body model is reduced too much, the situation shown in Figure 1A appears: parts of the 3D human body model that are connected but not welded together show gaps of varying sizes.
  • At least one embodiment of the present disclosure provides a display processing method, including: obtaining multiple sub-models of a basic model of an object to be displayed, each sub-model including multiple faces; obtaining display environment parameters of the object to be displayed; determining, for each of the multiple sub-models and based on the display environment parameters, the display detail level of the sub-model, and determining the number of display faces of the sub-model based on the determined display detail level; and determining, for each of the multiple sub-models, the display sub-model of the sub-model based on the determined number of display faces.
  • Some embodiments of the present disclosure also provide a display processing device and a storage medium corresponding to the above-mentioned display processing method.
  • The display processing method provided by the above embodiments of the present disclosure can determine the number of display faces of each sub-model based on the display environment and the display detail level, so that while reducing the number of display faces of each sub-model, the controllable data of each sub-model (for example, the display detail level) is increased, which improves the display effect of the object to be displayed, reduces the data processing load of the system, and reduces resource consumption.
  • FIG. 1B is a flowchart of an example of a display processing method provided by at least one embodiment of the present disclosure.
  • For example, the display processing method can be implemented in the form of software, hardware, firmware, or any combination thereof, and loaded and executed by processors in devices such as mobile phones, tablet computers, notebook computers, desktop computers, and network servers. This can reduce the number of display faces of the three-dimensional human body model, increase its controllable data, and improve its display effect.
  • the display processing method is applicable to a computing device.
  • For example, the computing device includes any electronic device with computing functions, such as a mobile phone, notebook computer, tablet computer, desktop computer, or network server, which can load and execute the display processing method; the embodiments of the present disclosure do not limit this.
  • the computing device may include a central processing unit (CPU) or a graphics processing unit (Graphics Processing Unit, GPU) and other forms of processing units, storage units, etc. that have data processing capabilities and/or instruction execution capabilities.
  • the computing device is also installed with an operating system, an application programming interface (for example, OpenGL (Open Graphics Library), Metal, etc.), etc., and the display processing method provided by the embodiment of the present disclosure is implemented by running code or instructions.
  • For example, the computing device may also include a display component, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a quantum dot light-emitting diode (QLED) display, a projection component, or a VR head-mounted display device (for example, a VR helmet or VR glasses), which is not limited in the embodiments of the present disclosure.
  • the display part can display the object to be displayed.
  • the display processing method includes step S110 to step S140.
  • Step S110: Obtain multiple sub-models of the basic model of the object to be displayed, where each sub-model includes multiple faces.
  • Step S120: Obtain the display environment parameters of the object to be displayed.
  • Step S130: Based on the display environment parameters, determine, for each of the multiple sub-models, the display detail level of the sub-model, and determine the number of display faces of the sub-model based on the determined display detail level.
  • Step S140: For each of the multiple sub-models, determine the display sub-model of the sub-model based on the determined number of display faces.
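As a rough end-to-end illustration of steps S110 to S140 (a sketch, not the patent's implementation), the following assumes a sub-model is simply a name with a face count, the display environment parameter is a single viewing distance, and the two display-level percentages and the distance threshold are the example values used later in the text:

```python
FIRST_LEVEL = 0.10   # long-range display: keep 10% of faces (example value)
SECOND_LEVEL = 0.40  # close-range display: keep 40% of faces (example value)

def display_face_count(face_count, distance, threshold):
    """S130: pick a display detail level from the distance, then a face count."""
    level = FIRST_LEVEL if distance > threshold else SECOND_LEVEL
    return int(face_count * level)

def build_display_submodels(sub_models, distance, threshold=5.0):
    """S110-S140 end to end: map each sub-model to its display face count."""
    return {name: display_face_count(faces, distance, threshold)
            for name, faces in sub_models.items()}

sub_models = {"liver": 1000, "heart": 2000}        # S110: the split sub-models
print(build_display_submodels(sub_models, 8.0))    # far away -> 10% of faces
print(build_display_submodels(sub_models, 2.0))    # close up -> 40% of faces
```

In the real method the output of S140 is a reduced 3D mesh per sub-model, not just a face count; the sketch only shows how the environment parameter drives the per-sub-model budget.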
  • For example, the object to be displayed is a human body, and the basic model is a three-dimensional human body model (as shown in FIG. 2A).
  • the object to be displayed may also be other animals, plants, objects, etc. that can be displayed, which is not limited in the embodiment of the present disclosure.
  • The following description takes the case where the object to be displayed is a human body and the basic model is a three-dimensional human body model as an example; the embodiments of the present disclosure are not limited to this.
  • For example, after the electronic device is turned on and enters the display interface for displaying human body information (for example, after a digital human body app is opened), the optimized three-dimensional human body model (that is, the display model) is displayed on the electronic device.
  • For example, the basic model of the object to be displayed is imported into the image rendering engine (that is, the renderer, which may include, for example, a two-dimensional or three-dimensional image engine running on a graphics processor (GPU)), and the basic model is split into the multiple sub-models through the image rendering engine.
  • the image rendering engine provides the multiple sub-models to the first acquiring unit described below, so as to process the multiple sub-models in subsequent steps to determine the display sub-models of the multiple sub-models.
  • the image rendering engine includes Unreal Engine 4 (UE4 for short). The following takes the image rendering engine as UE4 as an example for description, which is not limited in the embodiment of the present disclosure.
  • the basic model of the object to be displayed can also be imported into other types of engines such as physics engine, script engine, or network engine for split processing according to the functions that need to be implemented, which is not limited in the embodiments of the present disclosure.
  • For example, when the basic model is a three-dimensional human body model, the three-dimensional human body model includes human organ category information, human system category information, or human body parameter information.
  • For example, three-dimensional human body models classified by organ can include three-dimensional models of organs such as the heart, liver, spleen, lungs, and kidneys; classified by human system, they can include three-dimensional models of systems such as the circulatory, digestive, respiratory, reproductive, and immune systems; classified by part, they can include three-dimensional models of local regions such as the head, chest, and upper and lower limbs.
  • For example, the multiple sub-models of the three-dimensional model of an organ (for example, the liver) correspond one-to-one to the multiple elements into which it is split, that is, each split element forms a sub-model.
  • each submodel includes multiple faces.
  • As mentioned above, the surface of the 3D human body model created by this method consists of multiple polygons (for example, triangles). Therefore, the surface of each of the split sub-models also consists of multiple polygons, that is, each sub-model includes multiple faces.
  • For example, each sub-model is optimized and face-reduced, that is, the number of display faces of each sub-model is determined in order to determine its display sub-model, thereby reducing the number of display faces of the final three-dimensional human body model.
  • For example, a first acquisition unit can be provided to acquire the multiple sub-models of the basic model of the object to be displayed, for example from an image rendering engine. The first acquisition unit can be implemented by a central processing unit (CPU), graphics processing unit (GPU), tensor processing unit (TPU), field-programmable gate array (FPGA), or another form of processing unit with data processing and/or instruction execution capabilities, together with corresponding computer instructions. For example, the first acquisition unit may be a general-purpose processor or a special-purpose processor, and may be based on the X86 or ARM architecture.
  • For example, a plug-in obtains the multiple sub-models of the object to be displayed from the image rendering engine (for example, UE4), and the display sub-models of the multiple sub-models are determined respectively through the plug-in; that is, the corresponding instructions are executed in the plug-in to implement the subsequent steps S120 to S140.
  • the plug-in may be developed for the image rendering engine to implement corresponding functions.
  • For example, the display environment parameters of the object to be displayed include the position, angle, and distance from the display screen at which the display model of the object to be displayed needs to be presented in the display interface.
  • the display screen is equivalent to a virtual camera, and the distance of the display model of the object to be displayed from the display screen is the distance of the display model of the object to be displayed from the virtual camera in a 3-dimensional space.
  • For example, the display environment parameters of the object to be displayed include the following: when the sub-model is displayed at close range, that is, when the sub-model is enlarged, the display model of the object to be displayed needs to be presented close to the display screen in the display interface, so the distance between the display model and the display screen is small (for example, recorded as the second distance); when the sub-model is displayed at long range, that is, when the sub-model is zoomed out, the display model needs to be presented far from the display screen, so the distance between the display model and the display screen is larger (for example, recorded as the first distance, where the first distance is greater than the second distance).
  • For example, a second acquisition unit may be provided to acquire the display environment parameters of the object to be displayed; the second acquisition unit can be implemented by a central processing unit (CPU), graphics processing unit (GPU), tensor processing unit (TPU), field-programmable gate array (FPGA), or another form of processing unit with data processing and/or instruction execution capabilities, together with corresponding computer instructions.
  • the display level of detail can be obtained through LOD (Levels of Detail, multiple levels of detail) technology.
  • LOD technology refers to determining the resource allocation for rendering an object according to the position and importance of the object model's nodes in the display environment, reducing the number of faces and detail of non-important objects and increasing the number of faces and detail of important objects, so as to achieve efficient rendering.
  • For example, LOD1 is the first display level, LOD2 is the second display level, LOD3 is the third display level, and so on.
  • FIG. 2B is a flowchart of a method for determining the number of display sides of a sub-model provided by at least one embodiment of the present disclosure. That is, FIG. 2B is a flowchart of some examples of step S130 shown in FIG. 1B. For example, in the example shown in FIG. 2B, the determining method includes step S131 to step S133.
  • the display processing method provided by at least one embodiment of the present disclosure will be introduced in detail with reference to FIG. 2B.
  • Step S131: For each of the multiple sub-models, divide the display detail level of the sub-model into a first display level and a second display level.
  • Step S132: When the sub-model is displayed at the first distance, use the first display level to determine the number of display faces of the sub-model.
  • Step S133: When the sub-model is displayed at the second distance, use the second display level to determine the number of display faces of the sub-model.
  • the first distance and the second distance represent the distance from the sub-model to the display screen, and the first distance is greater than the second distance.
  • For example, the first display level and the second display level each represent the ratio (as a percentage) of the number of display faces of the sub-model to the original number of faces of the sub-model.
  • For example, the second display level is greater than the first display level, that is, the sub-model has more display faces at the second display level than at the first display level. Therefore, the display sub-model determined by the second display level has a higher level of detail and is more suitable for close-range (for example, enlarged) display, while the display sub-model determined by the first display level has a lower level of detail and is more suitable for long-range (for example, reduced) display.
  • For example, the second display level is 40% (for example, if the sub-model has 1000 faces, its number of display faces is 400), and the first display level is 10% (for example, if the sub-model has 1000 faces, its number of display faces is 100).
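The worked figures above can be checked with a one-line helper; the 40% and 10% values are the text's example percentages, not values fixed by the method:

```python
def faces_at_level(original_faces, level_percent):
    # number of display faces = original face count x display level percentage
    return original_faces * level_percent // 100

assert faces_at_level(1000, 40) == 400  # second display level
assert faces_at_level(1000, 10) == 100  # first display level
```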
  • the display detail level can also be continuously divided into a third display level and a fourth display level, etc., which is not limited in the embodiment of the present disclosure.
  • In step S132, for example, when the sub-model is displayed at the first distance, that is, displayed at long range, a lower display level (for example, the first display level) is used to determine the number of display faces of the sub-model, so a display sub-model with fewer display faces can be determined.
  • the preset value of the first display level is set in the system, and when the sub-model is displayed at the first distance, the preset value of the first display level is called to set the number of display surfaces of the sub-model.
  • In step S133, for example, when the sub-model is displayed at the second distance, that is, displayed at close range, a higher display level (for example, the second display level) is used to determine the number of display faces of the sub-model, so a display sub-model with more display faces can be determined; that is, the number of display faces determined by the first display level is less than that determined by the second display level.
  • the preset value of the second display level is set in the system, and when the sub-model is displayed at the second distance, the preset value of the second display level is called to set the display surface number of the sub-model.
  • For example, when an instruction to enlarge the sub-model is received, for example when a finger or mouse clicks the enlarge button on the display interface of the electronic device (for example, the interface of a digital human body app), the display sub-model determined by the number of display faces obtained in step S133 is displayed; when an instruction to zoom out the sub-model is received, for example when a finger or mouse clicks the zoom-out button on the display interface, the display sub-model determined by the number of display faces obtained in step S132 is displayed.
  • For example, a preset distance threshold can be set according to the actual situation: when the display distance (the distance between the sub-model and the display screen) is greater than the preset threshold, the sub-model is displayed at the first distance; when the display distance is less than the preset threshold, the sub-model is displayed at the second distance. The embodiments of the present disclosure do not limit this.
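The threshold rule for choosing between the two distance regimes can be sketched as follows; the default threshold value and the returned labels are illustrative assumptions, not values from the disclosure:

```python
def select_display_level(display_distance, preset_threshold=5.0):
    """Map a sub-model's distance from the display screen to a display level."""
    if display_distance > preset_threshold:
        return "first display level"   # first-distance (long-range) display
    return "second display level"      # second-distance (close-range) display

assert select_display_level(8.0) == "first display level"
assert select_display_level(2.0) == "second display level"
```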
  • In this way, the display detail level is determined according to the display environment parameters, for example, the distance from the display position of the sub-model to the display screen. When the sub-model is enlarged (displayed at the second distance), the larger second display level is used to determine its number of display faces, so the sub-model can be displayed in more detail and the display effect is improved; when the sub-model is reduced (displayed at the first distance), the smaller first display level is used, which reduces the number of faces and detail of non-important models, thereby reducing the amount of data processed by the system, reducing resource consumption, and improving rendering efficiency.
• For example, a display surface number determination unit may be provided, and the display surface number of the sub-model may be determined by the display surface number determination unit; for example, a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field-programmable gate array (FPGA), or another form of processing unit with data processing capabilities and/or instruction execution capabilities, together with corresponding computer instructions, may be used to implement the display surface number determination unit.
• In step S140, for example, for each sub-model of the plurality of sub-models, a modeling method known in this field is used to determine the display sub-model of the sub-model based on the determined number of display faces.
  • the display sub-model is the optimized model after surface reduction, for example, it is also a three-dimensional model.
• Using the display sub-model on the display interface of the electronic device reduces the number of display faces of each sub-model, improves the display effect of the object to be displayed, reduces the data processing load of the system, and lowers resource consumption.
• For example, a display sub-model determination unit may be provided, and the display sub-model of the sub-model may be determined by the display sub-model determination unit; for example, a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field-programmable gate array (FPGA), or another form of processing unit with data processing capabilities and/or instruction execution capabilities, together with corresponding computer instructions, may be used to implement the display sub-model determination unit.
  • FIG. 3 is a flowchart of another display processing method provided by at least one embodiment of the present disclosure. As shown in FIG. 3, the display processing method further includes step S150-step S180.
• Step S150 Retain the sub-texture mapping information of the sub-model that is consistent with the texture mapping information of the base model.
• For example, the sub-model includes multiple pieces of sub-texture mapping information, such as UV0, UV1, and UV2.
  • the texture of the sub-model includes information such as the color and brightness of the sub-model
• For example, the sub-texture mapping information includes the coordinates, on the texture picture, of each point of the sub-model, so that the texture at each point on the picture can be accurately mapped to the surface of the three-dimensional model according to these coordinates, and the texture of the three-dimensional model can be displayed correctly.
• For example, the serial numbers of the six faces after unfolding can be arranged as 123456 or 612345, as long as the texture mapping information of each face corresponds to the relative position of that face on the sub-model in three-dimensional space (that is, the texture mapping information of face 1 corresponds to face 1 on the three-dimensional model, the texture mapping information of face 2 corresponds to face 2 on the three-dimensional model, ..., and the texture mapping information of face 6 corresponds to face 6 on the three-dimensional model). Different unfolding methods (and thus different serial number arrangements) correspond to different texture mapping coordinates, which is why the sub-model includes multiple pieces of sub-texture mapping information.
• For example, the sub-texture mapping information UV0 is consistent with the texture mapping information in the base model, while the others may be inconsistent. Therefore, the sub-texture mapping information UV0 of the sub-model, which matches the texture mapping information of the base model, is retained for the subsequent display of the display sub-model.
  • Step S160 Delete other sub-texture map information except the reserved sub-texture map information.
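• A minimal sketch of steps S150-S160, assuming a simplified data layout in which each UV channel is a list of (u, v) coordinates keyed by channel name (the names UV0/UV1/UV2 follow the example above; the function is illustrative, not the disclosed implementation):

```python
def keep_matching_uv(sub_model_uvs, base_model_uv):
    """Retain only the sub-texture mapping channels whose coordinates
    match the base model's texture mapping (step S150), and drop the
    remaining channels (step S160)."""
    return {name: uv for name, uv in sub_model_uvs.items()
            if uv == base_model_uv}
```

Only the channel consistent with the base model survives, so the exported sub-model carries a single set of texture mapping coordinates.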
  • Step S170 For each sub-model of the multiple sub-models, the name of the sub-model is modified to be consistent with the name of the basic model corresponding to the sub-model.
• For example, during splitting, the name of each sub-model may be changed. After the multiple sub-models are exported from the engine, other software would then be unable to match the sub-models by name, so subsequent operations could not be performed correctly.
  • the name of the sub-model is modified to be consistent with the name of the base model corresponding to the sub-model.
• For example, if the name of the base model is "liver", the name of the corresponding sub-model should also be "liver".
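• Step S170 could be sketched as below; the sub-model dictionaries and the name "liver" are illustrative assumptions, not the actual engine data structures:

```python
def rename_sub_models(sub_models, base_model_name):
    """Rename each sub-model to match its base model's name (step S170),
    so downstream software can still match the exported sub-models by name."""
    for sub_model in sub_models:
        sub_model["name"] = base_model_name
    return sub_models
```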
  • Step S180 Use the display sub-models of the multiple sub-models to display the object to be displayed.
• For example, the display sub-model is the optimized, face-reduced model. Displaying the display sub-model on the display interface of the electronic device reduces the number of display faces of each sub-model, improves the display effect of the object to be displayed, reduces the data processing load of the system, and lowers resource consumption.
• For example, the display processing of each sub-model can be performed in parallel, so as to increase the speed of the display processing method, reduce time consumption, and improve display processing efficiency.
• FIG. 4 is a flowchart of a method for displaying an object to be displayed according to at least one embodiment of the present disclosure; that is, FIG. 4 is a flowchart of some examples of step S180 shown in FIG. 3. For example, in the example shown in FIG. 4, the display method includes steps S181 to S183.
  • a method for displaying an object to be displayed provided by at least one embodiment of the present disclosure will be introduced in detail with reference to FIG. 4.
  • Step S181 Import the display sub-models of the multiple sub-models into the three-dimensional software.
• For example, after the final display sub-models are obtained through step S170, the display sub-models are output to the three-dimensional software for merging.
• For example, the three-dimensional software may include 3ds Max, Maya, Cinema 4D, ZBrush, etc.
  • Step S182 Combine the display sub-models of the multiple sub-models in the three-dimensional software to obtain the display model of the basic model.
• For example, the display sub-models of the sub-models corresponding to multiple elements are combined in the three-dimensional software to obtain the display model of the basic model, for example, the display model of an organ such as the liver, or the display model of the entire human body.
  • the display model is also a three-dimensional model.
  • the display sub-models of each sub-model can be merged by a merge method in the field, which is not limited in the embodiment of the present disclosure, and will not be repeated here.
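• As a simplified stand-in for the merge performed inside the three-dimensional software (the real merge is done by tools such as 3ds Max or Maya; this sketch only concatenates face lists to show the idea):

```python
def merge_display_sub_models(display_sub_models, model_name):
    """Combine the display sub-models into a single display model
    (step S182) by concatenating their face lists."""
    merged_faces = []
    for sub_model in display_sub_models:
        merged_faces.extend(sub_model["faces"])
    return {"name": model_name, "faces": merged_faces}
```

The resulting dictionary plays the role of the merged display model, whose total face count is the sum of the sub-models' display face counts.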
• For example, the basic model has a large number of faces, while the display model is a three-dimensional human body model of the object to be displayed after optimized face reduction; its number of display faces is much smaller than that of the basic model, which reduces the system's data processing load and resource consumption.
  • Step S183 Display the display model to display the object to be displayed.
  • displaying the display model on the display interface of the electronic device can reduce the number of display surfaces of each sub-model, improve the display effect of the object to be displayed, reduce the amount of system data processing, and reduce resource consumption.
  • the flow of the display processing method provided by the foregoing various embodiments of the present disclosure may include more or fewer operations, and these operations may be performed sequentially or in parallel.
  • the flow of the display processing method described above includes multiple operations appearing in a specific order, it should be clearly understood that the order of the multiple operations is not limited.
  • the display processing method described above may be executed once, or may be executed multiple times according to predetermined conditions.
• The display processing method provided by the above-mentioned embodiments of the present disclosure can determine the number of display faces of each sub-model based on the display environment and the display detail level, so that while the number of display faces of each sub-model is reduced, the data of each sub-model remains controllable; the display effect of the object to be displayed is improved, the data processing load of the system is reduced, and resource consumption is lowered.
  • FIG. 2C is a schematic diagram of the original model of a three-dimensional human body model (ie, a model without surface reduction);
• FIG. 2D is the display model obtained after surface reduction is performed on the three-dimensional human body model shown in FIG. 2C by the display processing method provided by at least one embodiment of the present disclosure;
  • FIG. 2E is a schematic diagram of the original model of the sphenoid bone of the human body;
• FIG. 2F is a schematic diagram of the sphenoid bone shown in FIG. 2E after surface reduction processing by the display processing method provided by at least one embodiment of the present disclosure.
• For example, the unreduced three-dimensional human body model includes 4,916,042 triangular faces, while the display model after face reduction includes 2,800,316 triangular faces. For example, in the three-dimensional human body model, areas such as the connections between muscles and bones have been face-reduced, while the remaining parts have not been subjected to the above-mentioned display processing (that is, have not been face-reduced).
• For example, the original model of the sphenoid bone includes 10,189 triangular faces, and the face-reduced sphenoid bone model includes 1,527 triangular faces, which is about 15% of the basic model and essentially reaches the limit of face reduction.
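• The quoted figures can be checked with a one-line ratio; the roughly 15% value for the sphenoid bone follows directly:

```python
def reduction_ratio(reduced_faces, original_faces):
    """Fraction of triangular faces remaining after face reduction."""
    return reduced_faces / original_faces

# Figures quoted above:
# sphenoid bone:    10,189 -> 1,527 faces  (about 15% remain)
# whole-body model: 4,916,042 -> 2,800,316 faces (about 57% remain)
```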
• Therefore, the display processing method provided by at least one embodiment of the present disclosure can achieve maximal face reduction to cut the system's data processing load without affecting the display effect of the display model, thereby improving the display effect of the object to be displayed.
  • FIG. 5A is a system flowchart of a display processing method provided by at least one embodiment of the present disclosure
  • FIG. 5B is a system flowchart of a specific implementation example of the display processing method shown in FIG. 5A.
  • the display processing method provided by at least one embodiment of the present disclosure will be described in detail below with reference to FIGS. 5A and 5B.
  • the engine can be the UE4 engine.
• For example, the basic model is split into multiple sub-models in the UE4 engine; during splitting, the names of the multiple sub-models may be changed and the number of UVs (sub-texture mapping information) may be increased. The split sub-models are then imported into a plug-in developed for UE4.
• For example, in the plug-in, preset values controlling the number of display faces of the sub-models are set: a first display level LOD1 and a second display level LOD2. For example, when the sub-model is displayed at the first distance, the first display level is used to determine the number of display faces of the sub-model, so as to reduce the number of display faces; when the sub-model is displayed at the second distance, the second display level is used to determine the number of display faces of the sub-model.
  • the display sub-model of the sub-model is determined based on the number of display sides of the sub-model.
• For example, the plug-in also resets the name of each sub-model and the number of UVs (texture mapping information): the sub-texture mapping information of the sub-model that is consistent with the texture mapping information of the base model is retained, and the other sub-texture mapping information is deleted; for each sub-model of the multiple sub-models, the name of the sub-model is modified to be consistent with the name of the base model corresponding to the sub-model.
• For details, reference may be made to the related descriptions of steps S160-S170, which will not be repeated here.
• For steps S181-S183, please refer to the relevant descriptions of steps S181-S183 above, which will not be repeated here.
  • output the display model and display the display model on the display interface of the electronic device to display the object to be displayed.
  • FIG. 6 is a schematic block diagram of a display processing apparatus provided by at least one embodiment of the present disclosure.
• The display processing device 100 includes a first acquisition unit 110, a second acquisition unit 120, a display surface number determining unit 130, and a display sub-model determining unit 140.
  • these units may be implemented by hardware (for example, circuit) modules or software modules, etc.
  • the following embodiments are the same as this, and will not be repeated.
• For example, a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field-programmable gate array (FPGA), or another form of processing unit with data processing capabilities and/or instruction execution capabilities, together with corresponding computer instructions, may be used to implement these units.
• For example, the first obtaining unit 110 is configured to obtain multiple sub-models of the basic model of the object to be displayed; for example, each sub-model includes multiple faces.
  • the first acquiring unit 110 may implement step S110, and the specific implementation method can refer to the related description of step S110, which will not be repeated here.
  • the second acquiring unit 120 is configured to acquire the display environment parameters of the object to be displayed.
  • the second acquiring unit 120 may implement step S120, and its specific implementation method can refer to the related description of step S120, which will not be repeated here.
• For example, the display surface number determining unit 130 is configured to, for each of the multiple sub-models, determine the display detail level of the sub-model based on the display environment parameters, and determine the number of display faces of the sub-model based on the determined display detail level.
  • the display surface number determining unit 130 can implement step S130, and the specific implementation method can refer to the related description of step S130, which will not be repeated here.
  • the display sub-model determining unit 140 is configured to determine the display sub-model of the sub-model based on the determined number of display surfaces of the sub-model for each sub-model of the plurality of sub-models.
  • the display sub-model determining unit 140 can implement step S140, and the specific implementation method can refer to the related description of step S140, which will not be repeated here.
• For example, the display surface number determining unit 130 is further configured to: for each of the multiple sub-models, divide the display detail level of the sub-model into a first display level and a second display level; when the first distance is used to display the sub-model, use the first display level to determine the number of display faces of the sub-model; and when the second distance is used to display the sub-model, use the second display level to determine the number of display faces of the sub-model.
  • the second display level is greater than the first display level
  • the first distance and the second distance represent the distance from the sub-model to the display screen, and the first distance is greater than the second distance.
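• The ordering constraints just stated can be captured in a small validity check (a sketch with hypothetical numeric values; the disclosure does not prescribe concrete units):

```python
def lod_config_is_valid(first_level, second_level,
                        first_distance, second_distance):
    """Check the stated relations: the second display level is greater
    than the first, and the first distance (farther from the screen)
    is greater than the second distance (closer)."""
    return second_level > first_level and first_distance > second_distance
```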
  • FIG. 7 is a schematic diagram of another display processing device provided by at least one embodiment of the present disclosure.
• For example, the display processing device 100 further includes a texture mapping information determining unit 150, a name determining unit 160, and a display unit 170.
• For example, the texture mapping information determining unit 150 is configured to retain the sub-texture mapping information of the sub-model that is consistent with the texture mapping information of the base model, and to delete the other sub-texture mapping information except the retained sub-texture mapping information.
  • the texture mapping information determining unit 150 can implement steps S150 and S160, and the specific implementation method can refer to related descriptions of steps S150 and S160, which will not be repeated here.
  • the name determining unit 160 is configured to modify the name of each sub-model of the plurality of sub-models to be consistent with the name of the base model corresponding to the sub-model.
  • the name determining unit 160 can implement step S170, and the specific implementation method can refer to the related description of step S170, which will not be repeated here.
  • the display unit 170 is configured to use display sub-models of multiple sub-models to display the object to be displayed.
  • the display unit 170 can implement step S180, and the specific implementation method can refer to the related description of step S180, which will not be repeated here.
  • the display unit 170 may be a display screen in an electronic device, such as a liquid crystal display screen or an organic light emitting diode display screen, etc., which is not limited in the embodiments of the present disclosure.
• It should be noted that the display processing device 100 may include more or fewer circuits or units, and the connection relationship between the various circuits or units is not limited and can be determined according to actual needs.
  • the specific structure of each circuit is not limited, and may be composed of analog devices according to the circuit principle, or may be composed of digital chips, or be composed in other suitable manners.
  • FIG. 8 is a schematic block diagram of still another display processing apparatus provided by at least one embodiment of the present disclosure.
  • the display processing apparatus 200 includes a processor 210, a memory 220, and one or more computer program modules 221.
  • the processor 210 and the memory 220 are connected through a bus system 230.
  • one or more computer program modules 221 are stored in the memory 220.
  • one or more computer program modules 221 include instructions for executing the display processing method provided by any embodiment of the present disclosure.
  • instructions in one or more computer program modules 221 may be executed by the processor 210.
  • the bus system 230 may be a commonly used serial or parallel communication bus, etc., which is not limited in the embodiments of the present disclosure.
• For example, the processor 210 may be a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), or another form of processing unit with data processing capabilities and/or instruction execution capabilities, and may be a general-purpose processing unit.
  • the memory 220 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include random access memory (RAM) and/or cache memory (cache), for example.
  • the non-volatile memory may include read-only memory (ROM), hard disk, flash memory, etc., for example.
• One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 210 may run the program instructions to implement the functions of the embodiments of the present disclosure (as implemented by the processor 210) and/or other desired functions, for example, the display processing method.
• Various application programs and various data, such as display environment parameters, display detail levels, and various data used and/or generated by the application programs, can also be stored in the computer-readable storage medium.
• It should be noted that, for clarity and conciseness, the embodiment of the present disclosure does not show all the constituent units of the display processing apparatus 200.
  • those skilled in the art may provide and set other unshown component units according to specific needs, and the embodiments of the present disclosure do not limit this.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by at least one embodiment of the present disclosure.
• Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 9 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
• For example, the electronic device includes the display processing device 100/200 provided by any embodiment of the present disclosure and a display screen (for example, the output device 307 shown in FIG. 9). Upon receiving an instruction to display the object to be displayed, the display screen is configured to receive the display sub-models of the multiple sub-models from the display processing device and display them, so as to display the object to be displayed. For example, the display sub-models of the multiple sub-models are imported into the three-dimensional software.
• For example, the display sub-models of the sub-models corresponding to multiple elements are combined in the three-dimensional software to obtain the display model of the basic model, for example, the display model of an organ such as the liver, or the display model of the entire human body.
  • the display model is also a three-dimensional model.
• For example, the display screen receiving and displaying the display sub-models of the multiple sub-models from the display processing device includes receiving the combined display model from the display processing device, so as to display the final display model of the object to be displayed on the display screen.
• For example, the electronic device 300 includes a processing device (such as a central processing unit or a graphics processor) 301, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303.
• In the RAM 303, various programs and data required for the operation of the computer system are also stored.
• The processing device 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304.
  • An input/output (I/O) interface 305 is also connected to the bus 304.
• For example, the following components can be connected to the I/O interface 305: an input device 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output device 307 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage device 308 including, for example, a magnetic tape and a hard disk; and a communication device 309 including a network interface card such as a LAN card or a modem.
  • the communication device 309 may allow the electronic device 300 to perform wireless or wired communication with other devices to exchange data, and perform communication processing via a network such as the Internet.
  • the driver 310 is also connected to the I/O interface 305 as needed.
• Although FIG. 9 shows the electronic device 300 including various devices, it should be understood that it is not required to implement or include all of the illustrated devices; more or fewer devices may alternatively be implemented or included.
  • the electronic device 300 may further include a peripheral interface (not shown in the figure) and the like.
  • the peripheral interface can be various types of interfaces, such as a USB interface, a lightning interface, and the like.
  • the communication device 309 can communicate with a network and other devices through wireless communication, such as the Internet, an intranet, and/or a wireless network such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN).
• Wireless communication can use any of a variety of communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wi-Fi (for example, based on the IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n standards), Voice over Internet Protocol (VoIP), Wi-MAX, protocols used for e-mail, instant messaging, and/or Short Message Service (SMS), or any other suitable communication protocol.
• For example, the electronic device can be any device such as a mobile phone, a tablet computer, a notebook computer, an e-book reader, a game console, a television, a digital photo frame, or a navigator, or can be any combination of electronic devices and hardware; the embodiments of the present disclosure do not limit this.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302.
• When the computer program is executed by the processing device 301, the above-mentioned display processing function defined in the method of the embodiment of the present disclosure is executed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above.
• Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein.
  • This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
• For example, the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
• In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication (for example, a communication network) in any form or medium.
• Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs.
• For example, when the above one or more programs are executed by the electronic device, the electronic device: obtains at least two Internet Protocol addresses; sends, to a node evaluation device, a node evaluation request including the at least two Internet Protocol addresses, where the node evaluation device selects an Internet Protocol address from the at least two Internet Protocol addresses and returns it; and receives the Internet Protocol address returned by the node evaluation device; for example, the obtained Internet Protocol address indicates an edge node in a content distribution network.
• Alternatively, the aforementioned computer-readable medium carries one or more programs, and when the aforementioned one or more programs are executed by the electronic device, the electronic device: receives a node evaluation request including at least two Internet Protocol addresses; selects an Internet Protocol address from the at least two Internet Protocol addresses; and returns the selected Internet Protocol address; for example, the received Internet Protocol address indicates an edge node in a content distribution network.
  • the computer program code used to perform the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
• The above-mentioned programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
• The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
• The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
• For example, exemplary types of hardware logic components that can be used include: field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
  • a machine-readable medium may be a tangible medium, which may contain or store a program for use by, or in combination with, an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • the machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • machine-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • FIG. 10 is a schematic diagram of a storage medium provided by at least one embodiment of the present disclosure.
  • the storage medium 400 non-transitorily stores computer-readable instructions 401, and when the computer-readable instructions are executed by a computer (including a processor), the display processing method provided by any embodiment of the present disclosure can be executed.
  • the storage medium may be any combination of one or more computer-readable storage media.
  • for example, one computer-readable storage medium contains computer-readable program code for determining the number of display faces of a sub-model, and another computer-readable storage medium contains computer-readable program code for determining the display sub-model of the sub-model.
  • the computer can execute the program code stored in the computer storage medium to perform, for example, the display processing method provided in any embodiment of the present disclosure.
  • the storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disk read-only memory (CD-ROM), or flash memory, or any combination of the foregoing storage media; it may also be another suitable storage medium.


Abstract

A display processing method, a display processing apparatus, an electronic device, and a storage medium. The display processing method includes: obtaining a plurality of sub-models of a base model of an object to be displayed, each sub-model comprising a plurality of faces; obtaining display environment parameters of the object to be displayed; based on the display environment parameters, for each sub-model of the plurality of sub-models, determining a display level of detail of the sub-model, and determining the number of display faces of the sub-model based on the determined display level of detail; and for each sub-model of the plurality of sub-models, determining a display sub-model of the sub-model based on the determined number of display faces. The display processing method can optimize and reduce the faces of the base model according to the display environment, improving the display effect and reducing resource consumption.

Description

Display processing method, display processing apparatus, electronic device, and storage medium. Technical Field
The embodiments of the present disclosure relate to a display processing method, a display processing apparatus, an electronic device, and a storage medium.
Background
Health has always been an important topic of concern. With the development of computer and communication technology, people hope to monitor their health status at any time, so as to quickly determine which part of the body may have a problem and take preventive measures as early as possible. Since the human body is a complex, integrated organic system, people first need a clear and accurate understanding of the human body; for example, existing anatomical knowledge and existing display technology can be used to let people understand the human body as a whole and the structure of its parts, so that monitoring of one's own health status can be achieved.
Summary
At least one embodiment of the present disclosure provides a display processing method, comprising: obtaining a plurality of sub-models of a base model of an object to be displayed, wherein each sub-model comprises a plurality of faces; obtaining display environment parameters of the object to be displayed; based on the display environment parameters, for each sub-model of the plurality of sub-models, determining a display level of detail of the sub-model, and based on the determined display level of detail of the sub-model, determining a number of display faces of the sub-model; and for each sub-model of the plurality of sub-models, determining a display sub-model of the sub-model based on the determined number of display faces of the sub-model.
For example, in the display processing method provided by at least one embodiment of the present disclosure, based on the display environment parameters, for each sub-model of the plurality of sub-models, determining the display level of detail of the sub-model and, based on the determined display level of detail, determining the number of display faces of the sub-model comprises: for each sub-model of the plurality of sub-models, dividing the display level of detail of the sub-model into a first display level and a second display level, wherein the second display level is greater than the first display level; when displaying the sub-model at a first distance, using the first display level to determine the number of display faces of the sub-model; and when displaying the sub-model at a second distance, using the second display level to determine the number of display faces of the sub-model; wherein the first distance and the second distance represent distances from the sub-model to a display screen, and the first distance is greater than the second distance.
For example, in the display processing method provided by at least one embodiment of the present disclosure, the number of display faces of the sub-model determined using the first display level is smaller than the number of display faces of the sub-model determined using the second display level.
For example, the display processing method provided by at least one embodiment of the present disclosure further comprises: importing the base model of the object to be displayed into an image rendering engine, and splitting the base model of the object to be displayed into the plurality of sub-models through the image rendering engine.
For example, in the display processing method provided by at least one embodiment of the present disclosure, the image rendering engine comprises the Unreal Engine 4, and the display processing method further comprises: obtaining the plurality of sub-models of the object to be displayed from the Unreal Engine 4 through a plug-in, and determining the display sub-models of the plurality of sub-models respectively through the plug-in.
For example, in the display processing method provided by at least one embodiment of the present disclosure, for each sub-model of the plurality of sub-models, the sub-model comprises a plurality of pieces of sub-texture-mapping information, and the display processing method further comprises: retaining the sub-texture-mapping information of the sub-model that is consistent with the texture-mapping information of the base model; and deleting the other sub-texture-mapping information except the retained sub-texture-mapping information.
For example, the display processing method provided by at least one embodiment of the present disclosure further comprises: for each sub-model of the plurality of sub-models, modifying the name of the sub-model to keep it consistent with the name of the base model corresponding to the sub-model.
For example, the display processing method provided by at least one embodiment of the present disclosure further comprises: displaying the object to be displayed by using the display sub-models of the plurality of sub-models.
For example, in the display processing method provided by at least one embodiment of the present disclosure, the display sub-models of the plurality of sub-models are imported into three-dimensional software; the display sub-models of the plurality of sub-models are merged in the three-dimensional software to obtain a display model of the base model; and the display model is displayed to display the object to be displayed.
For example, in the display processing method provided by at least one embodiment of the present disclosure, the object to be displayed is a human body, and the base model is a three-dimensional human body model.
At least one embodiment of the present disclosure further provides a display processing apparatus, comprising: a first obtaining unit configured to obtain a plurality of sub-models of a base model of an object to be displayed, wherein each sub-model comprises a plurality of faces; a second obtaining unit configured to obtain display environment parameters of the object to be displayed; a display face number determining unit configured to, based on the display environment parameters, for each sub-model of the plurality of sub-models, determine a display level of detail of the sub-model and, based on the determined display level of detail of the sub-model, determine a number of display faces of the sub-model; and a display sub-model determining unit configured to, for each sub-model of the plurality of sub-models, determine a display sub-model of the sub-model based on the determined number of display faces of the sub-model.
For example, in the display processing apparatus provided by at least one embodiment of the present disclosure, the display face number determining unit is further configured to: for each sub-model of the plurality of sub-models, divide the display level of detail of the sub-model into a first display level and a second display level, the second display level being greater than the first display level; when displaying the sub-model at a first distance, use the first display level to determine the number of display faces of the sub-model; and when displaying the sub-model at a second distance, use the second display level to determine the number of display faces of the sub-model; wherein the first distance and the second distance represent distances from the sub-model to a display screen, and the first distance is greater than the second distance.
For example, in the display processing apparatus provided by at least one embodiment of the present disclosure, for each sub-model of the plurality of sub-models, the sub-model comprises a plurality of pieces of sub-texture-mapping information, and the display processing apparatus further comprises: a texture-mapping information determining unit configured to retain the sub-texture-mapping information of the sub-model that is consistent with the texture-mapping information of the base model, and delete the other sub-texture-mapping information except the retained sub-texture-mapping information.
For example, the display processing apparatus provided by at least one embodiment of the present disclosure further comprises: a name determining unit configured to, for each sub-model of the plurality of sub-models, modify the name of the sub-model to keep it consistent with the name of the base model corresponding to the sub-model.
For example, the display processing apparatus provided by at least one embodiment of the present disclosure further comprises: a display unit configured to display the object to be displayed by using the display sub-models of the plurality of sub-models.
At least one embodiment of the present disclosure further provides a display processing apparatus, comprising: a processor; a memory; and one or more computer program modules, wherein the one or more computer program modules are stored in the memory and configured to be executed by the processor, and the one or more computer program modules comprise instructions for executing the display processing method provided by any embodiment of the present disclosure.
At least one embodiment of the present disclosure further provides an electronic device, comprising the display processing apparatus provided by any embodiment of the present disclosure and a display screen; upon receiving an instruction to display the object to be displayed, the display screen is configured to receive the display sub-models of the plurality of sub-models from the display processing apparatus and display them, so as to display the object to be displayed.
At least one embodiment of the present disclosure further provides a storage medium that non-transitorily stores computer-readable instructions, and when the computer-readable instructions are executed by a computer, the display processing method provided by any embodiment of the present disclosure can be executed.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings of the embodiments are briefly introduced below. Obviously, the drawings described below relate only to some embodiments of the present disclosure, rather than limiting the present disclosure.
FIG. 1A is a rendering of a display model after face reduction;
FIG. 1B is a flowchart of an example of a display processing method provided by at least one embodiment of the present disclosure;
FIG. 2A is a schematic diagram of a three-dimensional human body model provided by at least one embodiment of the present disclosure;
FIG. 2B is a flowchart of determining the number of display faces of a sub-model provided by at least one embodiment of the present disclosure;
FIG. 2C is a schematic diagram of an original three-dimensional human body model;
FIG. 2D is a display model obtained after face reduction is performed on the three-dimensional human body model shown in FIG. 2C by using the display processing method provided by at least one embodiment of the present disclosure;
FIG. 2E is a schematic diagram of an original model of the sphenoid bone of the human body;
FIG. 2F is a schematic diagram of the sphenoid bone shown in FIG. 2E after face reduction is performed by using the display processing method provided by at least one embodiment of the present disclosure;
FIG. 3 is a flowchart of another display processing method provided by at least one embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for displaying an object to be displayed provided by at least one embodiment of the present disclosure;
FIG. 5A is a system flowchart of a display processing method provided by at least one embodiment of the present disclosure;
FIG. 5B is a system flowchart of a specific implementation example of the display processing method shown in FIG. 5A;
FIG. 6 is a schematic block diagram of a display processing apparatus provided by at least one embodiment of the present disclosure;
FIG. 7 is a schematic block diagram of another display processing apparatus provided by at least one embodiment of the present disclosure;
FIG. 8 is a schematic block diagram of still another display processing apparatus provided by at least one embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of an electronic device provided by at least one embodiment of the present disclosure; and
FIG. 10 is a schematic diagram of a storage medium provided by at least one embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure are described clearly and completely below with reference to the drawings of the embodiments of the present disclosure. Obviously, the described embodiments are some, rather than all, of the embodiments of the present disclosure. Based on the described embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present disclosure.
Unless otherwise defined, the technical or scientific terms used in the present disclosure shall have the ordinary meaning understood by a person with ordinary skill in the field to which the present disclosure belongs. The words "first", "second", and similar words used in the present disclosure do not indicate any order, quantity, or importance, but are only used to distinguish different components. Likewise, words such as "a", "an", or "the" do not indicate a limitation on quantity, but rather indicate the existence of at least one. Words such as "comprise" or "include" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connect" or "couple" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Up", "down", "left", "right", and the like are only used to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
A three-dimensional model of the human body is usually built and displayed to let people understand the human body as a whole and the structure of its parts. In the modeling process with three-dimensional software, polygons (for example, triangles) are generally used to construct the three-dimensional human body model, and the surface of a model created in this way comprises a plurality of polygons. The more polygons there are, the more faces the model has; the more faces, the larger the data amount and the slower the processing speed of the system. To reduce the data processing load of the system and increase its processing speed, current face-reduction methods for three-dimensional human body models mainly include two kinds: the first method is to manually optimize the model directly in the three-dimensional software to achieve face reduction; the second method is to set a preset value for the number of display faces of the model, so as to achieve automatic face reduction in the three-dimensional software.
However, the above two face-reduction methods have the following respective defects. Although the first method is simple in steps, it requires manual face reduction, which consumes considerable manpower and time, and for complex three-dimensional models the manual operation is also very difficult. Although the second method is faster, it can only control the number of display faces of the model; other controllable data of the model (for example, requirements on display detail) are limited, the expected effect cannot be achieved after face reduction, and the display effect is poor. FIG. 1A is a rendering of a display model after face reduction. When too many faces are removed from the three-dimensional human body model, the situation shown in FIG. 1A occurs; for example, depending on the degree of face reduction, gaps of different sizes appear at vertices that adjoin but are not welded on the reduced model. In addition, when the number of models is huge, it is difficult to perform face reduction on them.
At least one embodiment of the present disclosure provides a display processing method, comprising: obtaining a plurality of sub-models of a base model of an object to be displayed, each sub-model comprising a plurality of faces; obtaining display environment parameters of the object to be displayed; based on the display environment parameters, for each sub-model of the plurality of sub-models, determining a display level of detail of the sub-model and, based on the determined display level of detail, determining a number of display faces of the sub-model; and for each sub-model of the plurality of sub-models, determining a display sub-model of the sub-model based on the determined number of display faces.
Some embodiments of the present disclosure further provide a display processing apparatus and a storage medium corresponding to the above display processing method.
The display processing method provided by the above embodiments of the present disclosure can determine the number of display faces of each sub-model based on the display environment and the display level of detail, so that while the number of display faces of each sub-model is reduced, the controllable data of each sub-model (for example, the display level of detail) is increased, the display effect of the object to be displayed is improved, the data processing load of the system is reduced, and resource consumption is lowered.
The embodiments of the present disclosure and examples thereof are described in detail below with reference to the drawings.
At least one embodiment of the present disclosure provides a display processing method that can be applied, for example, to displaying a three-dimensional human body model. FIG. 1B is a flowchart of an example of a display processing method provided by at least one embodiment of the present disclosure. For example, the display processing method can be implemented in software, hardware, firmware, or any combination thereof, and loaded and executed by a processor in a device such as a mobile phone, a tablet computer, a laptop, a desktop computer, or a network server; it can reduce the number of display faces of the three-dimensional human body model while increasing the controllable data of the model, improving the display effect of the three-dimensional human body model.
For example, the display processing method is applicable to a computing device, which is any electronic device with a computing function, such as a mobile phone, a laptop, a tablet computer, a desktop computer, or a network server, which can load and execute the display processing method; the embodiments of the present disclosure do not limit this. For example, the computing device may comprise a central processing unit (CPU), a graphics processing unit (GPU), or another form of processing unit with data processing capability and/or instruction execution capability, as well as storage units; an operating system and application programming interfaces (for example, OpenGL (Open Graphics Library), Metal, etc.) are also installed on the computing device, and the display processing method provided by the embodiments of the present disclosure is implemented by running code or instructions. For example, the computing device may further comprise a display component, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a quantum-dot light-emitting diode (QLED) display, a projection component, or a VR head-mounted display device (for example, a VR helmet or VR glasses); the embodiments of the present disclosure do not limit this. For example, the display component can display the object to be displayed.
As shown in FIG. 1B, the display processing method comprises steps S110 to S140.
Step S110: obtain a plurality of sub-models of a base model of an object to be displayed, each sub-model comprising a plurality of faces.
Step S120: obtain display environment parameters of the object to be displayed.
Step S130: based on the display environment parameters, for each sub-model of the plurality of sub-models, determine a display level of detail of the sub-model, and determine the number of display faces of the sub-model based on the determined display level of detail.
Step S140: for each sub-model of the plurality of sub-models, determine a display sub-model of the sub-model based on the determined number of display faces.
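Steps S110 to S140 can be sketched as a small pipeline. The following is a minimal illustrative sketch, not the patented implementation: the dictionary data layout, the distance threshold, and the LOD ratios are assumptions made for the example, and the mesh decimation itself is only represented by a target face count.

```python
# Minimal sketch of steps S110-S140: split a base model into sub-models,
# choose a level of detail (LOD) per sub-model from the display environment,
# derive the number of display faces, and record the reduced display sub-model.
# Data layout, threshold, and ratios are illustrative assumptions.

LOD_RATIOS = {1: 0.10, 2: 0.40}  # LOD1 keeps 10% of the faces, LOD2 keeps 40%

def choose_lod(distance_to_screen, threshold=5.0):
    """Beyond the threshold (the first distance) use LOD1; closer, use LOD2."""
    return 1 if distance_to_screen > threshold else 2

def display_face_count(face_count, lod):
    """Step S130: number of display faces from the chosen display level."""
    return max(1, int(face_count * LOD_RATIOS[lod]))

def build_display_submodel(submodel, distance_to_screen):
    """Step S140: a real engine would decimate the mesh to the target count."""
    lod = choose_lod(distance_to_screen)
    return {"name": submodel["name"],
            "faces": display_face_count(submodel["faces"], lod),
            "lod": lod}

# Steps S110/S120: sub-models and a display distance would come from the engine.
submodels = [{"name": "liver", "faces": 1000}, {"name": "sphenoid", "faces": 10189}]
display_models = [build_display_submodel(m, distance_to_screen=8.0) for m in submodels]
```

With a display distance of 8.0 (beyond the assumed threshold), both sub-models are reduced with the 10% first display level.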
Regarding step S110, for example, in some examples, the object to be displayed is a human body, and the base model is a three-dimensional human body model (as shown in FIG. 2A). Of course, the object to be displayed may also be another displayable animal, plant, object, and so on; the embodiments of the present disclosure do not limit this. The following description takes the case where the object to be displayed is a human body and the base model is a three-dimensional human body model as an example, without limiting the embodiments of the present disclosure.
For example, in some examples, after the electronic device is turned on and a display interface for presenting human body information is entered (for example, a digital human body APP is opened), the optimized three-dimensional human body model (that is, the display model) is presented on the electronic device.
For example, in some examples, the base model of the object to be displayed is imported into an image rendering engine (that is, a renderer, for example comprising a two-dimensional or three-dimensional image engine, e.g., a graphics processing unit (GPU)), and the base model of the object to be displayed is split into a plurality of sub-models by the image rendering engine. For example, the image rendering engine provides the plurality of sub-models to the first obtaining unit described below, so that the sub-models can be processed in subsequent steps to determine their display sub-models. For example, the image rendering engine comprises the Unreal Engine 4 (UE4). The following description takes UE4 as an example of the image rendering engine, without limiting the embodiments of the present disclosure.
It should be noted that the base model of the object to be displayed may also be imported into another type of engine, such as a physics engine, a script engine, or a network engine, for splitting, according to the functions to be implemented; the embodiments of the present disclosure do not limit this.
For example, the base model is a three-dimensional human body model, which comprises human organ category information, human system category information, or human body parameter information. For example, classified by organ, the three-dimensional human body model may comprise three-dimensional models of individual organs such as the heart, liver, spleen, lungs, and kidneys; classified by human body system, it may comprise three-dimensional models of systems such as the circulatory, digestive, respiratory, reproductive, and immune systems; it may also be classified by region, comprising three-dimensional models of local systems such as the head, chest, and upper and lower limbs.
For example, the three-dimensional model of the liver is split into a plurality of elements that cannot be further split; the plurality of sub-models of the three-dimensional liver model correspond one-to-one to the plurality of elements, that is, one sub-model is formed for each split element.
For example, each sub-model comprises a plurality of faces. Since polygons (for example, triangles) are generally used to construct the three-dimensional human body model in the modeling process with three-dimensional software, and the surface of a model created in this way comprises a plurality of polygons, the surface of each of the split sub-models also comprises a plurality of polygons, that is, each sub-model comprises a plurality of faces.
In the following steps, face reduction is optimized for each sub-model separately, that is, the number of display faces of each sub-model is determined separately to determine its display sub-model, thereby reducing the number of display faces of the final three-dimensional human body model.
For example, a first obtaining unit may be provided, and the plurality of sub-models of the base model of the object to be displayed may be obtained through the first obtaining unit, for example from the image rendering engine; for example, the first obtaining unit may also be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field programmable gate array (FPGA), or another form of processing unit with data processing capability and/or instruction execution capability, together with corresponding computer instructions. For example, the processing unit may be a general-purpose processor or a dedicated processor, such as a processor based on the X86 or ARM architecture.
For example, after the image rendering engine splits the base model of the object to be displayed into a plurality of sub-models, the plurality of sub-models of the object to be displayed are obtained from the image rendering engine (for example, UE4) through a plug-in, and the display sub-models of the plurality of sub-models are determined respectively through the plug-in, that is, corresponding instructions are executed in the plug-in to implement the subsequent steps S120-S140. For example, in some examples, the plug-in may be developed for the image rendering engine to implement the corresponding functions.
Regarding step S120, for example, the display environment parameters of the object to be displayed include the position and angle at which the display model of the object needs to be presented in the display interface, its distance from the display screen, and so on.
For example, the display screen is equivalent to a virtual camera, and the distance of the display model of the object from the display screen is the distance of the display model from the virtual camera in three-dimensional space.
For example, the display environment parameters of the object to be displayed include: when displaying the sub-model at a close distance, that is, when displaying the sub-model enlarged, the position at which the display model needs to be presented in the display interface is close to the display screen, that is, the distance between the display model and the display screen is small, recorded for example as the second distance; when displaying the sub-model at a far distance, that is, when displaying the sub-model reduced, the position at which the display model needs to be presented in the display interface is far from the display screen, that is, the distance between the display model and the display screen is large, recorded for example as the first distance (for example, the first distance is greater than the second distance). For details, refer to the description of FIG. 2B below, which is not repeated here.
For example, a second obtaining unit may be provided, and the display environment parameters of the object to be displayed may be obtained through the second obtaining unit; for example, the second obtaining unit may also be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field programmable gate array (FPGA), or another form of processing unit with data processing capability and/or instruction execution capability, together with corresponding computer instructions.
Regarding step S130, for example, the display level of detail can be obtained through the LOD (Levels of Detail) technique. The LOD technique decides the allocation of rendering resources for an object according to the position and importance of the object model's nodes in the display environment, reducing the face count and level of detail of non-important objects and increasing those of important objects, thereby achieving highly efficient rendering. For example, LOD1 is the first display level, LOD2 is the second display level, LOD3 is the third display level, and so on.
For example, for each sub-model of the plurality of sub-models, the display level of detail of the sub-model is determined based on the display environment parameters obtained in step S120.
FIG. 2B is a flowchart of a method for determining the number of display faces of a sub-model provided by at least one embodiment of the present disclosure. That is, FIG. 2B is a flowchart of some examples of step S130 shown in FIG. 1B. For example, in the example shown in FIG. 2B, the determining method comprises steps S131 to S133. The display processing method provided by at least one embodiment of the present disclosure is described in detail below with reference to FIG. 2B.
Step S131: for each sub-model of the plurality of sub-models, divide the display level of detail of the sub-model into a first display level and a second display level.
Step S132: when displaying the sub-model at a first distance, use the first display level to determine the number of display faces of the sub-model.
Step S133: when displaying the sub-model at a second distance, use the second display level to determine the number of display faces of the sub-model.
For example, the first distance and the second distance represent distances from the sub-model to the display screen, and the first distance is greater than the second distance.
Regarding step S131, for example, in some examples, the first display level and the second display level respectively represent the percentage of the sub-model's number of display faces (that is, the face count of the display sub-model) relative to the sub-model's original face count. For example, in some examples, the second display level is greater than the first display level, that is, the number of display faces corresponding to the second display level is greater than the number of display faces corresponding to the first display level; therefore, the display sub-model determined with the second display level presents more detail and is more suitable for close-range display (for example, enlarged display), while the display sub-model determined with the first display level presents less detail and is more suitable for long-range display (for example, reduced display). For example, the second display level is 40% (for example, if the sub-model has 1000 faces, the number of display faces (the face count of the display sub-model) is 400), and the first display level is 10% (for example, if the sub-model has 1000 faces, the number of display faces is 100).
It should be noted that the display level of detail may be further divided into a third display level, a fourth display level, and so on; the embodiments of the present disclosure do not limit this.
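The note above, that the scheme may be extended with a third and fourth display level, can be made concrete with distance bands: farther bands map to levels that keep a smaller fraction of the faces. The following sketch is an illustrative assumption only; the band edges and ratios are not values from the disclosure.

```python
# Generalizing the two-level scheme: partition the distance axis into bands,
# with farther bands keeping a smaller fraction of the faces.
# THRESHOLDS and KEEP_RATIOS are assumed values for illustration.
import bisect

THRESHOLDS = [2.0, 5.0, 10.0]           # band edges: distance to the display screen
KEEP_RATIOS = [1.00, 0.40, 0.10, 0.05]  # nearest band keeps all faces

def faces_for_distance(face_count, distance):
    """Pick the distance band, then scale the original face count by its ratio."""
    band = bisect.bisect_left(THRESHOLDS, distance)
    return int(face_count * KEEP_RATIOS[band])
```

For a 1000-face sub-model, distances of 3.0 and 7.0 would fall in the 40% and 10% bands, reproducing the two-level example in the text.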
Regarding step S132, for example, when displaying the sub-model at the first distance, that is, at a far distance, a lower display level (for example, the first display level) is used to determine the number of display faces of the sub-model, so that a display sub-model with fewer display faces can be determined.
For example, a preset value of the first display level is set in the system, and when the sub-model is displayed at the first distance, the preset value of the first display level is invoked to set the number of display faces of the sub-model.
Regarding step S133, for example, when displaying the sub-model at the second distance, that is, at a close distance, a higher display level (for example, the second display level) is used to determine the number of display faces of the sub-model, so that a display sub-model with more display faces can be determined; that is, the number of display faces of the sub-model determined with the first display level is smaller than the number determined with the second display level.
For example, a preset value of the second display level is set in the system, and when the sub-model is displayed at the second distance, the preset value of the second display level is invoked to set the number of display faces of the sub-model.
For example, when an instruction to display the sub-model enlarged is received, for example when a finger or a mouse clicks the button for enlarging the sub-model on the display interface of the electronic device (for example, the interface of a digital human body APP), the display sub-model determined by the number of display faces obtained in step S133 is displayed; when an instruction to display the sub-model reduced is received, for example when a finger or a mouse clicks the button for reducing the sub-model on the display interface of the electronic device, the display sub-model determined by the number of display faces obtained in step S132 is displayed.
For example, a preset distance threshold can be set according to the actual situation: when the display distance (the distance from the sub-model to the display screen) is greater than the preset threshold, the sub-model is displayed at the first distance; when the display distance is smaller than the preset threshold, the sub-model is displayed at the second distance; the embodiments of the present disclosure do not limit this.
The display level of detail is determined according to the display environment parameters, for example, the distance from the sub-model's display position to the display screen. For example, when the sub-model is displayed enlarged (displayed at the second distance), the larger second display level is used to determine the number of display faces, so that the sub-model can be displayed in more detail and the display effect is improved; when the sub-model is displayed reduced (displayed at the first distance), the smaller first display level is used to determine the number of display faces, which can reduce the face count and level of detail of non-important models, thereby reducing the system's data processing load, lowering resource consumption, and improving rendering efficiency.
For example, a display face number determining unit may be provided, and the number of display faces of the sub-model may be determined through it; for example, the display face number determining unit may also be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field programmable gate array (FPGA), or another form of processing unit with data processing capability and/or instruction execution capability, together with corresponding computer instructions.
Regarding step S140, for example, for each sub-model of the plurality of sub-models, the display sub-model of the sub-model is determined based on the determined number of display faces, using modeling methods known in the art.
For example, the display sub-model is the model after optimized face reduction, for example also a three-dimensional model; displaying it on the display interface of the electronic device can improve the display effect of the object to be displayed while reducing the number of display faces of each sub-model, reducing the system's data processing load and lowering resource consumption.
For example, a display sub-model determining unit may be provided, and the display sub-model of the sub-model may be determined through it; for example, the display sub-model determining unit may also be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field programmable gate array (FPGA), or another form of processing unit with data processing capability and/or instruction execution capability, together with corresponding computer instructions.
FIG. 3 is a flowchart of another display processing method provided by at least one embodiment of the present disclosure. As shown in FIG. 3, the display processing method further comprises steps S150 to S180.
Step S150: retain the sub-texture-mapping information of the sub-model that is consistent with the texture-mapping information of the base model.
For example, for each sub-model of the plurality of sub-models, the sub-model comprises a plurality of pieces of sub-texture-mapping information UV0, UV1, UV2, and so on. For example, the texture of the sub-model includes information such as its color and brightness, and the sub-texture-mapping information includes the coordinates, on the texture image, of the texture of each point of the sub-model, so that the texture of each point of the image can be accurately mapped onto the surface of the three-dimensional model according to those coordinates, and the texture of the three-dimensional model can be displayed correctly.
When the faces of the sub-model (for example, a hexahedron) are unfolded into a two-dimensional plane, there are multiple ways to unfold them; for example, the ordering of the six unfolded faces may be 123456 or 612345, as long as the texture-mapping information of each face corresponds to its relative position in three-dimensional space (that is, the texture-mapping information of face 1 corresponds to face 1 of the three-dimensional model, face 2 to face 2, ..., and face 6 to face 6). Therefore, different unfolding methods (and different orderings) correspond to different texture-mapping coordinates, so the sub-model comprises a plurality of pieces of sub-texture-mapping information.
For example, among the plurality of pieces of sub-texture-mapping information UV0, UV1, UV2, ... comprised in the sub-model, the sub-texture-mapping information UV0 is consistent with the texture-mapping information of the base model, while the rest may be inconsistent; therefore, the sub-texture-mapping information UV0 of the sub-model that is consistent with the texture-mapping information of the base model is retained for the subsequent display of the display sub-model.
Step S160: delete the other sub-texture-mapping information except the retained sub-texture-mapping information.
For example, in some examples, since in most cases multiple sets of sub-texture-mapping information are not needed when creating the display sub-model and only one set consistent with the base model is required, the other sub-texture-mapping information except the retained set is deleted to reduce the data size of the file and increase the response speed of the system.
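Steps S150 and S160 amount to a small filter over the sub-model's UV sets. The following sketch is an illustrative assumption: the dictionary layout and the example coordinates stand in for real engine data, and equality with the base model's texture-mapping information is used as the consistency test.

```python
# Sketch of steps S150-S160: of a sub-model's several UV sets (UV0, UV1, ...),
# keep only the set(s) consistent with the base model's texture-mapping info
# and delete the rest, shrinking the file. Data layout is an assumed stand-in.

def prune_uv_sets(submodel, base_uv):
    """Retain UV sets equal to base_uv (step S150); drop the others (step S160)."""
    submodel["uv_sets"] = {name: uv for name, uv in submodel["uv_sets"].items()
                           if uv == base_uv}
    return submodel

liver = {"name": "liver",
         "uv_sets": {"UV0": [(0.1, 0.2), (0.3, 0.4)],   # matches the base model
                     "UV1": [(0.5, 0.2), (0.3, 0.9)]}}  # alternate unfolding
pruned = prune_uv_sets(liver, base_uv=[(0.1, 0.2), (0.3, 0.4)])
```

After pruning, only UV0 remains, which is the set used for the subsequent display of the display sub-model.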
Step S170: for each sub-model of the plurality of sub-models, modify the name of the sub-model to keep it consistent with the name of the base model corresponding to the sub-model.
For example, when the image rendering engine splits the base model into a plurality of sub-models for editing, it may change the names of the individual sub-models; as a result, after the sub-models are exported from the engine, the names will not correspond when other software performs subsequent operations, and the corresponding operations cannot be performed. To avoid this situation, for each sub-model of the plurality of sub-models, the name of the sub-model is modified to keep it consistent with the name of the base model corresponding to the sub-model.
For example, suppose the liver is one of the sub-models and its name in the base model is "liver"; then the name of that sub-model should also be "liver".
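Step S170 can be sketched as a lookup that restores each sub-model's name from the base model before export. The id field, the suffix scheme in the example names, and the mapping itself are assumptions made for illustration, not details from the disclosure.

```python
# Sketch of step S170: the engine may rename sub-models while splitting and
# editing (e.g. by appending suffixes), so each name is restored from a
# base-model lookup before export. The id/suffix scheme is assumed.

def restore_names(submodels, base_names):
    """base_names maps a sub-model id back to its name in the base model."""
    for sm in submodels:
        sm["name"] = base_names[sm["id"]]
    return submodels

split_output = [{"id": 7, "name": "liver_LOD1_edit"},   # renamed by the engine
                {"id": 8, "name": "spleen_001"}]
restored = restore_names(split_output, base_names={7: "liver", 8: "spleen"})
```

After restoration, the exported sub-models carry the same names as in the base model, so downstream software can match them.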
Step S180: display the object to be displayed by using the display sub-models of the plurality of sub-models.
For example, the display sub-model is the model after optimized face reduction; displaying it on the display interface of the electronic device can improve the display effect of the object to be displayed while reducing the number of display faces of each sub-model, reducing the system's data processing load and lowering resource consumption.
For example, the display processing of the individual sub-models (for example, the above steps S120-S170) can be performed in parallel, which can increase the speed of the display processing method, reduce the time consumed, and improve display processing efficiency.
FIG. 4 is a flowchart of a method for displaying an object to be displayed provided by at least one embodiment of the present disclosure. That is, FIG. 4 is a flowchart of some examples of step S180 shown in FIG. 3. For example, in the example shown in FIG. 4, the method comprises steps S181 to S183. The method for displaying an object to be displayed provided by at least one embodiment of the present disclosure is described in detail below with reference to FIG. 4.
Step S181: import the display sub-models of the plurality of sub-models into three-dimensional software.
For example, after the final display sub-models are obtained in step S170, the display sub-models are output to three-dimensional software for merging. For example, the three-dimensional software may include 3ds Max, Maya, Cinema 4D, ZBrush, and so on.
Step S182: merge the display sub-models of the plurality of sub-models in the three-dimensional software to obtain the display model of the base model.
For example, the display sub-models of the sub-models corresponding to the plurality of elements are merged in the three-dimensional software to obtain the display model of the base model, for example the display model of the liver, or the display model of the human body. For example, the display model is also a three-dimensional model.
For example, the display sub-models of the sub-models can be merged by merging methods known in the art; the embodiments of the present disclosure do not limit this, and details are not repeated here.
For example, the base model has a very large number of faces, and the display model is the three-dimensional human body model of the object to be displayed after optimized face reduction; its number of display faces is far smaller than the face count of the base model, so the system's data processing load can be reduced and resource consumption lowered.
Step S183: display the display model to display the object to be displayed.
For example, displaying the display model on the display interface of the electronic device can improve the display effect of the object to be displayed while reducing the number of display faces of each sub-model, reducing the system's data processing load and lowering resource consumption.
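The merging in step S182 can be sketched at its core as concatenating vertex lists and re-indexing each sub-model's faces by its vertex offset. This is a hedged, minimal form: real three-dimensional packages such as 3ds Max or Maya also weld seams and merge materials, and the dictionary mesh layout here is an assumption for illustration.

```python
# Minimal sketch of the merge in step S182: concatenate vertices and shift
# each sub-model's face indices by the running vertex offset.
# The {"vertices", "faces"} layout is an assumed simplification.

def merge_submodels(submodels):
    vertices, faces = [], []
    for sm in submodels:
        offset = len(vertices)                     # indices of sm shift by this
        vertices.extend(sm["vertices"])
        faces.extend(tuple(i + offset for i in f) for f in sm["faces"])
    return {"vertices": vertices, "faces": faces}

# Two single-triangle display sub-models merged into one display model.
a = {"vertices": [(0, 0, 0), (1, 0, 0), (0, 1, 0)], "faces": [(0, 1, 2)]}
b = {"vertices": [(0, 0, 1), (1, 0, 1), (0, 1, 1)], "faces": [(0, 1, 2)]}
merged = merge_submodels([a, b])
```

The second triangle's indices become (3, 4, 5) after the offset, so both faces reference the combined vertex list correctly.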
It should be noted that, in the embodiments of the present disclosure, the flow of the display processing method provided by the above embodiments of the present disclosure may comprise more or fewer operations, and these operations may be performed sequentially or in parallel. Although the flow of the display processing method described above comprises a plurality of operations that appear in a specific order, it should be clearly understood that the order of the operations is not limited. The display processing method described above may be executed once, or may be executed multiple times according to predetermined conditions.
The display processing method provided by the above embodiments of the present disclosure can determine the number of display faces of each sub-model based on the display environment and the display level of detail, so that while the number of display faces of each sub-model is reduced, the controllable data of each sub-model is increased, the display effect of the object to be displayed is improved, the data processing load of the system is reduced, and resource consumption is lowered.
FIG. 2C is a schematic diagram of the original model (that is, the model before face reduction) of a three-dimensional human body model; FIG. 2D is the display model obtained after face reduction is performed on the three-dimensional human body model shown in FIG. 2C by using the display processing method provided by at least one embodiment of the present disclosure; FIG. 2E is a schematic diagram of the original model of the sphenoid bone of the human body; and FIG. 2F is a schematic diagram of the sphenoid bone shown in FIG. 2E after face reduction is performed by using the display processing method provided by at least one embodiment of the present disclosure.
As shown in FIG. 2C, the unreduced three-dimensional human body model comprises, for example, 4,916,042 triangular faces; as shown in FIG. 2D, the display model after face reduction comprises 2,800,316 triangular faces; for example, parts of the model such as the muscles, bones, and bone connections have all undergone face reduction, while the remaining parts have not yet undergone the above display processing (that is, have not yet been reduced). As shown in FIG. 2E, the original model of the sphenoid bone comprises 10,189 triangular faces; as shown in FIG. 2F, the reduced sphenoid model comprises 1,527 triangular faces, about 15% of the base model, which basically reaches the face-reduction limit.
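The reduction ratios reported for FIGS. 2C-2F follow directly from the triangle counts in the text; the short check below only restates that arithmetic.

```python
# Reduction ratios from the triangle counts reported for FIGS. 2C-2F:
# whole-body model: 4,916,042 -> 2,800,316 triangles (partial reduction),
# sphenoid bone:       10,189 ->     1,527 triangles (near the stated limit).

body_ratio = 2800316 / 4916042      # fraction of the original faces remaining
sphenoid_ratio = 1527 / 10189       # about 0.15, matching the text's "about 15%"
```

The sphenoid figure confirms the roughly 15% ratio stated in the description, while the whole-body model, only partially reduced, retains about 57% of its faces.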
It can thus be seen that the display processing method provided by at least one embodiment of the present disclosure can not only achieve the maximum degree of face reduction to reduce the system's data processing load, but also does not affect the display effect of the display model, improving the display effect of the object to be displayed.
FIG. 5A is a system flowchart of a display processing method provided by at least one embodiment of the present disclosure; FIG. 5B is a system flowchart of a specific implementation example of the display processing method shown in FIG. 5A. The display processing method provided by at least one embodiment of the present disclosure is described in detail below with reference to FIG. 5A and FIG. 5B.
For example, as shown in FIG. 5A and FIG. 5B, first, the base model is imported into an engine; for example, the engine may be the UE4 engine. The base model is split into a plurality of sub-models in the UE4 engine; the names of the sub-models change, the number of UV sets (sub-texture-mapping information) increases, and the split sub-models are imported into a plug-in developed on the basis of UE4. For example, preset values controlling the number of display faces of the sub-models are set in the plug-in, for example a first display level LOD1 and a second display level LOD2, adding LOD grading; for example, when displaying a sub-model at the first distance, the first display level is used to determine the number of display faces of the sub-model, and when displaying it at the second distance, the second display level is used, so as to reduce the number of display faces of the sub-model, and the display sub-model is determined based on the sub-model's number of display faces; for details, refer to the description of steps S110-S130, which is not repeated here. For example, the names and the number of UV sets (texture-mapping information) of the individual sub-models are also set in the plug-in; for example, the sub-texture-mapping information of the sub-model that is consistent with the texture-mapping information of the base model is retained and the other sub-texture-mapping information is deleted, and for each sub-model of the plurality of sub-models, the name of the sub-model is modified to keep it consistent with the name of the base model corresponding to the sub-model; for details, refer to the description of steps S150-S170, which is not repeated here. Then, the display sub-models of the sub-models are output and imported into three-dimensional software, where the display sub-models of the plurality of sub-models are merged to obtain the display model of the base model; for details, refer to the description of steps S181-S183. Finally, the display model is output and displayed on the display interface of the electronic device to display the object to be displayed.
FIG. 6 is a schematic block diagram of a display processing apparatus provided by at least one embodiment of the present disclosure. For example, in the example shown in FIG. 6, the display processing apparatus 100 comprises a first obtaining unit 110, a second obtaining unit 120, a display face number determining unit 130, and a display sub-model determining unit 140. For example, these units can be implemented by hardware (for example, circuit) modules or software modules; the same applies to the following embodiments, and details are not repeated. For example, these units can be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field programmable gate array (FPGA), or another form of processing unit with data processing capability and/or instruction execution capability, together with corresponding computer instructions.
The first obtaining unit 110 is configured to obtain a plurality of sub-models of the base model of the object to be displayed. For example, each sub-model comprises a plurality of faces. For example, the first obtaining unit 110 can implement step S110; for its specific implementation, refer to the description of step S110, which is not repeated here.
The second obtaining unit 120 is configured to obtain the display environment parameters of the object to be displayed. For example, the second obtaining unit 120 can implement step S120; for its specific implementation, refer to the description of step S120, which is not repeated here.
The display face number determining unit 130 is configured to, based on the display environment parameters, for each sub-model of the plurality of sub-models, determine the display level of detail of the sub-model and, based on the determined display level of detail, determine the number of display faces of the sub-model. For example, the display face number determining unit 130 can implement step S130; for its specific implementation, refer to the description of step S130, which is not repeated here.
The display sub-model determining unit 140 is configured to, for each sub-model of the plurality of sub-models, determine the display sub-model of the sub-model based on the determined number of display faces. For example, the display sub-model determining unit 140 can implement step S140; for its specific implementation, refer to the description of step S140, which is not repeated here.
For example, in some examples, the display face number determining unit 130 is further configured to: for each sub-model of the plurality of sub-models, divide the display level of detail of the sub-model into a first display level and a second display level; when displaying the sub-model at a first distance, use the first display level to determine the number of display faces of the sub-model; and when displaying the sub-model at a second distance, use the second display level to determine the number of display faces of the sub-model. For example, the second display level is greater than the first display level; the first distance and the second distance represent distances from the sub-model to the display screen, and the first distance is greater than the second distance.
FIG. 7 is a schematic diagram of another display processing apparatus provided by at least one embodiment of the present disclosure. For example, as shown in FIG. 7, on the basis of the example shown in FIG. 6, the display processing apparatus 100 further comprises a texture-mapping information determining unit 150, a name determining unit 160, and a display unit 170.
For example, in some examples, the texture-mapping information determining unit 150 is configured to retain the sub-texture-mapping information of the sub-model that is consistent with the texture-mapping information of the base model, and delete the other sub-texture-mapping information except the retained sub-texture-mapping information. For example, the texture-mapping information determining unit 150 can implement steps S150 and S160; for its specific implementation, refer to the description of steps S150 and S160, which is not repeated here.
For example, the name determining unit 160 is configured to, for each sub-model of the plurality of sub-models, modify the name of the sub-model to keep it consistent with the name of the base model corresponding to the sub-model. For example, the name determining unit 160 can implement step S170; for its specific implementation, refer to the description of step S170, which is not repeated here.
For example, the display unit 170 is configured to display the object to be displayed by using the display sub-models of the plurality of sub-models. For example, the display unit 170 can implement step S180; for its specific implementation, refer to the description of step S180, which is not repeated here. For example, in some examples, the display unit 170 may be a display screen in the electronic device, such as a liquid crystal display or an organic light-emitting diode display; the embodiments of the present disclosure do not limit this.
It should be noted that, in the embodiments of the present disclosure, the display processing apparatus 100 may comprise more or fewer circuits or units, and the connection relationships between the individual circuits or units are not limited and can be determined according to actual needs. The specific construction of each circuit is not limited; it can be composed of analog devices according to circuit principles, of digital chips, or in another suitable way.
FIG. 8 is a schematic block diagram of still another display processing apparatus provided by at least one embodiment of the present disclosure. For example, as shown in FIG. 8, the display processing apparatus 200 comprises a processor 210, a memory 220, and one or more computer program modules 221.
For example, the processor 210 and the memory 220 are connected through a bus system 230. For example, the one or more computer program modules 221 are stored in the memory 220. For example, the one or more computer program modules 221 comprise instructions for executing the display processing method provided by any embodiment of the present disclosure. For example, the instructions in the one or more computer program modules 221 can be executed by the processor 210. For example, the bus system 230 may be a common serial or parallel communication bus; the embodiments of the present disclosure do not limit this.
For example, the processor 210 may be a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), or another form of processing unit with data processing capability and/or instruction execution capability; it may be a general-purpose processor or a dedicated processor, and can control other components in the display processing apparatus 200 to perform desired functions.
The memory 220 may comprise one or more computer program products, which may comprise various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 210 can run the program instructions to implement the functions (implemented by the processor 210) in the embodiments of the present disclosure and/or other desired functions, such as the display processing method. Various applications and various data, such as display environment parameters, display levels of detail, and various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
It should be noted that, for clarity and conciseness, the embodiments of the present disclosure do not present all the constituent units of the display processing apparatus 200. To realize the necessary functions of the display processing apparatus 200, those skilled in the art may provide and set other constituent units not shown according to specific needs; the embodiments of the present disclosure do not limit this.
Regarding the technical effects of the display processing apparatus 100 and the display processing apparatus 200 in the different embodiments, refer to the technical effects of the display processing method provided in the embodiments of the present disclosure, which are not repeated here.
The display processing apparatus 100 and the display processing apparatus 200 can be used in various suitable electronic devices. FIG. 9 is a schematic structural diagram of an electronic device provided by at least one embodiment of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
For example, the electronic device comprises the display processing apparatus 100/200 provided by any embodiment of the present disclosure and a display screen (for example, the output device 307 shown in FIG. 9); upon receiving an instruction to display the object to be displayed, the display screen is configured to receive the display sub-models of the plurality of sub-models from the display processing apparatus and display them, so as to display the object to be displayed. For example, the display sub-models of the plurality of sub-models are imported into three-dimensional software. For example, the display sub-models of the sub-models corresponding to the plurality of elements are merged in the three-dimensional software to obtain the display model of the base model, for example the display model of the liver, or the display model of the human body. For example, the display model is also a three-dimensional model.
For example, the display screen receiving the display sub-models of the plurality of sub-models from the display processing apparatus and displaying them includes receiving, from the display processing apparatus, the display model formed by the above merging, so as to present the final display model of the object to be displayed on the display screen.
For example, as shown in FIG. 9, in some examples, the electronic device 300 comprises a processing device (for example, a central processing unit, a graphics processing unit, etc.) 301, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the computer system are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
For example, the following components can be connected to the I/O interface 305: an input device 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output device 307 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage device 308 including, for example, a magnetic tape and a hard disk; and a communication device 309 including a network interface card such as a LAN card or a modem. The communication device 309 can allow the electronic device 300 to perform wireless or wired communication with other devices to exchange data, and perform communication processing via a network such as the Internet. A drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 310 as needed, so that a computer program read from it can be installed into the storage device 308 as needed. Although FIG. 9 shows the electronic device 300 comprising various devices, it should be understood that it is not required to implement or include all of the devices shown; more or fewer devices may alternatively be implemented or included.
For example, the electronic device 300 may further comprise a peripheral interface (not shown in the figure), and so on. The peripheral interface may be of various types, such as a USB interface or a Lightning interface. The communication device 309 can communicate with networks and other devices through wireless communication; the network is, for example, the Internet, an intranet, and/or a wireless network such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN). The wireless communication can use any of a variety of communication standards, protocols, and technologies, including but not limited to the Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wi-Fi (for example, based on the IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n standards), Voice over Internet Protocol (VoIP), Wi-MAX, protocols for e-mail, instant messaging, and/or Short Message Service (SMS), or any other suitable communication protocol.
For example, the electronic device may be any device such as a mobile phone, a tablet computer, a laptop, an e-book reader, a game console, a television, a digital photo frame, or a navigator, or any combination of electronic devices and hardware; the embodiments of the present disclosure do not limit this.
For example, according to an embodiment of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, an embodiment of the present disclosure comprises a computer program product, which comprises a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. When the computer program is executed by the processing device 301, the above display processing functions defined in the method of the embodiments of the present disclosure are executed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by, or in combination with, an instruction execution system, apparatus, or device. In the embodiments of the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted by any appropriate medium, including but not limited to: a wire, an optical cable, RF (radio frequency), and so on, or any suitable combination of the above.
In some implementations, the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be contained in the above electronic device; it may also exist independently without being assembled into the electronic device.
The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: obtains at least two Internet Protocol addresses; sends to a node evaluation device a node evaluation request comprising the at least two Internet Protocol addresses, wherein the node evaluation device selects an Internet Protocol address from the at least two Internet Protocol addresses and returns it; and receives the Internet Protocol address returned by the node evaluation device; wherein the obtained Internet Protocol address indicates an edge node in a content distribution network.
Alternatively, the above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: receives a node evaluation request comprising at least two Internet Protocol addresses; selects an Internet Protocol address from the at least two Internet Protocol addresses; and returns the selected Internet Protocol address; wherein the received Internet Protocol address indicates an edge node in a content distribution network.
Computer program code for performing the operations of the present disclosure can be written in one or more programming languages or a combination thereof; the above programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
The functions described above herein can be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
In the embodiments of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by, or in combination with, an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
At least one embodiment of the present disclosure further provides a storage medium. FIG. 10 is a schematic diagram of a storage medium provided by at least one embodiment of the present disclosure. For example, as shown in FIG. 10, the storage medium 400 non-transitorily stores computer-readable instructions 401; when the non-transitory computer-readable instructions are executed by a computer (including a processor), the display processing method provided by any embodiment of the present disclosure can be executed.
For example, the storage medium may be any combination of one or more computer-readable storage media; for example, one computer-readable storage medium contains computer-readable program code for determining the number of display faces of the sub-model, and another computer-readable storage medium contains computer-readable program code for determining the display sub-model of the sub-model. For example, when the program code is read by a computer, the computer can execute the program code stored in the computer storage medium to perform, for example, the display processing method provided by any embodiment of the present disclosure.
For example, the storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disk read-only memory (CD-ROM), or flash memory, or any combination of the foregoing storage media; it may also be another suitable storage medium.
The following points need to be noted:
(1) The drawings of the embodiments of the present disclosure only relate to the structures involved in the embodiments of the present disclosure; for other structures, reference may be made to common designs.
(2) Without conflict, the embodiments of the present disclosure and the features in the embodiments can be combined with each other to obtain new embodiments.
The above are only exemplary embodiments of the present disclosure and are not intended to limit the protection scope of the present disclosure; the protection scope of the present disclosure is determined by the appended claims.

Claims (18)

  1. A display processing method, comprising:
    obtaining a plurality of sub-models of a base model of an object to be displayed, wherein each sub-model comprises a plurality of faces;
    obtaining display environment parameters of the object to be displayed;
    based on the display environment parameters, for each sub-model of the plurality of sub-models, determining a display level of detail of the sub-model, and based on the determined display level of detail of the sub-model, determining a number of display faces of the sub-model;
    for each sub-model of the plurality of sub-models, determining a display sub-model of the sub-model based on the determined number of display faces of the sub-model.
  2. The display processing method according to claim 1, wherein, based on the display environment parameters, for each sub-model of the plurality of sub-models, determining the display level of detail of the sub-model and, based on the determined display level of detail of the sub-model, determining the number of display faces of the sub-model comprises:
    for each sub-model of the plurality of sub-models, dividing the display level of detail of the sub-model into a first display level and a second display level, wherein the second display level is greater than the first display level;
    when displaying the sub-model at a first distance, using the first display level to determine the number of display faces of the sub-model;
    when displaying the sub-model at a second distance, using the second display level to determine the number of display faces of the sub-model;
    wherein the first distance and the second distance represent distances from the sub-model to a display screen, and the first distance is greater than the second distance.
  3. The display processing method according to claim 2, wherein the number of display faces of the sub-model determined using the first display level is smaller than the number of display faces of the sub-model determined using the second display level.
  4. The display processing method according to any one of claims 1-3, further comprising:
    importing the base model of the object to be displayed into an image rendering engine, and splitting the base model of the object to be displayed into the plurality of sub-models through the image rendering engine.
  5. The display processing method according to claim 4, wherein the image rendering engine comprises the Unreal Engine 4, and the display processing method further comprises:
    obtaining the plurality of sub-models of the object to be displayed from the Unreal Engine 4 through a plug-in, and determining the display sub-models of the plurality of sub-models respectively through the plug-in.
  6. The display processing method according to any one of claims 1-5, wherein, for each sub-model of the plurality of sub-models, the sub-model comprises a plurality of pieces of sub-texture-mapping information, and the display processing method further comprises:
    retaining the sub-texture-mapping information of the sub-model that is consistent with the texture-mapping information of the base model;
    deleting the other sub-texture-mapping information except the retained sub-texture-mapping information.
  7. The display processing method according to any one of claims 1-6, further comprising:
    for each sub-model of the plurality of sub-models, modifying the name of the sub-model to keep it consistent with the name of the base model corresponding to the sub-model.
  8. The display processing method according to any one of claims 1-7, further comprising:
    displaying the object to be displayed by using the display sub-models of the plurality of sub-models.
  9. The display processing method according to claim 8, wherein:
    the display sub-models of the plurality of sub-models are imported into three-dimensional software;
    the display sub-models of the plurality of sub-models are merged in the three-dimensional software to obtain a display model of the base model;
    the display model is displayed to display the object to be displayed.
  10. The display processing method according to any one of claims 1-9, wherein the object to be displayed is a human body, and the base model is a three-dimensional human body model.
  11. A display processing apparatus, comprising:
    a first obtaining unit configured to obtain a plurality of sub-models of a base model of an object to be displayed, wherein each sub-model comprises a plurality of faces;
    a second obtaining unit configured to obtain display environment parameters of the object to be displayed;
    a display face number determining unit configured to, based on the display environment parameters, for each sub-model of the plurality of sub-models, determine a display level of detail of the sub-model and, based on the determined display level of detail of the sub-model, determine a number of display faces of the sub-model;
    a display sub-model determining unit configured to, for each sub-model of the plurality of sub-models, determine a display sub-model of the sub-model based on the determined number of display faces of the sub-model.
  12. The display processing apparatus according to claim 11, wherein the display face number determining unit is further configured to:
    for each sub-model of the plurality of sub-models, divide the display level of detail of the sub-model into a first display level and a second display level, wherein the second display level is greater than the first display level;
    when displaying the sub-model at a first distance, use the first display level to determine the number of display faces of the sub-model;
    when displaying the sub-model at a second distance, use the second display level to determine the number of display faces of the sub-model;
    wherein the first distance and the second distance represent distances from the sub-model to a display screen, and the first distance is greater than the second distance.
  13. The display processing apparatus according to claim 11 or 12, wherein, for each sub-model of the plurality of sub-models, the sub-model comprises a plurality of pieces of sub-texture-mapping information, and the display processing apparatus further comprises:
    a texture-mapping information determining unit configured to retain the sub-texture-mapping information of the sub-model that is consistent with the texture-mapping information of the base model, and delete the other sub-texture-mapping information except the retained sub-texture-mapping information.
  14. The display processing apparatus according to any one of claims 11-13, further comprising:
    a name determining unit configured to, for each sub-model of the plurality of sub-models, modify the name of the sub-model to keep it consistent with the name of the base model corresponding to the sub-model.
  15. The display processing apparatus according to claim 14, further comprising:
    a display unit configured to display the object to be displayed by using the display sub-models of the plurality of sub-models.
  16. A display processing apparatus, comprising:
    a processor;
    a memory;
    one or more computer program modules, wherein the one or more computer program modules are stored in the memory and configured to be executed by the processor, and the one or more computer program modules comprise instructions for executing the display processing method according to any one of claims 1-10.
  17. An electronic device, comprising: the display processing apparatus according to any one of claims 11-16 and a display screen;
    wherein, upon receiving an instruction to display the object to be displayed, the display screen is configured to receive the display sub-models of the plurality of sub-models from the display processing apparatus and display them, so as to display the object to be displayed.
  18. A storage medium non-transitorily storing computer-readable instructions, wherein, when the computer-readable instructions are executed by a computer, the display processing method according to any one of claims 1-10 can be executed.
PCT/CN2020/073556 2020-01-21 2020-01-21 Display processing method, display processing apparatus, electronic device, and storage medium WO2021146930A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080000062.6A CN113498532B (zh) 2020-01-21 2020-01-21 Display processing method, display processing apparatus, electronic device, and storage medium
PCT/CN2020/073556 WO2021146930A1 (zh) 2020-01-21 2020-01-21 Display processing method, display processing apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073556 2020-01-21 2020-01-21 Display processing method, display processing apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021146930A1 true WO2021146930A1 (zh) 2021-07-29

Family

ID=76992797

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073556 WO2021146930A1 (zh) 2020-01-21 2020-01-21 Display processing method, display processing apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN113498532B (zh)
WO (1) WO2021146930A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114153516B (zh) * 2021-10-18 2022-12-09 深圳追一科技有限公司 Digital human display panel configuration method and apparatus, electronic device, and storage medium
CN114022616B (zh) * 2021-11-16 2023-07-07 北京城市网邻信息技术有限公司 Model processing method and apparatus, electronic device, and storage medium
CN113963127B (zh) * 2021-12-22 2022-03-15 深圳爱莫科技有限公司 Automatic model generation method based on a simulation engine, and processing device
CN114470766A (zh) * 2022-02-14 2022-05-13 网易(杭州)网络有限公司 Model anti-interpenetration method and apparatus, electronic device, and storage medium
CN116188686B (zh) * 2023-02-08 2023-09-08 北京鲜衣怒马文化传媒有限公司 Method, system, and medium for composing a low-face-count character model from locally face-reduced parts
CN116414316B (zh) * 2023-06-08 2023-12-22 北京掌舵互动科技有限公司 Unreal Engine rendering method based on BIM models in a digital city

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289839A (zh) * 2011-08-04 2011-12-21 天津中科遥感信息技术有限公司 Efficient multi-level-of-detail rendering method for three-dimensional digital cities
US20130194260A1 (en) * 2011-08-01 2013-08-01 Peter Kunath System for visualizing three dimensional objects or terrain
CN103914877A (zh) * 2013-01-09 2014-07-09 南京理工大学 Multi-level-of-detail structure for three-dimensional models based on extended merging
CN107590858A (zh) * 2017-08-21 2018-01-16 上海妙影医疗科技有限公司 Medical sample display method based on AR technology, computer device, and storage medium
CN110427532A (zh) * 2019-07-23 2019-11-08 中南民族大学 Greenhouse three-dimensional visualization method, apparatus, device, and storage medium


Also Published As

Publication number Publication date
CN113498532A (zh) 2021-10-12
CN113498532B (zh) 2024-01-26

Similar Documents

Publication Publication Date Title
WO2021146930A1 (zh) Display processing method, display processing apparatus, electronic device, and storage medium
US11344806B2 (en) Method for rendering game, and method, apparatus and device for generating game resource file
KR102663617B1 (ko) 증강 현실 객체의 조건부 수정
JP5960368B2 (ja) ビジビリティ情報を用いたグラフィックスデータのレンダリング
WO2021008627A1 (zh) 游戏角色渲染方法、装置、电子设备及计算机可读介质
CN114820905B (zh) 虚拟形象生成方法、装置、电子设备及可读存储介质
KR20140139553A (ko) 그래픽 프로세싱 유닛들에서 가시성 기반 상태 업데이트들
CN111882631B (zh) 一种模型渲染方法、装置、设备及存储介质
JP2022505118A (ja) 画像処理方法、装置、ハードウェア装置
CN110211017B (zh) 图像处理方法、装置及电子设备
US12013844B2 (en) Concurrent hash map updates
CN109766319B (zh) 压缩任务处理方法、装置、存储介质及电子设备
WO2022095526A1 (zh) 图形引擎和适用于播放器的图形处理方法
WO2017129105A1 (zh) 一种图形界面更新方法和装置
CN117523062B (zh) 光照效果的预览方法、装置、设备及存储介质
CN114049403A (zh) 一种多角度三维人脸重建方法、装置及存储介质
CN112807695A (zh) 游戏场景生成方法和装置、可读存储介质、电子设备
CN113744379B (zh) 图像生成方法、装置和电子设备
CN117557712A (zh) 渲染方法、装置、设备及存储介质
CN114898029A (zh) 对象渲染方法及装置、存储介质、电子装置
WO2023185476A1 (zh) 对象渲染方法、装置、电子设备、存储介质及程序产品
CN115953553B (zh) 虚拟形象生成方法、装置、电子设备以及存储介质
CN113487708B (zh) 基于图形学的流动动画实现方法、存储介质及终端设备
RU2810701C2 (ru) Гибридный рендеринг
WO2021043128A1 (zh) 粒子计算方法、装置、电子设备及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915024

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915024

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.03.2023)
