CN118037929A - Texture mapping three-dimensional reconstruction method and device based on deep learning and electronic equipment
- Publication number: CN118037929A
- Application number: CN202410333438.5A
- Authority: CN (China)
- Prior art keywords: dimensional; prosthesis; texture mapping; model; final model
- Legal status: Granted
Classifications

- G06T 15/04: Texture mapping (3D image rendering)
- G06N 3/0464: Convolutional networks [CNN, ConvNet]
- G06N 3/0475: Generative networks
- G06N 3/08: Learning methods (neural networks)
- G06T 1/20: Processor architectures; processor configuration, e.g. pipelining
- G06T 15/005: General purpose rendering architectures
- G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The application provides a texture mapping three-dimensional reconstruction method, device, equipment and computer-readable storage medium based on deep learning. The method comprises the following steps: transmitting the surface data of the bones and prostheses to be rendered to GPU memory; setting vertex shaders and fragment shaders corresponding to the bones and prostheses according to the rendering requirements of the surfaces; setting the relevant drawing state before drawing; sending drawing instructions to the GPU using a graphics API; the GPU processing the rendering computations of a plurality of surfaces in parallel with its large number of parallel computing units to obtain rendering results; respectively inputting the three-dimensional bone initial model and the three-dimensional prosthesis initial model into a preset texture mapping network model, and outputting the corresponding three-dimensional bone final model and three-dimensional prosthesis final model; and adjusting the pose of the three-dimensional prosthesis final model to realize the simulated installation of the three-dimensional bone final model and the three-dimensional prosthesis final model. Embodiments of the application can improve the efficiency and realism of the three-dimensional reconstruction of bones and prostheses.
Description
Technical Field
The application belongs to the field of simulated installation, and in particular relates to a texture mapping three-dimensional reconstruction method, device and equipment based on deep learning, and a computer-readable storage medium.
Background
Currently, three-dimensional reconstruction of bone and prosthesis is required prior to simulated installation of three-dimensional bone models and three-dimensional prosthesis models.
In the conventional art, bones and prostheses are reconstructed in three dimensions by volume rendering and surface rendering respectively, but both techniques are slow. Moreover, the models they reconstruct lack details such as textures, so their realism is low.
Therefore, how to improve the efficiency and realism of the three-dimensional reconstruction of bones and prostheses is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The embodiments of the application provide a texture mapping three-dimensional reconstruction method, device and equipment based on deep learning, and a computer-readable storage medium, which can improve the efficiency and realism of the three-dimensional reconstruction of bones and prostheses.
In a first aspect, an embodiment of the present application provides a texture mapping three-dimensional reconstruction method based on deep learning, including:
Transmitting the surface data of the bones and the prosthesis to be rendered to the memory of the GPU;
Setting vertex shaders and fragment shaders corresponding to the bones and prostheses according to the rendering requirements of the surfaces; wherein the vertex shader is responsible for calculating and transforming the vertex positions and normal attributes of the surface, and the fragment shader is responsible for shading each pixel;
Setting a related drawing state before drawing;
Sending a drawing instruction to the GPU using a graphics API;
Processing, by the GPU, the rendering computations of a plurality of surfaces in parallel using its large number of parallel computing units to obtain rendering results; wherein the rendering results comprise a three-dimensional bone initial model and a three-dimensional prosthesis initial model;
Respectively inputting the three-dimensional bone initial model and the three-dimensional prosthesis initial model into a preset texture mapping network model, and outputting a corresponding three-dimensional bone final model and a corresponding three-dimensional prosthesis final model; wherein the texture mapping network model comprises a generator network and a discriminator network;
And adjusting the pose of the three-dimensional prosthesis final model to realize the simulated installation of the three-dimensional bone final model and the three-dimensional prosthesis final model.
Further, the method comprises:
The generator network comprises 2 downsampling convolution blocks, each comprising a Conv3D layer, an instance normalization layer and a ReLU activation layer; these are followed by 9 residual blocks, each containing 2 convolutions whose output is added to the block's input; finally, two upsampling convolution blocks output the generated result.
Further, the method comprises:
The generated results output by the generator network are fed to the discriminator network, which comprises 3 hidden blocks, each comprising a Conv3D layer, an instance normalization layer and a Leaky ReLU activation layer.
Further, the method further comprises:
Calculating a cycle loss function, which measures the difference between an input image and the image obtained by converting it into a different class using the generator network and then restoring it to the original class using the generator network;
Calculating an identity loss function, which measures the difference between an input image and the image generated by applying the generator network to it;
Calculating an overall loss function based on the cycle loss function and the identity loss function.
Further, setting the relevant drawing state includes: depth test, illumination mode, texture binding.
Further, adjusting the pose of the three-dimensional prosthetic final model includes:
Adjusting the displacement and rotation angle of the three-dimensional prosthesis final model according to actual requirements.
Further, after the simulated installation of the three-dimensional bone final model and the three-dimensional prosthesis final model is achieved, further comprising:
Carrying out an omnidirectional slicing operation on the three-dimensional bone final model and the three-dimensional prosthesis final model from preset angles according to actual requirements to obtain slice images.
In a second aspect, an embodiment of the present application provides a texture mapping three-dimensional reconstruction device based on deep learning, including:
The data transmission module is used for transmitting the surface data of the bones and the prosthesis to be rendered into the memory of the GPU;
a shader setting module for setting vertex shaders and fragment shaders corresponding to the bones and prostheses according to the rendering requirements of the surfaces; wherein the vertex shader is responsible for calculating and transforming the vertex positions and normal attributes of the surface, and the fragment shader is responsible for shading each pixel;
a drawing state setting module for setting a relevant drawing state before drawing;
the drawing instruction sending module is used for sending drawing instructions to the GPU by using the graphics API;
The parallel rendering calculation module is used for the GPU to use a large number of parallel calculation units to process the rendering calculations of a plurality of surfaces in parallel to obtain a rendering result; wherein the rendering result comprises a three-dimensional bone initial model and a three-dimensional prosthesis initial model;
The texture mapping module is used for respectively inputting the three-dimensional bone initial model and the three-dimensional prosthesis initial model into a preset texture mapping network model and outputting a corresponding three-dimensional bone final model and a corresponding three-dimensional prosthesis final model; wherein the texture mapping network model comprises a generator network and a discriminator network;
The pose adjusting module is used for adjusting the pose of the three-dimensional prosthesis final model so as to realize the simulation installation of the three-dimensional bone final model and the three-dimensional prosthesis final model.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions;
the processor implements a deep learning based texture mapping three-dimensional reconstruction method when executing the computer program instructions.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions that, when executed by a processor, implement a deep learning based texture mapping three-dimensional reconstruction method.
The texture mapping three-dimensional reconstruction method, device, equipment and computer-readable storage medium based on deep learning of the application can improve the efficiency and realism of the three-dimensional reconstruction of bones and prostheses.
The texture mapping three-dimensional reconstruction method based on deep learning comprises the following steps: transmitting the surface data of the bones and prostheses to be rendered to GPU memory; setting vertex shaders and fragment shaders corresponding to the bones and prostheses according to the rendering requirements of the surfaces, wherein the vertex shader is responsible for calculating and transforming the vertex positions and normal attributes of the surface, and the fragment shader is responsible for shading each pixel; setting the relevant drawing state before drawing; sending a drawing instruction to the GPU using a graphics API; processing, by the GPU, the rendering computations of a plurality of surfaces in parallel using its large number of parallel computing units to obtain rendering results, wherein the rendering results comprise a three-dimensional bone initial model and a three-dimensional prosthesis initial model; respectively inputting the three-dimensional bone initial model and the three-dimensional prosthesis initial model into a preset texture mapping network model, and outputting the corresponding three-dimensional bone final model and three-dimensional prosthesis final model, wherein the texture mapping network model comprises a generator network and a discriminator network; and adjusting the pose of the three-dimensional prosthesis final model to realize the simulated installation of the three-dimensional bone final model and the three-dimensional prosthesis final model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application or of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present application, and that other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow diagram of a texture mapping three-dimensional reconstruction method based on deep learning according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a texture mapping network model according to one embodiment of the present application;
FIG. 3 is a schematic illustration of a three-dimensional spinal model provided in accordance with one embodiment of the present application;
FIG. 4 is a schematic structural diagram of a texture mapping three-dimensional reconstruction device based on deep learning according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings and the detailed embodiments. It should be understood that the particular embodiments described herein are meant to be illustrative of the application only and not limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the application by showing examples of the application.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In order to solve the problems in the prior art, the embodiment of the application provides a texture mapping three-dimensional reconstruction method, a device, equipment and a computer readable storage medium based on deep learning. The following first describes a texture mapping three-dimensional reconstruction method based on deep learning provided by the embodiment of the present application.
FIG. 1 is a schematic flow chart of a texture mapping three-dimensional reconstruction method based on deep learning according to an embodiment of the present application. As shown in FIG. 1, the texture mapping three-dimensional reconstruction method based on deep learning includes:
S101, transmitting the surface data of the bones and prostheses to be rendered into the memory of the GPU;
S102, setting vertex shaders and fragment shaders corresponding to the bones and prostheses according to the rendering requirements of the surfaces; wherein the vertex shader is responsible for calculating and transforming the vertex positions and normal attributes of the surface, and the fragment shader is responsible for shading each pixel;
S103, setting the relevant drawing state before drawing;
S104, sending a drawing instruction to the GPU using a graphics API;
S105, the GPU uses a large number of parallel computing units to process the rendering computations of a plurality of surfaces in parallel to obtain rendering results; wherein the rendering results comprise a three-dimensional bone initial model and a three-dimensional prosthesis initial model;
S106, respectively inputting the three-dimensional bone initial model and the three-dimensional prosthesis initial model into a preset texture mapping network model, and outputting the corresponding three-dimensional bone final model and three-dimensional prosthesis final model; wherein the texture mapping network model comprises a generator network and a discriminator network;
and S107, adjusting the pose of the three-dimensional prosthesis final model to realize the simulated installation of the three-dimensional bone final model and the three-dimensional prosthesis final model.
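For concreteness, the following is a minimal sketch of steps S101 to S105 in Python, using the third-party moderngl library as the graphics API binding (an assumption: the patent does not name an API or language, and the shader sources, vertex layout and framebuffer size here are illustrative only):

```python
# Hypothetical sketch of S101-S105: upload surface data to GPU memory, set the
# shaders and drawing state, issue a draw call, and let the GPU render in
# parallel. All names and layouts are illustrative, not from the patent.
import numpy as np
import moderngl

ctx = moderngl.create_standalone_context()   # offscreen GPU context
ctx.enable(moderngl.DEPTH_TEST)              # S103: drawing state (depth test)

prog = ctx.program(                          # S102: vertex + fragment shaders
    vertex_shader="""
        #version 330
        in vec3 position;                    // surface vertex position
        in vec3 normal;                      // surface normal attribute
        uniform mat4 mvp;
        out vec3 v_normal;
        void main() {                        // transform the vertex position
            v_normal = normal;
            gl_Position = mvp * vec4(position, 1.0);
        }
    """,
    fragment_shader="""
        #version 330
        in vec3 v_normal;
        out vec4 frag_color;
        void main() {                        // shade each pixel (Lambert-style)
            float d = max(dot(normalize(v_normal),
                              normalize(vec3(0.3, 0.7, 0.6))), 0.0);
            frag_color = vec4(vec3(0.2 + 0.8 * d), 1.0);
        }
    """,
)

# S101: transfer interleaved (position, normal) surface data to GPU memory;
# a real bone or prosthesis mesh would be loaded here instead of zeros.
surface = np.zeros((3, 6), dtype="f4")
vbo = ctx.buffer(surface.tobytes())
vao = ctx.vertex_array(prog, [(vbo, "3f 3f", "position", "normal")])

fbo = ctx.simple_framebuffer((512, 512))
fbo.use()
fbo.clear(0.0, 0.0, 0.0, 1.0)
prog["mvp"].write(np.eye(4, dtype="f4").tobytes())
vao.render(moderngl.TRIANGLES)               # S104: draw call; S105: the GPU's
                                             # parallel units shade the surface
```

The bone and prosthesis surfaces would each be uploaded and drawn this way; texture binding and an illumination mode can be added to the drawing state in the same manner.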
As shown in FIG. 2, in one embodiment, the texture mapping network model includes:
The generator network comprises 2 downsampling convolution blocks, each comprising a Conv3D layer, an instance normalization layer and a ReLU activation layer; these are followed by 9 residual blocks, each containing 2 convolutions whose output is added to the block's input; finally, two upsampling convolution blocks output the generated result.
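A sketch of this generator in PyTorch follows. The framework choice, channel widths, kernel sizes and strides are assumptions made for illustration; only the block structure (2 downsampling blocks, 9 residual blocks, 2 upsampling blocks, with Conv3D, instance normalization and ReLU) is taken from the text:

```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """Residual block: 2 convolutions, output added to the block input."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(ch),
        )

    def forward(self, x):
        return x + self.body(x)          # skip connection

class Generator3D(nn.Module):
    """2 downsampling blocks, 9 residual blocks, 2 upsampling blocks."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()

        def down(cin, cout):             # Conv3D + InstanceNorm + ReLU
            return nn.Sequential(
                nn.Conv3d(cin, cout, kernel_size=3, stride=2, padding=1),
                nn.InstanceNorm3d(cout),
                nn.ReLU(inplace=True),
            )

        def up(cin, cout):               # transposed Conv3D doubles resolution
            return nn.Sequential(
                nn.ConvTranspose3d(cin, cout, kernel_size=3, stride=2,
                                   padding=1, output_padding=1),
                nn.InstanceNorm3d(cout),
                nn.ReLU(inplace=True),
            )

        self.net = nn.Sequential(
            down(in_ch, base), down(base, base * 2),
            *[ResidualBlock3D(base * 2) for _ in range(9)],
            up(base * 2, base), up(base, in_ch),
        )

    def forward(self, x):                # x: (batch, in_ch, D, H, W) volume
        return self.net(x)
```

For example, `Generator3D()(torch.randn(1, 1, 32, 32, 32))` returns a tensor of the same shape, i.e. a generated volume of the input's resolution.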
In one embodiment, the method comprises:
The generated results output by the generator network are fed to the discriminator network, which comprises 3 hidden blocks, each comprising a Conv3D layer, an instance normalization layer and a Leaky ReLU activation layer.
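Continuing the same PyTorch sketch, the discriminator might look as follows; the channel widths, strides and the final 1-channel scoring convolution are illustrative assumptions, since the text specifies only the 3 hidden blocks of Conv3D, instance normalization and Leaky ReLU:

```python
import torch.nn as nn

class Discriminator3D(nn.Module):
    """3 hidden blocks (Conv3D + InstanceNorm + LeakyReLU), then a score map."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        layers, ch = [], in_ch
        for out in (base, base * 2, base * 4):       # the 3 hidden blocks
            layers += [
                nn.Conv3d(ch, out, kernel_size=4, stride=2, padding=1),
                nn.InstanceNorm3d(out),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            ch = out
        # PatchGAN-style real/fake score map (an assumption, not in the text)
        layers.append(nn.Conv3d(ch, 1, kernel_size=4, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```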
The three-dimensional spine model after texture mapping can be as shown in FIG. 3.
In one embodiment, further comprising:
Calculating a cycle loss function, which measures the difference between an input image and the image obtained by converting it into a different class using the generator network and then restoring it to the original class using the generator network;
Calculating an identity loss function, which measures the difference between an input image and the image generated by applying the generator network to it;
Calculating an overall loss function based on the cycle loss function and the identity loss function.
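Assuming the network is trained CycleGAN-style with two generators G (class A to class B) and F (class B to class A) and L1 distances, these losses might be computed as in the sketch below. The weighting coefficients are illustrative assumptions, and the adversarial term from the discriminator would be added during actual training:

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_loss(G, F, real_a):
    # convert A -> B, restore B -> A, compare with the original input
    return l1(F(G(real_a)), real_a)

def identity_loss(F, real_a):
    # applying the generator that outputs class A to an image already of
    # class A should leave it (nearly) unchanged
    return l1(F(real_a), real_a)

def overall_loss(G, F, real_a, lam_cyc=10.0, lam_id=5.0):
    # overall objective from the cycle and identity terms (weights assumed)
    return lam_cyc * cycle_loss(G, F, real_a) + lam_id * identity_loss(F, real_a)
```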
In one embodiment, setting the associated drawing state includes: depth test, illumination mode, texture binding.
In one embodiment, adjusting the pose of the three-dimensional prosthetic final model includes:
Adjusting the displacement and rotation angle of the three-dimensional prosthesis final model according to actual requirements.
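One way to realize this adjustment is a rigid 4x4 transform built from the displacement vector and the rotation angles. The sketch below assumes Euler angles in degrees applied in X-Y-Z order and vertices in homogeneous coordinates, both illustrative choices:

```python
import numpy as np

def pose_transform(displacement, angles_deg):
    """4x4 rigid transform from a displacement vector and Euler rotation
    angles (assumed X-Y-Z order, in degrees)."""
    ax, ay, az = np.radians(angles_deg)
    rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    t = np.eye(4)
    t[:3, :3] = rz @ ry @ rx
    t[:3, 3] = displacement
    return t

def apply_pose(vertices, transform):
    # vertices: (N, 3) array of prosthesis final model vertices
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homo @ transform.T)[:, :3]
```

For instance, `apply_pose(vertices, pose_transform([5, 0, 0], [0, 90, 0]))` would shift the prosthesis 5 units along x and rotate it 90 degrees about y.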
In one embodiment, after the simulated installation of the three-dimensional bone final model and the three-dimensional prosthesis final model is achieved, further comprising:
Carrying out an omnidirectional slicing operation on the three-dimensional bone final model and the three-dimensional prosthesis final model from preset angles according to actual requirements to obtain slice images.
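A sketch of one such slicing pass follows, assuming the final models are triangle meshes and using the third-party trimesh library (an assumption; the patent does not specify tooling). Each returned cross-section outline could then be rasterized into a slice image:

```python
import numpy as np
import trimesh

def slice_models(bone_mesh, prosthesis_mesh, plane_normal, n_slices=50):
    """Cut the combined final models with parallel planes along plane_normal
    and return the cross-section outlines (one Path3D per non-empty slice)."""
    combined = trimesh.util.concatenate([bone_mesh, prosthesis_mesh])
    normal = np.asarray(plane_normal, dtype=float)
    normal /= np.linalg.norm(normal)
    heights = combined.vertices @ normal          # extent along the slice axis
    sections = []
    for h in np.linspace(heights.min(), heights.max(), n_slices):
        s = combined.section(plane_origin=normal * h, plane_normal=normal)
        if s is not None:                         # a plane may miss the models
            sections.append(s)
    return sections
```

Repeating this for each preset angle (i.e. each plane normal) yields the omnidirectional set of slices.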
FIG. 4 is a schematic structural diagram of a texture mapping three-dimensional reconstruction device based on deep learning according to an embodiment of the present application, where the texture mapping three-dimensional reconstruction device based on deep learning includes:
the data transmission module 401 is configured to transmit surface data of the bone and the prosthesis to be rendered to a memory of the GPU;
A shader setting module 402, configured to set vertex shaders and fragment shaders corresponding to the bones and prostheses according to the rendering requirements of the surfaces; wherein the vertex shader is responsible for calculating and transforming the vertex positions and normal attributes of the surface, and the fragment shader is responsible for shading each pixel;
a drawing state setting module 403 for setting a relevant drawing state before drawing;
A drawing instruction sending module 404, configured to send a drawing instruction to the GPU using the graphics API;
The parallel rendering calculation module 405 is configured to use the GPU's large number of parallel calculation units to process the rendering calculations of multiple surfaces in parallel, so as to obtain a rendering result; wherein the rendering result comprises a three-dimensional bone initial model and a three-dimensional prosthesis initial model;
The texture mapping module 406 is configured to input the three-dimensional bone initial model and the three-dimensional prosthesis initial model into a preset texture mapping network model, respectively, and output a corresponding three-dimensional bone final model and a corresponding three-dimensional prosthesis final model; wherein the texture mapping network model comprises a generator network and a discriminator network;
The pose adjustment module 407 is configured to adjust the pose of the three-dimensional prosthesis final model, so as to implement the simulated installation of the three-dimensional bone final model and the three-dimensional prosthesis final model.
FIG. 5 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device may comprise a processor 301 and a memory 302 storing computer program instructions.
In particular, the processor 301 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present application.
Memory 302 may include mass storage for data or instructions. By way of example, and not limitation, memory 302 may comprise a Hard Disk Drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 302 may include removable or non-removable (or fixed) media, where appropriate. Memory 302 may be internal or external to the electronic device, where appropriate. In particular embodiments, memory 302 may be non-volatile solid-state memory.
In one embodiment, memory 302 may be Read-Only Memory (ROM). In one embodiment, the ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 301 implements any of the deep learning based texture mapping three-dimensional reconstruction methods of the above embodiments by reading and executing computer program instructions stored in the memory 302.
In one example, the electronic device may also include a communication interface 303 and a bus 310. As shown in FIG. 5, the processor 301, the memory 302, and the communication interface 303 are connected to each other by the bus 310 and communicate with each other.
The communication interface 303 is mainly used to implement communication between each module, device, unit and/or apparatus in the embodiment of the present application.
Bus 310 includes hardware, software, or both, coupling the components of the electronic device to one another. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 310 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to a particular bus, the application contemplates any suitable bus or interconnect.
In addition, in combination with the texture mapping three-dimensional reconstruction method based on deep learning in the above embodiments, an embodiment of the application may be implemented as a computer-readable storage medium. The computer-readable storage medium has computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement any of the deep learning based texture mapping three-dimensional reconstruction methods of the above embodiments.
It should be understood that the application is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. The method processes of the present application are not limited to the specific steps described and shown, but various changes, modifications and additions, or the order between steps may be made by those skilled in the art after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. The present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present application, and they should be included in the scope of the present application.
Claims (10)
1. A texture mapping three-dimensional reconstruction method based on deep learning, characterized by comprising the following steps:
Transmitting the surface data of the bones and the prosthesis to be rendered to the memory of the GPU;
Setting vertex shaders and fragment shaders corresponding to the bones and prostheses according to the rendering requirements of the surfaces; wherein the vertex shader is responsible for calculating and transforming the vertex positions and normal attributes of the surface, and the fragment shader is responsible for shading each pixel;
Setting a related drawing state before drawing;
Sending a drawing instruction to the GPU using a graphics API;
Processing, by the GPU, the rendering computations of a plurality of surfaces in parallel using its large number of parallel computing units to obtain rendering results; wherein the rendering results comprise a three-dimensional bone initial model and a three-dimensional prosthesis initial model;
Respectively inputting the three-dimensional bone initial model and the three-dimensional prosthesis initial model into a preset texture mapping network model, and outputting a corresponding three-dimensional bone final model and a corresponding three-dimensional prosthesis final model; wherein the texture mapping network model comprises a generator network and a discriminator network;
And adjusting the pose of the three-dimensional prosthesis final model to realize the simulated installation of the three-dimensional bone final model and the three-dimensional prosthesis final model.
2. The deep learning based texture mapping three-dimensional reconstruction method according to claim 1, comprising:
The generator network comprises 2 downsampling convolution blocks, each comprising a Conv3D layer, an instance normalization layer and a ReLU activation layer; these are followed by 9 residual blocks, each containing 2 convolutions whose output is added to the block's input; finally, two upsampling convolution blocks output the generated result.
3. The deep learning based texture mapping three-dimensional reconstruction method according to claim 2, comprising:
The generated results output by the generator network are fed to the discriminator network, which comprises 3 hidden blocks, each comprising a Conv3D layer, an instance normalization layer and a Leaky ReLU activation layer.
4. The deep learning based texture mapping three-dimensional reconstruction method according to claim 3, further comprising:
Calculating a cycle loss function, which measures the difference between an input image and the image obtained by converting it into a different class using the generator network and then restoring it to the original class using the generator network;
Calculating an identity loss function, which measures the difference between an input image and the image generated by applying the generator network to it;
Calculating an overall loss function based on the cycle loss function and the identity loss function.
5. The deep learning based texture mapping three-dimensional reconstruction method according to claim 1, wherein setting the relevant drawing state comprises: depth testing, illumination mode, and texture binding.
6. The deep learning based texture mapping three-dimensional reconstruction method according to claim 1, wherein adjusting the pose of the three-dimensional prosthesis final model comprises:
Adjusting the displacement and rotation angle of the three-dimensional prosthesis final model according to actual requirements.
7. The deep learning based texture mapping three-dimensional reconstruction method according to claim 1, further comprising, after the simulated installation of the three-dimensional bone final model and the three-dimensional prosthesis final model:
Carrying out an omnidirectional slicing operation on the three-dimensional bone final model and the three-dimensional prosthesis final model from preset angles according to actual requirements to obtain slice images.
8. A deep learning based texture mapping three-dimensional reconstruction apparatus, the apparatus comprising:
The data transmission module is used for transmitting the surface data of the bones and the prosthesis to be rendered into the memory of the GPU;
a shader setting module for setting vertex shaders and fragment shaders corresponding to the bones and prostheses according to the rendering requirements of the surfaces; wherein the vertex shader is responsible for calculating and transforming the vertex positions and normal attributes of the surface, and the fragment shader is responsible for shading each pixel;
a drawing state setting module for setting a relevant drawing state before drawing;
the drawing instruction sending module is used for sending drawing instructions to the GPU by using the graphics API;
The parallel rendering calculation module is used for the GPU to use a large number of parallel calculation units to process the rendering calculations of a plurality of surfaces in parallel to obtain a rendering result; wherein the rendering result comprises a three-dimensional bone initial model and a three-dimensional prosthesis initial model;
The texture mapping module is used for respectively inputting the three-dimensional bone initial model and the three-dimensional prosthesis initial model into a preset texture mapping network model and outputting a corresponding three-dimensional bone final model and a corresponding three-dimensional prosthesis final model; wherein the texture mapping network model comprises a generator network and a discriminator network;
The pose adjusting module is used for adjusting the pose of the three-dimensional prosthesis final model so as to realize the simulation installation of the three-dimensional bone final model and the three-dimensional prosthesis final model.
9. An electronic device, the electronic device comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the deep learning based texture mapping three-dimensional reconstruction method according to any one of claims 1-7.
10. A computer-readable storage medium, having stored thereon computer program instructions which, when executed by a processor, implement the deep learning based texture mapping three-dimensional reconstruction method according to any one of claims 1-7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410333438.5A (granted as CN118037929B) | 2024-03-22 | 2024-03-22 | Texture mapping three-dimensional reconstruction method and device based on deep learning and electronic equipment |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN118037929A | 2024-05-14 |
| CN118037929B | 2024-10-01 |
Family ID: 90991322

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410333438.5A (granted as CN118037929B, active) | Texture mapping three-dimensional reconstruction method and device based on deep learning and electronic equipment | 2024-03-22 | 2024-03-22 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN118037929B (en) |
Patent Citations (9)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100203968A1 (en) * | 2007-07-06 | 2010-08-12 | Sony Computer Entertainment Europe Limited | Apparatus And Method Of Avatar Customisation |
| US20220020214A1 (en) * | 2018-12-12 | 2022-01-20 | Twikit Nv | A system for optimizing a 3d mesh |
| CN111179350A (en) * | 2020-02-13 | 2020-05-19 | 张逸凌 | Hip joint image processing method based on deep learning and computing equipment |
| CN112489172A (en) * | 2020-11-12 | 2021-03-12 | 杭州电魂网络科技股份有限公司 | Method, system, electronic device and storage medium for producing skeleton animation |
| CN112634456A (en) * | 2020-12-29 | 2021-04-09 | 浙江传媒学院 | Real-time high-reality drawing method of complex three-dimensional model based on deep learning |
| WO2023082089A1 (en) * | 2021-11-10 | 2023-05-19 | 中国科学院深圳先进技术研究院 | Three-dimensional reconstruction method and apparatus, device and computer storage medium |
| US20230196651A1 (en) * | 2021-12-17 | 2023-06-22 | Samsung Electronics Co., Ltd. | Method and apparatus with rendering |
| CN116310045A (en) * | 2023-04-24 | 2023-06-23 | 天度(厦门)科技股份有限公司 | Three-dimensional face texture creation method, device and equipment |
| CN117197345A (en) * | 2023-08-30 | 2023-12-08 | 北京长木谷医疗科技股份有限公司 | Intelligent bone joint three-dimensional reconstruction method, device and equipment based on polynomial fitting |
Non-Patent Citations (2)

- WU Dong et al., "Application of personalized customized surgical guides in total hip arthroplasty", Orthopaedics (骨科), vol. 10, no. 5, 30 September 2019 *
- DU Junli et al., "Rapid reconstruction system for medical images based on VTK", Computer Applications (计算机应用), vol. 27, no. 6, 30 June 2007 *
Also Published As

| Publication number | Publication date |
|---|---|
| CN118037929B | 2024-10-01 |
Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant