CN114693884A - Head surface drawing file generation method, rendering method and readable storage medium - Google Patents

Head surface drawing file generation method, rendering method and readable storage medium

Info

Publication number
CN114693884A
CN114693884A (application CN202210332128.2A)
Authority
CN
China
Prior art keywords
head
image
head surface
vertex
generation method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210332128.2A
Other languages
Chinese (zh)
Other versions
CN114693884B (en)
Inventor
张延慧
杨镇郡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yinhe Fangyuan Technology Co ltd
Original Assignee
Beijing Yone Galaxy Technology Co ltd
Beijing Yinhe Fangyuan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yone Galaxy Technology Co ltd, Beijing Yinhe Fangyuan Technology Co ltd filed Critical Beijing Yone Galaxy Technology Co ltd
Priority to CN202210332128.2A priority Critical patent/CN114693884B/en
Publication of CN114693884A publication Critical patent/CN114693884A/en
Application granted granted Critical
Publication of CN114693884B publication Critical patent/CN114693884B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects › G06T 17/20 Finite element generation, e.g. wire-frame surface description, tessellation › G06T 17/205 Re-meshing
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING › G06F 16/00 Information retrieval; database structures therefor; file system structures therefor › G06F 16/50 Information retrieval of still image data › G06F 16/51 Indexing; data structures therefor; storage structures
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 7/00 Image analysis › G06T 7/10 Segmentation; edge detection › G06T 7/136 involving thresholding
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 7/00 Image analysis › G06T 7/10 Segmentation; edge detection › G06T 7/155 involving morphological operators
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 7/00 Image analysis › G06T 7/10 Segmentation; edge detection › G06T 7/187 involving region growing; region merging; connected component labelling
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 7/00 Image analysis › G06T 7/60 Analysis of geometric attributes
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/10 Image acquisition modality › G06T 2207/10072 Tomographic images › G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/30 Subject of image; context of image processing › G06T 2207/30196 Human being; person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for generating a head surface drawing file, a method for rendering a head surface image, and a readable storage medium, belonging to the field of medical image processing. The generation method comprises the following steps: step S1, obtaining a head surface mask from the head magnetic resonance imaging image through image preprocessing; step S2, generating a head surface polygon mesh representation image from the head surface mask to obtain a vertex data file of the polygon primitives in the head surface polygon mesh representation image; step S3, obtaining a vertex normal vector file of the polygon primitives from the head surface polygon mesh representation image. With the generation method of the head surface drawing file, the rendering method of the head surface image, and the readable storage medium provided by the invention, a head surface drawing file suitable for Web interactive interface rendering can be obtained simply by having the user input a head magnetic resonance imaging image.

Description

Head surface drawing file generation method, rendering method and readable storage medium
Technical Field
The present invention relates to the field of medical image processing, and in particular, to a method for generating a head surface rendering file, a method for rendering a head surface image, and a readable storage medium.
Background
Modern transcranial magnetic stimulation (TMS) is usually combined with optical navigation to achieve precise treatment. Optical navigation requires registering the patient's head with the patient's own head magnetic resonance imaging (MRI) data to obtain the registration relationship between the patient's head coordinate system and the magnetic resonance imaging data coordinate system, so key points for registration must be selected on the surface of the head MRI. Meanwhile, in order to visually display the positional relationship between the probe device used for treatment (e.g., the treatment coil) and the patient's head, surface rendering of the head MRI is also indispensable.
Therefore, it is necessary to provide a method for generating a precise and lightweight head surface drawing file, a method for rendering a head surface image, and a readable storage medium, which can be used for Web interactive interface rendering.
Disclosure of Invention
In order to solve at least one aspect of the above problems and disadvantages in the prior art, the present invention provides a method for generating a head surface drawing file, a method for rendering a head surface image, and a readable storage medium, which at least in part enable a user to obtain a head surface drawing file suitable for Web interactive interface rendering simply by inputting a head magnetic resonance imaging image. The technical solution is as follows:
the invention aims to provide a generation method of a head surface drawing file.
Another object of the present invention is to provide a method for rendering an image of a head surface.
It is a further object of the present invention to provide a readable storage medium.
According to an aspect of the present invention, there is provided a generation method of a head surface drawing file, the generation method including the steps of:
step S1 is to obtain a head surface mask from the head magnetic resonance imaging image through image preprocessing;
step S2 is to generate a head surface polygon mesh representation image from the head surface mask to obtain a vertex data file of polygon primitives in the head surface polygon mesh representation image;
step S3 obtains a vertex normal vector file of the polygon primitives from the head surface polygon mesh representation image.
The generation method of the head surface drawing file, the rendering method of the head surface image, and the readable storage medium according to the present invention have at least one of the following advantages:
(1) with the generation method of the head surface drawing file, the rendering method of the head surface image and the readable storage medium provided by the invention, a head surface drawing file suitable for Web interactive interface rendering can be obtained simply by having the user input a head magnetic resonance imaging image;
(2) they better support the use of a cloud-service-based neurosurgical imaging system and fill the gap left by conventional brain analysis software, which lacks a display of the outer surface of the head;
(3) by sampling the head surface polygon mesh image to remove a certain ratio of vertices, the vertex data file and the vertex normal vector file are made lightweight, which saves storage space, improves network transmission efficiency and lets the user view the scalp surface drawing file in real time;
(4) normalization, segmentation, smoothing, sampling and other methods are considered together in the generation of the head surface drawing file, so the whole processing pipeline is more robust and the resulting head surface drawing file is more accurate and smooth, improving the visual effect.
Drawings
These and/or other aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the preferred embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a method of generating a head surface drawing file according to one embodiment of the invention;
fig. 2a and fig. 2b are schematic diagrams illustrating comparison of effects before and after image segmentation of the head magnetic resonance imaging image in fig. 1, wherein fig. 2a is a schematic diagram illustrating the effects before image segmentation of the head magnetic resonance imaging image, and fig. 2b is a schematic diagram illustrating the effects after image segmentation of the head magnetic resonance imaging image;
fig. 3 is a schematic view of the head surface mask of fig. 1.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings. In the specification, the same or similar reference numerals denote the same or similar components. The following description of the embodiments of the present invention with reference to the accompanying drawings is intended to explain the general inventive concept of the present invention and should not be construed as limiting the invention.
In order to accurately and quickly display the surface of a head magnetic resonance imaging (MRI) image on an interactive interface, so that the key points for registration can be picked more accurately and/or the positional relationship between a probe device (e.g., a treatment coil) and the patient's head can be displayed visually, we provide a method of generating a head surface drawing file.
The term "head" as used herein includes the scalp and face. Wherein the term scalp is to be understood in a broad sense as covering the soft tissue outside the skull. The term "face" is to be understood broadly as all exposed skin in front of the head.
The term "volume data" as used herein is to be understood broadly as three-dimensional image data, which is used to describe a certain signal intensity distribution in a rectangular area in a three-dimensional space, for example, the volume data is an image obtained by a magnetic resonance scan.
The term "voxel" as used herein, like a pixel in a two-dimensional image, can be broadly understood as the smallest unit of digital data over a three-dimensional spatial segmentation.
The term "coronal" as used herein is broadly construed as a division of a human body into anterior and posterior portions by a plane of the body along a vertical axis and a horizontal axis. The term "axial plane" can be broadly understood as a plane normal to the height of a human body, which divides the human body into upper and lower parts. The term "sagittal plane" is to be broadly construed as a plane passing through the body along the vertical and longitudinal axes, which divides the body into left and right parts.
Referring to fig. 1, a method of generating a head surface drawing file according to an embodiment of the present invention is shown. The generation method comprises the following steps:
step S1 is to obtain a head surface mask from the head magnetic resonance imaging image through image preprocessing;
step S2, generating a head surface polygon mesh representation image from the head surface mask to obtain a vertex data file of polygon primitives in the head surface polygon mesh representation image;
step S3 obtains a vertex normal vector file of the polygon primitives from the head surface polygon mesh representation image.
In one example, the head magnetic resonance imaging image may be a head T1-weighted image, a head T2-weighted image, a head diffusion-weighted image, or the like. Preferably, the head magnetic resonance imaging image is a head T1-weighted image.
In one example, since high-resolution T1-weighted images generated by magnetic resonance scanners are typically corrupted by magnetic susceptibility artifacts and radio frequency (RF) field inhomogeneities, the head T1-weighted image needs to be normalized prior to image preprocessing to reduce the influence of these artifacts on the image. Normalizing the head T1-weighted image includes scale normalization and gray scale normalization. Scale normalization includes normalizing the three-dimensional size and the spatial resolution of the head T1-weighted image. Those skilled in the art can perform scale normalization first and gray scale normalization afterwards, as desired; of course, gray scale normalization may equally be performed first and scale normalization second, as long as the three-dimensional size, spatial resolution and gray scale range of the image can be adjusted to the predetermined values.
In one example, the patient's head T1-weighted image is first loaded into memory; the data format may be DICOM, NIfTI, MGZ or another format. The three-dimensional size of the raw head MRI data is then normalized to, for example, 256 × 256 × 256. Those skilled in the art can also normalize the three-dimensional size to other sizes, as long as different scan data of the same patient, or of different patients, are normalized to the same size in all three dimensions.
In one example, after the three-dimensional size of the original head MRI image is normalized, its spatial resolution may also be normalized to, for example, 1 mm × 1 mm × 1 mm (i.e., each voxel measures 1 mm × 1 mm × 1 mm). It will be understood by those skilled in the art that the voxel size determines the spatial resolution: the smaller the voxel, the higher the spatial resolution, and vice versa. The spatial resolution of the image can be adjusted by the skilled person according to actual needs.
In one example, after the spatial resolution of the head MRI image is normalized, gray scale normalization is also required, i.e., the gray value of each voxel is normalized into the range 0-255. Through scale normalization and gray scale normalization, the three-dimensional size, spatial resolution and gray scale range of the original head T1-weighted image are adjusted to preset values, which reduces the complexity of the subsequent processing steps and improves the robustness of the whole processing flow, while reducing or even eliminating the influence of magnetic susceptibility artifacts and RF field inhomogeneity on the image, thereby improving the accuracy of the subsequent image processing.
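As a minimal illustration of this preprocessing stage (not prescribed by the patent), the Python sketch below resamples a head T1-weighted volume to a 256 × 256 × 256 grid at 1 mm isotropic spacing and rescales its gray values to 0-255; the use of nibabel and scipy and the helper name normalize_t1 are assumptions.

```python
# Sketch of scale + gray-scale normalization; library choices and names are illustrative only.
import numpy as np
import nibabel as nib
from scipy.ndimage import zoom

def normalize_t1(path, target_shape=(256, 256, 256), target_spacing=(1.0, 1.0, 1.0)):
    img = nib.load(path)                              # head T1-weighted image (NIfTI assumed)
    data = img.get_fdata().astype(np.float32)
    spacing = img.header.get_zooms()[:3]              # current voxel size in mm

    # Scale normalization: resample to 1 mm isotropic voxels ...
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    data = zoom(data, factors, order=1)

    # ... then pad/crop (corner-aligned, for brevity) to the fixed 256^3 grid.
    out = np.zeros(target_shape, dtype=np.float32)
    keep = [min(d, t) for d, t in zip(data.shape, target_shape)]
    out[:keep[0], :keep[1], :keep[2]] = data[:keep[0], :keep[1], :keep[2]]

    # Gray-scale normalization: map intensities into the 0-255 range.
    lo, hi = float(out.min()), float(out.max())
    return ((out - lo) / max(hi - lo, 1e-6) * 255.0).astype(np.uint8)
```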
After the image is normalized, it is also possible to calculate the center position of the entire head in the head T1 weighted image, and adjust the position of the head in the entire volume data according to the center position so that the head center position is at the center position of the volume data.
In one example, a method of obtaining and adjusting the head center position includes the following steps:
the average coordinate of all voxels whose gray value is greater than a preset gray threshold (e.g., 0) is computed from the head T1-weighted image, and this average coordinate is taken as the head center position in the head T1-weighted image. An offset between the head center position and the coordinates of the volume data center (e.g., (128,128,128)) is then calculated, and all voxels in the head T1-weighted image are shifted by this offset so that the head center is located at the center of the volume data. During the shift, any voxel whose three-dimensional coordinates fall outside the [0,256] coordinate range is discarded.
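A sketch of this centering step is given below, assuming the normalized 256 × 256 × 256 volume from above; the gray threshold of 0 follows the example in the text, and the function name center_head is hypothetical.

```python
import numpy as np

def center_head(volume, gray_threshold=0):
    """Shift the head so that the mean coordinate of voxels above the gray threshold
    lands on the volume center; voxels shifted out of the grid are discarded (zeroed)."""
    coords = np.argwhere(volume > gray_threshold)          # all voxels above the threshold
    head_center = coords.mean(axis=0)                      # average coordinate = head center
    volume_center = np.array(volume.shape) / 2.0           # e.g. (128, 128, 128)
    offset = np.round(volume_center - head_center).astype(int)

    shifted = np.zeros_like(volume)
    src = tuple(slice(max(0, -o), volume.shape[i] - max(0, o)) for i, o in enumerate(offset))
    dst = tuple(slice(max(0, o), volume.shape[i] - max(0, -o)) for i, o in enumerate(offset))
    shifted[dst] = volume[src]                             # out-of-range voxels are dropped
    return shifted
```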
In one example, a head surface mask is obtained by image preprocessing. As shown in fig. 2a and 2b, a head mask 11 is obtained from a head T1-weighted image 10 by image segmentation. The image segmentation method may be a binarization (thresholding) method, a region growing method, a template-matching-based method, a machine-learning-based method, or the like. Our research found that when segmentation is performed by binarization, parts of the head surface that contain air-filled cavities (such as the oral cavity, nasal cavity and external auditory canal) and parts where the skull has low gray values are easily misidentified as background, so an accurate head mask cannot be obtained. Segmentation methods based on template matching or machine learning require registration, data annotation and model training, and therefore suffer from high development cost and long running time. After repeated study, we found that segmenting the image with the region growing method overcomes the above problems and yields a relatively accurate head mask efficiently and quickly.
In one example, image segmentation is performed on the head T1-weighted image using region growing; preferably, region growing is applied to each coronal slice. After the region growing method has been performed on every coronal slice of the same patient, it can be regarded as having been performed on the whole three-dimensional head data, so the mask of the patient's whole head is obtained without any additional stitching. Of course, those skilled in the art may instead perform image segmentation on, for example, the sagittal or axial slices of the head T1-weighted image.
In one example, in the region growing method, the upper-left corner point of each two-dimensional head T1-weighted slice to be processed is taken as the seed point, and the region is expanded over the 8-neighborhood. When region growing is performed on the head T1-weighted image, whether a pixel belongs to the same region as the seed point is determined by checking whether the gray difference between the seed point and the adjacent pixel in the 8-neighborhood is within the gray threshold range. For example, the gray threshold is set to 10: when the gray difference between the seed point and an adjacent pixel in the 8-neighborhood is less than or equal to 10, that pixel is determined to belong to the same region as the seed point.
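The following sketch illustrates this per-slice region growing, under the assumption (our reading of the text) that the region grown from the corner seed is the air background, whose complement on each coronal slice gives the head region; the gray threshold of 10 follows the example above, and the function names are hypothetical.

```python
from collections import deque
import numpy as np

def grow_from_corner(slice2d, threshold=10):
    """Region growing from the upper-left corner seed over the 8-neighborhood:
    a pixel joins the region if its gray difference from the seed is <= threshold."""
    h, w = slice2d.shape
    seed_val = int(slice2d[0, 0])
    grown = np.zeros((h, w), dtype=bool)
    grown[0, 0] = True
    queue = deque([(0, 0)])
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):                      # 8-neighborhood expansion
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx] \
                        and abs(int(slice2d[ny, nx]) - seed_val) <= threshold:
                    grown[ny, nx] = True
                    queue.append((ny, nx))
    return grown

def head_mask_from_coronal_slices(volume):
    # Interpretation: the grown corner region is background; its complement is the head.
    # Coronal slices are assumed to lie along axis 0 of the normalized volume.
    return np.stack([~grow_from_corner(volume[i]) for i in range(volume.shape[0])])
```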
In one example, since the region growing method is sensitive to noise, and since the gray values of the ear canal, nasal cavity and oral cavity are generally lower than those of the head surface, some small holes appear in the extracted coronal head regions, i.e., in the head mask. We therefore apply a second morphological treatment (the "first morphological treatment" is described in detail below) to the extracted head mask to fill these holes.
In one example, the second morphological treatment is a closing operation: each extracted coronal head region is dilated and then eroded with a structuring element (e.g., a diamond-shaped structuring element) to obtain a complete and refined head mask 11 (as shown in fig. 2b). The closing operation not only eliminates the holes in the head mask 11 but also smooths its contour.
In one example, the head mask that has undergone the closing operation is subjected to the first morphological treatment to obtain the head surface mask 12. The first morphological treatment is an erosion operation, and the head surface mask 12 is a mask having a preset thickness δ (as shown in fig. 3).
In one example, the structuring element used in the erosion operation is a spherical structuring element whose radius is set to the preset thickness δ. Our research found that when the thickness is only 1 mm, the staircase effect of the polygon mesh representation image extracted in the subsequent step is very obvious and the reconstruction quality is poor, whereas when the thickness exceeds 10 mm, an obvious double-layer surface appears, causing visual confusion and greatly degrading the visual effect. We therefore set the preset thickness δ to the range 1 mm < δ < 10 mm.
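A sketch of the two morphological steps is shown below; the diamond radius, the default δ of 3 mm, and the reading that the surface mask of thickness δ is the refined mask minus its erosion are assumptions added for illustration.

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_erosion
from skimage.morphology import diamond, ball

def surface_mask_from_head_mask(head_mask, delta_mm=3):
    """Second morphological treatment (closing, per coronal slice) followed by the
    first morphological treatment (erosion with a spherical structuring element)."""
    # Closing with a diamond structuring element fills holes and smooths the contour.
    closed = np.stack([binary_closing(sl, structure=diamond(3)) for sl in head_mask])

    # Erosion with a ball of radius delta (1 mm < delta < 10 mm at 1 mm voxels);
    # the shell left over is taken as the head surface mask of preset thickness delta.
    eroded = binary_erosion(closed, structure=ball(delta_mm))
    return closed & ~eroded
```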
In order to improve drawing efficiency and data quality and to produce a better visual effect, the head surface mask with the preset thickness is used to generate a head surface polygon mesh representation image from which the vertex data file of the polygon primitives is extracted. In one example, the polygon mesh representation image may be a triangle mesh representation image, a quadrilateral mesh representation image, or the like. Since the bow-tie problem can occur when the polygons have more than three sides (as in a quadrilateral mesh image), while a triangle mesh representation makes the best use of the computer's performance, it is preferable to generate the triangle mesh representation image directly from the head surface mask. Of course, those skilled in the art may also generate other types of polygon mesh representation images from the head surface mask, i.e., images in which the polygons have more than three sides. When another type of polygon mesh representation image is generated, it must be refined (e.g., triangulated) and converted into triangle data; converting other polygon types into triangles roughly doubles the number of patches in the polygon mesh representation image and thereby increases the computational load.
In one example, the obtained head surface mask is passed through the marching cubes algorithm to generate a head surface triangle mesh representation image. Since the generated head surface mask of preset thickness contains some noise, it has a locally rough appearance, and this roughness directly affects the smoothness of the triangle mesh representation image generated in the subsequent step; the head surface mask therefore needs to be smoothed to obtain a noise-reduced head surface mask. The smoothing may be Gaussian smoothing with a Gaussian convolution kernel or constrained smoothing with a smoothing operator. Gaussian smoothing is relatively fast but destroys some details, whereas constrained smoothing preserves more detail; constrained smoothing is also well suited to small data volumes, so it retains more detail and gives a better smoothing result. Since the head surface mask provided by the present invention has already been size-normalized to a relatively small data volume (e.g., 256 × 256 × 256), constrained smoothing is preferred for denoising the head surface mask of preset thickness.
In one example, the noise-reduced head surface mask is input into the marching cubes algorithm, which outputs the vertex data file of the triangle primitives in the head surface triangle mesh representation image. In step S2, the vertex data file contains the vertex data of all triangle primitives in the image.
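As an illustration, the sketch below smooths the thick surface mask and extracts the triangle mesh with scikit-image's marching cubes; Gaussian smoothing is shown only for brevity (the text prefers constrained smoothing), and the parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import marching_cubes

def extract_surface_mesh(surface_mask, sigma=1.0, level=0.5):
    """Denoise the head surface mask, then extract the triangle mesh representation.
    Returns vertex positions, triangle (primitive) indices and per-vertex normals."""
    volume = gaussian_filter(surface_mask.astype(np.float32), sigma=sigma)
    verts, faces, normals, _ = marching_cubes(volume, level=level, spacing=(1.0, 1.0, 1.0))
    return verts, faces, normals
```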
In one example, during the subsequent rendering process, the obtained vertex data file needs to be transmitted to the WebGL server for rendering. The vertex data file may contain whatever data is needed, that is, all the data required to define the vertices according to actual needs. For example, the attributes of the vertex data are defined to include vertex position, vertex texture, vertex color, and so on; diffuse and specular colors, shading effects, matrix rows, etc. may also be included.
In one example, the vertex position is at least one of an ordered vertex list and an ordered position list, and may of course also include an ordered vertex index list. By arranging the vertex information (such as vertex number, position, ordering, triangle patch number, etc.) in order in the list, the orientation of each triangle patch in the drawn image can be determined during subsequent rendering in combination with the vertex winding order (e.g., clockwise) specified in the layout.
In one example, to ensure that the rendered image has a surface effect, a vertex normal vector file should also be generated. The vertex normal vector file is an ordered normal vector list comprising the vertex normal vectors of all vertices of all triangle patches. In one example, the method of obtaining a vertex normal vector comprises the following steps:
step S31, obtaining, from the vertex data file of the head surface triangle mesh representation image, a shared vertex and the normal vectors of all triangle primitives connected to the shared vertex;
step S32, averaging and normalizing the normal vectors of all triangle primitives connected to the shared vertex to obtain the normal vector of the shared vertex;
step S33, iterating steps S31-S32 to obtain the vertex normal vectors of all triangle primitives in the image.
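A numpy sketch of steps S31-S33 follows; summing the face normals before normalizing yields the same direction as averaging them, and the function name vertex_normals is hypothetical.

```python
import numpy as np

def vertex_normals(verts, faces):
    """For every shared vertex, average the normals of all triangle primitives that
    touch it and normalize the result (steps S31-S33). verts: (N, 3); faces: (M, 3)."""
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    face_n = np.cross(v1 - v0, v2 - v0)                # one normal per triangle primitive

    acc = np.zeros_like(verts)
    for k in range(3):                                 # accumulate at each shared vertex
        np.add.at(acc, faces[:, k], face_n)

    length = np.linalg.norm(acc, axis=1, keepdims=True)
    return acc / np.clip(length, 1e-12, None)          # normalized vertex normal vectors
```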
In one example, since the number of triangle primitives extracted in the head surface triangle mesh representation image can be as high as 2,000K-5,000K, even today's graphics workstations have difficulty storing and rendering models of this size. In order to reduce the number of triangle primitives in the triangle mesh and obtain a lightweight vertex data file and a lightweight vertex normal vector file, thereby saving storage space, the head surface triangle mesh representation image can also be sampled. Step S31 then further includes:
sampling the head surface polygon mesh representation image to remove a preset ratio of vertices, so as to obtain a vertex data file of the retained vertices (namely, a lightweight vertex data file);
and obtaining, from the vertex data file of the retained vertices, all shared vertices and the normal vectors of all triangle primitives connected to the corresponding shared vertices (namely, a lightweight vertex normal vector file).
Obtaining the lightweight vertex data file and the lightweight vertex normal vector file through sampling means that the user is no longer limited by the browser's available memory capacity and network bandwidth, and can view the head surface drawing file, and the head surface image rendered through WebGL, in real time.
In one example, the sampling method is a mesh simplification algorithm; a static simplification algorithm, a dynamic simplification algorithm, or a view-dependent simplification algorithm may be employed. Preferably, the sampling method is a dynamic simplification algorithm, such as an edge collapse operation. In one example, the edge collapse operation can be implemented using, for example, an edge-collapse simplification algorithm or a mesh simplification algorithm based on the quadric error metric.
In one example, the preset ratio of vertices to be removed is used as the termination condition of the iteration, i.e., the iteration stops when the ratio of deleted vertices reaches the preset ratio. In one example, the preset ratio r is set in the range 0.5 ≤ r ≤ 0.9. For example, a preset ratio r of 0.9 means that 90% of the vertices are removed and only 10% are kept. With the ratio designed in this range, more of the detail information in the image can be retained.
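One readily available implementation of this edge-collapse, quadric-error-metric style simplification is sketched below with Open3D; the patent does not prescribe this library, and the vertex removal ratio r is approximated here by keeping the same fraction of triangles.

```python
import numpy as np
import open3d as o3d

def decimate_mesh(verts, faces, r=0.9):
    """Lightweight mesh via quadric-error-metric decimation; r is the preset ratio of
    vertices to remove (0.5 <= r <= 0.9), approximated by the kept-triangle fraction."""
    mesh = o3d.geometry.TriangleMesh(
        o3d.utility.Vector3dVector(verts.astype(np.float64)),
        o3d.utility.Vector3iVector(faces.astype(np.int32)),
    )
    target = max(4, int(len(faces) * (1.0 - r)))        # keep roughly (1 - r) of the triangles
    light = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    light.compute_vertex_normals()                      # lightweight vertex normal vectors
    return (np.asarray(light.vertices),
            np.asarray(light.triangles),
            np.asarray(light.vertex_normals))
```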
In one example, step S32 further includes:
averaging the normal vectors of all triangle primitives connected with the shared vertex to obtain the average value of the normal vectors of the shared vertex;
and normalizing the normal vector average value to obtain the normal vector of the shared vertex.
In one example, the method for generating the head surface drawing file further includes the following steps:
storing the lightweight vertex data file and the lightweight vertex normal vector file in a file format supported by the Web graphics library (WebGL), such as obj or json; and/or
compressing the lightweight vertex data file and the lightweight vertex normal vector file in a compression format supported by the Web graphics library (WebGL), such as obj.gz, to reduce network transmission time, so that the user is no longer limited by the browser's available memory capacity and network bandwidth and can view the head surface drawing file, and the head surface image rendered through WebGL, in real time.
For example, in the obj format, the saving operation writes, in sequence, the mesh type (such as P for a polygon and L for a line segment), the mesh material, the vertex coordinates, the vertex normal vectors, the RGBD values, and the vertex combination indices of the triangle patches (i.e., the triangle primitives), and finally saves the result as a surface drawing file with the ".obj" suffix. To reduce network transmission time, this surface drawing file can be compressed with a script command into a compressed surface drawing file ending in ".obj.gz", typically at a compression rate of around 50%.
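The sketch below writes the vertex positions (v), vertex normals (vn) and triangle indices (f) in Wavefront OBJ form and gzip-compresses the result into a ".obj.gz" file; the full record layout mentioned above (mesh type, material, RGBD values) is omitted, and the function name is hypothetical.

```python
import gzip

def save_obj_and_gz(path, verts, normals, faces):
    """Write an .obj surface drawing file and a gzip-compressed .obj.gz copy.
    Faces are written 1-indexed as 'v//vn' so each vertex is paired with its normal."""
    lines = ["# head surface drawing file"]
    lines += ["v %.4f %.4f %.4f" % tuple(v) for v in verts]
    lines += ["vn %.4f %.4f %.4f" % tuple(n) for n in normals]
    lines += ["f %d//%d %d//%d %d//%d" % (a + 1, a + 1, b + 1, b + 1, c + 1, c + 1)
              for a, b, c in faces]
    text = "\n".join(lines) + "\n"

    with open(path, "w") as f:                     # plain .obj for WebGL loaders
        f.write(text)
    with gzip.open(path + ".gz", "wt") as f:       # compressed .obj.gz (often ~50% smaller)
        f.write(text)
```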
In one example, a method of rendering a head surface image is provided according to another embodiment of the present invention. The rendering method of the head surface image comprises the following steps:
generating a head surface drawing file according to the above generation method of a head surface drawing file;
inputting the head surface drawing file into a shader for rendering to obtain a rendered head surface image.
In one example, the head surface drawing file is input into a vertex shader and then a fragment shader for rendering to obtain the rendered head surface image. The vertex shader performs extensive calculations on the vertex data file and the vertex normal vector file to obtain, among other things, the position and color of each vertex on the interactive interface, and the fragment shader uses the same files to calculate the final color of each fragment on the interactive interface.
In one example, the rendering framework of the vertex data file and the vertex normal vector file needs to be able to support WebGL. Taking the VUE framework as an example, the saved or compressed surface rendering file is imported into the VUE framework, so that the surface rendering effect of the head surface image can be viewed in the visualization window.
In one example, a readable storage medium is provided according to yet another embodiment of the present invention. A "readable storage medium" in embodiments of the present invention refers to any medium that participates in providing programs or instructions to a processor for execution. Such a medium may take many forms, including but not limited to non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as storage devices. Volatile media include dynamic memory, such as main memory. Transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise a bus; transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of readable storage media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, a DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
The readable storage medium stores thereon a program or instructions that when executed by a processor perform the above-described head surface rendering file generation method and head surface image rendering method.
The generation method of the head surface drawing file, the rendering method of the head surface image, and the readable storage medium according to the present invention have at least one of the following advantages:
(1) with the generation method of the head surface drawing file, the rendering method of the head surface image and the readable storage medium provided by the invention, a head surface drawing file suitable for Web interactive interface rendering can be obtained simply by having the user input a head magnetic resonance imaging image;
(2) they better support the use of a cloud-service-based neurosurgical imaging system and fill the gap left by conventional brain analysis software, which lacks a display of the outer surface of the head;
(3) by sampling the head surface polygon mesh image to remove a certain ratio of vertices, the vertex data file and the vertex normal vector file are made lightweight, which saves storage space, improves network transmission efficiency and lets the user view the scalp surface drawing file in real time;
(4) normalization, segmentation, smoothing, sampling and other methods are considered together in the generation of the head surface drawing file, so the whole processing pipeline is more robust and the resulting head surface drawing file is more accurate and smooth, improving the visual effect.
Although a few embodiments of the present general inventive concept have been shown and described, it will be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the general inventive concept, the scope of which is defined in the appended claims and their equivalents.

Claims (18)

1. A generation method of a head surface drawing file, the generation method comprising the steps of:
step S1 is to obtain a head surface mask from the head magnetic resonance imaging image through image preprocessing;
step S2 is to generate a head surface polygon mesh representation image from the head surface mask to obtain a vertex data file of polygon primitives in the head surface polygon mesh representation image;
step S3 obtains a vertex normal vector file of the polygon primitives from the head surface polygon mesh representation image.
2. The generation method according to claim 1,
in step S1, the image preprocessing includes the steps of:
obtaining a head mask by image segmentation of the head magnetic resonance imaging image;
and subjecting the head mask to a first morphological treatment to obtain a head surface mask.
3. The generation method according to claim 2,
prior to the first morphological treatment, the head mask is passed through a second morphological treatment to obtain a refined head mask.
4. The generation method according to claim 3, wherein,
the second morphological process is to process the head mask through a closing operation to obtain a refined head mask.
5. The generation method according to claim 2,
the head surface mask has a preset thickness δ, and the preset thickness δ is set in the range 1 mm < δ < 10 mm.
6. The generation method according to claim 5,
the first morphological treatment is to erode the head mask to obtain a head surface mask.
7. The generation method according to any one of claims 2 to 6,
in step S2, the vertex data file includes all vertex data in the head surface polygon mesh representation image,
the attributes of the vertex data include vertex position, vertex texture, and vertex color.
8. The generation method according to claim 7,
the vertex position is at least one of an ordered list of vertices and an ordered list of positions.
9. The generation method according to any one of claims 2 to 6,
the step S2 further includes:
subjecting the head surface mask to smoothing processing to obtain a noise-reduced head surface mask;
and generating the head surface polygonal mesh representation image from the noise-reduced head surface mask through a marching cubes algorithm.
10. The generation method according to claim 9,
the head surface polygonal mesh representation image is a head surface triangular mesh representation image.
11. The generation method according to claim 1,
the vertex normal vector file comprises vertex normal vectors of all polygon primitives in the head surface polygon mesh representation image,
the method for obtaining the vertex normal vector comprises the following steps:
step S31, according to the vertex data file of the head surface polygon mesh representation image, obtaining the sharing vertex and the normal vectors of all polygon primitives connected with the sharing vertex;
step S32, carrying out mean value and normalization on normal vectors of all polygon primitives connected with the shared vertex to obtain the normal vector of the shared vertex;
step S33 iterates steps S31-S32 to obtain vertex normal vectors for all polygon primitives.
12. The generation method according to claim 11,
the vertex normal vector file is an ordered normal vector list formed by vertex normal vectors of all the polygon primitives.
13. The generation method according to claim 12,
the step S31 further includes:
removing vertexes with a preset ratio from the head surface polygonal mesh representation image through sampling to obtain a vertex data file with reserved vertexes;
and obtaining the normal vectors of the shared vertex and all polygon primitives connected with the shared vertex according to the vertex data file of the reserved vertex.
14. The generation method according to claim 1,
prior to the image preprocessing, further comprising normalizing the head magnetic resonance imaging image, the method of normalizing the head magnetic resonance imaging image comprising the steps of:
subjecting the magnetic resonance imaging image to scale normalization and grey scale normalization to obtain a normalized head magnetic resonance imaging image;
obtaining the average coordinates of all voxel points with the gray value larger than a preset gray threshold value according to the normalized head magnetic resonance imaging image;
obtaining an offset value between the average coordinate and the coordinate of the central position of the volume data according to the average coordinate;
and adjusting the central position of the head to the central position of the volume data according to the deviation value.
15. The generation method according to claim 1,
the generation method further comprises the following steps:
storing the generated vertex data file and the generated vertex normal vector file through a file format supported by a Web graphic library; and/or
And compressing the generated vertex data file and the vertex normal vector file through a compression format supported by the Web graphic library.
16. A method of rendering an image of a head surface, the method comprising the steps of:
the generation method of a head surface drawing file according to any one of claims 1 to 15, generating a head surface drawing file;
inputting the head surface drawing file into a shader for rendering to obtain a rendered head surface image.
17. The rendering method according to claim 16,
and sequentially inputting the head surface drawing file into a vertex shader and a fragment shader for rendering so as to obtain the rendered head surface image.
18. A readable storage medium comprising, in combination,
the readable storage medium stores thereon a program or instructions that when executed by a processor performs at least one of:
a generation method of the head surface drawing file according to any one of claims 1 to 15; and
a method of rendering an image of a surface of a head as claimed in claim 16 or 17.
CN202210332128.2A 2022-03-30 2022-03-30 Method for generating head surface drawing file, rendering method, and readable storage medium Active CN114693884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210332128.2A CN114693884B (en) 2022-03-30 2022-03-30 Method for generating head surface drawing file, rendering method, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210332128.2A CN114693884B (en) 2022-03-30 2022-03-30 Method for generating head surface drawing file, rendering method, and readable storage medium

Publications (2)

Publication Number Publication Date
CN114693884A true CN114693884A (en) 2022-07-01
CN114693884B CN114693884B (en) 2023-10-13

Family

ID=82140128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210332128.2A Active CN114693884B (en) 2022-03-30 2022-03-30 Method for generating head surface drawing file, rendering method, and readable storage medium

Country Status (1)

Country Link
CN (1) CN114693884B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1663527A (en) * 2005-03-16 2005-09-07 南京航空航天大学 Brain function MRI movement correcting method
US20110069065A1 (en) * 2009-09-24 2011-03-24 Kabushiki Kaisha Toshiba Image processing apparatus, computer readable medium and method thereof
US20110090222A1 (en) * 2009-10-15 2011-04-21 Siemens Corporation Visualization of scaring on cardiac surface
CN103077557A (en) * 2013-02-07 2013-05-01 河北大学 Adaptive hierarchical chest large data display implementation method
KR20160068204A (en) * 2014-12-05 2016-06-15 삼성전기주식회사 Data processing method for mesh geometry and computer readable storage medium of recording the same
CN113129418A (en) * 2021-03-02 2021-07-16 武汉联影智融医疗科技有限公司 Target surface reconstruction method, device, equipment and medium based on three-dimensional image
CN114037803A (en) * 2022-01-11 2022-02-11 真健康(北京)医疗科技有限公司 Medical image three-dimensional reconstruction method and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1663527A (en) * 2005-03-16 2005-09-07 南京航空航天大学 Brain function MRI movement correcting method
US20110069065A1 (en) * 2009-09-24 2011-03-24 Kabushiki Kaisha Toshiba Image processing apparatus, computer readable medium and method thereof
US20110090222A1 (en) * 2009-10-15 2011-04-21 Siemens Corporation Visualization of scaring on cardiac surface
CN103077557A (en) * 2013-02-07 2013-05-01 河北大学 Adaptive hierarchical chest large data display implementation method
KR20160068204A (en) * 2014-12-05 2016-06-15 삼성전기주식회사 Data processing method for mesh geometry and computer readable storage medium of recording the same
CN113129418A (en) * 2021-03-02 2021-07-16 武汉联影智融医疗科技有限公司 Target surface reconstruction method, device, equipment and medium based on three-dimensional image
CN114037803A (en) * 2022-01-11 2022-02-11 真健康(北京)医疗科技有限公司 Medical image three-dimensional reconstruction method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Xinde, "Research on a Computer-Aided Orthopedic Surgery System Based on Three-Dimensional Reconstruction", China Master's Theses Full-text Database, Medicine & Health Sciences, pages 7-32 *

Also Published As

Publication number Publication date
CN114693884B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
Han et al. CRUISE: cortical reconstruction using implicit surface evolution
CN109754394B (en) Three-dimensional medical image processing device and method
US20220122263A1 (en) System and method for processing colon image data
CN109584349B (en) Method and apparatus for rendering material properties
US6978039B2 (en) Method and system for segmentation of medical images
JP2007537770A (en) A dynamic crop box determination method for display optimization of luminal structures in endoscopic images
US20130034276A1 (en) Method and Apparatus for Correction of Errors in Surfaces
CN111311705B (en) High-adaptability medical image multi-plane reconstruction method and system based on webgl
CN111583385B (en) Personalized deformation method and system for deformable digital human anatomy model
Jaffar et al. Anisotropic diffusion based brain MRI segmentation and 3D reconstruction
US20230042000A1 (en) Apparatus and method for quantification of the mapping of the sensory areas of the brain
Feuerstein et al. Adaptive branch tracing and image sharpening for airway tree extraction in 3-D chest CT
CN115830016A (en) Medical image registration model training method and equipment
Yu et al. Biomedical image segmentation via constrained graph cuts and pre-segmentation
CN114693884B (en) Method for generating head surface drawing file, rendering method, and readable storage medium
CN107170009B (en) Medical image-based goggle base curve data measurement method
Zhu et al. 3D automatic MRI level set segmentation of inner ear based on statistical shape models prior
CN109741439A (en) A kind of three-dimensional rebuilding method of two dimension MRI fetus image
Eskildsen et al. Quantitative comparison of two cortical surface extraction methods using MRI phantoms
Reska et al. Fast 3D segmentation of hepatic images combining region and boundary criteria
CN113706687A (en) Nose environment modeling method and device for path planning
CN111145353A (en) Method for generating 3D point cloud through image segmentation and grid characteristic point extraction algorithm
Aloui et al. A new useful biometrics tool based on 3D brain human geometrical characterizations
Bosc et al. Statistical atlas-based sub-voxel segmentation of 3D brain MRI
CN112837226B (en) Morphology-based mid-brain sagittal plane extraction method, system, terminal and medium

Legal Events

• PB01: Publication
• SE01: Entry into force of request for substantive examination
• TA01: Transfer of patent application right (effective date of registration: 20230918)
  Address after: Room 1201-1208, 12th Floor, Building 1, Haipengyuan, No. 229 Guyuan Road, High-tech Development Zone, Changsha City, Hunan Province, 410221
  Applicant after: Younao Yinhe (Hunan) Technology Co.,Ltd.
  Address before: Room 504, floor 5, building 2, No. 9 hospital, medical Road, Life Science Park, Changping District, Beijing 100083
  Applicants before: Beijing Yinhe Fangyuan Technology Co.,Ltd.; Beijing yone Galaxy Technology Co.,Ltd.
• GR01: Patent grant
• TR01: Transfer of patent right (effective date of registration: 20240506)
  Address after: Room 504, floor 5, building 2, hospital 9, Yiyi Road, Life Science Park, Changping District, Beijing 102206
  Patentee after: Beijing Yinhe Fangyuan Technology Co.,Ltd. (China)
  Address before: Room 1201-1208, 12th Floor, Building 1, Haipengyuan, No. 229 Guyuan Road, High-tech Development Zone, Changsha City, Hunan Province, 410221
  Patentee before: Younao Yinhe (Hunan) Technology Co.,Ltd. (China)