CN114693884B - Method for generating head surface drawing file, rendering method, and readable storage medium - Google Patents

Method for generating head surface drawing file, rendering method, and readable storage medium

Info

Publication number
CN114693884B
CN114693884B (application CN202210332128.2A)
Authority
CN
China
Prior art keywords
head
head surface
image
vertex
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210332128.2A
Other languages
Chinese (zh)
Other versions
CN114693884A (en)
Inventor
张延慧
杨镇郡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Younao Yinhe Hunan Technology Co ltd
Original Assignee
Younao Yinhe Hunan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Younao Yinhe Hunan Technology Co ltd filed Critical Younao Yinhe Hunan Technology Co ltd
Priority to CN202210332128.2A priority Critical patent/CN114693884B/en
Publication of CN114693884A publication Critical patent/CN114693884A/en
Application granted granted Critical
Publication of CN114693884B publication Critical patent/CN114693884B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a method for generating a head surface drawing file, a method for rendering a head surface image and a readable storage medium, and belongs to the field of medical image processing. The generating method comprises the following steps: step S1, a head magnetic resonance imaging image is preprocessed to obtain a head surface mask; step S2, generating a head surface polygonal grid representation image by the head surface mask so as to obtain a vertex data file of a polygonal primitive in the head surface polygonal grid representation image; and step S3, obtaining a vertex normal vector file of the polygon primitive according to the head polygon mesh representation image. The method for generating the head surface drawing file, the method for rendering the head surface image and the readable storage medium can enable a user to obtain the head surface drawing file suitable for Web interactive interface rendering only by inputting the head magnetic resonance imaging image.

Description

Method for generating head surface drawing file, rendering method, and readable storage medium
Technical Field
The present invention relates to the field of medical image processing, and in particular, to a method for generating a head surface drawing file, a method for rendering a head surface image, and a readable storage medium.
Background
Modern transcranial magnetic stimulation (Transcranial Magnetic Stimulation, TMS for short) is usually combined with optical navigation to achieve accurate treatment. In optical navigation, the patient's head and the patient's head Magnetic Resonance Imaging (MRI) data need to be registered to obtain the registration relation between the patient's head coordinate system and the magnetic resonance imaging data coordinate system, so key points for registration need to be selected on the surface of the head magnetic resonance imaging image; in addition, in order to intuitively display the positional relationship between the treatment probe device (e.g., a treatment flap) and the patient's head, surface rendering of the head MRI is also indispensable.
Therefore, it is necessary to provide a method of generating a head surface drawing file, a method of rendering a head surface image, and a readable storage medium, which are accurate and lightweight, and which can be used for Web interactive interface rendering.
Disclosure of Invention
In order to solve at least one aspect of the above-mentioned problems and disadvantages in the prior art, the present invention provides a method for generating a head surface drawing file, a method for rendering a head surface image, and a readable storage medium, which at least partially enable a user to obtain a head surface drawing file suitable for Web interactive interface rendering simply by inputting a head magnetic resonance imaging image. The technical scheme is as follows:
an object of the present invention is to provide a method of generating a head surface drawing file.
Another object of the present invention is to provide a method of rendering a head surface image.
It is a further object of the invention to provide a readable storage medium.
According to an aspect of the present invention, there is provided a method of generating a head surface drawing file, the method comprising the steps of:
step S1, a head magnetic resonance imaging image is preprocessed to obtain a head surface mask;
step S2, generating a head surface polygonal grid representation image by the head surface mask so as to obtain a vertex data file of a polygonal primitive in the head surface polygonal grid representation image;
and step S3, obtaining the vertex normal vector file of the polygon primitive according to the head polygon mesh representation image.
The method of generating a head surface drawing file, the method of rendering a head surface image, and the readable storage medium according to the present invention have at least one of the following advantages:
(1) The generation method of the head surface drawing file, the rendering method of the head surface image and the readable storage medium can enable a user to obtain the head surface drawing file suitable for Web interactive interface rendering only by inputting the head magnetic resonance imaging image;
(2) The generation method of the head surface drawing file, the rendering method of the head surface image and the readable storage medium can better support the use of a neurosurgery imaging system based on cloud service, and fill the blank that the display of the outer surface of the head is lacking in the traditional brain analysis software;
(3) According to the method for generating the head surface drawing file, the method for rendering the head surface image, and the readable storage medium provided by the invention, a predetermined ratio of vertices is removed by sampling the head surface polygonal grid representation image so as to lighten the vertex data file and the vertex normal vector file, which saves storage space and improves network transmission efficiency while allowing a user to view the scalp surface drawing file in real time;
(4) According to the method for generating the head surface drawing file, the method for rendering the head surface image, and the readable storage medium provided by the invention, normalization, segmentation, smoothing, sampling, and other methods are comprehensively combined in the process of generating the head surface drawing file, so that the whole processing flow is more robust and the head surface drawing file is more accurate and smooth, improving the visual effect.
Drawings
These and/or other aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the preferred embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a method of generating a head surface drawing file according to one embodiment of the invention;
fig. 2a and 2b are schematic diagrams comparing the head MRI image of fig. 1 before and after image segmentation, wherein fig. 2a is a schematic diagram of the head MRI image before image segmentation, and fig. 2b is a schematic diagram of the head MRI image after image segmentation;
fig. 3 is a schematic view of the head surface mask of fig. 1.
Detailed Description
The technical scheme of the invention is further specifically described below through examples and with reference to the accompanying drawings. In the specification, the same or similar reference numerals denote the same or similar components. The following description of embodiments of the present invention with reference to the accompanying drawings is intended to illustrate the general inventive concept and should not be taken as limiting the invention.
In order to accurately and quickly display the surface of a head Magnetic Resonance Imaging (MRI) image in the interactive interface, so as to more precisely select key points for registration and/or to intuitively display the positional relationship between the detection device (e.g., a treatment flap) and the patient's head, we provide a method of generating a head surface drawing file.
The term "head" as used herein includes the scalp and face. Wherein the term "scalp" can be understood broadly as soft tissue overlaying the skull bone. The term "face" is to be understood broadly as all exposed skin in front of the head.
The term "volume data" as used herein is to be understood broadly as three-dimensional image data describing the distribution of some signal intensity within a cuboid region of three-dimensional space; for example, an image obtained by magnetic resonance scanning is volume data.
The term "voxel" as used herein is similar to a pixel in a two-dimensional image and can be interpreted broadly as the smallest unit of digital data across a three-dimensional spatial partition.
The term "coronal plane" as used herein is to be understood broadly as a plane passing through the human body along the vertical and horizontal axes, dividing the human body into front and rear portions. The term "axial plane" is to be understood in a broad sense as a plane normal to the height of the body, which divides the body into an upper and a lower part. The term "sagittal plane" is to be understood broadly as a plane through the human body along the vertical axis and the longitudinal axis, which divides the human body into left and right parts.
Referring to fig. 1, a method of generating a head surface drawing file according to an embodiment of the present invention is shown. The generating method comprises the following steps:
step S1, a head magnetic resonance imaging image is preprocessed to obtain a head surface mask;
step S2, generating a head surface polygonal grid representation image by the head surface mask so as to obtain a vertex data file of a polygonal primitive in the head surface polygonal grid representation image;
and step S3, obtaining the vertex normal vector file of the polygon primitive according to the head polygon mesh representation image.
In one example, the head magnetic resonance imaging image includes a head T1 weighted image, a head T2 weighted image, a head diffusion weighted image, and the like. Preferably, the head magnetic resonance imaging image is a head T1 weighted image.
In one example, since high-resolution T1-weighted images generated by magnetic resonance scanners are typically corrupted by susceptibility artifacts and radio frequency (RF) field inhomogeneities, it is desirable to normalize the head T1-weighted image prior to image preprocessing to reduce the effects of susceptibility artifacts and RF field inhomogeneities on the image. Normalization of the head T1-weighted image includes scale normalization and gray-scale normalization, where scale normalization normalizes the three-dimensional size and spatial resolution of the head T1-weighted image. A person skilled in the art can perform the scale normalization first and then the gray-scale normalization as needed; of course, the gray-scale normalization may also be performed first and the scale normalization afterwards, so long as the three-dimensional size, spatial resolution, and gray-scale range of the image can be adjusted to the predetermined sizes.
In one example, the patient's head T1-weighted image is first loaded into memory; the data format may be dicom, nifti, mgz, or another format. The three-dimensional size of the raw head MRI data is then normalized to, for example, 256 × 256 × 256, although those skilled in the art may also normalize the three-dimensional size to other dimensions, provided that different scan data of the same patient or of different patients can be normalized to the same size in each of the three dimensions.
In one example, after normalizing the three-dimensional size of the original head MRI image, the spatial resolution of the head MRI image may also be normalized to, for example, 1 mm × 1 mm × 1 mm (i.e., each voxel is 1 mm × 1 mm × 1 mm). It will be appreciated by those skilled in the art that the voxel size determines the spatial resolution: the smaller the voxel, the higher the spatial resolution, and vice versa. The spatial resolution of the image can be adjusted by a person skilled in the art according to actual needs.
In one example, after normalizing the spatial resolution of the head MRI image, gray-scale normalization is also required, i.e., the gray value of each voxel is normalized to the range 0 to 255. Scale normalization and gray-scale normalization adjust the three-dimensional size, spatial resolution, and gray-scale range of the original head T1-weighted image to predetermined sizes, which reduces the complexity of subsequent processing, improves the robustness of the whole processing flow, reduces or even eliminates the influence of susceptibility artifacts and RF field inhomogeneity on the image, and thereby improves the accuracy of the images processed in subsequent steps.
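The following is a minimal sketch of the scale and gray-scale normalization described above, assuming a nibabel-readable T1-weighted volume; the library choices, function names, and target sizes shown are illustrative assumptions rather than the patented implementation.

```python
import nibabel as nib
import numpy as np
from scipy.ndimage import zoom

TARGET_SHAPE = (256, 256, 256)    # assumed target three-dimensional size
TARGET_SPACING = (1.0, 1.0, 1.0)  # assumed target spatial resolution in mm

def normalize_t1(path):
    img = nib.load(path)                        # nifti assumed; dicom/mgz need other loaders
    data = img.get_fdata().astype(np.float32)
    spacing = img.header.get_zooms()[:3]

    # Scale normalization: resample to 1 mm isotropic voxels, then pad/crop to 256^3.
    factors = [s / t for s, t in zip(spacing, TARGET_SPACING)]
    data = zoom(data, factors, order=1)
    out = np.zeros(TARGET_SHAPE, dtype=np.float32)
    crop = [min(a, b) for a, b in zip(data.shape, TARGET_SHAPE)]
    out[:crop[0], :crop[1], :crop[2]] = data[:crop[0], :crop[1], :crop[2]]

    # Gray-scale normalization: map voxel intensities to the 0-255 range.
    lo, hi = out.min(), out.max()
    out = (out - lo) / max(hi - lo, 1e-6) * 255.0
    return out.astype(np.uint8)
```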
After normalization of the image, the center position of the entire head in the head T1-weighted image may also be calculated, and the position of the head within the volume data adjusted according to this center position, so that the center of the head is located at the center of the volume data.
In one example, a method of obtaining and adjusting a head center position includes the steps of:
Statistics are taken over the gray values of all voxels in the head T1-weighted image to obtain the average coordinate of all voxels whose gray value is greater than a preset gray threshold (for example, 0), and this average coordinate is taken as the center position of the head in the head T1-weighted image. Then, an offset between the head center coordinates and the coordinates of the center of the volume data (for example, (128, 128, 128)) is calculated, and all voxels in the head T1-weighted image are displaced according to this offset so that the head center coincides with the center of the volume data. During displacement, a voxel is discarded when any one of its three-dimensional coordinates falls outside the [0, 256] coordinate range.
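A hedged numpy sketch of this centering step follows: it averages the coordinates of voxels whose gray value exceeds the threshold, shifts the volume so that this average lands at the volume center, and drops voxels displaced outside the index range. The threshold default and array handling are assumptions.

```python
import numpy as np

def center_head(volume, gray_threshold=0):
    coords = np.argwhere(volume > gray_threshold)        # voxels counted as head tissue
    head_center = coords.mean(axis=0)                    # average coordinate of those voxels
    volume_center = (np.array(volume.shape) - 1) / 2.0   # close to (128, 128, 128) for 256^3 data
    offset = np.round(volume_center - head_center).astype(int)

    shifted = np.zeros_like(volume)
    new_coords = coords + offset
    keep = np.all((new_coords >= 0) & (new_coords < np.array(volume.shape)), axis=1)
    src, dst = coords[keep], new_coords[keep]             # discard voxels pushed out of range
    shifted[dst[:, 0], dst[:, 1], dst[:, 2]] = volume[src[:, 0], src[:, 1], src[:, 2]]
    return shifted
```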
In one example, a head surface mask is obtained by image preprocessing. As shown in fig. 2a and 2b, the head T1-weighted image 10 is subjected to image segmentation to obtain a head mask 11. The image segmentation method may be thresholding (binarization), a region growing method, an image segmentation method based on template matching, an image segmentation method based on machine learning, or the like. It was found in our research that, when thresholding is used to segment the image, some parts of the head surface are air-containing cavities (such as the oral cavity, nasal cavity, and external auditory canal) and some parts of the skull are easily misidentified as background owing to their low gray values, so an accurate head mask cannot be obtained. Image segmentation methods based on template matching or machine learning require registration, data annotation, and model training, and therefore suffer from high development cost and long algorithm running time. After repeated study, it was found that segmenting the image with the region growing method overcomes the above problems and yields a more accurate head mask efficiently and quickly.
In one example, the region growing method is used to segment the head T1-weighted image, preferably on its coronal slices. In addition to executing the region growing method on each coronal image of the same patient, the same region growing method can also be executed on the whole three-dimensional head data, so that a mask of the patient's entire head is obtained without additional stitching. Of course, one skilled in the art may also use either the sagittal plane or the axial plane of the head T1-weighted image for image segmentation.
In one example, in the region growing method, the upper-left corner of each two-dimensional head T1-weighted image to be processed is set as the seed point, and the region is expanded within the 8-neighborhood. In one example, when region growing is performed on the head T1-weighted image, a pixel is judged to belong to the same region as the seed point if the gray difference between it and the adjacent pixel in the 8-neighborhood lies within the gray threshold range. For example, the gray threshold range is set to a gray difference of 10 or less: when the gray difference between the seed point and an adjacent pixel in the 8-neighborhood is less than or equal to 10, that pixel is determined to belong to the same region as the seed point.
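Below is a sketch of this per-slice region growing: the upper-left pixel of each coronal slice seeds an 8-neighborhood flood fill that absorbs neighbors whose gray difference is within the threshold. Treating the grown region as background and taking its complement as the head region, and treating axis 0 as the coronal axis, are assumptions of this sketch, not statements of the patented method.

```python
from collections import deque
import numpy as np

def grow_region(slice2d, threshold=10):
    h, w = slice2d.shape
    grown = np.zeros((h, w), dtype=bool)
    seed = (0, 0)                                   # upper-left corner as the seed point
    grown[seed] = True
    queue = deque([seed])
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]   # 8-neighborhood offsets
    while queue:
        y, x = queue.popleft()
        for dy, dx in neighbors:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                # Absorb the neighbor if its gray difference is within the threshold.
                if abs(int(slice2d[ny, nx]) - int(slice2d[y, x])) <= threshold:
                    grown[ny, nx] = True
                    queue.append((ny, nx))
    return grown

def head_mask_from_coronal(volume, threshold=10):
    # Assumed: axis 0 indexes coronal slices; the grown background is inverted per slice.
    return np.stack([~grow_region(volume[i], threshold) for i in range(volume.shape[0])])
```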
In one example, because the region growing method is relatively sensitive to noise, and because the gray values of the auditory canal, nasal cavity, and oral cavity are typically lower than the gray values of the head surface, some small holes appear in the extracted coronal region of the head, i.e., in the head mask. We therefore perform a second morphological processing on the extracted head mask (the "first morphological processing" is described in detail below) to fill the holes in the head mask.
In one example, the second morphological processing is a closing operation. The closing operation dilates and then erodes each extracted head coronal plane with a structuring element (e.g., a diamond-shaped structuring element) to obtain a complete and refined head mask 11 (as shown in fig. 2 b). The closing operation not only eliminates the holes in the head mask 11 but also smooths its contour.
In one example, the head mask processed by the closing operation is passed through a first morphological processing to obtain the head surface mask 12. The first morphological processing is an erosion operation, and the head surface mask 12 is a mask having a preset thickness δ (as shown in fig. 3).
In one example, the structuring element used in the erosion operation is a spherical structuring element whose radius is set to the preset thickness δ. Our research found that when the thickness is only 1 mm, the staircase effect of the polygonal mesh representation image extracted in the subsequent step is obvious and the reconstruction effect is poor, whereas when the thickness exceeds 10 mm, a distinct double-layer surface appears, which causes visual confusion and greatly impairs the visual effect. We therefore set the range of the preset thickness δ to 1 mm < δ < 10 mm.
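A hedged scipy sketch of the two morphological steps follows: a closing with a diamond-like structuring element fills the air cavities, and an erosion with a spherical structuring element of radius δ is then used to form a shell. Taking the surface mask as the difference between the closed mask and its eroded version is one possible realization assumed here, as are the iteration count and default δ.

```python
import numpy as np
from scipy import ndimage

def spherical_element(radius):
    r = int(radius)
    grid = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return (grid ** 2).sum(axis=0) <= r ** 2            # ball of the given radius

def head_surface_mask(head_mask, delta=3):
    # Second morphological processing: closing (dilation then erosion) fills small holes.
    diamond = ndimage.generate_binary_structure(3, 1)    # diamond-like 3D structuring element
    closed = ndimage.binary_closing(head_mask, structure=diamond, iterations=3)

    # First morphological processing: erosion with a spherical element of radius delta;
    # the shell between the closed mask and its erosion has the preset thickness delta.
    eroded = ndimage.binary_erosion(closed, structure=spherical_element(delta))
    return closed & ~eroded
```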
In order to improve drawing efficiency and data quality and to produce a better visual effect, the head surface polygonal mesh representation image needs to be generated from the obtained head surface mask of preset thickness so as to extract the vertex data file of the polygonal primitives. In one example, the polygonal mesh representation image may be a triangular mesh representation image, a quadrilateral mesh representation image, or the like. Because bow-tie problems arise when the number of sides of the polygon is greater than 3 (as in a quadrilateral mesh image), while an image represented by a triangular mesh allows computer performance to be optimized, it is preferable to generate a triangular mesh representation image directly from the head surface mask. Of course, a person skilled in the art may also generate other types of polygonal mesh representation images from the head surface mask, i.e., images in which the number of sides of the polygons is greater than 3. When the generated image is a polygonal mesh representation image of another type, it needs to be subdivided (e.g., triangulated) and converted into triangle data. Since the other types of polygons must be converted into triangles, the number of patches in the polygonal mesh representation image is doubled, which increases the computational load of the computer.
In one example, the obtained head surface mask is passed through the Marching Cubes algorithm to generate the head surface triangular mesh representation image. Because the generated head surface mask of preset thickness contains some noise, it exhibits a locally rough appearance, and this roughness directly affects the smoothness of the triangular mesh representation image generated in the subsequent step; the head surface mask therefore needs to be smoothed to obtain a noise-reduced head surface mask. The smoothing may be Gaussian smoothing with a Gaussian convolution kernel, or constraint smoothing with a smoothing operator. Gaussian smoothing is relatively fast but destroys some details, whereas constraint smoothing preserves more details; constraint smoothing is also suitable for processing data of smaller volume, so it achieves a better smoothing effect while retaining more details. Since the head surface mask provided by the present invention has already been size-normalized to a relatively small data volume (e.g., 256 × 256 × 256), it is preferable to perform constraint-smoothing noise reduction on the head surface mask of preset thickness.
In one example, the noise-reduced head surface mask is input into the Marching Cubes algorithm, which outputs the vertex data file of the triangle primitives in the head surface triangular mesh representation image. In step S2, the vertex data file contains the vertex data of all triangle primitives in the image.
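A minimal sketch of this part of step S2 is given below, assuming skimage's Marching Cubes implementation; Gaussian smoothing is used here only as a stand-in because the constraint smoothing preferred above is not shown, and the sigma and iso-level values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import marching_cubes

def surface_mesh(surface_mask, sigma=1.0, level=0.5):
    # Smooth the thick surface mask to suppress the local roughness before meshing.
    smoothed = gaussian_filter(surface_mask.astype(np.float32), sigma=sigma)
    # verts: (N, 3) vertex positions; faces: (M, 3) vertex indices of the triangle primitives.
    verts, faces, normals, _ = marching_cubes(smoothed, level=level)
    return verts, faces, normals
```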
In one example, during the subsequent rendering process, the obtained vertex data file needs to be transmitted to the WebGL server for rendering. The vertex data file may contain whatever data is needed; that is, all the data making up a vertex are defined according to actual needs. For example, the attributes of the vertex data are defined to include vertex position, vertex texture, vertex color, and the like, but may also include diffuse and specular colors, shading effects, matrix rows, etc.
In one example, the vertex position is at least one of an ordered vertex list and an ordered position list, and may also include an ordered vertex index list. By arranging the vertex information (such as the number of vertices, their positions and ordering, and the number of triangular patches) in ordered lists, this information can be combined in the subsequent rendering process with the vertex winding order (e.g., clockwise) set in the layout to determine the orientation of the triangular patches in the drawn image.
In one example, to ensure that the rendered image has a curved surface effect, we should also generate a vertex normal vector file. The vertex normal vector file is an ordered list of vertex normal vectors including the vertices of all triangular patches. In one example, the method of obtaining the vertex normal vector includes the steps of:
step S31, obtaining a shared vertex and normal vectors of all triangle primitives connected with the shared vertex according to the vertex data file of the head surface triangle mesh representation image;
step S32, carrying out mean value and normalization on normal vectors of all triangle primitives connected with the shared vertex to obtain the normal vector of the shared vertex;
and step S33, iterating steps S31-S32 to obtain the vertex normal vectors of all triangle primitives in the image.
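A pure-numpy sketch of steps S31-S33 follows: each triangle's normal is computed and normalized, accumulated onto the triangle's three (shared) vertices, then averaged and normalized per vertex. Array names and the epsilon guards are illustrative assumptions.

```python
import numpy as np

def vertex_normals(verts, faces):
    # Step S31: normal vector of every triangle primitive connected to its vertices.
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    face_n = np.cross(v1 - v0, v2 - v0)
    face_n /= np.maximum(np.linalg.norm(face_n, axis=1, keepdims=True), 1e-12)

    # Accumulate the normals of all triangles sharing each vertex.
    acc = np.zeros_like(verts)
    cnt = np.zeros(len(verts))
    for i in range(3):
        np.add.at(acc, faces[:, i], face_n)
        np.add.at(cnt, faces[:, i], 1)

    # Step S32: mean of the connected triangle normals, then normalization.
    mean_n = acc / np.maximum(cnt, 1)[:, None]
    norm = np.linalg.norm(mean_n, axis=1, keepdims=True)
    return mean_n / np.maximum(norm, 1e-12)              # vertex normal vectors (step S33 result)
```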
In one example, because the number of triangle primitives extracted from the acquired head surface triangular mesh representation image can be as high as 2000K-5000K, a model of this size is difficult to store and render even on today's graphics workstations. In order to reduce the number of triangle primitives in the triangular mesh so as to obtain a lightweight vertex data file and a lightweight vertex normal vector file, thereby saving storage space, the head surface triangular mesh representation image can also be sampled. Step S31 therefore further includes:
sampling the head surface polygonal mesh representation image to remove a predetermined ratio of vertices so as to obtain a vertex data file of the retained vertices (i.e., a lightweight vertex data file);
and obtaining the normal vectors of all shared vertices and of all triangle primitives connected with the corresponding shared vertices according to the vertex data file of the retained vertices (i.e., a lightweight vertex normal vector file).
The lightweight vertex data file and the lightweight vertex normal vector file obtained through sampling allow a user to view the head surface drawing file in real time and to view the head surface image rendered through WebGL in real time, without being limited by the memory capacity available to the browser or by the network bandwidth.
In one example, the sampling method is a mesh reduction algorithm; a static reduction algorithm, a dynamic reduction algorithm, or a viewpoint-based reduction algorithm may be employed. Preferably, the sampling method is a dynamic reduction algorithm, such as an edge collapse operation. In one example, the edge collapse operation may be implemented using, for example, an edge collapse reduction algorithm or a mesh reduction algorithm based on the quadric error metric.
In one example, the predetermined ratio of vertices to be removed serves as the termination condition of the iteration, i.e., the iteration stops when the ratio of deleted vertices reaches the predetermined ratio. In one example, the predetermined ratio r is set in the range 0.5 ≤ r ≤ 0.9. For example, a predetermined ratio r of 0.9 means that 90% of the vertices are removed and only 10% are retained. With this design, more of the detail information in the image can be retained.
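One way to sketch this lightweighting step is Open3D's quadric-error-metric simplification, used here only as one possible edge-collapse implementation; Open3D itself is an assumption of this sketch and is not named by the patent, and the removal ratio is approximated by targeting a fraction of the original triangle count. Vertex normals would be recomputed on the simplified mesh afterwards.

```python
import numpy as np
import open3d as o3d

def decimate(verts, faces, removal_ratio=0.9):
    mesh = o3d.geometry.TriangleMesh()
    mesh.vertices = o3d.utility.Vector3dVector(verts)
    mesh.triangles = o3d.utility.Vector3iVector(faces)

    # Removing a ratio r of vertices is approximated by keeping roughly (1 - r) of the triangles.
    target = max(1, int(len(faces) * (1.0 - removal_ratio)))
    simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    return np.asarray(simplified.vertices), np.asarray(simplified.triangles)
```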
In one example, step S32 further includes:
averaging the normal vectors of all triangle primitives connected with the shared vertex to obtain the mean normal vector of the shared vertex;
normalizing the mean normal vector to obtain the normal vector of the shared vertex.
In one example, the method of generating the head surface drawing file further includes the steps of:
saving the lightweight vertex data file and the lightweight vertex normal vector file in a file format supported by the Web graphics library (WebGL), such as the .obj or .json format; and/or
compressing the lightweight vertex data file and the lightweight vertex normal vector file with a compression format supported by the Web graphics library (WebGL), e.g., the .obj.gz file format, to reduce network transmission time, so that the user can view the head surface drawing file in real time and render it through WebGL to view the rendered head surface image, without being limited by the memory capacity available to the browser or by the network bandwidth.
Taking the obj format as an example, the save operation writes, in order, the mesh type (e.g., P for polygon, L for line segment), the mesh material, the vertex coordinates, the vertex normal vectors, the RGBD values, and the vertex combination indices of the triangular patches (i.e., the triangle primitives), and finally saves the result as a surface drawing file with the ".obj" suffix. To reduce network transmission time, the above surface drawing file may be compressed using script commands to obtain a compressed surface drawing file ending with ".obj.gz"; the compression rate is typically around 50%.
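An illustrative writer for the lightweight files is sketched below: it emits vertices ("v") and vertex normals ("vn"), then faces ("f") indexing both, and gzips the result as a ".obj.gz" file. It reproduces only part of the fields listed above (mesh type, material, and RGBD values are omitted), so it is a simplified assumption rather than the exact save routine.

```python
import gzip

def save_obj_gz(path, verts, normals, faces):
    lines = ["# head surface drawing file"]
    lines += ["v {:.4f} {:.4f} {:.4f}".format(*v) for v in verts]
    lines += ["vn {:.4f} {:.4f} {:.4f}".format(*n) for n in normals]
    # OBJ indices are 1-based; v//vn reuses the same index for position and normal.
    lines += ["f {0}//{0} {1}//{1} {2}//{2}".format(*(f + 1)) for f in faces]
    text = "\n".join(lines) + "\n"

    with open(path, "w") as fh:                 # plain ".obj" for direct loading
        fh.write(text)
    with gzip.open(path + ".gz", "wt") as gz:   # compressed ".obj.gz" for network transmission
        gz.write(text)
```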
In one example, a method of rendering a head surface image is provided according to another embodiment of the present invention. The rendering method of the head surface image comprises the following steps:
generating a head surface drawing file according to the generation method of the head surface drawing file;
rendering by a shader to obtain a rendered head surface image.
In one example, the head surface drawing file is sequentially input into a vertex shader and a fragment shader for rendering to obtain the rendered head surface image. The vertex shader performs a number of computations on the vertex data file and the vertex normal vector file to obtain, among other things, the position and color of each vertex on the interactive interface. The fragment shader computes the final color of each fragment on the interactive interface from the vertex data file and the vertex normal vector file.
In one example, the rendering framework for the vertex data file and the vertex normal vector file needs to support WebGL. Taking the Vue framework as an example, the saved or compressed surface drawing file is imported into the Vue framework, and the surface drawing effect of the head surface image can then be viewed in the visualization window.
In one example, a readable storage medium is provided according to yet another embodiment of the present invention. A "readable storage medium" of embodiments of the present invention refers to any medium that participates in providing programs or instructions to a processor for execution. Such a medium may take many forms, including but not limited to non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as a storage device. Volatile media include dynamic memory, such as main memory. Transmission media include coaxial cables, copper wire, and optical fibers, including the wires that comprise a bus. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of readable storage media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, a DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
The readable storage medium stores thereon a program or instructions that, when executed by a processor, perform the above-described method of generating a head surface drawing file and method of rendering a head surface image.
The method of generating a head surface drawing file, the method of rendering a head surface image, and the readable storage medium according to the present invention have at least one of the following advantages:
(1) The generation method of the head surface drawing file, the rendering method of the head surface image and the readable storage medium can enable a user to obtain the head surface drawing file suitable for Web interactive interface rendering only by inputting the head magnetic resonance imaging image;
(2) The generation method of the head surface drawing file, the rendering method of the head surface image and the readable storage medium can better support the use of a neurosurgery imaging system based on cloud service, and fill the blank that the display of the outer surface of the head is lacking in the traditional brain analysis software;
(3) According to the method for generating the head surface drawing file, the method for rendering the head surface image, and the readable storage medium provided by the invention, a predetermined ratio of vertices is removed by sampling the head surface polygonal grid representation image so as to lighten the vertex data file and the vertex normal vector file, which saves storage space and improves network transmission efficiency while allowing a user to view the scalp surface drawing file in real time;
(4) According to the method for generating the head surface drawing file, the method for rendering the head surface image, and the readable storage medium provided by the invention, normalization, segmentation, smoothing, sampling, and other methods are comprehensively combined in the process of generating the head surface drawing file, so that the whole processing flow is more robust and the head surface drawing file is more accurate and smooth, improving the visual effect.
Although a few embodiments of the present general inventive concept have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the general inventive concept, the scope of which is defined in the claims and their equivalents.

Claims (17)

1. A method of generating a head surface rendering file for Web interactive interface rendering, the method comprising the steps of:
step S1, a head magnetic resonance imaging image is preprocessed to obtain a head surface mask;
step S2, generating a head surface polygonal grid representation image by the head surface mask so as to obtain a vertex data file of a polygonal primitive in the head surface polygonal grid representation image;
step S3, obtaining normal vectors of shared vertexes and all polygon primitives connected with the shared vertexes according to vertex data files of the head surface polygon mesh representation image, then obtaining normal vectors of the shared vertexes based on the normal vectors of all polygon primitives connected with the shared vertexes, iteratively obtaining normal vectors of the shared vertexes and all polygon primitives connected with the shared vertexes, and obtaining vertex normal vector files of the polygon primitives,
wherein, obtaining shared vertexes and normal vectors of all polygon primitives connected with the shared vertexes according to vertex data files of the head polygon mesh representation image, comprising the following steps:
removing vertices of a predetermined ratio from the head surface polygon mesh representation image by sampling to obtain a vertex data file retaining the vertices;
and obtaining normal vectors of the shared vertex and all polygon primitives connected with the shared vertex according to the vertex data file of the reserved vertex.
2. The method of generating according to claim 1, wherein,
in step S1, the image preprocessing includes the steps of:
obtaining a head mask through image segmentation of the head magnetic resonance imaging image;
the head mask is subjected to a first morphological treatment to obtain a head surface mask.
3. The generating method according to claim 2, wherein,
the head mask is passed through a second morphological process to obtain a refined head mask prior to the first morphological process.
4. The method of generating according to claim 3, wherein,
the second morphological processing is to process the head mask through a closing operation to obtain a refined head mask.
5. The generating method according to claim 2, wherein,
the head surface mask has a preset thickness δ, and the preset thickness δ is set to 1 mm < δ < 10 mm.
6. The method of generating according to claim 5, wherein,
the first morphological treatment is to obtain a head surface mask from the head mask by an etching operation.
7. The generating method according to any one of claims 2 to 6, wherein,
in step S2, the vertex data file includes all vertex data in the head surface polygon mesh representation image,
the attributes of the vertex data include vertex position, vertex texture, and vertex color.
8. The generating method according to claim 7, wherein,
the vertex position is at least one of an ordered vertex list and an ordered position list.
9. The generating method according to any one of claims 2 to 6, wherein,
the step S2 further includes:
subjecting the head surface mask to smoothing processing to obtain a noise-reduced head surface mask;
and generating the head surface polygonal grid representation image through a moving cube algorithm by using the head surface mask after noise reduction.
10. The generating method according to claim 9, wherein,
the head surface polygonal mesh representation image is a head surface triangular mesh representation image.
11. The method of generating according to claim 1, wherein,
the vertex normal vector file includes vertex normal vectors of all polygon primitives in the head surface polygon mesh representation image,
the method for obtaining the vertex normal vector further comprises the following steps:
and carrying out mean value and normalization on normal vectors of all polygon primitives connected with the shared vertex to obtain the normal vector of the shared vertex.
12. The method of generating according to claim 11, wherein,
the vertex normal vector file is an ordered normal vector list formed by the vertex normal vectors of all the polygon primitives.
13. The method of generating according to claim 1, wherein,
before the image preprocessing, the method further comprises normalizing the head magnetic resonance imaging image, wherein the method for normalizing the head magnetic resonance imaging image comprises the following steps:
normalizing the magnetic resonance imaging image through scale normalization and gray scale normalization to obtain a normalized head magnetic resonance imaging image;
obtaining average coordinates of all voxel points with gray values larger than a preset gray threshold according to the normalized head magnetic resonance imaging image;
obtaining an offset value between the average coordinate and the coordinate of the central position of the volume data according to the average coordinate;
and adjusting the center position of the head to the center position of the volume data according to the offset value.
14. The method of generating according to claim 1, wherein,
the generating method further comprises the following steps:
storing the generated vertex data file and vertex normal vector file through a file format supported by a Web graphic library; and/or
And compressing the generated vertex data file and vertex normal vector file through a compression format supported by the Web graphic library.
15. A method of rendering a head surface image, the method comprising the steps of:
generating a head surface drawing file by the method of generating a head surface drawing file according to any one of claims 1 to 14;
the head surface rendering file is input into a shader for rendering to obtain a rendered head surface image.
16. The rendering method of claim 15, wherein,
the head surface drawing file is sequentially input into a vertex shader and a fragment shader for rendering to obtain the rendered head surface image.
17. A readable storage medium, characterized in that,
the readable storage medium has stored thereon a program or instructions that when executed by a processor perform at least one of:
a method of generating a head surface drawing file as claimed in any one of claims 1 to 14; and
a method of rendering a head surface image as claimed in claim 15 or 16.
CN202210332128.2A 2022-03-30 2022-03-30 Method for generating head surface drawing file, rendering method, and readable storage medium Active CN114693884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210332128.2A CN114693884B (en) 2022-03-30 2022-03-30 Method for generating head surface drawing file, rendering method, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210332128.2A CN114693884B (en) 2022-03-30 2022-03-30 Method for generating head surface drawing file, rendering method, and readable storage medium

Publications (2)

Publication Number Publication Date
CN114693884A CN114693884A (en) 2022-07-01
CN114693884B (en) 2023-10-13

Family

ID=82140128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210332128.2A Active CN114693884B (en) 2022-03-30 2022-03-30 Method for generating head surface drawing file, rendering method, and readable storage medium

Country Status (1)

Country Link
CN (1) CN114693884B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1663527A (en) * 2005-03-16 2005-09-07 南京航空航天大学 Brain function MRI movement correcting method
CN103077557A (en) * 2013-02-07 2013-05-01 河北大学 Adaptive hierarchical chest large data display implementation method
KR20160068204A (en) * 2014-12-05 2016-06-15 삼성전기주식회사 Data processing method for mesh geometry and computer readable storage medium of recording the same
CN113129418A (en) * 2021-03-02 2021-07-16 武汉联影智融医疗科技有限公司 Target surface reconstruction method, device, equipment and medium based on three-dimensional image
CN114037803A (en) * 2022-01-11 2022-02-11 真健康(北京)医疗科技有限公司 Medical image three-dimensional reconstruction method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5320334B2 (en) * 2009-09-24 2013-10-23 株式会社東芝 Image processing apparatus and program
US20110090222A1 (en) * 2009-10-15 2011-04-21 Siemens Corporation Visualization of scaring on cardiac surface

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1663527A (en) * 2005-03-16 2005-09-07 南京航空航天大学 Brain function MRI movement correcting method
CN103077557A (en) * 2013-02-07 2013-05-01 河北大学 Adaptive hierarchical chest large data display implementation method
KR20160068204A (en) * 2014-12-05 2016-06-15 삼성전기주식회사 Data processing method for mesh geometry and computer readable storage medium of recording the same
CN113129418A (en) * 2021-03-02 2021-07-16 武汉联影智融医疗科技有限公司 Target surface reconstruction method, device, equipment and medium based on three-dimensional image
CN114037803A (en) * 2022-01-11 2022-02-11 真健康(北京)医疗科技有限公司 Medical image three-dimensional reconstruction method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Xinde (王新德), "Research on a Computer-Assisted Orthopedic Surgery System Based on Three-Dimensional Reconstruction," China Master's Theses Full-text Database, Medicine & Health Sciences, pp. 7-32 *
Wang Xinde (王新德). Research on a Computer-Assisted Orthopedic Surgery System Based on Three-Dimensional Reconstruction. China Master's Theses Full-text Database, Medicine & Health Sciences. 2013, pp. 7-32. *

Also Published As

Publication number Publication date
CN114693884A (en) 2022-07-01

Similar Documents

Publication Publication Date Title
US11710242B2 (en) Methods and systems for image segmentation
CN109754394B (en) Three-dimensional medical image processing device and method
CN107798682B (en) Image segmentation system, method, apparatus and computer-readable storage medium
Han et al. CRUISE: cortical reconstruction using implicit surface evolution
US9367958B2 (en) Method and apparatus for correction of errors in surfaces
US20190096064A1 (en) System and method for processing colon image data
EP2102675B1 (en) Segmentation of magnetic resonance diffusion data
CN113344799A (en) System and method for reducing colored noise in medical images using deep neural networks
Ballester et al. Segmentation and measurement of brain structures in MRI including confidence bounds
CN111311705B (en) High-adaptability medical image multi-plane reconstruction method and system based on webgl
Jaffar et al. Anisotropic diffusion based brain MRI segmentation and 3D reconstruction
CN115965750B (en) Vascular reconstruction method, vascular reconstruction device, vascular reconstruction computer device, and vascular reconstruction program
CN112183541A (en) Contour extraction method and device, electronic equipment and storage medium
CN112233132A (en) Brain magnetic resonance image segmentation method and device based on unsupervised learning
US20110090222A1 (en) Visualization of scaring on cardiac surface
Yu et al. Biomedical image segmentation via constrained graph cuts and pre-segmentation
CN114693884B (en) Method for generating head surface drawing file, rendering method, and readable storage medium
CN109087357A (en) Scan orientation method, apparatus, computer equipment and computer readable storage medium
CN107170009B (en) Medical image-based goggle base curve data measurement method
CN109741439A (en) A kind of three-dimensional rebuilding method of two dimension MRI fetus image
Reska et al. Fast 3D segmentation of hepatic images combining region and boundary criteria
Bosc et al. Statistical atlas-based sub-voxel segmentation of 3D brain MRI
CN112837226B (en) Morphology-based mid-brain sagittal plane extraction method, system, terminal and medium
Reska et al. HIST-an application for segmentation of hepatic images
Hewer et al. Tongue mesh extraction from 3D MRI data of the human vocal tract

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230918

Address after: Room 1201-1208, 12th Floor, Building 1, Haipengyuan, No. 229 Guyuan Road, High tech Development Zone, Changsha City, Hunan Province, 410221

Applicant after: Younao Yinhe (Hunan) Technology Co.,Ltd.

Address before: Room 504, floor 5, building 2, No. 9 hospital, medical Road, Life Science Park, Changping District, Beijing 100083

Applicant before: Beijing Yinhe Fangyuan Technology Co.,Ltd.

Applicant before: Beijing yone Galaxy Technology Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant