CN115063386A - Medical image processing method, device, equipment and storage medium

Medical image processing method, device, equipment and storage medium

Info

Publication number
CN115063386A
CN115063386A (Application CN202210766049.2A)
Authority
CN
China
Prior art keywords
anatomical
medical image
anatomical point
point
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210766049.2A
Other languages
Chinese (zh)
Inventor
刘鸣谦
陈旭
王涛
赵大平
王吉喆
潘志君
邓争光
黄智勇
孙嘉明
李茜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Winning Health Technology Group Co Ltd
Original Assignee
Winning Health Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Winning Health Technology Group Co Ltd filed Critical Winning Health Technology Group Co Ltd
Priority to CN202210766049.2A priority Critical patent/CN115063386A/en
Publication of CN115063386A publication Critical patent/CN115063386A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30008 Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application provides a medical image processing method, apparatus, device, and storage medium, relating to the technical field of image processing. The method comprises the following steps: acquiring a medical image to be processed, wherein the medical image is a head computed tomography image of a target object; determining a skull segmentation result of the medical image and displaying it, wherein the skull segmentation result is a three-dimensional model of the head of the target object; inputting the medical image into a target anatomical point marking model to obtain three-dimensional position information of a plurality of anatomical points on the medical image; and adding anatomical point identifiers to the skull segmentation result according to the three-dimensional position information of the plurality of anatomical points, thereby generating and displaying a head anatomical point distribution map of the target object. With this scheme, the head anatomical point distribution map of the target object can be displayed in three dimensions, so that a clinician can grasp the distribution of the key anatomical points of the target object's head more intuitively, which solves the prior-art problem that the anatomical points cannot be displayed in three dimensions.

Description

Medical image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a medical image processing method, apparatus, device, and storage medium.
Background
Medical images such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images are obtained by scanning a part of the human body across different cross sections, and can help a doctor visually identify a patient's lesions. CT/MRI medical images are therefore of great significance in applications such as computer-aided diagnosis and surgical planning.
At present, before oral plastic surgery, an oral CT medical image of the patient is first acquired, and all anatomical points on it are marked manually to assist the clinician in formulating a treatment plan based on the marked points.
However, manually labeling all anatomical points on a CT medical image cannot establish the exact position of each anatomical point in three-dimensional space, so the anatomical points cannot be displayed in three dimensions.
Disclosure of Invention
The present application aims to provide a medical image processing method, apparatus, device and storage medium to solve the problem in the prior art that all anatomical points cannot be displayed in three dimensions.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides a medical image processing method, where the method includes:
acquiring a medical image to be processed, wherein the medical image is a head computed tomography image of a target object;
determining a skull segmentation result of the medical image, and displaying the skull segmentation result, wherein the skull segmentation result is a three-dimensional model of the head of the target object;
inputting the medical image into a pre-trained target anatomical point marker model to obtain three-dimensional position information of a plurality of anatomical points on the medical image;
and according to the three-dimensional position information of the plurality of anatomical points, adding an anatomical point identifier on the skull segmentation result, and generating and displaying a head anatomical point distribution map of the target object.
Optionally, the adding an anatomical point identifier to the skull segmentation result according to the three-dimensional position information of the plurality of anatomical points, and generating and displaying a head anatomical point distribution map of the target object includes:
acquiring a virtual volume of the anatomical point;
and adding an anatomical point identifier to the skull segmentation result for the anatomical point according to the three-dimensional position information of the anatomical point and the virtual volume of the anatomical point, and generating an anatomical point distribution map.
Optionally, the target anatomical point marker model is obtained by adopting the following training mode:
acquiring a training sample set consisting of a plurality of training samples; wherein the training sample is a medical image of the head;
randomly selecting at least one training sample from the training sample set;
inputting the at least one training sample into an initial anatomical point marker model, performing iterative training on the initial anatomical point marker model until a loss value of the initial anatomical point marker model meets a preset loss threshold, and taking the initial anatomical point marker model meeting the loss threshold as the target anatomical point marker model.
Optionally, the method further comprises:
determining a mark surface where the anatomical point is located according to the three-dimensional position information of the anatomical point and a surface equation of a preset plane;
determining the position relationship between the anatomical point and the anatomical points except the anatomical point according to the mark surface where the anatomical point is located, wherein the position relationship comprises the following steps: the distance value and the included angle between the anatomical point and the anatomical points except the anatomical point.
Optionally, the preset plane is the Frankfurt plane;
determining the mark surface where the anatomical point is located according to the position information of the anatomical point and a surface equation of a preset plane, wherein the determining comprises the following steps:
judging, according to the position information of the anatomical point, whether the anatomical point lies on a first plane parallel to the Frankfurt plane;
and if so, determining that the mark surface where the anatomical point is located is a horizontal plane.
Optionally, the method further comprises:
displaying a positional relationship of the anatomical point and an anatomical point other than the anatomical point.
Optionally, the determining a skull segmentation result of the medical image comprises:
according to the pixel value of each pixel point on the medical image and a preset gray threshold, removing non-osseous elements from the medical image to obtain a target image;
and performing three-dimensional reconstruction on the target image to obtain a skull segmentation result of the medical image.
In a second aspect, an embodiment of the present application further provides a medical image processing apparatus, including:
an acquisition module for acquiring a medical image to be processed, wherein the medical image is a head computed tomography image of a target object;
a processing module for determining a skull segmentation result of the medical image;
a display module, configured to display the skull segmentation result, where the skull segmentation result is a three-dimensional model of the head of the target object;
the processing module is further used for inputting the medical image to a pre-trained target anatomical point marker model to obtain three-dimensional position information of a plurality of anatomical points on the medical image;
the display module is further configured to add an anatomical point identifier to the skull segmentation result according to the three-dimensional position information of the plurality of anatomical points, and generate and display a head anatomical point distribution map of the target object.
Optionally, the display module is further configured to:
acquiring a virtual volume of the anatomical point;
and adding an anatomical point identifier to the skull segmentation result for the anatomical point according to the three-dimensional position information of the anatomical point and the virtual volume of the anatomical point, and generating an anatomical point distribution map.
Optionally, the obtaining module is further configured to obtain a training sample set composed of a plurality of training samples; wherein the training sample is a medical image of the head;
the device further comprises:
a selecting module, configured to randomly select at least one training sample from the training sample set;
and the training module is used for inputting the at least one training sample into an initial anatomical point marker model, performing iterative training on the initial anatomical point marker model until the loss value of the initial anatomical point marker model meets a preset loss threshold, and taking the initial anatomical point marker model meeting the loss threshold as the target anatomical point marker model.
Optionally, the processing module is further configured to:
determining a mark surface where the anatomical point is located according to the three-dimensional position information of the anatomical point and a surface equation of a preset plane;
determining the position relationship between the anatomical point and the anatomical points except the anatomical point according to the mark surface where the anatomical point is located, wherein the position relationship comprises the following steps: the distance value and the included angle between the anatomical point and the anatomical points except the anatomical point.
Optionally, the preset plane is the Frankfurt plane;
the processing module is further configured to:
judging, according to the position information of the anatomical point, whether the anatomical point lies on a first plane parallel to the Frankfurt plane;
and if so, determining that the mark surface where the anatomical point is located is a horizontal plane.
Optionally, the display module is further configured to:
displaying a positional relationship of the anatomical point and an anatomical point other than the anatomical point.
Optionally, the processing module is further configured to:
according to the pixel value of each pixel point on the medical image and a preset gray threshold, removing non-osseous elements from the medical image to obtain a target image;
and performing three-dimensional reconstruction on the target image to obtain a skull segmentation result of the medical image.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the method as provided by the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the method as provided in the first aspect.
The beneficial effect of this application is:
the embodiment of the application provides a medical image processing method, a device, equipment and a storage medium, wherein the method comprises the following steps: acquiring a medical image to be processed, wherein the medical image is a head computed tomography image of a target object; determining a skull segmentation result of the medical image, and displaying the skull segmentation result, wherein the skull segmentation result is a three-dimensional model of the head of the target object; inputting the medical image into a pre-trained target anatomical point marker model to obtain three-dimensional position information of a plurality of anatomical points on the medical image; and adding an anatomical point identifier on the skull segmentation result according to the three-dimensional position information of the plurality of anatomical points, and generating and displaying a head anatomical point distribution map of the target object. In the scheme, a skull segmentation result of a target object is obtained according to a head medical image of the target object, and the skull segmentation result of the target object is displayed in a three-dimensional space; meanwhile, a head medical image of a target object is input into a target anatomical point marker model to obtain three-dimensional position information of a plurality of anatomical points on the medical image output by the target anatomical point marker model, then, an anatomical point identifier is added to each anatomical point in a skull segmentation result of the target object by combining the three-dimensional position information of the plurality of anatomical points to generate a head anatomical point distribution map of the target object, and finally, the head anatomical point distribution map of the target object is displayed in a three-dimensional mode.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a medical image processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a skull segmentation result provided in an embodiment of the present application;
fig. 4 is a schematic flow chart of another medical image processing method provided in the embodiment of the present application;
FIG. 5 is a distribution map of a plurality of anatomical points on a head of a target subject provided by an embodiment of the present application;
FIG. 6 is a diagram illustrating a distribution map of anatomical points of a head of a target object and a medical image of the head of the target object on different cross-sections according to an embodiment of the present application;
fig. 7 is a schematic flowchart of another medical image processing method provided in an embodiment of the present application;
fig. 8 is a schematic flow chart of another medical image processing method provided in the embodiment of the present application;
fig. 9 is a schematic flowchart of another medical image processing method provided in an embodiment of the present application;
fig. 10 is a schematic flow chart of another medical image processing method provided in the embodiment of the present application;
fig. 11 is a schematic structural diagram of a medical image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure; the electronic device may be a processing device such as a computer or a server, for example, to implement the medical image processing method provided by the present application. As shown in fig. 1, the electronic apparatus includes: a processor 101 and a memory 102.
The processor 101 and the memory 102 are electrically connected directly or indirectly to realize data transmission or interaction. For example, electrical connections may be made through one or more communication buses or signal lines.
The processor 101 may be an integrated circuit chip with signal processing capability. The processor 101 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 102 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
It will be appreciated that the configuration of fig. 1 is merely illustrative and that the electronic device may include more or fewer components than shown in fig. 1 or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
The memory 102 is used for storing a program, and the processor 101 calls the program stored in the memory 102 to execute the medical image processing method provided by the following embodiment.
The medical image processing method and the corresponding beneficial effects provided by the present application will be explained by several embodiments as follows.
Fig. 2 is a schematic flow chart of a medical image processing method provided in an embodiment of the present application, and optionally, an execution subject of the method may be an electronic device such as a server or a computer, and the electronic device has a data processing function.
It should be understood that in other embodiments, the order of some steps in the medical image processing method may be interchanged according to actual needs, or some steps may be omitted or deleted. As shown in fig. 2, the method includes:
s201, acquiring a medical image to be processed.
Wherein the medical image is a head CT medical image of the target object. For example, the target object may be a subject such as a patient suffering from a certain disease, or a healthy person.
A head CT medical image is an image obtained by non-invasively acquiring multiple cross sections of the internal tissues of the target object's head, for medical treatment or medical research.
In the present embodiment, only the head CT medical image of the target object is processed. Similarly, the medical image processing method provided by the present application may also be used to process CT medical images of other parts of the target object, so as to realize three-dimensional display of all anatomical points on other parts of the target object, which is not described herein in detail.
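To make step S201 concrete, the following minimal Python sketch (not part of the original publication) loads a head CT DICOM series into a single three-dimensional volume; the SimpleITK library and the directory path are assumptions, since the patent does not name an implementation.

```python
import SimpleITK as sitk

def load_head_ct(dicom_dir: str) -> sitk.Image:
    """Read a DICOM series from a directory into one 3-D volume."""
    reader = sitk.ImageSeriesReader()
    file_names = reader.GetGDCMSeriesFileNames(dicom_dir)
    reader.SetFileNames(file_names)
    return reader.Execute()

volume = load_head_ct("./data/patient_0001/head_ct/")  # hypothetical path
print(volume.GetSize(), volume.GetSpacing())  # voxel dimensions and mm spacing
```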
S202, determining a skull segmentation result of the medical image, and displaying the skull segmentation result.
Wherein the skull segmentation result is a three-dimensional model of the head of the target object.
In this embodiment, to construct a three-dimensional model of the head of the target object from the pixel information contained in the medical image, a skull segmentation image is first generated based on a template segmentation method; a skull segmentation result is then obtained from that image using an existing volume reconstruction method, and the result is displayed in three-dimensional space.
Referring to fig. 3, a schematic diagram of a skull segmentation result of a head of a target object in a three-dimensional space is shown. In addition, the skull segmentation result shown in the three-dimensional space can be rotated or translated, and the like, so that the focal point on the head of the target object can be analyzed and diagnosed from multiple visual angles.
S203, inputting the medical image into a pre-trained target anatomical point marker model to obtain three-dimensional position information of a plurality of anatomical points on the medical image.
For example, the pre-trained target anatomical point labeling model may be a network model obtained by training a deep neural network on pre-acquired medical images. The deep neural network may be a Convolutional Neural Network (CNN), a Deconvolutional Neural Network (DCNN), or the like, and may include layers such as an input layer, convolutional layers, pooling layers, and fully connected layers.
The anatomical point may be a key anatomical point on the head, for example, the key anatomical point may be a nasion point, a skull base point, and the like, that is, the three-dimensional position information of the nasion point, the three-dimensional position information of the skull base point, and the like are output by the target anatomical point marker model.
In this embodiment, specifically, the head CT medical image of the target object is input to a pre-trained target anatomical point marker model, the target anatomical point marker model searches for a key anatomical point on the medical image, and outputs three-dimensional position information of the key anatomical point on the head CT medical image of the target object, that is, the automatic extraction of the three-dimensional position information of the key anatomical point on the head of the target object is realized.
In this way, the key anatomical points in the head medical image of the target object can be automatically located and extracted by the target anatomical point marking model, which improves the efficiency of locating key anatomical points in medical images and avoids the prior-art problem that manually marking all anatomical points (or key anatomical points) on a CT medical image cannot establish the exact position of each point in three-dimensional space.
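For illustration only: one common way such a marking model can output three-dimensional coordinates — the patent does not commit to this design — is to regress one heatmap per anatomical point and take each channel's peak. A minimal sketch, assuming (z, y, x) ordering and per-axis voxel spacing in mm:

```python
import numpy as np

def heatmaps_to_coords(heatmaps: np.ndarray, spacing_mm: tuple) -> np.ndarray:
    """Convert per-landmark heatmaps of shape (N, D, H, W) into (N, 3)
    physical coordinates (z, y, x) in mm by taking each channel's peak."""
    coords = np.zeros((heatmaps.shape[0], 3))
    for i, hm in enumerate(heatmaps):
        peak = np.unravel_index(np.argmax(hm), hm.shape)
        coords[i] = np.array(peak) * np.array(spacing_mm)
    return coords

# Example: two fake 3-D heatmaps with known peaks.
hm = np.zeros((2, 8, 8, 8)); hm[0, 1, 2, 3] = 1.0; hm[1, 4, 4, 4] = 1.0
print(heatmaps_to_coords(hm, (1.0, 0.5, 0.5)))  # -> [[1. 1. 1.5], [4. 2. 2.]]
```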
And S204, adding an anatomical point identifier on the skull segmentation result according to the three-dimensional position information of the plurality of anatomical points, and generating and displaying a head anatomical point distribution map of the target object.
In this embodiment, in order to three-dimensionally represent key anatomical points on the head of the target object, anatomical point identifiers may be added to the skull segmentation result represented in the three-dimensional space according to three-dimensional position information of a plurality of anatomical points, so as to generate a head anatomical point distribution map of the target object, and represent the head anatomical point distribution map of the target object. Therefore, a clinician can more intuitively acquire the distribution condition of the head key anatomical points of the target object, and the problem that all anatomical points cannot be displayed in a three-dimensional mode in the prior art is solved.
In summary, an embodiment of the present application provides a medical image processing method, including: acquiring a medical image to be processed, wherein the medical image is a head computed tomography image of a target object; determining a skull segmentation result of the medical image, and displaying the skull segmentation result, wherein the skull segmentation result is a three-dimensional model of the head of the target object; inputting the medical image into a pre-trained target anatomical point marker model to obtain three-dimensional position information of a plurality of anatomical points on the medical image; and adding an anatomical point identifier on the skull segmentation result according to the three-dimensional position information of the plurality of anatomical points, and generating and displaying a head anatomical point distribution map of the target object. In the scheme, a skull segmentation result of a target object is obtained according to a head medical image of the target object, and the skull segmentation result of the target object is displayed in a three-dimensional space; meanwhile, a head medical image of the target object is input into a target anatomical point marking model to obtain three-dimensional position information of a plurality of anatomical points on the medical image output by the target anatomical point marking model, then, an anatomical point identifier is added to each anatomical point in a skull segmentation result of the target object by combining the three-dimensional position information of the plurality of anatomical points to generate a head anatomical point distribution map of the target object, and finally, the head anatomical point distribution map of the target object is displayed in a three-dimensional mode.
How to add anatomical point identifiers to the skull segmentation result according to the three-dimensional position information of a plurality of anatomical points in the above step S204 will be specifically explained through the following embodiments, and a head anatomical point distribution map of the target object is generated and displayed.
Alternatively, referring to fig. 4, the step S204 includes:
s401, obtaining a virtual volume of the anatomical point.
S402, adding anatomical point marks to the skull segmentation result of the anatomical points according to the three-dimensional position information of the anatomical points and the virtual volumes of the anatomical points, and generating an anatomical point distribution map.
For example, the virtual volume of each anatomical point may be set to 5 pixels, or different anatomical points may be given different virtual volumes, so as to highlight the position of each key anatomical point of the target object's head in three-dimensional space.
In this embodiment, referring to fig. 5, for ease of understanding only 5 anatomical points on the head of the target object are shown in three dimensions, namely anatomical point 1 through anatomical point 5. Specifically, according to the three-dimensional position information of the 5 anatomical points, an anatomical point identifier with a virtual volume of 5 pixels is added to the skull segmentation result for each of them, forming 5 three-dimensional spheres ("beads") that are displayed in three-dimensional space to generate the head anatomical point distribution map of the target object.
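A minimal sketch of how such beads could be stamped into a label overlay for rendering; the radius, label scheme, and NumPy-based approach are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def stamp_bead(mask: np.ndarray, center: tuple, radius: int = 5, label: int = 1):
    """Stamp a small sphere ("bead") with the given voxel radius into a label
    volume, giving one anatomical point a visible virtual volume."""
    z, y, x = np.ogrid[:mask.shape[0], :mask.shape[1], :mask.shape[2]]
    cz, cy, cx = center
    mask[(z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2] = label
    return mask

overlay = np.zeros((64, 64, 64), dtype=np.uint8)
for i, c in enumerate([(10, 20, 30), (40, 40, 40)], start=1):  # fake landmarks
    stamp_bead(overlay, c, radius=5, label=i)
```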
In addition, in the present embodiment, refer to fig. 6, wherein the left side in fig. 6 is a distribution diagram of head anatomical points of the target object, and the right side in fig. 6 is a CT medical image of the head of the target object on different cross sections.
In this embodiment, the actual coordinate position of each anatomical point in the CT medical image (i.e., in the axial, sagittal, and coronal views) can also be calculated from the mapping between three-dimensional and two-dimensional space together with the three-dimensional position information of each anatomical point. When an anatomical point displayed in the head anatomical point distribution map is dragged from its current position in the three-dimensional view on the left of fig. 6, the corresponding change of its relative position is shown in the two-dimensional views on the right of fig. 6, and the dragged anatomical point is re-displayed in three-dimensional space according to its updated position information.
Similarly, when an anatomical point is dragged within any plane of the two-dimensional views on the right of fig. 6, its display position in the two-dimensional views changes, and the corresponding change of its relative position is displayed in the three-dimensional view on the left of fig. 6.
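The mapping that underlies this linked 3-D/2-D behavior can be sketched as below, assuming an axis-aligned volume with known origin and spacing (direction matrices are ignored in this simplification):

```python
def point_to_slice_indices(point_mm, origin_mm, spacing_mm):
    """Map one 3-D physical position to the axial/sagittal/coronal slice
    indices of the 2-D views (axis-aligned volume assumed, (z, y, x) order)."""
    z, y, x = (round((p - o) / s) for p, o, s in zip(point_mm, origin_mm, spacing_mm))
    return {"axial": z, "coronal": y, "sagittal": x}

print(point_to_slice_indices((30.0, 12.5, 40.0), (0.0, 0.0, 0.0), (2.0, 0.5, 0.5)))
# -> {'axial': 15, 'coronal': 25, 'sagittal': 80}
```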
How to train the target anatomical point marker model will be specifically explained by the following embodiments.
Alternatively, referring to fig. 7, the pre-trained target anatomical point labeling model may be trained as follows:
s701, obtaining a training sample set composed of a plurality of training samples.
The training samples are head medical images; in this embodiment, they are head computed tomography images of different subjects.
In this embodiment, a certain number of head medical images I_m (m = 0, 1, …, M−1) are randomly selected as training samples, and a technician then manually labels the three-dimensional position information of a plurality of anatomical points in each training sample according to personal experience. For example, a training sample contains N anatomically meaningful points L_n (n = 0, 1, …, N−1), and the three-dimensional position information of all anatomical points in the sample is recorded. A training sample set {V} for training the initial anatomical point marker model is thereby obtained.
In addition, each training sample in the training sample set { V } can be constructed in the following way:
in this embodiment, the head medical images may be registered at the level of three-dimensional pixels; specifically, for each medical image, a patch is cropped around a selected three-dimensional point.
Positive sample: select 2 head medical images of different subjects from the plurality of head medical images, and select a pair of matching patches containing the same three-dimensional point (a positive pair); obtain the global and local feature vectors of the two patches with a deep learning network, and compute their global similarity (f_global) and local similarity (f_local);
Negative sample: construct a negative patch that contains no matching image (e.g., select a patch centered at least 20 pixels away from the matched pixel and construct a three-dimensional patch), and set it as a negative sample.
S702, randomly selecting at least one training sample from the training sample set.
S703, inputting at least one training sample into the initial anatomical point marker model, performing iterative training on the initial anatomical point marker model until the loss value of the initial anatomical point marker model meets a preset loss threshold, and taking the initial anatomical point marker model meeting the loss threshold as a target anatomical point marker model.
In this embodiment, the selected loss function is shown in the following formula (1):
[Formula (1) appears only as an image in the original publication; it defines a contrastive loss computed over the positive and negative embedding pairs described below.]
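Because formula (1) is unavailable, the following LaTeX block gives a standard InfoNCE-style contrastive loss that is consistent with the positive and negative embedding pairs described in the steps below — offered only as a plausible reconstruction, not as the patent's exact formula; τ is a temperature hyperparameter:

```latex
\mathcal{L} = -\frac{1}{n_{\mathrm{pos}}}\sum_{i=1}^{n_{\mathrm{pos}}}
\log\frac{\exp\!\left(f_i \cdot f'_i/\tau\right)}
{\exp\!\left(f_i \cdot f'_i/\tau\right) + \sum_{j}\exp\!\left(f_i \cdot h_{ij}/\tau\right)}
```

Here (f_i, f′_i) is a positive embedding pair and h_ij are its aggregated negatives; applying the same form to the global and local embeddings would yield Lg and Ll respectively.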
in the present embodiment, the pixel sampling parameters of each head medical image are npos, nneg, ngrand and nlcan, which bound the numbers of positive pairs, hard negatives, random negatives and local negative candidates sampled per image (their roles follow from steps (3)–(7) below).
In each training batch, b training samples are randomly selected from the training sample set {V} and input into the initial anatomical point marking model, and the following operations are performed for each selected training sample:
(1) run random data augmentation to obtain a patch pair (x, x′);
(2) compute the global and local embedding tensors Fg, Fg′, Fl, Fl′ of the two patches;
(3) sample positive pixel pairs (p_i, p′_i) from the overlapping region of x and x′, then extract the global and local positive embedding pairs (fg_i, fg′_i) and (fl_i, fl′_i), 1 ≤ i ≤ npos;
(4) for each patch pair, compute the similarity maps Sg_i, Sg′_i, Sl_i and Sl′_i, and search for hard global negative samples hg_ij, 1 ≤ j ≤ nneg;
(5) randomly sample global negatives hg_ik, 1 ≤ k ≤ ngrand, taking one patch pair from each of the other b − 1 samples in the training batch;
(6) aggregate the hard and random negatives to obtain the final global negative set;
(7) search for local negative candidates hl_ij, 1 ≤ j ≤ nlcan, and randomly draw nneg of them as the final local negative samples;
(8) compute the global and local information losses Lg and Ll using formula (1); the final loss is L = Lg + Ll.
The above steps are executed in a loop to iteratively train the initial anatomical point marker model until its loss value meets the preset loss threshold; the model meeting the threshold is taken as the target anatomical point marker model.
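Steps (1)–(8) can be condensed into one training iteration as below; `model`, `contrastive_loss`, and the optimizer wiring are hypothetical placeholders for components the patent does not specify (including the negative mining, assumed to live inside the loss):

```python
import torch

def train_step(model, optimizer, x, x_prime, contrastive_loss):
    """One iteration of steps (1)-(8); `model` returns (global, local)
    embeddings and `contrastive_loss` stands in for formula (1)."""
    fg, fl = model(x)                 # step (2): embeddings of patch x
    fg_p, fl_p = model(x_prime)       # step (2): embeddings of patch x'
    lg = contrastive_loss(fg, fg_p)   # steps (3)-(6): global loss Lg
    ll = contrastive_loss(fl, fl_p)   # step (7): local loss Ll
    loss = lg + ll                    # step (8): final loss L = Lg + Ll
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```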
Table 1 below shows the partial anatomical points to be extracted from the head of the target object, which are specifically as follows:
TABLE 1 partial anatomical points on the head of a target object
[Table 1 appears only as an image in the original publication.]
Optionally, referring to fig. 8, the method further includes:
s801, determining a mark surface where the anatomical point is located according to the three-dimensional position information of the anatomical point and a surface equation of a preset plane.
For example, the mark surface in which an anatomical point on the head of the target subject lies may be the horizontal plane, the midsagittal plane, the coronal plane, the cranial base plane, the palatal plane, the occlusal plane, the mandibular plane, or the like. Each of these reference surfaces is described in Table 2 below.
TABLE 2 identifying surface
[Table 2 appears only as an image in the original publication.]
In this embodiment, the three-dimensional position information of each anatomical point and the surface equation of the preset plane may be combined to determine the landmark surface where each anatomical point is located.
S802, according to the mark surface where the anatomical point is located, the position relation between the anatomical point and the anatomical points except the anatomical point is determined.
Wherein, the position relation includes: the distance value and the included angle between the anatomical point and the anatomical points except the anatomical point.
In this embodiment, the three-dimensional relationships of the intracranial structures of the target object's head can be measured from the three-dimensional position information of the plurality of anatomical points, enabling quantitative analysis of each anatomical point. Specifically, the straight-line distance between one anatomical point and another, and the angle formed by any two anatomical points with a given mark surface, can be calculated from the anatomical points and the mark surfaces in which they lie.
Alternatively, in order to simplify the amount of calculation, only the positional relationship of a part of the anatomical points and anatomical points other than these anatomical points may be calculated. In particular, reference may be made to table 3 below for the measurement information of certain anatomical points to be calculated.
TABLE 3 measurement terms to be calculated
[Table 3 appears only as an image in the original publication.]
Taking the first row of Table 3 as an example, in the measurement item SNA, S is the sella point, N is the nasion point, and A is the subspinale (upper alveolar base) point; that is, SNA describes the positional relationship among these three points. From the three-dimensional position information of the sella point, the nasion point, and the subspinale point, the positional relationship between the maxilla and the cranial base can be calculated, denoted SNA.
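As an illustration of such measurement items, the distance and angle computations reduce to elementary vector geometry; the coordinates below are made-up examples, since Table 3 is unavailable:

```python
import numpy as np

def distance_mm(a, b):
    """Straight-line distance between two anatomical points (mm)."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def angle_deg(vertex, p1, p2):
    """Angle (degrees) at `vertex` between the rays toward p1 and p2 — e.g.
    SNA with vertex N (nasion), p1 = S (sella), p2 = A (subspinale)."""
    v1 = np.asarray(p1, float) - np.asarray(vertex, float)
    v2 = np.asarray(p2, float) - np.asarray(vertex, float)
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

S, N, A = (10.0, 5.0, 0.0), (0.0, 0.0, 0.0), (8.0, -6.0, 0.0)  # fake coordinates
print(distance_mm(S, N), angle_deg(N, S, A))
```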
Alternatively, referring to fig. 9, the preset plane is the Frankfurt plane;
the step S801 determines a landmark plane where the anatomical point is located according to the position information of the anatomical point and the plane equation of the preset plane, and includes:
s901, judging whether the anatomical point passes through a first plane parallel to the Frankfurt plane or not according to the position information of the anatomical point.
And S902, if so, determining the mark surface where the anatomical point is located as a horizontal plane.
In this embodiment, if it is determined from the position information of an anatomical point that the point lies on a first plane that is parallel to the Frankfurt plane and passes through the nasion, the mark surface where the anatomical point is located can be determined to be the horizontal plane. The Frankfurt plane, also called the ear-eye plane, is defined by three points: the left and right porion (po, the highest point of the external auditory meatus on each side) and the left orbitale (or, the lowest point of the infraorbital rim); the right orbitale is used instead when the left one is damaged.
In addition, the mark surface where a given anatomical point lies can be further determined from its position information using the other mark surfaces listed in Table 2.
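A minimal sketch of the S901 plane test; the landmark choices, the 1 mm tolerance, and the plane-through-nasion construction are assumptions used for illustration:

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three landmarks, e.g. the left/right
    porion and the orbitale for the Frankfurt plane."""
    n = np.cross(np.asarray(p2, float) - p1, np.asarray(p3, float) - p1)
    return n / np.linalg.norm(n)

def on_plane_through(point, normal, anchor, tol_mm=1.0):
    """True if `point` lies (within tol_mm) on the plane with this normal
    that passes through `anchor` (e.g. the nasion) — the 'first plane' test
    of S901 for a plane parallel to the Frankfurt plane."""
    d = np.dot(normal, np.asarray(point, float) - np.asarray(anchor, float))
    return abs(float(d)) <= tol_mm

n = plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))      # fake landmarks
print(on_plane_through((0.3, 0.7, 0.2), n, (0, 0, 0)))  # True (|d| = 0.2 mm)
```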
Optionally, the method further comprises: and displaying the position relation of the anatomical points and the anatomical points except the anatomical points.
In this embodiment, in order to more intuitively show the positional relationship between anatomical points on the head of the target object, the measurement items calculated in table 3 may be displayed, that is, the positional relationship between the anatomical points and anatomical points other than the anatomical points is displayed in the three-dimensional space interface.
How to determine the skull segmentation result of the medical image in the above step S202 will be specifically explained by the following embodiments.
Alternatively, referring to fig. 10, the step S202 includes:
s1001, according to the pixel value of each pixel point on the medical image and a preset gray threshold, non-osseous elements are removed from the medical image, and a target image is obtained.
S1002, performing three-dimensional reconstruction on the target image to obtain a skull segmentation result of the medical image.
It should be understood that pixel values differ between regions of the head medical image; for example, the skin region appears slightly darker than the bone region, i.e., the pixel value of a skin element is lower than that of a bone element.
In this embodiment, the different elements of the head medical image of the target object can be segmented according to the pixel value of each pixel point and a preset gray threshold. For example, if the pixel value of a pixel point in the head medical image is less than the preset gray threshold, that pixel point is removed from the image; in this way the non-bone elements are removed, yielding a target image that is the skull segmentation map of the target object. A skull segmentation result is then obtained by a surface reconstruction or volume reconstruction method and displayed in three-dimensional space.
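A compact sketch of this thresholding-plus-reconstruction pipeline, assuming Hounsfield-unit input and using marching cubes from scikit-image as a stand-in for the surface reconstruction step (the 300 HU bone cutoff is a typical assumed value, not specified by the patent):

```python
import numpy as np
from skimage import measure

def skull_mesh(volume_hu: np.ndarray, bone_hu: float = 300.0):
    """Threshold away non-bone voxels (bone is bright in CT), then extract
    a skull surface mesh with marching cubes."""
    bone = np.where(volume_hu >= bone_hu, volume_hu, 0.0)
    verts, faces, _, _ = measure.marching_cubes(bone, level=bone_hu / 2)
    return verts, faces

vol = np.zeros((32, 32, 32)); vol[8:24, 8:24, 8:24] = 1000.0  # fake "bone" cube
v, f = skull_mesh(vol)
print(v.shape, f.shape)
```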
Based on the same inventive concept, the embodiment of the present application further provides a medical image processing apparatus corresponding to the medical image processing method, and as the principle of the apparatus in the embodiment of the present application for solving the problem is similar to that of the medical image processing method in the embodiment of the present application, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not repeated.
Optionally, referring to fig. 11, an embodiment of the present application further provides a medical image processing apparatus, including:
an acquiring module 1101, configured to acquire a medical image to be processed, where the medical image is a head computed tomography image of a target object;
a processing module 1102 for determining a skull segmentation result of the medical image;
a displaying module 1103, configured to display a skull segmentation result, where the skull segmentation result is a three-dimensional model of a head of a target object;
the processing module 1102 is further configured to input the medical image into a pre-trained target anatomical point marker model, so as to obtain three-dimensional position information of a plurality of anatomical points on the medical image;
the displaying module 1103 is further configured to add an anatomical point identifier to the skull segmentation result according to the three-dimensional position information of the multiple anatomical points, and generate and display a head anatomical point distribution map of the target object.
Optionally, the presentation module 1103 is further configured to:
acquiring a virtual volume of an anatomical point;
and adding anatomical point identification to the skull segmentation result of the anatomical point according to the three-dimensional position information of the anatomical point and the virtual volume of the anatomical point, and generating an anatomical point distribution map.
Optionally, the obtaining module 1101 is further configured to obtain a training sample set composed of a plurality of training samples; wherein the training sample is a head medical image;
the device also includes:
the selection module is used for randomly selecting at least one training sample from the training sample set;
and the training module is used for inputting at least one training sample into the initial anatomical point marker model, performing iterative training on the initial anatomical point marker model until the loss value of the initial anatomical point marker model meets a preset loss threshold value, and taking the initial anatomical point marker model meeting the loss threshold value as a target anatomical point marker model.
Optionally, the processing module 1102 is further configured to:
determining a mark surface where the anatomical point is located according to the three-dimensional position information of the anatomical point and a surface equation of a preset plane;
determining the position relationship between the anatomical points and the anatomical points except the anatomical points according to the mark surface where the anatomical points are located, wherein the position relationship comprises the following steps: the distance value and the included angle between the anatomical point and the anatomical points except the anatomical point.
Optionally, the preset plane is the Frankfurt plane;
the processing module 1102 is further configured to:
judging, according to the position information of the anatomical point, whether the anatomical point lies on a first plane parallel to the Frankfurt plane;
and if so, determining the mark surface where the anatomical point is located as a horizontal plane.
Optionally, the presentation module 1103 is further configured to:
and displaying the position relationship of the anatomical points and the anatomical points except the anatomical points.
Optionally, the processing module 1102 is further configured to:
according to the pixel value of each pixel point on the medical image and a preset gray threshold, removing non-osseous elements from the medical image to obtain a target image;
and performing three-dimensional reconstruction on the target image to obtain a skull segmentation result of the medical image.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
These modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application-Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field-Programmable Gate Arrays (FPGAs). As another example, when a module is implemented in the form of program code scheduled by a processing element, that processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of invoking program code. These modules may also be integrated together and implemented in the form of a system-on-a-chip (SoC).
Optionally, the present application also provides a program product, for example a computer-readable storage medium, comprising a program which, when being executed by a processor, is adapted to carry out the above-mentioned method embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to perform some of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a portable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. A method of medical image processing, the method comprising:
acquiring a medical image to be processed, wherein the medical image is a head computed tomography image of a target object;
determining a skull segmentation result of the medical image, and displaying the skull segmentation result, wherein the skull segmentation result is a three-dimensional model of the head of the target object;
inputting the medical image into a pre-trained target anatomical point marker model to obtain three-dimensional position information of a plurality of anatomical points on the medical image;
and according to the three-dimensional position information of the plurality of anatomical points, adding an anatomical point identifier on the skull segmentation result, and generating and displaying a head anatomical point distribution map of the target object.
2. The method according to claim 1, wherein the generating and displaying a distribution map of head anatomical points of the target object by adding anatomical point identifiers to the skull segmentation result according to the three-dimensional position information of the plurality of anatomical points comprises:
acquiring a virtual volume of the anatomical point;
and adding an anatomical point identifier to the skull segmentation result for the anatomical point according to the three-dimensional position information of the anatomical point and the virtual volume of the anatomical point, and generating an anatomical point distribution map.
3. The method of claim 1, wherein the target anatomical point labeling model is obtained by training as follows:
acquiring a training sample set consisting of a plurality of training samples; wherein the training sample is a medical image of the head;
randomly selecting at least one training sample from the training sample set;
inputting the at least one training sample into an initial anatomical point marker model, performing iterative training on the initial anatomical point marker model until a loss value of the initial anatomical point marker model meets a preset loss threshold, and taking the initial anatomical point marker model meeting the loss threshold as the target anatomical point marker model.
4. The method of claim 1, further comprising:
determining a mark surface where the anatomical point is located according to the three-dimensional position information of the anatomical point and a surface equation of a preset plane;
determining the position relationship between the anatomical point and the anatomical points except the anatomical point according to the mark surface where the anatomical point is located, wherein the position relationship comprises the following steps: the distance value and the included angle between the anatomical point and the anatomical points except the anatomical point.
5. The method of claim 4, wherein the preset plane is the Frankfurt plane;
and wherein determining the marker plane on which the anatomical point lies according to the position information of the anatomical point and the plane equation of the preset plane comprises:
judging, according to the position information of the anatomical point, whether the anatomical point lies on a first plane parallel to the Frankfurt plane;
and if so, determining that the marker plane on which the anatomical point lies is a horizontal plane.
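Geometrically, the claim 5 test can be read as a signed point-to-plane distance check against a plane parallel to the Frankfurt plane. The normal-plus-point plane representation and the tolerance below are assumptions; the claim states only that a plane equation is used.

```python
import numpy as np

def lies_on_first_plane(point: np.ndarray,
                        plane_normal: np.ndarray,
                        plane_point: np.ndarray,
                        tol: float = 1.0) -> bool:
    """True if `point` is within `tol` of a plane parallel to the Frankfurt
    plane, given by a normal and a point on it (representation is assumed)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return abs(float(np.dot(point - plane_point, n))) <= tol
```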
6. The method of claim 4, further comprising:
displaying the position relationship between the anatomical point and the anatomical points other than the anatomical point.
7. The method according to any one of claims 1-6, wherein said determining a skull segmentation result for the medical image comprises:
according to the pixel value of each pixel in the medical image and a preset grayscale threshold, removing non-osseous elements from the medical image to obtain a target image;
and performing three-dimensional reconstruction on the target image to obtain a skull segmentation result of the medical image.
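A minimal scikit-image sketch of claim 7: grayscale thresholding keeps the osseous voxels, and marching cubes reconstructs a triangle mesh from the resulting mask. The ~300 HU bone threshold is a common choice assumed here, not a value given in the patent.

```python
import numpy as np
from skimage import measure  # scikit-image

def reconstruct_skull(volume: np.ndarray, hu_threshold: float = 300.0):
    """Threshold out non-osseous voxels, then extract a surface mesh."""
    mask = (volume >= hu_threshold).astype(np.float32)   # target image (bone only)
    verts, faces, _, _ = measure.marching_cubes(mask, level=0.5)
    return verts, faces  # vertices and triangles of the skull segmentation result
```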
8. A medical image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a medical image to be processed, wherein the medical image is a head computed tomography image of a target object;
a processing module, configured to determine a skull segmentation result of the medical image;
a display module, configured to display the skull segmentation result, wherein the skull segmentation result is a three-dimensional model of the head of the target object;
the processing module is further configured to input the medical image into a pre-trained target anatomical point marker model to obtain three-dimensional position information of a plurality of anatomical points on the medical image;
and the display module is further configured to add anatomical point identifiers to the skull segmentation result according to the three-dimensional position information of the plurality of anatomical points, and to generate and display a head anatomical point distribution map of the target object.
9. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is operating, the processor executing the machine-readable instructions to perform the steps of the method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202210766049.2A 2022-06-30 2022-06-30 Medical image processing method, device, equipment and storage medium Pending CN115063386A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210766049.2A CN115063386A (en) 2022-06-30 2022-06-30 Medical image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210766049.2A CN115063386A (en) 2022-06-30 2022-06-30 Medical image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115063386A true CN115063386A (en) 2022-09-16

Family

ID=83203970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210766049.2A Pending CN115063386A (en) 2022-06-30 2022-06-30 Medical image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115063386A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115620053A (en) * 2022-10-11 2023-01-17 皖南医学院第一附属医院(皖南医学院弋矶山医院) Airway type determination system and electronic equipment
CN115620053B (en) * 2022-10-11 2024-01-02 皖南医学院第一附属医院(皖南医学院弋矶山医院) Airway type determining system and electronic equipment
CN115409835A (en) * 2022-10-31 2022-11-29 成都浩目科技有限公司 Three-dimensional imaging method, device, electronic equipment, system and readable storage medium

Similar Documents

Publication Publication Date Title
CN110956635B (en) Lung segment segmentation method, device, equipment and storage medium
CN108520519B (en) Image processing method and device and computer readable storage medium
CN111772792B (en) Endoscopic surgery navigation method, system and readable storage medium based on augmented reality and deep learning
WO2021238438A1 (en) Tumor image processing method and apparatus, electronic device, and storage medium
CN115063386A (en) Medical image processing method, device, equipment and storage medium
US8837791B2 (en) Feature location method and system
CN107067398B (en) Completion method and device for missing blood vessels in three-dimensional medical model
JP7309986B2 (en) Medical image processing method, medical image processing apparatus, medical image processing system, and medical image processing program
CN106659424A (en) Medical image display processing method, medical image display processing device, and program
CN109767841B (en) Similar model retrieval method and device based on craniomaxillofacial three-dimensional morphological database
EP3910592A1 (en) Image matching method, apparatus and device, and storage medium
CN106709920B (en) Blood vessel extraction method and device
CN109215104B (en) Brain structure image display method and device for transcranial stimulation treatment
JP2020171687A (en) Systems and methods for processing 3d anatomical volumes based on localization of 2d slices thereof
CN110993067A (en) Medical image labeling system
JP6755406B2 (en) Medical image display devices, methods and programs
CN105844687B (en) Device and method for handling medical image
Santoro et al. Photogrammetric 3D skull/photo superimposition: a pilot study
US10580136B2 (en) Mapping image generation device, method, and program
CN115953359A (en) Digital oral cavity model mark point identification method and device and electronic equipment
Fabijańska et al. Assessment of hydrocephalus in children based on digital image processing and analysis
US20220108525A1 (en) Patient-specific cortical surface tessellation into dipole patches
Ng et al. Salient features useful for the accurate segmentation of masticatory muscles from minimum slices subsets of magnetic resonance images
CN113592768A (en) Rib fracture detection method, rib fracture detection information display method and system
US20230102745A1 (en) Medical image display apparatus, method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination