CN107133946B - Medical image processing method, device and equipment - Google Patents


Info

Publication number
CN107133946B
CN107133946B (application CN201710296578.XA)
Authority
CN
China
Prior art keywords
medical image
image
registration
pixel point
transformation model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710296578.XA
Other languages
Chinese (zh)
Other versions
CN107133946A (en)
Inventor
严计超
李强
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN201710296578.XA
Publication of CN107133946A
Application granted
Publication of CN107133946B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/38 Registration of image sequences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30061 Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An embodiment of the invention provides a medical image processing method, device, and equipment. The embodiment acquires a first medical image and a second medical image of a specified target; registers the second medical image to the first medical image based on a first spatial transformation model to obtain a first registered image; and, based on a second spatial transformation model, registers the first registered image to the first medical image using an unsupervised deep learning registration method to obtain a second registered image. The unsupervised deep learning registration aligns the fine structures in the specified target and reduces artifacts, improving the precision of the registration result. A better registration effect is obtained in particular for tissues containing many fine structures, which to some extent solves the prior-art problem of low registration precision for such tissues.

Description

Medical image processing method, device and equipment
[ technical field ]
The present disclosure relates to the field of medical image processing, and in particular, to a method, an apparatus, and a device for medical image processing.
[ background of the invention ]
Medical imaging is one of the most active research fields in radiological diagnostics, and X-ray imaging is its principal technique. Among digital X-ray imaging technologies, computed radiography (CR) and digital radiography (DR) combine computer digital image processing with X-ray radiography, and offer advantages such as low radiation dose, high image quality, high disease detection rate, and high diagnostic accuracy.
In applications, it is often necessary to register medical images of the same target acquired at different times. Existing medical image registration mostly adopts rigid-body registration or polynomial-based fitting registration. Such methods can roughly align medical images, for example the main structure of the registration target. However, they cannot align the fine structures within the registration target, so registration accuracy is low for tissues containing many fine structures.
In the course of implementing this scheme, the inventors found that the prior art has at least the following problem: existing medical image registration methods cannot align the fine structures in the registration target, so registration accuracy is low for tissues containing many fine structures.
[ summary of the invention ]
In view of this, the present disclosure provides a medical image processing method, apparatus, and device, to solve the prior-art problem that fine structures in the registration target cannot be aligned, which results in low registration accuracy for tissues containing many fine structures.
In a first aspect, an embodiment of the present disclosure provides a medical image processing method, where the method includes:
acquiring a first medical image and a second medical image of a specified target, wherein the imaging time of the second medical image is different from that of the first medical image;
registering the second medical image to the first medical image based on a first spatial transformation model, resulting in a first registered image corresponding to the second medical image;
and registering the first registered image to the first medical image by using an unsupervised deep learning registration method based on a second spatial transformation model to obtain a second registered image corresponding to the first registered image.
The above-described aspect and any possible implementation further provide an implementation in which registering the first registered image to the first medical image using an unsupervised deep learning registration method based on a second spatial transformation model, to obtain a second registered image corresponding to the first registered image, includes:
establishing an unsupervised deep learning model, and training designated pixel points in the first medical image and the first registered image with the unsupervised deep learning model to obtain feature information of the designated pixel points;
determining parameters of the second spatial transformation model according to the feature information of the designated pixel points;
and applying the corresponding transformation of the second spatial transformation model to the first registered image according to those parameters, to obtain the second registered image.
In the above aspect and any possible implementation, an implementation is further provided in which the unsupervised deep learning model obtains the feature information of the designated pixel points by stacked auto-encoder training.
The above-described aspect and any possible implementation manner further provide an implementation manner, where the parameter of the second spatial transformation model is a deformation field of the second spatial transformation model;
determining parameters of the second spatial transformation model according to the feature information of the designated pixel point, including:
acquiring four-neighborhood information of the designated pixel points by belief propagation according to their feature information;
performing belief propagation at least once based on the four-neighborhood information of the designated pixel points, and acquiring the belief vector of each designated pixel point after propagation;
determining the offset value of each designated pixel point in the first registered image relative to the first medical image from its belief vector;
determining the deformation field of the second spatial transformation model from the offset values of the designated pixel points in the first registered image relative to the first medical image.
The above aspect and any possible implementation further provide an implementation in which belief propagation is performed at least 20 times.
The above-described aspects and any possible implementations further provide an implementation, and the method further includes:
performing subtraction processing on the first medical image and the second registered image to obtain a subtraction image of the first medical image and the second medical image.
The above-described aspects and any possible implementations further provide an implementation, and the method further includes:
determining a first selected region from the first medical image, and calculating a first number of pixel points in the first selected region whose gray values lie within a set range;
determining a first corresponding region, corresponding to the first selected region, from the second registered image, and calculating a second number of pixel points in the first corresponding region whose gray values lie within the set range;
and obtaining, from the first number and the second number, the number of pixel points of the first medical image that have changed relative to the second medical image.
In a second aspect, an embodiment of the present disclosure provides a medical image processing apparatus, including:
an acquisition module for acquiring a first medical image and a second medical image of a specified target, the imaging time of the second medical image being different from the imaging time of the first medical image;
a first registration module, configured to register the second medical image to the first medical image based on a first spatial transformation model, so as to obtain a first registered image corresponding to the second medical image;
and a second registration module for registering the first registered image to the first medical image using an unsupervised deep learning registration method based on a second spatial transformation model, to obtain a second registered image corresponding to the first registered image.
In a third aspect, an embodiment of the present disclosure provides a medical image processing apparatus, where the apparatus includes:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to:
acquiring a first medical image and a second medical image of a specified target, wherein the imaging time of the second medical image is different from that of the first medical image;
registering the second medical image to the first medical image based on a first spatial transformation model, acquiring a first registered image corresponding to the second medical image;
and registering the first registered image to the first medical image by using an unsupervised deep learning registration method based on a second spatial transformation model to obtain a second registered image corresponding to the first registered image.
The above-described aspects and any possible implementations further provide an implementation in which the processor is further configured to:
performing subtraction processing on the first medical image and the second registered image to obtain a subtraction image of the first medical image and the second medical image.
According to the embodiment of the invention, a first medical image and a second medical image of the specified target, acquired at different times, are obtained; the second medical image is registered to the first medical image to obtain a first registered image; and the first registered image is further registered to the first medical image using an unsupervised deep learning registration method. This aligns the fine structures in the specified target and reduces artifacts, improving the precision of the registration result; a better registration effect is obtained in particular for tissues containing many fine structures.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a diagram illustrating a first flow of a medical image processing method according to an embodiment of the present invention.
Fig. 2 is a diagram illustrating a second flow of a medical image processing method according to an embodiment of the present invention.
Fig. 3 is a third flowchart of a medical image processing method according to an embodiment of the present invention.
Fig. 4(a) is an exemplary view of a past X-ray image and a present X-ray image of the same human chest.
Fig. 4(b) is a subtraction image of the current X-ray image and the past X-ray image in fig. 4 (a).
Fig. 5 is a functional block diagram of a medical image processing apparatus according to an embodiment of the present invention.
Fig. 6 is a simplified block diagram of a medical image processing apparatus.
[ detailed description of the embodiments ]
To better understand the technical solution of the present disclosure, embodiments are described in detail below with reference to the accompanying drawings.
It should be clear that the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art, without inventive effort, based on these embodiments fall within the scope of protection of the present disclosure.
The terminology used in the embodiments of the present solution is for the purpose of describing particular embodiments only and is not intended to be limiting of the present solution. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
Example one
The embodiment of the invention provides a medical image processing method. The method can be implemented as an application program (APP); terminals such as computers and medical workstations obtain the corresponding medical image processing functions by installing the application.
Fig. 1 is a diagram illustrating a first flow of a medical image processing method according to an embodiment of the present invention. As shown in fig. 1, in this embodiment, the medical image processing method may include the following steps:
S101, acquiring a first medical image and a second medical image of a specified target, wherein the imaging time of the second medical image is different from that of the first medical image.
S102, registering the second medical image to the first medical image based on the first space transformation model to obtain a first registered image corresponding to the second medical image.
S103, registering the first registration image to the first medical image by using an unsupervised deep learning registration method based on the second spatial transformation model to obtain a second registration image corresponding to the first registration image.
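As a rough illustration of the S101 to S103 flow, the sketch below wires the two stages together. All function names here are illustrative, not the patent's; the coarse stage is reduced to an exhaustive integer-translation search (a stand-in for full affine registration), and the fine unsupervised deep learning stage is left as a placeholder.

```python
import numpy as np

def affine_register(fixed, moving):
    """Stage 1 (S102), toy version: correct an integer translation by
    exhaustive search, minimizing the sum of squared differences.  A
    stand-in for full affine registration."""
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-3, 4):
        for dx in range(-3, 4):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = -np.mean((shifted - fixed) ** 2)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    dy, dx = best_shift
    return np.roll(np.roll(moving, dy, axis=0), dx, axis=1)

def fine_register(fixed, first_registered):
    """Stage 2 (S103): placeholder for the unsupervised deep learning
    registration of the patent."""
    return first_registered

def process(first_image, second_image):
    first_registered = affine_register(first_image, second_image)            # S102
    second_registered = fine_register(first_image, first_registered)         # S103
    return second_registered
```

The key design point of the flow is that S102 removes the large, global misalignment so that S103 only has to recover small, local offsets of fine structures.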
The designated target may be an organ or a part of a human body, for example, the organ of the human body may be a lung, a heart, a kidney, etc. of a patient, and the part of the human body may be a head, a chest, an abdomen, a pelvic cavity, etc. of the patient.
Optionally, the first medical image and the second medical image are medical images of the same designated target, the imaging time of the second medical image is earlier than that of the first medical image, and the two imaging times are generally at least 24 hours apart. For example: the second medical image is a lung X-ray image generated by examining a patient with a digital X-ray imaging device in a past session (also called the past-session X-ray image); the first medical image is a lung X-ray image generated by examining the same patient with a digital X-ray imaging device in the current session (also called the current-session X-ray image). Optionally, the patient's expiratory and inspiratory states may differ between the past and current acquisitions; for example, the imaging data corresponding to the first medical image may be acquired while the patient inhales and the imaging data corresponding to the second medical image while the patient exhales, so that the first and second medical images acquired from different imaging data may be mutually deformed. Alternatively, the past session may be before the patient's treatment and the current session after it; or the past session may be just after treatment and the current session some time later, during which the patient's physical condition may change.
It should be noted that the first medical image and/or the second medical image may also be a Computed Tomography (CT) image, a Magnetic Resonance (MR) image, an Ultrasound (US) image, a Positron Emission Tomography (PET) image, or the like. When the first medical image and the second medical image are MR images, the capturing/acquisition time interval of the two images may be 1 month, 2 months, 3 months, one year or longer; when the first medical image and the second medical image are PET images, the capturing/acquisition time interval of the two images may be 3 months, 4 months, 5 months, one year, or longer; when the first medical image and the second medical image are CT images, the interval between the capturing/collecting of the two images is usually more than 6 months, considering the influence on the radiation dose of the human body.
For another example, when digital X-ray imaging is performed in the past and current sessions, the relative position of the patient and the digital X-ray imaging device cannot be made exactly the same. For instance: when the imaging data corresponding to the first medical image is acquired, the patient's body center is aligned with the center of the digital X-ray imaging device; when the imaging data corresponding to the second medical image is acquired, the patient's body center is offset from the center of the digital X-ray imaging device. As a result, the first and second medical images acquired from different imaging data may exhibit positional offset or deformation.
Optionally, the digital X-ray imaging device may be a CR or DR system. In one embodiment, the digital X-ray imaging device is a C-arm DR comprising an X-ray tube for emitting X-rays and a detector for receiving the X-rays that have passed through the designated target of the patient, from whose imaging data an X-ray image of the designated target is reconstructed. Optionally, the C-arm may be of a mobile, suspended, or floor-mounted type.
Wherein the first medical image is a reference image in the registration process and the second medical image is a floating image registered for the first time in the registration process.
Wherein the first medical image and the second medical image are two-dimensional images.
Wherein the imaging time of the second medical image is different from the imaging time of the first medical image. For example, the first medical image is an X-ray image of patient A's lung acquired in the current session, and the second medical image is an X-ray image of patient A's lung acquired in a past session.
In S102, affine registration based on the downhill simplex method and mutual information may be used to register the second medical image to the first medical image. Affine registration can handle large deformations and matches most pixel points of the first and second medical images, so that the main structures of the two images are aligned, which improves registration accuracy.
It should be noted that affine registration based on the downhill simplex method and mutual information is only one specific registration manner that S102 may adopt; in other embodiments of the invention, other registration methods may also be used to implement S102, and the invention does not limit the specific registration method adopted in S102.
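For reference, mutual information, the similarity measure named above, can be estimated from the joint gray-level histogram of the two images; a higher value indicates better alignment. A minimal sketch (the function name and bin count are our choices, not the patent's):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equal-shape images, estimated from
    their joint gray-level histogram.  Higher MI = better alignment."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An optimizer such as the downhill simplex method would then search the affine parameters that maximize this score.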
In S103, the first registered image is registered to the first medical image using an unsupervised deep learning registration method, for example optical-flow registration based on unsupervised deep learning. This registration step aligns the designated fine structures of the first and second medical images and reduces artifacts, and can therefore improve registration accuracy.
In the embodiment shown in fig. 1, a first medical image and a second medical image of the designated target, acquired at different times, are obtained; the second medical image is registered to the first medical image to obtain a first registered image; and the first registered image is further registered to the first medical image using an unsupervised deep learning registration method. The fine structures in the designated target are aligned and artifacts are reduced, so the precision of the registration result is improved; in particular, a better registration effect is obtained for tissues containing many fine structures.
Wherein the registration of the images is realized by an image registration algorithm. An image registration algorithm mainly comprises three parts: a spatial transformation model, a similarity measure, and an optimization method. In medical image registration, a spatial transformation model is selected according to the actual registration requirements, and then a suitable similarity measure and optimization method are selected according to those requirements.
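The three-part decomposition above can be made explicit in code. In this hedged sketch the "optimization method" is reduced to exhaustive search over a list of candidate transforms, and all names are illustrative:

```python
import numpy as np

def register(fixed, moving, transforms, similarity):
    """Generic registration loop: `transforms` plays the role of the
    spatial transformation model (a list of candidate transforms),
    `similarity` scores a candidate against the fixed image, and the
    optimizer here is plain exhaustive search."""
    scored = [(similarity(fixed, t(moving)), t) for t in transforms]
    _, best = max(scored, key=lambda st: st[0])
    return best(moving)
```

Usage: with integer horizontal translations as the transformation model and negative SSD as the similarity, `register` recovers a known shift.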
In one exemplary implementation, registering the first registered image to the first medical image using an unsupervised deep learning registration method based on a second spatial transformation model, to obtain a second registered image corresponding to the first registered image, includes: establishing an unsupervised deep learning model, and training designated pixel points in the first medical image and the first registered image with the unsupervised deep learning model to obtain feature information of the designated pixel points; determining parameters of the second spatial transformation model according to the feature information of the designated pixel points; and applying the corresponding transformation of the second spatial transformation model to the first registered image according to those parameters, to obtain the second registered image.
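The training step above can be illustrated with a toy version. This is not the patent's actual network: it is a single tied-weight sigmoid auto-encoder layer trained by plain gradient descent, with all names and hyperparameters our own; stacking such layers yields a stacked auto-encoder.

```python
import numpy as np

def train_autoencoder_layer(X, hidden, epochs=200, lr=0.5, seed=0):
    """Train one tied-weight sigmoid auto-encoder layer on the rows of X
    (e.g. image patches centered on the designated pixel points).
    Returns an `encode` function (the learned feature map) and the
    per-epoch reconstruction errors.  Stacking layers = feeding one
    layer's codes into the next call."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden))
    b, c = np.zeros(hidden), np.zeros(d)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    errors = []
    for _ in range(epochs):
        H = sig(X @ W + b)            # encode
        R = sig(H @ W.T + c)          # decode with tied weights
        E = R - X                     # reconstruction error
        errors.append(float(np.mean(E * E)))
        dR = E * R * (1 - R)          # backprop through decoder sigmoid
        dH = (dR @ W) * H * (1 - H)   # backprop through encoder sigmoid
        W -= lr * (X.T @ dH + dR.T @ H) / n   # both encoder and decoder terms
        b -= lr * dH.sum(axis=0) / n
        c -= lr * dR.sum(axis=0) / n
    encode = lambda Z: sig(Z @ W + b)
    return encode, errors
```

The hidden activations `encode(X)` play the role of the "feature information of the designated pixel points" in the text; no labels are needed, which is what makes the training unsupervised.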
In an exemplary implementation, the unsupervised deep learning model obtains the feature information of the designated pixel points by SAE (Stacked Auto-Encoder) training.
In one exemplary implementation, the parameters of the second spatial transformation model are its deformation field, and determining them from the feature information of the designated pixel points may include: acquiring four-neighborhood information of the designated pixel points by belief propagation according to their feature information; performing belief propagation at least once based on the four-neighborhood information, and acquiring the belief vector of each designated pixel point after propagation; determining the offset value of each designated pixel point in the first registered image relative to the first medical image from its belief vector; and determining the deformation field of the second spatial transformation model from those offset values.
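The belief-propagation step can be sketched as min-sum message passing on the four-neighbor pixel grid. The toy below simplifies to purely horizontal integer offsets and wrap-around boundaries via np.roll; the function name, the absolute-difference data cost, and the linear smoothness cost are our assumptions, not the patent's.

```python
import numpy as np

def bp_offsets(fixed, moving, max_d=2, iters=20):
    """Min-sum belief propagation over offset labels d in [-max_d, max_d].
    Data cost: |moving(x+d) - fixed(x)|.  Smoothness cost: |d - d'| between
    4-neighbors.  The belief vector at each pixel is its data cost plus
    all incoming messages; the offset is the label of minimal belief."""
    H, W = fixed.shape
    labels = np.arange(-max_d, max_d + 1)
    L = len(labels)
    D = np.empty((H, W, L))                       # data cost per label
    for i, d in enumerate(labels):
        D[:, :, i] = np.abs(np.roll(moving, -d, axis=1) - fixed)
    V = np.abs(labels[:, None] - labels[None, :]).astype(float)  # V(d, d')
    msgs = {k: np.zeros((H, W, L)) for k in "udlr"}  # incoming messages
    opposite = {"u": "d", "d": "u", "l": "r", "r": "l"}
    steps = {"u": (0, 1), "d": (0, -1), "l": (1, 1), "r": (1, -1)}
    for _ in range(iters):
        new = {}
        for k, (axis, step) in steps.items():
            # sender's partial belief, excluding the message from the target
            h = D + sum(msgs[j] for j in "udlr" if j != opposite[k])
            m = np.min(h[..., :, None] + V[None, None], axis=2)
            m -= m.min(axis=2, keepdims=True)     # normalize messages
            new[k] = np.roll(m, step, axis=axis)  # deliver to the neighbor
        msgs = new
    belief = D + sum(msgs.values())               # belief vector per pixel
    return labels[np.argmin(belief, axis=2)]      # per-pixel offset value
```

The returned per-pixel offsets are exactly the "offset value of the designated pixel point relative to the first medical image" that the deformation field is built from; a full implementation would use 2-D offset labels and proper image boundaries.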
In one exemplary implementation, belief propagation may be performed at least 20 times.

In one exemplary implementation, the medical image processing method may further include: determining a first selected region from the first medical image, and calculating a first number of pixel points in the first selected region whose gray values lie within a set range; determining a first corresponding region, corresponding to the first selected region, from the second registered image, and calculating a second number of pixel points in the first corresponding region whose gray values lie within the set range; and obtaining, from the first number and the second number, the number of pixel points of the first medical image that have changed relative to the second medical image.
For example, a brighter region in the first medical image (with gray values in a set range such as 10-30) is selected as the first selected region, and its volume (number of pixels) is calculated automatically. Meanwhile, through the registration process, the first corresponding region in the second registered image (which corresponds to the second medical image) can be determined, and the number (or volume) of pixel points in that region whose values also lie within the set gray range can be calculated automatically. Comparing the two volumes yields the number of pixel points in the selected region of the first medical image whose gray value has changed, i.e. the change in the volume of the selected region.
Fig. 2 is a diagram illustrating a second flow of a medical image processing method according to an embodiment of the present invention. As shown in fig. 2, in this embodiment, the medical image processing method may include the following steps:
S201, a first medical image and a second medical image of a specified target are acquired, and the imaging time of the second medical image is different from that of the first medical image.
S202, registering the second medical image to the first medical image based on the first space transformation model to obtain a first registered image corresponding to the second medical image.
S203, registering the first registration image to the first medical image by using an unsupervised deep learning registration method based on the second spatial transformation model to obtain a second registration image corresponding to the first registration image.
S204, obtaining, from the second registered image, the number of pixel points of the selected region of the first medical image that have changed relative to the second medical image.
Illustratively, this includes: determining a first selected region from the first medical image (the first selected region being chosen by an operator such as a physician), and calculating a first number of pixel points in the region whose gray values lie within a set range; determining a first corresponding region, corresponding to the first selected region, from the second registered image, and calculating a second number of pixel points in it whose gray values lie within the set range; and obtaining, from the first and second numbers, the number of pixel points of the first medical image (selected region) that have changed relative to the second medical image. Of course, in the same way, a second or third selected region, or any other region, can be determined from the first medical image, and the number of changed pixel points calculated for it.
In the above embodiment, for example, a brighter region in the first medical image (with gray values in a set range such as 10-30) may be selected as the first selected region, and its volume (number of pixels) calculated automatically; meanwhile, through the registration process, the first corresponding region in the second registered image (which corresponds to the second medical image) can be determined, and the number (or volume) of pixel points in that region whose values also lie within the set gray range calculated automatically.
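The counting itself reduces to thresholding each region; a hedged sketch with illustrative names and the example gray range 10-30 as defaults:

```python
import numpy as np

def count_in_range(region, lo=10, hi=30):
    """Number of pixels in a selected region whose gray value lies in [lo, hi]."""
    return int(np.count_nonzero((region >= lo) & (region <= hi)))

def changed_pixel_count(first_region, corresponding_region, lo=10, hi=30):
    """Change in the in-range pixel count between the selected region of the
    first medical image and the corresponding region of the second
    registered image (positive = the region grew)."""
    return count_in_range(first_region, lo, hi) - count_in_range(corresponding_region, lo, hi)
```

Because S201 to S203 have already aligned the images, the two regions cover the same anatomy, so the count difference reflects a real change rather than misalignment.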
Fig. 3 is a third flowchart of a medical image processing method according to an embodiment of the present invention. As shown in fig. 3, in this embodiment, the medical image processing method may include the following steps:
S301, a first medical image and a second medical image of a specified target are acquired, and the imaging time of the second medical image is different from that of the first medical image.
S302, registering the second medical image to the first medical image based on the first spatial transformation model to obtain a first registered image corresponding to the second medical image.
S303, registering the first registration image to the first medical image by using an unsupervised deep learning registration method based on the second spatial transformation model to obtain a second registration image corresponding to the first registration image.
S304, carrying out subtraction processing on the first medical image and the second registration image to obtain a subtraction image of the first medical image and the second medical image.
Here, subtraction is a means of visualizing the changes over time between medical images of the same part or organ acquired at different times.
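A subtraction step of this kind can be sketched as a signed pixel-wise difference computed in a wider dtype; remapping the result back into the displayable 8-bit range, as below, is a common convention and an assumption here, not something the embodiment prescribes.

```python
import numpy as np

def subtract_images(first, second_registered):
    """Temporal subtraction of two registered images: a signed
    difference computed in int16 to avoid uint8 wrap-around, then
    shifted back into the displayable 8-bit range."""
    diff = first.astype(np.int16) - second_registered.astype(np.int16)
    return ((diff + 255) // 2).astype(np.uint8)  # -255..255 -> 0..255

a = np.array([[100, 200]], dtype=np.uint8)
b = np.array([[100, 150]], dtype=np.uint8)
sub = subtract_images(a, b)  # unchanged pixels map to mid-gray 127
```

With this convention, regions that grew brighter appear above mid-gray in the subtraction image and regions that grew darker appear below it.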
The medical image may contain the identity information of the patient when stored, and the identity information may include the height, weight, medical insurance account number, identification number and other information that can uniquely determine the identity of the patient.
After acquiring the medical image data, the medical image data may be stored in a storage device, for example, on a cloud server or a medical device server. If necessary, the medical image data may be acquired from a storage device and then processed according to the method flow provided by the embodiment of the invention.
For example, after the patient takes X-ray image a11 at hospital A, it is stored on a cloud server or a medical device server; after the patient takes X-ray image a12 at hospital B, that image is likewise uploaded to the cloud server or medical device server. At the time of the patient's visit, the doctor can remotely retrieve X-ray images a11 and a12 stored on the cloud server and run the processing flow shown in fig. 2 on the local server to obtain a subtraction image of X-ray image a11 and X-ray image a12. In S303, the first registration image is registered to the first medical image using an unsupervised deep learning registration method, aligning the specified fine structures of the first and second medical images; this reduces artifacts and improves registration accuracy, so the subtraction result is correspondingly better.
S305, the subtraction image is displayed, or the first medical image, the second medical image and the subtraction image are displayed side by side.
The embodiment shown in fig. 3 may display the subtracted images (e.g., display the subtracted images in rows or columns) after they are obtained, for the user to view or store the subtracted images.
The embodiment shown in fig. 3 may also display the first medical image, the second medical image and the subtraction image side by side after obtaining the subtraction image, which not only makes it convenient for the user to view or store the subtraction image, but also lets the user consult the relevant content of the original images while studying the subtraction image.
The medical image processing method according to an embodiment of the present invention is further described below by way of example.
In this example, assume that the first medical image is a current pulmonary X-ray image of a patient, denoted as image A, and the second medical image is an X-ray image of the same patient's lungs at a past time, denoted as image B. The processing of image A and image B is as follows:
a1, image a and image B are acquired.
a2, performing global coarse registration on image B using affine registration based on the downhill simplex method, so that the lungs in the two images are aligned, obtaining registered image B1.
Wherein, step a2 may include the following sub-steps:
a21, extracting pixel points of interest, i.e., sampling points, from image A; here the sampling points are 50,000 randomly extracted pixel points of interest in image A.
a22, setting the solution space of initial solutions according to the downhill simplex optimization algorithm; for the six-parameter two-dimensional affine transformation, the simplex holds 7 solutions (vertices).
a23, calculating the mutual information measure of each solution in the solution space from the sampling points extracted in a21 and the two-dimensional affine transformation formula.
a24, updating the solution space according to the mutual information measures computed in a23 and the update rule of the downhill simplex algorithm.
a25, checking whether the convergence condition of the downhill simplex is satisfied; if not, returning to step a23; if it is satisfied, the solution in the updated solution space is the optimal solution found by the downhill simplex, and step a26 is executed.
a26, taking the optimal solution found by the downhill simplex as the optimal affine transformation parameters, and performing the affine transformation on image B according to those parameters and the two-dimensional affine transformation formula, obtaining image B1.
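Steps a23 through a25 repeatedly evaluate candidate solutions with a mutual information measure. One minimal, histogram-based estimate of the mutual information between two images is sketched below; the bin count and function name are illustrative choices, not the patent's specification.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=8):
    """Mutual information from the joint gray-level histogram:
    MI = sum_{a,b} p(a,b) * log( p(a,b) / (p(a) * p(b)) )."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal over img_a
    py = pxy.sum(axis=0, keepdims=True)     # marginal over img_b
    nz = pxy > 0                            # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (32, 32))
mi_self = mutual_information(a, a)                               # aligned
mi_rand = mutual_information(a, rng.integers(0, 256, (32, 32)))  # unrelated
```

A registration optimizer such as the downhill simplex would search for the affine parameters that maximize this measure over the sampled points.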
The two-dimensional affine transformation formula is as follows:

$$x' = a_{11}x + a_{12}y + t_x, \qquad y' = a_{21}x + a_{22}y + t_y \tag{1}$$

In formula (1), x' and y' are the spatial coordinates of a pixel after the affine transformation, x and y are the coordinates before the transformation, and $a_{11}$, $a_{12}$, $a_{21}$, $a_{22}$, $t_x$, $t_y$ are the affine transformation parameters.
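Applying formula (1) to a whole image can be sketched as follows. Inverse mapping with nearest-neighbour sampling is one common implementation choice and an assumption here; in the embodiment the six parameters would come from the downhill simplex search of steps a22 to a26.

```python
import numpy as np

def affine_warp(image, params):
    """Warp an image with the 2-D affine model of formula (1):
    x' = a11*x + a12*y + tx,  y' = a21*x + a22*y + ty.
    Each output pixel looks up its source via the inverse map;
    nearest-neighbour sampling, out-of-bounds pixels become 0."""
    a11, a12, a21, a22, tx, ty = params
    h, w = image.shape
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]                      # target (y', x') grid
    inv = np.linalg.inv(np.array([[a11, a12], [a21, a22]]))
    src = inv @ np.stack([xs.ravel() - tx, ys.ravel() - ty])
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = image[sy[ok], sx[ok]]
    return out

# Pure translation by one pixel in x: identity matrix, tx = 1.
img = np.array([[1, 2, 3], [4, 5, 6]])
shifted = affine_warp(img, (1.0, 0.0, 0.0, 1.0, 1.0, 0.0))
```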
a3, performing local fine-structure registration on image B1 using an optical-flow-field transformation model based on unsupervised deep learning, obtaining image B2.
Wherein, step a3 may include the following sub-steps:
a31, centering on the pixel points of interest in the two images, image A and image B1, extracting patches as training samples for deep learning, and obtaining the feature information of each pixel point through SAE (stacked autoencoder) training.
a32, using a BP (Belief Propagation) algorithm based on MRF (Markov Random Fields), computing four-neighborhood spatial information for each pixel point from the features obtained in a31, and then performing belief propagation, computing the belief vector at each propagation. After T propagations (T may be 100), the belief vector of each pixel point is computed; the label corresponding to the minimum element of the belief vector is the offset value of that pixel point, and from these offsets the deformation field of the whole image is obtained. To speed up the BP optimization, a multi-resolution, coarse-to-fine optimization strategy may be adopted.
Here, the MRF-based BP algorithm will be described.
The energy of a general MRF can be expressed in the form of formula (2):

$$E(f) = \sum_{p \in P} D_p(f_p) + \sum_{(p,q) \in N} V(f_p, f_q) \tag{2}$$
In formula (2), the first term is the data cost $D_p(f_p)$ of assigning label $f_p$ to node p, and the second term is the smoothness cost $V(f_p, f_q)$ attached to assigning labels $f_p$ and $f_q$ to the two neighboring nodes p and q. P denotes all nodes in the MRF and N denotes the neighbor pairs, which in an image may be the 4-neighborhood or the 8-neighborhood. Using the Max-Product method of the BP algorithm (min-sum in the negative-log domain), the message passed between nodes can be expressed as formula (3):

$$m^t_{p \to q}(f_q) = \min_{f_p} \Big( V(f_p, f_q) + D_p(f_p) + \sum_{s \in N(p) \setminus \{q\}} m^{t-1}_{s \to p}(f_p) \Big) \tag{3}$$
At initialization (i.e., at t = 0), each message m may be set to 0.
The process of BP is as follows:
(1) For each node p, the message it propagates to a neighbor q is computed: the label $f_q$ is fixed, and the label $f_p$ minimizing the message is found by searching the label space of p.
(2) After T rounds of message propagation, the belief vector of each node can be computed; for each node q, the label $f_q$ corresponding to the smallest element of the vector $b_q(f_q)$ is the solution of node q in the MRF. The calculation formula is as follows:

$$b_q(f_q) = D_q(f_q) + \sum_{p \in N(q)} m^T_{p \to q}(f_q) \tag{4}$$
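Formulas (2) to (4) can be exercised with a small min-sum BP (the negative-log form of max-product) on a 4-connected grid. This is a generic sketch, not the patent's implementation; the toy costs at the end are illustrative.

```python
import numpy as np

def min_sum_bp(data_cost, smooth_cost, iters=10):
    """Min-sum belief propagation on a 4-connected grid, i.e. the
    negative-log form of the Max-Product BP of formulas (2)-(4).
    data_cost:   (H, W, L) array, D_p(f_p).
    smooth_cost: (L, L) array, V(f_p, f_q).
    Returns the per-pixel label minimising the belief b_q(f_q)."""
    h, w, L = data_cost.shape
    # msgs[d][i, j] = message arriving at pixel (i, j) from the
    # neighbour in direction d ('d': from above, 'u': from below,
    # 'r': from the left, 'l': from the right).  m^0 = 0.
    msgs = {d: np.zeros((h, w, L)) for d in "udlr"}

    def send(in_sum):
        # m_{p->q}(f_q) = min_{f_p} (V(f_p, f_q) + in_sum(f_p)), where
        # in_sum already holds D_p plus the incoming messages to p
        # except the one that came from q, as in formula (3).
        return (in_sum[..., :, None] + smooth_cost).min(axis=-2)

    for _ in range(iters):
        total = data_cost + sum(msgs.values())
        new = {d: np.zeros((h, w, L)) for d in "udlr"}
        new["d"][1:] = send((total - msgs["u"])[:-1])        # top -> bottom
        new["u"][:-1] = send((total - msgs["d"])[1:])        # bottom -> top
        new["r"][:, 1:] = send((total - msgs["l"])[:, :-1])  # left -> right
        new["l"][:, :-1] = send((total - msgs["r"])[:, 1:])  # right -> left
        msgs = new

    belief = data_cost + sum(msgs.values())   # b_q(f_q), formula (4)
    return belief.argmin(axis=-1)

# Toy example: two labels on a 3x3 grid; the centre pixel's data cost
# prefers label 1, all other pixels prefer label 0, and a Potts
# smoothness of 0.5 per disagreeing edge pulls the centre back to 0.
D = np.zeros((3, 3, 2)); D[..., 1] = 1.0
D[1, 1] = [1.0, 0.0]
V = 0.5 * (1 - np.eye(2))
labels = min_sum_bp(D, V, iters=5)
```

In the registration context the labels would be candidate displacements and the data cost would measure feature dissimilarity between the two images.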
a33, taking the optimal deformation field obtained by the BP algorithm as the parameters of the optical-flow-field transformation model, and applying the optical-flow-field transformation to image B1 to obtain image B2.
a5, subtracting image B2 from image A to obtain subtraction image C.
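Step a33's optical-flow-field transformation amounts to resampling image B1 through per-pixel offsets. A minimal nearest-neighbour version is sketched below; the interpolation scheme and the sign convention of the field are assumptions.

```python
import numpy as np

def warp_with_deformation_field(image, dx, dy):
    """Resample an image through a dense deformation field: output
    pixel (x, y) takes the value at (x + dx[y, x], y + dy[y, x]) in
    the input (nearest neighbour; out-of-bounds pixels become 0)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.rint(xs + dx).astype(int)
    sy = np.rint(ys + dy).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(image)
    out[ok] = image[sy[ok], sx[ok]]
    return out

img = np.array([[1, 2], [3, 4]])
dx = np.ones((2, 2))    # every pixel samples one column to its right
dy = np.zeros((2, 2))
warped = warp_with_deformation_field(img, dx, dy)
```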
An example of the past X-ray image and the present X-ray image of the chest of the same human body is shown in fig. 4(a), and a subtraction image of the X-ray image at the present time and the X-ray image at the past time in fig. 4(a) is shown in fig. 4 (b).
According to the medical image processing method provided by the embodiments of the present invention, a first medical image and a second medical image of a specified target acquired at different times are obtained; the second medical image is registered to the first medical image to obtain a first registered image, and the first registered image is then further registered to the first medical image using an unsupervised deep learning registration method. This aligns the fine structures in the specified target and reduces artifacts, thereby improving the precision of the registration result; in particular, a better registration effect can be obtained for tissues containing many fine structures.
Example two
An embodiment of the present invention provides a medical image processing apparatus, which is capable of implementing the steps of the medical image processing method in the foregoing embodiment.
Fig. 5 is a functional block diagram of a medical image processing apparatus according to an embodiment of the present invention. As shown in fig. 5, in the present embodiment, the medical image processing apparatus includes:
an acquiring module 510, configured to acquire a first medical image and a second medical image of a specified target, where an imaging time of the second medical image is different from an imaging time of the first medical image;
a first registration module 520, configured to register the second medical image to the first medical image based on the first spatial transformation model, so as to obtain a first registered image corresponding to the second medical image;
a second registration module 530, configured to register the first registration image to the first medical image by using an unsupervised deep learning registration method based on the second spatial transformation model, so as to obtain a second registration image corresponding to the first registration image.
In one exemplary implementation, the second registration module 530, when configured to register the first registration image to the first medical image by using the unsupervised deep learning registration method based on the second spatial transformation model to obtain the second registration image, may be configured to: establish an unsupervised deep learning model, and train designated pixel points in the first medical image and the first registration image according to the unsupervised deep learning model to obtain feature information of the designated pixel points; determine parameters of the second spatial transformation model according to the feature information of the designated pixel points; and apply the transformation corresponding to the second spatial transformation model to the first registration image according to those parameters, to obtain the second registration image.
In an exemplary implementation, the unsupervised deep learning model obtains the feature information of the designated pixel points through stacked autoencoder (SAE) training.
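A stacked autoencoder in the spirit of this paragraph can be sketched as two tied-weight autoencoder layers trained greedily on flattened patches, the second layer on the first layer's codes; the layer sizes, learning rate, and epoch count are arbitrary illustrative choices, not the embodiment's.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden, epochs=200, lr=0.1):
    """One tied-weight autoencoder layer: encode h = sigmoid(XW + b),
    decode X' = hW^T + c, trained by plain gradient descent on MSE."""
    n, d = X.shape
    W = rng.normal(0, 0.1, (d, hidden)); b = np.zeros(hidden); c = np.zeros(d)
    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # encoder
        R = H @ W.T + c                              # linear decoder
        err = R - X
        gH = err @ W * H * (1 - H)                   # backprop through sigmoid
        gW = X.T @ gH + err.T @ H                    # tied-weight gradient
        W -= lr * gW / n; b -= lr * gH.mean(0); c -= lr * err.mean(0)
    return W, b

def encode(X, W, b):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# "Stacked" self-encoding: train layer 1 on raw patches, then train
# layer 2 on the layer-1 codes; the final codes serve as pixel features.
patches = rng.random((64, 9))                # 3x3 patches, flattened
W1, b1 = train_autoencoder(patches, hidden=6)
codes1 = encode(patches, W1, b1)
W2, b2 = train_autoencoder(codes1, hidden=3)
features = encode(codes1, W2, b2)
```

In the embodiment, such per-pixel features would feed the BP-based deformation-field estimation rather than being used directly.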
In one exemplary implementation, the parameters of the second spatial transformation model are a deformation field of the second spatial transformation model. The second registration module 530, when configured to determine the parameters of the second spatial transformation model according to the feature information of the designated pixel points, may be configured to: acquire four-neighborhood information of the designated pixel points by a belief propagation method according to their feature information; perform belief propagation at least once based on the four-neighborhood information of the designated pixel points, and acquire a belief vector of each designated pixel point after the belief propagation; determine an offset value of each designated pixel point in the first registration image relative to the first medical image according to its belief vector; and determine the deformation field of the second spatial transformation model based on those offset values. In one exemplary implementation, the belief propagation may be performed at least 20 times.
In one exemplary implementation, the medical image processing apparatus may further include: and the subtraction module is used for carrying out subtraction processing on the first medical image and the second registration image to obtain a subtraction image of the first medical image and the second medical image.
In one exemplary implementation, the medical image processing apparatus may further include: and the display module is used for displaying the subtraction image, or displaying the first medical image, the second medical image and the subtraction image side by side.
In one exemplary implementation, the medical image processing apparatus may further include: a first selection module, configured to determine a first selected region from the first medical image and calculate a first number of pixel points in the first selected region whose gray values are within a set range; and a second selection module, configured to determine a first corresponding region corresponding to the first selected region from the second registration image, calculate a second number of pixel points in the first corresponding region whose gray values are within the set range, and acquire, from the first and second pixel point numbers, the number of pixel points by which the first medical image has changed relative to the second medical image.
Since the medical image processing apparatus in this embodiment is capable of executing the medical image processing method in the first embodiment, reference may be made to the related description of the medical image processing method in the first embodiment.
According to the medical image processing device provided by the embodiments of the present invention, a first medical image and a second medical image of a specified target acquired at different times are obtained; the second medical image is registered to the first medical image to obtain a first registered image, and the first registered image is then further registered to the first medical image using an unsupervised deep learning registration method. This aligns the fine structures in the specified target and reduces artifacts, thereby improving the precision of the registration result; in particular, a better registration effect can be obtained for tissues containing many fine structures.
EXAMPLE III
An embodiment of the present invention provides a medical image processing apparatus, which may be a computer. The apparatus may include: a processor; a memory for storing processor-executable instructions; the processor is configured to: acquiring a first medical image and a second medical image of a specified target, wherein the imaging time of the second medical image is different from that of the first medical image; registering the second medical image to the first medical image based on the first spatial transformation model to obtain a first registered image corresponding to the second medical image; and registering the first registered image to the first medical image by using an unsupervised deep learning registration method based on the second spatial transformation model to obtain a second registered image corresponding to the first registered image.
Fig. 6 is a simplified block diagram of a medical image processing apparatus. Referring to fig. 6, the medical image processing apparatus 600 may comprise a processor 601 connected to one or more data storage means, which may comprise a storage medium 606 and a memory unit 604. The medical image processing apparatus 600 may further comprise an input interface 605 and an output interface 607 for communicating with another device or system. The program codes executed by the CPU of the processor 601 may be stored in the memory unit 604 or the storage medium 606.
The processor 601 in the medical image processing apparatus 600 calls the program code stored in the memory unit 604 or the storage medium 606 to execute the following steps:
acquiring a first medical image and a second medical image of a specified target, wherein the imaging time of the second medical image is different from that of the first medical image;
registering the second medical image to the first medical image based on the first spatial transformation model to obtain a first registered image corresponding to the second medical image;
and registering the first registered image to the first medical image by using an unsupervised deep learning registration method based on the second spatial transformation model to obtain a second registered image corresponding to the first registered image.
Optionally, the processor may further perform the following operation: acquiring, according to the second registration image, the number of pixel points by which the selected region of the first medical image has changed relative to the second medical image. Alternatively, the processor further performs the following operation: performing subtraction processing on the first medical image and the second registration image to obtain a subtraction image of the first medical image and the second medical image.
In an exemplary implementation, the medical image processing apparatus 600 may further include input devices such as a mouse and a keyboard, and output devices such as a display, each connected to the processor. A selected region can be marked on the medical image with an input device such as a mouse; quantitative results obtained by the processor, such as the number of pixel points by which the selected region of the first medical image has changed relative to the second medical image, can be shown on an output device such as a display, as can one or more of the subtraction image, the first medical image, and the second medical image.
In the above embodiments, the storage medium may be a Read-Only Memory (ROM), or may be a Read-write medium, such as a hard disk or a flash Memory. The Memory unit may be a Random Access Memory (RAM). The memory unit may be physically integrated with the processor or integrated in the memory or implemented as a separate unit.
The processor is the control center of the above-mentioned device (i.e., the server or client described above) and provides a processing unit for executing instructions, handling interrupts, and providing timing and various other functions. Optionally, the processor includes one or more Central Processing Units (CPUs), such as CPU 0 and CPU 1 shown in fig. 6. The apparatus may include one or more processors. The processor may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. Unless otherwise stated, a component such as a processor or a memory described as performing a task may be implemented as a general component temporarily used to perform the task at a given time, or as a specific component specially manufactured to perform the task. The term "processor" as used herein refers to one or more devices, circuits and/or processing cores that process data, such as computer program instructions.
The program code executed by the CPU of the processor may be stored in a memory unit or a storage medium. Alternatively, the program code stored in the storage medium may be copied into the memory unit for execution by the CPU of the processor. The processor may execute at least one kernel (e.g., LINUX™, UNIX™, WINDOWS™, ANDROID™, IOS™); such kernels are well known for controlling the operation of these devices by controlling the execution of other programs or processes, controlling communication with peripheral devices, and controlling the use of computer device resources.
The above elements in the above devices may be connected to each other by a bus, such as one of a data bus, an address bus, a control bus, an expansion bus, and a local bus, or any combination thereof.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (7)

1. A method of medical image processing, the method comprising:
acquiring a first medical image and a second medical image of a specified target, wherein the imaging time of the second medical image is different from that of the first medical image;
registering the second medical image to the first medical image based on a first spatial transformation model, resulting in a first registered image corresponding to the second medical image;
registering the first registration image to the first medical image by using an unsupervised deep learning registration method based on a second spatial transformation model to obtain a second registration image corresponding to the first registration image;
wherein registering the first registration image to the first medical image by using the unsupervised deep learning registration method based on the second spatial transformation model to obtain the second registration image corresponding to the first registration image comprises:
establishing an unsupervised deep learning model, and training designated pixel points in the first medical image and the first registration image according to the unsupervised deep learning model to obtain feature information of the designated pixel points;
determining parameters of the second spatial transformation model according to the feature information of the designated pixel points; and
applying, according to the parameters of the second spatial transformation model, the transformation corresponding to the second spatial transformation model to the first registration image to obtain the second registration image;
wherein the unsupervised deep learning model obtains the feature information of the designated pixel points through stacked autoencoder training;
the parameters of the second spatial transformation model are a deformation field of the second spatial transformation model; and
determining the parameters of the second spatial transformation model according to the feature information of the designated pixel points comprises:
acquiring four-neighborhood information of the designated pixel points by a belief propagation method according to the feature information of the designated pixel points;
performing belief propagation at least once based on the four-neighborhood information of the designated pixel points, and acquiring a belief vector of each designated pixel point after the belief propagation;
determining an offset value of each designated pixel point in the first registration image relative to the first medical image according to the belief vector of the designated pixel point; and
determining the deformation field of the second spatial transformation model based on the offset values of the designated pixel points in the first registration image relative to the first medical image.
2. The method of claim 1, wherein the belief propagation is performed at least 20 times.
3. The method of claim 1, further comprising:
and carrying out subtraction processing on the first medical image and the second registration image to obtain a subtraction image of the first medical image and the second medical image.
4. The method of claim 1, further comprising:
determining a first selected area from the first medical image and calculating a first number of pixel points in the first selected area having a gray scale value within a set range;
determining a first corresponding region corresponding to the first selected region from the second registered image, and calculating a second number of pixel points in the first corresponding region having a gray scale value in the set range;
and acquiring, according to the first pixel point number and the second pixel point number, the number of pixel points by which the first medical image has changed relative to the second medical image.
5. A medical image processing apparatus, characterized in that the apparatus comprises:
an acquisition module for acquiring a first medical image and a second medical image of a specified target, the imaging time of the second medical image being different from the imaging time of the first medical image;
a first registration module, configured to register the second medical image to the first medical image based on a first spatial transformation model, so as to obtain a first registered image corresponding to the second medical image;
a second registration module, configured to register the first registration image to the first medical image by using an unsupervised deep learning registration method based on a second spatial transformation model, so as to obtain a second registration image corresponding to the first registration image;
the second registration module is further configured to: establish an unsupervised deep learning model, and train designated pixel points in the first medical image and the first registration image according to the unsupervised deep learning model to obtain feature information of the designated pixel points;
determine parameters of the second spatial transformation model according to the feature information of the designated pixel points; and
apply, according to the parameters of the second spatial transformation model, the transformation corresponding to the second spatial transformation model to the first registration image to obtain the second registration image;
wherein the unsupervised deep learning model obtains the feature information of the designated pixel points through stacked autoencoder training;
the parameters of the second spatial transformation model are a deformation field of the second spatial transformation model;
and the second registration module, when determining the parameters of the second spatial transformation model according to the feature information of the designated pixel points, is configured to: acquire four-neighborhood information of the designated pixel points by a belief propagation method according to the feature information of the designated pixel points;
perform belief propagation at least once based on the four-neighborhood information of the designated pixel points, and acquire a belief vector of each designated pixel point after the belief propagation;
determine an offset value of each designated pixel point in the first registration image relative to the first medical image according to the belief vector of the designated pixel point; and
determine the deformation field of the second spatial transformation model based on the offset values of the designated pixel points in the first registration image relative to the first medical image.
6. A medical image processing apparatus, characterized in that the apparatus comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to:
acquiring a first medical image and a second medical image of a specified target, wherein the imaging time of the second medical image is different from that of the first medical image;
registering the second medical image to the first medical image based on a first spatial transformation model, acquiring a first registered image corresponding to the second medical image;
registering the first registration image to the first medical image by using an unsupervised deep learning registration method based on a second spatial transformation model to obtain a second registration image corresponding to the first registration image;
registering the first registration image to the first medical image by using an unsupervised deep learning registration method based on a second spatial transformation model to obtain a second registration image corresponding to the first registration image, wherein the registration method comprises the following steps:
establishing an unsupervised deep learning model, and training the designated pixel points in the first medical image and the first registration image with the unsupervised deep learning model to obtain feature information of the designated pixel points;
determining parameters of the second spatial transformation model according to the feature information of the designated pixel points;
transforming the first registration image with the second spatial transformation model according to the parameters of the second spatial transformation model to obtain the second registration image;
wherein the unsupervised deep learning model obtains the feature information of the designated pixel points by a stacked auto-encoder training scheme;
the parameters of the second spatial transformation model are a deformation field of the second spatial transformation model;
determining the parameters of the second spatial transformation model according to the feature information of the designated pixel points comprises:
acquiring four-neighborhood information of the designated pixel points by a belief propagation method according to the feature information of the designated pixel points;
performing at least one round of belief propagation based on the four-neighborhood information of the designated pixel points, and acquiring a belief vector of each designated pixel point after the belief propagation;
determining an offset value of each designated pixel point in the first registration image relative to the first medical image according to the belief vector of the designated pixel point;
determining a deformation field of the second spatial transformation model based on the offset values of the designated pixel points in the first registration image relative to the first medical image.
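The claim's "at least one round of belief propagation over the four-neighborhood" can be illustrated with a deliberately simplified sweep: each pixel's belief combines its own data cost with the current values at its four neighbors. This is a hedged sketch (a message-free variant in the spirit of min-sum BP, with edge padding at the image border), not the patent's actual procedure; the `(H, W, K)` cost layout is an assumption.

```python
import numpy as np

def propagate_once(cost):
    """One simplified belief-propagation sweep on a 4-neighborhood grid.
    cost: (H, W, K) per-pixel data cost over K candidate labels (displacements).
    Returns updated beliefs where each pixel adds in the costs of its four
    neighbors (border pixels reuse their own values via edge padding)."""
    belief = cost.copy()
    padded = np.pad(cost, ((1, 1), (1, 1), (0, 0)), mode="edge")
    belief += padded[:-2, 1:-1]   # neighbor above
    belief += padded[2:, 1:-1]    # neighbor below
    belief += padded[1:-1, :-2]   # neighbor to the left
    belief += padded[1:-1, 2:]    # neighbor to the right
    return belief

# Toy check: uniform unit costs over a 2x2 grid with 3 labels, so every
# belief becomes its own cost plus four neighbor costs.
c = np.ones((2, 2, 3))
updated = propagate_once(c)
```

Full min-sum BP would pass direction-specific messages and exclude the recipient's own contribution; this sketch keeps only the four-neighborhood aggregation structure that the claim recites.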
7. The device of claim 6, wherein the processor is further configured to:
perform subtraction processing on the first medical image and the second registration image to obtain a subtraction image of the first medical image and the second medical image.
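The subtraction step of claim 7 amounts to a per-pixel difference between the reference image and the registered image. A minimal sketch (dtype widened first, since medical images are often stored as unsigned integers and the difference can be negative):

```python
import numpy as np

def subtraction_image(first, second_registered):
    """Per-pixel difference of the first medical image and the second
    registration image. Cast to a signed type so negative differences
    (regions darker in the follow-up image) are preserved, not clipped."""
    return first.astype(np.int32) - second_registered.astype(np.int32)

# Toy 2x2 example with 8-bit intensities.
a = np.array([[100, 120], [130, 90]], dtype=np.uint8)
b = np.array([[90, 125], [130, 80]], dtype=np.uint8)
d = subtraction_image(a, b)   # [[10, -5], [0, 10]]
```

Nonzero values in the subtraction image highlight where the two acquisitions differ after registration, which is the clinical point of the two-stage alignment in the preceding claims.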
CN201710296578.XA 2017-04-28 2017-04-28 Medical image processing method, device and equipment Active CN107133946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710296578.XA CN107133946B (en) 2017-04-28 2017-04-28 Medical image processing method, device and equipment


Publications (2)

Publication Number Publication Date
CN107133946A CN107133946A (en) 2017-09-05
CN107133946B true CN107133946B (en) 2020-05-22

Family

ID=59715412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710296578.XA Active CN107133946B (en) 2017-04-28 2017-04-28 Medical image processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN107133946B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886508B (en) * 2017-11-23 2021-11-23 上海联影医疗科技股份有限公司 Differential subtraction method and medical image processing method and system
CN108053428A (en) * 2017-12-28 2018-05-18 上海联影医疗科技有限公司 A kind of method for registering of medical image, device and equipment
CN108171738B (en) * 2018-01-25 2022-02-01 北京雅森科技发展有限公司 Multi-modal medical image registration method based on brain function template
CN109377522B (en) * 2018-10-19 2019-10-25 北京青燕祥云科技有限公司 A kind of Lung neoplasm medical image registration method and its device
CN109767461B (en) * 2018-12-28 2021-10-22 上海联影智能医疗科技有限公司 Medical image registration method and device, computer equipment and storage medium
CN109993709B (en) * 2019-03-18 2021-01-12 绍兴文理学院 Image registration error correction method based on deep learning
CN110517302B (en) * 2019-08-30 2022-06-24 联想(北京)有限公司 Image processing method and device
CN110782489B (en) * 2019-10-21 2022-09-30 科大讯飞股份有限公司 Image data matching method, device and equipment and computer readable storage medium
CN110853083B (en) * 2019-10-28 2023-02-17 上海联影智能医疗科技有限公司 Deformation field processing method and device, electronic equipment and storage medium
WO2021184314A1 (en) * 2020-03-19 2021-09-23 西安大医集团股份有限公司 Image registration method and apparatus, radiotherapy device and computer readable storage medium
CN111768393A (en) * 2020-07-01 2020-10-13 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN112634250B (en) * 2020-12-29 2023-05-16 上海联影医疗科技股份有限公司 Image registration method, device, computer equipment and storage medium of multifunctional CT system
CN113012207A (en) * 2021-03-23 2021-06-22 北京安德医智科技有限公司 Image registration method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853508A (en) * 2010-06-08 2010-10-06 浙江工业大学 Binocular stereo vision matching method based on generalized belief propagation of direction set
CN102136142A (en) * 2011-03-16 2011-07-27 内蒙古科技大学 Nonrigid medical image registration method based on self-adapting triangular meshes
CN102609936A (en) * 2012-01-10 2012-07-25 四川长虹电器股份有限公司 Stereo image matching method based on belief propagation
CN103793904A (en) * 2012-10-29 2014-05-14 深圳先进技术研究院 Image registration device and method for image registration
CN103854276A (en) * 2012-12-04 2014-06-11 株式会社东芝 Image registration device and method, image segmentation device and method and medical image device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Segmentation of digital human brain slice images based on SAE deep feature learning; Zhao Guangjun et al.; Journal of Computer-Aided Design & Computer Graphics; 2016-08-31; Vol. 28, No. 8; full text *
Interest point detection algorithm based on unsupervised feature learning; Zhou Laien et al.; Computer Science; 2016-09-30; Vol. 43, No. 9; Sections 1-4 *


Similar Documents

Publication Publication Date Title
CN107133946B (en) Medical image processing method, device and equipment
CN107123137B (en) Medical image processing method and equipment
JP7134962B2 (en) Systems and methods for probabilistic segmentation in anatomical imaging
RU2677764C2 (en) Registration of medical images
CN106340015B (en) A kind of localization method and device of key point
US9875544B2 (en) Registration of fluoroscopic images of the chest and corresponding 3D image data based on the ribs and spine
EP2396765B1 (en) Group-wise image registration based on motion model
CN110766735B (en) Image matching method, device, equipment and storage medium
JP6541334B2 (en) IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM
JP2019511268A (en) Determination of rotational orientation in three-dimensional images of deep brain stimulation electrodes
US10628963B2 (en) Automatic detection of an artifact in patient image
US20190035094A1 (en) Image processing apparatus, image processing method, image processing system, and program
JP5415245B2 (en) MEDICAL IMAGE DISPLAY DEVICE, METHOD, AND PROGRAM
EP3895600A1 (en) Method for measuring volume of organ by using artificial neural network, and apparatus therefor
US20190171467A1 (en) Anatomy-aware adaptation of graphical user interface
EP3828829B1 (en) Method and apparatus for determining mid-sagittal plane in magnetic resonance images
CN111563496A (en) Continuous learning for automatic view planning for image acquisition
CN114943690A (en) Medical image processing method, device, computer equipment and readable storage medium
JP5961512B2 (en) Image processing apparatus, operation method thereof, and image processing program
JP6688642B2 (en) Image processing apparatus and medical information management system
US11311207B2 (en) Systems and methods for pulmonary ventilation from image processing
US11645767B2 (en) Capturing a misalignment
US11138736B2 (en) Information processing apparatus and information processing method
JP2019500114A (en) Determination of alignment accuracy
CN111476768B (en) Image registration method, image registration device, path planning method, path planning device, path planning system and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258

Patentee after: Shanghai Lianying Medical Technology Co., Ltd

Address before: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258

Patentee before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.
