CN115713590A - Three-dimensional reconstruction image processing method and system based on CT - Google Patents

Three-dimensional reconstruction image processing method and system based on CT

Info

Publication number
CN115713590A
CN115713590A (application CN202211181298.1A)
Authority
CN
China
Prior art keywords
dimensional
organ
initial
patient
tissue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211181298.1A
Other languages
Chinese (zh)
Inventor
刘权兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Second Affiliated Hospital Army Medical University
Original Assignee
Second Affiliated Hospital Army Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Second Affiliated Hospital Army Medical University filed Critical Second Affiliated Hospital Army Medical University
Priority to CN202211181298.1A priority Critical patent/CN115713590A/en
Publication of CN115713590A publication Critical patent/CN115713590A/en
Pending legal-status Critical Current

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a three-dimensional reconstruction image processing method and system based on CT. An initial three-dimensional space is established according to the CT image data of a patient, the CT image data of the patient is segmented according to mark points to obtain each tissue part, and finally a three-dimensional display model of the diseased organ is generated from each tissue part in the initial three-dimensional space. In this way, the two-dimensional CT image of the patient is converted into a three-dimensional CT image for display, which makes it easier for a doctor to recognize each organ. Because the three-dimensional display model of the diseased organ is generated from the tissue parts segmented from the CT image, the diseased organ is displayed directly in the three-dimensional space, which helps medical students and other inexperienced doctors to recognize the lesion and further improves the accuracy of disease diagnosis. Even non-professionals can recognize the diseased organ in a CT image through this scheme, so the difficulty of interpreting CT images is greatly reduced and the working efficiency of doctors is improved.

Description

Three-dimensional reconstruction image processing method and system based on CT
Technical Field
The invention relates to the technical field of image processing, in particular to a three-dimensional reconstruction image processing method and system based on CT.
Background
In CT scanning, an X-ray beam scans a layer of a certain thickness of the examined part of the human body, and a detector receives the X-rays that penetrate the layer. The received X-rays are converted into visible light, the visible light is converted into electrical signals by a photoelectric converter, the electrical signals are converted into digital signals by an analog-to-digital converter, and the digital signals are finally input into a computer for processing to obtain a two-dimensional CT image.
CT scanning is a common auxiliary means of medical diagnosis and treatment, but because two-dimensional CT images are abstract and organ structures are complex, they are difficult to interpret, and only professional doctors such as radiologists, or doctors with rich experience, can understand two-dimensional CT images quickly. It takes a long time for a medical student who has just started, or an inexperienced doctor, to see an abnormal lesion in a patient's CT image.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a three-dimensional reconstruction image processing method and system based on CT, which can reduce the identification difficulty of CT images.
In a first aspect, the present invention provides a three-dimensional reconstruction image processing method based on CT.
In a first implementation manner, a method for processing a three-dimensional reconstruction image based on CT includes:
acquiring CT image data of a patient;
acquiring an initial three-dimensional space according to CT image data of a patient;
acquiring marking points, and segmenting the CT image data of the patient according to the marking points to obtain each tissue part;
and generating a three-dimensional display model of the lesion organ according to each tissue part in the initial three-dimensional space.
With reference to the first implementable manner, in a second implementable manner, acquiring an initial three-dimensional space from CT image data of a patient includes:
determining an initial position according to the image size, the number of pixel points and the image layer thickness in the CT image data of the patient;
acquiring three-dimensional coordinates according to coordinate information in the CT image data of the patient;
and establishing an initial three-dimensional space according to the initial position and the three-dimensional coordinates.
With reference to the first implementable manner, in a third implementable manner, the method for obtaining each tissue region by segmenting CT image data of a patient based on a marker point includes:
and expanding each initial origin point according to a preset growth rule by taking each mark point as the initial origin point to obtain a plurality of pixel point sets, wherein each pixel point set is each tissue part.
With reference to the third implementable manner, in a fourth implementable manner, the growth rule is: and bringing the pixel points adjacent and similar to the initial origin into the initial origin to form a new origin, and repeatedly executing the operation by taking the new origin as the initial origin.
With reference to the first implementable manner, in a fifth implementable manner, generating a three-dimensional display model of a diseased organ according to each tissue site in the initial three-dimensional space includes:
obtaining labels of each tissue site; the label is used for representing the name and lesion condition of each tissue part;
generating a three-dimensional model according to each tissue part in the initial three-dimensional space;
taking the label of each tissue part as a first label of the corresponding organ of each tissue part in the three-dimensional model;
judging whether each organ in the three-dimensional model has abnormal conditions or not, and obtaining a judgment result; and taking the judgment result as a second label of each organ in the three-dimensional model;
and labeling the organ with the pathological change condition in the three-dimensional model according to the first label and the second label to obtain a three-dimensional display model of the pathological change organ.
With reference to the fifth implementable manner, in a sixth implementable manner, obtaining a tag for each tissue site includes:
matching each tissue part in a preset organ database to obtain a tissue image similar to each tissue part;
the name and lesion state corresponding to the tissue image having the highest similarity to each tissue region are used as the label of each tissue region.
With reference to the fifth implementable manner, in a seventh implementable manner, determining whether each organ in the three-dimensional model has an abnormal condition and obtaining a determination result includes:
comparing the three-dimensional model with a preset standard three-dimensional model, and segmenting parts which do not accord with the parameter range of the standard three-dimensional model;
amplifying the segmented parts in equal proportion, and acquiring texture information;
and analyzing according to the texture information, judging whether an abnormal condition exists or not, and obtaining a judgment result.
With reference to the fifth implementable manner, in an eighth implementable manner, labeling an organ with a pathological change condition in the three-dimensional model according to the first tag and the second tag, to obtain a three-dimensional display model of the pathological change organ, includes:
displaying only the first labeled organ in the three-dimensional model through a first color;
displaying only the organ with the second label in the three-dimensional model through a second color;
and displaying the organ with the first label and the second label in the three-dimensional model through a third color to obtain a three-dimensional display model of the lesion organ.
In a second aspect, the present invention provides a three-dimensional reconstruction image processing system based on CT.
In a ninth implementable manner, a CT-based three-dimensional reconstructed image processing system includes:
a patient CT image data acquisition module configured to acquire patient CT image data;
an initial three-dimensional space establishing module configured to establish an initial three-dimensional space according to the patient CT image data;
the tissue part acquisition module is configured to acquire the mark points and perform segmentation processing on the CT image data of the patient according to the mark points to obtain tissue parts;
and the lesion organ three-dimensional model display model generation module is configured to generate a lesion organ three-dimensional display model according to each tissue part in the initial three-dimensional space.
According to the technical scheme, the beneficial technical effects of the invention are as follows:
1. An initial three-dimensional space is established according to the CT image data of the patient, the CT image data of the patient is segmented according to the mark points to obtain each tissue part, and finally a three-dimensional display model of the diseased organ is generated from each tissue part in the initial three-dimensional space. In this way, the two-dimensional CT image of the patient is converted into a three-dimensional CT image for display, which makes it easier for a doctor to recognize each organ. Because the three-dimensional display model of the diseased organ is generated from the tissue parts segmented from the CT image, the diseased organ is displayed directly in the three-dimensional space, which helps medical students and other inexperienced doctors to recognize the lesion and further improves the accuracy of disease diagnosis. Even non-professionals can recognize the diseased organ in a CT image through this scheme, so the difficulty of interpreting CT images is greatly reduced and the working efficiency of doctors is improved.
2. Lesion conditions are identified on the tissue parts of the two-dimensional CT image, and organ abnormality is judged on the three-dimensional model, so that the diseased organ in the patient's CT image is identified in different dimensions. This improves the accuracy of disease diagnosis and deepens the doctor's understanding of the organ's lesion condition in different dimensions.
3. The identification results of the different dimensions are marked with different colors, which further reduces the difficulty of identifying the diseased organ in the three-dimensional model and helps to distinguish how the diseased organ is displayed in each dimension.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings used in the detailed description or the prior art descriptions will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of a CT-based three-dimensional reconstructed image processing method according to the present invention;
fig. 2 is a schematic structural diagram of a three-dimensional reconstruction image processing system based on CT provided in the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains. The terms "first," "second," and the like in the description, in the claims, and in the drawings of embodiments of the present disclosure are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the present disclosure described herein can be practiced in orders other than those illustrated or described herein. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. The term "plurality" means two or more unless otherwise specified. In the embodiments of the present disclosure, the character "/" indicates that the preceding and following objects are in an "or" relationship. For example, A/B represents: A or B. The term "and/or" describes an association relationship between objects and indicates that three relationships may exist. For example, A and/or B represents: A, or B, or A and B. The term "correspond" may refer to an association or binding relationship; "A corresponds to B" means that there is an association or binding relationship between A and B.
With reference to fig. 1, the present embodiment provides a three-dimensional reconstruction image processing method based on CT, including:
step S01, acquiring CT image data of a patient;
s02, acquiring an initial three-dimensional space according to CT image data of a patient;
s03, acquiring mark points, and segmenting the CT image data of the patient according to the mark points to obtain each tissue part;
and S04, generating a three-dimensional display model of the lesion organ according to each tissue part in the initial three-dimensional space.
Optionally, acquiring an initial three-dimensional space from the patient CT image data comprises: determining an initial position according to the image size, the number of pixel points and the image layer thickness in the CT image data of the patient; acquiring three-dimensional coordinates according to coordinate information in the CT image data of the patient; and establishing an initial three-dimensional space according to the initial position and the three-dimensional coordinates.
Optionally, a table look-up is performed in a preset human body position table using the image size, the number of pixel points and the image layer thickness to obtain the initial position. The preset human body position table stores the correspondence between, on the one hand, the image size, the number of pixel points and the image layer thickness and, on the other hand, the initial position.
In some embodiments, the human body part scanned by the CT image is determined according to the image size, the number of pixel points and the image layer thickness, and the position of the human body part in the human body model in the three-dimensional space is determined as an initial position, which represents the approximate position, range and direction of the human body part scanned by the CT image in the three-dimensional space.
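As an illustration of the table look-up described above, the correspondence can be held in a small in-memory table; the key layout, the example entries and all names below are assumptions made for this sketch and are not specified by the patent:

```python
# Hypothetical sketch of the preset human-body-position table look-up.
from typing import NamedTuple, Optional

class InitialPosition(NamedTuple):
    body_part: str
    origin_mm: tuple      # approximate position of the scanned part in the body model
    extent_mm: tuple      # approximate range covered by the scan
    orientation: str      # approximate scan direction

# Preset table: (image size, number of pixel points, image layer thickness in mm) -> initial position
BODY_POSITION_TABLE = {
    ((512, 512), 512 * 512, 5.0): InitialPosition("chest", (0, 0, 1200), (350, 250, 300), "axial"),
    ((512, 512), 512 * 512, 1.0): InitialPosition("head", (0, 0, 1600), (200, 250, 250), "axial"),
}

def look_up_initial_position(image_size, pixel_count, layer_thickness_mm) -> Optional[InitialPosition]:
    """Return the stored initial position for the given CT scan parameters, if present."""
    return BODY_POSITION_TABLE.get((tuple(image_size), pixel_count, float(layer_thickness_mm)))
```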
Optionally, the coordinate information in the CT image data of the patient includes voxel coordinates, and the CT image is formed by dividing the selected slice plane into a plurality of cuboids having the same volume, each cuboid being a voxel.
Optionally, obtaining the three-dimensional coordinates according to the coordinate information is implemented by the following formula:
x' = X_x·Δx·x + Y_x·Δy·y + S_x
y' = X_y·Δx·x + Y_y·Δy·y + S_y
z' = X_z·Δx·x + Y_z·Δy·y + S_z

wherein (x, y) is the coordinate of the voxel at point a in the CT image, (x', y', z') is the three-dimensional coordinate of point a in the real world, and the patient coordinate axes are (x, y, z); X_x, X_y and X_z are the angle relationships between the X-axis direction of the CT image and the x-, y- and z-axis directions of the patient, and Y_x, Y_y and Y_z are the angle relationships between the Y-axis direction of the CT image and the x-, y- and z-axis directions of the patient; Δx is the pixel spacing in the x direction, Δy is the pixel spacing in the y direction, and (S_x, S_y, S_z) is the three-dimensional coordinate corresponding to the pixel point at the upper-left corner of the CT image. In some embodiments, the angle relationship is represented by the cosine of the included angle, i.e. the projection of each direction of the CT image onto the three directions of the patient coordinate system.
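A minimal sketch of this mapping, assuming the angle relationships are supplied as direction cosines; the function and variable names below are illustrative rather than taken from the patent:

```python
import numpy as np

def voxel_to_patient(x, y, X_cos, Y_cos, dx, dy, S):
    """Map the voxel coordinate (x, y) of one CT slice to patient coordinates (x', y', z').

    X_cos, Y_cos: direction cosines of the image X- and Y-axes w.r.t. the patient x/y/z axes,
                  i.e. (X_x, X_y, X_z) and (Y_x, Y_y, Y_z).
    dx, dy:       pixel spacing in the image x and y directions.
    S:            patient coordinate (S_x, S_y, S_z) of the upper-left pixel of the slice.
    """
    X_cos = np.asarray(X_cos, dtype=float)
    Y_cos = np.asarray(Y_cos, dtype=float)
    S = np.asarray(S, dtype=float)
    return S + X_cos * dx * x + Y_cos * dy * y

# Example: an axial slice whose image axes coincide with the patient x and y axes.
p = voxel_to_patient(100, 200, X_cos=(1, 0, 0), Y_cos=(0, 1, 0), dx=0.7, dy=0.7, S=(-180.0, -180.0, 50.0))
```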
Optionally, establishing an initial three-dimensional space according to the initial position and the three-dimensional coordinates includes: initializing a three-dimensional space with a computer according to the initial position, the length and width of the CT image, the number of pixel points and the three-dimensional coordinates, which may be implemented with reference to the prior art.
Optionally, acquiring the mark points includes: selecting the coordinates of a certain point in the CT image as a mark point, or having a doctor input the coordinates of any point in the CT image as the mark point.
Optionally, the segmenting the CT image data of the patient according to the marked points to obtain each tissue region includes: and expanding each initial origin point according to a preset growth rule by taking each mark point as the initial origin point to obtain a plurality of pixel point sets, wherein each pixel point set is each tissue part.
Optionally, the growth rule is: and bringing the pixel points which are adjacent and similar to the initial origin into the initial origin to form a new origin, and repeatedly executing the operation by taking the new origin as the initial origin.
Optionally, whether the initial origin point and the adjacent pixel point are similar or not is judged according to the gray level and the texture of the initial origin point and the adjacent pixel point.
Optionally, in a case that a difference between the initial origin and its adjacent pixel point is within a preset range, it is determined that the two are similar.
In some embodiments, the segmentation of the CT image data of the patient based on the landmark points comprises:
step 21, taking the mark point as an initial origin;
step 22, judging whether the initial origin has adjacent and similar pixel points, if so, bringing the adjacent and similar pixel points into a set of the initial origin to form a new initial origin;
step 23, repeating step 22 until the initial origin has no similar adjacent pixel points; the set formed by the initial origin is then one segmented region, namely a tissue part;
step 24, if pixel points that have not yet been selected or included by similarity remain in the CT image of the patient, selecting one of them as a mark point and executing steps 21, 22 and 23 again;
and step 25, if no such pixel points remain in the CT image of the patient, determining that the segmentation of the CT image of the patient is complete (a compact sketch of these steps follows this list).
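The steps above amount to a standard region-growing procedure. The following sketch assumes a 4-neighbourhood adjacency and a grey-level difference threshold as the similarity test, neither of which is fixed by the patent:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, threshold=10):
    """Grow one tissue region from a mark point: repeatedly absorb pixels that are
    adjacent to the current set and similar (here: grey-level difference <= threshold)."""
    h, w = image.shape
    visited = np.zeros((h, w), dtype=bool)
    region = []
    queue = deque([seed])
    visited[seed] = True
    seed_value = float(image[seed])
    while queue:
        r, c = queue.popleft()
        region.append((r, c))
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):      # 4-neighbourhood (assumption)
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                if abs(float(image[nr, nc]) - seed_value) <= threshold:
                    visited[nr, nc] = True
                    queue.append((nr, nc))
    return region, visited

def segment_all(image, threshold=10):
    """Steps 21-25: segment the whole slice by repeatedly seeding unassigned pixels."""
    assigned = np.zeros(image.shape, dtype=bool)
    regions = []
    while not assigned.all():
        seed = tuple(np.argwhere(~assigned)[0])                # pick any unassigned pixel as mark point
        region, visited = region_grow(image, seed, threshold)
        regions.append(region)
        assigned |= visited
    return regions
```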
Optionally, generating a three-dimensional display model of the diseased organ from each tissue site in the initial three-dimensional space includes: obtaining labels of each tissue site; the label is used for representing the name and lesion condition of each tissue part; generating a three-dimensional model according to each tissue part in the initial three-dimensional space; taking the label of each tissue part as a first label of the corresponding organ of each tissue part in the three-dimensional model; judging whether each organ in the three-dimensional model has abnormal conditions or not, and obtaining a judgment result; and taking the judgment result as a second label of each organ in the three-dimensional model; and labeling the organ with the pathological change condition in the three-dimensional model according to the first label and the second label to obtain a three-dimensional display model of the pathological change organ.
Optionally, obtaining a label for each tissue site comprises: matching each tissue part in a preset organ database to obtain a tissue image similar to each tissue part; the name and lesion state corresponding to the tissue image having the highest similarity to each tissue region are used as the label of each tissue region.
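One possible reading of this matching step, assuming the organ database stores reference tissue images together with name and lesion-state labels and that similarity is measured by normalized cross-correlation; the patent fixes neither the measure nor the database layout:

```python
import numpy as np

def similarity(a, b):
    """Normalized cross-correlation between two equally sized tissue images (assumed measure)."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def label_tissue(tissue_image, organ_database):
    """Return (name, lesion_state) of the database entry most similar to the tissue part.

    organ_database: iterable of (reference_image, name, lesion_state) tuples (hypothetical layout).
    """
    best = max(organ_database, key=lambda entry: similarity(tissue_image, entry[0]))
    _, name, lesion_state = best
    return name, lesion_state
```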
In some embodiments, generating a three-dimensional model from the tissue parts within the initial three-dimensional space may be implemented with reference to the prior art.
Optionally, the determining whether each organ in the three-dimensional model has an abnormal condition, and obtaining a determination result includes: comparing the three-dimensional model with a preset standard three-dimensional model, and segmenting parts which do not accord with the parameter range of the standard three-dimensional model; amplifying the segmented parts in equal proportion, and acquiring texture information; and analyzing according to the texture information, judging whether an abnormal condition exists or not, and obtaining a judgment result.
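One way this comparison could be organized, under the assumptions that the standard three-dimensional model provides a per-organ parameter range (volume is used here) and that the texture analysis is abstracted into a scoring function; these specifics are illustrative and not taken from the patent:

```python
def find_abnormal_organs(model_organs, standard_ranges, texture_score, texture_threshold=0.5):
    """Compare each organ of the reconstructed model with the preset standard model.

    model_organs:    dict name -> {"volume": float, "voxels": ndarray}   (assumed structure)
    standard_ranges: dict name -> (min_volume, max_volume)               (assumed structure)
    texture_score:   callable taking the segmented organ voxels and returning an anomaly score
    """
    judgments = {}
    for name, organ in model_organs.items():
        lo, hi = standard_ranges.get(name, (float("-inf"), float("inf")))
        if lo <= organ["volume"] <= hi:
            judgments[name] = False                  # within the standard parameter range
            continue
        # Out of range: isolate the part, enlarge it proportionally and analyse its texture
        # (the proportional enlargement itself is omitted in this sketch).
        judgments[name] = texture_score(organ["voxels"]) > texture_threshold
    return judgments
```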
Optionally, labeling an organ with a pathological change condition in the three-dimensional model according to the first label and the second label to obtain a three-dimensional display model of the diseased organ includes: displaying an organ that has only the first label in the three-dimensional model in a first color; displaying an organ that has only the second label in the three-dimensional model in a second color; and displaying an organ that has both the first label and the second label in the three-dimensional model in a third color, thereby obtaining the three-dimensional display model of the diseased organ. In this way, lesion conditions are identified on the tissue parts of the two-dimensional CT image and organ abnormality is judged on the three-dimensional model, so that abnormalities in the patient's CT image are identified in both two and three dimensions, which helps to improve the accuracy of disease diagnosis and deepens the doctor's understanding of the organ's lesion condition in different dimensions. Marking the identification results of the different dimensions with different colors further reduces the difficulty of identifying the diseased organ in the three-dimensional model and helps to distinguish how the diseased organ is displayed in each dimension.
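The three display cases can be captured in a small colour-assignment rule; the concrete colours below are placeholders:

```python
FIRST_ONLY_COLOR = (1.0, 0.0, 0.0)     # first colour  (placeholder: red)
SECOND_ONLY_COLOR = (0.0, 0.0, 1.0)    # second colour (placeholder: blue)
BOTH_COLOR = (1.0, 0.0, 1.0)           # third colour  (placeholder: magenta)

def organ_display_color(has_first_label, has_second_label):
    """Return the display colour for an organ according to its first/second lesion labels,
    or None if the organ carries no lesion label and keeps its default rendering."""
    if has_first_label and has_second_label:
        return BOTH_COLOR
    if has_first_label:
        return FIRST_ONLY_COLOR
    if has_second_label:
        return SECOND_ONLY_COLOR
    return None
```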
Optionally, the lesion conditions of the tissue parts and the abnormal conditions of the organs in the three-dimensional model are input into a disease case repository for analysis and retrieval, similar cases are obtained, and the similar cases are displayed to the doctor as reference data for the three-dimensional display model of the diseased organ. In this way, similar cases are retrieved from the disease case repository according to the lesion conditions of the tissue parts and the abnormal conditions of the organs in the three-dimensional model, which provides the doctor with more reference data, spares the doctor from searching for similar cases separately, improves working efficiency, and gives inexperienced doctors an opportunity to learn and accumulate experience; this is practical and convenient.
Optionally, when the similarity of a similar case is greater than a preset threshold, the disease diagnosis result of that similar case is displayed directly to the doctor as a reference diagnosis for the three-dimensional display model of the diseased organ.
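A sketch of the retrieval and threshold rule described above; the case repository layout, the similarity function and the number of returned cases are assumptions:

```python
def retrieve_reference_cases(query_findings, case_repository, case_similarity, threshold=0.9):
    """Search the disease case repository for cases similar to the current findings.

    Returns (similar_cases, reference_diagnosis): the diagnosis of the best-matching case is
    shown directly to the doctor only when its similarity exceeds the preset threshold.
    """
    scored = sorted(
        ((case_similarity(query_findings, case), case) for case in case_repository),
        key=lambda pair: pair[0],
        reverse=True,
    )
    similar_cases = [case for score, case in scored[:5]]        # top matches shown as reference data
    best_score, best_case = scored[0] if scored else (0.0, None)
    reference_diagnosis = best_case["diagnosis"] if best_score > threshold else None
    return similar_cases, reference_diagnosis
```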
Referring to fig. 2, the present embodiment provides a CT-based three-dimensional reconstruction image processing system, including: the system comprises a patient CT image data acquisition module 101, an initial three-dimensional space establishment module 102, a tissue part acquisition module 103 and a lesion organ three-dimensional model display model generation module 104. The patient CT image data acquisition module 101 is configured to acquire patient CT image data; the initial three-dimensional space establishing module 102 is configured to establish an initial three-dimensional space from the patient CT image data; the tissue part acquisition module 103 is configured to acquire the marker points and perform segmentation processing on the patient CT image data according to the marker points to obtain tissue parts; the diseased organ three-dimensional model display model generation module 104 is configured to generate a diseased organ three-dimensional display model from each tissue site within the initial three-dimensional space.
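A minimal sketch of how the four modules could be composed, with all class and method names being illustrative only:

```python
class CTReconstructionPipeline:
    """Illustrative composition of the four modules of the described system."""

    def __init__(self, acquisition, space_builder, tissue_segmenter, model_generator):
        self.acquisition = acquisition            # patient CT image data acquisition module
        self.space_builder = space_builder        # initial three-dimensional space establishing module
        self.tissue_segmenter = tissue_segmenter  # tissue part acquisition module
        self.model_generator = model_generator    # diseased-organ display model generation module

    def run(self, patient_id, mark_points):
        ct_data = self.acquisition.acquire(patient_id)
        space = self.space_builder.build(ct_data)
        tissue_parts = self.tissue_segmenter.segment(ct_data, mark_points)
        return self.model_generator.generate(space, tissue_parts)
```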
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.

Claims (9)

1. A three-dimensional reconstruction image processing method based on CT is characterized by comprising the following steps:
acquiring CT image data of a patient;
acquiring an initial three-dimensional space according to CT image data of a patient;
acquiring a mark point, and segmenting the CT image data of the patient according to the mark point to obtain each tissue part;
and generating a three-dimensional display model of the lesion organ according to each tissue part in the initial three-dimensional space.
2. The method of claim 1, wherein acquiring an initial three-dimensional space from CT image data of a patient comprises:
determining an initial position according to the image size, the number of pixel points and the image layer thickness in the CT image data of the patient;
acquiring three-dimensional coordinates according to coordinate information in CT image data of a patient;
and establishing an initial three-dimensional space according to the initial position and the three-dimensional coordinates.
3. The method of claim 1, wherein segmenting the CT image data of the patient based on the marker points to obtain tissue regions comprises:
and expanding each initial origin point by taking each mark point as the initial origin point according to a preset growth rule to obtain a plurality of pixel point sets, wherein each pixel point set is each tissue part.
4. The method of claim 3, wherein the growth rule is: and bringing the pixel points which are adjacent and similar to the initial origin into the initial origin to form a new origin, and repeatedly executing the operation by taking the new origin as the initial origin.
5. The method of claim 1, wherein generating a three-dimensional display model of a diseased organ from each tissue site within the initial three-dimensional space comprises:
obtaining labels of each tissue site; the label is used for representing the name and the pathological change condition of each tissue part;
generating a three-dimensional model from each tissue site within the initial three-dimensional space;
taking the label of each tissue part as a first label of the corresponding organ of each tissue part in the three-dimensional model;
judging whether each organ in the three-dimensional model has abnormal conditions or not, and obtaining a judgment result; taking the judgment result as a second label of each organ in the three-dimensional model;
and labeling the organ with the pathological change condition in the three-dimensional model according to the first label and the second label to obtain a three-dimensional display model of the pathological change organ.
6. The method of claim 5, wherein obtaining a label for each tissue site comprises:
matching each tissue part in a preset organ database to obtain a tissue image similar to each tissue part;
the name and lesion state corresponding to the tissue image having the highest similarity for each tissue region are used as the label for each tissue region.
7. The method of claim 5, wherein determining whether each organ in the three-dimensional model is abnormal and obtaining the determination result comprises:
comparing the three-dimensional model with a preset standard three-dimensional model, and segmenting parts which do not accord with the parameter range of the standard three-dimensional model;
amplifying the segmented parts in equal proportion, and acquiring texture information;
and analyzing according to the texture information, judging whether an abnormal condition exists or not, and obtaining a judgment result.
8. The method of claim 5, wherein labeling an organ with a pathological condition in the three-dimensional model according to the first label and the second label to obtain a three-dimensional display model of the pathological organ comprises:
displaying only the first labeled organ in the three-dimensional model through a first color;
displaying only the second labeled organ in the three-dimensional model by a second color;
and displaying the organ with the first label and the second label in the three-dimensional model through a third color to obtain a three-dimensional display model of the lesion organ.
9. A CT-based three-dimensional reconstructed image processing system, comprising:
a patient CT image data acquisition module configured to acquire patient CT image data;
an initial three-dimensional space establishing module configured to establish an initial three-dimensional space according to the patient CT image data;
the tissue part acquisition module is configured to acquire the mark points and perform segmentation processing on the CT image data of the patient according to the mark points to obtain tissue parts;
and the lesion organ three-dimensional model display model generation module is configured to generate a lesion organ three-dimensional display model according to each tissue part in the initial three-dimensional space.
CN202211181298.1A 2022-09-27 2022-09-27 Three-dimensional reconstruction image processing method and system based on CT Pending CN115713590A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211181298.1A CN115713590A (en) 2022-09-27 2022-09-27 Three-dimensional reconstruction image processing method and system based on CT

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211181298.1A CN115713590A (en) 2022-09-27 2022-09-27 Three-dimensional reconstruction image processing method and system based on CT

Publications (1)

Publication Number Publication Date
CN115713590A true CN115713590A (en) 2023-02-24

Family

ID=85230781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211181298.1A Pending CN115713590A (en) 2022-09-27 2022-09-27 Three-dimensional reconstruction image processing method and system based on CT

Country Status (1)

Country Link
CN (1) CN115713590A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870169A (en) * 2020-06-12 2021-12-31 杭州普健医疗科技有限公司 Medical image labeling method, medium and electronic equipment
CN117059235A (en) * 2023-08-17 2023-11-14 经智信息科技(山东)有限公司 Automatic rendering method and device for CT image
CN117541731A (en) * 2024-01-09 2024-02-09 天津市肿瘤医院(天津医科大学肿瘤医院) Pulmonary visualization three-dimensional reconstruction method based on ultrasonic data
CN117541731B (en) * 2024-01-09 2024-03-12 天津市肿瘤医院(天津医科大学肿瘤医院) Pulmonary visualization three-dimensional reconstruction method based on ultrasonic data

Similar Documents

Publication Publication Date Title
CN108520519B (en) Image processing method and device and computer readable storage medium
CN115713590A (en) Three-dimensional reconstruction image processing method and system based on CT
CN105719324B (en) Image processing apparatus and image processing method
CN111933251B (en) Medical image labeling method and system
US8929635B2 (en) Method and system for tooth segmentation in dental images
CN104637024B (en) Medical image-processing apparatus and medical image processing method
CN107405126B (en) Retrieving corresponding structures of pairs of medical images
CN109754396B (en) Image registration method and device, computer equipment and storage medium
CN112862833B (en) Blood vessel segmentation method, electronic device and storage medium
US8285013B2 (en) Method and apparatus for detecting abnormal patterns within diagnosis target image utilizing the past positions of abnormal patterns
US7304644B2 (en) System and method for performing a virtual endoscopy
US11200443B2 (en) Image processing apparatus, image processing method, and image processing system
CN110580948A (en) Medical image display method and display equipment
JP2010162340A (en) Image processing apparatus, image processing method, and image processing program
CN112102294A (en) Training method and device for generating countermeasure network, and image registration method and device
JP2019028887A (en) Image processing method
CN110993067A (en) Medical image labeling system
JP2004174241A (en) Image forming method
JP2007105352A (en) Difference image display device, difference image display method, and program thereof
CN116779093B (en) Method and device for generating medical image structured report and computer equipment
JPWO2005009242A1 (en) Medical image processing apparatus and method
US20030128890A1 (en) Method of forming different images of an object to be examined
CN113344873A (en) Blood vessel segmentation method, device and computer readable medium
CN111091605B (en) Rib visualization method, identification method and computer-readable storage medium
WO2023183699A2 (en) Method and system for cross-referencing of two-dimensional (2d) ultrasound scans of a tissue volume

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination