CN112766314A - Anatomical structure recognition method, electronic device, and storage medium

Anatomical structure recognition method, electronic device, and storage medium

Info

Publication number
CN112766314A
CN112766314A
Authority
CN
China
Prior art keywords
anatomical structure
medical image
layer
target
image
Prior art date
Legal status
Pending
Application number
CN202011625657.9A
Other languages
Chinese (zh)
Inventor
高菲菲
曹晓欢
薛忠
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202011625657.9A
Publication of CN112766314A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images

Abstract

The invention discloses an anatomical structure identification method, an electronic device, and a storage medium. The identification method comprises the following steps: performing anatomical structure recognition on a target medical image to obtain an initial anatomical structure category; performing part recognition on the target medical image to obtain a part category; determining the candidate anatomical structure categories corresponding to the part category; and correcting the initial anatomical structure category with the candidate anatomical structure categories to obtain a final anatomical structure category. In addition to identifying the initial anatomical structure category corresponding to the medical image, the invention also identifies the part category corresponding to the medical image and corrects the identified initial anatomical structure category with the candidate anatomical structure categories determined from the part category, thereby obtaining the final anatomical structure category.

Description

Anatomical structure recognition method, electronic device, and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an anatomical structure recognition method, an electronic device, and a storage medium.
Background
The conventional approach to identifying an anatomical structure in a medical image is to apply a trained anatomical structure detection model. Such a model may be obtained by training on image data in which the anatomical structures have been annotated against a labeling gold standard, and may be implemented, for example, with a convolutional neural network or with a conventional algorithm such as a B-spline model. However, the recognition result of the trained anatomical structure detection model may be erroneous, which can lead to adverse consequences.
Disclosure of Invention
The invention aims to overcome the defect in the prior art that the identification result of an anatomical structure detection model may be wrong, and provides an anatomical structure identification method, an electronic device, and a storage medium.
The invention solves the technical problems through the following technical scheme:
a method of identifying an anatomical structure, comprising:
carrying out anatomical structure recognition on the target medical image to obtain an initial anatomical structure category;
carrying out part identification on the target medical image to obtain a part type;
determining a candidate anatomical structure type corresponding to the part type;
and correcting the initial anatomical structure category by using the candidate anatomical structure category to obtain a final anatomical structure category.
Preferably, the step of correcting the initial anatomical structure class using the candidate anatomical structure class comprises:
and solving the intersection of the candidate anatomical structure category and the initial anatomical structure category to obtain a final anatomical structure category.
Preferably, the step of performing the part identification on the target medical image to obtain the part category includes:
inputting the target medical image into a part recognition model to obtain a target part label range, wherein the part recognition model is obtained by utilizing medical image training of each layer of image labeled with a part label;
the step of determining the candidate anatomical structure class corresponding to the part class comprises:
searching a preset dictionary by using the target part label range to obtain a candidate anatomical structure category;
the preset dictionary comprises second corresponding relations between the part labels and the anatomical structure types.
Preferably, the step of performing anatomical structure recognition on the target medical image to obtain an initial anatomical structure category includes:
inputting the target medical image into an anatomical structure recognition model to obtain an initial anatomical structure type and an initial position range corresponding to the initial anatomical structure type, wherein the anatomical structure recognition model is obtained by training a medical image marked with an anatomical structure marking frame, and marking information of the anatomical structure marking frame comprises the anatomical structure type and the position range of the anatomical structure marking frame in the medical image;
after the step of obtaining the target site label range, the method further comprises the following steps:
determining a layer position range corresponding to the final anatomical structure type in the target medical image by using a target part label range corresponding to the final anatomical structure type;
determining a corresponding candidate position range of the final anatomical structure category in the target medical image by using the corresponding layer position range of the final anatomical structure category in the target medical image;
and solving the intersection of the candidate position range corresponding to the final anatomical structure category and the initial position range to obtain the final position range corresponding to the final anatomical structure category.
Preferably, the target medical image includes a multi-layer image, and the step of inputting the target medical image into the part recognition model to obtain the target part tag includes:
inputting a top-level image in the target medical image into the part recognition model to obtain a top-level part label;
inputting a bottom layer image in the target medical image into the part identification model to obtain a bottom layer part label;
and obtaining the label range of the target part according to the top layer part label and the bottom layer part label.
Preferably, the target medical image includes a multi-layer image, and the step of inputting the target medical image into the part recognition model to obtain the target part tag includes:
extracting a plurality of random layer images from the target medical image;
respectively inputting the random layer images into the part identification model to obtain a random layer part label of each random layer image;
fitting the random layer part labels of the random layer images and the layer positions of the random layer images in the target medical image to obtain a third corresponding relation between the part labels and the layer positions of the same layer image;
acquiring a top layer part label of a top layer image and a bottom layer part label of a bottom layer image in the target medical image according to the third corresponding relation;
and obtaining the label range of the target part according to the top layer part label and the bottom layer part label.
Preferably, the step of fitting the random layer part labels of the random layer sub-images and the layer positions of the random layer sub-images in the target medical image comprises:
and performing linear fitting or continuous piecewise linear fitting on the random layer part labels of the plurality of random layer sub-images and the layer positions of the plurality of random layer sub-images in the target medical image.
Preferably, after the step of obtaining the final position range corresponding to the final anatomical structure category, the method further includes:
and filtering the target medical image according to the final position range corresponding to the final anatomical structure type to obtain a target anatomical structure image corresponding to the final anatomical structure type.
Preferably, after the step of obtaining the target anatomical structure image corresponding to the final anatomical structure category, the method further includes:
and processing the target anatomical structure image by utilizing an algorithm corresponding to the final anatomical structure type to obtain a processing result.
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing any of the above anatomical structure identification methods when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of any of the above-mentioned anatomical structure identification methods.
The positive effects of the invention are as follows: in addition to identifying the initial anatomical structure category corresponding to the medical image, the method and the device also identify the part category corresponding to the medical image, and correct the identified initial anatomical structure category with the candidate anatomical structure categories determined from the part category, so as to obtain the final anatomical structure category.
Drawings
Fig. 1 is a partial flowchart of an identification method of an anatomical structure according to embodiment 1 of the present invention.
Fig. 2 is a flowchart of step S1021 in the anatomical structure identification method according to embodiment 1 of the present invention.
Fig. 3 is another flowchart of step S1021 in the anatomical structure identification method according to embodiment 1 of the present invention.
Fig. 4 is another partial flowchart of the anatomical structure identification method according to embodiment 1 of the present invention.
Fig. 5 is a block configuration diagram of an anatomical structure recognition system according to embodiment 2 of the present invention.
Fig. 6 is a schematic structural diagram of an electronic device according to embodiment 3 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention.
Example 1
Referring to fig. 1, the method for identifying an anatomical structure according to this embodiment includes:
s101, carrying out anatomical structure recognition on a target medical image to obtain an initial anatomical structure category;
s102, carrying out part identification on the target medical image to obtain a part type;
s103, determining the candidate anatomical structure type corresponding to the part type;
and S104, correcting the initial anatomical structure type by using the candidate anatomical structure type to obtain a final anatomical structure type.
In an embodiment, the target medical image to be identified may be a medical image acquired by a single-modality CT (Computed Tomography) device, PET (Positron Emission Tomography) device, or MRI (Magnetic Resonance Imaging) device, or by a multi-modality PET/CT device, PET/MR device, or the like. In an embodiment, the anatomical structure category corresponds to an organ category, such as lung, heart, or stomach, and the part category can be customized according to the actual application, such as head, neck, chest, or abdomen. It should be understood that the correspondence between anatomical structure categories and part categories is relatively fixed; for example, the chest corresponds to organ categories such as lung and heart.
Specifically, in this embodiment, anatomical structure recognition and part recognition are performed on the target medical image, respectively, to obtain an initial anatomical structure type of the anatomical structure corresponding to the target medical image and a part type of the corresponding part; then determining candidate anatomical structure categories corresponding to the identified part categories according to the corresponding relation between the anatomical structure categories and the part categories; and finally, correcting the identified initial anatomical structure category by using the determined candidate anatomical structure category.
For example, suppose the initial anatomical structure category obtained by performing anatomical structure recognition on the target medical image is lung, the part category obtained by performing part recognition is chest, and the candidate anatomical structure categories corresponding to the chest include lung and heart; correcting the initial anatomical structure category with these candidate anatomical structure categories then yields the final anatomical structure category, namely lung.
Compared with performing only a single anatomical structure recognition on the target medical image, this embodiment performs dual recognition of both the anatomical structure and the part: the anatomical structure recognition yields the initial anatomical structure category, while the part recognition yields the part category used to correct that initial category. This dual recognition helps improve the accuracy and robustness of the final anatomical structure category obtained by the identification method of this embodiment.
In this embodiment, step S101 and step S102 may be executed simultaneously or sequentially; this embodiment does not limit the order. In addition, in this embodiment, the final anatomical structure category may be obtained by taking the intersection of the candidate anatomical structure categories and the initial anatomical structure categories; in this case, step S104 may specifically include taking the intersection of the candidate anatomical structure categories and the initial anatomical structure categories to obtain the final anatomical structure category.
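For illustration, a minimal sketch of this correction-by-intersection step might look as follows (Python; the category names are hypothetical, not values from the patent):

```python
# Minimal sketch of step S104: the final anatomical structure categories are
# the intersection of the initial categories from anatomical structure
# recognition and the candidates derived from the recognized part category.

def correct_categories(initial_categories, candidate_categories):
    """Correct the initial anatomical structure categories (step S104)."""
    return set(initial_categories) & set(candidate_categories)

# Example: the detector reports lung and kidney, but the recognized part is
# the chest, whose candidates are lung and heart; only lung survives.
final = correct_categories({"lung", "kidney"}, {"lung", "heart"})
assert final == {"lung"}
```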
In this embodiment, step S102 may specifically include:
and S1021, inputting the target medical image into the part identification model to obtain the target part label range.
In this embodiment, the part recognition model is obtained by training on medical images in which each layer image is labeled with a part label.
Specifically, in the present embodiment, a plurality of anatomical key points may be used as marker points for dividing the human body structure, and the marker points are numbered. For example, when the number of marker points is N+1 (where N is an integer), the marker points may be denoted L_0, L_1, ..., L_{N-1}, L_N, dividing the human body structure into N part categories. The number of marker points can be set in a user-defined manner according to the actual application.
On this basis, a template person is simulated from a large amount of data and is annotated with the N+1 preset marker points, so as to acquire the anatomical structure proportions (P_0 : ... : P_{N-1}) between the N+1 marker points, where P_0 : ... : P_{N-1} = (L_1 - L_0) : ... : (L_N - L_{N-1}). Part labels (T_0, T_1, ..., T_{N-1}, T_N) corresponding to the N+1 marker points are then established based on these anatomical structure proportions, where (T_1 - T_0) : ... : (T_N - T_{N-1}) = P_0 : ... : P_{N-1}.
Further, when the medical image comprises multiple layers of images, the part label corresponding to each layer image is obtained by piecewise-linear distribution of the part labels corresponding to the adjacent marker points. For example, if adjacent marker points L_x and L_{x+1} span (K+1) layers of images, the k-th layer image has the corresponding part label T_x + (T_{x+1} - T_x) × (k/K), where k = 0, ..., K.
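As a non-limiting illustration, this piecewise-linear label distribution can be sketched as follows (the marker labels used here are hypothetical values):

```python
# Minimal sketch of the piecewise-linear distribution of part labels between
# two adjacent marker points L_x and L_{x+1}: the k-th of the (K+1) layers
# receives the label T_x + (T_{x+1} - T_x) * (k / K), for k = 0, ..., K.

def layer_labels_between_markers(t_x, t_x1, num_layers):
    """Part labels for the (K+1) layer images spanning two adjacent markers."""
    K = num_layers - 1
    return [t_x + (t_x1 - t_x) * k / K for k in range(num_layers)]

# Example with hypothetical marker labels 2.0 and 3.0 and 5 layers:
print(layer_labels_between_markers(2.0, 3.0, 5))  # [2.0, 2.25, 2.5, 2.75, 3.0]
```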
Thus, in the present embodiment, the part labels corresponding to the marker points are fixed across different medical images, and there is a first correspondence between part labels and part categories. On this basis, a part label can be annotated for each layer image of the medical images to be used for training, and the part recognition model of this embodiment is then obtained by training on these medical images in which each layer image is labeled with a part label. The part recognition model preferably adopts a regression model; the loss functions adopted during training may include, but are not limited to, MSE, Huber, Log-Cosh, and the like. In addition, the part recognition model may be established by a conventional method or a deep learning method.
In this embodiment, the input of the part recognition model is the target medical image, and the output is the target part label corresponding to the target medical image. Further, in this embodiment, the input of the part recognition model is a single-layer image included in the target medical image, and the output is a target part tag corresponding to the single-layer image, that is, the input of the part recognition model may be a 2D medical image or a 2.5D medical image.
In this embodiment, when the target medical image (e.g., a 2D image or a 2.5D image) is a single-layer image, the target part label obtained by inputting the single-layer target medical image into the part recognition model serves as the target part label range corresponding to the target medical image. When the target medical image (e.g., a 3D image) comprises multiple layers of images, either each layer image of the target medical image may be input into the part recognition model to obtain the target part label corresponding to each layer image and thereby the target part label range corresponding to the target medical image, or the target part label range may be obtained from the top-layer part label corresponding to the top-layer image and the bottom-layer part label corresponding to the bottom-layer image of the target medical image.
Specifically, in one aspect, referring to fig. 2, step S1021 may include:
s1021-11, inputting a top-level image in the target medical image into a part recognition model to obtain a top-level part label;
s1021-12, inputting a bottom layer image in the target medical image into a part recognition model to obtain a bottom layer part label;
s1021-13, obtaining a target part label range according to the top layer part label and the bottom layer part label.
Specifically, the top-layer image located at the first layer of the multi-layer image is input into the part recognition model to obtain a top-layer part label T_top, and the bottom-layer image located at the last layer of the multi-layer image is input into the part recognition model to obtain a bottom-layer part label T_bottom; the target medical image then corresponds to the target part label range [T_top, T_bottom].
On the other hand, referring to fig. 3, step S1021 may include:
s1021-21, extracting a plurality of random layer images from the target medical image;
s1021-22, respectively inputting the random layer images into the part identification model to obtain a random layer part label of each random layer image;
s1021-23, fitting the random layer part labels of the random layer images and the layer positions of the random layer images in the target medical image to obtain a third corresponding relation between the part labels and the layer positions of the same layer image;
s1021-24, acquiring a top layer part label of a top layer image and a bottom layer part label of a bottom layer image in the target medical image according to the third corresponding relation;
and S1021-25, obtaining a target part label range according to the top layer part label and the bottom layer part label.
Specifically, for each random layer image the layer position in the target medical image is known. Each random layer image is input into the part recognition model to obtain a random layer part label, and the list of random layer part labels corresponding to all the random layer images is fitted against the list of layer positions to obtain a third correspondence between the part label and the layer position of a same layer image. Based on this third correspondence, the top-layer part label T_top corresponding to the top-layer image and the bottom-layer part label T_bottom corresponding to the bottom-layer image are obtained, and on this basis the target medical image corresponds to the target part label range [T_top, T_bottom].
In this embodiment, the random layer part labels of the random layer sub-images and the layer positions of the random layer sub-images in the target medical image may be linearly fitted or continuously piecewise linearly fitted according to practical applications, and this embodiment is not intended to limit this.
Compared with an implementation mode that the top-layer image and the bottom-layer image are directly input into the part recognition model to respectively obtain the top-layer part label and the bottom-layer part label, the mode of indirectly obtaining the top-layer part label and the bottom-layer part label has better robustness, so that the obtained target part label range is more accurate.
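A minimal sketch of this random-layer fitting route (steps S1021-21 to S1021-25) is given below, assuming a NumPy environment and a hypothetical part_model callable that maps a single layer image to a part label:

```python
import numpy as np

def label_range_by_fitting(volume, part_model, num_samples=8, rng=None):
    """volume: array of shape (num_layers, H, W); returns (T_top, T_bottom)."""
    rng = rng or np.random.default_rng()
    num_layers = volume.shape[0]
    # S1021-21/22: pick random layers and predict their part labels.
    positions = np.sort(rng.choice(num_layers, size=num_samples, replace=False))
    labels = np.array([part_model(volume[z]) for z in positions])
    # S1021-23: fit part label against layer position (a continuous
    # piecewise-linear fit could be substituted, as the text allows).
    slope, intercept = np.polyfit(positions, labels, deg=1)
    # S1021-24/25: read off the labels of the top and bottom layers.
    t_top = intercept                           # layer position 0
    t_bottom = slope * (num_layers - 1) + intercept
    return t_top, t_bottom
```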
In the present embodiment, the number of anatomical structure classes to be identified is set to M (where M is a positive integer). For each anatomical structure class O_m, the corresponding part label range [T_i, T_j] can be determined from the simulated template person, where m = 0, ..., M-1, T_i characterizes the part label of the start of the anatomical structure class, and T_j characterizes the part label of the end of the anatomical structure class. A preset dictionary of the part label ranges of the anatomical structure classes is then established based on this second correspondence between part labels and anatomical structure classes. On this basis, in this embodiment, step S103 may specifically include looking up the preset dictionary with the target part label range to obtain the candidate anatomical structure categories.
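A minimal sketch of this dictionary lookup in step S103 follows; the dictionary entries and label ranges below are illustrative assumptions, not values from the patent:

```python
# Hypothetical second correspondence: anatomical class O_m -> part label
# range [T_i, T_j]. A class becomes a candidate when its range overlaps the
# target part label range [T_top, T_bottom].
PRESET_DICT = {
    "lung":  (2.0, 3.0),
    "heart": (2.3, 2.9),
    "liver": (3.1, 3.8),
}

def candidate_classes(target_range, preset=PRESET_DICT):
    t_top, t_bottom = target_range
    return {name for name, (t_i, t_j) in preset.items()
            if t_i <= t_bottom and t_j >= t_top}

print(candidate_classes((1.8, 3.0)))  # lung and heart (set order may vary)
```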
Based on this, the present embodiment realizes dual identification of the initial anatomical structure type and the part type corresponding to the target medical image, and corrects the initial anatomical structure type based on the candidate anatomical structure type corresponding to the part type, which is beneficial to improving the accuracy and robustness of the final anatomical structure type obtained by the identification method of the present embodiment.
Further, in this embodiment, in addition to identifying the anatomical structure category in the target medical image, the anatomical structure pointed by the identified anatomical structure category may also be located, for example, the anatomical structure may be located by using a labeling box (bounding box), and specifically, step S101 in this embodiment may include a step of inputting the target medical image into the anatomical structure identification model to obtain an initial anatomical structure category and an initial position range corresponding to the initial anatomical structure category.
In this embodiment, the anatomical structure identification model is obtained by training a medical image labeled with an anatomical structure labeling frame, where the labeling information of the anatomical structure labeling frame includes the anatomical structure category and the position range of the anatomical structure labeling frame in the medical image. In the present embodiment, the input of the anatomical structure recognition model is a target medical image, and the output is an initial anatomical structure category corresponding to the target medical image and an initial anatomical structure range corresponding to the initial anatomical structure category, wherein the input of the anatomical structure recognition model may be a 2D medical image or a 3D medical image.
In this embodiment, the anatomical structure recognition model preferably adopts a target detection model that performs a classification task and a position regression task. During training of the anatomical structure recognition model, the loss function adopted for the classification task may include, but is not limited to, Cross Entropy, Focal loss, and the like, and the loss function adopted for the position regression task may include, but is not limited to, MAE, MSE, IoU, and the like. In addition, the anatomical structure recognition model may be established by a conventional method or a deep learning method. It should be understood that the part recognition model and the anatomical structure recognition model in the present embodiment are trained separately.
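For illustration, a combined training objective for such a detection model could be sketched as below (PyTorch assumed; the choice of cross-entropy plus MSE is only one of the combinations listed above):

```python
import torch.nn.functional as F

def detection_loss(class_logits, class_targets, box_preds, box_targets,
                   reg_weight=1.0):
    """Classification loss plus position-regression loss for one batch."""
    cls_loss = F.cross_entropy(class_logits, class_targets)  # classification task
    reg_loss = F.mse_loss(box_preds, box_targets)             # position regression task
    return cls_loss + reg_weight * reg_loss
```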
In addition, each layer of image in the target medical image corresponds to a part tag, and further, based on the target part tag range corresponding to the final anatomical structure type, the layer position range of the anatomical structure pointed by the final anatomical structure type in the target medical image can be determined, and further, the position range of the anatomical structure pointed by the final anatomical structure type in the target medical image can be determined. Based on this, the embodiment realizes dual positioning of the anatomical structure in the target medical image, and is beneficial to improving the accuracy and robustness of the final position range corresponding to the final anatomical structure category.
Specifically, referring to fig. 4, the present embodiment may further include, after step S1021:
s105, determining a layer position range corresponding to the final anatomical structure type in the target medical image by using the target part label range corresponding to the final anatomical structure type;
s106, determining a candidate position range corresponding to the final anatomical structure type in the target medical image by utilizing the layer position range corresponding to the final anatomical structure type in the target medical image;
and S107, solving intersection of the candidate position range corresponding to the final anatomical structure type and the initial position range to obtain a final position range corresponding to the final anatomical structure type.
In this embodiment, when the target medical image is a single-layer image, the candidate position range determined by the target part label is the single-layer target medical image itself. When the target medical image comprises multiple layers of images, the target medical image corresponds to the target part label range [R_top, R_bottom] and to the layer position range [D_0, D_total] (where bottom - top = total); the final anatomical structure class further corresponds to a target part label range [R_origin, R_end] (where [R_origin, R_end] ⊆ [R_top, R_bottom]), which in turn corresponds to a layer position range [D_start, D_finish] in the target medical image (where [D_start, D_finish] ⊆ [D_0, D_total] and end - origin = finish - start). From this, the candidate position range corresponding to the final anatomical structure class can be obtained, and the initial anatomical structure position range output by the anatomical structure recognition model can then be corrected.
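A minimal sketch of this mapping and correction (steps S105 to S107), under the linear correspondence between part labels and layer positions described above, might be (names are illustrative assumptions):

```python
def label_to_layer(label, r_top, r_bottom, d_total):
    """Map a part label to a layer position, taking D_0 = 0 (S105/S106)."""
    return (label - r_top) / (r_bottom - r_top) * d_total

def final_position_range(label_range, full_label_range, d_total, initial_range):
    """Intersect the candidate layer range with the detector's range (S107)."""
    r_origin, r_end = label_range          # part label range of the final class
    r_top, r_bottom = full_label_range     # part label range of the whole image
    d_start = label_to_layer(r_origin, r_top, r_bottom, d_total)
    d_finish = label_to_layer(r_end, r_top, r_bottom, d_total)
    lo = max(d_start, initial_range[0])
    hi = min(d_finish, initial_range[1])
    return lo, hi
```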
Referring to fig. 4, the present embodiment further includes, after step S107:
s108, filtering the target medical image according to the final position range corresponding to the final anatomical structure type to obtain a target anatomical structure image corresponding to the final anatomical structure type;
and S109, processing the target anatomical structure image by utilizing an algorithm corresponding to the final anatomical structure type to obtain a processing result.
Specifically, this embodiment may determine a target anatomical structure category from the final anatomical structure categories, then accurately crop the target anatomical structure image out of the target medical image based on the final position range corresponding to that target anatomical structure category, removing image data irrelevant to the invoked algorithm as far as possible, so as to support accurate invocation of the subsequent algorithm.
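As a closing illustration, steps S108 and S109 can be sketched as follows; the algorithm registry and the downstream algorithm are hypothetical placeholders:

```python
import numpy as np

def extract_and_process(volume, final_range, category, algorithm_registry):
    start, finish = (int(round(v)) for v in final_range)
    target_image = volume[start:finish + 1]        # S108: filter by final position range
    algorithm = algorithm_registry[category]       # e.g. a category-specific routine
    return algorithm(target_image)                 # S109: run the matched algorithm

# Usage with a dummy algorithm:
registry = {"lung": lambda img: {"voxels": int(img.size)}}
result = extract_and_process(np.zeros((120, 64, 64)), (30.0, 80.0), "lung", registry)
```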
Example 2
The present embodiment provides an anatomical structure recognition system, and referring to fig. 5, the anatomical structure recognition system of the present embodiment includes:
the first identification module 101 is configured to perform anatomical structure identification on a target medical image to obtain an initial anatomical structure category;
the second identification module 102 is configured to perform part identification on the target medical image to obtain a part type;
a first determining module 103, configured to determine a candidate anatomical structure type corresponding to the part type;
a first correction module 104, configured to correct the initial anatomical structure category with the candidate anatomical structure category to obtain a final anatomical structure category.
In an embodiment, the target medical image to be identified may be a medical image acquired by a single-modality CT (Computed Tomography) device, PET (Positron Emission Tomography) device, or MRI (Magnetic Resonance Imaging) device, or by a multi-modality PET/CT device, PET/MR device, or the like. In an embodiment, the anatomical structure category corresponds to an organ category, such as lung, heart, or stomach, and the part category can be customized according to the actual application, such as head, neck, chest, or abdomen. It should be understood that the correspondence between anatomical structure categories and part categories is relatively fixed; for example, the chest corresponds to organ categories such as lung and heart.
Specifically, in this embodiment, anatomical structure recognition and part recognition are performed on the target medical image, respectively, to obtain an initial anatomical structure type of the anatomical structure corresponding to the target medical image and a part type of the corresponding part; then determining candidate anatomical structure categories corresponding to the identified part categories according to the corresponding relation between the anatomical structure categories and the part categories; and finally, correcting the identified initial anatomical structure category by using the determined candidate anatomical structure category.
For example, suppose the initial anatomical structure category obtained by performing anatomical structure recognition on the target medical image is lung, the part category obtained by performing part recognition is chest, and the candidate anatomical structure categories corresponding to the chest include lung and heart; correcting the initial anatomical structure category with these candidate anatomical structure categories then yields the final anatomical structure category, namely lung.
Compared with performing only a single anatomical structure recognition on the target medical image, this embodiment performs dual recognition of both the anatomical structure and the part: the anatomical structure recognition yields the initial anatomical structure category, while the part recognition yields the part category used to correct that initial category. This dual recognition helps improve the accuracy and robustness of the final anatomical structure category obtained by the identification system of this embodiment.
In this embodiment, the first identification module 101 and the second identification module 102 may be invoked simultaneously or sequentially; this embodiment does not limit the order. In addition, in this embodiment, the final anatomical structure category may be obtained by taking the intersection of the candidate anatomical structure categories and the initial anatomical structure categories; in this case, the first correction module 104 may be specifically configured to take the intersection of the candidate anatomical structure categories and the initial anatomical structure categories to obtain the final anatomical structure category.
In this embodiment, the second recognition module 102 may be specifically configured to input the target medical image into the part recognition model, so as to obtain a target part tag range.
In this embodiment, the part recognition model is obtained by training on medical images in which each layer image is labeled with a part label.
Specifically, in this embodiment, a plurality of anatomical key points may be used as marker points for dividing the human body structure, and the marker points are numbered. For example, when the number of marker points is N+1 (where N is an integer), the marker points may be denoted L_0, L_1, ..., L_{N-1}, L_N, dividing the human body structure into N part categories. The number of marker points can be set in a user-defined manner according to the actual application.
On this basis, a template person is simulated from a large amount of data and is annotated with the N+1 preset marker points, so as to acquire the anatomical structure proportions (P_0 : ... : P_{N-1}) between the N+1 marker points, where P_0 : ... : P_{N-1} = (L_1 - L_0) : ... : (L_N - L_{N-1}). Part labels (T_0, T_1, ..., T_{N-1}, T_N) corresponding to the N+1 marker points are then established based on these anatomical structure proportions, where (T_1 - T_0) : ... : (T_N - T_{N-1}) = P_0 : ... : P_{N-1}.
Further, when the medical image comprises multiple layers of images, the part label corresponding to each layer image is obtained by piecewise-linear distribution of the part labels corresponding to the adjacent marker points. For example, if adjacent marker points L_x and L_{x+1} span (K+1) layers of images, the k-th layer image has the corresponding part label T_x + (T_{x+1} - T_x) × (k/K), where k = 0, ..., K.
Thus, in the present embodiment, the part labels corresponding to the marker points are fixed across different medical images, and there is a first correspondence between part labels and part categories. On this basis, a part label can be annotated for each layer image of the medical images to be used for training, and the part recognition model of this embodiment is then obtained by training on these medical images in which each layer image is labeled with a part label. The part recognition model preferably adopts a regression model; the loss functions adopted during training may include, but are not limited to, MSE, Huber, Log-Cosh, and the like. In addition, the part recognition model may be established by a conventional method or a deep learning method.
In this embodiment, the input of the part recognition model is the target medical image, and the output is the target part label corresponding to the target medical image. Further, in this embodiment, the input of the part recognition model is a single-layer image included in the target medical image, and the output is a target part tag corresponding to the single-layer image, that is, the input of the part recognition model may be a 2D medical image or a 2.5D medical image.
In this embodiment, when the target medical image (e.g., a 2D image or a 2.5D image) is a single-layer image, the target part label obtained by inputting the single-layer target medical image into the part recognition model serves as the target part label range corresponding to the target medical image. When the target medical image (e.g., a 3D image) comprises multiple layers of images, either each layer image of the target medical image may be input into the part recognition model to obtain the target part label corresponding to each layer image and thereby the target part label range corresponding to the target medical image, or the target part label range may be obtained from the top-layer part label corresponding to the top-layer image and the bottom-layer part label corresponding to the bottom-layer image of the target medical image.
Specifically, in one aspect, the second identification module 102 may include:
the first identification unit is used for inputting a top-level image in the target medical image into the part identification model to obtain a top-level part label;
the second identification unit is used for inputting the bottom layer image in the target medical image into the part identification model to obtain a bottom layer part label;
and the first determining unit is used for obtaining the label range of the target part according to the top layer part label and the bottom layer part label.
Specifically, the top-layer image located at the first layer of the multi-layer image is input into the part recognition model to obtain a top-layer part label T_top, and the bottom-layer image located at the last layer of the multi-layer image is input into the part recognition model to obtain a bottom-layer part label T_bottom; the target medical image then corresponds to the target part label range [T_top, T_bottom].
In another aspect, the second identification module 102 may include:
the extraction unit is used for extracting a plurality of random layer images from the target medical image;
the third identification unit is used for respectively inputting the random layer images into the part identification model to obtain a random layer part label of each random layer image;
the fitting unit is used for fitting the random layer part labels of the random layer images and the layer positions of the random layer images in the target medical image to obtain a third corresponding relation between the part labels and the layer positions of the same layer image;
the second determining unit is used for acquiring a top layer part label of a top layer image and a bottom layer part label of a bottom layer image in the target medical image according to the third corresponding relation;
and the third determining unit is used for obtaining the label range of the target part according to the top layer part label and the bottom layer part label.
Specifically, for each random layer image the layer position in the target medical image is known. Each random layer image is input into the part recognition model to obtain a random layer part label, and the list of random layer part labels corresponding to all the random layer images is fitted against the list of layer positions to obtain a third correspondence between the part label and the layer position of a same layer image. Based on this third correspondence, the top-layer part label T_top corresponding to the top-layer image and the bottom-layer part label T_bottom corresponding to the bottom-layer image are obtained, and on this basis the target medical image corresponds to the target part label range [T_top, T_bottom].
In this embodiment, the random layer part labels of the random layer sub-images and the layer positions of the random layer sub-images in the target medical image may be linearly fitted or continuously piecewise linearly fitted according to practical applications, and this embodiment is not intended to limit this.
Compared with an implementation mode that the top-layer image and the bottom-layer image are directly input into the part recognition model to respectively obtain the top-layer part label and the bottom-layer part label, the mode of indirectly obtaining the top-layer part label and the bottom-layer part label has better robustness, so that the obtained target part label range is more accurate.
In the present embodiment, the number of anatomical structure classes to be identified is set to M (where M is a positive integer). For each anatomical structure class O_m, the corresponding part label range [T_i, T_j] can be determined from the simulated template person, where m = 0, ..., M-1, T_i characterizes the part label of the start of the anatomical structure class, and T_j characterizes the part label of the end of the anatomical structure class. A preset dictionary of the part label ranges of the anatomical structure classes is then established based on this second correspondence between part labels and anatomical structure classes. On this basis, in this embodiment, the first determining module 103 may be specifically configured to look up the preset dictionary with the target part label range to obtain the candidate anatomical structure categories.
Based on this, the present embodiment realizes dual recognition of the initial anatomical structure category and the part category corresponding to the target medical image, and corrects the initial anatomical structure category based on the candidate anatomical structure category corresponding to the part category, which is beneficial to improving the accuracy and robustness of the final anatomical structure category obtained by the recognition system of the present embodiment.
Further, in this embodiment, in addition to identifying the anatomical structure type in the target medical image, the anatomical structure pointed by the identified anatomical structure type may also be located, for example, the anatomical structure may be located by using a labeling box (bounding box), and specifically, the first identification module 101 in this embodiment may be specifically configured to input the target medical image into an anatomical structure identification model, so as to obtain an initial anatomical structure type and an initial position range corresponding to the initial anatomical structure type.
In this embodiment, the anatomical structure identification model is obtained by training a medical image labeled with an anatomical structure labeling frame, where the labeling information of the anatomical structure labeling frame includes the anatomical structure category and the position range of the anatomical structure labeling frame in the medical image. In the present embodiment, the input of the anatomical structure recognition model is a target medical image, and the output is an initial anatomical structure category corresponding to the target medical image and an initial anatomical structure range corresponding to the initial anatomical structure category, wherein the input of the anatomical structure recognition model may be a 2D medical image or a 3D medical image.
In this embodiment, the anatomical structure recognition model preferably adopts a target detection model that performs a classification task and a position regression task. During training of the anatomical structure recognition model, the loss function adopted for the classification task may include, but is not limited to, Cross Entropy, Focal loss, and the like, and the loss function adopted for the position regression task may include, but is not limited to, MAE, MSE, IoU, and the like. In addition, the anatomical structure recognition model may be established by a conventional method or a deep learning method. It should be understood that the part recognition model and the anatomical structure recognition model in the present embodiment are trained separately.
In addition, each layer of image in the target medical image corresponds to a part tag, and further, based on the target part tag range corresponding to the final anatomical structure type, the layer position range of the anatomical structure pointed by the final anatomical structure type in the target medical image can be determined, and further, the position range of the anatomical structure pointed by the final anatomical structure type in the target medical image can be determined. Based on this, the embodiment realizes dual positioning of the anatomical structure in the target medical image, and is beneficial to improving the accuracy and robustness of the final position range corresponding to the final anatomical structure category.
Specifically, referring to fig. 5, the identification system of the present embodiment may further include:
a second determining module 105, configured to determine a layer position range of the final anatomical structure category in the target medical image by using the target region tag range corresponding to the final anatomical structure category;
the third determining module 106 determines a candidate position range corresponding to the final anatomical structure category in the target medical image by using the layer position range corresponding to the final anatomical structure category in the target medical image;
and the second correction module 107 is configured to find an intersection of the candidate position range corresponding to the final anatomical structure category and the initial position range, so as to obtain a final position range corresponding to the final anatomical structure category.
In this embodiment, when the target medical image is a single-layer image, the candidate position range determined by the target part label is the single-layer target medical image itself. When the target medical image comprises multiple layers of images, the target medical image corresponds to the target part label range [R_top, R_bottom] and to the layer position range [D_0, D_total] (where bottom - top = total); the final anatomical structure class further corresponds to a target part label range [R_origin, R_end] (where [R_origin, R_end] ⊆ [R_top, R_bottom]), which in turn corresponds to a layer position range [D_start, D_finish] in the target medical image (where [D_start, D_finish] ⊆ [D_0, D_total] and end - origin = finish - start). From this, the candidate position range corresponding to the final anatomical structure class can be obtained, and the initial anatomical structure position range output by the anatomical structure recognition model can then be corrected.
Referring to fig. 5, the identification system of the present embodiment further includes:
a filtering module 108, configured to filter the target medical image according to the final position range corresponding to the final anatomical structure category, so as to obtain a target anatomical structure image corresponding to the final anatomical structure category;
and the processing module 109 is configured to process the target anatomical structure image by using an algorithm corresponding to the final anatomical structure category to obtain a processing result.
Specifically, this embodiment may determine a target anatomical structure category from the final anatomical structure categories, then accurately crop the target anatomical structure image out of the target medical image based on the final position range corresponding to that target anatomical structure category, removing image data irrelevant to the invoked algorithm as far as possible, so as to support accurate invocation of the subsequent algorithm.
Example 3
The present embodiment provides an electronic device, which may be represented in the form of a computing device (for example, may be a server device), including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor may implement the anatomical structure identification method provided in embodiment 1 when executing the computer program.
Fig. 6 shows a schematic diagram of a hardware structure of the present embodiment, and as shown in fig. 6, the electronic device 9 specifically includes:
at least one processor 91, at least one memory 92, and a bus 93 for connecting the various system components (including the processor 91 and the memory 92), wherein:
the bus 93 includes a data bus, an address bus, and a control bus.
Memory 92 includes volatile memory, such as random access memory (RAM) 921 and/or cache memory 922, and may further include read-only memory (ROM) 923.
Memory 92 also includes a program/utility 925 having a set (at least one) of program modules 924, such program modules 924 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The processor 91 executes various functional applications and data processing, such as the anatomical structure recognition method provided in embodiment 1 of the present invention, by executing the computer program stored in the memory 92.
The electronic device 9 may further communicate with one or more external devices 94 (e.g., a keyboard, a pointing device, etc.). Such communication may be through an input/output (I/O) interface 95. Also, the electronic device 9 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 96. The network adapter 96 communicates with the other modules of the electronic device 9 via the bus 93. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 9, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, and data backup storage systems, etc.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the application, the features and functionality of two or more of the units/modules described above may be embodied in a single unit/module; conversely, the features and functions of one unit/module described above may be further divided and embodied by a plurality of units/modules.
Example 4
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the anatomical structure identification method provided in embodiment 1.
More specific examples of the readable storage medium include, but are not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible embodiment, the invention can also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps of implementing the anatomical structure recognition method according to embodiment 1, when said program product is run on said terminal device.
The program code for carrying out the invention may be written in any combination of one or more programming languages, and may be executed entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.

Claims (10)

1. A method of identifying an anatomical structure, comprising:
carrying out anatomical structure recognition on the target medical image to obtain an initial anatomical structure category;
carrying out part identification on the target medical image to obtain a part type;
determining a candidate anatomical structure type corresponding to the part type;
and correcting the initial anatomical structure category by using the candidate anatomical structure category to obtain a final anatomical structure category.
2. The method for identifying an anatomical structure according to claim 1, wherein the step of correcting the initial anatomical structure class using the candidate anatomical structure class comprises:
and solving the intersection of the candidate anatomical structure category and the initial anatomical structure category to obtain a final anatomical structure category.
3. The method for recognizing an anatomical structure according to claim 1, wherein the step of performing the part recognition on the target medical image to obtain the part category includes:
inputting the target medical image into a part recognition model to obtain a target part label range, wherein the part recognition model is obtained by utilizing medical image training of each layer of image labeled with a part label;
the step of determining the candidate anatomical structure class corresponding to the part class comprises:
searching a preset dictionary by using the target part label range to obtain a candidate anatomical structure category;
the preset dictionary comprises second corresponding relations between the part labels and the anatomical structure types.
4. The method for identifying an anatomical structure according to claim 3, wherein the step of performing anatomical structure identification on the target medical image to obtain an initial anatomical structure class comprises:
inputting the target medical image into an anatomical structure recognition model to obtain an initial anatomical structure type and an initial position range corresponding to the initial anatomical structure type, wherein the anatomical structure recognition model is obtained by training a medical image marked with an anatomical structure marking frame, and marking information of the anatomical structure marking frame comprises the anatomical structure type and the position range of the anatomical structure marking frame in the medical image;
after the step of obtaining the target site label range, the method further comprises the following steps:
determining a layer position range corresponding to the final anatomical structure type in the target medical image by using a target part label range corresponding to the final anatomical structure type;
determining a corresponding candidate position range of the final anatomical structure category in the target medical image by using the corresponding layer position range of the final anatomical structure category in the target medical image;
and solving the intersection of the candidate position range corresponding to the final anatomical structure category and the initial position range to obtain the final position range corresponding to the final anatomical structure category.
5. The method for identifying an anatomical structure according to claim 3, wherein the target medical image comprises a multi-layer image, and the step of inputting the target medical image into the part recognition model to obtain the target part label range comprises:
inputting a top-layer image of the target medical image into the part recognition model to obtain a top-layer part label;
inputting a bottom-layer image of the target medical image into the part recognition model to obtain a bottom-layer part label;
obtaining the target part label range from the top-layer part label and the bottom-layer part label.
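
A minimal sketch of claim 5, assuming the volume is an ordered sequence of layer images and that part_model returns a numeric part label for a single layer; both names are placeholders.

```python
# Classify only the first and last layers and span the label range between them.
def label_range_from_endpoints(volume, part_model):
    """volume: sequence of layer images ordered from top to bottom."""
    top_label = part_model(volume[0])       # part label of the top-layer image
    bottom_label = part_model(volume[-1])   # part label of the bottom-layer image
    return (min(top_label, bottom_label), max(top_label, bottom_label))
```
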
6. The method for identifying an anatomical structure according to claim 3, wherein the target medical image comprises a multi-layer image, and the step of inputting the target medical image into the part recognition model to obtain the target part label range comprises:
extracting a plurality of random layer images from the target medical image;
inputting the random layer images into the part recognition model respectively to obtain a random layer part label of each random layer image;
fitting the random layer part labels of the plurality of random layer images and the layer positions of the plurality of random layer images in the target medical image to obtain a third correspondence between the part label and the layer position of a same layer image;
obtaining, according to the third correspondence, a top-layer part label of the top-layer image and a bottom-layer part label of the bottom-layer image of the target medical image;
obtaining the target part label range from the top-layer part label and the bottom-layer part label.
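
A minimal sketch of claim 6, assuming numeric part labels, a volume with at least two layers, and a single linear fit (claim 7 also allows a continuous piecewise linear fit); part_model, the sample count, and the seed are placeholders.

```python
import random
import numpy as np

def label_range_from_random_layers(volume, part_model, n_samples=8, seed=0):
    """Fit part label vs. layer index on a few random layers, then read off the
    fitted labels of the top and bottom layers."""
    rng = random.Random(seed)
    idx = sorted(rng.sample(range(len(volume)), k=min(n_samples, len(volume))))
    labels = [part_model(volume[i]) for i in idx]
    slope, intercept = np.polyfit(idx, labels, deg=1)   # label ~ slope * index + intercept
    top = intercept                                     # fitted label at index 0 (top layer)
    bottom = slope * (len(volume) - 1) + intercept      # fitted label at the bottom layer
    return (min(top, bottom), max(top, bottom))
```
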
7. The method for identifying an anatomical structure according to claim 6, wherein the step of fitting the random layer part labels of the plurality of random layer images and the layer positions of the plurality of random layer images in the target medical image comprises:
performing a linear fit or a continuous piecewise linear fit on the random layer part labels of the plurality of random layer images and the layer positions of the plurality of random layer images in the target medical image.
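
By way of illustration of the two fitting options in claim 7: a global least-squares line, and a continuous piecewise linear curve (shown here simply as linear interpolation through the sampled points). The sample values are invented.

```python
import numpy as np

layer_idx = np.array([3, 20, 45, 80, 110])           # sampled layer positions (illustrative)
layer_lbl = np.array([5.0, 6.1, 8.0, 10.2, 12.1])    # predicted part labels at those positions

# (a) single linear fit
slope, intercept = np.polyfit(layer_idx, layer_lbl, deg=1)

# (b) continuous piecewise linear fit through the sampled points
query = np.arange(0, 120)
piecewise = np.interp(query, layer_idx, layer_lbl)   # clamps outside the sampled range
```
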
8. The method for identifying an anatomical structure according to claim 4, further comprising, after the step of obtaining the final position range corresponding to the final anatomical structure category:
filtering the target medical image according to the final position range corresponding to the final anatomical structure category to obtain a target anatomical structure image corresponding to the final anatomical structure category.
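
A minimal sketch of claim 8's filtering step, assuming the final position range is an inclusive layer-index interval and the volume is indexable along its first axis.

```python
# Keep only the layers inside the final position range.
def crop_to_range(volume, final_range):
    start, end = final_range
    return volume[start:end + 1]
```
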
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for identifying an anatomical structure according to any one of claims 1 to 8.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method for identifying an anatomical structure according to any one of claims 1 to 8.
CN202011625657.9A 2020-12-31 2020-12-31 Anatomical structure recognition method, electronic device, and storage medium Pending CN112766314A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011625657.9A CN112766314A (en) 2020-12-31 2020-12-31 Anatomical structure recognition method, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN112766314A true CN112766314A (en) 2021-05-07

Family

ID=75698928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011625657.9A Pending CN112766314A (en) 2020-12-31 2020-12-31 Anatomical structure recognition method, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN112766314A (en)

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1906634A (en) * 2003-11-19 2007-01-31 西门子共同研究公司 System and method for detecting and matching anatomical structures using appearance and shape
US20070014451A1 (en) * 2004-11-10 2007-01-18 Jeff Dwyer Anatomical visualization and measurement system
US20070055153A1 (en) * 2005-08-31 2007-03-08 Constantine Simopoulos Medical diagnostic imaging optimization based on anatomy recognition
US20100054525A1 (en) * 2008-08-27 2010-03-04 Leiguang Gong System and method for automatic recognition and labeling of anatomical structures and vessels in medical imaging scans
TW201019905A (en) * 2008-08-27 2010-06-01 Ibm System and method for automatic recognition and labeling of anatomical structures and vessels in medical imaging scans
CN102428469A (en) * 2009-05-19 2012-04-25 皇家飞利浦电子股份有限公司 Retrieving and viewing medical images
US20110311116A1 (en) * 2010-06-17 2011-12-22 Creighton University System and methods for anatomical structure labeling
US20170091574A1 (en) * 2014-05-16 2017-03-30 The Trustees Of The University Of Pennsylvania Applications of automatic anatomy recognition in medical tomographic imagery based on fuzzy anatomy models
US20170270663A1 (en) * 2016-03-15 2017-09-21 Matthias Hoffmann Automatic recognition of anatomical landmarks
US20180060652A1 (en) * 2016-08-31 2018-03-01 Siemens Healthcare Gmbh Unsupervised Deep Representation Learning for Fine-grained Body Part Recognition
CN110023995A (en) * 2016-11-29 2019-07-16 皇家飞利浦有限公司 Cardiac segmentation method for heart movement correction
CN109074665A (en) * 2016-12-02 2018-12-21 阿文特公司 System and method for navigating to targeted anatomic object in the program based on medical imaging
CN110914866A (en) * 2017-05-09 2020-03-24 哈特弗罗公司 System and method for anatomical segmentation in image analysis
CN108542351A (en) * 2018-01-26 2018-09-18 徐州云联医疗科技有限公司 A kind of synchronous display system of medical image faultage image and 3 D anatomical image
CN109754396A (en) * 2018-12-29 2019-05-14 上海联影智能医疗科技有限公司 Method for registering, device, computer equipment and the storage medium of image
CN109800805A (en) * 2019-01-14 2019-05-24 上海联影智能医疗科技有限公司 Image processing system and computer equipment based on artificial intelligence
CN110136103A (en) * 2019-04-24 2019-08-16 平安科技(深圳)有限公司 Medical image means of interpretation, device, computer equipment and storage medium
CN110378876A (en) * 2019-06-18 2019-10-25 平安科技(深圳)有限公司 Image recognition method, device, equipment and storage medium based on deep learning
CN110335259A (en) * 2019-06-25 2019-10-15 腾讯科技(深圳)有限公司 A kind of medical image recognition methods, device and storage medium
CN110490841A (en) * 2019-07-18 2019-11-22 上海联影智能医疗科技有限公司 Area of computer aided image analysis methods, computer equipment and storage medium
CN110689521A (en) * 2019-08-15 2020-01-14 福建自贸试验区厦门片区Manteia数据科技有限公司 Automatic identification method and system for human body part to which medical image belongs
CN111160367A (en) * 2019-12-23 2020-05-15 上海联影智能医疗科技有限公司 Image classification method and device, computer equipment and readable storage medium
CN111709485A (en) * 2020-06-19 2020-09-25 腾讯科技(深圳)有限公司 Medical image processing method and device and computer equipment
CN112102235A (en) * 2020-08-07 2020-12-18 上海联影智能医疗科技有限公司 Human body part recognition method, computer device, and storage medium
CN112037200A (en) * 2020-08-31 2020-12-04 上海交通大学 Method for automatically identifying anatomical features and reconstructing model in medical image

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344926A (en) * 2021-08-05 2021-09-03 武汉楚精灵医疗科技有限公司 Method, device, server and storage medium for recognizing biliary-pancreatic ultrasonic image
CN113344926B (en) * 2021-08-05 2021-11-02 武汉楚精灵医疗科技有限公司 Method, device, server and storage medium for recognizing biliary-pancreatic ultrasonic image

Similar Documents

Publication Publication Date Title
CN111160367B (en) Image classification method, apparatus, computer device, and readable storage medium
CN112950651B (en) Automatic delineation method of mediastinal lymph drainage area based on deep learning network
CN110059697B (en) Automatic lung nodule segmentation method based on deep learning
US8437521B2 (en) Systems and methods for automatic vertebra edge detection, segmentation and identification in 3D imaging
US8818057B2 (en) Methods and apparatus for registration of medical images
EP1895468A2 (en) Medical image processing apparatus
EP1394727A1 (en) Hierarchical component based object recognition
CN111311655B (en) Multi-mode image registration method, device, electronic equipment and storage medium
US20070269089A1 (en) Medical image part recognition apparatus and medical image part recognition program
WO2021189913A1 (en) Method and apparatus for target object segmentation in image, and electronic device and storage medium
CN112102294A (en) Training method and device for generating countermeasure network, and image registration method and device
CN112766323A (en) Image identification method and device
JPWO2020110774A1 (en) Image processing equipment, image processing methods, and programs
CN113159195A (en) Ultrasonic image classification method, system, electronic device and storage medium
CN113656706A (en) Information pushing method and device based on multi-mode deep learning model
CN108920661A (en) International Classification of Diseases labeling method, device, computer equipment and storage medium
CN112766314A (en) Anatomical structure recognition method, electronic device, and storage medium
CN114066905A (en) Medical image segmentation method, system and device based on deep learning
CN113313699A (en) X-ray chest disease classification and positioning method based on weak supervised learning and electronic equipment
CN113168914A (en) Interactive iterative image annotation
Li et al. Deformation and refined features based lesion detection on chest X-ray
CN116052176A (en) Text extraction method based on cascade multitask learning
CN115239740A (en) GT-UNet-based full-center segmentation algorithm
CN112561894B (en) Intelligent electronic medical record generation method and system for CT image
CN113177923A (en) Medical image content identification method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination