CN115049596A - Template image library generation method, target positioning device and storage medium - Google Patents
- Publication number
- CN115049596A CN115049596A CN202210602333.6A CN202210602333A CN115049596A CN 115049596 A CN115049596 A CN 115049596A CN 202210602333 A CN202210602333 A CN 202210602333A CN 115049596 A CN115049596 A CN 115049596A
- Authority
- CN
- China
- Prior art keywords
- sample
- image
- template
- images
- subset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 103
- 239000000523 sample Substances 0.000 claims description 448
- 239000013074 reference sample Substances 0.000 claims description 39
- 238000007667 floating Methods 0.000 claims description 35
- 238000004590 computer program Methods 0.000 claims description 19
- 238000012935 Averaging Methods 0.000 claims description 13
- 238000007499 fusion processing Methods 0.000 claims description 13
- 238000012545 processing Methods 0.000 claims description 13
- 238000010276 construction Methods 0.000 claims description 4
- 230000004807 localization Effects 0.000 claims description 2
- 238000004458 analytical method Methods 0.000 abstract description 9
- 238000012549 training Methods 0.000 description 16
- 230000008569 process Effects 0.000 description 15
- 210000000629 knee joint Anatomy 0.000 description 13
- 230000006870 function Effects 0.000 description 8
- 238000010586 diagram Methods 0.000 description 7
- 238000013135 deep learning Methods 0.000 description 5
- 238000002059 diagnostic imaging Methods 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 5
- 230000009466 transformation Effects 0.000 description 5
- 210000001264 anterior cruciate ligament Anatomy 0.000 description 4
- 210000001519 tissue Anatomy 0.000 description 4
- 238000004891 communication Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 210000000056 organ Anatomy 0.000 description 3
- 238000004422 calculation algorithm Methods 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 230000004927 fusion Effects 0.000 description 2
- 238000009499 grossing Methods 0.000 description 2
- 238000012216 screening Methods 0.000 description 2
- 230000002776 aggregation Effects 0.000 description 1
- 238000004220 aggregation Methods 0.000 description 1
- 210000004204 blood vessel Anatomy 0.000 description 1
- 210000004556 brain Anatomy 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 210000000038 chest Anatomy 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 210000002310 elbow joint Anatomy 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 210000004394 hip joint Anatomy 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 210000004072 lung Anatomy 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000001575 pathological effect Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000001356 surgical procedure Methods 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/0014—Biomedical image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Public Health (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Primary Health Care (AREA)
- Radiology & Medical Imaging (AREA)
- Epidemiology (AREA)
- Biomedical Technology (AREA)
- Quality & Reliability (AREA)
- Data Mining & Analysis (AREA)
- Pathology (AREA)
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application relates to a template image library generation method, a target positioning method and device, and a storage medium. The template image library generation method comprises: acquiring a plurality of sample image sets of a target part, each with different attribute information, and constructing, for each sample image set, a corresponding template image; and establishing a preset template image library of the target part from the template images corresponding to the sample image sets, the library comprising a plurality of template images of the target part corresponding to different attribute information. The embodiment of the application provides a method for generating template images of a target part, improving the practicability and operability of the template images. In addition, because one target part corresponds to a plurality of template images, the classification granularity of the template images is finer, the matching degree between a template image and an object to be detected is higher, and the accuracy of the analysis result obtained by analyzing the target part of the object to be detected is improved.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method for generating a template image library, a method for locating an object, an apparatus, a computer device, a storage medium, and a computer program product.
Background
In the medical field, with the development of medical imaging technology, medical images have become an auxiliary means by which doctors treat target parts of patients effectively. With the aid of a medical image of a target part, a doctor can quickly understand the pathological condition of the organs, tissues, blood vessels and the like at the target part, and can also determine the position of a target region or an anatomical site during surgical treatment.
In the conventional approach, doctors rely on professional knowledge and experience to analyze the medical images of patients. However, when the medical image is of poor quality, the target part is severely damaged, the individual structural variation of the target part is large, or the doctor's experience is insufficient, the doctor's analysis of the medical image is prone to inaccuracy.
On this basis, through theoretical research and practical verification, the inventors found that when analyzing a medical image of a target part, a more accurate analysis result can be obtained by referring to a standard template image of the target part, where the standard template image is a medical image of the target part in a healthy state. How to obtain an accurate template image is therefore a technical problem to be solved urgently.
Disclosure of Invention
In view of the above, it is necessary to provide a template image library generation method, a target positioning method, corresponding apparatuses, a computer device, a computer-readable storage medium and a computer program product that can improve the accuracy of locating an anatomical site.
In a first aspect, the present application provides a method for generating a template image library. The method comprises the following steps:
acquiring a plurality of sample image sets of a target part; the attribute information corresponding to each sample image set is different;
constructing, for each sample image set, a template image corresponding to that sample image set;
establishing a preset template image library of the target part according to template images corresponding to the sample image sets; the preset template image library comprises a plurality of template images of target parts corresponding to different attribute information.
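The three steps of the first aspect can be sketched as a lookup table keyed by attribute information. This minimal Python illustration assumes NumPy arrays as images and replaces the full clustering-and-fusion construction described in the later embodiments with simple averaging; the attribute names are hypothetical.

```python
import numpy as np

def build_template_library(sample_sets):
    """sample_sets: dict mapping an attribute tuple to a list of sample
    images of the target part with that attribute combination."""
    library = {}
    for attrs, images in sample_sets.items():
        # Stand-in for the patent's clustering/registration/fusion pipeline:
        # here each set's template is simply the pixel-wise mean.
        library[attrs] = np.mean(np.stack(images), axis=0)
    return library

# Hypothetical attribute keys (sex, age group, target-part category).
sets = {
    ("female", "adult", "left_knee"): [np.ones((4, 4)), 3 * np.ones((4, 4))],
    ("male", "adult", "right_knee"): [np.zeros((4, 4))],
}
lib = build_template_library(sets)
print(lib[("female", "adult", "left_knee")][0, 0])  # 2.0
```

At lookup time, an object to be detected is matched to the template whose key equals its own attribute tuple, which is what makes the finer classification granularity useful.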
In one embodiment, constructing, for each sample image set, a template image corresponding to the sample image set includes:
performing, for each sample image set, a multi-level clustering operation on the sample images in the sample image set to obtain at least one sample subset corresponding to each level of clustering operation, wherein a sample subset comprises a plurality of sample images meeting a clustering condition;
determining, for the at least one sample subset corresponding to each level of clustering operation, a first template image corresponding to each sample subset according to the sample images in that sample subset;
determining a second template image corresponding to the level clustering operation according to the first template image corresponding to each sample subset;
and determining the template image corresponding to the sample image set according to the second template image corresponding to each level of clustering operation.
In one embodiment, performing multiple levels of clustering operations on sample images in a sample image set to obtain at least one sample subset corresponding to each level of clustering operations includes:
performing first-level clustering operation on sample images in the sample image set to obtain a plurality of candidate sample subsets corresponding to the first-level clustering operation;
judging whether each candidate sample subset meets the clustering condition; the clustering condition comprises that the number of sample images in the candidate sample subset is greater than or equal to a preset threshold value;
under the condition that the candidate sample subset is judged to meet the clustering condition, taking the candidate sample subset as a sample subset corresponding to the first-level clustering operation;
and under the condition that the candidate sample subsets are judged not to meet the clustering conditions, performing second-level clustering operation on the candidate sample subsets which do not meet the clustering conditions until the clustering cutoff conditions are met, and obtaining at least one sample subset corresponding to each-level clustering operation.
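The recursive scheme above — cluster, keep candidate subsets that are large enough, and re-cluster the ones that are not until a cutoff is met — can be sketched as follows. The clustering step itself (a median split on mean intensity), the threshold value, and the depth cutoff are illustrative assumptions; the patent does not fix a specific clustering algorithm or cutoff.

```python
import numpy as np

MIN_SIZE = 3  # the claim's preset threshold; this value is an assumption

def split(images):
    """One level of clustering: a stand-in that splits samples into two
    groups around the median of their mean intensity."""
    means = np.array([im.mean() for im in images])
    med = np.median(means)
    low = [im for im, m in zip(images, means) if m <= med]
    high = [im for im, m in zip(images, means) if m > med]
    return [group for group in (low, high) if group]

def multilevel_cluster(images, depth=0, max_depth=3):
    """Keep candidate subsets meeting the clustering condition (size >=
    MIN_SIZE); re-cluster the rest until the cutoff (max depth) is met."""
    subsets = []
    for candidate in split(images):
        if len(candidate) >= MIN_SIZE or depth + 1 >= max_depth:
            subsets.append(candidate)
        else:
            subsets.extend(multilevel_cluster(candidate, depth + 1, max_depth))
    return subsets

imgs = [np.full((2, 2), v, dtype=float) for v in (0, 1, 2, 10, 11, 12)]
subsets = multilevel_cluster(imgs)
print([len(s) for s in subsets])  # [3, 3]
```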
In one embodiment, determining a first template image corresponding to each sample subset according to the sample images in the sample subsets corresponding to the level of clustering operation includes:
inputting, for each sample subset corresponding to the level of clustering operation, each sample image in the sample subset into a preset registration model to obtain an intermediate sample image corresponding to each sample image in the sample subset;
and performing fusion processing on the intermediate sample images respectively corresponding to the sample images in the sample subset to obtain a first template image corresponding to the sample subset.
In one embodiment, the inputting each sample image in the sample subset into a preset registration model respectively to obtain an intermediate sample image corresponding to each sample image in the sample subset respectively includes:
determining a reference sample image from each sample image in the sample subset;
respectively inputting the reference sample image and each floating sample image into a preset registration model to obtain a deformation field of each floating sample image mapped to the reference sample image; the floating sample image is a sample image in the sample subset except the reference sample image;
and aiming at each floating sample image, registering the floating sample image according to the deformation field corresponding to the floating sample image to obtain an intermediate sample image corresponding to each floating sample image.
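The warping step can be illustrated with a dense deformation field. In this hypothetical sketch the field (a per-pixel displacement) is supplied directly rather than predicted by a preset registration model, and sampling is nearest-neighbour for brevity; a real pipeline would interpolate.

```python
import numpy as np

def warp(floating, field):
    """Resample a floating image through a dense deformation field.
    `field` holds, for each output pixel, the (dy, dx) displacement into
    the floating image; sampling is nearest-neighbour with edge clamping."""
    h, w = floating.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + field[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + field[1]).astype(int), 0, w - 1)
    return floating[src_y, src_x]

img = np.arange(16, dtype=float).reshape(4, 4)
# A constant field that samples one pixel to the right everywhere.
shift_right = np.stack([np.zeros((4, 4)), np.ones((4, 4))])
out = warp(img, shift_right)
print(out[0])  # [1. 2. 3. 3.]
```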
In one embodiment, determining a reference sample image from each sample image in the sample subset comprises:
respectively calculating the mean square error between a first sample image in the sample subset and a second sample image except the first sample image; wherein the first sample image is any one of the sample images in the sample subset;
summing the mean square errors to obtain a sum result of the mean square errors;
a minimum summation result is determined from the summation results, and the first sample image corresponding to the minimum summation result is used as a reference sample image.
In one embodiment, the fusing the intermediate sample images respectively corresponding to the sample images in the sample subset to determine the first template image corresponding to the sample subset includes:
and averaging the intermediate sample images respectively corresponding to the sample images in the sample subset to obtain a first template image corresponding to the sample subset.
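In this embodiment, fusion is pixel-wise averaging of the intermediate (registered) sample images, which a short sketch makes concrete:

```python
import numpy as np

def fuse(intermediate_images):
    """Pixel-wise average of the registered sample images; other fusion
    schemes would also fit the broader claim."""
    return np.mean(np.stack(intermediate_images), axis=0)

tmpl = fuse([np.zeros((2, 2)), 2 * np.ones((2, 2))])
print(tmpl)  # every pixel is 1.0
```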
In one embodiment, determining a second template image corresponding to a hierarchical clustering operation according to a first template image corresponding to each sample subset includes:
respectively inputting the first template images corresponding to the sample subsets into a preset registration model to obtain intermediate template images corresponding to the first template images;
and performing fusion processing on the intermediate template images respectively corresponding to the first template images to obtain second template images corresponding to the level clustering operation.
In one embodiment, determining the template image corresponding to the sample image set according to the second template image corresponding to each level of clustering operation includes:
respectively inputting the second template images corresponding to each level of clustering operation into a preset registration model to obtain intermediate template images corresponding to the second template images;
and performing fusion processing on the intermediate template images corresponding to the second template images respectively to obtain template images corresponding to the sample image set.
In a second aspect, the present application further provides a target positioning method. The method comprises the following steps:
acquiring a medical image of a target part of an object to be detected;
acquiring a target template image of a target part matched with the attribute information of the object to be detected from a preset template image library according to the attribute information of the object to be detected;
positioning a target to be positioned in the target part according to the medical image of the target part and the target template image of the target part to obtain the position information of the target to be positioned; wherein the preset template image library is generated by the method of the first aspect.
In one embodiment, the positioning a target to be positioned in a target portion according to a medical image of the target portion and a target template image of the target portion to obtain position information of the target to be positioned includes:
registering the medical image of the target part and the target template image of the target part to obtain a registered target medical image;
and positioning the target to be positioned in the target part according to the target medical image to obtain the position information of the target to be positioned.
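A minimal end-to-end illustration of this positioning step: align the medical image with the target template (here a brute-force integer-shift search stands in for a full registration model, an assumption for brevity), then map a landmark annotated in template space back into the medical image. The landmark annotation is itself a hypothetical prior.

```python
import numpy as np

def locate(medical_image, template, template_landmark):
    """Find the integer shift that best aligns the medical image with the
    template, then map the template-space landmark into image space."""
    best_shift, best_err = (0, 0), float("inf")
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            shifted = np.roll(np.roll(medical_image, dy, axis=0), dx, axis=1)
            err = np.mean((shifted - template) ** 2)
            if err < best_err:
                best_shift, best_err = (dy, dx), err
    ty, tx = template_landmark
    return (ty - best_shift[0], tx - best_shift[1])

template = np.zeros((8, 8)); template[3, 3] = 1.0
medical = np.zeros((8, 8)); medical[5, 4] = 1.0  # same structure, shifted
print(locate(medical, template, (3, 3)))  # (5, 4)
```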
In a third aspect, the present application further provides a device for generating a template image library. The device includes:
the acquisition module is used for acquiring a plurality of sample image sets of the target part; attribute information corresponding to each sample image set is different;
the construction module is used for respectively constructing template images corresponding to the sample image sets aiming at the sample image sets;
the establishing module is used for establishing a preset template image library of the target part according to the template images corresponding to the sample image sets; the preset template image library comprises a plurality of template images of target parts corresponding to different attribute information.
In a fourth aspect, the present application further provides a target positioning device. The device includes:
the first acquisition module is used for acquiring a medical image of a target part of an object to be detected;
the second acquisition module is used for acquiring a target template image of a target part matched with the attribute information of the object to be detected from the preset template image library according to the attribute information of the object to be detected;
the determining module is used for positioning a target to be positioned in the target part according to the medical image of the target part and the target template image of the target part to obtain the position information of the target to be positioned; wherein the preset template image library is generated by the method of the first aspect.
In a fifth aspect, the present application further provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the methods of the first and second aspects.
In a sixth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the methods of the first and second aspects.
In a seventh aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method in the first and second aspects.
According to the template image library generation method, the target positioning method and apparatus, and the storage medium described above, a plurality of sample image sets of a target part, each with different attribute information, are acquired, and a template image corresponding to each sample image set is constructed; a preset template image library of the target part is then established from the template images corresponding to the sample image sets, the library comprising a plurality of template images of the target part corresponding to different attribute information. The embodiments of the present application thus provide a method for generating template images of a target part, improving the implementability and operability of the template images. Compared with manually analyzing the medical image of the target part directly, analyzing the medical image in combination with a template image improves the accuracy of the analysis result. In addition, since the preset template image library contains a plurality of template images corresponding to different attribute information, that is, one target part corresponds to a plurality of template images, the classification granularity of the template images is finer. This not only widens the application range of the template images, because different template images of the target part can suit objects to be detected with different attributes, but also increases the matching degree between a template image and a given object to be detected, further improving the accuracy of the analysis results for the target parts of different objects to be detected.
Drawings
FIG. 1 is a diagram illustrating an exemplary environment for generating a template image library;
FIG. 2 is a flowchart illustrating a method for generating a template image library according to an embodiment;
FIG. 3 is a flowchart illustrating a method for generating a template image library according to another embodiment;
FIG. 4 is a flowchart illustrating a method for generating a template image library according to another embodiment;
FIG. 5 is a flowchart illustrating a method for generating a template image library according to another embodiment;
FIG. 6 is a flowchart illustrating a method for generating a template image library according to another embodiment;
FIG. 7 is a flowchart illustrating a method for generating a template image library according to another embodiment;
FIG. 8 is a flow diagram illustrating a method for locating an object in one embodiment;
FIG. 9 is a block diagram illustrating an exemplary apparatus for generating a template image library;
- FIG. 10 is a block diagram of a target positioning apparatus in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The template image library generation method of the embodiments of the present application may be applied to a medical imaging terminal, to a server connected to a medical imaging terminal, or to other computer devices involved in medical image processing, such as a medical image scanning device; an internal structure diagram of such a computer device is shown in fig. 1. The computer device comprises a processor, a memory and a communication interface, and optionally a display screen and an input device, connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface of the computer device communicates with external computer devices in a wired or wireless manner, where the wireless manner can be realized through Wi-Fi, a mobile cellular network, NFC (near-field communication) or other technologies. The computer program, when executed by the processor, implements a method for generating a template image library. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse. Where the computer device is a server, the server may be implemented as an independent server or as a server cluster composed of a plurality of servers.
It will be appreciated by those skilled in the art that the configuration shown in fig. 1 is a block diagram of only part of the configuration relevant to the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, as shown in fig. 2, a method for generating a template image library is provided, which is described by taking the method as an example applied to the computer device in fig. 1, and includes the following steps:
Optionally, the target part may be any anatomical part of the object to be detected, such as a knee joint, hip joint, elbow joint, chest, lung or brain. The attribute information may be attribute information of the object to be detected, including but not limited to the sex, age and ethnicity of the object to be detected and category information of the target part of the object to be detected, where the category information may be position information of the target part; for example, where the target part is a knee joint, the position information may distinguish a left knee joint from a right knee joint. Attribute information being different means that at least one item differs among the attribute information of the objects to be detected.
Optionally, the computer device may define a plurality of initially empty subsets, one for each combination of attribute information, and, for each subset, acquire a plurality of sample images matching that attribute information to form the corresponding sample image set; in this way, the computer device obtains a plurality of sample image sets of the target part with different attribute information. Optionally, when acquiring sample images corresponding to different attribute information of the target part, historical scan images of the target part may be collected from medical institutions in different regions, or scan images corresponding to different attribute information may be obtained from a large database as sample images; this application does not limit the source.
Optionally, for each sample image set, image feature extraction may be performed on each sample image in the sample image set, and a template image corresponding to the sample image set is constructed according to the extracted image features; the sample image set can be clustered by adopting a clustering mode to obtain at least one sample subset corresponding to the sample image set, each sample subset is further analyzed to obtain a candidate template image corresponding to each sample subset, and then the template image corresponding to the sample image set can be constructed according to the candidate template image corresponding to each sample subset.
The preset template image library comprises a plurality of template images of target parts corresponding to different attribute information.
In this embodiment, a plurality of sample image sets of the target portion, each with different attribute information, are acquired, and a template image is constructed for each sample image set; further, a preset template image library of the target portion is established according to the template images corresponding to the sample image sets, the library comprising a plurality of template images of the target portion corresponding to different attribute information. The embodiment of the present application thus provides a method for generating template images of a target portion, improving the practicability and operability of template images. Compared with manually and directly analyzing the medical image of the target portion, analyzing the medical image in combination with a template image can improve the accuracy of the analysis result. In addition, because the preset template image library of the target portion includes a plurality of template images corresponding to different attribute information (that is, one target portion corresponds to a plurality of template images), the classification granularity of the template images is finer. This not only broadens the applicable range of the template images, since the different template images of a target portion suit objects to be measured with different attributes, but also improves the matching degree between a template image and a given object to be measured, thereby further improving the accuracy of analysis of the target portions of different objects to be measured.
Fig. 3 is a flowchart illustrating a method for generating a template image library according to another embodiment. The present embodiment relates to an optional implementation process for constructing template images corresponding to various sample image sets respectively for the various sample image sets, and based on the foregoing embodiment, as shown in fig. 3, the foregoing step 202 includes:
The sample subset includes a plurality of sample images satisfying the clustering condition. Optionally, unsupervised clustering may be performed on the sample images in the sample image set, with a clustering condition and a clustering cutoff condition set in advance. The clustering condition may be a determination condition applied to each level's clustering result; it may relate to the number of images in a clustered subset obtained after clustering, or may be another determination condition on the clustered subset. The clustering cutoff condition may be a preset number of clustering levels, or a termination condition associated with the clustered subsets obtained after each level of clustering, for example: only one clustered subset remains, and it does not meet the determination condition for the clustering result, so that next-level clustering can no longer be performed.
In an optional implementation manner of this embodiment, as shown in fig. 4, the step 301 may also include:
In an alternative implementation, the computer device may treat each sample image in the sample image set as a category cluster and set a similarity threshold; it then computes the pairwise similarity between category clusters and merges the pair of clusters whose similarity is the largest among those exceeding the threshold, yielding a plurality of merged candidate sample subsets. The similarity between category clusters may be the pixel similarity between the clusters (i.e., the sample images): the gray values or signal values of corresponding pixel pairs in the two images are compared to measure how similar the images are. Alternatively, the inter-cluster distance may be computed by, but is not limited to, the nearest-neighbor distance (single-link), the farthest-neighbor distance (complete-link), the average distance (average-link), and the like.
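A minimal sketch of one merging round as described above, using average-link similarity over toy one-dimensional "images" (the similarity function and the exact threshold semantics are this sketch's assumptions):

```python
import itertools

def merge_once(clusters, similarity, threshold):
    """One round of agglomerative merging: find the most similar pair of
    clusters (average-link over their members) and merge it when the
    similarity exceeds the threshold. Returns the new cluster list and
    whether a merge happened."""
    best_pair, best_sim = None, threshold
    for (i, a), (j, b) in itertools.combinations(enumerate(clusters), 2):
        sims = [similarity(x, y) for x in a for y in b]
        avg = sum(sims) / len(sims)
        if avg > best_sim:
            best_pair, best_sim = (i, j), avg
    if best_pair is None:
        return clusters, False
    i, j = best_pair
    merged = clusters[i] + clusters[j]
    rest = [c for k, c in enumerate(clusters) if k not in (i, j)]
    return rest + [merged], True

# Toy 1-D "images": similarity is negative absolute distance.
sim = lambda x, y: -abs(x - y)
clusters, merged = merge_once([[0.0], [0.1], [5.0]], sim, threshold=-1.0)
```

Repeating `merge_once` until it reports no merge reproduces one level of the agglomerative clustering; a real implementation would compare pixel-wise gray or signal values instead of scalars.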
And step 403, under the condition that the candidate sample subset is judged to meet the clustering condition, taking the candidate sample subset as the sample subset corresponding to the first-level clustering operation.
And step 404, under the condition that the candidate sample subsets are judged not to meet the clustering conditions, performing second-level clustering operation on the candidate sample subsets which do not meet the clustering conditions until the clustering cutoff conditions are met, and obtaining at least one sample subset corresponding to each level of clustering operation.
That is, when the number of sample images in a candidate sample subset is smaller than the preset threshold, cluster merging of that candidate sample subset continues, i.e., the next-level clustering operation is performed; and so on, until the clustering cutoff condition is met, namely the preset number of clustering levels is reached or no further clustering is possible.
The following is an example of a specific embodiment: assuming that the sample image set P1 is subjected to a first-level clustering operation, the obtained candidate sample subsets include a1, a2, a3, a4, a5, a6, a7, a8, a9 and a10, and assuming that a1 is a candidate sample subset satisfying a clustering condition, the a1 may be determined as the sample subset corresponding to the first-level clustering operation; for a2, a3, a4, a5, a6, a7, a8, a9 and a10 which do not satisfy the clustering conditions, the second-level clustering operation is continued, and assuming that after the second-level clustering operation, the obtained candidate sample subsets include b1 (merging a2 and a4), b2 (merging a3 and a7), b3 (merging a5 and a6) and b4 (merging a8, a9 and a10), and assuming that b1 and b4 are candidate sample subsets which satisfy the clustering conditions, b1 and b4 can be determined as sample subsets corresponding to the second-level clustering operation; then, continuing to perform a third-level clustering operation on b2 and b3 which do not meet the clustering condition to obtain a candidate sample subset c1 (combining b2 and b3) after the third-level clustering operation, and if c1 meets the clustering condition at this time, determining c1 as a sample subset corresponding to the third-level clustering operation; thus, a sample subset a1 corresponding to the first-level clustering operation, sample subsets b1 and b4 corresponding to the second-level clustering operation, and a sample subset c1 corresponding to the third-level clustering operation can be obtained.
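The level-by-level behavior of the example above can be imitated with a simplified driver, where the clustering condition is a minimum subset size and naive neighbour pairing stands in for similarity-driven merging (both simplifications are this sketch's own, not the patent's):

```python
def multilevel_cluster(subsets, min_size, max_levels):
    """Level-by-level clustering: at each level, subsets whose image count
    reaches min_size are kept as that level's sample subsets; the rest
    are pairwise-merged and passed on to the next level."""
    per_level = []
    level = 1
    while subsets and level <= max_levels:
        kept = [s for s in subsets if len(s) >= min_size]
        rest = [s for s in subsets if len(s) < min_size]
        per_level.append(kept)
        if len(rest) < 2:   # cutoff: nothing left to merge
            break
        # naive neighbour pairing stands in for similarity-based merging
        subsets = [rest[i] + rest[i + 1] if i + 1 < len(rest) else rest[i]
                   for i in range(0, len(rest), 2)]
        level += 1
    return per_level

levels = multilevel_cluster(
    [["i1", "i2", "i3", "i4", "i5"], ["j1"], ["j2"], ["j3"]],
    min_size=3, max_levels=5)
```

With these inputs the large subset is frozen at level one, the three singletons merge over two further levels, and the returned list holds the sample subsets per clustering level, mirroring the a1 / b1, b4 / c1 pattern in the text.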
It should be noted that the hierarchical clustering method using cluster merging (i.e., aggregation) provided above is only explained as one of the clustering methods, and in practical application, for multi-level clustering operation, a split hierarchical clustering method may also be used, and the implementation of clustering is not limited in the embodiment of the present application; in addition, since the split hierarchical clustering method is the prior art, the specific implementation process of the split hierarchical clustering method is not described in detail in the embodiments of the present application.
Optionally, for each sample image in the various sample subsets of each level, each sample image may be input into a preset template image model, so as to obtain a first template image corresponding to each sample subset. Based on the above example, for the sample subset a1 corresponding to the first-level cluster, the first template image T1 corresponding to the sample subset a1 can be obtained; for the sample subsets b1 and b4 corresponding to the second-level clustering, a first template image T2 corresponding to the sample subset b1 and a first template image T3 corresponding to the sample subset b4 can be obtained; for the sample subset c1 corresponding to the third-level clustering operation, the first template image T4 corresponding to the sample subset c1 can be obtained.
In an optional implementation manner of this embodiment, as shown in fig. 5, this step 302 may also include:
The preset registration model is obtained after an initial registration network is trained on the basis of a reference sample image and a plurality of floating sample images. Optionally, the computer device may first obtain a training data set, wherein the training data set includes a plurality of pairs of training samples, each pair of training samples being composed of a reference sample image and a different floating sample image; alternatively, the training sample may be determined from the sample subsets, a reference sample image may be determined from sample images of one sample subset, and sample images other than the reference sample image in the sample subset are used as floating sample images; then, the reference sample image and a floating sample image can be used as a pair of training samples; based on the same mode, a plurality of pairs of training samples can be obtained from a plurality of sample subsets; in addition, each sample subset may be a sample subset in the same sample image set, or may be a sample subset in a different sample image set.
Then, the computer device may construct an initial registration network (e.g., an unsupervised deep learning registration network), optionally, the unsupervised deep learning registration network may be formed by cascading an encoder and a decoder, the encoder may be formed by a plurality of convolution scales, and the decoder is structurally symmetric to the encoder; under the condition that the image modality of the training sample is CT or MR, a three-dimensional convolution kernel can be selected; in the case where the image modality of the training sample is X-ray, a two-dimensional convolution kernel may be selected. Alternatively, the encoder may be made up of 5 convolution scales, where a convolution scale refers to a series of convolution operations of different convolution kernel sizes.
When model training is performed, the plurality of pairs of training samples may be sequentially input into the unsupervised deep learning registration network, which outputs a deformation field describing the image change for each pair of training samples; the obtained deformation field may then be applied, by spatial transformation, to the corresponding pair of training samples to obtain the registered image corresponding to the floating sample image in that pair. Further, for that pair of training samples, the registered image and the reference sample image are input into a preset loss function to calculate the difference between them; the preset loss function may be composed of a mean-square-error loss term (e.g., an MSE loss) and a smoothing term, as shown in formula (1).
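Formula (1) itself is not reproduced in this excerpt; a typical loss of the kind described, combining an MSE term over the warped floating image with a smoothing term over the deformation field (the symbols and the exact form of the smoothing term are this sketch's assumptions, not the patent's), is:

```latex
\mathcal{L}(I_{\mathrm{fixed}}, I_{\mathrm{float}}, \phi)
  = \frac{1}{|\Omega|}\sum_{p \in \Omega}
      \bigl(I_{\mathrm{fixed}}(p) - I_{\mathrm{float}}(\phi(p))\bigr)^{2}
  \;+\; \lambda \sum_{p \in \Omega} \lVert \nabla \phi(p) \rVert^{2}
```

Here \(\Omega\) is the image domain, \(\phi\) the deformation field produced by the network, and \(\lambda\) weights the smoothness regularization against the similarity term.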
Then, the network hyper-parameters are adjusted as needed so that the preset loss function continues to converge, the optimal image registration effect is achieved, and the trained network model is saved as the preset registration model. Optionally, a minimum loss value may be preset, and when the loss value of the preset loss function is less than or equal to the minimum loss value, the preset loss function may be determined to have converged. Optionally, when the network hyper-parameters remain unchanged for a long time, or their variation amplitude is less than or equal to a preset amplitude threshold, it may be determined that the preset loss function has converged and the trained network model is stable; at this time, the trained network model may be used as the preset registration model. This embodiment adopts an unsupervised deep learning registration network; compared with a supervised registration network, it avoids the tedious process of manually delineating the registration area, improves the training effect of the registration model, and thereby improves the construction efficiency of the standard template.
Optionally, after the preset registration model is obtained, for each sample subset, the method steps shown in fig. 6 may be adopted to obtain an intermediate sample image corresponding to each sample image in the sample subset, including:
Optionally, any sample image in the sample subset may be determined as a reference sample image; the reference sample image can also be determined from each sample image by adopting a preset screening algorithm, wherein the preset screening algorithm can be flexibly set according to actual use requirements, and the application is not specifically limited to this.
Based on the above example, assume that the sample subset a1 includes the sample images [I_1, I_2, I_3, ..., I_N], where the reference sample image I_fixed is I_1 and the floating sample images are [I_2, I_3, ..., I_N]. Then the pairs (I_1, I_2), (I_1, I_3), ..., (I_1, I_N) may each be input into the preset registration model, finally obtaining the intermediate sample image corresponding to each floating sample image: I_2 corresponds to I_2', I_3 corresponds to I_3', ..., I_N corresponds to I_N'. It should be noted that the intermediate sample image corresponding to the reference sample image I_fixed is the reference sample image itself, i.e., I_1 = I_1'.
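The per-subset registration loop above can be sketched as follows; a toy global mean-intensity alignment stands in for the preset registration model, which in practice would warp each floating image with a learned deformation field (the `register` callable and the toy model are this sketch's assumptions):

```python
import numpy as np

def register_subset(images, ref_index, register):
    """Warp every floating image in the subset onto the reference image.
    `register(fixed, moving)` stands in for the preset registration model
    and returns the warped (intermediate) image; the reference image maps
    to itself, i.e. I_1 = I_1'."""
    fixed = images[ref_index]
    out = []
    for i, img in enumerate(images):
        out.append(fixed.copy() if i == ref_index else register(fixed, img))
    return out

# Toy stand-in "model": shift each moving image to the fixed image's mean.
def toy_register(fixed, moving):
    return moving + (fixed.mean() - moving.mean())

imgs = [np.full((2, 2), v, dtype=float) for v in (10.0, 12.0, 8.0)]
intermediate = register_subset(imgs, ref_index=0, register=toy_register)
```

The returned list is the set of intermediate sample images [I_fixed, I_2', ..., I_N'] that the fusion step consumes next.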
Alternatively, the fusion processing may include, but is not limited to, averaging, taking the median, and the like; it should be noted that the averaging or median processing may be performed after the maximum and/or minimum values are first removed.
Preferably, the computer device may average the intermediate sample images respectively corresponding to the sample images in the sample subset to obtain the first template image corresponding to the sample subset; optionally, the averaging processing includes, but is not limited to, direct averaging, weighted averaging, and the like. Based on the above example, the intermediate sample images corresponding to the sample subset a1, [I_fixed, I_2', I_3', ..., I_N'], are averaged to obtain the first template image T1 corresponding to the sample subset a1.
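The fusion options above (mean, median, and trimmed mean with per-pixel extremes removed) can be sketched as:

```python
import numpy as np

def fuse(intermediate_images, mode="mean"):
    """Fuse registered intermediate sample images into one template image."""
    stack = np.stack(intermediate_images)
    if mode == "mean":
        return stack.mean(axis=0)
    if mode == "median":
        return np.median(stack, axis=0)
    if mode == "trimmed_mean":
        # drop the per-pixel maximum and minimum before averaging
        return (stack.sum(axis=0) - stack.max(axis=0)
                - stack.min(axis=0)) / (len(stack) - 2)
    raise ValueError(f"unknown fusion mode: {mode}")

imgs = [np.array([[1.0, 2.0]]),
        np.array([[3.0, 4.0]]),
        np.array([[11.0, 6.0]])]
template = fuse(imgs, mode="trimmed_mean")
```

The trimmed mean discards the outlier value 11.0 at the first pixel, illustrating why removing extremes before averaging can make the template more robust to registration failures.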
Optionally, the first template images corresponding to the sample subsets may be respectively input into the preset registration model, so as to obtain intermediate template images corresponding to the first template images; and then, fusing the intermediate template images corresponding to the first template images respectively to obtain second template images corresponding to the level clustering operation. For the specific implementation process, reference may be made to the steps of the methods shown in fig. 5 and fig. 6, which are not described herein again.
In addition, when the hierarchical clustering operation corresponds to only one sample subset, the first template image corresponding to the sample subset may be used as the second template image corresponding to the hierarchical clustering operation.
Based on the above example, for the first-level clustering operation, since the sample subset corresponding to the first-level clustering operation only includes a1, the first template image T1 corresponding to the sample subset a1 may be used as the second template image U1 corresponding to the first-level clustering operation. For the second-level clustering operation, the first template image T2 corresponding to the sample subset b1 and the first template image T3 corresponding to the sample subset b4 may be input to the preset registration model, so as to obtain an intermediate template image corresponding to the first template image T2 and an intermediate template image corresponding to the first template image T3; then, the two intermediate template images are averaged to obtain the second template image U2 corresponding to the second-level clustering operation. For the third-level clustering operation, since the sample subset corresponding to the third-level clustering operation only includes c1, the first template image T4 corresponding to the sample subset c1 may be used as the second template image U3 corresponding to the third-level clustering operation.
And step 304, determining a template image corresponding to the sample image set according to the second template image corresponding to each level of clustering operation.
Optionally, the second template images corresponding to each level of clustering operation may be respectively input into the preset registration model to obtain intermediate template images respectively corresponding to the second template images; then, the intermediate template images are fused to obtain the template image corresponding to the sample image set. For the specific implementation process, reference may be made to the method steps shown in fig. 5 and fig. 6, which are not described herein again.
Based on the above example, the above processing may be performed on the second template image U1 corresponding to the first-level clustering operation, the second template image U2 corresponding to the second-level clustering operation, and the second template image U3 corresponding to the third-level clustering operation, so as to obtain the template image corresponding to the sample image set P1.
Similarly, for each sample image set corresponding to different attribute information, the template image corresponding to that sample image set can be obtained according to the above-described steps, and the preset template image library is then generated according to the template images corresponding to the sample image sets with different attribute information.
In this embodiment, for each sample image set, the computer device performs multi-level clustering operations on the sample images in the set to obtain at least one sample subset corresponding to each level of clustering operation. For the at least one sample subset corresponding to each level of clustering operation, a first template image corresponding to each sample subset is determined according to the sample images in that subset; then, a second template image corresponding to that level of clustering operation is determined according to the first template images of its sample subsets; finally, the template image corresponding to the sample image set is determined according to the second template images of all levels. By adopting this layer-by-layer image set registration method, registration errors and noise generated during image fusion can be reduced, high-precision standard template construction is achieved, the accuracy of the standard template is improved, and the accuracy of target positioning performed according to the high-precision template image can be improved accordingly.
Fig. 7 is a flowchart illustrating a method for generating a template image library according to another embodiment. This embodiment relates to an optional implementation process in which the computer device determines a reference sample image from the sample images in a sample subset; as shown in fig. 7, this process includes:
In step 701, mean square errors between a first sample image in the sample subset and each second sample image other than the first sample image are respectively calculated; wherein the first sample image is any one of the sample images in the sample subset. A conventional method for calculating the mean square error between images may be used; since the image mean square error is common knowledge to those skilled in the art, it is not explained further in the embodiments of the present application.
In step 702, the mean square errors obtained for the first sample image are summed to obtain a summation result corresponding to the first sample image.
In step 703, a minimum summation result is determined from the summation results, and the first sample image corresponding to the minimum summation result is used as the reference sample image.
In this embodiment, the computer device calculates the mean square errors between a first sample image in the sample subset and every second sample image other than the first sample image, sums these mean square errors to obtain a summation result, determines the minimum summation result among the summation results, and uses the first sample image corresponding to the minimum summation result as the reference sample image, where the first sample image is any one of the sample images in the sample subset. That is to say, by computing, for each sample image, the sum of its mean square errors against the other sample images and selecting the sample image with the minimum sum as the reference sample image, the difference between the reference sample image and the other sample images is minimized. Consequently, the images obtained by registering the other sample images in the subset against the reference sample image are of higher quality, the template image determined from the registered images is more accurate, and target positioning performed according to that template image is correspondingly more accurate.
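The reference selection rule above (minimum sum of mean square errors against all other images) can be sketched directly:

```python
import numpy as np

def pick_reference(images):
    """Choose the reference sample image: for each candidate, sum its MSE
    against every other image and keep the candidate with the smallest sum."""
    def mse(a, b):
        return float(((a - b) ** 2).mean())
    sums = [sum(mse(img, other) for other in images) for img in images]
    return int(np.argmin(sums)), sums

# Toy "images": the middle one differs least from the rest on average.
imgs = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 2.0])]
ref_index, sums = pick_reference(imgs)
```

The middle image wins because its total distance to the others is smallest, which is exactly the property that makes the subsequent registrations easier.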
By adopting the template image generation methods provided in the foregoing embodiments, template image libraries corresponding to different target portions can be generated, and the template image library of a target portion can be used for auxiliary analysis of medical images of that portion. For example, the physiological characteristics of the target portion can be analyzed by combining its template image library with its medical image, and a certain target to be positioned within the target portion can be located and analyzed in the same way; the following description takes accurate positioning analysis of a target to be positioned within a target portion as a specific example.
In one embodiment, as shown in fig. 8, there is provided an object positioning method, which is described by taking the method as an example applied to the computer device in fig. 1, and includes the following steps:
Optionally, the computer device may obtain the medical image of the target portion of the object to be detected from the image scanning device, may also obtain the medical image from a server storing the medical image of the target portion of the object to be detected, and may also obtain the medical image from a local storage of the computer device, where the medical image stored in the local storage may be sent to the computer device after the image scanning device scans the medical image of the target portion of the object to be detected; the embodiment of the present application does not limit the acquisition mode of the medical image.
The preset template image library includes a plurality of template images of the target portion corresponding to different attribute information, and may be generated by the method of any one of the embodiments of fig. 2 to 7. The attribute information of the object to be measured may include, but is not limited to, the sex, age, and ethnicity of the object to be measured, and category information of the target portion, where the category information may be position information of the target portion, such as: left knee joint or right knee joint. Correspondingly, the preset template image library may include template images of the knee joint corresponding to different attribute information, with at least one attribute item differing between different template images; for example: the first template image corresponds to sex (male), age (20), ethnicity (Han), and position (left knee joint); the second template image corresponds to sex (female), age (20), ethnicity (Han), and position (left knee joint); the third template image corresponds to sex (male), age (20), ethnicity (Han), and position (right knee joint); and the like.
Optionally, for template images of target portions corresponding to different attribute information, sample medical images of target portions of a plurality of different objects under the same attribute information may be acquired, and the sample medical images are medical images of the target portions in a healthy state, and then, the template images of the target portions corresponding to the attribute information may be determined by using the plurality of sample medical images. Optionally, when the template image of the target portion corresponding to the attribute information is determined by using a plurality of sample medical images, the common features of the plurality of sample medical images may be learned by using the existing deep learning technique to obtain the template image of the target portion corresponding to the attribute information; of course, the template image of the target portion corresponding to the attribute information may be obtained by processing a plurality of sample medical images using an existing image processing technique, such as image fusion. In the embodiment of the present application, a manner of acquiring the template image of the target portion is not particularly limited. Further, after the template images of the target portion corresponding to different attribute information are acquired, a preset template image library of the target portion can be established according to the template images of the target portion corresponding to different attribute information.
Further, when the target part of the object to be detected is anatomically positioned, the computer equipment can acquire the attribute information of the object to be detected, which is input by a user, and also can acquire the attribute information of the object to be detected from the medical record of the object to be detected; then, the computer device may obtain, from the preset template image library, a target template image of a target portion matching the attribute information of the object to be detected, according to the attribute information of the object to be detected.
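The attribute-based lookup described above reduces to matching the subject's attribute tuple against the library keys (the attribute fields and the string-valued templates here are this sketch's assumptions):

```python
def match_template(template_library, subject_attrs):
    """Look up the target template image whose attribute tuple matches the
    subject's attributes (sex, age band, ethnicity, laterality)."""
    key = (subject_attrs["sex"], subject_attrs["age_band"],
           subject_attrs["ethnicity"], subject_attrs["side"])
    try:
        return template_library[key]
    except KeyError:
        raise LookupError(f"no template for attributes {key}") from None

# Hypothetical library; real values would be template image arrays.
library = {
    ("M", "20-29", "Han", "left"):  "template_M_left",
    ("F", "20-29", "Han", "left"):  "template_F_left",
    ("M", "20-29", "Han", "right"): "template_M_right",
}
t = match_template(library, {"sex": "F", "age_band": "20-29",
                             "ethnicity": "Han", "side": "left"})
```

A production system might fall back to the nearest age band or a generic template instead of raising, but exact matching shows the intended selection logic.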
Wherein, the target to be positioned can be an organ or tissue in the target part; such as: in the reconstruction of the anterior cruciate ligament of the knee joint, the target to be positioned can be the anterior cruciate ligament in the knee joint, namely, the position information of the anterior cruciate ligament is determined through the medical image of the knee joint, and then the anatomical position in the reconstruction of the anterior cruciate ligament is determined.
Optionally, the initial position information of the target to be positioned may be determined from the target template image of the target portion, and then the initial position information of the target to be positioned is matched to the medical image of the target portion to obtain the position information of the target to be positioned in the medical image; optionally, the medical image of the target part and the target template image of the target part may be registered to obtain a registered target medical image, and then the target to be positioned in the target part is positioned according to the registered target medical image to obtain the position information of the target to be positioned; it should be noted that, here, the registration between the medical image and the target template image may adopt the existing medical image registration technology, such as: the image registration technique may be implemented by using principles such as rigid body transformation, affine transformation, projective transformation, or nonlinear transformation, and the embodiment of the present application is not limited in this respect.
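Once the medical image and the target template image are registered, mapping the initial position information from the template into subject coordinates amounts to applying the deformation at the annotated location; a minimal displacement-field sketch (the field layout and the additive displacement convention are this sketch's assumptions):

```python
import numpy as np

def map_landmark(landmark, displacement):
    """Carry a landmark annotated on the template into subject-image
    coordinates by adding the deformation-field displacement at that voxel."""
    idx = tuple(int(round(c)) for c in landmark)
    return np.asarray(landmark, dtype=float) + displacement[idx]

# Toy 4x4 field shifting everything by (+1, -1); real fields come from
# the registration step (rigid, affine, projective, or nonlinear).
field = np.tile(np.array([1.0, -1.0]), (4, 4, 1))
pos = map_landmark((2, 3), field)
```

In practice the field would be the output of the registration between the medical image and the target template image, and the landmark would be, e.g., the anterior cruciate ligament position annotated on the template.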
In the target positioning method, the computer device acquires the medical image of the target portion of the object to be measured and, according to the attribute information of the object to be measured, acquires from the preset template image library a target template image of the target portion matching that attribute information; next, the target to be positioned within the target portion is located according to the medical image of the target portion and the target template image of the target portion, obtaining the position information of the target to be positioned; the preset template image library includes a plurality of template images of the target portion corresponding to different attribute information. That is to say, the target positioning method of the embodiment of the present application determines the position of the target to be positioned by combining the standard template image corresponding to the target portion of the object to be measured. Because the position of each tissue and organ within the target portion is distinct and accurate in the standard template image of the target portion, compared with determining the position information of the target to be positioned directly from the medical image, more accurate position information can be obtained on the basis of the standard template image, regardless of the imaging quality at the location of the target to be positioned or the degree of damage there.
In addition, because the preset template image library of the target part in the embodiment of the application includes template images corresponding to different attribute information, the target template image matched with the attribute information of the object to be detected can be obtained from the preset template image library of the target part according to the attribute information of the object to be detected, so that the matching degree between the target template image and the object to be detected is improved, and the positioning accuracy of the target to be positioned in the target part of the object to be detected can be further improved.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict order restriction on the execution of these steps, and they may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be performed in turns or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides a device for generating a template image library, used to implement the above method for generating a template image library. The solution provided by the device is similar to that described for the method above; therefore, for specific limitations in the one or more embodiments of the device provided below, reference may be made to the limitations on the method for generating a template image library described above, and details are not repeated here.
In one embodiment, as shown in fig. 9, there is provided an apparatus for generating a template image library, including: an obtaining module 901, a constructing module 902 and an establishing module 903, wherein:
an obtaining module 901, configured to obtain multiple sample image sets of the target portion; and the attribute information corresponding to each sample image set is different.
A constructing module 902, configured to respectively construct, for each sample image set, a template image corresponding to each sample image set.
The establishing module 903 is configured to establish a preset template image library of the target portion according to the template image corresponding to each sample image set.
In one embodiment, the constructing module 902 includes a clustering unit, a first determining unit, a second determining unit and a third determining unit. The clustering unit is configured to perform, for each sample image set, a multi-level clustering operation on the sample images in the set to obtain at least one sample subset corresponding to each level of the clustering operation, where a sample subset comprises a plurality of sample images that satisfy a clustering condition. The first determining unit is configured to determine, for the at least one sample subset corresponding to each level of the clustering operation, a first template image corresponding to each sample subset according to the sample images in that subset. The second determining unit is configured to determine a second template image corresponding to that level of the clustering operation according to the first template images corresponding to the sample subsets. The third determining unit is configured to determine the template image corresponding to the sample image set according to the second template images corresponding to the levels of the clustering operation.
In one embodiment, the clustering unit is specifically configured to: perform a first-level clustering operation on the sample images in the sample image set to obtain a plurality of candidate sample subsets corresponding to the first-level clustering operation; judge whether each candidate sample subset satisfies the clustering condition; take a candidate sample subset as a sample subset corresponding to the first-level clustering operation when it satisfies the clustering condition; and, when a candidate sample subset does not satisfy the clustering condition, perform a second-level clustering operation on it, and so on until a clustering cut-off condition is met, obtaining at least one sample subset corresponding to each level of the clustering operation. The clustering condition includes that the number of sample images in the candidate sample subset is greater than or equal to a preset threshold.
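The recursive, multi-level clustering described above can be sketched as follows. This is a minimal illustration only: the patent does not specify the per-level clustering algorithm, the image features, or the threshold, so this sketch assumes a simple k-means on per-image feature vectors and a size threshold `min_size` as the clustering condition, with a maximum level count as the cut-off condition.

```python
import numpy as np

def cluster_level(features, k=2, iters=20, seed=0):
    """One level of clustering: a bare-bones k-means on feature vectors.

    Returns a list of index arrays, one per non-empty cluster.
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest center, then update the centers.
        labels = np.argmin(((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return [np.where(labels == j)[0] for j in range(k) if np.any(labels == j)]

def multilevel_cluster(features, min_size=2, max_levels=3):
    """Recursive multi-level clustering. A candidate subset that satisfies
    the clustering condition (size >= min_size) is kept as a sample subset
    of the current level; otherwise it is clustered again at the next level,
    until the cut-off (max_levels) is reached."""
    subsets_per_level = []
    pending = [np.arange(len(features))]
    for level in range(max_levels):
        kept, next_pending = [], []
        for idx in pending:
            if len(idx) < 2:  # too small to split further; keep as-is
                kept.append(idx)
                continue
            for sub in cluster_level(features[idx]):
                cand = idx[sub]
                if len(cand) >= min_size or level == max_levels - 1:
                    kept.append(cand)
                else:
                    next_pending.append(cand)
        subsets_per_level.append(kept)
        pending = next_pending
        if not pending:
            break
    return subsets_per_level
```

With two well-separated groups of three images each and `min_size=2`, the first level already yields two valid sample subsets and no further levels are needed.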
In one embodiment, the first determining unit is specifically configured to, for each sample subset corresponding to the hierarchical clustering operation, respectively input each sample image in the sample subset into a preset registration model, so as to obtain an intermediate sample image corresponding to each sample image in the sample subset; and performing fusion processing on the intermediate sample images respectively corresponding to the sample images in the sample subset to obtain a first template image corresponding to the sample subset.
In one embodiment, the first determining unit is specifically configured to determine a reference sample image from each sample image in the sample subset; respectively inputting the reference sample image and each floating sample image into a preset registration model to obtain a deformation field of each floating sample image mapped to the reference sample image; the floating sample image is a sample image in the sample subset except the reference sample image; and aiming at each floating sample image, registering the floating sample image according to the deformation field corresponding to the floating sample image to obtain an intermediate sample image corresponding to each floating sample image.
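The warping step of the embodiment above can be sketched as follows. The "preset registration model" itself is unspecified in this application (it could, for example, be a learned deformable-registration network), so the sketch assumes only its output: a dense displacement field mapping each floating image to the reference image. Nearest-neighbour sampling is used purely to keep the example dependency-free; a practical implementation would interpolate.

```python
import numpy as np

def warp_with_deformation_field(floating, field):
    """Resample a 2-D floating image through a dense displacement field.

    floating : (H, W) array, the floating sample image
    field    : (2, H, W) array; field[0] and field[1] give, for each output
               pixel, the row/column displacement (in pixels) to sample from
    Returns the intermediate sample image registered to the reference.
    """
    h, w = floating.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_r = np.clip(np.round(rows + field[0]).astype(int), 0, h - 1)
    src_c = np.clip(np.round(cols + field[1]).astype(int), 0, w - 1)
    return floating[src_r, src_c]
```

A zero field reproduces the input unchanged; a constant column displacement of 1 shifts the image one pixel, as expected of a resampling through the field.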
In one embodiment, the first determining unit is specifically configured to calculate a mean square error between a first sample image in the sample subset and a second sample image except the first sample image; summing the mean square errors to obtain a sum result of the mean square errors; determining a minimum summation result from the summation results, and using a first sample image corresponding to the minimum summation result as a reference sample image; wherein the first sample image is any one of the sample images in the sample subset.
In an embodiment, the first determining unit is specifically configured to perform averaging processing on intermediate sample images respectively corresponding to sample images in the sample subset to obtain a first template image corresponding to the sample subset.
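The reference-selection and averaging steps described in the two embodiments above can be sketched directly from their definitions: the reference is the sample image whose summed mean-squared error against all other images in the subset is smallest, and the first template image is the pixel-wise average of the registered intermediate images. Function names here are illustrative, not from the application.

```python
import numpy as np

def select_reference(images):
    """Return the index of the reference sample image: the one whose
    summed MSE against every other image in the subset is minimal."""
    stack = np.stack([img.astype(float) for img in images])
    n = len(stack)
    sums = [sum(np.mean((stack[i] - stack[j]) ** 2) for j in range(n) if j != i)
            for i in range(n)]
    return int(np.argmin(sums))

def average_fuse(intermediate_images):
    """Fuse the registered intermediate images into a first template image
    by pixel-wise averaging."""
    return np.mean(np.stack(intermediate_images).astype(float), axis=0)
```

For three constant images with values 0, 1 and 0.4, the 0.4 image minimises the summed MSE (0.16 + 0.36 = 0.52) and is therefore chosen as the reference.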
In one embodiment, the second determining unit is specifically configured to input the first template images corresponding to the sample subsets into a preset registration model respectively, so as to obtain intermediate template images corresponding to the first template images respectively; and performing fusion processing on the intermediate template images respectively corresponding to the first template images to obtain second template images corresponding to the level clustering operation.
In one embodiment, the third determining unit is specifically configured to input the second template images corresponding to each level of clustering operation into a preset registration model, so as to obtain intermediate template images corresponding to the second template images respectively; and performing fusion processing on the intermediate template images respectively corresponding to the second template images to obtain the template images corresponding to the sample image set.
Similarly, based on the same inventive concept, an embodiment of the present application further provides a target positioning device for implementing the target positioning method mentioned above. The solution provided by the device is similar to that described for the method; therefore, for specific limitations in the one or more embodiments of the target positioning device provided below, reference may be made to the limitations on the target positioning method above, and details are not repeated here.
In one embodiment, as shown in fig. 10, there is provided an object locating device comprising: a first obtaining module 1001, a second obtaining module 1002, and a determining module 1003, wherein:
the first acquiring module 1001 is configured to acquire a medical image of a target portion of an object to be detected.
A second obtaining module 1002, configured to obtain, according to the attribute information of the object to be detected, a target template image of a target portion that matches the attribute information of the object to be detected from a preset template image library; the preset template image library comprises a plurality of template images of target parts corresponding to different attribute information.
The determining module 1003 is configured to locate a target to be located in the target portion according to the medical image of the target portion and the target template image of the target portion, so as to obtain position information of the target to be located.
In one embodiment, the determining module 1003 includes a registration unit and a determining unit. The registration unit is configured to register the medical image of the target portion with the target template image of the target portion to obtain a registered target medical image; the determining unit is configured to locate the target to be positioned in the target portion according to the target medical image, obtaining the position information of the target to be positioned.
The device for generating a template image library and the target positioning device, and each module therein, can be implemented wholly or partially in software, hardware, or a combination thereof. The modules can be embedded, in hardware form, in or independent of a processor of the computer device, or stored, in software form, in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a medical imaging terminal, a server connected to the medical imaging terminal, or a series of computer devices related to medical imaging processing, such as a medical imaging scanning device, and its internal structure diagram may be as shown in fig. 1.
In one embodiment, a computer device is provided, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the template image library generation method and the target location method provided in the foregoing embodiments when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program is executed by a processor to implement the steps of the template image library generation method and the target positioning method provided in the above embodiments.
In one embodiment, a computer program product is provided, which includes a computer program, and when the computer program is executed by a processor, the steps of the method for generating a template image library and the method for locating an object provided in the above embodiments are implemented.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases; the non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum-computing-based data processing logic devices, etc., without limitation.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above-mentioned embodiments express only several implementations of the present application, and are described specifically and in detail, but they should not be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (13)
1. A method for generating a template image library, the method comprising:
acquiring a plurality of sample image sets of a target part; attribute information corresponding to each sample image set is different;
respectively constructing template images corresponding to the sample image sets aiming at the sample image sets;
establishing a preset template image library of the target part according to the template image corresponding to each sample image set; the preset template image library comprises a plurality of template images of the target part corresponding to different attribute information.
2. The method according to claim 1, wherein the constructing, for each of the sample image sets, a template image corresponding to each of the sample image sets respectively comprises:
for each sample image set, performing multi-level clustering operation on sample images in the sample image set to obtain at least one sample subset corresponding to each level of clustering operation; the sample subset comprises a plurality of sample images meeting a clustering condition;
for the at least one sample subset corresponding to each level of clustering operation, determining a first template image corresponding to each sample subset according to each sample image in the sample subsets corresponding to that level of clustering operation;
determining a second template image corresponding to the hierarchical clustering operation according to the first template image corresponding to each sample subset;
and determining the template image corresponding to the sample image set according to the second template image corresponding to each level of clustering operation.
3. The method of claim 2, wherein the performing multiple levels of clustering operations on the sample images in the sample image set to obtain at least one sample subset corresponding to each level of clustering operations comprises:
performing first-level clustering operation on sample images in the sample image set to obtain a plurality of candidate sample subsets corresponding to the first-level clustering operation;
judging whether each candidate sample subset meets a clustering condition; the clustering condition comprises that the number of sample images in the candidate sample subset is greater than or equal to a preset threshold value;
if so, taking the candidate sample subset as a sample subset corresponding to the first-level clustering operation;
and if not, performing second-level clustering operation on the candidate sample subsets which do not meet the clustering condition until a clustering cutoff condition is met to obtain at least one sample subset corresponding to each-level clustering operation.
4. The method of claim 2, wherein the determining a first template image corresponding to each of the sample subsets corresponding to the level of clustering operation according to each sample image in the sample subsets comprises:
respectively inputting each sample image in the sample subsets into a preset registration model aiming at each sample subset corresponding to the hierarchical clustering operation to obtain intermediate sample images respectively corresponding to each sample image in the sample subsets;
and performing fusion processing on the intermediate sample images respectively corresponding to the sample images in the sample subset to obtain a first template image corresponding to the sample subset.
5. The method according to claim 4, wherein the inputting each sample image in the sample subset into a preset registration model to obtain an intermediate sample image corresponding to each sample image in the sample subset comprises:
determining a reference sample image from each sample image in the sample subset;
inputting the reference sample image and each floating sample image into a preset registration model respectively to obtain a deformation field of each floating sample image mapped to the reference sample image; the floating sample image is a sample image in the sample subset except the reference sample image;
and aiming at each floating sample image, registering the floating sample image according to the deformation field corresponding to the floating sample image to obtain an intermediate sample image corresponding to each floating sample image.
6. The method of claim 5, wherein determining a reference sample image from each sample image in the subset of samples comprises:
respectively calculating the mean square error between a first sample image in the sample subset and a second sample image except the first sample image; wherein the first sample image is any one of the sample images in the sample subset;
summing the mean square errors to obtain a sum result of the mean square errors;
determining a minimum summation result from the summation results, and using the first sample image corresponding to the minimum summation result as the reference sample image.
7. The method according to claim 4, wherein the performing fusion processing on the intermediate sample images respectively corresponding to the sample images in the sample subset to obtain the first template image corresponding to the sample subset comprises:
and carrying out averaging processing on the intermediate sample images respectively corresponding to the sample images in the sample subset to obtain a first template image corresponding to the sample subset.
8. The method of claim 2, wherein determining a second template image corresponding to the hierarchical clustering operation from the first template image corresponding to each of the sample subsets comprises:
inputting the first template images corresponding to the sample subsets into a preset registration model respectively to obtain intermediate template images corresponding to the first template images respectively;
and performing fusion processing on the intermediate template images respectively corresponding to the first template images to obtain second template images corresponding to the level clustering operation.
9. The method of claim 2, wherein determining the template image corresponding to the sample image set according to the second template image corresponding to each level of clustering operation comprises:
inputting the second template images corresponding to each level of clustering operation into a preset registration model respectively to obtain intermediate template images corresponding to the second template images respectively;
and performing fusion processing on the intermediate template images respectively corresponding to the second template images to obtain the template images corresponding to the sample image set.
10. A method of locating an object, the method comprising:
acquiring a medical image of a target part of an object to be detected;
acquiring a target template image of the target part matched with the attribute information of the object to be detected from a preset template image library according to the attribute information of the object to be detected;
positioning a target to be positioned in the target part according to the medical image of the target part and the target template image of the target part to obtain position information of the target to be positioned;
wherein the preset template image library is generated using the method of any one of claims 1 to 9.
11. An apparatus for generating a template image library, the apparatus comprising:
the acquisition module is used for acquiring a plurality of sample image sets of the target part; attribute information corresponding to each sample image set is different;
the construction module is used for respectively constructing template images corresponding to the sample image sets aiming at the sample image sets;
the establishing module is used for establishing a preset template image library of the target part according to the template images corresponding to the sample image sets; the preset template image library comprises a plurality of template images of the target part corresponding to different attribute information.
12. An object localization arrangement, characterized in that the arrangement comprises:
the first acquisition module is used for acquiring a medical image of a target part of an object to be detected;
the second acquisition module is used for acquiring a target template image of the target part matched with the attribute information of the object to be detected from a preset template image library according to the attribute information of the object to be detected;
the determining module is used for positioning a target to be positioned in the target part according to the medical image of the target part and the target template image of the target part to obtain the position information of the target to be positioned;
wherein the preset template image library is generated using the method of any one of claims 1 to 9.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210602333.6A CN115049596A (en) | 2022-05-30 | 2022-05-30 | Template image library generation method, target positioning device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115049596A true CN115049596A (en) | 2022-09-13 |
Family
ID=83160006
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210602333.6A Pending CN115049596A (en) | 2022-05-30 | 2022-05-30 | Template image library generation method, target positioning device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115049596A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116310763A (en) * | 2023-05-10 | 2023-06-23 | 合肥英特灵达信息技术有限公司 | Template image generation method and device, electronic equipment and storage medium |
CN116310763B (en) * | 2023-05-10 | 2023-07-21 | 合肥英特灵达信息技术有限公司 | Template image generation method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Rueckert et al. | Model-based and data-driven strategies in medical image computing | |
Eppenhof et al. | Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks | |
CN110766730B (en) | Image registration and follow-up evaluation method, storage medium and computer equipment | |
Hosseini et al. | Comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients | |
CN111862249A (en) | System and method for generating canonical imaging data for medical image processing using deep learning | |
Cunningham et al. | Real-time ultrasound segmentation, analysis and visualisation of deep cervical muscle structure | |
Peng et al. | Segmentation of lung in chest radiographs using hull and closed polygonal line method | |
Rundo et al. | Multimodal medical image registration using particle swarm optimization: A review | |
CN111709485B (en) | Medical image processing method, device and computer equipment | |
Chen et al. | Combining registration and active shape models for the automatic segmentation of the lymph node regions in head and neck CT images | |
EP3570288A1 (en) | Method for obtaining at least one feature of interest | |
Puyol-Anton et al. | A multimodal spatiotemporal cardiac motion atlas from MR and ultrasound data | |
Cao et al. | Deep learning methods for cardiovascular image | |
Galib et al. | A fast and scalable method for quality assurance of deformable image registration on lung CT scans using convolutional neural networks | |
US20220036575A1 (en) | Method for measuring volume of organ by using artificial neural network, and apparatus therefor | |
Li et al. | Biomechanical model for computing deformations for whole‐body image registration: A meshless approach | |
Rios et al. | Population model of bladder motion and deformation based on dominant eigenmodes and mixed-effects models in prostate cancer radiotherapy | |
Peng et al. | H-SegMed: a hybrid method for prostate segmentation in TRUS images via improved closed principal curve and improved enhanced machine learning | |
EP4156096A1 (en) | Method, device and system for automated processing of medical images to output alerts for detected dissimilarities | |
Tao et al. | NSCR‐Based DenseNet for Lung Tumor Recognition Using Chest CT Image | |
Lauzeral et al. | Shape parametrization of bio-mechanical finite element models based on medical images | |
Esmaeili et al. | Generative adversarial networks for anomaly detection in biomedical imaging: A study on seven medical image datasets | |
Ding et al. | Combining feature correspondence with parametric chamfer alignment: hybrid two-stage registration for ultra-widefield retinal images | |
CN115049596A (en) | Template image library generation method, target positioning device and storage medium | |
Li et al. | CR-GAN: Automatic craniofacial reconstruction for personal identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||