CN114842003A - Medical image follow-up target pairing method, device and application

Info

Publication number
CN114842003A
Authority
CN
China
Prior art keywords
target
medical image
follow
feature
normalized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210776250.9A
Other languages
Chinese (zh)
Other versions
CN114842003B (en)
Inventor
何林阳
季红丽
程国华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Jianpei Technology Co ltd
Original Assignee
Hangzhou Jianpei Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Jianpei Technology Co ltd filed Critical Hangzhou Jianpei Technology Co ltd
Priority to CN202210776250.9A
Publication of CN114842003A
Application granted
Publication of CN114842003B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung
    • G06T2207/30064Lung nodule

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application provides a medical image follow-up target pairing method, device and application. The method comprises the following steps: acquiring a first medical image and a second medical image that satisfy follow-up conditions; extracting first target feature information of a first follow-up target, the first target feature information comprising a first normalized coordinate, first normalized long- and short-diameter information and a first local image block, and extracting second target feature information of a second follow-up target, the second target feature information comprising a second normalized coordinate, second normalized long- and short-diameter information and a second local image block; inputting the first/second target feature information into a depth feature expression model to obtain a first/second target depth feature; and judging whether the first follow-up target and the second follow-up target pair with each other based on the first and second target depth features. Combining information from multiple dimensions improves the pairing accuracy.

Description

Medical image follow-up target pairing method, device and application
Technical Field
The present application relates to the field of medical images, and in particular, to a medical image follow-up target pairing method, device and application.
Background
Medical imaging refers to the techniques and processes for obtaining images of the internal tissues of the human body, or a part of it, in a non-invasive manner for medical treatment or medical research; the images it produces help medical staff understand the actual pathological condition of a patient. In clinical medicine, follow-up medical images, that is, images of the same part of a patient taken at intervals, are often needed to determine how a disease is developing, so as to judge the progress and prognosis of a diagnosis and treatment plan.
However, medical images taken in different periods may deviate from one another, whether because of changes in the patient's position, the apparatus itself, or the tissues and organs themselves. If medical staff match follow-up targets by manually reading the films, the workload is heavy and the registration precision suffers from large human error. In the follow-up method based on medical images provided by prior art CN112686866A, a contrast-image comparison result and/or a structured-text comparison result of two-stage target tissue features is obtained; when obtaining the contrast-image comparison result, matching is performed only according to the level of the first image and the volume of the target image region. This matching manner first requires corresponding sequences of contrast images that distinguish different groups of target tissue features, and it matches only when the volume of the target region changes little, so the application scenarios of the whole scheme are very limited and the matching accuracy is highly problematic.
Disclosure of Invention
The embodiment of the application provides a medical image follow-up target pairing method, a device and application.
In a first aspect, an embodiment of the present application provides a medical image follow-up target pairing method, where the method includes:
acquiring a first medical image and a second medical image which meet follow-up conditions;
preprocessing the first medical image and the second medical image to obtain a first standard body position medical image and a second standard body position medical image;
extracting first target feature information of a first follow-up target of the first standard body position medical image, wherein the first target feature information comprises a first normalized coordinate, first normalized long- and short-diameter information and a first local image block, and extracting second target feature information of a second follow-up target of the second standard body position medical image, wherein the second target feature information comprises a second normalized coordinate, second normalized long- and short-diameter information and a second local image block;
inputting the first target feature information into a depth feature expression model to obtain a first target depth feature, and inputting the second target feature information into the depth feature expression model to obtain a second target depth feature;
and judging the pairing condition of the first follow-up target and the second follow-up target based on the first target depth feature and the second target depth feature.
In a second aspect, an embodiment of the present application provides a medical image follow-up target pairing device, including: a medical image acquisition unit for acquiring a first medical image and a second medical image satisfying a follow-up condition;
the preprocessing unit is used for preprocessing the first medical image and the second medical image to obtain a first standard body position medical image and a second standard body position medical image;
an information extraction unit, configured to extract first target feature information of a first follow-up target of the first standard body position medical image, where the first target feature information includes a first normalized coordinate, first normalized long- and short-diameter information and a first local image block, and to extract second target feature information of a second follow-up target of the second standard body position medical image, where the second target feature information includes a second normalized coordinate, second normalized long- and short-diameter information and a second local image block;
the depth feature obtaining unit is used for inputting the first target feature information into a depth feature expression model to obtain a first target depth feature, and inputting the second target feature information into the depth feature expression model to obtain a second target depth feature;
and the pairing unit is used for judging the pairing condition of the first follow-up target and the second follow-up target based on the first target depth feature and the second target depth feature.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the medical image follow-up target pairing method.
In a fourth aspect, the present application provides a readable storage medium in which a computer program is stored, the computer program comprising program code for controlling a process to execute any one of the medical image follow-up target pairing methods above.
The main contributions and innovation points of the invention are as follows:
the embodiment of the application provides a medical image follow-up target pairing method, a device and an application, after standard body position correction and target alignment processing are carried out on a medical image, information of multiple dimensions such as a normalized coordinate, normalized long and short path information and a local image block is extracted and converted to obtain target depth characteristics, whether follow-up targets of two medical images are paired or not is judged through comparison of the target depth characteristics, and the information of the multiple dimensions can be combined to obtain a more accurate follow-up target pairing result. Compared with other patent methods, the method specially designs key technologies such as standard body position correction, depth feature pairing, same-patient triple modeling training and the like aiming at the conditions that the posture of the shot body position is not uniform and the patient is adjacent to the same-part focus, the overall prediction effect has very high pairing accuracy and robustness, the pairing accuracy can reach more than 92%, the model can be further expanded to the field of focus retrieval after further training is completed, and the popularization and application of related technologies have very high clinical value.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a medical image follow-up target pairing method according to an embodiment of the present application;
FIG. 2 is a framework diagram of the depth feature expression model according to one embodiment of the present application;
FIG. 3 is a logic diagram of a medical image follow-up target pairing method according to an embodiment of the present application;
fig. 4 is a block diagram of a medical image follow-up target pairing device according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Example one
The application aims to provide a medical image follow-up target pairing method, device and application which, after standard body position correction of at least two medical images, extract depth expression features from three kinds of information, namely normalized coordinates, normalized long and short diameters and local image blocks, and use these depth expression features, which cover more comprehensive information, to pair the follow-up targets of multiple medical images.
Specifically, referring to fig. 1 and 3, the medical image follow-up target pairing method includes:
acquiring a first medical image and a second medical image which meet follow-up conditions;
preprocessing the first medical image and the second medical image to obtain a first standard body position medical image and a second standard body position medical image;
extracting first target feature information of a first follow-up target of the first standard body position medical image, wherein the first target feature information comprises a first normalized coordinate, first normalized long- and short-diameter information and a first local image block, and extracting second target feature information of a second follow-up target of the second standard body position medical image, wherein the second target feature information comprises a second normalized coordinate, second normalized long- and short-diameter information and a second local image block;
inputting the first target feature information into a depth feature expression model to obtain a first target depth feature, and inputting the second target feature information into the depth feature expression model to obtain a second target depth feature;
and judging the pairing condition of the first follow-up target and the second follow-up target based on the first target depth feature and the second target depth feature.
This technical scheme has a broad application scenario: assisting medical workers in quickly and accurately judging whether the follow-up targets in follow-up medical images pair with each other. If they do, the development of the disease can be studied based on the pairing result.
In the step of "acquiring the first medical image and the second medical image satisfying the follow-up condition", the follow-up condition is a medical image of the same follow-up target of the same patient at different time periods. The time length of different time periods can be adjusted according to actual requirements, and can be one week, one month or even one year; the follow-up target is selected from any tissue part which can be obtained by image, such as tissue parts of chest, lung, spine and the like. The type of medical image may be selected from CT, MRI/fMRI, SPECT, etc.
It should be noted that, in order to facilitate the follow-up target pairing of the first medical image and the second medical image, the first medical image and the second medical image are preferably selected as the same type of medical image.
The step of preprocessing the first medical image and the second medical image to obtain a first standard body position medical image and a second standard body position medical image comprises the step of carrying out standard body position correction and target alignment processing on the first medical image and the second medical image to obtain a first standard body position medical image and a second standard body position medical image.
At this time, the follow-up targets in the first standard body position medical image and the second standard body position medical image are in a standard body position. The standard body position correction comprises the following specific steps:
acquiring a first image scanning parameter of the first medical image, and correcting the first medical image to a standard body position coordinate system based on the first image scanning parameter;
and acquiring second image scanning parameters of the second medical image, and correcting the second medical image to the standard body position coordinate system based on the second image scanning parameters.
The first image scanning parameters and the second image scanning parameters at least comprise the patient placement position, the scanning direction and the starting coordinate information. The scheme corrects the pixel coordinates of the first/second medical image into a standard body position coordinate system based on the first/second image scanning parameters, so that the patient in the corrected first/second medical image is in a standard body position. It is worth mentioning that correcting to the standard body position keeps the error of the subsequently paired data as small as possible.
In some embodiments, the pixel coordinates of the first/second medical image are corrected into the standard body position coordinate system by an affine transformation.
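As a rough illustration of this correction step, the sketch below reorients a CT volume to a single standard body position with axis flips. The mapping from placement-position codes to flips is a hypothetical stand-in for the full affine correction driven by the image scanning parameters, not the patent's exact procedure:

```python
import numpy as np

def correct_to_standard_position(volume, patient_position="HFS"):
    """Reorient a CT volume (z, y, x) toward a standard body position.

    Minimal sketch: the placement-position-to-flip table below is an
    assumption; in practice the flips/affine would be derived from the
    image scanning parameters (placement position, scan direction,
    starting coordinates).
    """
    # Head-first supine (HFS) is taken as the standard body position here.
    flips = {
        "HFS": (False, False, False),  # already standard
        "FFS": (True, False, False),   # feet-first supine: flip z
        "HFP": (False, True, True),    # head-first prone: flip y and x
    }
    fz, fy, fx = flips.get(patient_position, (False, False, False))
    if fz:
        volume = volume[::-1, :, :]
    if fy:
        volume = volume[:, ::-1, :]
    if fx:
        volume = volume[:, :, ::-1]
    return np.ascontiguousarray(volume)
```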
Of course, the scheme not only performs standard body position correction on the first medical image and the second medical image but also performs target alignment on the corrected medical images, so that the follow-up targets of the first and second standard body position medical images can be analyzed at the same scale.
The target alignment process further includes normalization processing and region alignment. Normalization processing refers to operations such as applying the window width/window level and normalizing intensities. Region alignment refers to translating and scaling the follow-up targets to the same size and roughly aligning them. For example, if the follow-up target is lung tissue, region alignment means translating and scaling the lung tissue regions in the first and second medical images to the same size; put plainly, the lung apices and lung bases of the two medical images become substantially aligned.
In the step of performing standard body position correction and target alignment on the first medical image and the second medical image, the first and second medical images after standard body position correction are input into a segmentation model to segment the corresponding foreground tissue, an affine transformation matrix is calculated from the vertex coordinates of the bounding box that encloses the foreground tissue, and target alignment is performed according to this affine transformation matrix.
Of course, the segmentation model mentioned in this embodiment is obtained through training, and the follow-up target in the first/second medical image can be obtained through its segmentation. The segmented area comprises foreground tissue and a background region; the scheme takes the foreground tissue corresponding to the follow-up target for target alignment.
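The segment-then-align step can be pictured with the following sketch, which assumes the foreground bounding box is available as three corner vertices per image (OpenCV's getAffineTransform requires exactly three point pairs); the helper name and box format are illustrative only:

```python
import numpy as np
import cv2

def align_by_foreground_box(moving_img, box_src, box_dst):
    """Affine-align the foreground tissue of one image to another.

    Sketch under the assumption that each box is given as three corner
    vertices (x, y) of the foreground bounding box, in matching order.
    """
    src = np.float32(box_src)  # 3 vertices of the moving image's box
    dst = np.float32(box_dst)  # corresponding vertices of the fixed box
    M = cv2.getAffineTransform(src, dst)          # 2x3 affine matrix
    h, w = moving_img.shape[:2]
    return cv2.warpAffine(moving_img, M, (w, h))  # translate + scale
```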
In the stage of extracting the target feature information, the normalized coordinates and the normalized long- and short-diameter information of the follow-up targets in the medical images are extracted. The normalized coordinates are mainly used to determine whether two follow-up targets are located at the same anatomical position, and the normalized long and short diameters are used to determine whether the center positions and sizes of the two follow-up targets are basically consistent.
The purpose of normalization is to unify the measurement scales so that the target feature information of different medical images is comparable. For example, when judging a lung nodule, the normalized coordinates can use the very center of the heart as the coordinate origin to obtain the coordinate position of the nodule lesion, and thereby determine whether the lung nodules in two medical images lie on the same lung segment. The normalized long and short diameters help confirm whether two follow-up targets are the same one; generally, if the two images are acquired a short time apart, the center position and size of a lesion at the same location should be basically consistent.
That is, the first normalized coordinate and the second normalized coordinate mentioned in this embodiment refer to the coordinate position of the follow-up target in a reference coordinate system built on a common coordinate origin. The same coordinate origin can be determined in the first and second standard body position medical images, and the first and second normalized coordinates are expressed in polar coordinates.
The first normalized long- and short-diameter information and the second normalized long- and short-diameter information mentioned in this embodiment refer to the long and short diameters of the follow-up target under the same measurement scale of the reference coordinate system built on the common coordinate origin. The scheme takes the width of the bounding box of the foreground tissue corresponding to the follow-up target in the first/second standard body position medical image as the denominator, and normalizes the follow-up target to obtain the first/second normalized long- and short-diameter information.
In an embodiment of the present disclosure, the centers of the first standard body position medical image and the second standard body position medical image are taken as the origin of the reference coordinate system.
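A minimal sketch of how the normalized polar coordinates and normalized long and short diameters might be computed, assuming a 2D image whose center serves as the reference origin and whose foreground bounding-box width is the normalizing denominator; the function and parameter names are hypothetical:

```python
import numpy as np

def normalize_target(center_xy, long_d, short_d, image_shape, box_width):
    """Compute normalized polar coordinates and normalized diameters.

    Sketch: the image center is the reference origin (as in this
    embodiment) and box_width is the foreground bounding-box width.
    """
    h, w = image_shape
    dx = center_xy[0] - w / 2.0
    dy = center_xy[1] - h / 2.0
    r = np.hypot(dx, dy) / (w / 2.0)  # radius, scaled to image size
    theta = np.arctan2(dy, dx)        # angle of the polar coordinate
    norm_long = long_d / box_width    # normalized long diameter
    norm_short = short_d / box_width  # normalized short diameter
    return (r, theta), (norm_long, norm_short)
```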
When extracting the first local image block and the second local image block, the block is centered on the center coordinate of the follow-up target. If 1.5 times the major diameter of the follow-up target is smaller than a set length, the set length is used as the crop length; if 1.5 times the major diameter of the follow-up target is larger than the set length, 1.5 times the major diameter is used as the crop length.
That is, if 1.5 times the major diameter of the first follow-up target is smaller than the set length, the first follow-up target is cropped with the set length as the crop length to obtain the first local image block; otherwise, it is cropped with 1.5 times its major diameter as the crop length. The second local image block is obtained from the second follow-up target in exactly the same way.
It is worth mentioning that the set length is generally obtained by computing the size distribution of the follow-up targets in the training data set and taking 1.5 times the average size as the set length; different kinds of follow-up target therefore have different set lengths. In addition, the scheme uses the major diameter of the follow-up target as the selection criterion to ensure that the cropped local image block is as complete as possible.
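The crop-length rule above condenses to a single expression, max(set_length, 1.5 x major diameter), as in this sketch (a 2D crop without boundary padding, for illustration only):

```python
def crop_local_block(image, center, long_diameter, set_length):
    """Crop a local image block around a follow-up target.

    Sketch of the rule above: the crop size is the larger of the set
    length and 1.5x the target's long (major) diameter, centered on
    the target's center coordinate (row, col).
    """
    size = max(set_length, 1.5 * long_diameter)
    half = int(round(size / 2))
    cy, cx = int(center[0]), int(center[1])
    return image[max(0, cy - half):cy + half,
                 max(0, cx - half):cx + half]
```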
The first target depth feature and the second target depth feature provided by the scheme represent information of three dimensions, so the pairing accuracy can be higher. The scheme adapts the framework of the depth feature expression model accordingly; the framework is shown in FIG. 2.
The depth feature expression model comprises a convolutional feature extraction network, a first fully connected layer for splicing features and a second fully connected layer for fusing features; the convolutional feature extraction network, the first fully connected layer and the second fully connected layer share network weights between the two inputs. The first/second local image block is input into the convolutional feature extraction network and convolved to obtain convolution features. The first fully connected layer splices the convolution features, after global average pooling, together with the first/second normalized coordinate and the first/second normalized long- and short-diameter information into a one-dimensional feature vector. The second fully connected layer then weights and fuses the information in this one-dimensional feature vector; specifically, it weights and fuses the convolution features, the first/second normalized coordinate and the first/second normalized long- and short-diameter information to obtain the first/second target depth feature.
The purpose of arranging the first fully connected layer is as follows: the first/second normalized coordinate and the first/second normalized long- and short-diameter information are one-dimensional, so the first fully connected layer splices them with the convolution features to ensure that the second fully connected layer can perform weighted fusion on the convolution features, the normalized coordinates and the normalized long- and short-diameter information together.
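A PyTorch sketch of this splice-then-fuse structure follows. The convolutional backbone below is a placeholder, since the patent does not fix a specific backbone, and the layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class DepthFeatureModel(nn.Module):
    """Sketch of the depth feature expression model described above."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(  # convolutional feature extraction network
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        # first FC layer: splice conv features + coords (2) + diameters (2)
        self.fc1 = nn.Linear(64 + 2 + 2, 256)
        # second FC layer: weighted fusion into the target depth feature
        self.fc2 = nn.Linear(256, feat_dim)

    def forward(self, patch, norm_coord, norm_diam):
        conv = self.conv(patch).flatten(1)  # (B, 64) convolution features
        spliced = torch.cat([conv, norm_coord, norm_diam], dim=1)
        return self.fc2(torch.relu(self.fc1(spliced)))
```

Both follow-up targets are passed through this same module, so the weights are shared between the two inputs as the text requires.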
The depth feature expression model designed by the scheme is trained with a triplet training strategy. A first follow-up target is randomly selected from a first standard body position medical image in the training data set and recorded as the anchor sample a; a second follow-up target that matches the first follow-up target is then selected from the second standard body position medical image and recorded as the matched sample p; finally, a follow-up target other than the anchor and matched samples is selected from the first or second standard body position medical image as the negative sample n. If no such negative sample exists, a sample with random position and size inside the tissue foreground of the first or second standard body position medical image is taken as the negative sample. The anchor sample a, matched sample p and negative sample n form a triplet.
The anchor sample a, matched sample p and negative sample n of the triplet are input into the depth feature expression model separately to obtain the corresponding feature expressions, recorded as f(a), f(p) and f(n). The Euclidean distance d(f(a), f(p)) between the feature expression of the anchor sample a and that of the matched sample is calculated, as is the Euclidean distance d(f(a), f(n)) between the feature expression of the anchor sample a and that of the negative sample. The loss function of the whole depth feature expression model is the triplet loss:

L(a, p, n) = max( d(f(a), f(p)) - d(f(a), f(n)) + μ, 0 )
where d denotes the Euclidean distance between feature vectors and μ = 0.1. Deep learning with this loss function makes the distance between the feature expressions of a and p as small as possible and the distance between the feature expressions of a and n as large as possible, while keeping at least the margin μ between the a-n distance and the a-p distance; the depth feature expression model is obtained after training.
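This is the standard triplet loss; a short PyTorch sketch with the stated margin μ = 0.1 (equivalent to torch.nn.TripletMarginLoss(margin=0.1)):

```python
import torch
import torch.nn.functional as F

def triplet_loss(f_a, f_p, f_n, margin=0.1):
    """Triplet loss over batched feature expressions f(a), f(p), f(n)."""
    d_ap = F.pairwise_distance(f_a, f_p)  # d(f(a), f(p))
    d_an = F.pairwise_distance(f_a, f_n)  # d(f(a), f(n))
    return torch.clamp(d_ap - d_an + margin, min=0).mean()
```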
In the step of judging the pairing of the first follow-up target and the second follow-up target based on the first target depth feature and the second target depth feature, the distance between the first target depth feature and the second target depth feature is compared with a preset threshold; if the distance is smaller than the preset threshold, the two targets are judged to be paired.
In the scheme, the preset threshold is determined by the time interval between the first scanning time of the first medical image and the second scanning time of the second medical image. The specific calculation formula is:

T = min( 0.8 × |T2 - T1|, 1.6 ),  with |T2 - T1| measured in years
t is a preset threshold, T1 is a first scanning time, and T2 is a second scanning time. When the interval between the first scanning time and the second scanning time is less than 2 years, the threshold value is 0.8 x 2 at most in a progressive linear increasing mode, and the threshold value 1.6 can be uniformly used if the interval between the first scanning time and the second scanning time exceeds 2 years.
The distance between the first target depth feature and the second target depth feature is calculated by:

d(F1, F2) = sqrt( sum_{i=1..n} (F1_i - F2_i)^2 )

where d is the feature distance, F1 denotes the first target depth feature, F2 denotes the second target depth feature, the depth features have n dimensions in total, and i indexes the i-th dimension of the depth feature. In other words, the scheme calculates the Euclidean distance between the first target depth feature and the second target depth feature.
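Putting the time-dependent threshold and the Euclidean distance together, the pairing decision reduces to a few lines. Representing the scan times as fractional years is an assumption for illustration:

```python
import numpy as np

def is_paired(f1, f2, t1_years, t2_years):
    """Judge pairing: Euclidean distance vs. the time-dependent threshold.

    Sketch of the decision rule above; f1/f2 are the two target depth
    features and t1_years/t2_years the scan times in years.
    """
    d = np.linalg.norm(np.asarray(f1) - np.asarray(f2))   # Euclidean distance
    threshold = min(0.8 * abs(t2_years - t1_years), 1.6)  # capped linear growth
    return d < threshold
```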
Example two
Based on the same concept, referring to fig. 4, the present application further provides a medical image follow-up target pairing device, including:
a medical image acquisition unit for acquiring a first medical image and a second medical image satisfying a follow-up condition;
the preprocessing unit is used for preprocessing the first medical image and the second medical image to obtain a first standard body position medical image and a second standard body position medical image;
an information extraction unit, configured to extract first target feature information of a first follow-up target of the first standard body position medical image, where the first target feature information includes a first normalized coordinate, first normalized long- and short-diameter information and a first local image block, and to extract second target feature information of a second follow-up target of the second standard body position medical image, where the second target feature information includes a second normalized coordinate, second normalized long- and short-diameter information and a second local image block;
the depth feature obtaining unit is used for inputting the first target feature information into a depth feature expression model to obtain a first target depth feature, and inputting the second target feature information into the depth feature expression model to obtain a second target depth feature;
and the pairing unit is used for judging the pairing condition of the first follow-up target and the second follow-up target based on the first target depth feature and the second target depth feature.
The technical content of the second embodiment that is the same as the first embodiment will not be redundantly described here.
EXAMPLE III
The present embodiment further provides an electronic device, referring to fig. 5, including a memory 404 and a processor 402, where the memory 404 stores a computer program, and the processor 402 is configured to execute the computer program to perform the steps in any one of the embodiments of the medical image follow-up target pairing method.
Specifically, the processor 402 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 404 may include, among other things, mass storage for data or instructions. By way of example and not limitation, memory 404 may include a hard disk drive (HDD), a floppy disk drive, a solid-state drive (SSD), flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 404 may include removable or non-removable (or fixed) media, where appropriate. The memory 404 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 404 is non-volatile memory. In particular embodiments, memory 404 includes read-only memory (ROM) and random access memory (RAM). The ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory (FLASH), or a combination of two or more of these, where appropriate. The RAM may be static random-access memory (SRAM) or dynamic random-access memory (DRAM), where the DRAM may be fast page mode DRAM (FPMDRAM), extended data output DRAM (EDODRAM), synchronous DRAM (SDRAM), or the like.
Memory 404 may be used to store or cache various data files for processing and/or communication use, as well as possibly computer program instructions for execution by processor 402.
The processor 402 reads and executes the computer program instructions stored in the memory 404 to implement any one of the above-mentioned embodiments of the medical image follow-up target pairing method.
Optionally, the electronic apparatus may further include a transmission device 406 and an input/output device 408, where the transmission device 406 is connected to the processor 402, and the input/output device 408 is connected to the processor 402.
The transmitting device 406 may be used to receive or transmit data via a network. Specific examples of the network described above may include wired or wireless networks provided by communication providers of the electronic devices. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmitting device 406 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The input and output devices 408 are used to input or output information. In this embodiment, the input information may be at least two medical images or the like, and the output information may be a matching result or the like.
Optionally, in this embodiment, the processor 402 may be configured to execute the following steps by a computer program:
the medical image follow-up target pairing method comprises the following steps:
acquiring a first medical image and a second medical image which meet follow-up conditions;
preprocessing the first medical image and the second medical image to obtain a first standard body position medical image and a second standard body position medical image;
extracting first target feature information of a first follow-up target of the first standard body position medical image, wherein the first target feature information comprises a first normalized coordinate, first normalized long- and short-diameter information and a first local image block, and extracting second target feature information of a second follow-up target of the second standard body position medical image, wherein the second target feature information comprises a second normalized coordinate, second normalized long- and short-diameter information and a second local image block;
inputting the first target feature information into a depth feature expression model to obtain a first target depth feature, and inputting the second target feature information into the depth feature expression model to obtain a second target depth feature;
and judging the pairing condition of the first follow-up target and the second follow-up target based on the first target depth feature and the second target depth feature.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects of the invention may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Embodiments of the invention may be implemented by computer software executable by a data processor of the mobile device, such as in a processor entity, or by hardware, or by a combination of software and hardware. Computer software or programs (also referred to as program products) including software routines, applets and/or macros can be stored in any device-readable data storage medium and they include program instructions for performing particular tasks. The computer program product may comprise one or more computer-executable components configured to perform embodiments when the program is run. The one or more computer-executable components may be at least one software code or a portion thereof. Further in this regard it should be noted that any block of the logic flow as in the figures may represent a program step, or an interconnected logic circuit, block and function, or a combination of a program step and a logic circuit, block and function. The software may be stored on physical media such as memory chips or memory blocks implemented within the processor, magnetic media such as hard or floppy disks, and optical media such as, for example, DVDs and data variants thereof, CDs. The physical medium is a non-transitory medium.
It should be understood by those skilled in the art that various features of the above embodiments can be combined arbitrarily, and for the sake of brevity, all possible combinations of the features in the above embodiments are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the features.
The above examples are merely illustrative of several embodiments of the present application, and the description is more specific and detailed, but not to be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A medical image follow-up target pairing method is characterized by comprising the following steps:
acquiring a first medical image and a second medical image which meet follow-up conditions;
preprocessing the first medical image and the second medical image to obtain a first standard body position medical image and a second standard body position medical image;
extracting first target feature information of a first follow-up target of the first standard body position medical image, wherein the first target feature information comprises a first normalized coordinate, first normalized long- and short-diameter information and a first local image block, and extracting second target feature information of a second follow-up target of the second standard body position medical image, wherein the second target feature information comprises a second normalized coordinate, second normalized long- and short-diameter information and a second local image block;
inputting the first target feature information into a depth feature expression model to obtain a first target depth feature, and inputting the second target feature information into the depth feature expression model to obtain a second target depth feature;
and judging the pairing condition of the first follow-up target and the second follow-up target based on the first target depth feature and the second target depth feature.
2. The medical image follow-up target pairing method as claimed in claim 1, wherein the step of preprocessing the first medical image and the second medical image to obtain a first standard posture medical image and a second standard posture medical image comprises performing standard posture correction and target alignment on the first medical image and the second medical image to obtain a first standard posture medical image and a second standard posture medical image.
3. The medical image follow-up target pairing method according to claim 2, wherein first image scanning parameters of the first medical image are acquired, and the first medical image is corrected into a standard body position coordinate system based on the first image scanning parameters; and second image scanning parameters of the second medical image are acquired, and the second medical image is corrected into the standard body position coordinate system based on the second image scanning parameters.
4. The medical image follow-up target pairing method according to claim 2, wherein the first medical image and the second medical image after standard body position correction are input into a segmentation model to segment the corresponding foreground tissue, an affine transformation matrix is calculated from the vertex coordinates of the bounding box enclosing the foreground tissue, and target alignment is performed according to the affine transformation matrix.
5. The medical image follow-up target pairing method according to claim 1, wherein the first normalized coordinate and the second normalized coordinate refer to the coordinate position of the follow-up target in a reference coordinate system built on a common coordinate origin; and the first normalized long- and short-diameter information and the second normalized long- and short-diameter information refer to the long and short diameters of the follow-up target under the same measurement scale of the reference coordinate system built on the common coordinate origin.
6. The medical image follow-up target pairing method according to claim 1, wherein if 1.5 times the major diameter of the first follow-up target is smaller than a set length, the first follow-up target is cropped with the set length as the crop length to obtain the first local image block; if 1.5 times the major diameter of the first follow-up target is larger than the set length, the first follow-up target is cropped with 1.5 times the major diameter as the crop length to obtain the first local image block; if 1.5 times the major diameter of the second follow-up target is smaller than the set length, the second follow-up target is cropped with the set length as the crop length to obtain the second local image block; and if 1.5 times the major diameter of the second follow-up target is larger than the set length, the second follow-up target is cropped with 1.5 times the major diameter as the crop length to obtain the second local image block.
7. The medical image follow-up target pairing method according to claim 1, wherein the depth feature expression model comprises a convolutional feature extraction network, a first fully connected layer for splicing features and a second fully connected layer for fusing features, the convolutional feature extraction network, the first fully connected layer and the second fully connected layer sharing network weights; the first/second local image block is input into the convolutional feature extraction network and convolved to obtain convolution features; the first fully connected layer splices the convolution features, after global average pooling, together with the first/second normalized coordinate and the first/second normalized long- and short-diameter information into a one-dimensional feature vector; and the second fully connected layer weights and fuses the convolution features, the first/second normalized coordinate and the first/second normalized long- and short-diameter information to obtain the first/second target depth feature.
8. A medical image follow-up target pairing device is characterized by comprising:
the medical image acquisition unit is used for acquiring a first medical image and a second medical image which meet follow-up visit conditions;
the preprocessing unit is used for preprocessing the first medical image and the second medical image to obtain a first standard body position medical image and a second standard body position medical image;
an information extraction unit, configured to extract first target feature information of a first follow-up target of the first standard body position medical image, where the first target feature information includes a first normalized coordinate, first normalized long- and short-diameter information and a first local image block, and to extract second target feature information of a second follow-up target of the second standard body position medical image, where the second target feature information includes a second normalized coordinate, second normalized long- and short-diameter information and a second local image block;
the depth feature obtaining unit is used for inputting the first target feature information into a depth feature expression model to obtain a first target depth feature, and inputting the second target feature information into the depth feature expression model to obtain a second target depth feature;
and the pairing unit is used for judging the pairing condition of the first follow-up target and the second follow-up target based on the first target depth feature and the second target depth feature.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the medical image follow-up object pairing method according to any one of claims 1 to 7.
10. A readable storage medium having a computer program stored therein, the computer program comprising program code for controlling a process to execute the medical image follow-up target pairing method according to any one of claims 1 to 7.
CN202210776250.9A 2022-07-04 2022-07-04 Medical image follow-up target pairing method, device and application Active CN114842003B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210776250.9A CN114842003B (en) 2022-07-04 2022-07-04 Medical image follow-up target pairing method, device and application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210776250.9A CN114842003B (en) 2022-07-04 2022-07-04 Medical image follow-up target pairing method, device and application

Publications (2)

Publication Number Publication Date
CN114842003A true CN114842003A (en) 2022-08-02
CN114842003B (en) 2022-11-01

Family

ID=82574242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210776250.9A Active CN114842003B (en) 2022-07-04 2022-07-04 Medical image follow-up target pairing method, device and application

Country Status (1)

Country Link
CN (1) CN114842003B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309449A (en) * 2023-03-14 2023-06-23 北京医准智能科技有限公司 Image processing method, device, equipment and storage medium
CN117011352A (en) * 2023-09-27 2023-11-07 之江实验室 Standard brain map construction method, device and computer equipment

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102665062A (en) * 2012-03-16 2012-09-12 华为技术有限公司 Method and device for stabilizing target object image in video
CN103514276A (en) * 2013-09-22 2014-01-15 西安交通大学 Graphic target retrieval positioning method based on center estimation
CN105913093A (en) * 2016-05-03 2016-08-31 电子科技大学 Template matching method for character recognizing and processing
CN111462201A (en) * 2020-04-07 2020-07-28 广州柏视医疗科技有限公司 Follow-up analysis system and method based on novel coronavirus pneumonia CT image
CN111856445A (en) * 2019-04-11 2020-10-30 杭州海康威视数字技术股份有限公司 Target detection method, device, equipment and system
CN111915584A (en) * 2020-07-29 2020-11-10 杭州健培科技有限公司 Focus follow-up assessment method and system based on CT (computed tomography) image
CN112488993A (en) * 2020-11-16 2021-03-12 杭州依图医疗技术有限公司 Method and device for acquiring lung cancer TNM (total lung cancer) by stages
CN112686866A (en) * 2020-12-31 2021-04-20 安徽科大讯飞医疗信息技术有限公司 Follow-up method and device based on medical image and computer readable storage medium
CN113205523A (en) * 2021-04-29 2021-08-03 浙江大学 Medical image segmentation and identification system, terminal and storage medium with multi-scale representation optimization
CN113343878A (en) * 2021-06-18 2021-09-03 北京邮电大学 High-fidelity face privacy protection method and system based on generation countermeasure network
CN113902642A (en) * 2021-10-13 2022-01-07 数坤(北京)网络科技股份有限公司 Medical image processing method and device, electronic equipment and storage medium
CN114693760A (en) * 2020-12-25 2022-07-01 虹软科技股份有限公司 Image correction method, device and system and electronic equipment

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102665062A (en) * 2012-03-16 2012-09-12 华为技术有限公司 Method and device for stabilizing target object image in video
CN103514276A (en) * 2013-09-22 2014-01-15 西安交通大学 Graphic target retrieval positioning method based on center estimation
CN105913093A (en) * 2016-05-03 2016-08-31 电子科技大学 Template matching method for character recognizing and processing
CN111856445A (en) * 2019-04-11 2020-10-30 杭州海康威视数字技术股份有限公司 Target detection method, device, equipment and system
CN111462201A (en) * 2020-04-07 2020-07-28 广州柏视医疗科技有限公司 Follow-up analysis system and method based on novel coronavirus pneumonia CT image
CN111915584A (en) * 2020-07-29 2020-11-10 杭州健培科技有限公司 Focus follow-up assessment method and system based on CT (computed tomography) image
CN112488993A (en) * 2020-11-16 2021-03-12 杭州依图医疗技术有限公司 Method and device for acquiring lung cancer TNM (total lung cancer) by stages
CN114693760A (en) * 2020-12-25 2022-07-01 虹软科技股份有限公司 Image correction method, device and system and electronic equipment
CN112686866A (en) * 2020-12-31 2021-04-20 安徽科大讯飞医疗信息技术有限公司 Follow-up method and device based on medical image and computer readable storage medium
CN113205523A (en) * 2021-04-29 2021-08-03 浙江大学 Medical image segmentation and identification system, terminal and storage medium with multi-scale representation optimization
CN113343878A (en) * 2021-06-18 2021-09-03 北京邮电大学 High-fidelity face privacy protection method and system based on generation countermeasure network
CN113902642A (en) * 2021-10-13 2022-01-07 数坤(北京)网络科技股份有限公司 Medical image processing method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
L. WEIZMAN et al., "Automatic segmentation, internal classification, and follow-up of optic pathway gliomas in MRI", Medical Image Analysis *
GU Zongyun et al., "Medical image registration based on SURF and an improved RANSAC algorithm", Chinese Journal of Medical Imaging *
GONG Jing, "Computer-aided detection of lung nodules based on CT images", China Doctoral Dissertations Full-text Database, Medicine and Health Sciences *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309449A (en) * 2023-03-14 2023-06-23 北京医准智能科技有限公司 Image processing method, device, equipment and storage medium
CN116309449B (en) * 2023-03-14 2024-04-09 浙江医准智能科技有限公司 Image processing method, device, equipment and storage medium
CN117011352A (en) * 2023-09-27 2023-11-07 之江实验室 Standard brain map construction method, device and computer equipment
CN117011352B (en) * 2023-09-27 2024-01-16 之江实验室 Standard brain map construction method, device and computer equipment

Also Published As

Publication number Publication date
CN114842003B (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN114842003B (en) Medical image follow-up target pairing method, device and application
CN111932533B (en) Method, device, equipment and medium for positioning vertebrae by CT image
CN108596904B (en) Method for generating positioning model and method for processing spine sagittal position image
CN108428233B (en) Knowledge-based automatic image segmentation
CN111080573B (en) Rib image detection method, computer device and storage medium
CN111160367A (en) Image classification method and device, computer equipment and readable storage medium
CN109509177B (en) Method and device for recognizing brain image
CN110992370B (en) Pancreas tissue segmentation method and device and terminal equipment
CN105303550A (en) Image processing apparatus and image processing method
CN113240661B (en) Deep learning-based lumbar vertebra bone analysis method, device, equipment and storage medium
US20150302603A1 (en) Knowledge-based automatic image segmentation
CN111311655A (en) Multi-modal image registration method and device, electronic equipment and storage medium
CN113706559A (en) Blood vessel segmentation extraction method and device based on medical image
WO2022068228A1 (en) Lesion mark verification method and apparatus, and computer device and storage medium
CN108634934B (en) Method and apparatus for processing spinal sagittal image
CN113077499A (en) Pelvis registration method, pelvis registration device and pelvis registration system
CN112529900A (en) Method, device, terminal and storage medium for matching ROI in mammary gland image
CN110992312B (en) Medical image processing method, medical image processing device, storage medium and computer equipment
CN115170795B (en) Image small target segmentation method, device, terminal and storage medium
CN115482231B (en) Image segmentation method, device, storage medium and electronic equipment
CN115984203A (en) Eyeball protrusion measuring method, system, terminal and medium
Ajani et al. Automatic and interactive prostate segmentation in MRI using learned contexts on a sparse graph template
CN114372970A (en) Operation reference information generation method and device
US10299864B1 (en) Co-localization of multiple internal organs based on images obtained during surgery
Chen et al. Fully-automatic landmark detection in skull X-ray images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant