CN110517302B - Image processing method and device

Image processing method and device

Info

Publication number
CN110517302B
CN110517302B
Authority
CN
China
Prior art keywords
frame
image
image frames
frames
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910815696.6A
Other languages
Chinese (zh)
Other versions
CN110517302A (en)
Inventor
魏亚男 (Wei Yanan)
姜譞 (Jiang Xuan)
田疆 (Tian Jiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201910815696.6A
Publication of CN110517302A
Application granted
Publication of CN110517302B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38 Registration of image sequences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30056 Liver; Hepatic

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An embodiment of the invention provides an image processing method and device for matching a first group of image sequences with a second group of image sequences. The method comprises the following steps: determining a first cut-out region along a first region of the object and a second cut-out region along a second region of the object in the first group of image sequences; determining a third cut-out region along a third region of the object and a fourth cut-out region along a fourth region of the object in the second group of image sequences; determining M matched image frames among the N image frames based on the M image frames, and determining X matched image frames among the Y image frames based on the X image frames; and acquiring a first offset result from the M matched image frames and the M image frames, and from the X matched image frames and the X image frames, for processing the first group of image sequences and the second group of image sequences. The image processing method and device provided by the invention achieve high matching accuracy.

Description

Image processing method and device
Technical Field
The invention belongs to the field of image processing, and particularly relates to an image processing method and device.
Background
Liver cancer is one of the most common cancers worldwide, ranking second in mortality. Abdominal CT is the most commonly used tool for the diagnosis and treatment of liver cancer. Conventional contrast-enhanced abdominal CT includes an arterial-phase CT sequence and a venous-phase CT sequence, but because of the radiologist's operation or the patient's respiratory motion, the positions of the two phases are misaligned: the ith frame of the arterial-phase sequence and the ith frame of the venous-phase sequence do not correspond to the same position in the patient's organ. This forces the doctor to look back and forth while reading the scans and greatly complicates arteriovenous two-phase modeling and tumor characterization. Methods exist in the prior art for matching the arterial-phase CT sequence with the venous-phase CT sequence, but all have drawbacks. For example, a liver arteriovenous-phase CT registration method based on the liver contour can be used, but it depends heavily on the filtering effect and the precision of the boundary-detection operator, is not robust or general in cases such as ascites around the liver, achieves low accuracy because it registers on the liver contour alone, and is strongly affected by respiratory motion. A method of arteriovenous-phase registration based on 2D liver segmentation and 3D reconstruction can also be used, but it requires a segmentation of the whole liver and a 3D reconstruction, takes too long, has high time complexity, and cannot respond in real time. Given these shortcomings of the prior art in matching the venous-phase CT sequence with the arterial-phase CT sequence, a new image processing method and apparatus are needed.
Disclosure of Invention
The invention provides an image processing method and device.
In order to solve the above technical problem, an embodiment of the present invention provides the following technical solutions:
A first aspect of the present invention provides an image processing method for matching a first group of image sequences with a second group of image sequences, wherein the first group of image sequences comprises a plurality of image frames, the second group of image sequences comprises a plurality of image frames, and the two groups of image sequences contain the same object; the method comprises the following steps:
determining a first cut-out region along a first region of the object and a second cut-out region along a second region of the object in the first group of image sequences, wherein the first cut-out region and the second cut-out region each contain an integer number of image frames;
determining a third cut-out region along a third region of the object and a fourth cut-out region along a fourth region of the object in the second group of image sequences, wherein the third cut-out region and the fourth cut-out region each contain an integer number of image frames, greater than the number of image frames in the first cut-out region and the second cut-out region respectively;
randomly selecting M image frames in the first cut-out region, and determining N image frames in the third cut-out region based on a preset parameter, wherein the preset parameter is an integer greater than or equal to zero;
randomly selecting X image frames in the second cut-out region, and determining Y image frames in the fourth cut-out region based on the preset parameter;
determining M matched image frames among the N image frames based on the M image frames, and determining X matched image frames among the Y image frames based on the X image frames;
and acquiring a first offset result from the M matched image frames and the M image frames, and from the X matched image frames and the X image frames, for processing the first group of image sequences and the second group of image sequences.
Preferably, the method further includes randomly selecting P image frames in the second cut-out region, determining P matched image frames in the fourth cut-out region, and determining the preset parameter based on the P image frames and the P matched image frames.
Preferably, the determining P matched image frames in the fourth cut-out region and determining the preset parameter based on the P image frames and the P matched image frames includes:
determining the column numbers of the P image frames;
matching each of the P image frames against all image frames contained in the fourth cut-out region, to determine P matched image frames in one-to-one correspondence with the P image frames;
determining the column numbers of the P matched image frames;
determining a second offset result based on the column numbers of the P image frames and the column numbers of the P matched image frames;
after the second group of image sequences is shifted by the second offset result relative to the first group of image sequences, the error between the first group of image sequences and the second group of image sequences is within the preset parameter.
Preferably, the randomly selecting M image frames in the first cut-out region and determining N image frames in the third cut-out region based on a preset parameter includes:
determining the column numbers of the M image frames;
determining, in the third cut-out region, the column numbers of the image frames in one-to-one correspondence with the M image frames, and the range image frames matched to those column numbers;
each of the M image frames corresponds to a set of range image frames in the third cut-out region: around the column number of the image frame in the third cut-out region corresponding to a given one of the M image frames, the preset-parameter number of frames are taken on each side to form the range image frames corresponding to that frame; the M sets of range image frames corresponding to the M image frames together form the N image frames.
Preferably, the randomly selecting X image frames in the second cut-out region and determining Y image frames in the fourth cut-out region based on the preset parameter includes:
determining the column numbers of the X image frames;
determining, in the fourth cut-out region, the column numbers of the image frames in one-to-one correspondence with the X image frames, and the range image frames matched to those column numbers;
each of the X image frames corresponds to a set of range image frames in the fourth cut-out region: around the column number of the image frame in the fourth cut-out region corresponding to a given one of the X image frames, the preset-parameter number of frames are taken on each side to form the range image frames corresponding to that frame; the X sets of range image frames corresponding to the X image frames together form the Y image frames.
Preferably, the determining M matched image frames among the N image frames based on the M image frames includes:
matching each of the M image frames against its corresponding range image frames in the third cut-out region, to obtain M matched image frames in one-to-one correspondence with the M image frames.
Preferably, the determining X matched image frames among the Y image frames based on the X image frames includes:
matching each of the X image frames against its corresponding range image frames in the fourth cut-out region, to obtain X matched image frames in one-to-one correspondence with the X image frames.
Preferably, the acquiring a first offset result from the M matched image frames and the M image frames, and from the X matched image frames and the X image frames, includes:
determining the first offset result based on the column numbers of the M image frames and the column numbers of the M matched image frames, and on the column numbers of the X image frames and the column numbers of the X matched image frames;
after the second group of image sequences is shifted by the first offset result relative to the first group of image sequences, the error between the first group of image sequences and the second group of image sequences is smaller than the preset parameter.
A second aspect of the present invention provides an image processing apparatus for matching a first group of image sequences with a second group of image sequences, wherein the first group of image sequences comprises a plurality of image frames, the second group of image sequences comprises a plurality of image frames, and the two groups of image sequences contain the same object; the apparatus comprises at least a memory on which a computer program is stored, and a processor that, when executing the computer program, performs the following steps:
determining a first cut-out region along a first region of the object and a second cut-out region along a second region of the object in the first group of image sequences, wherein the first cut-out region and the second cut-out region each contain an integer number of image frames;
determining a third cut-out region along a third region of the object and a fourth cut-out region along a fourth region of the object in the second group of image sequences, wherein the third cut-out region and the fourth cut-out region each contain an integer number of image frames, greater than the number of image frames in the first cut-out region and the second cut-out region respectively;
randomly selecting M image frames in the first cut-out region, and determining N image frames in the third cut-out region based on a preset parameter, wherein the preset parameter is an integer greater than or equal to zero;
randomly selecting X image frames in the second cut-out region, and determining Y image frames in the fourth cut-out region based on the preset parameter;
determining M matched image frames among the N image frames based on the M image frames, and determining X matched image frames among the Y image frames based on the X image frames;
and acquiring a first offset result from the M matched image frames and the M image frames, and from the X matched image frames and the X image frames, for processing the first group of image sequences and the second group of image sequences.
Preferably, the processor further performs the following step: randomly selecting P image frames in the second cut-out region, determining P matched image frames in the fourth cut-out region, and determining the preset parameter based on the P image frames and the P matched image frames.
Based on the disclosure of the above embodiments, the embodiments of the present invention have the following beneficial effects:
With the image processing method and device provided by the invention, the low accuracy that comes from registering on the liver contour alone is effectively avoided; at the same time, the computational cost and complexity of the matching are reduced, a real-time response becomes possible, and the matching reaches higher accuracy.
Drawings
Fig. 1 is a schematic flow diagram of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of the relative movement between a venous-phase CT sequence and an arterial-phase CT sequence according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
The following detailed description of specific embodiments of the present invention is provided in connection with the accompanying drawings, which are not intended to limit the invention.
It will be understood that various modifications may be made to the embodiments disclosed herein. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Other modifications will occur to those skilled in the art within the scope and spirit of the disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the invention will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the accompanying drawings.
It should also be understood that, although the invention has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of the invention, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and structures have not been described in detail so as not to obscure the present disclosure with unnecessary or redundant detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The description may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
The embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, a first embodiment of the present invention provides an image processing method for matching a first group of image sequences with a second group of image sequences, where the first group of image sequences includes a plurality of image frames, the second group of image sequences includes a plurality of image frames, and the two groups of image sequences contain the same object; the method comprises the following steps:
determining a first cut-out region along a first region of the object and a second cut-out region along a second region of the object in the first group of image sequences, wherein the first cut-out region and the second cut-out region each contain an integer number of image frames;
determining a third cut-out region along a third region of the object and a fourth cut-out region along a fourth region of the object in the second group of image sequences, wherein the third cut-out region and the fourth cut-out region each contain an integer number of image frames, greater than the number of image frames in the first cut-out region and the second cut-out region respectively;
randomly selecting M image frames in the first cut-out region, and determining N image frames in the third cut-out region based on a preset parameter, wherein the preset parameter is an integer greater than or equal to zero;
randomly selecting X image frames in the second cut-out region, and determining Y image frames in the fourth cut-out region based on the preset parameter;
determining M matched image frames among the N image frames based on the M image frames, and determining X matched image frames among the Y image frames based on the X image frames;
and acquiring a first offset result from the M matched image frames and the M image frames, and from the X matched image frames and the X image frames, for processing the first group of image sequences and the second group of image sequences.
In this embodiment, the first group of image sequences and the second group of image sequences may be a venous-phase CT sequence and an arterial-phase CT sequence, respectively: when the first group is the venous-phase CT sequence, the second group is the arterial-phase CT sequence, and when the first group is the arterial-phase CT sequence, the second group is the venous-phase CT sequence. For convenience of description, in the following embodiments of the present invention the first group of image sequences is the venous-phase CT sequence and the second group is the arterial-phase CT sequence. In practice, although the venous-phase and arterial-phase CT sequences contain the same object, the ith frame of the arterial-phase sequence and the ith frame of the venous-phase sequence are not at the same position of the object, owing to the radiological technician's operation or the patient's respiratory motion; that is, there is a misalignment between the venous-phase CT sequence and the arterial-phase CT sequence.
For convenience of description, the object in the present invention may be a certain organ of a human body, for example, the liver.
A first cut-out region is determined along a first region of the object, and a second cut-out region along a second region of the object, in the first group of image sequences; the first cut-out region and the second cut-out region each contain an integer number of image frames. In the present invention, for ease of understanding and description, the first cut-out region is taken along the upper edge of the liver in the venous-phase CT sequence and the second cut-out region along the lower edge of the liver; that is, the first region may be the position of the upper edge of the liver, and the second region the position of its lower edge. Every image frame in the first and second cut-out regions contains an image of the liver, and the number of image frames contained in the first cut-out region may be the same as or different from the number contained in the second.
A third cut-out region is determined along a third region of the object, and a fourth cut-out region along a fourth region of the object, in the second group of image sequences; the third and fourth cut-out regions each contain an integer number of image frames, greater than the number of image frames in the first and second cut-out regions respectively. In the arterial-phase CT sequence, the third cut-out region may be determined along the upper edge of the liver and the fourth cut-out region along the lower edge; that is, the third region may be the position of the upper edge of the liver, and the fourth region the position of its lower edge. Every image frame in the third and fourth cut-out regions contains an image of the liver, and the number of image frames contained in the third cut-out region may be the same as or different from the number contained in the fourth.
M image frames are randomly selected in the first cut-out region, and N image frames are determined in the third cut-out region based on a preset parameter, where the preset parameter is an integer greater than or equal to zero. M may be, for example, 1, 2, 3, 4, 5 and so on; for convenience of description, M equals 3 in the following embodiment. The N image frames determined in the third cut-out region are essentially a range of images, selected in the third cut-out region based on the preset parameter, against which the M image frames are matched; the M image frames and the N image frames are not in one-to-one correspondence, and N is greater than M.
X image frames are randomly selected in the second cut-out region, and Y image frames are determined in the fourth cut-out region based on the preset parameter. For convenience of description, X also equals 3 in the following embodiment (X happens to equal M here; in general X and M need not be equal). The Y image frames determined in the fourth cut-out region are essentially a range of images, selected in the fourth cut-out region based on the preset parameter, against which the X image frames are matched; the X image frames and the Y image frames are not in one-to-one correspondence, and Y is greater than X.
M matched image frames are determined among the N image frames based on the M image frames, and X matched image frames are determined among the Y image frames based on the X image frames. The M matched image frames determined among the N image frames are in one-to-one correspondence with the M image frames, and the X matched image frames determined among the Y image frames are in one-to-one correspondence with the X image frames.
Then, based on the M matched image frames and the M image frames, and the X matched image frames and the X image frames, a first offset result is obtained; by shifting the arterial-phase CT sequence by the first offset result relative to the venous-phase CT sequence, the two sequences can be matched with respect to the liver.
In the invention, because the frames matched one-to-one with the M image frames are selected from among the N image frames, not all image frames in the third cut-out region need to be matched against the M image frames; likewise, the frames matched one-to-one with the X image frames are selected from among the Y image frames, and not all image frames in the fourth cut-out region need to be matched against the X image frames. This greatly reduces the amount of computation, lowers the matching complexity, effectively shortens the running time, and allows a real-time response.
In another embodiment of the present invention, the method further includes randomly selecting P image frames in the second cut-out region, determining P matched image frames in the fourth cut-out region, and determining the preset parameter based on the P image frames and the P matched image frames.
In this embodiment, for example, 3 image frames may be randomly selected from the 10 image frames contained in the second cut-out region, and P matched image frames determined from the 20 image frames contained in the fourth cut-out region; the P matched image frames are in one-to-one correspondence with the P image frames, and the preset parameter may be determined based on them.
After the preset parameter is determined, the first offset result is determined by the method provided in the embodiment above. In effect, the process of determining the preset parameter is a coarse matching stage, and the process of determining the first offset result is a fine matching stage.
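Purely as an illustration, the two stages can be sketched in Python as follows; the best_match routine (which returns the column number of the best-matching candidate frame for a given frame), and all names and signatures here, are assumptions for this sketch rather than definitions from the patent:

    def coarse_offset(sample_cols, candidate_cols, best_match):
        # Stage 1 (coarse): match a few randomly sampled frames against ALL
        # candidate frames and round the average column-number difference.
        diffs = [best_match(c, candidate_cols) - c for c in sample_cols]
        return round(sum(diffs) / len(diffs))

    def fine_offset(sample_cols, candidate_cols, best_match, coarse, preset_param):
        # Stage 2 (fine): restrict each search to a +/- preset_param window
        # around the coarsely shifted position, then average and round again.
        diffs = []
        for c in sample_cols:
            window = [k for k in candidate_cols if abs(k - (c + coarse)) <= preset_param]
            diffs.append(best_match(c, window) - c)
        return round(sum(diffs) / len(diffs))

The frame comparison hidden inside best_match is realized in the patent with a convolutional feature comparison, described later in this embodiment.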
As shown in fig. 2, in an embodiment provided by the present invention, the determining P matched image frames in the fourth cut-out region and determining the preset parameter based on the P image frames and the P matched image frames includes:
determining the column numbers of the P image frames;
matching each of the P image frames against all image frames contained in the fourth cut-out region, to determine P matched image frames in one-to-one correspondence with the P image frames;
determining the column numbers of the P matched image frames;
determining a second offset result based on the column numbers of the P image frames and the column numbers of the P matched image frames;
after the second group of image sequences is shifted by the second offset result relative to the first group of image sequences, the error between the first group of image sequences and the second group of image sequences is within the preset parameter.
In the present embodiment, the number of image frames contained in the first and second cut-out regions of the venous-phase CT sequence and in the third and fourth cut-out regions of the arterial-phase CT sequence, together with the column number of each image frame, are shown in fig. 2.
Suppose the column numbers of the randomly selected P image frames in the second cut-out region are 1, 5 and 8. The three frames with column numbers 1, 5 and 8 in the second cut-out region are each matched against all image frames contained in the fourth cut-out region, to determine three matched image frames in one-to-one correspondence with them. Specifically, the frame with column number 1 in the second cut-out region is compared one by one with the 20 frames contained in the fourth cut-out region; the comparison shows that it matches the frame with column number 8 in the fourth cut-out region, i.e. the frame with column number 8 in the fourth cut-out region is the matched image frame of the frame with column number 1 in the second cut-out region. Likewise, the frame with column number 5 in the second cut-out region is compared one by one with the 20 frames in the fourth cut-out region and is found to match the frame with column number 11 there, and the frame with column number 8 in the second cut-out region is found to match the frame with column number 15. Through these steps, three matched image frames, in one-to-one correspondence with the frames with column numbers 1, 5 and 8 in the second cut-out region, are found in the fourth cut-out region: the frame with column number 1 in the second cut-out region matches the frame with column number 8 in the fourth cut-out region; the frame with column number 5 matches the frame with column number 11; and the frame with column number 8 matches the frame with column number 15.
From these column numbers, the second offset result is determined as [(8-1) + (11-5) + (15-8)] / 3 = 20/3 ≈ 6.67, which is rounded to give a second offset result of 7. The arterial-phase CT sequence is then shifted by 7 relative to the venous-phase CT sequence, completing the coarse matching stage. Over many trials, after coarse matching the error between the venous-phase and arterial-phase CT sequences is usually 3 or less, i.e. the preset parameter is usually 3.
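The coarse-stage arithmetic of this example can be written out directly; the frame pairs below are the matches found above:

    pairs = [(1, 8), (5, 11), (8, 15)]  # (column in second cut-out region, matched column in fourth)
    diffs = [m - c for c, m in pairs]   # [7, 6, 7]
    second_offset = round(sum(diffs) / len(diffs))  # 20 / 3 ≈ 6.67, rounded to 7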
In another embodiment of the present invention, the randomly selecting M image frames in the first cut-out region and determining N image frames in the third cut-out region based on a preset parameter includes:
determining the column numbers of the M image frames;
determining, in the third cut-out region, the column numbers of the image frames in one-to-one correspondence with the M image frames, and the range image frames matched to those column numbers;
each of the M image frames corresponds to a set of range image frames in the third cut-out region: around the column number of the image frame in the third cut-out region corresponding to a given one of the M image frames, the preset-parameter number of frames are taken on each side to form the range image frames corresponding to that frame; the M sets of range image frames corresponding to the M image frames together form the N image frames.
In this embodiment, 3 image frames with column numbers 2, 4 and 7 are randomly selected in the first cut-out region. Based on the coarse matching, it can be determined that the frame with column number 2 in the first cut-out region matches the frame with column number 9 in the third cut-out region; the frame with column number 4 matches the frame with column number 11; and the frame with column number 7 matches the frame with column number 14. Further, after coarse matching the preset parameter is 3, i.e. the error between the venous-phase and arterial-phase CT sequences is already within 3, so the three randomly selected frames in the first cut-out region only need to be compared one by one with part of the frames in the third cut-out region, rather than with all of them; this reduces the amount and complexity of the matching computation, allows a real-time response, and lets the matching reach higher accuracy. Specifically, the range image frames determined in the third cut-out region for the frame with column number 2 in the first cut-out region are (9-3) to (9+3), i.e. the 7 frames 6, 7, 8, 9, 10, 11, 12; the range image frames for the frame with column number 4 are (11-3) to (11+3), i.e. the 7 frames 8, 9, 10, 11, 12, 13, 14; and the range image frames for the frame with column number 7 are (14-3) to (14+3), i.e. the 7 frames 11, 12, 13, 14, 15, 16, 17.
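A short sketch of how these candidate windows are formed, using the numbers of this example (the dictionary of coarse matches is illustrative):

    preset = 3
    coarse_match = {2: 9, 4: 11, 7: 14}  # column in first cut-out region -> coarse match in third
    windows = {c: list(range(m - preset, m + preset + 1)) for c, m in coarse_match.items()}
    # windows[2] == [6, 7, 8, 9, 10, 11, 12], windows[4] == [8, ..., 14], windows[7] == [11, ..., 17]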
In another embodiment of the present invention, the randomly selecting X image frames in the second cut-out region and determining Y image frames in the fourth cut-out region based on the preset parameter includes:
determining the column numbers of the X image frames;
determining, in the fourth cut-out region, the column numbers of the image frames in one-to-one correspondence with the X image frames, and the range image frames matched to those column numbers;
each of the X image frames corresponds to a set of range image frames in the fourth cut-out region: around the column number of the image frame in the fourth cut-out region corresponding to a given one of the X image frames, the preset-parameter number of frames are taken on each side to form the range image frames corresponding to that frame; the X sets of range image frames corresponding to the X image frames together form the Y image frames.
In this embodiment, 3 image frames with column numbers 1, 4 and 8 are randomly selected in the second cut-out region. Based on the coarse matching, it can be determined that the frame with column number 1 in the second cut-out region matches the frame with column number 8 in the fourth cut-out region; the frame with column number 4 matches the frame with column number 11; and the frame with column number 8 matches the frame with column number 15. Further, after coarse matching the preset parameter is 3, i.e. the error between the venous-phase and arterial-phase CT sequences is already within 3, so the three randomly selected frames in the second cut-out region only need to be compared one by one with part of the frames in the fourth cut-out region, rather than with all of them; again this reduces the amount and complexity of the matching computation, allows a real-time response, and lets the matching reach higher accuracy. Specifically, the range image frames determined in the fourth cut-out region for the frame with column number 1 in the second cut-out region are (8-3) to (8+3), i.e. the 7 frames 5, 6, 7, 8, 9, 10, 11; the range image frames for the frame with column number 4 are (11-3) to (11+3), i.e. the 7 frames 8, 9, 10, 11, 12, 13, 14; and the range image frames for the frame with column number 8 are (15-3) to (15+3), i.e. the 7 frames 12, 13, 14, 15, 16, 17, 18.
In one embodiment provided by the present invention, the determining M matched image frames among the N image frames based on the M image frames includes:
matching each of the M image frames against its corresponding range image frames in the third cut-out region, to obtain M matched image frames in one-to-one correspondence with the M image frames.
In the present embodiment, the range image frames matched with the frame with column number 2 in the first cut-out region are the 7 frames 6, 7, 8, 9, 10, 11, 12. The frame with column number 2 in the first cut-out region is compared one by one with these 7 frames through a convolutional neural network; the one-by-one comparison through the convolutional neural network proceeds as follows:
A convolutional neural network is used to obtain 256-dimensional vectors containing high-level semantic features, and these high-dimensional vectors are compared dimension by dimension; the specific calculation is of the form:
Result_k = Σ_{j=1}^{256} |V_j - O_j^k|
where Result_k denotes the high-level semantic matching result between the frame to be matched (here, the frame with column number 2 in the first cut-out region) and the k-th of the 7 candidate frames (the 7 frames with column numbers 6, 7, 8, 9, 10, 11, 12 in the third cut-out region; for example, when k is 1 the candidate is the frame with column number 6), V_j denotes the j-th dimension of the high-level semantic feature vector of the frame to be matched, and O_j^k denotes the j-th dimension of the high-level semantic feature vector of the k-th candidate. V and O^k are each 256-dimensional vectors of 0s and 1s, of the form (1,1,0,1,1,0,0,0,0,…,1,1,0). Result_k is computed for each of the 7 candidates of the frame to be matched, i.e. all values of k are traversed, and the k with the smallest Result_k gives the final matching result: the k-th candidate in the third cut-out region is closest to the frame to be matched in high-level semantics and is its best match. For example, if Result_k takes its minimum at k = 2, the frame with column number 7 in the third cut-out region matches the frame with column number 2 in the first cut-out region; if the minimum is at k = 3, the frame with column number 8 in the third cut-out region matches the frame with column number 2 in the first cut-out region.
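A minimal Python sketch of this dimension-wise comparison follows. The CNN feature extraction itself is omitted, the distance is a reconstruction from the description above (binary 256-dimensional vectors compared dimension by dimension, smallest Result_k wins), and the function names are illustrative assumptions:

    def result_k(v, o_k):
        # Dimension-wise comparison of two 0/1 feature vectors of length 256:
        # sum of absolute per-dimension differences (assumed form of the formula).
        return sum(abs(vj - oj) for vj, oj in zip(v, o_k))

    def best_candidate(v, candidates):
        # Index k of the candidate minimizing result_k, i.e. the frame closest
        # to the frame to be matched in high-level semantics.
        return min(range(len(candidates)), key=lambda k: result_k(v, candidates[k]))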
Continuing with the above embodiment: after comparison, the frame with column number 2 in the first cut-out region is found to match the frame with column number 7 in the third cut-out region. The range image frames matched with the frame with column number 4 in the first cut-out region are the 7 frames 8, 9, 10, 11, 12, 13, 14; comparing the frame with column number 4 one by one with these 7 frames through the convolutional neural network shows that it matches the frame with column number 10 in the third cut-out region. The range image frames matched with the frame with column number 7 in the first cut-out region are the 7 frames 11, 12, 13, 14, 15, 16, 17; comparing the frame with column number 7 one by one with these 7 frames through the convolutional neural network shows that it matches the frame with column number 13 in the third cut-out region. In this way, three matched image frames, in one-to-one correspondence with the three randomly selected frames in the first cut-out region, are found in the third cut-out region.
In another embodiment provided by the present invention, the determining X matched image frames among the Y image frames based on the X image frames includes:
matching each of the X image frames against its corresponding range image frames in the fourth cut-out region, to obtain X matched image frames in one-to-one correspondence with the X image frames.
In this embodiment, the range image frames matched with the frame with column number 1 in the second cut-out region are the 7 frames 5, 6, 7, 8, 9, 10, 11; comparing the frame with column number 1 one by one with these 7 frames through the convolutional neural network shows that it matches the frame with column number 7 in the fourth cut-out region. The range image frames matched with the frame with column number 4 in the second cut-out region are the 7 frames 8, 9, 10, 11, 12, 13, 14; comparison shows that it matches the frame with column number 9 in the fourth cut-out region. The range image frames matched with the frame with column number 8 in the second cut-out region are the 7 frames 12, 13, 14, 15, 16, 17, 18; comparison shows that it matches the frame with column number 14 in the fourth cut-out region. In this way, three matched image frames, in one-to-one correspondence with the three randomly selected frames in the second cut-out region, are found in the fourth cut-out region.
In other embodiments provided by the present invention, the obtaining the first offset result from the M matched image frames and the M image frames, and from the X matched image frames and the X image frames, includes:
determining the first offset result based on the column numbers of the M image frames and the column numbers of the M matched image frames, and on the column numbers of the X image frames and the column numbers of the X matched image frames;
after the second group of image sequences is shifted by the first offset result relative to the first group of image sequences, the error between the first group of image sequences and the second group of image sequences is smaller than the preset parameter.
In this embodiment, the frame with column number 2 in the first cut-out region matches the frame with column number 7 in the third cut-out region; the frame with column number 4 matches the frame with column number 10; and the frame with column number 7 matches the frame with column number 13.
The frame with column number 1 in the second cut-out region matches the frame with column number 7 in the fourth cut-out region; the frame with column number 4 matches the frame with column number 9; and the frame with column number 8 matches the frame with column number 14.
The first offset result is obtained from these column numbers: [(7-2) + (10-4) + (13-7) + (7-1) + (9-4) + (14-8)] / 6 = 34/6 ≈ 5.67, which is rounded to give a first offset result of 6. That is, the arterial-phase CT sequence is shifted by 6 relative to the venous-phase CT sequence, completing the fine matching stage.
By the above method, the invention completes a fast and accurate matching of the venous-phase CT sequence with the arterial-phase CT sequence: the frame with column number 1 of the venous-phase CT sequence is matched not with the frame with column number 1 of the arterial-phase CT sequence but with the frame with column number 1+6. In other words, the method of the invention can quickly and accurately find, in the arterial-phase CT sequence, the matched image frame for a given image frame of the venous-phase CT sequence.
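Putting the fine-matching numbers of this example into code (the six pairs are the matches found above; arterial_match is an illustrative helper, not a name from the patent):

    pairs = [(2, 7), (4, 10), (7, 13),  # first cut-out region vs third
             (1, 7), (4, 9), (8, 14)]   # second cut-out region vs fourth
    first_offset = round(sum(m - c for c, m in pairs) / len(pairs))  # 34 / 6 ≈ 5.67, rounded to 6

    def arterial_match(venous_col):
        # Column number of the arterial-phase frame matching a venous-phase frame.
        return venous_col + first_offset  # e.g. venous frame 1 <-> arterial frame 7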
Based on the same inventive concept, as shown in fig. 3, a second embodiment of the present invention provides an image processing apparatus for matching a first group of image sequences with a second group of image sequences, where the first group of image sequences includes a plurality of image frames, the second group of image sequences includes a plurality of image frames, and the two groups of image sequences contain the same object; the apparatus comprises at least a memory on which a computer program is stored, and a processor that, when executing the computer program, performs the following steps:
determining a first cut-out region along a first region of the object and a second cut-out region along a second region of the object in the first group of image sequences, wherein the first cut-out region and the second cut-out region each contain an integer number of image frames;
determining a third cut-out region along a third region of the object and a fourth cut-out region along a fourth region of the object in the second group of image sequences, wherein the third cut-out region and the fourth cut-out region each contain an integer number of image frames, greater than the number of image frames in the first cut-out region and the second cut-out region respectively;
randomly selecting M image frames in the first cut-out region, and determining N image frames in the third cut-out region based on a preset parameter, wherein the preset parameter is an integer greater than or equal to zero;
randomly selecting X image frames in the second cut-out region, and determining Y image frames in the fourth cut-out region based on the preset parameter;
determining M matched image frames among the N image frames based on the M image frames, and determining X matched image frames among the Y image frames based on the X image frames;
and acquiring a first offset result from the M matched image frames and the M image frames, and from the X matched image frames and the X image frames, for processing the first group of image sequences and the second group of image sequences.
In one embodiment provided by the present invention, the processor further performs the following step: randomly selecting P image frames in the second cut-out region, determining P matched image frames in the fourth cut-out region, and determining the preset parameter based on the P image frames and the P matched image frames.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the electronic device to which the image processing method described above is applied, reference may be made to the corresponding description in the foregoing method embodiments, and the details are not repeated here.
The above embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and the scope of the present invention is defined by the claims. Various modifications and equivalents may be made by those skilled in the art within the spirit and scope of the present invention, and such modifications and equivalents should also be considered as falling within the scope of the present invention.

Claims (10)

1. An image processing method for matching a first group of image sequences with a second group of image sequences, the first group of image sequences comprising a plurality of image frames and the second group of image sequences comprising a plurality of image frames, the first and second groups of image sequences containing the same object; the method comprising:
determining a first cut-out region along a first region of the object and a second cut-out region along a second region of the object in the first group of image sequences, wherein the first cut-out region and the second cut-out region each contain an integer number of image frames;
determining a third cut-out region along a third region of the object and a fourth cut-out region along a fourth region of the object in the second group of image sequences, wherein the third cut-out region and the fourth cut-out region each contain an integer number of image frames, greater than the number of image frames in the first cut-out region and the second cut-out region respectively;
randomly selecting M image frames in the first cut-out region, and determining N image frames in the third cut-out region based on a preset parameter, wherein the preset parameter is an integer greater than or equal to zero;
randomly selecting X image frames in the second cut-out region, and determining Y image frames in the fourth cut-out region based on the preset parameter;
determining M matched image frames among the N image frames based on the M image frames, and determining X matched image frames among the Y image frames based on the X image frames;
and acquiring a first offset result from the M matched image frames and the M image frames, and from the X matched image frames and the X image frames, for processing the first group of image sequences and the second group of image sequences.
2. The method of claim 1, further comprising randomly selecting P image frames in the second cut-out region, determining P matched image frames in the fourth cut-out region, and determining the preset parameter based on the P image frames and the P matched image frames.
3. The method of claim 2, wherein the determining P matched image frames in the fourth cut-out region and the determining the preset parameter based on the P image frames and the P matched image frames comprise:
determining the column numbers of the P image frames;
matching each of the P image frames against all image frames contained in the fourth cut-out region, to determine P matched image frames in one-to-one correspondence with the P image frames;
determining the column numbers of the P matched image frames;
determining a second offset result based on the column numbers of the P image frames and the column numbers of the P matched image frames;
wherein, after the second group of image sequences is shifted by the second offset result relative to the first group of image sequences, the error between the first group of image sequences and the second group of image sequences is within the preset parameter.
4. The method of claim 1, wherein randomly selecting M image frames in the first cut-out region and determining N image frames in the third cut-out region based on a preset parameter comprises:
determining the sequence numbers of the M image frames; and
determining, in the third cut-out region, the image frames whose sequence numbers correspond one-to-one to those of the M image frames, together with the range image frames associated with those sequence numbers;
wherein each of the M image frames corresponds to a set of range image frames in the third cut-out region: for a given one of the M image frames, a number of frames equal to the preset parameter is taken on each side of the sequence number of its corresponding image frame in the third cut-out region, so as to form the range image frames for that image frame; the M sets of range image frames corresponding to the M image frames together constitute the N image frames.
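A minimal sketch of the range construction just described (illustrative only), assuming the sequence numbers of the two regions already correspond after the coarse shift; the clamping at the region boundaries is an assumption.

```python
def range_frames(seq_no, region_len, preset):
    """Sequence numbers of the range image frames for one selected frame:
    preset frames on each side of the corresponding sequence number."""
    lo = max(0, seq_no - preset)
    hi = min(region_len - 1, seq_no + preset)
    return list(range(lo, hi + 1))

# With preset = 2, the frame with sequence number 10 maps to candidate
# frames 8..12 in the third cut-out region.
assert range_frames(10, 100, 2) == [8, 9, 10, 11, 12]
```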
5. The method of claim 4, wherein randomly selecting X image frames in the second cut-out region and determining Y image frames in the fourth cut-out region based on the preset parameter comprises:
determining the sequence numbers of the X image frames; and
determining, in the fourth cut-out region, the image frames whose sequence numbers correspond one-to-one to those of the X image frames, together with the range image frames associated with those sequence numbers;
wherein each of the X image frames corresponds to a set of range image frames in the fourth cut-out region: for a given one of the X image frames, a number of frames equal to the preset parameter is taken on each side of the sequence number of its corresponding image frame in the fourth cut-out region, so as to form the range image frames for that image frame; the X sets of range image frames corresponding to the X image frames together constitute the Y image frames.
6. The method of claim 4, wherein determining M matching image frames among the N image frames based on the M image frames comprises:
matching each of the M image frames against its corresponding range image frames in the third cut-out region, so as to obtain M matching image frames in one-to-one correspondence with the M image frames.
7. The method of claim 5, wherein determining X matching image frames among the Y image frames based on the X image frames comprises:
matching each of the X image frames against its corresponding range image frames in the fourth cut-out region, so as to obtain X matching image frames in one-to-one correspondence with the X image frames.
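Claims 6 and 7 both require the matches to be in one-to-one correspondence with the selected frames. Below is a greedy sketch of such an assignment (illustrative only), assuming a precomputed score matrix scores[i][c] between selected frame i and candidate frame c, higher being better; an exact alternative would be the Hungarian method, e.g. scipy.optimize.linear_sum_assignment.

```python
def one_to_one(scores):
    # Visit all (frame, candidate) cells from best score downward and
    # accept a pair only if both sides are still unassigned.
    cells = sorted(((s, i, c) for i, row in enumerate(scores)
                    for c, s in enumerate(row)), reverse=True)
    matched, taken, pairs = set(), set(), []
    for s, i, c in cells:
        if i not in matched and c not in taken:
            matched.add(i)
            taken.add(c)
            pairs.append((i, c))
    return sorted(pairs)

# Example: two selected frames, three candidate frames.
print(one_to_one([[0.9, 0.8, 0.1],
                  [0.85, 0.2, 0.3]]))  # -> [(0, 0), (1, 2)]
```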
8. The method of claim 5, wherein obtaining a first offset result from the M matching image frames and the M image frames, and from the X matching image frames and the X image frames, respectively, for use in processing the first and second sets of image sequences, comprises:
determining the first offset result based on the sequence numbers of the M image frames and of the M matching image frames, and on the sequence numbers of the X image frames and of the X matching image frames;
wherein, after the second set of image sequences is shifted by the first offset result relative to the first set of image sequences, the error between the first set of image sequences and the second set of image sequences is smaller than the preset parameter.
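A sketch of this final reduction (illustrative only), assuming the matched pairs from claims 6 and 7 arrive as (sequence number, matched sequence number) tuples; the rounded-mean offset and the maximum-residual error measure are assumptions, not taken from the patent.

```python
import numpy as np

def reduce_offset(pairs_m, pairs_x, preset):
    disp = [j - i for i, j in pairs_m + pairs_x]
    offset = int(round(np.mean(disp)))
    # The claim requires the residual error after the shift to be
    # smaller than the preset parameter.
    error = max(abs(d - offset) for d in disp)
    return offset, error < preset
```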
9. An image processing apparatus, applied to matching a first set of image sequences with a second set of image sequences, the first set of image sequences comprising a plurality of image frames, the second set of image sequences comprising a plurality of image frames, and the first and second sets of image sequences containing the same object; the apparatus comprising at least a memory storing a computer program, and a processor which, when executing the computer program, performs the steps of:
determining, in the first set of image sequences, a first cut-out region along a first region of the object and a second cut-out region along a second region of the object, wherein the first cut-out region and the second cut-out region each contain an integer number of image frames;
determining, in the second set of image sequences, a third cut-out region along a third region of the object and a fourth cut-out region along a fourth region of the object, wherein the third cut-out region and the fourth cut-out region each contain an integer number of image frames, and their frame counts are greater than the frame counts of the first cut-out region and the second cut-out region, respectively;
randomly selecting M image frames in the first cut-out region, and determining N image frames in the third cut-out region based on a preset parameter, wherein the preset parameter is an integer greater than or equal to zero;
randomly selecting X image frames in the second cut-out region, and determining Y image frames in the fourth cut-out region based on the preset parameter;
determining M matching image frames among the N image frames based on the M image frames, and determining X matching image frames among the Y image frames based on the X image frames; and
obtaining a first offset result from the M matching image frames and the M image frames, and from the X matching image frames and the X image frames, respectively, for use in processing the first and second sets of image sequences.
10. The apparatus of claim 9, wherein the processor further performs the steps of: randomly selecting P image frames in the second cut-out region, determining P matching image frames in the fourth cut-out region, and determining the preset parameter based on the P image frames and the P matching image frames.
CN201910815696.6A 2019-08-30 2019-08-30 Image processing method and device Active CN110517302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910815696.6A CN110517302B (en) 2019-08-30 2019-08-30 Image processing method and device


Publications (2)

Publication Number Publication Date
CN110517302A (en) 2019-11-29
CN110517302B (en) 2022-06-24

Family

ID=68629711




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant