CN115423751A - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN115423751A
CN115423751A
Authority
CN
China
Prior art keywords
image
oct
angiography
pull
image group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210916329.7A
Other languages
Chinese (zh)
Inventor
朱锐
鲁全茂
刘超
毕鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN VIVOLIGHT MEDICAL DEVICE & TECHNOLOGY CO LTD
Original Assignee
SHENZHEN VIVOLIGHT MEDICAL DEVICE & TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN VIVOLIGHT MEDICAL DEVICE & TECHNOLOGY CO LTD filed Critical SHENZHEN VIVOLIGHT MEDICAL DEVICE & TECHNOLOGY CO LTD
Priority to CN202210916329.7A priority Critical patent/CN115423751A/en
Publication of CN115423751A publication Critical patent/CN115423751A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0012 — Biomedical image inspection
    • G06T 11/006 — Reconstruction from projections, e.g. tomography; inverse problem, transformation from projection-space into object-space
    • G06T 7/11 — Region-based segmentation
    • G06T 7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 2207/10101 — Optical tomography; optical coherence tomography [OCT]
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30101 — Blood vessel; artery; vein; vascular
    • G06T 2211/404 — Angiography


Abstract

The application is applicable to the technical field of medical image processing, and provides an image processing method, an image processing apparatus, an electronic device, and a storage medium. The method includes: acquiring an angiography image group for a probe object during acquisition of an optical coherence tomography (OCT) image group for the probe object; registering the OCT image group with each angiography image in the angiography image group to obtain registration parameters; generating a light attenuation coefficient image group corresponding to the OCT images based on each OCT image in the OCT image group; calculating a plaque attenuation index (IPA) value of each light attenuation coefficient image in the light attenuation coefficient image group; and marking each angiography image in the angiography image group with the IPA values based on the registration parameters to obtain a target angiography image group. With the method provided by the embodiments of the application, the position and burden of vulnerable plaque can be observed clearly and intuitively, which improves the vulnerable plaque identification capability for the probe object.

Description

Image processing method and device, electronic equipment and storage medium
The present application is a divisional application of the Chinese patent application entitled "Method, apparatus, electronic device, and storage medium for image processing", filed with the Chinese Patent Office on 13/07/2021 under application number 202110790290.4.
Technical Field
The present application relates to the field of medical image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Optical coherence tomography (OCT) is an imaging technique. Based on the principle of a low-coherence interferometer, the light emitted by a light source is split into two beams: one is directed at the tissue under test (the sample arm), and the other at a reference mirror (the reference arm). The two optical signals reflected by the tissue and the reference mirror are then superposed and interfere, and image gray levels of different intensities are finally displayed according to how the optical signals vary with the tissue under test, thereby imaging the interior of the tissue.
Conventional optical coherence tomography has weak capability to identify vulnerable plaque; improving this capability at the system and equipment end is difficult and costly. In addition, existing OCT images display poorly, making it inconvenient for medical personnel to assess the vulnerable plaque burden.
Disclosure of Invention
Embodiments of the present application provide an image processing method and apparatus, an electronic device, and a storage medium, which may solve at least part of the above problems.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring an angiography image group for a probe object during acquisition of an optical coherence tomography (OCT) image group for the probe object;
registering the OCT image group and each angiographic image in the angiographic image group to obtain registration parameters;
generating a light attenuation coefficient image group corresponding to the OCT images based on each OCT image in the OCT image group;
calculating a plaque attenuation index (IPA) value of each of the light attenuation coefficient images in the light attenuation coefficient image group;
and marking each angiography image in the angiography image group by using the IPA value based on the registration parameters to obtain a target angiography image group.
It should be understood that by registering the OCT image group with each of the angiography images in the angiography image group, registration parameters, i.e., correspondences between the OCT image group and each of the angiography images, are obtained, so that the light attenuation coefficient images obtained from the OCT images have consistent correspondences with each of the angiography images. On this basis, the IPA values obtained from the light attenuation coefficient images are marked on each angiography image, so that the position and burden of vulnerable plaque can be observed clearly and intuitively, improving the vulnerable plaque identification capability for the probe object.
In a second aspect, an embodiment of the present application provides an apparatus for image processing, including:
an image acquisition module for acquiring an angiography image group for a probe object in a process of acquiring an Optical Coherence Tomography (OCT) image group for the probe object;
the image registration module is used for registering the OCT image group and each angiographic image in the angiographic image group to obtain registration parameters;
the light attenuation coefficient image generating module is used for generating a light attenuation coefficient image group corresponding to the OCT images based on each OCT image in the OCT image group;
an IPA value generation module for calculating IPA values of each of the light attenuation coefficient images in the light attenuation coefficient image group;
and the image marking module is used for marking each angiography image in the angiography image group by using the IPA value based on the registration parameters to obtain a target angiography image group.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the computer program, when executed by the processor, implementing the method steps of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, including: the computer readable storage medium stores a computer program which, when executed by a processor, performs the method steps of the first aspect described above.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on an electronic device, causes the electronic device to perform the method steps of the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required to be used in the embodiments or the prior art description will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings may be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic flow chart diagram illustrating a method for image processing according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a correspondence relationship between an OCT image and an angiography image according to an embodiment of the present application;
FIG. 3 is a schematic diagram of image processing provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart diagram illustrating a method for image processing according to another embodiment of the present application;
FIG. 5 is a schematic flow chart diagram illustrating a method for image processing according to another embodiment of the present application;
FIG. 6 is a schematic diagram of image registration provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an attention U-network detection pull-back vessel provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a cycle generating countermeasure network provided by an embodiment of the present application;
FIG. 9a is an OCT image sample provided by an embodiment of the present application;
FIG. 9b is a light attenuation coefficient image sample provided in an embodiment of the present application;
FIG. 10 is a schematic diagram of an apparatus for image processing according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The optical coherence tomography (OCT) technique is an imaging technique. Based on the principle of a low-coherence interferometer, the light emitted by a light source is split into two beams: one is directed at the tissue under test (the sample arm), and the other at a reference mirror (the reference arm). The two optical signals reflected by the tissue and the reference mirror are then superposed and interfere, and image gray levels of different intensities are finally displayed according to how the optical signals vary with the tissue under test, thereby imaging the interior of the tissue.
Conventional optical coherence tomography has weak capability to identify vulnerable plaque. Determining vulnerable plaque requires measuring the thickness of the fibrous cap, which currently requires manual measurement by a physician, and the measurer's subjective factors may cause variability in the results. Improving this recognition capability at the system and equipment end is difficult and also costly.
Because vulnerable plaque contains dark regions such as lipid, the fibrous cap boundary of vulnerable plaque in an OCT image is not very clear; the display effect of existing OCT images is therefore poor, making it inconvenient for medical staff to assess the vulnerable plaque burden.
An embodiment of the present application provides an image processing method that calculates plaque attenuation index (IPA) values from light attenuation coefficient images based on the result of contrast fusion registration (ACR) and maps the IPA values onto the angiography images. Because the light attenuation in a vulnerable plaque region is pronounced, the angiography image obtained by the image processing method provided in the embodiments of the present application can, by displaying the IPA value corresponding to the light attenuation image, visually prompt the doctor that vulnerable plaque may exist in the current image frame, making the display of IPA values more intuitive and effective.
It should be noted that the image processing method provided in the embodiments of the present application may be implemented by software and/or hardware, including but not limited to an angiographic imaging apparatus, an OCT apparatus, a local third-party computing apparatus, or a remote third-party computing apparatus; the present application does not limit the subject that performs the image processing method. A third party is a device other than the OCT device and the angiographic imaging device.
Fig. 1 illustrates a method for image processing provided by an embodiment of the present application. As shown in fig. 1, the method includes steps S110 to S150. The specific implementation principle of each step is as follows:
s110, in the process of acquiring the optical coherence tomography OCT image group aiming at the detected object, acquiring an angiography image group aiming at the detected object.
In the course of performing optical coherence tomography on a blood vessel of a probe object, such as a coronary vessel, with an OCT apparatus, a series of OCT images is obtained, referred to as an OCT image group. During this procedure, the probe object also undergoes angiographic imaging by an angiographic imaging apparatus, yielding a series of angiographic images referred to as an angiography image group. In some embodiments, the angiographic imaging targets coronary vessels, and the images obtained are coronary angiography (CAG) images.
It is understood that the imaging devices for acquiring the OCT image group and the angiography image group may be the same device, two different devices, or a combination of devices with a controlling relationship. In some embodiments, the OCT image group and the angiography image group may be processed by the OCT device, or by the angiographic imaging device. In some embodiments, after the OCT image group and the angiography image group are acquired by the OCT device and the angiographic imaging device, a third-party computing device obtains them through a storage medium, a communication cable, or a communication network for processing.
And S120, registering the OCT image group and each angiographic image in the angiographic image group to obtain a registration parameter.
It is to be understood that the OCT image group 21 is a series of tomographic images of a blood vessel of the probe object, i.e., cross-sectional images of the blood vessel, as shown in fig. 2. The angiographic image 22 is a projection image of a blood vessel of the probe object. Registering the OCT image group with an angiographic image means determining, for the vessel cross-section corresponding to each OCT image 211 in the OCT image group, the position of the vessel projection in the angiographic image 221; in other words, establishing the correspondence between each OCT image in the OCT image group and a position in the angiographic image. Registering the OCT image group with each angiographic image in the angiographic image group means establishing, for each angiographic image in the group, the correspondence between the OCT image group and positions in that angiographic image.
And S130, generating a light attenuation coefficient image group corresponding to each OCT image based on each OCT image in the OCT image group.
The optical attenuation coefficient (OAC) of the (plaque-containing) vessel wall tissue is an optical characteristic parameter of OCT images. The light attenuation coefficient of biological tissue varies with spatial position, so tissue components such as thin fibrous caps, calcified plaque, and lipid-rich plaque can be quantitatively calibrated according to the light attenuation coefficient.
In some embodiments, the light attenuation coefficient image may be obtained by calculating, from each OCT image, the light attenuation coefficients of the tissue in that image, using optical parameters of the OCT apparatus such as the Rayleigh length (zR) and the half-width of the roll-off function (zW). Calculation methods include, but are not limited to, curve fitting (CF) and depth-resolved (DR) model methods. A series of light attenuation coefficient images corresponding to the OCT image group is thereby obtained, denoted the light attenuation coefficient image group. It should be noted that there is a one-to-one correspondence between the OCT image group and the light attenuation coefficient image group; that is, the registration parameters of the OCT image group and the angiography image group are consistent with those of the light attenuation coefficient image group and the angiography image group.
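By way of illustration, a minimal sketch of the depth-resolved (DR) model mentioned above is given below, assuming linear-scale intensities and the per-pixel relation mu(z) = I(z) / (2 * delta * sum of intensities below z); the function name and array layout are illustrative assumptions, not the patent's exact procedure:

    import numpy as np

    def depth_resolved_attenuation(oct_intensity, pixel_size_mm):
        """Estimate per-pixel attenuation coefficients from a linear-scale OCT
        B-scan using the depth-resolved (DR) model: each pixel's coefficient
        is its intensity divided by twice the integral of intensities below it.

        oct_intensity: 2D array (depth x A-lines), linear intensity values.
        pixel_size_mm: axial size of one pixel in millimetres.
        """
        # Cumulative sum of intensities from the bottom of each A-line
        # upwards, excluding the current pixel (the "tail" integral).
        tail = np.flip(np.cumsum(np.flip(oct_intensity, axis=0), axis=0), axis=0)
        tail = tail - oct_intensity
        eps = 1e-8  # avoid division by zero at the bottom of the A-line
        mu = oct_intensity / (2.0 * pixel_size_mm * (tail + eps))  # units: 1/mm
        return mu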
S140, calculating the plaque attenuation index IPA value of each light attenuation coefficient image in the light attenuation coefficient image group.
In some embodiments, the plaque attenuation index (IPA) is the fraction of A-lines in the attenuation map whose maximum attenuation coefficient is greater than some threshold x. In one specific example, this fraction may also be multiplied by a coefficient of 1000. Specifically, the IPA value can be calculated using the following formula:

IPA_x = 1000 * N(mu_t > x) / N_total

where x is the plaque attenuation coefficient threshold, mu_t is the attenuation coefficient, N(mu_t > x) is the number of A-lines whose maximum attenuation is greater than x, and N_total is the total number of A-lines.
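A minimal sketch of this IPA computation, assuming the attenuation image is stored as a depth-by-A-line array (the function name is illustrative):

    import numpy as np

    def compute_ipa(mu, threshold):
        """IPA = 1000 * (number of A-lines whose maximum attenuation
        coefficient exceeds `threshold`) / (total number of A-lines).

        mu: 2D attenuation coefficient image (depth x A-lines).
        threshold: plaque attenuation coefficient threshold x.
        """
        max_per_aline = mu.max(axis=0)   # maximum attenuation on each A-line
        n_exceeding = int((max_per_aline > threshold).sum())
        return 1000.0 * n_exceeding / mu.shape[1]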
S150, based on the registration parameters, marking each angiography image in the angiography image group by using the IPA value to obtain a target angiography image group.
In some embodiments, the IPA values are gray values, and labeling each angiography image in the angiography image group with the IPA values to obtain the target angiography image group includes: marking the gray value on each angiography image at the position corresponding to the light attenuation image.
In some embodiments, a correspondence between IPA values and marking parameters is preset, such as a look-up table or a conversion curve formula, and the IPA values are mapped to marking parameters using this correspondence. The marking parameters may be RGB color values, computer codes corresponding to signs such as "+" and "#", or computer codes corresponding to other marking signs. The marking parameter corresponding to the IPA value is displayed on each angiography image at the position corresponding to the light attenuation image.
In some embodiments, the OCT image group consists of cross-sectional images of the pull-back blood vessel on the angiogram, i.e., each OCT image in the OCT image group corresponds to a position of the pull-back blood vessel on the angiogram. It should be understood that, owing to the one-to-one correspondence between the light attenuation coefficient images and the OCT images, the IPA value obtained from each light attenuation coefficient image also corresponds to a position on the pull-back blood vessel on the angiogram. Labeling each angiography image in the angiography image group with the IPA values based on the registration parameters to obtain the target angiography image group includes: converting each IPA value into a marking parameter according to the preset correspondence between IPA values and marking parameters; and displaying the marking parameter of the corresponding light attenuation coefficient image at the target position of each angiography image according to the registration parameters, where the target position is the pixel position on the pull-back path corresponding to that light attenuation coefficient image.
In one specific example, the marking parameter may be a color value. For example, if the IPA value is 100, the preset correspondence between IPA values and marking parameters (which may be a look-up table) is queried, and the RGB value corresponding to this IPA value is determined to be [179, 62, 110]. According to the registration parameters, the pixels at the corresponding position on the pull-back blood vessel in the angiography image display this RGB value, realizing colored display of the pull-back path on the current contrast image.
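A sketch of such marking, assuming the registration step has already produced per-frame pixel groups on the pull-back path; the look-up table here is a hypothetical green-to-red ramp standing in for the preset correspondence:

    def mark_angiogram(angio_rgb, pullback_pixels, ipa_values, lut):
        """Color the pull-back path of an angiography frame by IPA value.

        angio_rgb: HxWx3 uint8 angiography image (modified in place).
        pullback_pixels: list of per-OCT-frame pixel groups on the pull-back
            path, e.g. [[(r, c), (r, c)], ...], one group per OCT frame.
        ipa_values: one IPA value per OCT frame (same length as above).
        lut: callable mapping an IPA value to an (R, G, B) triple, standing
            in for the preset IPA-to-marking-parameter correspondence.
        """
        for pixels, ipa in zip(pullback_pixels, ipa_values):
            rgb = lut(ipa)
            for r, c in pixels:
                angio_rgb[r, c] = rgb
        return angio_rgb

    def simple_lut(ipa):
        # Hypothetical table: green-to-red ramp over IPA in [0, 1000].
        t = min(max(ipa / 1000.0, 0.0), 1.0)
        return (int(255 * t), int(255 * (1 - t)), 0)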
It can be understood that, as shown in fig. 3, in the embodiments of the present application, registration parameters, i.e., correspondences between the OCT image group and each of the angiography images, are obtained by registering the OCT image group with each angiography image in the angiography image group, so that the light attenuation coefficient images obtained from the OCT images have consistent correspondences with each angiography image. On this basis, the IPA values obtained from the light attenuation coefficient images are marked on each angiography image, so that the position and burden of vulnerable plaque can be observed clearly and intuitively, improving the vulnerable plaque identification capability for the probe object.
On the basis of the image processing method shown in fig. 1, as shown in fig. 4, step S120 registers the OCT image group and each of the angiography images in the angiography image group to obtain registration parameters, and includes steps S121 to S123. This process is also known as contrast fusion registration (ACR).
S121, detecting a pull-back path in each of the angiographic images.
In angiographic images, the pull-back vessel is the vessel containing the guide wire, i.e., the vessel to be scanned. The pull-back path, also called the pullback path, is the path along which the optical catheter is drawn during the OCT scan.
In some embodiments, as shown in fig. 5, detecting the pullback path in the respective angiographic images includes steps S1211 to S1214:
s1211, using the pre-trained target detection model, detecting the pull-back blood vessel in each of the angiography images.
The pull-back blood vessel is the blood vessel containing the guide wire, i.e., the blood vessel to be scanned.
In some embodiments, the object detection model may be a deep learning network model for object detection. In some embodiments, the object detection model performs an image segmentation operation on the pullback vessels in the detected angiographic images.
S1212, detecting a start position and an end position of a development target object in the pull-back blood vessel in each of the angiographic images.
The development target object may be a development ring or an optical probe. The development ring is a metal ring arranged at the head end of the guide wire to enhance the development effect.
S1213, projecting the start position and the end position onto the respective angiography images by using an iterative closest point (ICP) algorithm.
In some embodiments, the angiography image group for the probe object is acquired during acquisition of the OCT image group for the probe object, where the start position is the development ring position in the first angiography image and the end position is the development ring position in the last angiography image. Once the development ring positions of the first and last images are determined, the pull-back path along which the development ring moves can be obtained, and the positions can then be projected onto the other angiography images through the ICP algorithm, so that each angiographic image is marked with the start position and the end position.
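For illustration, a compact 2D iterative-closest-point alignment of the kind this projection step relies on is sketched below; the use of vessel points as the matched sets, and all names, are assumptions:

    import numpy as np

    def icp_2d(src, dst, iters=30):
        """Minimal 2D ICP: rigidly align point set `src` to `dst`.
        Returns rotation R and translation t with src @ R.T + t ~ dst.
        `src`/`dst` could be vessel points of two frames; the recovered
        transform is then applied to the marker positions:
        projected_marker = marker @ R.T + t
        """
        R, t = np.eye(2), np.zeros(2)
        cur = src.copy()
        for _ in range(iters):
            # nearest neighbour in dst for every point of cur
            d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
            matched = dst[d2.argmin(axis=1)]
            # best rigid transform between cur and matched (Kabsch)
            mu_c, mu_m = cur.mean(0), matched.mean(0)
            H = (cur - mu_c).T @ (matched - mu_m)
            U, _, Vt = np.linalg.svd(H)
            Ri = Vt.T @ U.T
            if np.linalg.det(Ri) < 0:      # keep a proper rotation
                Vt[-1] *= -1
                Ri = Vt.T @ U.T
            ti = mu_m - Ri @ mu_c
            cur = cur @ Ri.T + ti
            R, t = Ri @ R, Ri @ t + ti
        return R, t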
S1214, obtaining a pull-back path in each angiography image based on the start position and the end position in each angiography image by using a shortest path algorithm.
In some embodiments, a weight matrix is established from the gray values of the pixels of the current angiography image. Since the gray values of the vessel regions are lower, their weights are lower, and the shortest path algorithm finds the path between the target points with the lowest total weight. Once the start and end positions of the development ring are determined, a pull-back path can thus be determined on the pull-back blood vessel.
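One plausible realization of this lowest-weight path search is Dijkstra's algorithm over the pixel grid, sketched below under the assumptions of 4-connected pixels and gray values as edge weights:

    import heapq
    import numpy as np

    def pullback_path(gray, start, end):
        """Lowest-cost pixel path from `start` to `end` on a grayscale
        angiography frame, using pixel gray values as weights so the path
        stays inside the dark (low-gray) pull-back vessel."""
        h, w = gray.shape
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[start] = 0.0
        heap = [(0.0, start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == end:
                break
            if d > dist[r, c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    nd = d + float(gray[nr, nc])   # weight = gray value
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
        # walk back from end to start
        path, node = [end], end
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]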
Fig. 6 is a schematic image registration diagram provided in an embodiment of the present application. Fig. 6 shows an example of projecting the development ring position onto other angiographic images by the ICP algorithm, taking the development ring position of the last-frame angiographic image 61, i.e., the end position 611 of the development ring, as an example. It also shows an example of obtaining the pull-back path by the shortest path method on an arbitrary frame of angiographic image 62 carrying the projected development ring start position 612 and end position 611.
The registration parameters include a corresponding relationship between each OCT image in the OCT image group and a target position, where the target position is a pixel point position in each angiography image. It should be understood that the target location is a pixel point location on the pullback path in the respective angiographic image.
By registering the OCT image group with each angiography image in the angiography image group, the correspondence between each OCT image in the OCT image group and a target position is obtained, so that the light attenuation coefficient images obtained from the OCT images have a consistent correspondence with the target positions. On this basis, the IPA values obtained from the light attenuation coefficient images are marked at the target positions of the angiography images, so that the position and burden of vulnerable plaque can be observed clearly and intuitively, improving the vulnerable plaque identification capability for the probe object.
And S122, for each angiography image in the angiography image group, sampling a pull-back path in the angiography image at equal intervals according to the frame rate of the OCT image group, and obtaining the corresponding relation between each OCT image and the position of a pixel point on the pull-back path.
And S123, taking the corresponding relation between each OCT image and the position of the pixel point on the pull-back path as the registration parameter.
It should be noted that the OCT pull-back speed is assumed constant by default, so the distance moved per unit time is also constant. The pull-back path in the angiographic image can therefore be sampled at equal intervals to establish a correspondence between positions on the pull-back path and each OCT image.
In some embodiments, suppose the pull-back path on the angiographic image spans 600 pixel positions and one OCT pullback produces 300 frames. The OCT apparatus obtains one OCT image per scan, and correspondingly the development ring position on the angiography image moves by 2 pixels; therefore, one OCT image corresponds to two pixel positions on each angiographic image.
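A sketch of this equal-interval sampling, assuming the pull-back path is an ordered list of pixel coordinates:

    import numpy as np

    def frames_to_path_positions(path, n_frames):
        """Map each OCT frame index to its pixel positions on the pull-back
        path by equal-interval sampling (constant pull-back speed assumed).
        With a 600-pixel path and 300 frames, each frame gets 2 pixels."""
        bounds = np.linspace(0, len(path), n_frames + 1).astype(int)
        return [path[bounds[k]:bounds[k + 1]] for k in range(n_frames)]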
Fig. 6 illustrates the example of one frame of OCT image 64, which is registered with the angiogram 63 on which the pull-back path has been determined: the frame number of the OCT image 64 corresponds to a position 631 of the pull-back vessel on the angiogram 63. It should be noted that the position 631 may comprise one or more pixels, depending on the OCT frame rate and the number of pixels on the pull-back path.
Based on the image processing method shown in fig. 5, step S1211 of detecting a pull-back blood vessel in each of the angiography images by using a pre-trained target detection model includes:
As shown in fig. 7, the trained target detection model is an attention U-network (Attention-U-net) model.
The Attention-U-net model is trained by adopting an angiography image sample set which is labeled with a pull-back blood vessel in advance. In some embodiments, for coronary angiography applications, a set of CAG contrast images may be prepared, with the guidewire pullback vessels labeled therein by an expert for training the Attention-U-net model to identify pullback vessels in the CAG images.
It should be noted that, when the Attention-U-net network is used to finely segment the pull-back vessels in CAG images of various body positions, including the left anterior descending (LAD), left circumflex (LCX), and right coronary artery (RCA) views, the proportions of CAG images of the various body positions should be kept as consistent as possible when collecting training samples, so that the network has similar segmentation performance for each body position.
In some embodiments, since the amount of data in the expert-labeled CAG image sample set with pull-back vessels is small, the sample volume can be expanded through transformations such as rotation, flipping, and contrast adjustment.
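A sketch of such sample expansion, assuming image/mask pairs stored as 8-bit NumPy arrays; the transform ranges are illustrative:

    import numpy as np

    def augment(image, mask, rng):
        """Expand the expert-labelled CAG sample set with random rotations,
        flips and contrast changes; the mask gets the same geometry only."""
        k = rng.integers(0, 4)                  # rotate by k * 90 degrees
        image, mask = np.rot90(image, k), np.rot90(mask, k)
        if rng.random() < 0.5:                  # horizontal flip
            image, mask = np.fliplr(image), np.fliplr(mask)
        gain = rng.uniform(0.8, 1.2)            # contrast jitter
        image = np.clip(image * gain, 0, 255).astype(image.dtype)
        return image, mask

    # usage: augment(img, msk, np.random.default_rng(0))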
Attention-U-net adds an attention mechanism to U-net, gating the features of an upper layer with the features of the layer below it to realize attention. As shown in FIG. 7, Attention-U-net comprises down-sampling and up-sampling stages, with the attention mechanism indicated by the attention-gate symbol in FIG. 7.
The Attention-U-net model is trained with a loss function L_AUN containing an expanded pull-back vessel weight parameter:

L_AUN = - sum_{n=1}^{N} sum_l omega_l * r_ln * log(P_ln)

omega_l = 1 - (sum_n r_ln) / (sum_l sum_n r_ln)

where r_ln denotes the true pixel class of class l at the n-th position (the classes in the embodiments of the present application are two: pull-back vessel path pixels and background pixels), P_ln denotes the corresponding predicted probability value, and omega_l denotes the weight of each class: the larger the proportion of a class on the image, the smaller its weight.
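A sketch of this weighted cross-entropy in PyTorch, assuming one-hot targets and softmax probabilities; the share-based class weight follows the stated trend (larger class proportion, smaller weight) and is an assumption:

    import torch

    def attention_unet_loss(pred, target, eps=1e-7):
        """Weighted cross-entropy matching L_AUN above: class weights
        shrink as a class occupies more of the image, boosting the sparse
        pull-back-vessel pixels against the background.

        pred:   (N, L) predicted probabilities P_ln (after softmax).
        target: (N, L) one-hot true classes r_ln.
        """
        class_share = target.sum(dim=0) / target.sum()  # proportion per class
        omega = 1.0 - class_share                       # bigger share -> smaller weight
        return -(omega * target * torch.log(pred + eps)).sum()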
It should be appreciated that, for ease of understanding, the embodiments of the present application provide the above formula as an example of a loss function containing an expanded pull-back vessel weight parameter. The skilled person may, with reference to this example and the actual situation, adapt the form of the expanded pull-back vessel weight parameter as well as the specific parameters of the loss function.
It can be understood that the activation values are adjusted through automatically learned parameters: the activated part is restricted to the region to be segmented and the background activation is suppressed, optimizing the segmentation and realizing end-to-end segmentation. The segmentation accuracy of Attention-U-net on complex images is therefore higher, so the trained model can automatically segment the vessel through which the guide wire passes and quickly locate the pull-back vessel.
On the basis of the image processing method shown in fig. 1, as shown in fig. 8, an embodiment of the present application further provides a method of generating the light attenuation coefficient image group corresponding to the OCT images, based on each OCT image in the OCT image group, using a cycle-consistent generative adversarial network (CycleGAN), also called a cycle-consistency generation countermeasure network.
Refer to the method, provided in the above embodiment, of generating the light attenuation coefficient image group corresponding to the OCT images based on each OCT image in the OCT image group. Computing a light attenuation coefficient image from an OCT image requires the optical hardware parameters of the OCT device. However, these parameters may differ slightly between devices as shipped, which can introduce systematic errors into the light attenuation coefficient images generated from OCT images. To eliminate the influence of the OCT system on the IPA calculation, the embodiments of the present application provide a method of synthesizing the light attenuation coefficient image from the OCT image using a CycleGAN network and calculating the IPA value from the synthesized light attenuation coefficient image.
In some embodiments of the present application, OCT images generated by multiple OCT devices are prepared for the CycleGAN network, constituting an OCT image sample set. Based on the OCT image sample set, a light attenuation coefficient image sample set is obtained by calculation from the optical parameters of each OCT device. Fig. 9a shows an OCT image sample provided in an embodiment of the present application, and fig. 9b shows a light attenuation coefficient image sample provided in an embodiment of the present application.
Training the CycleGAN network with OCT image and light attenuation coefficient image sample sets generated by multiple devices improves the network's generalization ability, so that the trained CycleGAN network can generate the corresponding light attenuation coefficient image from an OCT image produced by any OCT device.
In some embodiments, since the expert-labeled OCT image sample set and light attenuation coefficient image sample set are small, they may be transformed through rotation, flipping, contrast adjustment, and the like to expand the sample volume.
Referring to fig. 8, the OCT image is synthesized into a light attenuation coefficient image using a CycleGAN network. The CycleGAN network consists essentially of two cycles: a forward cycle and a reverse cycle.
The forward cycle consists of three independent CNN models, where a synthesizer network is also called a generator network:
(1) Syn_IPA: a synthesizer network that converts the OCT image Img_OCT into a light attenuation coefficient (IPA) image;
(2) Syn_OCT: a synthesizer network that converts the light attenuation coefficient image Syn_IPA(Img_OCT) back into an OCT image;
(3) Dis_IPA: a discriminator network that distinguishes the synthesized light attenuation coefficient image Syn_IPA(Img_OCT) from the real light attenuation coefficient image RealIPAImg.
The synthesized light attenuation coefficient image is labeled 0 and the real light attenuation image is labeled 1. Through continuous learning, the discriminator network learns to distinguish synthetic from real, i.e., to output 0 for a synthesized image and 1 for a real one. However, as the model trains, the quality of the generated light attenuation images improves and approaches reality, until the discriminator can hardly distinguish synthetic from real, which is the purpose of model training.
While the discriminator network Dis_IPA attempts to distinguish the synthesized light attenuation coefficient image Syn_IPA(Img_OCT) from the real light attenuation coefficient image RealIPAImg, the network Syn_IPA tries to synthesize from the OCT image a light attenuation coefficient image Syn_IPA(Img_OCT) as close to the real one as possible, so that the network Dis_IPA cannot tell them apart. In addition, the synthesized light attenuation coefficient image Syn_IPA(Img_OCT) must also be converted back into an OCT image by the network Syn_OCT, so that the reconstruction of the original image is as accurate as possible.
To improve training stability, a reverse cycle is added: an OCT image is synthesized from the light attenuation coefficient image, and the synthesized OCT image is converted back into a light attenuation coefficient image. The reverse cycle likewise comprises three parts, two of which, the synthesizer networks Syn_OCT and Syn_IPA, are shared with the forward cycle. In addition, the reverse cycle contains a discriminator network Dis_OCT for distinguishing the synthesized OCT image Syn_OCT(Img_IPA) from the real OCT image RealOCT.
The adversarial objectives of the synthesizer networks and the discriminator networks are reflected in the loss functions Loss_IPA and Loss_OCT below.
The discriminator Dis_IPA judges whether an image is a real light attenuation coefficient image, outputting 1 for a real light attenuation coefficient image and 0 for a synthesized one. The discriminator Dis_IPA minimizes the loss term Loss_IPA as much as possible, where:

Loss_IPA = (1 - Dis_IPA(Img_IPA))^2 + Dis_IPA(Syn_IPA(Img_OCT))^2

Similarly, the discriminator Dis_OCT judges whether an image is a real OCT image, outputting 1 for real and 0 otherwise, with loss Loss_OCT:

Loss_OCT = (1 - Dis_OCT(Img_OCT))^2 + Dis_OCT(Syn_OCT(Img_IPA))^2
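These least-squares adversarial terms translate directly into code; the sketch below assumes the four networks are callables returning per-sample scores, and the batch mean is an implementation assumption:

    import torch

    def discriminator_losses(dis_ipa, dis_oct, syn_ipa, syn_oct,
                             img_ipa, img_oct):
        """Least-squares adversarial terms as written above:
        Loss_IPA = (1 - Dis_IPA(Img_IPA))^2 + Dis_IPA(Syn_IPA(Img_OCT))^2
        Loss_OCT = (1 - Dis_OCT(Img_OCT))^2 + Dis_OCT(Syn_OCT(Img_IPA))^2
        """
        loss_ipa = ((1 - dis_ipa(img_ipa)) ** 2).mean() \
                   + (dis_ipa(syn_ipa(img_oct)) ** 2).mean()
        loss_oct = ((1 - dis_oct(img_oct)) ** 2).mean() \
                   + (dis_oct(syn_oct(img_ipa)) ** 2).mean()
        return loss_ipa, loss_oct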
in addition, there is a Loss of consistency in computing loop Loss cycle In the case of the OCT image and the light attenuation coefficient image, an area with a large gray value, that is, an unstable plaque, is a major concern, but the unstable plaque occupies a small proportion in the image, and the weight coefficient of the area with a large gray value is added to the loss term.
Figure BDA0003775824500000151
Where i represents the pixel point value of the corresponding position of the real image and the synthesized image, and N represents the number of pixel points in the image, which in some embodiments is 704x704.Syn OCT (Syn IPA (Img OCT ) Graph showing the light attenuation coefficient by synthesisThe image is converted back to an OCT image. In the same way, syn IPA (Syn OCT (Img IPA ) Represents the light attenuation coefficient image converted back from the synthesized OCT image. When the pixel point value of the corresponding position in the real image is larger, the instability represented by the pixel point value is larger, and the error of the position is smaller.
The cycleGAN network cycle consistency loss provided by the embodiment of the application comprises a first generation loss item
Figure BDA0003775824500000152
And second generation loss term
Figure BDA0003775824500000153
The first generation loss term includes a first weight coefficient negatively correlated with a pixel value of the first generated sample
Figure BDA0003775824500000154
The first generated sample is an OCT image sample; the second generation loss term includes a second weight coefficient negatively correlated with a pixel value of a second generated sample
Figure BDA0003775824500000155
The second generated samples are light attenuation coefficient image samples. Because the effective pixel areas in OCT and IPA account for a smaller proportion of the picture, because if the weighting terms are not added, the loss weights for the effective areas in OCT and IPA are smaller, but with the weighting coefficients in the two terms, the smaller the proportion of effective pixels to the total pixels of the image is, the larger the two terms are, thereby increasing the weights.
Total Loss function Loss of cycleGAN network total Is Loss total =Loss IPA +Loss OCT +λLoss cycle . Wherein, λ is a scaling coefficient and is a hyper-parameter.
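A sketch of the total objective in PyTorch; the per-pixel weight 1 + image is a stand-in with the stated trend (brighter, more unstable pixels weigh more), and lambda = 10.0 is an assumed value for the hyper-parameter:

    import torch

    def total_loss(loss_ipa, loss_oct, syn_ipa, syn_oct,
                   img_ipa, img_oct, lam=10.0):
        """Loss_total = Loss_IPA + Loss_OCT + lambda * Loss_cycle, with
        gray-value weighting so errors on bright (unstable-plaque) pixels
        count more; the weight form and lambda are assumptions."""
        rec_oct = syn_oct(syn_ipa(img_oct))   # OCT -> IPA -> OCT
        rec_ipa = syn_ipa(syn_oct(img_ipa))   # IPA -> OCT -> IPA
        loss_cycle = ((1 + img_oct) * (rec_oct - img_oct).abs()).mean() \
                     + ((1 + img_ipa) * (rec_ipa - img_ipa).abs()).mean()
        return loss_ipa + loss_oct + lam * loss_cycle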
The specific network architecture is shown in fig. 8 and comprises a forward cycle and a reverse cycle. In the forward cycle, Syn_IPA synthesizes the OCT image into a light attenuation coefficient image, Syn_OCT converts the synthesized light attenuation coefficient image back into an OCT image close to the original, and Dis_IPA distinguishes the real light attenuation coefficient image RealIPAImg from the synthesized one. In the reverse cycle, Syn_OCT synthesizes an OCT image from the light attenuation coefficient image, Syn_IPA converts the synthesized OCT image back into a light attenuation coefficient image close to the original, and Dis_OCT distinguishes the real OCT image RealOCT from the synthesized one.
It should be noted that when training the CycleGAN network, OCT images are synthesized into light attenuation coefficient images and light attenuation coefficient images are synthesized into OCT images, i.e., both the forward cycle and the reverse cycle are trained. After training, however, only the network portion that synthesizes light attenuation coefficient images from OCT images is used.
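A short usage sketch of this inference path, assuming syn_ipa is the trained forward generator (a torch.nn.Module):

    import torch

    def synthesize_attenuation(syn_ipa, oct_img):
        """Inference: only the trained forward generator Syn_IPA is used
        to turn an OCT frame into a light attenuation coefficient image."""
        syn_ipa.eval()
        with torch.no_grad():
            return syn_ipa(oct_img)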
It can be understood that the method provided in the embodiments of the present application, which synthesizes a light attenuation coefficient image from an OCT image using a CycleGAN network and calculates the IPA value from the synthesized image, escapes the influence of device-specific OCT parameters, synthesizes the light attenuation coefficient image directly from the OCT image, reduces systematic errors, and greatly improves the efficiency of light attenuation coefficient image synthesis.
Corresponding to the method of image processing shown in fig. 1, fig. 10 shows an apparatus M100 of image processing provided in an embodiment of the present application, including:
an image acquisition module M110 is configured to acquire an angiography image set for a probe object in the process of acquiring the optical coherence tomography OCT image set for the probe object.
An image registration module M120, configured to register each angiography image in the OCT image group and the angiography image group, so as to obtain a registration parameter.
The light attenuation coefficient image generating module M130 is configured to generate the light attenuation coefficient image group corresponding to the OCT images based on each OCT image in the OCT image group.
An IPA value generating module M140, configured to calculate an IPA value of each of the light attenuation coefficient images in the light attenuation coefficient image group.
The image marking module is configured to mark each angiography image in the angiography image group with the IPA values based on the registration parameters to obtain the target angiography image group.
It is understood that various embodiments and combinations of the embodiments in the above embodiments and their advantages are also applicable to this embodiment, and are not described herein again.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the application. As shown in fig. 11, the electronic device D10 of this embodiment includes: at least one processor D100 (only one is shown in fig. 11), a memory D101, and a computer program D102 stored in the memory D101 and operable on the at least one processor D100, wherein the processor D100 implements the steps of any of the method embodiments described above when executing the computer program D102.
The electronic device D10 may be an OCT device, an angiographic imaging device, a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The electronic device may include, but is not limited to, a processor D100 and a memory D101. Those skilled in the art will appreciate that fig. 11 is merely an example of the electronic device D10 and does not constitute a limitation of it; the device may include more or fewer components than shown, combine some components, or have different components, such as an input-output device, a network access device, and the like.
The processor D100 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory D101 may in some embodiments be an internal storage unit of the electronic device D10, for example a hard disk or memory of the electronic device D10. In other embodiments, the memory D101 may also be an external storage device of the electronic device D10, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the electronic device D10. Further, the memory D101 may include both an internal storage unit and an external storage device of the electronic device D10. The memory D101 is used to store an operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer programs. The memory D101 may also be used to temporarily store data that has been output or is to be output.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by functions and internal logic of the process, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments may be implemented.
Embodiments of the present application provide a computer program product, which when executed on an electronic device, enables the electronic device to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal apparatus, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunication signals.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method of image processing, comprising:
detecting a pull-back path in each angiography image of an angiography image group of an inspection object, wherein the angiography image group is acquired during Optical Coherence Tomography (OCT) of the inspection object;
for each angiography image, sampling the pull-back path in the angiography image at equal intervals according to the frame rate of the OCT image group corresponding to the angiography image, to obtain a correspondence between each OCT image in the OCT image group and a pixel position on the pull-back path;
and determining the correspondence between each OCT image and the pixel position on the pull-back path as a registration parameter of the angiography image.
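Claim 1 turns on a single geometric step: because the catheter is pulled back at constant speed while OCT frames are acquired at a fixed frame rate, the i-th of F OCT frames corresponds to the i-th of F equally spaced arc-length positions along the pull-back path. A minimal sketch of that mapping, assuming the detected path is available as an ordered array of pixel coordinates (the function and variable names are illustrative, not taken from the patent):

```python
import numpy as np

def register_oct_to_path(path_xy: np.ndarray, num_oct_frames: int) -> np.ndarray:
    """Assign each OCT frame a pixel position by sampling the pull-back
    path at equal arc-length intervals.

    path_xy:        (N, 2) ordered pixel coordinates of the pull-back path.
    num_oct_frames: number of frames in the corresponding OCT image group.
    Returns (num_oct_frames, 2): row i is the path pixel for OCT frame i.
    """
    # Cumulative arc length along the polyline.
    seg_len = np.linalg.norm(np.diff(path_xy, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg_len)])
    # One equally spaced arc-length position per OCT frame.
    targets = np.linspace(0.0, s[-1], num_oct_frames)
    # Interpolate x and y as functions of arc length.
    xs = np.interp(targets, s, path_xy[:, 0])
    ys = np.interp(targets, s, path_xy[:, 1])
    return np.stack([xs, ys], axis=1)
```

The returned array is exactly the claimed correspondence: row i pairs OCT image i with a pixel position on the pull-back path, and that table serves as the registration parameter of the angiography image.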
2. The method of claim 1, wherein the detecting a pull-back path in each angiography image of the angiography image group of the inspection object comprises:
detecting a pull-back blood vessel in each angiography image by using a pre-trained target detection model;
detecting a start position and an end position of a development target in the pull-back blood vessel in each angiography image;
projecting the start position and the end position onto each angiography image by using an iterative closest point (ICP) algorithm;
and obtaining the pull-back path in each angiography image from the start position and the end position by using a shortest-path algorithm.
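Claim 2 names a shortest-path algorithm without specifying it; one plausible reading is a Dijkstra search over the segmented pull-back vessel between the detected start and end positions. A sketch under that assumption (the 8-connected grid and the cost model are choices made here, not stated in the patent):

```python
import heapq
import math

def shortest_vessel_path(mask, start, end):
    """Dijkstra search from start to end restricted to True pixels of
    mask (the segmented pull-back vessel).

    mask:       2-D boolean array (H x W), e.g. a numpy array.
    start, end: (row, col) positions of the development target.
    Returns the path as a list of (row, col) tuples, or [] if unreachable.
    """
    h, w = len(mask), len(mask[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    # 8-connected moves; diagonal steps cost sqrt(2).
    steps = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr][nc]:
                nd = d + (math.sqrt(2.0) if dr and dc else 1.0)
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    if end not in dist:
        return []
    path, node = [end], end
    while node != start:  # walk predecessors back to the start
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Running this on the segmentation mask of claim 2 yields an ordered pixel list from the start position to the end position, which is then the pull-back path that claim 1 samples.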
3. The method of claim 2, wherein the target detection model is an attention U-shaped network (Attention-U-net) model;
the Attention-U-net model is trained on an angiography image sample set in which pull-back blood vessels are labeled in advance;
and the Attention-U-net model is trained using a loss function that includes a parameter increasing the weight of the pull-back blood vessel.
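The claims identify the segmentation network only by name. For orientation, the additive attention gate that gives Attention-U-net its name (Oktay et al., 2018) re-weights skip-connection features using a coarser gating signal from the decoder; the PyTorch sketch below follows that published design and may differ in detail from the patented model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Additive attention gate: skip features x are re-weighted by a
    mask computed from x and the coarser gating signal g, suppressing
    background and emphasizing vessel-like regions."""

    def __init__(self, x_channels: int, g_channels: int, inter_channels: int):
        super().__init__()
        self.theta_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # g comes from a coarser decoder level; resize it to x's grid.
        g_up = F.interpolate(self.phi_g(g), size=x.shape[2:],
                             mode="bilinear", align_corners=False)
        attention = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + g_up)))
        return x * attention  # (B, C, H, W) scaled by a (B, 1, H, W) mask
```

In a U-shaped segmentation network, one such gate sits on each skip connection, which is what lets the model concentrate on the thin pull-back vessel against a dominant background.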
4. The method of claim 3, wherein the loss function comprises:
$$L_{AUN} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{l=1}^{L}\omega_l\, r_{ln}\,\log\left(P_{ln}\right)$$
$$\omega_l = 1 - \frac{1}{N}\sum_{n=1}^{N} r_{ln}$$
wherein $L_{AUN}$ represents the loss function; $r_{ln}$ represents the ground-truth label of class $l$ at the $n$-th pixel position, the classes comprising pull-back blood vessel path pixels and background pixels; $P_{ln}$ represents the corresponding predicted probability value; and $\omega_l$ represents the weight of each class, where the larger the proportion of a class in the image, the smaller its weight.
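Reading the two formulas as a class-frequency-weighted cross-entropy (the reconstruction above is inferred from the symbol definitions in this claim, since the original equation images are not reproduced here), a direct implementation could be:

```python
import torch

def weighted_ce_loss(probs: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Class-weighted cross-entropy matching the claim's description:
    the rarer a class (e.g. the thin pull-back vessel path), the
    larger its weight.

    probs:  (L, N) predicted probabilities P_ln, class l at pixel n.
    target: (L, N) one-hot ground truth r_ln.
    """
    n = target.shape[1]
    # omega_l = 1 - (fraction of pixels in class l): the dominant
    # background gets a weight near 0, the sparse vessel path near 1.
    omega = 1.0 - target.sum(dim=1) / n                # shape (L,)
    log_p = torch.log(probs.clamp_min(1e-8))           # guard log(0)
    return -(omega[:, None] * target * log_p).sum() / n
```

With a typical angiogram, background pixels vastly outnumber pull-back-path pixels, so this weighting is what realizes the claim's "larger proportion, smaller weight" behaviour.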
5. An apparatus for image processing, comprising an image registration module, wherein the image registration module is configured to:
detect a pull-back path in each angiography image of an angiography image group of an inspection object, wherein the angiography image group is acquired during Optical Coherence Tomography (OCT) of the inspection object;
for each angiography image, sample the pull-back path in the angiography image at equal intervals according to the frame rate of the OCT image group corresponding to the angiography image, to obtain a correspondence between each OCT image in the OCT image group and a pixel position on the pull-back path;
and determine the correspondence between each OCT image and the pixel position on the pull-back path as a registration parameter of the angiography image.
6. The apparatus of claim 5, wherein the image registration module is further configured to:
detect a pull-back blood vessel in each angiography image by using a pre-trained target detection model;
detect a start position and an end position of a development target in the pull-back blood vessel in each angiography image;
project the start position and the end position onto each angiography image by using an iterative closest point (ICP) algorithm;
and obtain the pull-back path in each angiography image from the start position and the end position by using a shortest-path algorithm.
7. The apparatus of claim 6, wherein the target detection model is an attention U-shaped network (Attention-U-net) model;
the Attention-U-net model is trained on an angiography image sample set in which pull-back blood vessels are labeled in advance;
and the Attention-U-net model is trained using a loss function that includes a parameter increasing the weight of the pull-back blood vessel.
8. The apparatus of claim 7, wherein the loss function comprises:
$$L_{AUN} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{l=1}^{L}\omega_l\, r_{ln}\,\log\left(P_{ln}\right)$$
$$\omega_l = 1 - \frac{1}{N}\sum_{n=1}^{N} r_{ln}$$
wherein $L_{AUN}$ represents the loss function; $r_{ln}$ represents the ground-truth label of class $l$ at the $n$-th pixel position, the classes comprising pull-back blood vessel path pixels and background pixels; $P_{ln}$ represents the corresponding predicted probability value; and $\omega_l$ represents the weight of each class, where the larger the proportion of a class in the image, the smaller its weight.
9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method according to any one of claims 1 to 4.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 4.
CN202210916329.7A 2021-07-13 2021-07-13 Image processing method and device, electronic equipment and storage medium Pending CN115423751A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210916329.7A CN115423751A (en) 2021-07-13 2021-07-13 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210916329.7A CN115423751A (en) 2021-07-13 2021-07-13 Image processing method and device, electronic equipment and storage medium
CN202110790290.4A CN113469986A (en) 2021-07-13 2021-07-13 Image processing method and device, electronic equipment and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110790290.4A Division CN113469986A (en) 2021-07-13 2021-07-13 Image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115423751A (en) 2022-12-02

Family

ID=77880096

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210916329.7A Pending CN115423751A (en) 2021-07-13 2021-07-13 Image processing method and device, electronic equipment and storage medium
CN202110790290.4A Pending CN113469986A (en) 2021-07-13 2021-07-13 Image processing method and device, electronic equipment and storage medium


Country Status (2)

Country Link
CN (2) CN115423751A (en)
WO (1) WO2023284056A1 (en)


Also Published As

Publication number Publication date
CN113469986A (en) 2021-10-01
WO2023284056A1 (en) 2023-01-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination