CN114511615A - Method and device for calibrating image - Google Patents

Method and device for calibrating image Download PDF

Info

Publication number
CN114511615A
CN114511615A
Authority
CN
China
Prior art keywords
image
diameter
determining
target object
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111677559.4A
Other languages
Chinese (zh)
Inventor
朱锐
刘超
鲁全茂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN VIVOLIGHT MEDICAL DEVICE & TECHNOLOGY CO LTD
Original Assignee
SHENZHEN VIVOLIGHT MEDICAL DEVICE & TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN VIVOLIGHT MEDICAL DEVICE & TECHNOLOGY CO LTD filed Critical SHENZHEN VIVOLIGHT MEDICAL DEVICE & TECHNOLOGY CO LTD
Priority to CN202111677559.4A
Publication of CN114511615A
Priority to CN202210546813.5A
Priority to PCT/CN2022/109465
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10101Optical tomography; Optical coherence tomography [OCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Abstract

The application provides a method and a device for calibrating an image, and relates to the technical field of image processing. The method comprises the following steps: acquiring a first diameter of a target object in a first image; adding an image of a reference object to the first image to obtain a second image; converting the second image into a third image; determining a fourth radius of the target object in the third image according to a third radius of the reference object in the third image under the polar coordinate system, the first diameter and the second diameter; determining a calibration value according to the third radius and the fourth radius; and determining a calibrated image according to the calibration value. The relative position relationship between the target object and the reference object in the world coordinate system can be used to determine their relative position relationship in the polar coordinate system, and the calibration value in the polar coordinate system can be determined from that relationship, so that the image can be calibrated with the calibration value, avoiding the problem of low calibration precision caused by manual calibration.

Description

Method and device for calibrating image
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a method and an apparatus for calibrating an image.
Background
Images are a main carrier of information, and people can acquire or exchange information through images. To facilitate the analysis of the information in an image, the acquired image is generally processed to obtain a desired result; for example, gray-scale processing is performed on an acquired color image, or calibration processing is performed on an acquired distorted image.
For example, in an Optical Coherence Tomography (OCT) system, the image processing device in the system scans and images biological tissue using the interference principle of a light source to acquire an image of the biological tissue for auxiliary diagnosis of diseases; for example, the OCT device acquires a cross-sectional image of a blood vessel. However, due to the limitations of the image processing device and of the acquisition environment, errors exist in the image acquired by the image processing device, which may cause calibration personnel to misjudge the patient's condition. Therefore, it is often necessary to calibrate the image acquired by the image processing apparatus.
In the related art, the image is generally calibrated manually, but this method depends entirely on the manual operation skill of the calibration personnel, is time-consuming and labor-intensive, and yields low calibration accuracy.
Disclosure of Invention
The embodiment of the application provides a method and a device for calibrating an image, which are used for calibrating the image acquired by an OCT (optical coherence tomography) device and avoiding the problem of low calibration precision caused by manual calibration.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a method for calibrating an image, where the method is applied to an image processing apparatus, and the method includes:
acquiring a first diameter of a target object in a first image, wherein the target object is circular or elliptical;
adding an image of a reference object to the first image to obtain a second image, wherein the second image comprises a target object and the reference object, the reference object is circular, the diameter of the reference object is a second diameter, and image coordinate systems corresponding to the first image and the second image are world coordinate systems;
converting the second image into a third image, wherein an image coordinate system corresponding to the third image is a polar coordinate system;
determining a fourth radius of the target object in the third image according to a third radius of the reference object in the third image under the polar coordinate system, the first diameter and the second diameter;
determining a calibration value according to the third radius and the fourth radius;
and determining the calibrated image according to the calibration value.
In this scheme, a relative position relationship exists between the target object and the reference object in the world coordinate system; after the second image in the world coordinate system is converted into the third image in the polar coordinate system, the target object and the reference object still have a relative position relationship. The relative position relationship in the world coordinate system can therefore be used to determine the relative position relationship in the polar coordinate system, and the calibration value in the polar coordinate system can be determined from it, so that the image can be calibrated with the calibration value. Here, the first diameter and the second diameter represent the relative position relationship between the target object and the reference object in the world coordinate system, and the third radius and the fourth radius represent their relative position relationship in the polar coordinate system. The problem of low calibration precision caused by manual calibration is thus avoided.
Optionally, the method further comprises:
acquiring an original image corresponding to the first image;
inputting an original image into a contour model, and determining a first image, wherein the first image comprises a contour image of a target object;
a first diameter of the target object is determined from the profile image of the target object.
Optionally, determining the first diameter of the target object according to the contour image of the target object includes:
determining the area of the target object in the first image according to the contour image of the target object;
a first diameter of the target object is determined based on the area of the target object.
Optionally, the method further comprises:
acquiring marking information of a first object in the fourth image, wherein the marking information is used for marking the first object in the fourth image, and the first object is circular or elliptical;
inputting the fourth image into the ith detection model to obtain a predicted object;
determining a loss value of the first object based on the marked first object in the fourth image and the predicted object;
determining whether the ith detection model is a contour model according to the loss value of the first object;
wherein i is a positive integer.
In the above scheme, the label information of the first object in the fourth image is acquired, and the fourth image is input into the ith detection model, so as to train the ith detection model, enable the model to predict the first object in the fourth image, and determine the trained detection model as the contour model. The loss value of the first object determined by the marked first object and the predicted object is used as the basis for finishing the training of the contour model, so that the accuracy of predicting the first object by the contour model can be improved.
Optionally, determining a loss value of the first object from the marked first object in the fourth image and the predicted object comprises:
determining a contour loss and a diameter loss of the first object from the marked first object and the predicted object in the fourth image;
a loss value for the first object is determined based on the profile loss and the diameter loss.
Optionally, determining whether the ith detection model is a contour model according to the loss value of the first object includes:
and if the loss value of the first object is less than or equal to the preset loss value, determining the ith detection model as the contour model.
Optionally, determining the fourth radius of the target object in the third image according to the third radius of the reference object in the third image under the polar coordinate system, the first diameter and the second diameter includes:
determining a ratio of the second diameter to the first diameter;
a fourth radius is determined based on the ratio of the second diameter to the first diameter and the third radius.
Optionally, determining the calibrated image according to the calibration value includes:
calibrating the third image according to the calibration value to obtain a calibrated image under a polar coordinate system;
and converting the image calibrated in the polar coordinate system into an image calibrated in the world coordinate system.
In the above scheme, the third image is calibrated according to the calibration value, and the calibrated third image is converted into the calibrated image in the world coordinate system using the conversion relationship between the polar coordinate system and the world coordinate system. That is, the calibrated image is obtained from the determined calibration value and the coordinate conversion relationship between the images, and the image does not need to be calibrated manually, so that the problem of low calibration precision caused by manually calibrating the image is avoided.
In a second aspect, an embodiment of the present application provides an apparatus for calibrating an image, the apparatus including:
the acquiring unit is used for acquiring a first diameter of a target object in the first image, and the shape of the target object is circular or elliptical;
the processing unit is used for adding an image of a reference object to the first image to obtain a second image, the second image comprises a target object and the reference object, the shape of the reference object is circular, the diameter of the reference object is a second diameter, and image coordinate systems corresponding to the first image and the second image are a world coordinate system;
the conversion unit is used for converting the second image into a third image, and an image coordinate system corresponding to the third image is a polar coordinate system;
a determination unit configured to determine a fourth radius of the target object in the third image according to a third radius of the reference object in the third image under the polar coordinate system, the first diameter and the second diameter;
the determining unit is further used for determining a calibration value according to the third radius and the fourth radius;
the determining unit is further configured to determine the calibrated image according to the calibration value.
Optionally, the acquiring unit is further configured to acquire an original image corresponding to the first image.
Optionally, the determining unit is further configured to input the original image into the contour model, and determine a first image, where the first image includes a contour image of the target object.
Optionally, the determining unit is further configured to determine the first diameter of the target object according to the contour image of the target object.
Optionally, the determining unit is specifically configured to determine an area of the target object in the first image according to the contour image of the target object;
a first diameter of the target object is determined based on the area of the target object.
Optionally, the acquiring unit is further configured to acquire mark information of the first object in the fourth image, where the mark information is used to mark the first object in the fourth image, and the first object is circular or elliptical.
Optionally, the processing unit is further configured to input the fourth image into the ith detection model, so as to obtain a predicted object.
Optionally, the determining unit is further configured to determine a loss value of the first object according to the first object marked in the fourth image and the predicted object.
Optionally, the determining unit is further configured to determine whether the ith detection model is a contour model according to the loss value of the first object.
Optionally, the determining unit is further configured to determine a contour loss and a diameter loss of the first object according to the first object marked in the fourth image and the predicted object;
a loss value for the first object is determined based on the profile loss and the diameter loss.
Optionally, the determining unit is further configured to determine the ith detection model as the contour model if the loss value of the first object is less than or equal to a preset loss value, where i is a positive integer.
Optionally, the determining unit is further configured to determine a ratio of the second diameter to the first diameter;
a fourth radius is determined based on the ratio of the second diameter to the first diameter and the third radius.
Optionally, the processing unit is further configured to calibrate the third image according to the calibration value, so as to obtain a calibrated image in a polar coordinate system;
and converting the image calibrated in the polar coordinate system into an image calibrated in the world coordinate system.
In a third aspect, an embodiment of the present application provides an image processing apparatus, including a processor, coupled with a memory, where the processor is configured to execute a computer program or instructions stored in the memory to implement the method of the first aspect or any implementation manner of the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantage: the relative position relationship between the target object and the reference object in the world coordinate system can be used to determine their relative position relationship in the polar coordinate system, and the calibration value in the polar coordinate system can be determined from that relationship, so that the image can be calibrated with the calibration value, where the first diameter and the second diameter represent the relative position relationship in the world coordinate system and the third radius and the fourth radius represent it in the polar coordinate system, thereby avoiding the problem of low calibration precision caused by manual calibration.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of an OCT system for acquiring images according to an embodiment of the present application;
fig. 2 is an image acquired by an OCT apparatus in an OCT system according to an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating a method for calibrating an image according to an embodiment of the present application;
FIG. 4 is a diagram illustrating the relative position of an imaging catheter and marker rings in a polar coordinate system according to an embodiment of the present application;
FIG. 5 is a diagram of the relative position of an imaging catheter and marker ring in a world coordinate system according to an embodiment of the present application;
FIG. 6 is an image before and after calibration of a third image in a polar coordinate system according to an embodiment of the present disclosure;
FIG. 7 is a block diagram illustrating a method for converting an image calibrated in a polar coordinate system to an image calibrated in a world coordinate system according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an apparatus for calibrating an image according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described in detail below with reference to the embodiments of the present application.
It should be understood that the modes, situations, categories and divisions of the embodiments of the present application are for convenience only and do not limit the present application, and the features of the various modes, categories, situations and embodiments can be combined without contradiction.
It should also be understood that "first", "second", "third", and "fourth" in the embodiments of the present application are merely for distinguishing and do not limit the present application in any way. It should also be understood that, in the embodiments of the present application, the size of the sequence number in each process does not mean the execution sequence of the steps, and the execution sequence of the steps is determined by the internal logic thereof, and does not form any limitation on the execution process of the embodiments of the present application.
The image is the main carrier of information, and the information can be acquired or exchanged by using the image. In order to facilitate the analysis of information in an image, generally, one processes the acquired image to obtain a desired result, for example, performing a gray scale process on the acquired color image; for example, the acquired distorted image is subjected to calibration processing.
For example, fig. 1 is a schematic structural diagram of an Optical Coherence Tomography (OCT) system for acquiring an image of a biological tissue, which includes a catheter connection unit, an imaging catheter and an image processing apparatus. As shown in fig. 1, the optical signal emitted from the image processing apparatus is transmitted to the imaging catheter through the catheter connection unit, and an image of the biological tissue is acquired and used for performing an auxiliary treatment, such as acquiring a cross-sectional image of a blood vessel using an OCT apparatus.
However, due to the limitations of the image processing device and of the acquisition environment, errors exist in the image acquired by the image processing device, which may cause the calibrator to misjudge the patient's condition. For example, the OCT apparatus acquires a cross-sectional image of a blood vessel as shown in fig. 2, where a is a target object, which may be an imaging catheter, and b is a reference object, which is artificially added after the cross-sectional image of the blood vessel is acquired and may be a marker ring. When the marker ring is not fitted to the imaging catheter, for example when the lumen cross-sectional area derived from the size of the imaging catheter in fig. 2 is smaller than a predetermined value, the calibrator may conclude that the patient has a vascular stenosis, although this may merely be the result of an imaging error when the OCT apparatus acquired the image; that is, an error exists in the measurement values of the cross-sectional image of the blood vessel acquired by the OCT system. Therefore, it is often necessary to calibrate the images acquired by the OCT apparatus.
In the related art, calibration is generally performed manually, i.e., the marker ring shown in fig. 2 is manually dragged onto the imaging catheter so that the marker ring and the imaging catheter fit as closely as possible. However, this method depends entirely on the manual operation skill of the operator, is time-consuming and labor-intensive, and yields low calibration accuracy.
Based on the problems in the related art, the present application provides a method and a device for calibrating an image. The method includes: first, acquiring a first diameter of a target object in a first image, where the target object is circular or elliptical; adding an image of a reference object to the first image to obtain a second image, where the second image includes the target object and the reference object, the reference object is circular, the diameter of the reference object is a second diameter, and the image coordinate systems corresponding to the first image and the second image are world coordinate systems; converting the second image into a third image, where the image coordinate system corresponding to the third image is a polar coordinate system; determining a fourth radius of the target object in the third image according to a third radius of the reference object in the third image under the polar coordinate system, the first diameter and the second diameter; determining a calibration value according to the third radius and the fourth radius; and finally, determining the calibrated image according to the calibration value. In the world coordinate system, a relative position relationship exists between the target object and the reference object; after the second image in the world coordinate system is converted into the third image in the polar coordinate system, the target object and the reference object still have a relative position relationship. The relative position relationship in the world coordinate system can therefore be used to determine the relative position relationship in the polar coordinate system, and the calibration value in the polar coordinate system can be determined from it, so that the image can be calibrated with the calibration value, where the first diameter and the second diameter represent the relative position relationship in the world coordinate system and the third radius and the fourth radius represent it in the polar coordinate system, avoiding the problem of low calibration precision caused by manual calibration.
The technical solutions of the present application are described in detail below with specific embodiments, which may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Because the implementation of the scheme of the application is completed based on the deep learning network, and the deep learning network model needs to be trained and tested before being used, the process of obtaining the contour model is described first:
before the contour model is not obtained, the model in the training process is called a detection model.
For a better understanding of the solution in the present application, an example of obtaining a contour model is given below:
data set: 4000 blood vessel cross-section images containing the first object, wherein the marking information of the first object in the images is known, the marking information is used for marking the first object in the blood vessel cross-section images, the shape of the first object is circular or elliptical, 3500 blood vessel cross-section images are used as training images for training a detection model, 500 blood vessel cross-section images are used as test images for testing the detection model, and the first object can be an imaging catheter.
Optionally, the 4000 acquired cross-sectional images of the blood vessel may be subjected to image preprocessing, such as rotation, flipping, or contrast adjustment, to enlarge the data set.
According to this scheme, preprocessing the acquired images enlarges the data set, so that the contour model obtained by deep learning on a large number of training images can accurately identify the first object; the contour model can then also identify some special images, such as blood-vessel cross-sectional images with blood residue near the imaging catheter or with an unclear imaging-catheter boundary.
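For illustration, the following is a minimal sketch of such preprocessing, assuming OpenCV and NumPy; the function name and parameter values are illustrative and not specified by the embodiment.

```python
import cv2
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return rotated, flipped and contrast-adjusted variants of one image."""
    h, w = image.shape[:2]
    variants = []
    # Rotate about the image centre (90 degrees here; any angle may be used).
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), 90, 1.0)
    variants.append(cv2.warpAffine(image, m, (w, h)))
    # Horizontal and vertical flips.
    variants.append(cv2.flip(image, 1))
    variants.append(cv2.flip(image, 0))
    # Contrast adjustment: scale pixel values, clipping to the valid range.
    variants.append(cv2.convertScaleAbs(image, alpha=1.3, beta=0))
    return variants
```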
1) Training of the detection model: a first image in the training images is input into the convolutional neural network structure of the i-th detection model, and catheter features are extracted by the convolution kernels. On the feature map of the last layer, the Region Proposal Network (RPN) in the i-th detection model traverses the feature map with a 3 x 3 sliding window, and 9 anchor frames are generated at the center of each sliding window, i.e., at each pixel point, according to the size values [52, 32, 64] and the proportion values [0.5, 1, 2]. The 9 anchor frames are input into the fully connected layer of the i-th detection model to obtain a binary classification value and a bounding box regression (BB) for each anchor frame, where the binary classification values are specifically the probability of foreground and the probability of background in the anchor frame, and 300 regions of interest (ROI) are finally output. The 300 ROIs are input into the RPN for a second classification and BB regression, the ROIs whose anchor frames contain background are filtered out, and an ROI Align operation is performed on the remaining ROIs in the i-th detection model: the ROI Align network performs interpolation on the feature maps so that each of the remaining ROIs yields a feature map of fixed size. Three output vectors are finally obtained: the classification result of each ROI, the BB regression, and a binary mask, where the ROI is the object predicted by the i-th detection model, and the classification result of an ROI is specifically the probability of the first object and the probability of a non-first object in the ROI. The three losses obtained from these three vectors, i.e., the classification loss, the regression loss and the mask loss, together with the contour loss and the diameter loss of the first object obtained from the marked first object and the predicted first object, are determined as the loss value of the first object.
If the loss value is less than or equal to a preset loss value, determining the ith detection model as a contour model;
if the loss value is larger than the preset loss value, the ith detection model is adjusted according to the loss value, the (i + 1) th detection model is determined, and the training process of the detection model is repeated until the obtained loss value is smaller than or equal to the preset loss value.
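The training loop above can be summarized schematically as follows; `model`, `train_step`, and `adjust` are hypothetical stand-ins, since the embodiment does not prescribe an API.

```python
def train_until_contour_model(model, train_images, preset_loss):
    # Schematic only: the i-th detection model is adjusted and retrained
    # until its loss value falls to or below the preset loss value.
    i = 1
    while True:
        # Loss value of the first object: classification + regression + mask
        # losses plus the contour and diameter losses described above.
        loss = model.train_step(train_images)
        if loss <= preset_loss:
            return model  # the i-th detection model becomes the contour model
        model.adjust(loss)  # derive the (i+1)-th detection model
        i += 1
```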
The contour loss of the first object is computed pixel-wise, where i indexes the pixel points at corresponding positions of the first object image, Y is the marked first object, and Y^ is the first object predicted by the model. The diameter loss of the first object is:
Loss_diameter = (D_auto - D_manual) / D_manual,
where D_auto is the predicted diameter of the first object and D_manual is the manually marked diameter of the first object.
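A sketch of the two losses in Python follows. The diameter loss follows the formula above; the contour loss is shown as binary cross-entropy purely as one plausible pixel-wise choice, since the source gives its formula only as a drawing.

```python
import numpy as np

def contour_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Pixel-wise loss between the marked first object Y and the predicted
    # first object Y^. Binary cross-entropy is an assumption here, not the
    # patent's stated formula.
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))

def diameter_loss(d_auto: float, d_manual: float) -> float:
    # Loss_diameter = (D_auto - D_manual) / D_manual, where D_auto is the
    # predicted diameter and D_manual the manually marked diameter.
    return (d_auto - d_manual) / d_manual
```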
In the above scheme, the label information of the first object in the fourth image is obtained, and the fourth image is input into the ith detection model, so as to train the ith detection model, enable the model to predict the first object in the fourth image, and determine the trained detection model as the contour model. The loss value of the first object determined by the marked first object and the predicted object is used as the basis for finishing the training of the contour model, so that the accuracy of the contour model for identifying the first object can be improved.
2) Testing of the contour model: each test image is input into the contour model, which predicts a contour ROI segmentation map. The coincidence degree between the predicted contour ROI map and the marked ROI map is calculated, and when the coincidence degree is smaller than a first value, the testing of the contour model is completed.
Fig. 3 is a schematic flowchart of a method for calibrating an image according to an embodiment of the present application, as shown in fig. 3, the method is applied to an image processing apparatus, and the method includes the following steps:
s310, the image processing device acquires a first diameter of a target object in the first image, wherein the target object is circular or elliptical in shape.
Alternatively, the first image in step S310 may be a blood vessel cross-sectional image.
The target object in the first image may be an imaging catheter.
For example, an image processing device acquires a first diameter of an imaging catheter in a cross-sectional image of a blood vessel.
Alternatively, when the shape of the target object in step S310 is an ellipse, the major axis of the ellipse is taken as the first diameter of the target object.
Alternatively, when the shape of the target object in step S310 is an ellipse, the minor axis of the ellipse is taken as the first diameter of the target object.
Alternatively, when the shape of the target object in step S310 is an ellipse, the average axial length of the minor axis and the major axis of the ellipse is taken as the first diameter of the target object.
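The three alternatives above can be expressed as a small helper; the function and its `mode` parameter are illustrative only and not terms used by the embodiment.

```python
def first_diameter_of_ellipse(major_axis: float, minor_axis: float,
                              mode: str = "mean") -> float:
    # Major axis, minor axis, or the average axial length of the two.
    if mode == "major":
        return major_axis
    if mode == "minor":
        return minor_axis
    return (major_axis + minor_axis) / 2.0
```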
Optionally, step S310 further includes: the image processing equipment acquires an original image corresponding to the first image;
the image processing equipment inputs an original image into a contour model, and determines a first image, wherein the first image comprises a contour image of a target object;
the image processing apparatus determines a first diameter of the target object based on the contour image of the target object.
In one implementation, the original image is input into a contour model, a ROI segmentation map containing a second object is determined from the contour model, the second object is determined as the target object, and an image including a contour of the target object is determined as the first image.
Optionally, the determining, by the image processing apparatus, the first diameter of the target object according to the contour image of the target object includes:
determining the area of the target object in the first image according to the contour image of the target object;
a first diameter of the target object is determined based on the area of the target object.
In one implementation, the image processing device determines the number of pixel points in the contour image of the target object using the contour model, and determines the area of the target object according to the number of pixel points and the square of the pixel pitch; the first diameter of the target object is then determined according to the relationship between the area and the diameter, where the square of the pixel pitch represents the area covered by each pixel point.
For example, the number of pixel points is 56600, the square of the pixel pitch is 0.01, and the diameter formula is D = 2 × √(S/π), where S is the area of the target object. The area of the target object is obtained as 56600 × 0.01 = 566, and the first diameter of the target object is then obtained from D = 2 × √(S/π) as approximately 27.
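A minimal sketch of this area-to-diameter computation, assuming a binary NumPy mask; the function name is illustrative.

```python
import numpy as np

def first_diameter_from_mask(mask: np.ndarray, pixel_pitch_sq: float) -> float:
    # S = (number of foreground pixel points) x (square of the pixel pitch);
    # D = 2 * sqrt(S / pi). With 56600 pixels and pixel_pitch_sq = 0.01,
    # S = 566 and D is approximately 27, matching the example above.
    area = np.count_nonzero(mask) * pixel_pitch_sq
    return 2.0 * float(np.sqrt(area / np.pi))
```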
Optionally, the image processing device obtains the number of pixel points of the longest axis on the target object, and determines the number as the first diameter of the target object.
For example, if the number of the pixel points that acquire the longest axis on the target object is 26, the first diameter of the target object is 26.
And S320, adding the image of the reference object to the first image by the image processing equipment to obtain a second image, wherein the second image comprises a target object and the reference object, the shape of the reference object is circular, the diameter of the reference object is a second diameter, and image coordinate systems corresponding to the first image and the second image are world coordinate systems.
Optionally, in step S320, the first image may be a cross-sectional image of a blood vessel, the first image including an imaging catheter therein;
the reference object may be a marker ring.
For example, the image processing device adds an image of the marker ring to a cross-sectional image of a blood vessel including the imaging catheter, resulting in a second image including the imaging catheter and the marker ring.
S330, the image processing equipment converts the second image into a third image, and an image coordinate system corresponding to the third image is a polar coordinate system.
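A minimal sketch of this coordinate conversion, assuming OpenCV's warpPolar; the centring and sizing choices are illustrative, since the embodiment does not fix them.

```python
import cv2
import numpy as np

def world_to_polar(img: np.ndarray) -> np.ndarray:
    # Second image (world coordinate system) -> third image (polar
    # coordinate system), centred on the image centre.
    h, w = img.shape[:2]
    center = (w / 2.0, h / 2.0)
    return cv2.warpPolar(img, (w, h), center, min(center),
                         cv2.WARP_POLAR_LINEAR)

def polar_to_world(polar_img: np.ndarray, size: tuple) -> np.ndarray:
    # Inverse mapping, used again in step S360 to convert the image
    # calibrated in the polar coordinate system back to the world
    # coordinate system.
    w, h = size
    center = (w / 2.0, h / 2.0)
    return cv2.warpPolar(polar_img, (w, h), center, min(center),
                         cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)
```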
S340, the image processing apparatus determines a fourth radius of the target object in the third image according to a third radius of the reference object in the third image under the polar coordinate system, the first diameter and the second diameter.
Optionally, step S340 includes: the image processing device determines the ratio of the second diameter to the first diameter according to the first diameter and the second diameter;
the image processing apparatus determines a fourth radius based on a ratio of the second diameter to the first diameter and the third radius.
For example, the fourth radius is determined according to F2 = F1 ÷ (D1/D2), i.e., according to the principle that the ratio of the third radius of the reference object to the fourth radius of the target object in the polar coordinate system is equal to the ratio of the second diameter of the reference object to the first diameter of the target object in the world coordinate system. Here F1 is the third radius of the reference object, i.e., the distance between the position of the marker ring and the theta axis as shown in FIG. 4, with the specific value F1 = R ÷ (7/2) × (0.89/2), where R is the calibration range under the 7 mm field of view; F2 is the fourth radius of the target object, i.e., the distance between the position of the imaging catheter and the theta axis as shown in FIG. 4; D1 is the second diameter of the reference object, i.e., the marker ring, as shown in FIG. 5; and D2 is the first diameter of the target object, i.e., the imaging catheter, as shown in FIG. 5.
Optionally, step S340 includes: the image processing device determines a first ratio of the first diameter to the second diameter according to the first diameter and the second diameter;
the image processing apparatus determines a fourth radius based on a product of the third radius and the first ratio.
For example, the fourth radius is determined according to F2 = F1 × (D2/D1), i.e., according to the principle that the ratio of the fourth radius of the target object to the third radius of the reference object in the polar coordinate system is equal to the ratio of the first diameter of the target object to the second diameter of the reference object in the world coordinate system, where F1 is the third radius of the reference object, specifically F1 = R ÷ (7/2) × (0.89/2); R is the calibration range under the 7 mm field of view; F2 is the fourth radius of the target object; D1 is the second diameter of the reference object, which may be the second diameter of the marker ring; and D2 is the first diameter of the target object, which may be the first diameter of the imaging catheter.
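Both variants reduce to the same computation; a one-line sketch with illustrative names:

```python
def fourth_radius(f1: float, d1: float, d2: float) -> float:
    # F2 = F1 / (D1 / D2) = F1 * (D2 / D1): the fourth radius of the target
    # object from the third radius of the reference object and the diameter
    # ratio in the world coordinate system.
    return f1 * (d2 / d1)

# F1 itself follows from the calibration range R under the 7 mm field of
# view, per the examples above: f1 = r / (7.0 / 2.0) * (0.89 / 2.0).
```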
S350, the image processing device determines a calibration value according to the third radius and the fourth radius.
Optionally, step S350 includes: the image processing device determines the calibration value according to the difference between the fourth radius and the third radius and a first initial value, wherein the first initial value is the initial calibration value.
For example, according to F' = F2 - F1 = F1 ÷ (D1/D2) - F1 = F1 × (D2/D1 - 1) and the first initial value S, the calibration value is determined by S' = S + F1 × (D2/D1 - 1), where S represents the initial calibration value, specifically 0; F1 is the third radius of the reference object; F2 is the fourth radius of the target object; D1 is the second diameter of the reference object; D2 is the first diameter of the target object; F' is the difference between the fourth radius and the third radius; and S' is the calibration value.
Optionally, step S350 includes: the image processing device determines a first difference value by taking the absolute value of the difference between the fourth radius and the third radius;
the image processing device determines a calibration value according to the sum of the first difference value and a first initial value, wherein the first initial value is the initial calibration value.
For example, according to F' = |F1 - F2| = |F1 × (D2/D1) - F1| = F1 × |D2/D1 - 1| and the first initial value S, the calibration value is determined by S' = S + F1 × |D2/D1 - 1|, where S represents the initial calibration value, specifically 0; F1 is the third radius of the reference object; F2 is the fourth radius of the target object; D1 is the second diameter of the reference object; D2 is the first diameter of the target object; F' is the first difference value; and S' is the calibration value.
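Both examples reduce to adding the (absolute) radius difference to the initial value; a minimal sketch with illustrative names:

```python
def calibration_value(f1: float, f2: float, s: float = 0.0) -> float:
    # S' = S + |F2 - F1|, with S the initial calibration value
    # (0 in the examples above).
    return s + abs(f2 - f1)
```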
S360, the image processing device determines a calibrated image according to the calibration value.
Optionally, step S360 includes: and the image processing equipment calibrates the third image according to the calibration value and determines a calibrated image under a polar coordinate system.
For example, fig. 6 shows the third image before and after calibration in the polar coordinate system as provided in the embodiment of the present application. Fig. 6(a) is the third image, and fig. 6(b) is the image obtained by calibrating the third image. The specific process is as follows: in the third image of fig. 6(a), a distance equal to the calibration value F' is measured downward from the starting point 0 of the vertical axis, and the third image is cut horizontally at F'. At this time, the height of the third image is less than 670 pixel points, so zero-padding is performed on the third image to obtain the image shown in fig. 6(b), i.e., the calibrated image. Here, 670 pixel points is specifically the height, in the current environment, of the image in the polar coordinate system corresponding to the 7 mm field of view.
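A minimal sketch of this cut-and-pad step, assuming a single-channel NumPy polar image; the names are illustrative.

```python
import numpy as np

def calibrate_polar_image(polar_img: np.ndarray, f_prime: int,
                          target_height: int = 670) -> np.ndarray:
    # Cut the third image horizontally at the calibration value F' (measured
    # downward from row 0 of the vertical axis) and zero-pad the bottom back
    # to the fixed height; 670 rows corresponds to the 7 mm field of view in
    # the example above.
    cut = polar_img[int(f_prime):, :]
    pad_rows = max(target_height - cut.shape[0], 0)
    return np.pad(cut, ((0, pad_rows), (0, 0)), mode="constant")
```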
Optionally, step S360 includes: the image processing equipment calibrates the third image according to the calibration value to obtain a calibrated image under a polar coordinate system;
the image processing apparatus converts the image calibrated in the polar coordinate system into an image calibrated in the world coordinate system.
For example, as shown in fig. 7, the image calibrated in the polar coordinate system in fig. 6(b) is converted into the image calibrated in the world coordinate system.
In the above scheme, the third image is calibrated according to the calibration value, and the calibrated third image is converted into the calibrated image in the world coordinate system using the conversion relationship between the polar coordinate system and the world coordinate system. That is, the calibrated image is obtained from the determined calibration value and the coordinate conversion relationship between the images, and the image does not need to be calibrated manually, so that the problem of low calibration precision caused by manually calibrating the image is avoided.
Fig. 8 is a schematic structural diagram of an apparatus for calibrating an image according to an embodiment of the present disclosure, and as shown in fig. 8, the apparatus according to the embodiment includes:
an obtaining unit 810, configured to obtain a first diameter of a target object in the first image, where the target object is circular or elliptical;
a processing unit 820, configured to add an image of a reference object to the first image to obtain a second image, where the second image includes a target object and the reference object, the reference object is circular, the reference object has a second diameter, and image coordinate systems corresponding to the first image and the second image are a world coordinate system;
a converting unit 830, configured to convert the second image into a third image, where an image coordinate system corresponding to the third image is a polar coordinate system;
a determining unit 840, configured to determine a fourth radius of the target object in the third image according to a third radius of the reference object in the third image under the polar coordinate system, the first diameter and the second diameter;
the determining unit is further used for determining a calibration value according to the third radius and the fourth radius;
the determining unit is further configured to determine the calibrated image according to the calibration value.
Optionally, the acquiring unit is further configured to acquire an original image corresponding to the first image.
Optionally, the determining unit is further configured to input the original image into the contour model, and determine a first image, where the first image includes a contour image of the target object.
Optionally, the determining unit is further configured to determine the first diameter of the target object according to the contour image of the target object.
Optionally, the determining unit is specifically configured to determine an area of the target object in the first image according to the contour image of the target object;
a first diameter of the target object is determined based on the area of the target object.
Optionally, the acquiring unit is further configured to acquire mark information of the first object in the fourth image, where the mark information is used to mark the first object in the fourth image, and the first object is circular or elliptical.
Optionally, the processing unit is further configured to input the fourth image into the ith detection model, so as to obtain a predicted object.
Optionally, the determining unit is further configured to determine a loss value of the first object according to the first object marked in the fourth image and the predicted object.
Optionally, the determining unit is further configured to determine whether the ith detection model is a contour model according to the loss value of the first object.
Optionally, the determining unit is further configured to determine a contour loss and a diameter loss of the first object according to the first object marked in the fourth image and the predicted object;
a loss value for the first object is determined based on the profile loss and the diameter loss.
Optionally, the determining unit is further configured to determine the ith detection model as the contour model if the loss value of the first object is less than or equal to the preset loss value.
Optionally, the determining unit is further configured to determine a ratio of the second diameter to the first diameter;
a fourth radius is determined based on the ratio of the second diameter to the first diameter and the third radius.
Optionally, the processing unit is further configured to calibrate the third image according to the calibration value, so as to obtain a calibrated image in a polar coordinate system;
and converting the image calibrated in the polar coordinate system into an image calibrated in the world coordinate system.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Based on the same inventive concept, fig. 9 is an image processing apparatus provided in an embodiment of the present application, and includes a processor, where the processor is coupled with a memory, and the processor is configured to execute a computer program or instructions stored in the memory to implement the method of the first aspect or any implementation manner of the first aspect.
If the integrated unit described above is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable storage medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard drive, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not include electrical carrier signals or telecommunication signals in accordance with legislation and patent practice.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the above-described apparatus/device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for calibrating an image, the method being applied to an image processing apparatus, comprising:
acquiring a first diameter of a target object in a first image, wherein the target object is circular or elliptical;
adding an image of a reference object to the first image to obtain a second image, wherein the second image comprises the target object and the reference object, the reference object is circular, the reference object has a second diameter, and image coordinate systems corresponding to the first image and the second image are world coordinate systems;
converting the second image into a third image, wherein an image coordinate system corresponding to the third image is a polar coordinate system;
determining a fourth radius of the target object in the third image according to the third radius of the reference object in the third image under the polar coordinate system, the first diameter and the second diameter;
determining a calibration value according to the third radius and the fourth radius;
and determining a calibrated image according to the calibration value.
2. The method of claim 1, wherein the method further comprises:
acquiring an original image corresponding to the first image;
inputting the original image into a contour model, and determining the first image, wherein the first image comprises a contour image of the target object;
determining the first diameter of the target object from the profile image of the target object.
3. The method of claim 2, wherein said determining said first diameter of said target object from said profile image of said target object comprises:
determining the area of the target object in the first image according to the contour image of the target object;
determining the first diameter of the target object from an area of the target object.
4. The method as recited in claim 2, wherein said method further comprises:
acquiring marking information of a first object in a fourth image, wherein the marking information is used for marking the first object in the fourth image, and the first object is circular or elliptical;
inputting the fourth image into an ith detection model to obtain a predicted object;
determining a loss value for the first object based on the first object marked in the fourth image and the predicted object;
determining whether the ith detection model is the contour model according to the loss value of the first object;
wherein i is a positive integer.
5. The method of claim 4, wherein determining the loss value for the first object from the first object marked in the fourth image and the predicted object comprises:
determining a contour loss and a diameter loss of the first object from the first object marked in the fourth image and the predicted object;
determining the loss value for the first object based on the contour loss and the diameter loss.
6. The method of claim 4, wherein the determining whether the ith detection model is the contour model according to the loss value of the first object comprises:
and if the loss value of the first object is less than or equal to a preset loss value, determining the ith detection model as the contour model.
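
Claims 4 to 6 read as an iterative training procedure: each candidate (the ith detection model) is scored with a combined contour-plus-diameter loss and accepted as the contour model once that loss reaches the preset value. A hedged PyTorch-style sketch; the model interface, the particular loss functions, and the equal weighting are our assumptions:

    import torch
    import torch.nn.functional as F

    def train_until_contour_model(model, loader, preset_loss=0.05):
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        for i, (fourth_image, gt_mask, gt_diameter) in enumerate(loader, 1):
            # The ith detection model predicts the object (claim 4).
            pred_mask, pred_diameter = model(fourth_image)
            # Contour loss plus diameter loss (claim 5); the model is
            # assumed to output mask probabilities in [0, 1].
            contour_loss = F.binary_cross_entropy(pred_mask, gt_mask)
            diameter_loss = F.l1_loss(pred_diameter, gt_diameter)
            loss = contour_loss + diameter_loss
            # The ith model becomes the contour model once the loss value
            # is at or below the preset loss value (claim 6).
            if loss.item() <= preset_loss:
                return model
            opt.zero_grad()
            loss.backward()
            opt.step()
        return model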
7. The method of claim 1, wherein determining the fourth radius of the target object in the third image according to the third radius of the reference object in the third image under the polar coordinate system, the first diameter, and the second diameter comprises:
determining a ratio of the second diameter to the first diameter;
determining the fourth radius from the ratio of the second diameter to the first diameter and the third radius.
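
Geometrically, the polar transform maps world-space radius linearly onto one image axis, so the two polar radii stand in the same ratio as the two world-space diameters. Under that reading of claim 7:

    def fourth_radius(third_radius, first_diameter, second_diameter):
        # Ratio of the second diameter to the first diameter (claim 7).
        ratio = second_diameter / first_diameter
        # Dividing the reference's polar radius by that ratio yields the
        # target's polar radius: r4 = r3 * d1 / d2.
        return third_radius / ratio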
8. The method of any one of claims 1 to 7, wherein said determining a calibrated image according to the calibration value comprises:
calibrating the third image according to the calibration value to obtain a calibrated image under a polar coordinate system;
and converting the image calibrated under the polar coordinate system into an image calibrated under a world coordinate system.
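
With OpenCV, the inverse conversion of claim 8 is the same warpPolar call with the WARP_INVERSE_MAP flag. The column shift below is only one plausible way to calibrate the third image according to the calibration value; the claim leaves that step open:

    import cv2
    import numpy as np

    def calibrate_and_invert(third_image, calibration_value, center, max_radius):
        # Calibrate in polar space: shift the radius axis (columns) by the
        # calibration value. np.roll wraps around; a real implementation
        # might pad instead.
        calibrated_polar = np.roll(third_image,
                                   int(round(calibration_value)), axis=1)
        # Convert the calibrated polar image back to the world coordinate
        # system.
        h, w = third_image.shape[:2]
        return cv2.warpPolar(calibrated_polar, (w, h), center, max_radius,
                             cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)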
9. An apparatus for calibrating an image, the apparatus comprising:
an acquisition unit, configured to acquire a first diameter of a target object in a first image, wherein the target object is circular or elliptical;
a determining unit, configured to add an image of a reference object to the first image to determine a second image, wherein the second image comprises the target object and the reference object, the reference object is circular, the reference object has a second diameter, and the image coordinate systems corresponding to the first image and the second image are world coordinate systems;
a conversion unit, configured to convert the second image into a third image, wherein the image coordinate system corresponding to the third image is a polar coordinate system;
the determining unit is further configured to determine a fourth radius of the target object in the third image according to a third radius of the reference object in the third image under the polar coordinate system, the first diameter, and the second diameter;
the determining unit is further configured to determine a calibration value according to the third radius and the fourth radius;
the determining unit is further configured to determine a calibrated image according to the calibration value.
10. An image processing apparatus comprising a processor coupled with a memory, the processor being configured to implement the method of any one of claims 1-8 when executing a computer program or instructions stored in the memory.
CN202111677559.4A 2021-12-31 2021-12-31 Method and device for calibrating image Withdrawn CN114511615A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202111677559.4A CN114511615A (en) 2021-12-31 2021-12-31 Method and device for calibrating image
CN202210546813.5A CN116433743A (en) 2021-12-31 2022-05-19 Image calibration method and device
PCT/CN2022/109465 WO2023124069A1 (en) 2021-12-31 2022-08-01 Method and apparatus for calibrating image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111677559.4A CN114511615A (en) 2021-12-31 2021-12-31 Method and device for calibrating image

Publications (1)

Publication Number Publication Date
CN114511615A (en) 2022-05-17

Family

ID=81548024

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111677559.4A Withdrawn CN114511615A (en) 2021-12-31 2021-12-31 Method and device for calibrating image
CN202210546813.5A Pending CN116433743A (en) 2021-12-31 2022-05-19 Image calibration method and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210546813.5A Pending CN116433743A (en) 2021-12-31 2022-05-19 Image calibration method and device

Country Status (2)

Country Link
CN (2) CN114511615A (en)
WO (1) WO2023124069A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023124069A1 (en) * 2021-12-31 2023-07-06 深圳市中科微光医疗器械技术有限公司 Method and apparatus for calibrating image

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100890102B1 (en) * 2004-12-27 2009-03-24 올림푸스 가부시키가이샤 Medical image processing apparatus, and medical image processing method
US7813609B2 (en) * 2007-11-12 2010-10-12 Lightlab Imaging, Inc. Imaging catheter with integrated reference reflector
US9286673B2 (en) * 2012-10-05 2016-03-15 Volcano Corporation Systems for correcting distortions in a medical image and methods of use thereof
US9301687B2 (en) * 2013-03-13 2016-04-05 Volcano Corporation System and method for OCT depth calibration
CN109118508B (en) * 2018-08-31 2022-03-25 成都美律科技有限公司 IVOCT image blood vessel wall lumen contour extraction method
CN111062942B (en) * 2020-03-16 2020-06-12 南京景三医疗科技有限公司 Blood vessel bifurcation detection method and device and medical equipment
CN113421231B (en) * 2021-06-08 2023-02-28 杭州海康威视数字技术股份有限公司 Bleeding point detection method, device and system
CN114511615A (en) * 2021-12-31 2022-05-17 深圳市中科微光医疗器械技术有限公司 Method and device for calibrating image

Also Published As

Publication number Publication date
WO2023124069A1 (en) 2023-07-06
CN116433743A (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN111784671B (en) Pathological image focus region detection method based on multi-scale deep learning
CN111325739B (en) Method and device for detecting lung focus and training method of image detection model
CN102525381B (en) The recording equipment of image processing apparatus, image processing method and embodied on computer readable
CN109389129B (en) Image processing method, electronic device and storage medium
JP5959168B2 (en) Image processing apparatus, operation method of image processing apparatus, and image processing program
WO2013187206A1 (en) Image processing device, image processing method, and image processing program
CN112348082B (en) Deep learning model construction method, image processing method and readable storage medium
CN110974306B (en) System for discernment and location pancreas neuroendocrine tumour under ultrasonic endoscope
US11810293B2 (en) Information processing device, information processing method, and computer program
JP2013051988A (en) Device, method and program for image processing
WO2017221592A1 (en) Image processing device, image processing method, and image processing program
CN112164043A (en) Method and system for splicing multiple fundus images
CN116453104B (en) Liquid level identification method, liquid level identification device, electronic equipment and computer readable storage medium
CN114511615A (en) Method and device for calibrating image
CN116977338B (en) Chromosome case-level abnormality prompting system based on visual semantic association
CN110956623B (en) Wrinkle detection method, wrinkle detection device, wrinkle detection equipment and computer-readable storage medium
CN116091522A (en) Medical image segmentation method, device, equipment and readable storage medium
CN111445456A (en) Classification model, network model training method and device, and identification method and device
CN114612669B (en) Method and device for calculating ratio of inflammation to necrosis of medical image
CN116597016A (en) Optical fiber endoscope image calibration method
CN114511526A (en) Aneurysm segmentation method and device based on deep learning
CN112734701A (en) Fundus focus detection method, fundus focus detection device and terminal equipment
CN113269747A (en) Pathological picture liver cancer diffusion detection method and system based on deep learning
CN117496584B (en) Eyeball tracking light spot detection method and device based on deep learning
CN115439686B (en) Method and system for detecting object of interest based on scanned image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220517