CN113538572A - Method, device and equipment for determining coordinates of target object - Google Patents

Method, device and equipment for determining coordinates of target object

Info

Publication number
CN113538572A
CN113538572A
Authority
CN
China
Prior art keywords
dimensional
target object
dimensional image
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010308007.5A
Other languages
Chinese (zh)
Inventor
何滨
徐琦
童睿
林必贵
陈汉清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Santan Medical Technology Co Ltd
Original Assignee
Hangzhou Santan Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Santan Medical Technology Co Ltd filed Critical Hangzhou Santan Medical Technology Co Ltd
Priority to CN202010308007.5A priority Critical patent/CN113538572A/en
Publication of CN113538572A publication Critical patent/CN113538572A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Abstract

The application provides a method, an apparatus, and a device for determining coordinates of a target object, wherein the method comprises the following steps: determining a two-dimensional image and a three-dimensional image of a target object; determining three-dimensional reconstruction coordinates of the target object in the three-dimensional image according to image registration information between the two-dimensional image and the three-dimensional image and two-dimensional coordinate information of the target object in the two-dimensional image; and determining the three-dimensional reconstruction coordinates as the coordinate information of the target object in the three-dimensional image.

Description

Method, device and equipment for determining coordinates of target object
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for determining coordinates of a target object.
Background
With the rapid development of imaging technology, a wide variety of imaging devices has emerged; meanwhile, driven by the introduction of new imaging devices and the continuous improvement of existing ones, imaging technology has found wide application in more and more industries.
Different types of images obtained by different imaging devices have different characteristics and application values, so integrating the advantages of different types of imaging devices to achieve comprehensive utilization of different types of image information is of significant research interest. However, in the related art, the feature information of a target object in a given type of image can only be acquired from that type of image, which restricts cross-utilization between different types of image information, limits the paths by which feature information can be acquired, and results in low feature-information acquisition efficiency.
Disclosure of Invention
In view of the above, the present application provides a method, an apparatus, and a device for determining coordinates of a target object, so as to solve at least the technical problems in the related art.
In order to achieve the above purpose, the present application provides the following technical solutions:
according to a first aspect of the present application, a method for determining coordinates of a target object is presented, the method comprising:
determining a two-dimensional image and a three-dimensional image about a target object;
determining a three-dimensional reconstruction coordinate of the target object in the three-dimensional image according to image registration information between the two-dimensional image and the three-dimensional image and two-dimensional coordinate information of the target object in the two-dimensional image;
and determining the three-dimensional reconstruction coordinates as coordinate information of the target object in the three-dimensional image.
Optionally, the three-dimensional image is labeled with three-dimensional planning coordinates of the target object, and the method further includes:
displaying difference information between the three-dimensional reconstruction coordinates and the three-dimensional planning coordinates, the difference information being used to indicate how the pose of the target object is to be adjusted.
Optionally, the method further includes:
determining edge contour curves of at least two edge regions of a pattern formed by a target object in a two-dimensional image, wherein the edge contour curves are obtained by fitting features in the edge regions;
and determining intersection point coordinates among the edge contour curves, and taking the determined intersection point coordinates as two-dimensional coordinate information of the target object represented by the two-dimensional image.
Optionally, the method further includes:
performing image detection on the two-dimensional image to determine feature points corresponding to a target object in the two-dimensional image;
and determining the two-dimensional coordinate information of the target object according to the coordinate information of the characteristic points.
Optionally, after determining the two-dimensional image and the three-dimensional image containing the target object, the method further includes:
and calculating the two-dimensional coordinate information of the pre-marked object in the two-dimensional image and the three-dimensional coordinate information of the pre-marked object in the three-dimensional image according to a PNP algorithm so as to determine the image registration information between the two-dimensional image and the three-dimensional image.
Optionally, the three-dimensional image is labeled with three-dimensional planning coordinates of the target object, and the method further includes:
determining an offset vector between the three-dimensional reconstructed coordinates and the three-dimensional planning coordinates;
generating adjustment navigation information based on the offset vector, the adjustment navigation information including an adjustment path from the three-dimensional reconstruction coordinates to the three-dimensional planning coordinates.
According to a second aspect of the present application, there is provided an apparatus for determining coordinates of a target object, the apparatus comprising:
an image determining unit that determines a two-dimensional image and a three-dimensional image with respect to a target object;
the first coordinate determination unit is used for determining three-dimensional reconstruction coordinates of the target object in the three-dimensional image according to image registration information between the two-dimensional image and the three-dimensional image and two-dimensional coordinate information of the target object in the two-dimensional image;
and the second coordinate determination unit is used for determining the three-dimensional reconstruction coordinates as the coordinate information of the target object in the three-dimensional image.
Optionally, the three-dimensional image is labeled with three-dimensional planning coordinates of the target object, and the apparatus further includes:
an information presentation unit that presents difference information between the three-dimensional reconstruction coordinates and the three-dimensional planning coordinates, the difference information being used to indicate how the pose of the target object is to be adjusted.
Optionally, the apparatus further comprises:
a curve determining unit that determines edge contour curves of at least two edge regions of a pattern formed in a two-dimensional image by a target object, the edge contour curves being obtained by fitting features in the edge regions;
and the third coordinate determination unit is used for determining intersection point coordinates among the edge contour curves so as to take the determined intersection point coordinates as the two-dimensional coordinate information of the target object represented by the two-dimensional image.
Optionally, the apparatus further includes:
the image detection unit is used for carrying out image detection on the two-dimensional image so as to determine a characteristic point corresponding to a target object in the two-dimensional image;
and the fourth coordinate determination unit is used for determining the two-dimensional coordinate information of the target object according to the coordinate information of the characteristic points.
Optionally, the apparatus further comprises:
and the information calculation unit is used for calculating the two-dimensional coordinate information of the pre-marked object in the two-dimensional image and the three-dimensional coordinate information of the pre-marked object in the three-dimensional image according to a PNP algorithm so as to determine the image registration information between the two-dimensional image and the three-dimensional image.
Optionally, the three-dimensional image is labeled with three-dimensional planning coordinates of the target object, and the apparatus further includes:
an offset determining unit that determines an offset vector between the three-dimensional reconstruction coordinates and the three-dimensional planning coordinates;
and an information adjusting unit that generates adjustment navigation information based on the offset vector, the adjustment navigation information including an adjustment path from the three-dimensional reconstruction coordinates to the three-dimensional planning coordinates.
According to a third aspect of the present application, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of the first aspect.
According to a fourth aspect of the present application, a computer-readable storage medium is proposed, on which computer instructions are stored, which instructions, when executed by a processor, carry out the steps of the method of the first aspect described above.
According to the above embodiments, the three-dimensional coordinate information of the target object in the three-dimensional image can be determined from the two-dimensional coordinates of the target object in the two-dimensional image and the image registration information between the two-dimensional image and the three-dimensional image. This expands the paths by which feature information in the three-dimensional image can be obtained; in particular, when the position information of the target object cannot be obtained conveniently, or cannot be obtained in real time, by three-dimensional imaging equipment, the technical solution of the present application can improve the efficiency of obtaining the position information of the target object.
Drawings
FIG. 1 is a flow chart of a method for coordinate determination of a target object according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart of another method of coordinate determination of a target object in an exemplary embodiment according to the present application;
FIG. 3 is a schematic diagram illustrating one of determining registration information according to an exemplary embodiment of the present application;
FIG. 4 is a schematic block diagram of an electronic device according to an exemplary embodiment of the present application;
fig. 5 is a block diagram of a coordinate determination apparatus of a target object according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
With the rapid development of imaging technology, the introduction of new imaging devices and the continued improvement of existing ones have led to the widespread use of imaging technology in more and more industries. Different types of imaging equipment image in different ways, and the images they produce have different characteristics and application values. For example, CT (computed tomography) has high density resolution, so CT images can display organs composed of soft tissue well, and image bone very clearly. Magnetic resonance imaging uses a strong magnetic field to excite the water in the body and forms images from the differing responses of water in different tissues, so MRI images render soft tissue even more clearly. Ultrasound images, on the basis of high resolution and strong contrast for soft-tissue structures, offer advantages such as strong real-time performance and freedom from radiation.
Given that images obtained by different types of imaging equipment have different application values, integrating the advantages of different types of imaging equipment to achieve comprehensive utilization of different types of image information is of significant research interest. However, in the related art, the feature information of a target object in a given type of image can only be obtained from that type of image. For example, to determine the coordinate information of a target object in a CT image, the target object must be imaged by CT equipment; likewise, the coordinate information of a target object in a two-dimensional X-ray image can only be determined during, or from the result of, X-ray fluoroscopy of the target object.
Therefore, in the related art, the fixed correspondence between the feature information of a target object and the image type leads to low cross-utilization between different types of image information, low feature-information acquisition efficiency, and related problems. In other words, if the feature information of the target object in a type-A image can only be determined by the imaging device that generates type-A images, and likewise for type-B images, then the feature information of the target object in a type-B image cannot be obtained when the corresponding imaging device is inconvenient or impossible to use, which limits the paths by which feature information can be acquired and lowers acquisition efficiency.
In view of the above, the present application provides a method, an apparatus, and a device for determining coordinates of a target object, so as to solve at least the technical problems in the related art. Referring to fig. 1, fig. 1 is a flowchart of a method for determining coordinates of a target object according to an exemplary embodiment of the present application, and as shown in fig. 1, the method may include the following steps:
in step 101, a two-dimensional image and a three-dimensional image of a target object are determined.
In step 102, three-dimensional reconstruction coordinates of the target object in the three-dimensional image are determined according to image registration information between the two-dimensional image and the three-dimensional image and the two-dimensional coordinate information of the target object in the two-dimensional image.
In one embodiment, edge contour curves of at least two edge regions of the pattern formed by the target object in the two-dimensional image may be obtained by fitting the features in those edge regions; the coordinates of the intersection points between the edge contour curves are then taken as the two-dimensional coordinate information of the target object in the two-dimensional image.
In this embodiment, compared with having an operator manually mark the contour curves of the target object in the two-dimensional image and the intersection points between them, fitting analysis of the edge regions of the pattern formed by the target object allows the edge contour curves and the coordinates of their intersection points to be determined from the fitting result; the two-dimensional coordinate information of the target object then follows from those intersection coordinates, which improves the efficiency of determining the target object's two-dimensional coordinates.
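The fitting-and-intersection idea can be sketched in a few lines of numpy. This is a minimal illustration rather than the patent's implementation, and it assumes the two edge regions are approximately straight, so a first-degree least-squares fit suffices; the point sets are invented for demonstration:

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of y = a*x + b to the (x, y) points of one edge region."""
    a, b = np.polyfit(points[:, 0], points[:, 1], 1)
    return a, b

def intersection(line1, line2):
    """Intersection of two non-parallel fitted lines y = a*x + b."""
    (a1, b1), (a2, b2) = line1, line2
    x = (b2 - b1) / (a1 - a2)
    return np.array([x, a1 * x + b1])

# Synthetic edge-region points of an (invented) wedge-shaped pattern
edge1 = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # roughly along y = x
edge2 = np.array([[0.0, 4.0], [1.0, 3.0], [2.0, 2.0]])  # roughly along y = 4 - x

# Intersection of the two fitted edge contours: the 2D coordinate of the target
tip = intersection(fit_line(edge1), fit_line(edge2))
```

In this toy case the two fitted lines meet at (2, 2), which would serve as the two-dimensional coordinate information of the target object.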
In yet another embodiment, image detection may be performed on a two-dimensional image to determine feature points corresponding to a target object in the two-dimensional image; and then determining the two-dimensional coordinate information of the target object according to the coordinate information of the characteristic points.
In this embodiment, by performing image detection on the two-dimensional image, the coordinates of the feature points of the target object in the two-dimensional image can be determined according to the detection result, and then the two-dimensional coordinate information of the target object in the two-dimensional image is obtained according to the determined coordinate information of the feature points, so that the automatic determination of the two-dimensional coordinates in the two-dimensional image is realized, and the determination efficiency of the two-dimensional coordinate information of the target object in the two-dimensional image is improved.
In yet another embodiment, the image registration information between the two-dimensional image and the three-dimensional image can be calculated with a PnP (Perspective-n-Point) algorithm from the two-dimensional coordinate information of a pre-marked object in the two-dimensional image and the three-dimensional coordinate information of that object in the three-dimensional image. The pre-marked object may be an object predetermined in the two-dimensional or three-dimensional image, whose two-dimensional coordinate information in the two-dimensional image and three-dimensional coordinate information in the three-dimensional image are determined in advance.
That is, the registration information between the three-dimensional image and a two-dimensional image containing the imaging pattern of the pre-marked object can be determined from the predetermined two-dimensional coordinate information of the pre-marked object in the two-dimensional image, its three-dimensional coordinate information in the three-dimensional image, and the PnP algorithm. Further, the registration information may be determined in real time or in advance; the present disclosure does not limit this.
Step 103, determining the three-dimensional reconstruction coordinates as coordinate information of the target object in the three-dimensional image.
In an embodiment, the three-dimensional planning coordinates of the target object may be marked in the three-dimensional image; difference information between the three-dimensional reconstruction coordinates and the three-dimensional planning coordinates can then be displayed, and indication information for adjusting the pose of the target object can be generated from the difference information.
In this embodiment, by displaying the difference information between the three-dimensional planning coordinates pre-labeled in the three-dimensional image and the three-dimensional reconstruction coordinates of the target object, the pose of the target object can be adjusted according to the difference information so that its adjusted three-dimensional coordinate information approaches the three-dimensional planning coordinates, improving the directional accuracy of the pose adjustment.
In another embodiment, an offset vector between the three-dimensional reconstructed coordinates and the three-dimensional planned coordinates may be determined, an adjustment path from the three-dimensional reconstructed coordinates to the three-dimensional planned coordinates may be determined based on the offset vector, and adjustment navigation information including the adjustment path may be generated.
In this embodiment, after determining the offset vector between the three-dimensional reconstructed coordinate and the three-dimensional planned coordinate, the adjustment navigation information may be generated according to the offset vector, so that the user may adjust the pose information of the target object according to the adjustment navigation information, thereby adjusting the target object from the three-dimensional reconstructed coordinate to the three-dimensional planned coordinate.
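The offset vector and the adjustment path can be made concrete with a small numpy sketch; the coordinate values here are invented for illustration, and the path is taken (as one simple choice, not mandated by the patent) to be the straight segment from the reconstructed coordinates to the planned coordinates:

```python
import numpy as np

# Invented example coordinates (in some consistent length unit)
reconstructed = np.array([10.0, 4.0, -2.0])  # three-dimensional reconstruction coordinates
planned = np.array([12.0, 1.0, 4.0])         # pre-labeled three-dimensional planning coordinates

offset = planned - reconstructed             # offset vector still to be corrected
distance = np.linalg.norm(offset)            # magnitude of the required adjustment

# Adjustment path: straight-line waypoints from reconstructed to planned coordinates
waypoints = [reconstructed + s * offset for s in np.linspace(0.0, 1.0, 5)]
```

The offset vector and the sampled waypoints together would form the adjustment navigation information presented to the user.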
According to the embodiments, the three-dimensional coordinate information of the target object in the three-dimensional image can be determined according to the two-dimensional coordinate of the target object in the two-dimensional image and the image registration information between the two-dimensional image and the three-dimensional image, so that the obtaining way of the feature information in the three-dimensional image is expanded, and particularly under the condition that the position information of the target object cannot be conveniently obtained or cannot be obtained in real time by using the three-dimensional image imaging equipment, the technical scheme of the application can improve the obtaining efficiency of the position information of the target object.
The technical solution of the present application is explained in detail below by using a flowchart corresponding to fig. 2, where fig. 2 is a flowchart of another method for determining coordinates of a target object according to an exemplary embodiment of the present application, and as shown in fig. 2, the method includes the following steps:
Step 201, a three-dimensional image and at least two two-dimensional images of a target object are determined.
In one embodiment, a three-dimensional image and at least two two-dimensional images of the target object are acquired in advance, wherein each two-dimensional image includes an imaging pattern of the target object, and the two-dimensional coordinate information of the target object in each two-dimensional image is determined according to that imaging pattern.
The three-dimensional image may be pre-marked with the target object, with the three-dimensional planning coordinates of the target object, or with both. Where only the target object is pre-marked, the marking pattern containing the target object may be analyzed automatically, and the coordinate information obtained from the marking pattern is determined as the three-dimensional planning coordinate information of the target object in the three-dimensional image.
Of course, according to the actual application requirement, the three-dimensional image or the two-dimensional image of the target object may include imaging patterns of other objects besides the imaging pattern of the target object, such as the leg bone imaging pattern of the left lower leg in the case that the target object is an instrument installed on the outer side of the left lower leg; for example, in the case where the target object is a component inside a precision instrument, the imaging pattern of another object may be an imaging pattern of a precision instrument other than the component.
Step 202, determining registration information between the three-dimensional image and each two-dimensional image.
The process of determining the registration information between the three-dimensional image and each two-dimensional image may involve automatic registration or manual registration.
In the process of automatically determining the registration information of the three-dimensional image and each two-dimensional image, the registration information of the three-dimensional image and each two-dimensional image can be determined according to a preset algorithm.
Specifically, the registration information of the three-dimensional image and each two-dimensional image can be determined by the PnP algorithm. The registration information may be the camera's extrinsic parameter matrix [R|t], where R is a rotation matrix and t is a translation vector. The rotation matrix R can be determined using Rodrigues' formula; that is, a rotation in three-dimensional space can be represented by the combination of a rotation angle and an axis vector. Accordingly, if X is a coordinate in the world coordinate system and X' the corresponding coordinate in the camera coordinate system, the camera pose satisfies X' = [R|t]·X (with X in homogeneous form). Determining the camera pose data by the PnP algorithm improves the accuracy of the determined pose data and enhances the stability of tracking the camera pose.
Taking the determination of registration information between the three-dimensional image and the two two-dimensional images of the target object as an example, two sets of registration information need to be determined in this step: one between the three-dimensional image and two-dimensional image m, and one between the three-dimensional image and two-dimensional image n. As shown in fig. 3, which is a schematic diagram of a method for determining registration information according to an exemplary embodiment of the present application, take a pre-labeled object ABC as an example. Assume that the position information of point A, point B and point C in the figure is represented by three-dimensional coordinate information in the three-dimensional image based on the camera coordinate system, and that two-dimensional image m and two-dimensional image n are obtained by registration camera P and registration camera Q, respectively, where two-dimensional image m contains an imaging pattern DEF corresponding to the pre-labeled object ABC and two-dimensional image n contains an imaging pattern GHI corresponding to the pre-labeled object ABC. The pose information of registration camera P can then be determined by applying the P3P algorithm to the three-dimensional coordinate information of points A, B and C and the two-dimensional coordinate information of points D, E and F in two-dimensional image m; correspondingly, the pose information of registration camera Q can be determined by applying the P3P algorithm to the three-dimensional coordinate information of points A, B and C and the two-dimensional coordinate information of points G, H and I in two-dimensional image n. The pose information of registration camera P and the pose information of registration camera Q constitute the registration information between the three-dimensional image and the respective two-dimensional images.
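The geometric relationship that the P3P algorithm inverts can be sketched as follows: with a known camera pose and intrinsic matrix, the pre-labeled points A, B and C project onto the image plane as points D, E and F, and the P3P solver runs this computation in reverse to recover the pose from the two sets of coordinates. All numeric values below are illustrative assumptions, not parameters from this application:

```python
import numpy as np

# Hypothetical intrinsics and pose for "registration camera P" (illustrative values only).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                       # camera axes aligned with the 3D image frame
t = np.array([0.0, 0.0, 1000.0])    # marker placed 1000 units in front of the camera

# Pre-labeled marker points A, B, C in the three-dimensional image.
ABC = np.array([[ 0.0,  0.0, 0.0],
                [50.0,  0.0, 0.0],
                [ 0.0, 50.0, 0.0]])

def project(points3d, K, R, t):
    """Pinhole projection: x = K (R X + t), normalized by depth."""
    cam = points3d @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

DEF = project(ABC, K, R, t)  # imaging pattern D, E, F in two-dimensional image m
print(DEF)
```

Given `ABC` and `DEF` (plus the intrinsics), a P3P solver recovers the pose `(R, t)` of registration camera P; the same procedure with pattern GHI yields the pose of registration camera Q.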
When the registration information between the three-dimensional image and the two-dimensional images is determined manually, the pose information of the registration camera can be adjusted by hand until the three-dimensional image of the pre-labeled object coincides with the imaging pattern of the pre-labeled object in the two-dimensional image; the pose information of the registration camera at that point is then taken as the registration information between the three-dimensional image and the two-dimensional image.
And step 203, reconstructing three-dimensional reconstruction coordinates of the target object in the three-dimensional image according to the two-dimensional coordinate information of the target object in the two-dimensional image and the registration information of the three-dimensional image and the two-dimensional image respectively.
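The application does not commit to a specific reconstruction algorithm for this step; one standard realization is linear (DLT) triangulation: given the projection matrices of registration cameras P and Q derived from the registration information, the target object's 3D coordinates can be recovered from its 2D coordinates in images m and n. A minimal numpy sketch with hypothetical camera parameters:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen by two calibrated cameras.
    P1, P2: 3x4 projection matrices obtained from the registration information;
    uv1, uv2: the target object's 2D coordinates in images m and n."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)     # null vector of A is the homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical projection matrices for registration cameras P and Q.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), [[0.], [0.], [1000.]]])
P2 = K @ np.hstack([np.eye(3), [[-100.], [0.], [1000.]]])

# Project a known 3D point into both views, then recover it by triangulation.
X_true = np.array([10., 20., 30.])
uv1 = P1 @ np.append(X_true, 1.0); uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ np.append(X_true, 1.0); uv2 = uv2[:2] / uv2[2]
print(triangulate(P1, P2, uv1, uv2))  # recovers the 3D coordinates
```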
The two-dimensional coordinate information of the target object in the two-dimensional image can be labeled through automatic identification or manual operation. In an embodiment, the two-dimensional image may be processed by a preconfigured image detection algorithm to determine feature points corresponding to the target object in the two-dimensional image, and the two-dimensional coordinate information of the target object in the two-dimensional image is then determined from the coordinate information of those feature points. In practical applications, the image detection algorithm may be a deep learning model comprising a plurality of convolutional layers and pooling layers. The model receives an image vector corresponding to the two-dimensional image through its input layer, extracts features through the convolution operations of the convolutional layers and the pooling operations of the pooling layers, determines the feature points of the target object in the two-dimensional image from the extracted features, and then determines the feature point coordinates corresponding to the identified feature points, so that the two-dimensional coordinate information of the target object is determined from the coordinate information of those feature points. Of course, the image detection algorithm may also be a detection model constructed on the basis of other image detection technologies; the present application does not limit the specific image detection algorithm.
In another embodiment, in the process of automatically identifying the two-dimensional coordinate information in the two-dimensional image, the two-dimensional coordinate information of the target object in the two-dimensional image may be obtained by fitting the positions of the pixel points in the edge region of the imaging pattern of the target object in the two-dimensional image.
In the process of determining the imaging pattern of the target object in the two-dimensional image, binarization processing can be performed on the two-dimensional image containing the imaging pattern of the target object, so that the imaging pattern of the target object is highlighted in the two-dimensional image and the positions of the pixel points in the edge region of the imaging pattern can be determined more accurately.
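The binarization can be performed, for example, with the maximum between-class variance (Otsu) method mentioned below for contour extraction. A minimal numpy sketch on a toy bimodal image (all values illustrative, not from this application):

```python
import numpy as np

def otsu_threshold(img):
    """Maximum between-class variance (Otsu) threshold for an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of the background class
    mu = np.cumsum(p * np.arange(256))      # cumulative mean of the background class
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))       # threshold maximizing between-class variance

# Toy image: dark background with a bright imaging pattern in the middle.
img = np.full((8, 8), 30, dtype=np.uint8)
img[2:6, 2:6] = 200
th = otsu_threshold(img)
binary = (img > th).astype(np.uint8)        # highlights the imaging pattern
print(th, binary.sum())
```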
In the process of fitting the positions of the pixel points in the edge region of the imaging pattern of the target object in the two-dimensional image, the contour region of the imaging pattern of the target object in the two-dimensional image can be obtained, and a set of contour points forming the contour region is determined. The contour points forming the contour curve may be sampled at a preset spacing distance; alternatively, the contour region of the imaging pattern may be extracted based on the maximum between-class variance method, and a regression model corresponding to the contour region of the imaging pattern is then obtained by fitting the positions of the projection maximum points in the contour region.
The two-dimensional coordinate information of the target object in the two-dimensional image can be the coordinate information of a projection maximum point in the edge region of the imaging pattern of the target object in the two-dimensional image, or the intersection point information of a plurality of regression models corresponding to the edge region of the imaging pattern of the target object in the two-dimensional image.
In the process of determining intersection point information of a plurality of regression models corresponding to the edge region of the imaging pattern of the target object in the two-dimensional image, fitting can be performed on the distribution of the determined contour point set to obtain a plurality of fitting equations corresponding to the contour region, and then intersection point information is determined according to the fitting equations.
In a case where the distribution of a partial point set of the contour point set conforms to a linear distribution, a linear regression model corresponding to that partial point set in the contour region of the imaging pattern of the target object may be determined; in a case where the distribution of a partial point set conforms to a curve distribution, a curve regression model corresponding to that partial point set may be determined. In practical applications, the curve regression model may be an S-curve regression equation, a cubic curve regression equation, a growth curve regression equation, or the like; the present application does not limit the fitting result of the contour point set.
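For the common case of two straight edges, the procedure above amounts to fitting a linear regression model to each partial point set and intersecting the fitted lines to obtain a corner coordinate of the imaging pattern. A minimal numpy sketch with hypothetical, noise-free contour points:

```python
import numpy as np

# Contour points sampled from two straight edges of the imaging pattern
# (hypothetical values): edge 1 lies on y = 2x + 1, edge 2 on y = -x + 10.
x1 = np.array([0.0, 1.0, 2.0, 3.0]); y1 = 2 * x1 + 1
x2 = np.array([4.0, 5.0, 6.0, 7.0]); y2 = -x2 + 10

# Least-squares linear regression model for each partial point set.
a1, b1 = np.polyfit(x1, y1, 1)
a2, b2 = np.polyfit(x2, y2, 1)

# Intersection of the two fitted lines: a1*x + b1 = a2*x + b2.
x = (b2 - b1) / (a1 - a2)
y = a1 * x + b1
print(x, y)   # two-dimensional coordinate of the corner point
```

With noisy real contour points the fit is no longer exact, but the intersection of the regression lines still gives a sub-pixel estimate of the corner position.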
And step 204, displaying difference information between the three-dimensional planning coordinates and the three-dimensional reconstruction coordinates of the target object in the three-dimensional image.
In an embodiment, after the three-dimensional reconstruction coordinates of the target object in the three-dimensional image are determined, difference information between the three-dimensional planning coordinates and the three-dimensional reconstruction coordinates of the target object in the three-dimensional image may be displayed, and the displayed difference information may then be used to indicate how the pose of the target object should be adjusted. In particular, the difference information may comprise an offset vector between the respective coordinates, where the offset vector characterizes the offset distance and the rotation angle between the respective coordinates.
In another embodiment, an offset vector between the three-dimensional reconstruction coordinates and the three-dimensional planning coordinates may be determined, an adjustment path from the three-dimensional reconstruction coordinates to the three-dimensional planning coordinates is determined based on the offset vector, and adjustment navigation information describing the adjustment path is then generated. A user or a device receiving the adjustment navigation information can adjust the pose information of the target object according to the adjustment navigation information, so that the adjusted three-dimensional coordinate information of the target object corresponds to the three-dimensional planning coordinates, thereby improving the efficiency and accuracy of adjusting the pose information of the target object.
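The translational part of such an offset vector can be sketched as follows (hypothetical coordinates; the rotational component of the pose adjustment would be handled analogously from the pose information):

```python
import numpy as np

reconstructed = np.array([12.0, 8.0, 104.0])   # hypothetical three-dimensional reconstruction coordinates
planned = np.array([10.0, 5.0, 100.0])         # hypothetical three-dimensional planning coordinates

offset = planned - reconstructed               # offset vector: the adjustment still required
distance = np.linalg.norm(offset)              # offset distance along the adjustment path
direction = offset / distance                  # unit direction of the adjustment path
print(offset, float(distance))
```

The offset vector and its norm are exactly the quantities the adjustment navigation information would encode: how far, and in which direction, the target object must be moved to reach the planned position.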
FIG. 4 is a schematic block diagram of an electronic device in an exemplary embodiment in accordance with the present application. Referring to fig. 4, at the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, but may also include hardware required by other services. The processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to form the coordinate determination device of the target object on a logic level. Of course, besides the software implementation, the present application does not exclude other implementations, such as logic devices or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Referring to fig. 5, fig. 5 is a block diagram of a coordinate determination apparatus of a target object according to an exemplary embodiment of the present application, and in a software implementation, the coordinate determination apparatus of the target object may include:
an image determining unit 501 that determines a two-dimensional image and a three-dimensional image with respect to a target object;
a first coordinate determination unit 502, configured to determine a three-dimensional reconstruction coordinate of the target object in the three-dimensional image according to image registration information between the two-dimensional image and the three-dimensional image and two-dimensional coordinate information of the target object in the two-dimensional image;
a second coordinate determination unit 503, configured to determine the three-dimensional reconstruction coordinates as coordinate information of the target object in the three-dimensional image.
Optionally, the three-dimensional image is labeled with the three-dimensional planning coordinates of the target object, and the apparatus further includes:
an information presentation unit 504 that presents difference information between the three-dimensional reconstructed coordinates and the three-dimensional planning coordinates, the difference information indicating that the pose for the target object is adjusted.
Optionally, the method further includes:
a curve determining unit 505 that determines edge contour curves of at least two edge regions of a pattern formed in a two-dimensional image by a target object, the edge contour curves being obtained by fitting features in the edge regions;
and a third coordinate determination unit 506, configured to determine coordinates of intersection points between the edge contour curves, so as to use the determined coordinates of the intersection points as two-dimensional coordinate information of the target object represented by the two-dimensional image.
Optionally, the method further includes:
an image detection unit 507, which performs image detection on the two-dimensional image to determine feature points corresponding to a target object in the two-dimensional image;
a fourth coordinate determination unit 508 that determines two-dimensional coordinate information of the target object from the coordinate information of the feature point.
Optionally, the method further includes:
the information calculating unit 509 calculates the two-dimensional coordinate information of the pre-labeled object in the two-dimensional image and the three-dimensional coordinate information of the pre-labeled object in the three-dimensional image according to a PNP algorithm to determine the image registration information between the two-dimensional image and the three-dimensional image.
Optionally, the three-dimensional image is labeled with the three-dimensional planning coordinates of the target object, and the apparatus further includes:
an offset determining unit 510 that determines an offset vector between the three-dimensional reconstructed coordinates and the three-dimensional planning coordinates;
an information adjusting unit 511, which generates, based on the offset vector, adjusted navigation information including an adjusted path from the three-dimensional reconstructed coordinates to the three-dimensional planned coordinates.
The device corresponds to the method, and more details are not repeated.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (14)

1. A method of determining coordinates of a target object, the method comprising:
determining a two-dimensional image and a three-dimensional image about a target object;
determining a three-dimensional reconstruction coordinate of the target object in the three-dimensional image according to image registration information between the two-dimensional image and the three-dimensional image and two-dimensional coordinate information of the target object in the two-dimensional image;
and determining the three-dimensional reconstruction coordinates as coordinate information of the target object in the three-dimensional image.
2. The method of claim 1, wherein the three-dimensional imagery is labeled with three-dimensional planning coordinates of the target object, the method further comprising:
displaying difference information between the three-dimensional reconstructed coordinates and the three-dimensional planning coordinates, the difference information indicating that a pose for the target object is adjusted.
3. The method of claim 1, further comprising:
determining edge contour curves of at least two edge regions of a pattern formed by a target object in a two-dimensional image, wherein the edge contour curves are obtained by fitting features in the edge regions;
and determining intersection point coordinates among the edge contour curves, and taking the determined intersection point coordinates as two-dimensional coordinate information of the target object represented by the two-dimensional image.
4. The method of claim 1, further comprising:
performing image detection on the two-dimensional image to determine feature points corresponding to a target object in the two-dimensional image;
and determining the two-dimensional coordinate information of the target object according to the coordinate information of the characteristic points.
5. The method of claim 1, wherein after determining the two-dimensional image and the three-dimensional image containing the target object, further comprising:
and calculating the two-dimensional coordinate information of the pre-marked object in the two-dimensional image and the three-dimensional coordinate information of the pre-marked object in the three-dimensional image according to a PNP algorithm so as to determine the image registration information between the two-dimensional image and the three-dimensional image.
6. The method of claim 1, wherein the three-dimensional imagery is labeled with three-dimensional planning coordinates of the target object, the method further comprising:
determining an offset vector between the three-dimensional reconstructed coordinates and the three-dimensional planning coordinates;
generating adjusted navigation information based on the offset vector, the adjusted navigation information including an adjusted path from the three-dimensional reconstructed coordinates to the three-dimensional planning coordinates.
7. An apparatus for determining coordinates of a target object, the apparatus comprising:
an image determining unit that determines a two-dimensional image and a three-dimensional image with respect to a target object;
the first coordinate determination unit is used for determining three-dimensional reconstruction coordinates of the target object in the three-dimensional image according to image registration information between the two-dimensional image and the three-dimensional image and two-dimensional coordinate information of the target object in the two-dimensional image;
and the second coordinate determination unit is used for determining the three-dimensional reconstruction coordinates as the coordinate information of the target object in the three-dimensional image.
8. The apparatus of claim 7, wherein the three-dimensional imagery is labeled with three-dimensional planning coordinates of the target object, the apparatus further comprising:
an information presentation unit that presents difference information between the three-dimensional reconstruction coordinates and the three-dimensional planning coordinates, the difference information being used to indicate that the pose for the target object is adjusted.
9. The apparatus of claim 7, further comprising:
a curve determining unit that determines edge contour curves of at least two edge regions of a pattern formed in a two-dimensional image by a target object, the edge contour curves being obtained by fitting features in the edge regions;
and the third coordinate determination unit is used for determining intersection point coordinates among the edge contour curves so as to take the determined intersection point coordinates as the two-dimensional coordinate information of the target object represented by the two-dimensional image.
10. The apparatus of claim 7, further comprising:
the image detection unit is used for carrying out image detection on the two-dimensional image so as to determine a characteristic point corresponding to a target object in the two-dimensional image;
and the fourth coordinate determination unit is used for determining the two-dimensional coordinate information of the target object according to the coordinate information of the characteristic points.
11. The apparatus of claim 7, further comprising:
and the information calculation unit is used for calculating the two-dimensional coordinate information of the pre-marked object in the two-dimensional image and the three-dimensional coordinate information of the pre-marked object in the three-dimensional image according to a PNP algorithm so as to determine the image registration information between the two-dimensional image and the three-dimensional image.
12. The apparatus of claim 7, wherein the three-dimensional imagery is labeled with three-dimensional planning coordinates of the target object, the apparatus further comprising:
an offset determining unit that determines an offset vector between the three-dimensional reconstruction coordinates and the three-dimensional planning coordinates;
and the information adjusting unit generates adjusting navigation information based on the offset vector, wherein the adjusting navigation information comprises an adjusting path from the three-dimensional reconstruction coordinate to the three-dimensional planning coordinate.
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of any one of claims 1-6.
14. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method according to any one of claims 1-6.
CN202010308007.5A 2020-04-17 2020-04-17 Method, device and equipment for determining coordinates of target object Pending CN113538572A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010308007.5A CN113538572A (en) 2020-04-17 2020-04-17 Method, device and equipment for determining coordinates of target object


Publications (1)

Publication Number Publication Date
CN113538572A true CN113538572A (en) 2021-10-22

Family

ID=78093612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010308007.5A Pending CN113538572A (en) 2020-04-17 2020-04-17 Method, device and equipment for determining coordinates of target object

Country Status (1)

Country Link
CN (1) CN113538572A (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030014034A1 (en) * 2001-03-22 2003-01-16 Norbert Strobel Method for detecting the three-dimensional position of a medical examination instrument introduced into a body region, particularly of a catheter introduced into a vessel
US20030220555A1 (en) * 2002-03-11 2003-11-27 Benno Heigl Method and apparatus for image presentation of a medical instrument introduced into an examination region of a patent
CN101190149A (en) * 2006-10-05 2008-06-04 西门子公司 Integrating 3D images into interventional procedures
CN101553182A (en) * 2006-11-28 2009-10-07 皇家飞利浦电子股份有限公司 Apparatus for determining a position of a first object within a second object
US20100228118A1 (en) * 2009-03-04 2010-09-09 Michael Maschke Method for image support during the navigation of a medical instrument and an apparatus for carrying out a minimally-invasive intervention for therapy of a tumor
CN102819657A (en) * 2012-02-10 2012-12-12 中国人民解放军总医院 Image-guided device for ablation therapies
US20130101196A1 (en) * 2010-01-12 2013-04-25 Koninklijke Philips Electronics N.V. Navigating an interventional device
DE102012224057A1 (en) * 2012-12-20 2014-06-26 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method for supporting image in navigation area of patient medical instrument, involves inserting portion of medical instrument in signal generated from three dimensional image data based on image position and properties of instrument
US20150320324A1 (en) * 2014-05-09 2015-11-12 Samsung Electronics Co., Ltd. Method, apparatus, and system for providing medical image
CN105232152A (en) * 2014-07-02 2016-01-13 柯惠有限合伙公司 Fluoroscopic pose estimation
US20180101729A1 (en) * 2016-10-07 2018-04-12 Schneider Electric Industries Sas Method for 3d mapping of 2d point of interest
CN108392271A (en) * 2018-01-31 2018-08-14 上海联影医疗科技有限公司 Orthopaedics operating system and its control method
CN108420529A (en) * 2018-03-26 2018-08-21 上海交通大学 The surgical navigational emulation mode guided based on image in magnetic tracking and art
CN110473196A (en) * 2019-08-14 2019-11-19 中南大学 A kind of abdominal CT images target organ method for registering based on deep learning
CN110650686A (en) * 2017-05-24 2020-01-03 皇家飞利浦有限公司 Device and corresponding method for providing spatial information of an interventional device in live 2D X radiographs


Similar Documents

Publication Publication Date Title
RU2568635C2 (en) Feature-based recording of two-/three-dimensional images
US7327865B2 (en) Fiducial-less tracking with non-rigid image registration
US7231076B2 (en) ROI selection in image registration
US10699401B2 (en) Method and system for determining the local quality of surface data extracted from volume date
US7522779B2 (en) Image enhancement method and system for fiducial-less tracking of treatment targets
US7366278B2 (en) DRR generation using a non-linear attenuation model
KR101851303B1 (en) Apparatus and method for reconstructing 3d space
CN102132322B (en) Apparatus for determining modification of size of object
CN112884819A (en) Image registration and neural network training method, device and equipment
US8867809B2 (en) Image processing method
CN113538572A (en) Method, device and equipment for determining coordinates of target object
CN113963057B (en) Imaging geometric relation calibration method and device, electronic equipment and storage medium
US10825202B2 (en) Method for compressing measurement data
CA3075614C (en) Gridding global data into a minimally distorted global raster
WO2016131955A1 (en) Automatic 3d model based tracking of deformable medical devices with variable appearance
EP4060604A1 (en) Method and system to generate modified x-ray images
US9779498B2 (en) Device and method for assessing X-ray images
Waz et al. Electronic Navigational Chart in aid of generation of multi-dimensional radar display
Liu et al. Symmetry identification using partial surface matching and tilt correction in 3D brain images
EP4144298A1 (en) Object visualisation in x-ray imaging
CN112508858B (en) Medical image processing method and device and computer equipment
Ozendi et al. Stochastic surface mesh reconstruction
EP4152254A1 (en) Target object positioning method and apparatus, device, medium, and program product
JP2007293550A (en) Polygon mesh editing method, device, system, and program
CN116940827A (en) Computer-implemented method, computer program, data processing system and device for determining a reflection behaviour of a surface of an object, and storage medium having stored thereon instructions for determining a reflection behaviour of a surface of an object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination