CN109925054B - Auxiliary method, device and system for determining target point path and readable storage medium - Google Patents


Publication number
CN109925054B
Authority
CN
China
Prior art keywords
image
dimensional
projection
points
position information
Prior art date
Legal status
Active
Application number
CN201910161268.6A
Other languages
Chinese (zh)
Other versions
CN109925054A (en)
Inventor
何滨
李伟栩
沈丽萍
宁建利
方华磊
陈枭
童睿
Current Assignee
Hangzhou Santan Medical Technology Co Ltd
Original Assignee
Hangzhou Santan Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Santan Medical Technology Co Ltd filed Critical Hangzhou Santan Medical Technology Co Ltd
Priority to CN201910161268.6A priority Critical patent/CN109925054B/en
Publication of CN109925054A publication Critical patent/CN109925054A/en
Priority to US17/431,683 priority patent/US20220133409A1/en
Priority to PCT/CN2020/077846 priority patent/WO2020177725A1/en
Application granted granted Critical
Publication of CN109925054B publication Critical patent/CN109925054B/en

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application provides an auxiliary method, device and system for determining a target point path, and a readable storage medium. The method includes: acquiring a three-dimensional local image of the affected part and a virtual path within the three-dimensional local image; when a simulated two-dimensional image obtained by projecting the three-dimensional local image matches a two-dimensional projection image obtained from the affected part, acquiring image position information of the virtual path on the simulated two-dimensional image; and adjusting a guide structure according to the image position information of a plurality of position points on the virtual path, so that the guide direction of the guide structure sequentially points to the target points of the affected part corresponding to the plurality of position points.

Description

Auxiliary method, device and system for determining target point path and readable storage medium
Technical Field
The present application relates to the field of medical technology, and in particular, to an auxiliary method, an apparatus, and a system for determining a target path, and a readable storage medium.
Background
At present, when an operation is performed on an affected part with surgical instruments, the instruments usually need to be inserted into the affected part. Accurately determining the route by which a surgical instrument enters the affected part can greatly improve the precision of the operation, reduce the number of X-ray images that need to be taken, reduce radiation, and reduce the patient's suffering. Therefore, in the field of medical technology, how to improve needle insertion accuracy has become an urgent technical problem for those skilled in the art.
Disclosure of Invention
The application provides an auxiliary method, an auxiliary device and an auxiliary system for determining a target point path and a readable storage medium, so as to solve the defects in the related art.
According to a first aspect of embodiments of the present application, there is provided an auxiliary method for determining a target point path, including:
acquiring a three-dimensional local image of an affected part and a virtual path located within the three-dimensional local image;
when a simulated two-dimensional image obtained based on the three-dimensional local image projection is matched with a two-dimensional projection image obtained based on an affected part, acquiring image position information of the virtual path on the simulated two-dimensional image;
and adjusting a guide structure according to the image position information of the plurality of position points on the virtual path, so that the guide direction of the guide structure sequentially points to a plurality of target points of the affected part corresponding to the plurality of position points.
Optionally, the matching of the simulated two-dimensional image obtained based on the three-dimensional local image projection and the two-dimensional projection image obtained based on the affected part includes:
carrying out perspective projection on the affected part to obtain a two-dimensional projection image;
projecting the three-dimensional local image to obtain the simulated two-dimensional image;
and acquiring the coincidence degree of the simulated two-dimensional image and the two-dimensional projection image, and determining that the simulated two-dimensional image is matched with the two-dimensional projection image when the coincidence degree is not less than a preset threshold value.
Optionally, the obtaining of the coincidence degree of the simulated two-dimensional image and the two-dimensional projection image includes:
extracting a first projection area in the simulated two-dimensional image and a second projection area in the two-dimensional projection image;
and calculating the contact ratio according to the edge contour matching degree between the first projection area and the second projection area.
Optionally, the obtaining of the coincidence degree of the simulated two-dimensional image and the two-dimensional projection image includes:
dividing the simulated two-dimensional image and the two-dimensional projection image along a preset direction according to a preset proportion;
and matching each segmentation area of the simulated two-dimensional image with the segmentation area corresponding to the two-dimensional projection image to obtain the contact ratio.
Optionally, the adjusting the guidance structure according to the image position information of the plurality of position points on the virtual path includes:
acquiring image position information of a projection path matched with the virtual path on the two-dimensional projection image according to the image position information of the virtual path on the simulated two-dimensional image;
and adjusting the guiding structure according to the image position information of a plurality of position points on the projection path.
Optionally, the method is applied to a system for determining a target point path, and the determining system includes a positioning device, where the positioning device includes a plurality of first preset mark points located on a first plane and a plurality of second preset mark points located on a second plane, and a space is provided between the first plane and the second plane in a projection direction;
the adjusting of the guide structure according to the image position information of the plurality of position points on the virtual path includes:
acquiring positioning position information of each first preset mark point on a first plane and each second preset mark point on a second plane in a positioning coordinate system;
acquiring image position information of each first preset mark point on a first plane and each second preset mark point on a second plane in an image coordinate system, wherein the image coordinate system is formed on the basis of a projection image of the positioning device;
and adjusting the guide structure according to the positioning position information and the image position information of the first preset mark point, the positioning position information and the image position information of the second preset mark point and the image position information of any position point on the virtual path.
Optionally, the method further includes:
determining a first conversion relation between a calibration coordinate system and an image coordinate system according to calibration position information of a plurality of third preset mark points in the calibration coordinate system and image coordinate information in the image coordinate system;
and obtaining the calibrated two-dimensional projection image according to the first conversion relation.
Optionally, the guide structure includes a light beam emitting component, and the guide direction of the guide structure sequentially pointing to the target points of the affected part corresponding to the plurality of position points includes:
the light beam emitted by the light beam emitting component is directed to a target point.
Alternatively, the guide structure includes an instrument guide channel, and the guide direction of the guide structure sequentially pointing to the target points of the affected part corresponding to the plurality of position points includes:
the central axis of the instrument guide channel points to the target point.
According to a second aspect of the embodiments of the present application, there is provided an assisting apparatus for determining a target path, including:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module acquires a three-dimensional local image aiming at an affected part and a virtual path positioned in the three-dimensional local image;
the second acquisition module acquires image position information of the virtual path on the simulated two-dimensional image when the simulated two-dimensional image obtained by projecting the three-dimensional local image matches the two-dimensional projection image obtained from the affected part;
and the adjusting module adjusts the guiding structure according to the image position information of the plurality of position points on the virtual path, so that the guiding direction of the guiding structure sequentially points to the target points of the affected part corresponding to the plurality of position points.
According to a third aspect of embodiments herein, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to any of the embodiments described above.
According to a fourth aspect of embodiments of the present application, there is provided a surgical navigation system, comprising:
a camera for acquiring a projected image;
a guide structure;
a display for showing an image;
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured for implementing the steps of the method according to any of the embodiments described above.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the embodiment, the simulation two-dimensional image and the two-dimensional projection image are matched to obtain the physical posture parameters of the operation path in the affected part based on the shooting device, so that the pointing structure can be adjusted according to the physical posture parameters to sequentially point at a plurality of target points forming the operation path. Based on this, can obtain the planning route that corresponds with the operation route on the body surface, should plan the route and can assist medical personnel to implant surgical instruments, supplementary path planning in the art is favorable to improving the precision of operation, reduces the disease misery.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow diagram illustrating an exemplary method of assisting in determining a target point path, according to one illustrative embodiment.
FIG. 2 is a flow diagram illustrating another assistance method for determining a target path, according to an exemplary embodiment.
FIG. 3 is a diagram illustrating an algorithmic model according to an exemplary embodiment.
FIG. 4 is a flow diagram illustrating yet another method of assisting in determining a target point path, according to an exemplary embodiment.
FIG. 5 is a flow diagram illustrating yet another method of assisting in determining a target point path, according to an exemplary embodiment.
FIG. 6 is a schematic diagram illustrating another simulated two-dimensional image according to an exemplary embodiment.
FIG. 7 is a schematic diagram illustrating another two-dimensional projection image in accordance with an exemplary embodiment.
FIG. 8 is a block diagram illustrating the structure of a surgical navigation system according to an exemplary embodiment.
FIG. 9 is a schematic block diagram of an apparatus provided in accordance with an exemplary embodiment.
FIG. 10 is a block diagram illustrating an auxiliary device for determining a target path, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
Fig. 1 is a flowchart illustrating an auxiliary method for determining a target point path according to an exemplary embodiment, and as shown in fig. 1, the auxiliary method may include:
in step 101, a three-dimensional local image for the lesion is acquired, the three-dimensional local image comprising a virtual path.
In this embodiment, the affected part may include, but is not limited to, a diseased limb, a chest, a waist, a spine, or a head, which is not limited in this application. Scanning information can be obtained by CT scanning or MR imaging, and the scanning information can be input into a computer for reconstruction, thereby obtaining a three-dimensional local image. Furthermore, a virtual path formed by combining a plurality of target points in the three-dimensional local image can also be reconstructed from the CT or MR scanning information. The virtual path may be linear, and its length, width, and angle may be adjusted in the three-dimensional local image.
In step 102, when the simulated two-dimensional image projected based on the three-dimensional local image is matched with the two-dimensional projection image projected based on the affected part, image position information of the virtual path on the simulated two-dimensional image is obtained.
In this embodiment, the lesion may be visualized by a C-arm machine or other X-ray device to obtain a two-dimensional projection image. For example, the simulated two-dimensional image and the position information of the virtual path in the simulated two-dimensional image can be obtained through a DRR algorithm based on the three-dimensional local image. Alternatively, in other embodiments, the simulated two-dimensional image may be obtained by using a Siddon algorithm, a Shear-Warp algorithm, or the like, which is not limited in this application.
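To illustrate what such a projection computes, the following is a minimal sketch, assuming the three-dimensional local image is available as a NumPy volume; it uses a parallel-beam line-integral approximation rather than the perspective ray casting of a full DRR implementation, and all names are illustrative.

```python
import numpy as np

def simulated_projection(volume, axis=0):
    """Parallel-beam approximation of a DRR: integrate attenuation values
    along one axis of the CT volume. A full DRR, as referenced above,
    instead casts perspective rays from a virtual light source."""
    attenuation = np.clip(volume.astype(float), 0.0, None)  # drop negative (air) values
    projection = attenuation.sum(axis=axis)                 # line integrals along rays
    # normalize to 8-bit gray levels for comparison with the X-ray projection
    projection -= projection.min()
    projection /= max(projection.max(), 1e-9)
    return (projection * 255).astype(np.uint8)
```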
Further, the coincidence degree of the simulated two-dimensional image and the two-dimensional projection image is obtained, and when the coincidence degree is not smaller than a preset threshold value, the simulated two-dimensional image is determined to be matched with the two-dimensional projection image. When the two-dimensional projection image is matched with the simulation two-dimensional image, the posture parameter of the three-dimensional local image can be used as the physical posture parameter of the affected part based on the shooting device.
Further, when the coincidence degree between the two-dimensional projection image and the simulated two-dimensional image is smaller than a preset threshold value, the difference between the two-dimensional projection image and the simulated two-dimensional image is considered to be large, the adjusted simulated two-dimensional image can be obtained by adjusting the space posture of the three-dimensional local image or aiming at the projection parameters of the three-dimensional local image, and the adjusted simulated two-dimensional image is matched with the two-dimensional projection image. Of course, in other embodiments, the spatial pose of the three-dimensional partial image and the projection parameters for the three-dimensional partial image may be adjusted simultaneously, which is not limited in this application.
Specifically, in an embodiment, the difference between the simulated two-dimensional image and the two-dimensional projection image may be fed back according to the degree of coincidence between the simulated two-dimensional image and the two-dimensional projection image, so as to readjust the spatial posture of the three-dimensional partial image or adjust the projection parameters for the three-dimensional partial image according to the degree of coincidence. Of course, in other embodiments, the spatial posture or the projection parameter of the three-dimensional local image may also be sequentially adjusted at regular intervals, which is not limited in the present application.
Adjusting the spatial pose of the three-dimensional partial image may be adjusting a rotation angle of the three-dimensional partial image with respect to at least one coordinate axis, or may be adjusting a displacement of the three-dimensional partial image with respect to at least one coordinate axis. Of course, in other embodiments, the rotation angle and the displacement of any coordinate axis may be adjusted. Adjusting the projection parameters for the three-dimensional partial image may be adjusting one or more of a position of the virtual light source, a focal length of the virtual light source, and an imaging resolution. Wherein the imaging resolution is used to characterize the relationship between pixels and geometric dimensions, which is related to the dimensions of the projected imaging plane of the three-dimensional partial image and the image dimensions of the two-dimensional projection image.
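For concreteness, the adjustable quantities described above can be grouped as in the following sketch. This is a hypothetical Python structure, not something defined by the patent; the default values echo the worked resolution examples given later in the text.

```python
from dataclasses import dataclass

@dataclass
class ProjectionSetup:
    # spatial posture of the three-dimensional local image
    rotation_deg: tuple = (0.0, 0.0, 0.0)    # rotation about each coordinate axis
    translation_mm: tuple = (0.0, 0.0, 0.0)  # displacement along each coordinate axis
    # projection parameters for the virtual light source
    source_position_mm: tuple = (0.0, 0.0, -1000.0)  # virtual light source position
    focal_length_mm: float = 1000.0          # source-to-imaging-plane distance
    plane_size_mm: tuple = (200.0, 200.0)    # projected imaging plane size
    image_size_px: tuple = (1000, 1000)      # image size of the simulated projection
```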
In the above embodiments, the degree of coincidence between the simulated two-dimensional image and the two-dimensional projection image may be obtained in the following ways:
in one embodiment, the user person may match the two-dimensional projection image and the simulated two-dimensional image to obtain the degree of coincidence.
In another embodiment, a first projection region in the simulated two-dimensional image and a second projection region in the two-dimensional projection image may be extracted, and the degree of coincidence may be obtained according to the degree of edge contour matching between the first projection region and the second projection region.
In another embodiment, the simulated two-dimensional image and the two-dimensional projection image may be divided along a preset direction according to a preset ratio, and each divided region of the simulated two-dimensional image may be matched with a divided region corresponding to the two-dimensional projection image, so as to obtain a degree of overlap.
In step 103, a guiding structure is adjusted according to the image position information of the plurality of position points on the virtual path, so that the guiding direction of the guiding structure sequentially points to a plurality of target points of the affected part corresponding to the plurality of position points.
In this embodiment, when the two-dimensional projection image matches the simulated two-dimensional image, the posture parameters of the three-dimensional local image can be used as the physical posture parameters of the affected part relative to the photographing device. Accordingly, the posture parameters of the virtual path can be used as the physical posture parameters of the actual operation path relative to the photographing device. Therefore, when the guide structure is adjusted according to the image position information of a plurality of position points of the virtual path on the simulated two-dimensional image, the guide direction of the guide structure sequentially points to the corresponding target points in the affected part.
In one embodiment, the assisting method can be applied to an assisting system for determining a target point path. The determining system comprises a positioning device, the positioning device comprising a plurality of first preset mark points located on a first plane and a plurality of second preset mark points located on a second plane, the first plane and the second plane being spaced apart in a projection direction. Image position information of each first preset mark point on the first plane and each second preset mark point on the second plane in an image coordinate system is acquired, wherein the image coordinate system is formed based on a projection image of the positioning device. According to the image position information of the plurality of first preset mark points in the image coordinate system, the positioning coordinate information of the plurality of first preset mark points in the positioning coordinate system, and the image position information of each position point in the image coordinate system, a first positioning point on the first plane is determined based on the positioning coordinate information of the positioning coordinate system, wherein the first positioning point coincides with the corresponding position point in the image coordinate system. Similarly, according to the image position information of the plurality of second preset mark points in the image coordinate system, the positioning coordinate information of the plurality of second preset mark points in the positioning coordinate system, and the image position information of each position point in the image coordinate system, a second positioning point on the second plane is determined based on the positioning coordinate information of the positioning coordinate system, wherein the second positioning point coincides with the corresponding position point in the image coordinate system. The guide structure is then adjusted according to the positioning position information of the first positioning point, the positioning position information of the second positioning point, and the positioning position information of the guide structure.
In another embodiment, the image position information of a projection path on the two-dimensional projection image matched with the virtual path may be obtained through the image position information of the virtual path on the simulated two-dimensional image, and the guide structure may then be adjusted according to the image position information of a plurality of position points on the projection path. The plurality of position points on the projection path have corresponding position points on the virtual path of the simulated two-dimensional image. For details, reference may be made to the adjustment method performed according to the position points on the virtual path, which is not described again here.
In the above embodiments, calibrating the two-dimensional projection image may be further included. For example, a first conversion relationship between the positioning coordinate system and the image coordinate system may be determined according to the positioning position information of the plurality of third preset mark points in the positioning coordinate system and the image coordinate information in the image coordinate system, and a calibrated two-dimensional projection image is obtained according to the first conversion relationship, so as to reduce image distortion and improve the accuracy of the pointing structure pointing to the target point.
In this embodiment, the guide structure includes a light beam emitting component, and the guide direction of the guide structure sequentially pointing to the target points of the affected part corresponding to the plurality of position points includes: the light beam emitted by the light beam emitting component is directed to a target point. In this way, the light beam directed to the target point can subsequently be used to position the guide channel for needle insertion.
Alternatively, the guide structure includes an instrument guide channel, and the guide direction of the guide structure sequentially pointing to the target points of the affected part corresponding to the plurality of position points includes: the central axis of the instrument guide channel points to the target point. In this way, surgical instruments can subsequently be inserted directly through the instrument guide channel, simplifying the surgical procedure.
According to the above embodiments, the simulated two-dimensional image and the two-dimensional projection image are matched to obtain the physical posture parameters, relative to the photographing device, of the operation path in the affected part, so that the guide structure can be adjusted according to the physical posture parameters to point in turn at the plurality of target points forming the operation path. On this basis, a planned path corresponding to the operation path can be obtained on the body surface; this planned path can assist medical personnel in implanting surgical instruments and in intraoperative path planning, which is beneficial for improving the precision of the operation and reducing the patient's suffering. For example, medical personnel can draw the planned path on the body surface, so that the needle insertion angle can be adjusted according to the direction and starting point of the planned path on the body surface, thereby assisting surgical planning and improving needle insertion accuracy.
With respect to the above technical solutions, the following detailed description will be based on a specific embodiment. As shown in fig. 2, the auxiliary method flow for determining the target point path may include the following steps:
In step 201, calibration position information O1_2d of the third preset mark points O in the calibration coordinate system XOY is obtained.
In this embodiment, the image captured by the C-arm machine may be calibrated by a preset calibration device. A plurality of third preset mark points can be arranged on the preset calibration device, the calibration coordinate system can be located at any position on the preset calibration device, and each third preset mark point is located on the calibration device, so that the calibration position information of each third preset mark point is known. Here, the calibration position information of the plurality of third preset mark points O may be represented as O1_2d. Since the number of the third preset mark points is plural, the calibration position information O1_2d may include O1(X1, Y1), O2(X2, Y2), ..., On(Xn, Yn); the specific number is not limiting.
In step 202, image position information O2_2d of the third preset mark points O in the image coordinate system MON when distortion occurs is acquired.
In the present embodiment, the image coordinate system is formed based on a two-dimensional image projected by the C-arm machine. When the C-arm machine photographs the calibration device to obtain a two-dimensional image, the two-dimensional image includes a plurality of projection points corresponding to the third preset mark points, and the image position information of the plurality of projection points in the image coordinate system MON when distortion occurs is O2_2d. Similarly, since the number of the third preset mark points is plural, the image position information O2_2d at the time of distortion may include O1(M1, N1), O2(M2, N2), and so on.
In step 203, the first conversion relationship T1 between the calibration coordinate system XOY and the image coordinate system MON is calculated according to the calibration position information O1_2d of the third preset mark points and the image position information O2_2d when distortion occurs.
In step 204, the lesion is projected and the two-dimensional projection image after calibration is obtained according to the first transformation relation T1.
In the present embodiment, the functional relationship between the calibration position information and the distorted image position information may be established by a mathematical method, and the first conversion relationship T1 may be solved from the functional relationship.
For example, the following functional relationship may be established:
M = a0 + a1·X + a2·Y + a3·X² + a4·X·Y + a5·Y² + a6·X³ + a7·X²·Y + a8·X·Y² + a9·Y³ + …
N = b0 + b1·X + b2·Y + b3·X² + b4·X·Y + b5·Y² + b6·X³ + b7·X²·Y + b8·X·Y² + b9·Y³ + …
wherein (M, N) is the image position information when distortion occurs, (X, Y) is the calibration position information, and a0…a9, b0…b9 represent the first conversion relationship T1. In other words, the first conversion relationship T1 characterizes the conversion relationship between the distorted image and the undistorted image. Therefore, after the two-dimensional projection image of the affected part in a distorted state is obtained by C-arm photography, calibration can be performed through the first conversion relationship, so that the calibrated two-dimensional projection image is obtained.
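Under the assumption that the series is truncated at the cubic terms shown, the coefficients a0…a9 and b0…b9 can be recovered from at least ten mark-point correspondences by least squares. The following sketch illustrates this; function and variable names are assumptions, not part of the patent. To undistort an image in practice, the inverse mapping (M, N) → (X, Y) can be fitted with the same design matrix and used to resample the projection.

```python
import numpy as np

def fit_distortion_model(xy_calib, mn_distorted):
    """Least-squares fit of the cubic polynomial mapping (X, Y) -> (M, N)
    given above. `xy_calib` and `mn_distorted` are (n, 2) arrays holding the
    third preset mark points' calibration positions O1_2d and distorted
    image positions O2_2d, with n >= 10. Returns the coefficient vectors
    a0..a9 and b0..b9, i.e. the first conversion relationship T1."""
    X, Y = xy_calib[:, 0], xy_calib[:, 1]
    # design matrix columns: 1, X, Y, X^2, XY, Y^2, X^3, X^2*Y, X*Y^2, Y^3
    A = np.stack([np.ones_like(X), X, Y,
                  X**2, X * Y, Y**2,
                  X**3, X**2 * Y, X * Y**2, Y**3], axis=1)
    a, *_ = np.linalg.lstsq(A, mn_distorted[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, mn_distorted[:, 1], rcond=None)
    return a, b
```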
In step 205, a three-dimensional partial image and a virtual path within the three-dimensional partial image are acquired.
In this embodiment, three-dimensional reconstruction may be performed based on the scanning information of the C-arm machine or the scanning information of other equipment, so as to obtain a three-dimensional local image based on the affected part. Further, a virtual path may be established for the lesion according to the current state of the lesion shown by the three-dimensional partial image.
In step 206, the three-dimensional partial image is projected to obtain the simulated two-dimensional image.
In this embodiment, for example, a simulated two-dimensional image may be obtained by a DRR algorithm based on the three-dimensional local image, and a virtual path may be planned. Alternatively, in other embodiments, the simulated two-dimensional image may be obtained by using a Siddon algorithm, a Shear-Warp algorithm, or the like, which is not limited in this application.
In step 207, the coincidence degree of the simulated two-dimensional image and the two-dimensional projection image is obtained, and when the coincidence degree is not less than a preset threshold value, it is determined that the simulated two-dimensional image matches the two-dimensional projection image.
In this embodiment, the detailed implementation of steps 205-207 can be seen in the embodiments shown in fig. 4-7.
In step 208, image position information B1_2d of a plurality of position points B of the virtual path in the image coordinate system is acquired.
In this embodiment, since the simulated two-dimensional image is matched with the calibrated two-dimensional projection image, the image position information of each position point in the virtual path in the image coordinate system at this time can be used as the image position information of the corresponding point on the operation path inside the affected part in the calibrated two-dimensional projection image.
In step 209, the image position information P1_2d, Q1_2d of the plurality of first preset mark points P, the plurality of second preset mark points Q after calibration in the image coordinate system MON, and the positioning position information P2_2d, Q2_2d in the positioning coordinate system are acquired.
In this embodiment, the assisting method may be applied to an assisting system for determining a target point path, and the assisting system may include a positioning device, where the positioning device may include a plurality of first preset mark points P located on a first plane, a plurality of second preset mark points Q located on a second plane, and a guide structure, and a space is provided between the first plane and the second plane in a projection direction.
Based on the above, the positioning position information of each mark point in the positioning coordinate system formed based on the positioning device can be obtained: the positioning position information of the plurality of first preset mark points P can be represented as P2_2d, and the positioning position information of the plurality of second preset mark points Q can be represented as Q2_2d.
Furthermore, when the positioning device is photographed by the C-arm machine, the calibrated image position information P1_2d of the first preset mark points P and the calibrated image position information Q1_2d of the second preset mark points Q can be obtained based on the first conversion relationship T1. The first preset mark points P and the second preset mark points Q are located on different planes, so that a straight line starting from the light source of the photographing device and pointing to a target point can be constructed through the position information of the first preset mark points P and the second preset mark points Q on the different planes.
In step 210, the light beam emitting device is adjusted according to P1_2d, P2_2d, B1_2d, Q1_2d, Q2_2d so that the emitted light of the light beam emitting device is directed to the target point.
In the present embodiment, the imaging resolution of the photographing device may first be determined according to the projection imaging plane size and the pixel size of the photographing device. For example, if the projection imaging plane size is 100 mm × 100 mm and the pixel size is 1000 pixels × 1000 pixels, the imaging resolution is 0.1 mm/pixel in both the length direction and the width direction of the X-ray image; for another example, if the projection imaging plane size is 200 mm × 200 mm and the pixel size is 1000 pixels × 2000 pixels, the imaging resolution is 0.2 mm/pixel in the length direction of the X-ray image and 0.1 mm/pixel in the width direction of the X-ray image.
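The relationship just described can be stated directly in code; a minimal helper reproducing the two worked examples above (names are illustrative):

```python
def imaging_resolution(plane_size_mm, image_size_px):
    """mm-per-pixel in the length and width directions, from the projection
    imaging plane size and the pixel size of the photographing device."""
    return (plane_size_mm[0] / image_size_px[0],
            plane_size_mm[1] / image_size_px[1])

# the two worked examples from the text
assert imaging_resolution((100, 100), (1000, 1000)) == (0.1, 0.1)
assert imaging_resolution((200, 200), (1000, 2000)) == (0.2, 0.1)
```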
Further, as shown in fig. 3, assume that the first preset mark points P include mark points P1 and P2, that the second preset mark points Q include mark points Q1 and Q2, and that point B is a target point, and suppose a coordinate system MON is established on a plane perpendicular to the central ray T of the photographing device. Then, in the M-axis direction in the MOT plane, the calibrated image position information of the projections of P1 and P2 can be obtained. At this point, the resolution of the projected image can be determined according to the physical distance |P1P2| between the mark points P1 and P2 in the positioning coordinate system and the number of pixel points between the mark points P1 and P2 in the image coordinate system. Further, according to the resolution of the projected image and the number of pixel points between the projection point of the mark point P1 and the projection point of the target point, the component |P1B|_M of the distance between the mark point P1 and the target point B in the positioning coordinate system can be obtained; similarly, |P1B|_N, |Q1B|_M and |Q1B|_N can be obtained.
Therefore, when the mark point P1 is moved by the distance |P1B|_M along the M axis and by |P1B|_N along the N axis, the point P1 coincides with the position of the target point in the image coordinate system; when the mark point Q1 is moved by |Q1B|_M along the M axis and by |Q1B|_N along the N axis, the point Q1 coincides with the position of the target point in the image coordinate system. In other words, P1, Q1, and B now lie on the same projection beam. Therefore, the light beam emitting device can be adjusted according to its positioning position information in the positioning coordinate system and its positional relationship with the points P1 and Q1, so that the light beam emitted by the light beam emitting device passes through the points P1 and Q1 and is thus naturally directed to the target point B.
When step 209 and step 210 are respectively executed for a plurality of position points on the virtual path, a planned path formed by points where the light beam passes through the body surface can be obtained, so that medical staff can be assisted to execute an operation through the planned path, and the accuracy is improved.
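The fig. 3 reasoning in steps 209 and 210 reduces to two small computations: the projected-image resolution from a known physical distance, and the physical shifts that bring a mark point's projection onto the target point's projection. A sketch follows; the pixel coordinates and distances are hypothetical values, not data from the patent.

```python
import numpy as np

def projected_resolution(dist_mm, dist_px):
    """Resolution of the projected image, from the physical distance |P1P2|
    between two mark points and their pixel distance in the image."""
    return dist_mm / dist_px

def required_shift_mm(p_img_px, b_img_px, res_mm_per_px):
    """Physical shifts |PB|_M, |PB|_N that bring a mark point's projection
    onto the target point's projection: pixel offset along each image axis
    times the projected-image resolution."""
    offset_px = np.asarray(b_img_px, float) - np.asarray(p_img_px, float)
    return offset_px * res_mm_per_px

res = projected_resolution(dist_mm=50.0, dist_px=250)      # 0.2 mm/pixel
shift_p1 = required_shift_mm((420, 310), (500, 355), res)  # (16.0, 9.0) mm
shift_q1 = required_shift_mm((450, 330), (500, 355), res)  # (10.0, 5.0) mm
# After these shifts P1, Q1 and B coincide in the image, i.e. lie on one
# projection ray, so aiming the beam emitter through the shifted P1 and Q1
# directs the light beam at the target point B.
```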
FIG. 4 is a partial flow diagram of a target path assistance method. As shown in fig. 4, the following steps may be included:
in step 401, a three-dimensional partial image is reconstructed from the scan information.
In step 402, a virtual path located within the three-dimensional partial image is acquired.
In this embodiment, the scanning information can be obtained by CT scanning or MR imaging, and the scanning information can be input into a computer to be reconstructed, so as to obtain a three-dimensional local image. Further, a virtual path may be established for the lesion according to the current state of the lesion shown by the three-dimensional partial image.
In step 403, the spatial pose and projection parameters of the three-dimensional partial image are adjusted.
In the present embodiment, rotation or translation may be performed based on the illustrated three-dimensional partial image to adjust the spatial posture of the three-dimensional partial image. The spatial pose may include angle information made between the three-dimensional partial image and each coordinate axis, position information on each coordinate axis, and the like. The projection parameters may include adjusting a focal length of the virtual light source, i.e., a separation distance between the virtual light source and the virtual projection imaging plane, and may further include position information of the virtual light source with respect to each coordinate axis and an imaging resolution related to a size of the projection imaging plane of the three-dimensional partial image and an image size of the two-dimensional projection image. For example, assuming that the projection imaging plane has a size of 200mm × 200mm and the projected image size of the two-dimensional projection image is 1000 pixels × 1000 pixels, the imaging resolution is 0.2 mm/pixel. For another example, when the projection imaging plane has a size of 200mm × 300mm and the projected image size of the two-dimensional projection image is 1000 pixels × 1000 pixels, the imaging resolution is 0.2 mm/pixel in one direction and 0.3 mm/pixel in the other direction.
In step 404, the three-dimensional partial image is projected to obtain a simulated two-dimensional image.
In this embodiment, a simulated two-dimensional image may be obtained by a DRR algorithm based on the three-dimensional local image, and a virtual path may be planned. Alternatively, in other embodiments, the simulated two-dimensional image may be obtained by using a Siddon algorithm, a Shear-Warp algorithm, or the like, which is not limited in this application.
In step 405, a first projection region is extracted based on the simulated two-dimensional image.
In step 406, the lesion is photographed to obtain a two-dimensional projection image.
In step 407, a second projection region is extracted based on the two-dimensional projection image.
In this embodiment, based on the size of the projection plane and the area covered by the virtual light source, there may be other areas in the simulated two-dimensional image except the area corresponding to the three-dimensional local image. Similarly, there may be other regions in the two-dimensional projection image than the region corresponding to the lesion. Therefore, the present application extracts the first projection region in the simulated two-dimensional image and the second projection region in the two-dimensional projection image by image processing. Wherein the image processing may be extracting a patient area in the simulated two-dimensional image and a patient area in the two-dimensional projection image based on the gray values.
In step 408, the edge contours of the first projection region and the second projection region are matched to obtain a coincidence ratio.
In this embodiment, the edge profiles of the first projection region and the second projection region may be matched to obtain the coincidence ratio between the two-dimensional projection image and the simulated two-dimensional image. For example, the relative positions of the first projection area and the second projection area may be adjusted based on the mark point on the first projection area and the corresponding position of the mark point on the second projection area, so that the mark point substantially coincides with the corresponding position of the mark point on the second projection area, and then edge contour matching is performed. The mark point may be a specific area on the affected part, such as a nerve mark point, a bone mark point, etc., which is not limited in this application.
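As one possible realization of this comparison, the following sketch extracts each projection region by gray value, keeps its largest outer contour, and scores outline similarity. The fixed threshold and the use of OpenCV's matchShapes are illustrative choices, not mandated by the text.

```python
import cv2

def contour_coincidence(sim_image, proj_image, gray_threshold=50):
    """Coincidence degree from edge-contour matching of the first and second
    projection regions; inputs are assumed to be 8-bit grayscale images."""
    outlines = []
    for img in (sim_image, proj_image):
        # extract the projection region of the patient by gray value
        _, mask = cv2.threshold(img, gray_threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        outlines.append(max(contours, key=cv2.contourArea))
    # matchShapes returns a dissimilarity: 0 means identical outlines
    d = cv2.matchShapes(outlines[0], outlines[1], cv2.CONTOURS_MATCH_I1, 0.0)
    return 1.0 / (1.0 + d)  # map to a (0, 1] coincidence score
```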
In step 409, it is determined whether the coincidence degree is not less than a preset threshold.
In the present embodiment, step 410 is performed when the coincidence degree between the two-dimensional projection image and the simulated two-dimensional image is equal to or greater than the preset threshold, and the process returns to step 403 when the coincidence degree is less than the preset threshold. The adjustment amount and adjustment direction for the spatial posture and projection parameters of the three-dimensional local image can be determined according to the coincidence degree, so as to improve the matching efficiency.
In step 410, it is determined that the simulated two-dimensional image matches the two-dimensional projection image.
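The overall loop of fig. 4 (steps 403-410) can be summarized as the following control-flow sketch, where the callables project, coincidence and adjust stand for the operations described above and are supplied by the caller; ProjectionSetup refers to the earlier sketch, and all names are assumptions.

```python
def register(volume, projection_image, project, coincidence, adjust,
             threshold, max_iters=200):
    """Adjust the spatial posture / projection parameters until the
    coincidence degree reaches the preset threshold."""
    setup = ProjectionSetup()                       # initial pose and parameters
    for _ in range(max_iters):
        sim = project(volume, setup)                # step 404: simulated image
        score = coincidence(sim, projection_image)  # steps 405-408: coincidence
        if score >= threshold:                      # step 409: threshold test
            return setup, sim                       # step 410: match found
        setup = adjust(setup, score)                # step 403: feedback adjustment
    raise RuntimeError("no match within the iteration budget")
```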
Fig. 5 shows another embodiment according to the present disclosure. As shown in fig. 5, the assisting method for determining a target point path may include the following steps:
in step 501, a three-dimensional partial image is reconstructed from the scan information.
In step 502, a virtual path within the three-dimensional partial image is acquired.
In step 503, the spatial pose and projection parameters of the three-dimensional partial image are adjusted.
In step 504, the three-dimensional partial image is projected based on the projection parameters to obtain a simulated two-dimensional image.
In the present embodiment, for steps 501-504, reference may be made to steps 401-404 in the embodiment shown in fig. 4, which are not described again here.
In step 505, the simulated two-dimensional image is segmented along a preset direction according to a preset proportion.
In step 506, the lesion is photographed to obtain a two-dimensional projection image.
In step 507, the two-dimensional projection image is divided along a preset direction according to a preset proportion.
In this embodiment, as shown in fig. 6 and 7, by dividing each image along the direction indicated by arrow A and the direction indicated by arrow B, the simulated two-dimensional image of fig. 6 and the two-dimensional projection image of fig. 7 may each be divided into N divided regions of size D × D.
In step 508, each segmented image of the simulated two-dimensional image is matched with a segmented image of a corresponding region on the two-dimensional projection image.
In step 509, a degree of coincidence between the simulated two-dimensional image and the two-dimensional projection image is calculated based on the matching result.
In this embodiment, as shown in fig. 6 and 7, each divided region in fig. 6 has a corresponding region in fig. 7. For example, the divided region D1 located in the second row and third column of fig. 6 corresponds to the divided region D1 located in the second row and third column of fig. 7, and the two regions may be matched to obtain a matching value. Similarly, the other divided regions in fig. 6 can be matched with the corresponding divided regions in fig. 7, and the coincidence degree is finally obtained. For example, each divided region may correspond to a weight coefficient, and the coincidence degree is the sum, over all divided regions, of the product of each region's matching degree and its weight coefficient.
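A sketch of this segmented matching follows, using per-tile normalized cross-correlation as the matching measure; the measure, tile size and uniform default weights are illustrative choices that the text leaves open.

```python
import numpy as np

def grid_coincidence(sim_image, proj_image, d=64, weights=None):
    """Coincidence degree from D x D divided regions (figs. 6 and 7): match
    each tile of the simulated image against the corresponding tile of the
    projection image and form a weighted sum of the matching degrees."""
    h, w = sim_image.shape
    tiles = [(r, c) for r in range(0, h - d + 1, d)
                    for c in range(0, w - d + 1, d)]
    if weights is None:
        weights = np.full(len(tiles), 1.0 / len(tiles))  # uniform weights
    score = 0.0
    for (r, c), wt in zip(tiles, weights):
        a = sim_image[r:r + d, c:c + d].astype(float)
        b = proj_image[r:r + d, c:c + d].astype(float)
        a -= a.mean()
        b -= b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        ncc = (a * b).sum() / denom if denom > 0 else 1.0
        score += wt * ncc
    return score
```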
In step 510, it is determined whether the coincidence degree is not less than the preset threshold.
In the present embodiment, step 511 is performed when the coincidence degree between the two-dimensional projection image and the simulated two-dimensional image is equal to or greater than the preset threshold, and the process returns to step 503 when the coincidence degree is less than the preset threshold.
In step 511, a match between the simulated two-dimensional image and the two-dimensional projection image is determined.
Based on the technical solution of the present application, a surgical navigation system is further provided. As shown in fig. 8, the navigation system may include a camera 1001, a guide structure 1002, and a display 1003. The camera 1001 may be used to photograph the affected part to obtain a two-dimensional projection image; for example, the camera 1001 may include a C-arm machine or other X-ray imaging equipment. The display 1003 may be used to show relevant images such as the three-dimensional local image, the two-dimensional projection image, and the simulated two-dimensional image. The navigation system may further comprise a processor (not shown in the figures) and a memory for storing processor-executable instructions, the processor being configured to carry out the steps of the method according to any of the embodiments described above.
Corresponding to the embodiments of the target path assisting method, the present application also provides embodiments of a target path assisting device.
FIG. 9 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 9, at the hardware level, the apparatus includes a processor 1502, an internal bus 1504, a network interface 1506, a memory 1508, and a non-volatile storage 1510, although other hardware required for services may be included. The processor 1502 reads the corresponding computer program from the non-volatile memory 1510 into the memory 1508 and then runs the computer program to form the auxiliary device 1500 for determining the target path on a logical level. Of course, besides software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware, and so on, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Referring to fig. 10, in a software implementation, the auxiliary device 1500 for determining a target point path may include a first obtaining module 1601, a second obtaining module 1602, and an adjusting module 1603; wherein:
a first acquisition module 1601 acquires a three-dimensional local image for an affected part, the three-dimensional local image including a virtual path.
A second obtaining module 1602, configured to obtain image position information of the virtual path on the simulated two-dimensional image when the simulated two-dimensional image obtained based on the three-dimensional local image projection matches the two-dimensional projection image obtained based on the affected part.
The adjusting module 1603 adjusts the guiding structure according to the image position information of the plurality of position points on the virtual path, so that the guiding direction of the guiding structure sequentially points to a plurality of target points of the affected part corresponding to the plurality of position points.
Optionally, the second obtaining module 1602 includes a first projection unit, a second projection unit, and a first obtaining unit, where:
and the first projection unit is used for carrying out perspective projection on the affected part to obtain the two-dimensional projection image.
And the second projection unit is used for projecting the three-dimensional local image to obtain the simulated two-dimensional image.
The first obtaining unit is used for obtaining the coincidence degree of the simulated two-dimensional image and the two-dimensional projection image, and when the coincidence degree is not smaller than a preset threshold value, the simulated two-dimensional image is determined to be matched with the two-dimensional projection image.
Optionally, the first obtaining unit includes an extracting subunit and a matching subunit, where:
an extraction subunit that extracts a first projection region in the simulated two-dimensional image and a second projection region in the two-dimensional projection image;
and the first matching subunit calculates the contact ratio according to the edge contour matching degree between the first projection area and the second projection area.
Optionally, the first obtaining unit includes a splitting subunit and a second matching subunit, where:
the segmentation subunit is used for segmenting the simulation two-dimensional image and the two-dimensional projection image along a preset direction according to a preset proportion;
and the second matching subunit is used for matching each segmentation region of the simulated two-dimensional image with the segmentation region corresponding to the two-dimensional projection image to obtain the coincidence degree.
Optionally, the adjusting module 1603 includes a fourth obtaining unit and a second adjusting unit, where:
a fourth obtaining unit, configured to obtain, according to image position information of the virtual path on the simulated two-dimensional image, image position information of a projection path on the two-dimensional projection image, the projection path being matched with the virtual path;
and the second adjusting unit adjusts the guide structure according to the image position information of a plurality of position points on the projection path.
Optionally, the auxiliary device is applied to a system for determining a target point path, and the determining system includes a positioning device, where the positioning device includes a plurality of first preset mark points located on a first plane and a plurality of second preset mark points located on a second plane, and a space is provided between the first plane and the second plane in a projection direction;
the adjustment module 1603 includes:
the second acquisition unit is used for acquiring the positioning position information of each first preset mark point on the first plane and each second preset mark point on the second plane in a positioning coordinate system;
the third acquisition unit is used for acquiring image position information of each first preset mark point on the first plane and each second preset mark point on the second plane in an image coordinate system, and the image coordinate system is formed on the basis of a projection image of the positioning device;
and the first adjusting unit adjusts the guide structure according to the positioning position information and the image position information of the first preset mark point, the positioning position information and the image position information of the second preset mark point and the image position information of any position point on the virtual path.
Optionally, the device further includes a calibration module, wherein:
the calibration module is used for determining a first conversion relation between a calibration coordinate system and an image coordinate system according to calibration position information of a plurality of third preset mark points in the calibration coordinate system and image coordinate information in the image coordinate system;
and obtaining the calibrated two-dimensional projection image according to the first conversion relation.
Optionally, the directing structure includes a light beam emitting component, and the adjusting module includes:
a first pointing unit to point the light beam emitted from the light beam component to a target point.
Alternatively, the guidance structure comprises an instrument guide channel, and the adjustment module comprises:
and the central axis of the instrument guide channel points to the target point.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 1510 comprising instructions, executable by the processor 1502 of the electronic device to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations that follow the general principles of the application, including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the application is limited only by the appended claims.

Claims (9)

1. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, perform the steps of:
acquiring a three-dimensional local image of an affected part and a virtual path located within the three-dimensional local image;
when a simulated two-dimensional image obtained by projecting the three-dimensional local image matches a two-dimensional projection image obtained from the affected part, acquiring image position information of the virtual path on the simulated two-dimensional image; and
adjusting a guide structure according to the image position information of a plurality of position points on the virtual path, so that the guide direction of the guide structure points in sequence to a plurality of target points of the affected part corresponding to the plurality of position points;
wherein the method is applied to a target point path determining system, the determining system comprising a positioning device, the positioning device comprising a plurality of first preset mark points located on a first plane and a plurality of second preset mark points located on a second plane, the first plane and the second plane being spaced apart in the projection direction;
and wherein the adjusting of the guide structure according to the image position information of the plurality of position points on the virtual path comprises:
acquiring positioning position information of each first preset mark point on the first plane and each second preset mark point on the second plane in a positioning coordinate system;
acquiring image position information of each first preset mark point on the first plane and each second preset mark point on the second plane in an image coordinate system, the image coordinate system being established on the basis of a projection image of the positioning device; and
adjusting the guide structure according to the positioning position information and the image position information of the first preset mark points, the positioning position information and the image position information of the second preset mark points, and the image position information of any position point on the virtual path.
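The two-plane arrangement in claim 1 admits a simple geometric reading: the mark points on each plane let any image point be mapped back onto that plane, and the line through the two mapped points is the ray along which the guide structure can be aligned. Below is a minimal sketch, assuming each plane lies at a known height (z1, z2) along a common axis with its in-plane axes aligned to the positioning coordinate system, and at least four mark points per plane are visible; the homography-based construction and all names are illustrative assumptions, not the patent's stated algorithm.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography mapping src (N, 2) to dst (N, 2) via DLT, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

def image_point_to_ray(img_pt, marks_img_1, marks_plane_1, z1,
                       marks_img_2, marks_plane_2, z2):
    """Back-project one image point to a 3D line through the two marker planes."""
    H1 = fit_homography(marks_img_1, marks_plane_1)  # image -> plane 1 (in-plane 2D)
    H2 = fit_homography(marks_img_2, marks_plane_2)  # image -> plane 2 (in-plane 2D)
    p1 = np.array([*apply_homography(H1, img_pt), z1])  # 3D point on plane 1
    p2 = np.array([*apply_homography(H2, img_pt), z2])  # 3D point on plane 2
    return p1, p2 - p1  # ray: origin and direction
```

Calling `image_point_to_ray` with the image position of one position point on the virtual path yields the line onto which the guide direction would be adjusted for the corresponding target point.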
2. The computer-readable storage medium of claim 1, wherein the matching of the simulated two-dimensional image obtained by projecting the three-dimensional local image with the two-dimensional projection image obtained from the affected part comprises:
performing perspective projection on the affected part to obtain the two-dimensional projection image;
projecting the three-dimensional local image to obtain the simulated two-dimensional image; and
acquiring the coincidence degree of the simulated two-dimensional image and the two-dimensional projection image, and determining that the simulated two-dimensional image matches the two-dimensional projection image when the coincidence degree is not less than a preset threshold value.
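For a concrete notion of "coincidence degree", one common stand-in is the Dice overlap of two binarized projection masks, compared against a preset threshold. This is a sketch only; the 0.9 threshold and the choice of Dice are illustrative assumptions, not values from the patent.

```python
import numpy as np

def coincidence_degree(sim_mask: np.ndarray, proj_mask: np.ndarray) -> float:
    """Dice overlap of two boolean masks of the same shape, in [0, 1]."""
    inter = np.logical_and(sim_mask, proj_mask).sum()
    denom = sim_mask.sum() + proj_mask.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks count as matched

def is_matched(sim_mask, proj_mask, threshold=0.9):
    """Matched when the coincidence degree is not less than the preset threshold."""
    return coincidence_degree(sim_mask, proj_mask) >= threshold
```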
3. The computer-readable storage medium of claim 2, wherein the acquiring of the coincidence degree of the simulated two-dimensional image and the two-dimensional projection image comprises:
extracting a first projection area in the simulated two-dimensional image and a second projection area in the two-dimensional projection image; and
calculating the coincidence degree according to the edge contour matching degree between the first projection area and the second projection area.
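One plausible realization of this edge-contour comparison (a sketch, not the patent's algorithm) takes the largest connected region in each binarized image with OpenCV and scores the similarity of their outlines; folding the `cv2.matchShapes` dissimilarity into a 0..1 degree is an illustrative choice.

```python
import cv2
import numpy as np

def largest_contour(binary_img: np.ndarray):
    """binary_img: single-channel uint8 mask; returns its largest outer contour."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

def contour_coincidence(sim_bin: np.ndarray, proj_bin: np.ndarray) -> float:
    c1 = largest_contour(sim_bin)   # first projection area
    c2 = largest_contour(proj_bin)  # second projection area
    dissimilarity = cv2.matchShapes(c1, c2, cv2.CONTOURS_MATCH_I1, 0.0)
    return 1.0 / (1.0 + dissimilarity)  # 1.0 for identical outlines
```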
4. The computer-readable storage medium of claim 2, wherein the acquiring of the coincidence degree of the simulated two-dimensional image and the two-dimensional projection image comprises:
dividing the simulated two-dimensional image and the two-dimensional projection image along a preset direction according to a preset proportion; and
matching each divided region of the simulated two-dimensional image with the corresponding divided region of the two-dimensional projection image to obtain the coincidence degree.
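Read literally, this divides both images into regions and aggregates per-region similarity. A sketch under stated assumptions (same image size, vertical division into equal strips as the preset direction and proportion, per-strip IoU averaged into one degree) could look like:

```python
import numpy as np

def strip_coincidence(sim_mask: np.ndarray, proj_mask: np.ndarray,
                      n_strips: int = 4) -> float:
    """Average per-strip IoU of two boolean masks of the same shape."""
    bounds = np.linspace(0, sim_mask.shape[0], n_strips + 1, dtype=int)
    scores = []
    for top, bottom in zip(bounds[:-1], bounds[1:]):
        a, b = sim_mask[top:bottom], proj_mask[top:bottom]
        union = np.logical_or(a, b).sum()
        inter = np.logical_and(a, b).sum()
        scores.append(inter / union if union else 1.0)  # empty strips count as matched
    return float(np.mean(scores))
```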
5. The computer-readable storage medium of claim 1, wherein the adjusting of the guide structure according to the image position information of the plurality of position points on the virtual path comprises:
acquiring, according to the image position information of the virtual path on the simulated two-dimensional image, image position information of a projection path matching the virtual path on the two-dimensional projection image; and
adjusting the guide structure according to the image position information of a plurality of position points on the projection path.
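Once the two images are matched, transferring the path is a point-wise mapping. A sketch, assuming the registration between the matched images is captured by a 3x3 homogeneous transform `T_reg` (the identity when they are pixel-aligned); the name and the homogeneous formulation are assumptions for illustration.

```python
import numpy as np

def transfer_path(path_pts: np.ndarray, T_reg: np.ndarray) -> np.ndarray:
    """Map (N, 2) virtual-path points on the simulated image onto the projection image."""
    homog = np.hstack([path_pts, np.ones((len(path_pts), 1))]) @ T_reg.T
    return homog[:, :2] / homog[:, 2:3]
```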
6. The computer-readable storage medium of claim 1, wherein the steps further comprise:
determining a first conversion relation between a calibration coordinate system and an image coordinate system according to calibration position information of a plurality of third preset mark points in the calibration coordinate system and image position information of those points in the image coordinate system; and
obtaining the calibrated two-dimensional projection image according to the first conversion relation.
7. The computer-readable storage medium of claim 1, wherein the guide structure comprises a light beam emitting component, and the guide direction of the guide structure pointing in sequence to the target points of the affected part corresponding to the plurality of position points comprises:
pointing the light beam emitted by the light beam emitting component at a target point;
or the guide structure comprises an instrument guide channel,
and the guide direction of the guide structure pointing in sequence to the target points of the affected part corresponding to the plurality of position points comprises:
pointing the central axis of the instrument guide channel at a target point.
8. An auxiliary device for determining a target point path, comprising:
a first acquisition module configured to acquire a three-dimensional local image of an affected part and a virtual path located within the three-dimensional local image;
a second acquisition module configured to acquire, when a simulated two-dimensional image obtained by projecting the three-dimensional local image matches a two-dimensional projection image obtained from the affected part, projection information of the virtual path on the simulated two-dimensional image; and
an adjusting module configured to adjust a guide structure according to image position information of a plurality of position points on the virtual path, so that the guide direction of the guide structure points in sequence to target points of the affected part corresponding to the plurality of position points;
wherein the device is applied to a target point path determining system, the determining system comprising a positioning device, the positioning device comprising a plurality of first preset mark points located on a first plane and a plurality of second preset mark points located on a second plane, the first plane and the second plane being spaced apart in the projection direction;
and wherein the adjusting of the guide structure according to the image position information of the plurality of position points on the virtual path comprises:
acquiring positioning position information of each first preset mark point on the first plane and each second preset mark point on the second plane in a positioning coordinate system;
acquiring image position information of each first preset mark point on the first plane and each second preset mark point on the second plane in an image coordinate system, the image coordinate system being established on the basis of a projection image of the positioning device; and
adjusting the guide structure according to the positioning position information and the image position information of the first preset mark points, the positioning position information and the image position information of the second preset mark points, and the image position information of any position point on the virtual path.
9. A surgical navigation system, comprising:
a camera for acquiring a projection image;
a guide structure;
a display for showing an image;
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions stored in the computer-readable storage medium of any one of claims 1-7.
CN201910161268.6A 2019-03-04 2019-03-04 Auxiliary method, device and system for determining target point path and readable storage medium Active CN109925054B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201910161268.6A CN109925054B (en) 2019-03-04 2019-03-04 Auxiliary method, device and system for determining target point path and readable storage medium
US17/431,683 US20220133409A1 (en) 2019-03-04 2020-03-04 Method for Determining Target Spot Path
PCT/CN2020/077846 WO2020177725A1 (en) 2019-03-04 2020-03-04 Target path determining method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910161268.6A CN109925054B (en) 2019-03-04 2019-03-04 Auxiliary method, device and system for determining target point path and readable storage medium

Publications (2)

Publication Number Publication Date
CN109925054A CN109925054A (en) 2019-06-25
CN109925054B true CN109925054B (en) 2020-12-18

Family

ID=66986378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910161268.6A Active CN109925054B (en) 2019-03-04 2019-03-04 Auxiliary method, device and system for determining target point path and readable storage medium

Country Status (1)

Country Link
CN (1) CN109925054B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220133409A1 (en) * 2019-03-04 2022-05-05 Hangzhou Santan Medical Technology Co., Ltd Method for Determining Target Spot Path
CN111494009B (en) * 2020-04-27 2021-09-14 上海霖晏医疗科技有限公司 Image registration method and device for surgical navigation and surgical navigation system
CN114073579B (en) * 2020-08-19 2022-10-14 杭州三坛医疗科技有限公司 Operation navigation method, device, electronic equipment and storage medium
CN114424978B (en) * 2021-11-22 2023-05-12 赛诺威盛科技(北京)股份有限公司 Fusion registration method and device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9947110B2 (en) * 2014-02-13 2018-04-17 Brainlab Ag Method for assisting the positioning of a medical structure on the basis of two-dimensional image data
JP6334052B2 (en) * 2015-06-05 2018-05-30 チェン シェシャオChen,Chieh Hsiao Intraoperative tracking method
WO2017158592A2 (en) * 2016-03-13 2017-09-21 David Tolkowsky Apparatus and methods for use with skeletal procedures
CN107157579A (en) * 2017-06-26 2017-09-15 苏州铸正机器人有限公司 A kind of pedicle screw is implanted into paths planning method
CN107679574A (en) * 2017-09-29 2018-02-09 深圳开立生物医疗科技股份有限公司 Ultrasonoscopy processing method and system

Also Published As

Publication number Publication date
CN109925054A (en) 2019-06-25

Similar Documents

Publication Publication Date Title
CN109925054B (en) Auxiliary method, device and system for determining target point path and readable storage medium
CN109993792B (en) Projection method, device and system and readable storage medium
JP2022141792A5 (en)
CN109925052B (en) Target point path determination method, device and system and readable storage medium
US8597211B2 (en) Determination of indicator body parts and pre-indicator trajectories
US6415171B1 (en) System and method for fusing three-dimensional shape data on distorted images without correcting for distortion
EP2849630B1 (en) Virtual fiducial markers
JP5837261B2 (en) Multi-camera device tracking
US9721379B2 (en) Real-time simulation of fluoroscopic images
US11317879B2 (en) Apparatus and method for tracking location of surgical tools in three dimension space based on two-dimensional image
JP6806655B2 (en) Radiation imaging device, image data processing device and image processing program
CN112233155B (en) 2D-3D image registration algorithm
CN109925053B (en) Method, device and system for determining surgical path and readable storage medium
JP6960921B2 (en) Providing projection dataset
TWI836493B (en) Method and navigation system for registering two-dimensional image data set with three-dimensional image data set of body of interest
CN109155068B (en) Motion compensation in combined X-ray/camera interventions
WO2020177725A1 (en) Target path determining method
EP4144298A1 (en) Object visualisation in x-ray imaging
JP7407831B2 (en) Intervention device tracking
CN116959675A (en) External fixation three-dimensional reconstruction and electronic prescription generation method based on scribing
CN115661234A (en) Device for synchronizing planned points between images, electronic apparatus, storage medium, and method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant