CN109833092A - Internal navigation system and method - Google Patents


Publication number
CN109833092A
Authority
CN
China
Prior art keywords
image
target
video
optical indicia
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711230713.7A
Other languages
Chinese (zh)
Inventor
杨永生 (Yang Yongsheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Fu Zhi Da Medical Technology Co Ltd
Original Assignee
Shanghai Fu Zhi Da Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Fu Zhi Da Medical Technology Co Ltd
Priority to CN201711230713.7A
Publication of CN109833092A
Legal status: Pending

Abstract

The present invention provides an internal navigation system and method. The system includes: a video acquisition device for capturing video of a target in real time, the video containing an optical marker fixed on the target, where the angle from which the video acquisition device captures the video is consistent with the user's direction of observation, and the target comprises an instrument and/or an object; a positioning device for identifying the optical marker in the video and, according to the identified optical marker and the optical marker in a 3-D image of the target, anchoring the 3-D image of the target to the corresponding position in the video; and a display for three-dimensionally displaying, in the video, the part of the target that lies inside the object's body. This scheme not only effectively avoids the field-of-view problem of fixed infrared optical navigation and the problems of magnetic navigation systems, in which the navigation device occupies operative space and the electromagnetic field is vulnerable to interference that degrades positioning accuracy, but also greatly reduces the consumable cost of navigated surgery and improves the practicality of navigation.

Description

Internal navigation system and method
Technical field
The present invention relates to the technical field of computer-aided medicine, and in particular to an internal navigation system and method.
Background technique
In medical practice, it is often necessary to insert a medical instrument into the human body to perform some operation. For example, a micro-sample of pathological tissue inside the body can be obtained by percutaneous puncture. In such operations, accurate navigation of the medical instrument is essential; in the example above, image-based localization and navigation of the puncture needle has long been a key demand in clinical practice.
Existing image-guided navigation systems fall broadly into two classes: optical navigation systems based on infrared-reflective markers, and electromagnetic-field navigation systems based on micro-coils (magnetic navigation for short). In actual use, both kinds of system have problems, as summarized in Table 1:
Table 1. Overview of existing navigation systems
Accordingly, it is desirable to provide an internal navigation system and method that at least partially solve the above problems in the prior art.
Summary of the invention
To at least partially solve the problems in the prior art, according to one aspect of the present invention, an internal navigation system is provided, comprising:
a video acquisition device for capturing video of a target in real time, wherein the video contains an optical marker fixed on the target, and the angle at which the video acquisition device captures the video is consistent with the user's direction of observation, the target comprising an instrument and/or an object;
a positioning device for identifying the optical marker in the video and, according to the identified optical marker and the optical marker in a 3-D image of the target, anchoring the 3-D image of the target to the corresponding position in the video; and
a display for three-dimensionally displaying, in the video, the part of the target that lies inside the object's body.
Illustratively, the video acquisition device is a head-mounted optical camera.
Illustratively, the display is a head-mounted display.
Illustratively, the optical marker is a two-dimensional barcode (QR code) and/or a checkerboard pattern.
Illustratively, the optical marker is a three-dimensional figure on the surface of the target.
Illustratively, the positioning device anchors the 3-D image of the target to the corresponding position in the video, according to the identified optical marker and the optical marker in the 3-D image of the target, in the following manner:
determining, from the identified optical marker and the feature points of the optical marker in the 3-D image, a first conversion parameter for transforming the 3-D image into the user coordinate system, the video being defined in the user coordinate system; and
anchoring the 3-D image to the corresponding position in the video using the first conversion parameter.
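The first conversion parameter described above is a rigid transform (a rotation plus a translation) fitted to corresponding marker feature points. The patent does not prescribe an algorithm; the sketch below uses the SVD-based Kabsch/Procrustes solution as one plausible choice, with made-up point coordinates purely for illustration.

```python
import numpy as np

def rigid_transform_from_points(p, q):
    """Estimate rotation R and translation t such that q ≈ R @ p + t.

    p, q: (N, 3) arrays of corresponding marker feature points
    (p in the image coordinate system, q in the user coordinate system).
    Uses the SVD-based Kabsch/Procrustes solution.
    """
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    cp, cq = p.mean(axis=0), q.mean(axis=0)    # centroids
    H = (p - cp).T @ (q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Four non-coplanar marker feature points in the 3-D image (invented values) ...
p = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
# ... and the same points in the user coordinate system, generated here with a
# known rotation about z plus a translation so the result can be checked.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
q = p @ R_true.T + t_true

R, t = rigid_transform_from_points(p, q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With the recovered (R, t), every point of the 3-D image can be mapped into the user coordinate system and thus anchored in the video frame.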
Illustratively, the positioning device anchors the 3-D image of the target to the corresponding position in the video, according to the identified optical marker and the optical marker in the 3-D image of the target, in the following manner:
determining, from the optical marker in an image of the target and the feature points of the optical marker in the 3-D image, a second conversion parameter for transforming the 3-D image into a target coordinate system, wherein the image of the target is defined in the target coordinate system and also contains the optical marker;
converting the 3-D image into a 3-D image in the target coordinate system using the second conversion parameter;
determining, from the identified optical marker and the feature points of the optical marker in the 3-D image in the target coordinate system, a third conversion parameter for transforming the 3-D image in the target coordinate system into the user coordinate system, the video being defined in the user coordinate system; and
anchoring the 3-D image in the target coordinate system to the corresponding position in the video using the third conversion parameter.
Illustratively, the video acquisition device includes a position sensor for detecting the position of the video acquisition device in real time;
the positioning device anchors the 3-D image of the target to the corresponding position in the video in the following manner:
determining, from the optical marker in an image of the target and the feature points of the optical marker in the 3-D image, a fourth conversion parameter for transforming the 3-D image into a target coordinate system, wherein the image of the target is an image in the target coordinate system captured by the video acquisition device at a first moment;
converting the 3-D image into a 3-D image in the target coordinate system using the fourth conversion parameter;
determining, from the detected change in position of the video acquisition device between the first moment and the current moment, a fifth conversion parameter for transforming the 3-D image in the target coordinate system into the user coordinate system, the video being defined in the user coordinate system; and
anchoring the 3-D image in the target coordinate system to the corresponding position in the video using the fifth conversion parameter.
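The fourth and fifth conversion parameters can be pictured as two rigid transforms chained together: a one-time marker-based registration into the target coordinate system, then a per-frame update from the camera's position sensor. The sketch below, with illustrative 4×4 homogeneous matrices and values that are not from the patent, shows the chaining.

```python
import numpy as np

def to_hom(R, t):
    """Pack a rotation R (3x3) and translation t (3,) into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Fourth conversion parameter: registration into the target coordinate system
# at the first moment (illustrative: a pure translation of 10 along z).
T_target = to_hom(np.eye(3), np.array([0.0, 0.0, 10.0]))

# Pose change of the head-worn camera between the first moment and now, as
# reported by the position sensor (illustrative: a shift of 0.5 along y).
T_motion = to_hom(np.eye(3), np.array([0.0, 0.5, 0.0]))

# Fifth conversion parameter: chain the two, so the 3-D image follows the
# camera without re-detecting the optical marker in every frame.
T_user = T_motion @ T_target

point = np.array([1.0, 2.0, 3.0, 1.0])   # homogeneous voxel coordinate
print(T_user @ point)                     # the voxel shifted by (0, 2.5+? ...) both translations
```

The design point is that only the cheap sensor transform changes per frame; the marker-based transform is reused from the first moment.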
Illustratively, the internal navigation system further includes an input device for receiving input from the user;
the positioning device is further configured to adjust the corresponding position of the 3-D image of the target in the video according to the input.
Illustratively, the target is an object, and the 3-D image of the object is an operative 3-D image obtained by scanning the object with a CT, MRI, or ultrasound device and then performing reconstruction.
Illustratively, the target is an instrument, and the 3-D image of the instrument is an operative 3-D image obtained by drawing the instrument in graphics software, by scanning and mapping the instrument with a 3-D scanner, or by CT-scanning the instrument and performing reconstruction.
According to a further aspect of the present invention, an internal navigation method is also provided, comprising:
capturing video of a target in real time, wherein the video contains an optical marker fixed on the target and the capture angle of the video is consistent with the user's direction of observation, the target comprising an instrument and/or an object;
identifying the optical marker in the video and, according to the identified optical marker and the optical marker in a 3-D image of the target, anchoring the 3-D image of the target to the corresponding position in the video; and
three-dimensionally displaying, in the video, the part of the target that lies inside the object's body.
Illustratively, anchoring the 3-D image of the target to the corresponding position in the video according to the identified optical marker and the optical marker in the 3-D image of the target further comprises:
determining, from the identified optical marker and the feature points of the optical marker in the 3-D image, a first conversion parameter for transforming the 3-D image into the user coordinate system, the video being defined in the user coordinate system; and
anchoring the 3-D image to the corresponding position in the video using the first conversion parameter.
Illustratively, anchoring the 3-D image of the target to the corresponding position in the video according to the identified optical marker and the optical marker in the 3-D image of the target further comprises:
determining, from the optical marker in an image of the target and the feature points of the optical marker in the 3-D image, a second conversion parameter for transforming the 3-D image into a target coordinate system, wherein the image of the target is defined in the target coordinate system and also contains the optical marker;
converting the 3-D image into a 3-D image in the target coordinate system using the second conversion parameter;
determining, from the identified optical marker and the feature points of the optical marker in the 3-D image in the target coordinate system, a third conversion parameter for transforming the 3-D image in the target coordinate system into the user coordinate system, the video being defined in the user coordinate system; and
anchoring the 3-D image in the target coordinate system to the corresponding position in the video using the third conversion parameter.
Illustratively, the method further comprises: detecting in real time the position of the video acquisition device used to capture the target in real time;
anchoring the 3-D image of the target to the corresponding position in the video according to the identified optical marker and the optical marker in the 3-D image of the target further comprises:
determining, from the optical marker in an image of the target and the feature points of the optical marker in the 3-D image, a fourth conversion parameter for transforming the 3-D image into a target coordinate system, wherein the image of the target is an image in the target coordinate system captured by the video acquisition device at a first moment;
converting the 3-D image into a 3-D image in the target coordinate system using the fourth conversion parameter;
determining, from the detected change in position of the video acquisition device between the first moment and the current moment, a fifth conversion parameter for transforming the 3-D image in the target coordinate system into the user coordinate system, the video being defined in the user coordinate system; and
anchoring the 3-D image in the target coordinate system to the corresponding position in the video using the fifth conversion parameter.
Illustratively, the method further comprises:
receiving input from the user; and
adjusting the corresponding position of the 3-D image of the target in the video according to the input.
With the internal navigation system and method provided by the present invention, no equipment or consumables that might block the user's line of sight appear within the operating space around the object's body. The system therefore not only effectively avoids the field-of-view problem of fixed infrared optical navigation, but also avoids the problems of magnetic navigation, in which the navigation hardware occupies operative space and the electromagnetic field is vulnerable to interference that degrades positioning accuracy. Because the system uses visible-light optical identification, it greatly reduces the consumable cost of navigated surgery while ensuring the practicality of navigation. The scheme is also simple to implement: the user only needs to keep attention on the patient or the surgical instrument, traditional working habits are unchanged, and the learning curve is shallow, allowing more surgical subjects to benefit from internal navigation.
This Summary introduces a selection of concepts in simplified form that are described in further detail in the Detailed Description. It is not intended to identify key or essential features of the claimed subject matter, nor to limit its scope of protection.
The advantages and features of the present invention are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
The following drawings are incorporated herein as part of the present invention for the purpose of understanding it. The drawings illustrate embodiments of the invention and, together with their description, serve to explain its principles. In the drawings:
Fig. 1 is a schematic block diagram of an internal navigation system according to an embodiment of the present invention and its working environment;
Fig. 2 shows an object according to an embodiment of the present invention and an instrument intended to operate on it;
Fig. 3 shows a 3-D image of an object according to an embodiment of the present invention;
Fig. 4 shows a 3-D image of an instrument according to an embodiment of the present invention;
Fig. 5 shows one frame of video according to an embodiment of the present invention;
Fig. 6 shows the manner in which a positioning device according to an embodiment of the present invention anchors the 3-D image of an object to the corresponding position in the video;
Fig. 7 shows the manner in which a positioning device according to another embodiment of the present invention anchors the 3-D image of an object to the corresponding position in the video;
Fig. 8 shows the coordinate transformation process in the embodiment of Fig. 7;
Fig. 9 shows the manner in which a positioning device according to yet another embodiment of the present invention anchors the 3-D image of an object to the corresponding position in the video; and
Figure 10 shows the manner in which a positioning device according to an embodiment of the present invention anchors the 3-D image of an instrument to the corresponding position in the video.
Detailed description of embodiments
In the following description, numerous details are provided for a thorough understanding of the present invention. Those skilled in the art will recognize, however, that the following description concerns only preferred embodiments and that the invention may be practiced without one or more of these details. In addition, some technical features well known in the art are not described, to avoid obscuring the invention.
The internal navigation system provided herein gives the user navigation for tissues inside an object's body and/or for the part of an instrument inside the object's body. Here, the user is the observer of the whole internal navigation procedure and also the operator who inserts the instrument into the object's body. The object may be a person or another animal on whom the user needs to operate. The instrument may be any tool that can be inserted into the object's body, for example a puncture needle, a biopsy needle, a radio-frequency or microwave ablation needle, an ultrasound probe, a rigid endoscope, sponge-holding forceps for endoscopic surgery, an electric scalpel, a stapler, or another medical instrument.
In the internal navigation system described above, the position of the target is first determined from the visible-light image, and the target is then displayed three-dimensionally in that image. Illustratively, the target may be an instrument and/or an object. In this way, the actually invisible internal organs, lesions, and medical instruments inside the object's body are displayed three-dimensionally to the user, guiding the user to operate the instrument in the real environment.
According to one aspect of the present invention, an internal navigation system is provided. The display of the internal navigation system shows the user a video in which both the object and the instrument can be shown at their corresponding positions, providing the user with more comprehensive navigation information. Those skilled in the art will appreciate that only the object or only the instrument may be shown instead, the undisplayed target being handled according to the user's experience. From the description below, embodiments that display only the object or only the instrument will be readily understood by those of ordinary skill in the art and, for brevity, are not detailed here.
Fig. 1 shows a schematic block diagram of an internal navigation system according to an embodiment of the present invention and its working environment. The internal navigation system shown in Fig. 1 includes a video acquisition device 110, a positioning device 120, and a display 130. The video acquisition device 110 captures video of the object and the instrument in real time. The user can watch the display 130, which shows this video; the video shows not only the surface portions of the object and the instrument as captured by the video acquisition device 110, but also, at the corresponding positions, three-dimensional renderings of the actually invisible internal organs and lesions of the object and of the part of the instrument inside the object's body. In other words, in the video, the actually invisible internal organs, lesions, and in-body part of the instrument are aligned with the real body and the real instrument, guiding the user to operate the instrument in a virtual three-dimensional scene resembling the true environment.
The capture angle of the video acquisition device 110 is consistent with the user's direction of observation. When using the internal navigation system, the user may wear the video acquisition device 110 on the body, for example on the head. Optionally, the video acquisition device is a head-mounted optical camera, whose capture angle stays well aligned with the user's direction of observation regardless of posture. This ensures both that the angle of the video shown on the display 130 is the angle from which the user is watching, guaranteeing the precision of instrument navigation, and that the internal navigation system does not interfere with the user's operations, significantly improving the user experience.
When the internal navigation system is working, a first optical marker is fixed to the surface of the object and a second optical marker is fixed to the surface of the instrument. For example, the first optical marker can be adhered to the object's skin. On the instrument, the second optical marker can be printed on the part near the handle, which remains outside the object's body while the system is working. Both the first and second optical markers are identifiable in visible-light images.
Fig. 2 shows an object according to an embodiment of the present invention and an instrument intended to operate on it. As shown in Fig. 2, both the object and the instrument surface carry an optical marker, a QR code in Fig. 2. A QR code is a black-and-white planar pattern whose points are very easy to identify; by identifying at least three of these points, the QR code can be localized. Because the QR code is fixed on the object or the instrument, the object or instrument carrying it can thereby be localized. Optionally, the optical marker can also be another planar pattern such as a checkerboard. Using a QR code or checkerboard as the optical marker makes localizing the object or instrument more accurate and faster, so that even a quickly moving instrument can be navigated accurately.
Optionally, the optical marker fixed on the object or instrument surface can also be a three-dimensional figure. For example, during instrument design and manufacture, the second optical marker can be the instrument's handle, or a structure attached to the side of the handle. Although spatial localization from a three-dimensional figure takes longer to compute than from a planar pattern, it achieves higher positioning accuracy for targets that are fixed or moving slowly.
The video captured by the video acquisition device 110 contains the first optical marker and the second optical marker which, as described above, are used to localize the object and the instrument in the video, respectively.
The positioning device 120 identifies the first and second optical markers in the video, that is, it recognizes them in each frame. The recognition can be based on existing, mature image-recognition algorithms, for example methods based on texture features, frequency-domain analysis, or machine learning.
The positioning device 120 also anchors the 3-D image of the object to the corresponding position in the video captured by the video acquisition device 110, according to the identified first optical marker and the first optical marker in the 3-D image of the object, and anchors the 3-D image of the instrument to the corresponding position in the video, according to the identified second optical marker and the second optical marker in the 3-D image of the instrument.
The 3-D image of the object contains the feature points of the first optical marker. Illustratively, the object is first scanned with a CT, MRI, or ultrasound device to obtain tomographic images, which are then reconstructed to obtain the 3-D image. This 3-D image is defined in an image coordinate system. Before scanning, a marker picture or marker structure can be fixed at a specific position on the object and scanned together with it; the marker contains identification points recognizable by the scanning device (e.g., CT, MRI, or ultrasound). This way of obtaining the 3-D image of the object is low in cost, easy to implement, and highly accurate. Those skilled in the art will appreciate that this example is illustrative rather than limiting.
To better present the object elements of interest inside the object's body, such as bones, blood vessels, organs, and pathological targets, and to avoid interference from other irrelevant elements, after obtaining the tomographic images containing the feature points of the first optical marker, conventional image post-processing methods are used to segment the feature points and the object's specific anatomical structures (e.g., bones, blood vessels, organs, and lesions) from the raw tomographic data, and the 3-D image of the object is generated by reconstruction. This 3-D image can display the object's elements and the feature points of the first optical marker three-dimensionally. Fig. 3 shows the 3-D image of an object according to an embodiment of the present invention, displaying the object's bones and liver together with three feature points of the first optical marker.
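As a toy illustration of the segmentation step described above (the patent names no specific method; simple Hounsfield-unit thresholding on an invented miniature CT volume is assumed here):

```python
import numpy as np

# Tiny 2x2x2 CT volume in Hounsfield units (invented values, not patent data).
volume = np.array([[[-1000, 40], [300, 700]],
                   [[20, 1200], [-500, 60]]])

# Simple threshold segmentation: bone is roughly at or above +300 HU,
# soft tissue roughly between -100 and +100 HU.
bone_mask = volume >= 300
soft_mask = (volume > -100) & (volume < 100)

print(int(bone_mask.sum()), int(soft_mask.sum()))  # 3 3
```

Real pipelines would apply such masks slice-by-slice and then reconstruct a surface (e.g., by marching cubes), but the thresholding idea is the same.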
The 3-D image of the instrument contains the second optical marker, including the spatial relationship between the second optical marker and the instrument itself. Fig. 4 shows the 3-D image of an instrument according to an embodiment of the present invention. Illustratively, the 3-D image of the instrument can be obtained by drawing the instrument in graphics software, by scanning and mapping the instrument with a 3-D scanner, or by CT-scanning the instrument and performing reconstruction.
As described above, the 3-D image of the object contains the first optical marker, and both the object and the first optical marker can be regarded as approximately rigid. Therefore, by bringing the first optical marker in the 3-D image of the object into exact alignment with the first optical marker identified in the video, the positioning device 120 can anchor the 3-D image of the object to the corresponding position in the video frames captured by the video acquisition device 110.
It will be appreciated that the above alignment can use only the feature points of the first optical marker: several feature points can stand in for the whole marker to localize the object. Using feature points is merely illustrative; other shapes on the first optical marker, such as straight lines, can also be used to localize the object.
It will be appreciated that the 3-D image of the object may contain only some of the object's elements, which may be tissues of the object, such as various organs, the trachea, blood vessels, and bones, or may be lesion locations. In this way, by projecting the 3-D image of the object to the real-space three-dimensional position consistent with the object itself, the 3-D image of the object is matched to the object's position in the video.
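Matching the 3-D image to the object's position in the video ultimately requires projecting transformed 3-D points into the frame. A standard pinhole camera model is one common way to do this; the model and the intrinsic parameters below are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def project_points(pts_user, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project 3-D points (already in the user/camera coordinate system)
    onto the video frame with a pinhole camera model. The intrinsics
    fx, fy, cx, cy are illustrative defaults, not values from the patent."""
    pts = np.asarray(pts_user, float)
    u = fx * pts[:, 0] / pts[:, 2] + cx
    v = fy * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)

# A voxel of the organ model, 0.5 m in front of the camera and slightly right:
pix = project_points(np.array([[0.1, 0.0, 0.5]]))
print(pix)  # [[480. 240.]]
```

Each projected pixel gives the location in the frame where the corresponding voxel of the virtual organ should be drawn over the live video.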
Similarly to the above method of localizing the object, the positioning device 120 anchors the 3-D image of the instrument to the corresponding position in the video according to the identified second optical marker and the second optical marker in the 3-D image of the instrument. For brevity, the principle and detailed process are not repeated here.
The display 130 displays three-dimensionally, in the video, the interior of the object and the part of the instrument that lies inside the object's body. Fig. 5 shows one frame of the video shown by the display 130 according to an embodiment of the present invention. The human outline in Fig. 5 is the content of the original video frame. On top of the original frame, the 3-D image of the object is displayed at the corresponding position on the body, showing only the elements of interest to the user, such as the object's bones, liver, and lesion location. The instrument (a puncture needle) is treated similarly: the real instrument itself in the original frame, i.e. the part outside the object's body, is shown, and a virtual instrument is shown at the corresponding position for the part inside the object's body.
Because the virtual 3-D image of the object is registered with the real object in the video, the user can "see" in real time the actual positions inside the body of the three-dimensional structures extracted from the 3-D image, and choose an operating path that avoids bones, large blood vessels, and other vital organs. When the head of the instrument is inserted into the object's body and can no longer be seen, the user can, by continuously identifying the second optical marker of the instrument and displaying the instrument's 3-D image on the display, "see" the hidden head of the instrument inside the object's body and the corresponding direction of its extension line, ensuring that the instrument stays aimed at the target and advances along the predetermined route. Prompt images or information for all anatomical regions of the object, the target region, the instrument, the advance path (the surgical plan), and so on can be shown on the display 130 for the user's real-time observation.
The display 130 can be a conventional display placed within the user's field of view. Optionally, the display 130 is a head-mounted display. When the user uses the internal navigation system, a head-mounted display stays within the user's field of view at all times, making it easy to keep focus on the object and the instrument without repeatedly switching between looking up at a monitor and looking down at the surgical instrument, reducing operational risk.
When operating with the internal navigation system according to the present invention, no equipment or consumables that might block the line of sight appear within the operating space around the object's body, so the field-of-view problem of fixed infrared optical navigation is effectively avoided, as is the problem of navigation hardware occupying operative space in existing magnetic navigation systems. The internal navigation system tracks optical markers using visible light, greatly reducing the consumable cost of navigated surgery while ensuring that the user's desired target region is displayed three-dimensionally in the video, guaranteeing the practicality of navigation. The scheme is also simple to implement, does not change the user's traditional working habits, and has a shallow learning curve, allowing more surgical subjects to benefit from internal navigation.
Illustratively, the internal navigation system further includes an input device, such as a mouse, a keyboard, or a voice-control input device, for receiving the user's input. By directly observing on the display 130 how the real target and the 3-D image of the target overlap, the user can verify the positioning accuracy of the target and, at the same time, enter instructions through the input device. In this example, the positioning device 120 also adjusts the corresponding position of the 3-D image of the target in the video according to the user input received by the input device, so that the 3-D image of the target on the display 130 is translated or rotated to obtain a more precise positioning result.
According to one embodiment of the present invention, as shown in Fig. 6, positioning device 120 can anchor the 3-D image of the object to the corresponding position in the video in the following manner.
S11: according to the first optical indicia identified in the video and the feature points in the first optical indicia in the 3-D image of the object, determine a first conversion parameter for transforming the 3-D image of the object into the user coordinate system. As described above, the 3-D image of the object is in the image coordinate system, while the video is in the user coordinate system. The first conversion parameter can be used to unify the 3-D image of the object and the video into the same coordinate system. A virtual object can thus be rendered at the appropriate position of each video frame, so that the user appears to see virtual, three-dimensional elements inside the object in the video, such as internal organs or bones.
The first optical indicia in the video is matched with the first optical indicia in the 3-D image of the object. Illustratively, positioning device 120 performs this registration using the iterative closest point (ICP) algorithm and seeks the optimal solution, i.e., the best matching result, by minimising a mean-square-error function. The mean square error f(R, T) between the video frame and the 3-D image of the object can be computed with formula (1). When f(R, T) falls below a specific threshold, the desired first conversion parameters R3d and T3d are considered obtained, where R3d and T3d denote the rotation matrix and translation matrix, respectively.

f(R, T) = (1/N) Σᵢ₌₁..N ‖qᵢ − (R·pᵢ + T)‖²   (1)

where f(R, T) is the mean square error between the video frame and the 3-D image of the object, N is the total number of feature points in the first optical indicia, and pᵢ and qᵢ denote a feature point in the first optical indicia in the 3-D image of the object and the corresponding feature point in the first optical indicia in the video frame, respectively.
S12: anchor the 3-D image of the object to the corresponding position in the video using the determined first conversion parameter. Optionally, this operation is realised with formula (2): for each pixel in the 3-D image of the object, based on its coordinates XP, YP and ZP in the 3-D image and the determined first conversion parameters R3d and T3d, the coordinates XO, YO and ZO of that pixel in the user coordinate system, i.e., its position in the video, can be computed as

[XO, YO, ZO]ᵀ = R3d · [XP, YP, ZP]ᵀ + T3d.   (2)
In the above embodiment, the 3-D image of the object is anchored in the video according to the feature points of the first optical indicia; the computational load is small and real-time performance is good.
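As a non-authoritative sketch of operations S11 and S12, assuming the feature-point correspondences have already been established (i.e., one least-squares sub-problem of the ICP iteration mentioned above), the first conversion parameters R3d and T3d can be estimated in closed form with the SVD-based Kabsch method and then applied to the image points. The function names are illustrative only:

```python
import numpy as np

def fit_rigid_transform(p, q):
    """Estimate (R, T) minimising f(R, T) = (1/N) * sum_i ||q_i - (R p_i + T)||^2
    for matched feature points p (3-D image) and q (video frame), each of
    shape (N, 3), using the SVD-based Kabsch method."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)   # centroids
    H = (p - cp).T @ (q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation, det(R) = +1
    T = cq - R @ cp
    return R, T

def mse(p, q, R, T):
    """Mean square error f(R, T), as in formula (1)."""
    return float(np.mean(np.sum((q - (p @ R.T + T)) ** 2, axis=1)))

# Toy check: recover a known rotation about Z and a translation.
rng = np.random.default_rng(0)
p = rng.random((10, 3))
a = np.pi / 6
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
T_true = np.array([0.1, -0.2, 0.3])
q = p @ R_true.T + T_true
R3d, T3d = fit_rigid_transform(p, q)

# S12: anchor each image point into the video (user) coordinate system.
anchored = p @ R3d.T + T3d
```

In a full ICP loop this fit would alternate with a nearest-neighbour correspondence step until f(R, T) drops below the threshold.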
Another embodiment of the present invention supports faster movement of video acquisition device 110 relative to the object, i.e., it allows the user to move faster relative to the object. For example, the user can wear video acquisition device 110 and operate on the object from a comfortable, convenient position. In this embodiment, positioning device 120 can anchor the 3-D image of the object to the corresponding position in the video in the following manner, described in detail below with reference to Fig. 7.
S21: obtain an image of the object that also includes the aforementioned first optical indicia; this image is in the object coordinate system. It is appreciated that the image of the object may be an initial video frame of the video collected by video acquisition device 110. Those skilled in the art will appreciate that the image of the object may also be collected by a device other than video acquisition device 110. Then, according to the first optical indicia in the image of the object and the feature points in the first optical indicia in the 3-D image of the object, determine a second conversion parameter for transforming the 3-D image of the object into the object coordinate system. The second conversion parameter can thus be used to unify the 3-D image of the object into the coordinate system of the image of the object, i.e., the object coordinate system. Virtual, three-dimensional elements inside the object, such as internal organs or bones, can then be presented at the appropriate position in the image of the object.
S22: convert the 3-D image of the object into a 3-D image in the object coordinate system using the second conversion parameter.
The two operations S21 and S22 above are similar to operations S11 and S12 respectively, except that the video frame is replaced by the image of the object; for brevity, the details are not repeated here.
S23: according to the first optical indicia identified in the video and the first optical indicia in the 3-D image in the object coordinate system, determine a third conversion parameter for transforming the 3-D image in the object coordinate system into the user coordinate system, the aforementioned video being in the user coordinate system.
Optionally, the third conversion parameter is determined using formula (3):

[XO, YO, ZO]ᵀ = R · [XP, YP, ZP]ᵀ + T   (3)

where R and T denote the rotation matrix and translation matrix of the third conversion parameter, respectively; XP, YP and ZP denote the X-, Y- and Z-axis coordinates of a feature point in the first optical indicia in the 3-D image in the object coordinate system; and XO, YO and ZO denote the X-, Y- and Z-axis coordinates of the same feature point in a video frame of the video. Given the coordinates of a sufficient number of known feature points, the third conversion parameters R and T can be solved from formula (3).
S24: anchor the 3-D image in the object coordinate system to the corresponding position in the video using the third conversion parameter. This operation is similar to operation S12 above, except that the 3-D image in the image coordinate system is replaced by the 3-D image in the object coordinate system; for brevity, the details are not repeated here.
In this embodiment, the image of the object in the object coordinate system involved in operation S21 may be a video frame acquired at a first moment, and the video frame involved in operation S24 may be a video frame acquired after the first moment. Through operations S21 and S22, the 3-D image of the object in the image coordinate system is converted into a 3-D image in the object coordinate system. Then, according to the conversion between the object coordinate system and the user coordinate system, which corresponds to the change of the user's viewing position between the two moments, the 3-D image in the object coordinate system is converted into a 3-D image in the user coordinate system. Fig. 8 shows the coordinate transformation process of this embodiment. Through the two coordinate transformations, real-time and precise tracking of the object's position is achieved without constraining the positions of the object and the user, which improves the comfort of the object and the convenience of the user.
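The two-stage transformation of this embodiment (image coordinate system to object coordinate system, then object coordinate system to user coordinate system) amounts to composing two rigid transforms. A minimal sketch follows; the function names and the random test poses are illustrative only:

```python
import numpy as np

def compose(R_bc, T_bc, R_ab, T_ab):
    """Compose x_c = R_bc @ x_b + T_bc with x_b = R_ab @ x_a + T_ab
    into the direct rigid transform x_c = R_ac @ x_a + T_ac."""
    return R_bc @ R_ab, R_bc @ T_ab + T_bc

def random_rigid(rng):
    """A random proper rotation (via QR) and translation, for testing."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q, rng.normal(size=3)

rng = np.random.default_rng(1)
R2, T2 = random_rigid(rng)   # second parameter: image coords  -> object coords
R3, T3 = random_rigid(rng)   # third parameter:  object coords -> user coords
R_iu, T_iu = compose(R3, T3, R2, T2)

x_img = rng.normal(size=3)                  # a point of the 3-D image
step_by_step = R3 @ (R2 @ x_img + T2) + T3  # S22, then S23/S24
direct = R_iu @ x_img + T_iu                # single composed transform
```

Chaining the transforms this way means only the object-to-user stage has to be re-estimated per frame; the image-to-object stage from S21/S22 is computed once.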
Another embodiment of the present invention also supports faster movement of the user relative to the object, i.e., it allows video acquisition device 110 to move faster relative to the object. In this embodiment, video acquisition device 110 includes a positioning sensor for detecting the position of video acquisition device 110 in real time. The positioning sensor, for example a gyroscope or an accelerometer, is able to record its own spatial displacement. Positioning device 120 can anchor the 3-D image of the object to the corresponding position in the video in the following manner, described in detail below with reference to Fig. 9.
S31: according to the first optical indicia in the image of the object and the feature points in the first optical indicia in the 3-D image, determine a fourth conversion parameter for transforming the 3-D image into the object coordinate system. The image of the object is acquired by video acquisition device 110, is in the object coordinate system, and also includes the first optical indicia.
S32: convert the 3-D image into a 3-D image in the object coordinate system using the fourth conversion parameter.
The two operations S31 and S32 above are similar to operations S11 and S12 respectively, except that the video frame is replaced by the image of the object; for brevity, the details are not repeated here.
S33: according to the change in position of video acquisition device 110 detected between a first moment and the current moment, determine a fifth conversion parameter for transforming the 3-D image in the object coordinate system into the user coordinate system. The first moment is the moment at which video acquisition device 110 acquired the image of the object; the aforementioned video, which is in the user coordinate system, is acquired by video acquisition device 110 at the current moment. The change in position of video acquisition device 110 between the first moment and the current moment corresponds to the conversion between the object coordinate system and the user coordinate system. In one example, the fifth conversion parameter, a rotation matrix and a translation matrix for transforming the 3-D image in the object coordinate system into the user coordinate system, is determined according to the positions of the video acquisition device at the different moments.
S34: anchor the 3-D image in the object coordinate system to the corresponding position in the aforementioned video using the fifth conversion parameter, i.e., the rotation matrix and translation matrix of the example above. This operation is similar to operation S12 described above; for brevity, the details are not repeated here.
In this embodiment, the position of video acquisition device 110 is determined in real time using its positioning sensor, thereby tracking the object in the video. The user need not keep attention near the first optical indicia of the object at all times and is free to observe the target position inside the object, which improves the user experience.
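Assuming the positioning sensor reports the change of the user (camera) frame between the first moment and the current moment as a rigid transform (R_d, T_d), the fifth conversion parameter can be obtained by folding that change into the object-to-user transform computed at the first moment. This is only a sketch under that assumption; the sensor-fusion details (integrating gyroscope and accelerometer readings into R_d, T_d) are omitted, and the function name is illustrative:

```python
import numpy as np

def update_pose(R_obj_user0, T_obj_user0, R_d, T_d):
    """Fold the sensor-reported frame change (user frame at the first
    moment -> user frame now) into the object->user transform obtained
    at the first moment, yielding the current object->user transform."""
    return R_d @ R_obj_user0, R_d @ T_obj_user0 + T_d

# Illustration: object->user is the identity at the first moment; the
# camera then translates 5 cm along its own x axis, so object points
# appear shifted by -5 cm along x in the current user frame.
R0, T0 = np.eye(3), np.zeros(3)
R_d, T_d = np.eye(3), np.array([-0.05, 0.0, 0.0])
R1, T1 = update_pose(R0, T0, R_d, T_d)
x_obj = np.array([0.2, 0.0, 0.0])
x_user = R1 @ x_obj + T1   # the point now sits 5 cm closer along x
```

Because this update uses only the sensor's relative motion, it keeps working when the first optical indicia temporarily leaves the camera's view.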
According to a further embodiment of the present invention, operations S21 to S24 and operations S31 to S34 may be executed in different time periods. In other words, in different time periods the object is tracked according to the first optical indicia of the object or according to the position of video acquisition device 110 detected by the positioning sensor, respectively. The positioning sensor's detection of the position of video acquisition device 110 has error, and if object tracking were based on the detected position for a long time, this error could accumulate. Alternating between the two modes enables more accurate tracking and positioning.
According to one embodiment of the present invention, positioning device 120 anchors the 3-D image of the instrument to the corresponding position in the video in the following manner, described in detail below with reference to Fig. 10.
S41: as described above, the 3-D image of the instrument includes a second optical indicia. According to the second optical indicia identified in the video and the feature points of the second optical indicia in the 3-D image, a sixth conversion parameter for transforming the 3-D image of the instrument into the user coordinate system can be determined, the video being in the user coordinate system.
S42: anchor the 3-D image of the instrument to the corresponding position in the video using the sixth conversion parameter.
Operations S41 and S42 above are similar to operations S11 and S12 respectively, except that the 3-D image of the object is replaced by the 3-D image of the instrument; for brevity, the details are not repeated here. Anchoring the 3-D image of the instrument in the video according to the feature points of the second optical indicia involves a small computational load and good real-time performance.
According to a further aspect of the invention, an internal navigation method is also provided. The internal navigation method includes:
acquiring a video of a target in real time, wherein the video includes an optical indicia fixed on the target, the acquisition angle of the video is consistent with the user's direction of observation, and the target includes an instrument and/or an object;
identifying the optical indicia in the video, and anchoring the 3-D image of the target to the corresponding position in the video according to the identified optical indicia and the optical indicia in the 3-D image of the target; and
displaying three-dimensionally, in the video, the part of the target inside the object.
Illustratively, anchoring the 3-D image of the target to the corresponding position in the video according to the identified optical indicia and the optical indicia in the 3-D image of the target further comprises:
determining, according to the identified optical indicia and the feature points in the optical indicia in the 3-D image, a first conversion parameter for transforming the 3-D image into the user coordinate system, wherein the video is in the user coordinate system; and
anchoring the 3-D image to the corresponding position in the video using the first conversion parameter.
Illustratively, anchoring the 3-D image of the target to the corresponding position in the video according to the identified optical indicia and the optical indicia in the 3-D image of the target further comprises:
determining, according to the optical indicia in an image of the target and the feature points in the optical indicia in the 3-D image, a second conversion parameter for transforming the 3-D image into the target coordinate system, wherein the image of the target is in the target coordinate system and also includes the optical indicia;
converting the 3-D image into a 3-D image in the target coordinate system using the second conversion parameter;
determining, according to the identified optical indicia and the feature points in the optical indicia in the 3-D image in the target coordinate system, a third conversion parameter for transforming the 3-D image in the target coordinate system into the user coordinate system, wherein the video is in the user coordinate system; and
anchoring the 3-D image in the target coordinate system to the corresponding position in the video using the third conversion parameter.
Illustratively, the method further includes: detecting in real time the position of the video acquisition device used to acquire the video of the target in real time;
and anchoring the 3-D image of the target to the corresponding position in the video according to the identified optical indicia and the optical indicia in the 3-D image of the target further comprises:
determining, according to the optical indicia in an image of the target and the feature points in the optical indicia in the 3-D image, a fourth conversion parameter for transforming the 3-D image into the target coordinate system, wherein the image of the target is an image in the target coordinate system acquired by the video acquisition device at a first moment;
converting the 3-D image into a 3-D image in the target coordinate system using the fourth conversion parameter;
determining, according to the change in position of the video acquisition device detected between the first moment and the current moment, a fifth conversion parameter for transforming the 3-D image in the target coordinate system into the user coordinate system, wherein the video is in the user coordinate system; and
anchoring the 3-D image in the target coordinate system to the corresponding position in the video using the fifth conversion parameter.
Illustratively, the method further includes:
receiving the user's input; and
adjusting the corresponding position of the 3-D image of the target in the video according to the input.
It is appreciated that the target in the above internal navigation method can be the object and/or the instrument. The description of the internal navigation system above details the embodiments and functions of each device; combining that description with Fig. 1 to Fig. 10, those skilled in the art will understand the specific steps of the internal navigation method and their advantages, which, for brevity, are not repeated here.
In the description of the present invention, the terms "first", "second", etc. are used for description purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical feature. A feature defined with "first", "second", etc. may thus explicitly or implicitly include one or more of that feature.
The present invention has been explained by the above embodiments, but it is to be understood that the above embodiments are intended for the purposes of illustration and explanation only and are not intended to limit the invention to the scope of the described embodiments. Furthermore, those skilled in the art will understand that the present invention is not limited to the above embodiments and that many more variants and modifications can be made according to the teachings of the present invention, all of which fall within the scope claimed. The protection scope of the present invention is defined by the appended claims and their equivalents.

Claims (10)

1. An internal navigation system, comprising:
a video acquisition device for acquiring a video of a target in real time, wherein the video includes an optical indicia fixed on the target, the angle at which the video acquisition device acquires the video is consistent with a user's direction of observation, and the target includes an instrument and/or an object;
a positioning device for identifying the optical indicia in the video and, according to the identified optical indicia and an optical indicia in a 3-D image of the target, anchoring the 3-D image of the target to a corresponding position in the video; and
a display for displaying three-dimensionally, in the video, the part of the target inside the object.
2. The internal navigation system of claim 1, wherein the video acquisition device is a head-mounted optical camera.
3. The internal navigation system of claim 1 or 2, wherein the display is a head-mounted display.
4. The internal navigation system of claim 1 or 2, wherein the optical indicia is a two-dimensional code and/or a checkerboard pattern.
5. The internal navigation system of claim 1 or 2, wherein the optical indicia is a solid figure on the surface of the target.
6. The internal navigation system of claim 1 or 2, wherein the positioning device anchors the 3-D image of the target to the corresponding position in the video, according to the identified optical indicia and the optical indicia in the 3-D image of the target, in the following manner:
determining, according to the identified optical indicia and feature points in the optical indicia in the 3-D image, a first conversion parameter for transforming the 3-D image into a user coordinate system, wherein the video is in the user coordinate system; and
anchoring the 3-D image to the corresponding position in the video using the first conversion parameter.
7. The internal navigation system of claim 1 or 2, wherein the positioning device anchors the 3-D image of the target to the corresponding position in the video, according to the identified optical indicia and the optical indicia in the 3-D image of the target, in the following manner:
determining, according to the optical indicia in an image of the target and feature points in the optical indicia in the 3-D image, a second conversion parameter for transforming the 3-D image into a target coordinate system, wherein the image of the target is in the target coordinate system and also includes the optical indicia;
converting the 3-D image into a 3-D image in the target coordinate system using the second conversion parameter;
determining, according to the identified optical indicia and feature points in the optical indicia in the 3-D image in the target coordinate system, a third conversion parameter for transforming the 3-D image in the target coordinate system into a user coordinate system, wherein the video is in the user coordinate system; and
anchoring the 3-D image in the target coordinate system to the corresponding position in the video using the third conversion parameter.
8. The internal navigation system of claim 1 or 2, wherein the video acquisition device includes a positioning sensor for detecting the position of the video acquisition device in real time;
and wherein the positioning device anchors the 3-D image of the target to the corresponding position in the video in the following manner:
determining, according to the optical indicia in an image of the target and feature points in the optical indicia in the 3-D image, a fourth conversion parameter for transforming the 3-D image into a target coordinate system, wherein the image of the target is an image in the target coordinate system acquired by the video acquisition device at a first moment;
converting the 3-D image into a 3-D image in the target coordinate system using the fourth conversion parameter;
determining, according to the change in position of the video acquisition device detected between the first moment and the current moment, a fifth conversion parameter for transforming the 3-D image in the target coordinate system into a user coordinate system, wherein the video is in the user coordinate system; and
anchoring the 3-D image in the target coordinate system to the corresponding position in the video using the fifth conversion parameter.
9. The internal navigation system of claim 1 or 2, further comprising an input unit for receiving the user's input;
wherein the positioning device is further configured to adjust the corresponding position of the 3-D image of the target in the video according to the input.
10. An internal navigation method, comprising:
acquiring a video of a target in real time, wherein the video includes an optical indicia fixed on the target, the acquisition angle of the video is consistent with a user's direction of observation, and the target includes an instrument and/or an object;
identifying the optical indicia in the video, and anchoring a 3-D image of the target to a corresponding position in the video according to the identified optical indicia and an optical indicia in the 3-D image of the target; and
displaying three-dimensionally, in the video, the part of the target inside the object.
CN201711230713.7A 2017-11-29 2017-11-29 Internal navigation system and method Pending CN109833092A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711230713.7A CN109833092A (en) 2017-11-29 2017-11-29 Internal navigation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711230713.7A CN109833092A (en) 2017-11-29 2017-11-29 Internal navigation system and method

Publications (1)

Publication Number Publication Date
CN109833092A true CN109833092A (en) 2019-06-04

Family

ID=66882545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711230713.7A Pending CN109833092A (en) 2017-11-29 2017-11-29 Internal navigation system and method

Country Status (1)

Country Link
CN (1) CN109833092A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006095027A1 (en) * 2005-03-11 2006-09-14 Bracco Imaging S.P.A. Methods and apparati for surgical navigation and visualization with microscope
CN101904770A (en) * 2009-06-05 2010-12-08 复旦大学 Operation guiding system and method based on optical enhancement reality technology
CN102737158A (en) * 2012-02-10 2012-10-17 中国人民解放军总医院 Ablation treatment image booting equipment with three-dimensional image processing device
CN103211655A (en) * 2013-04-11 2013-07-24 深圳先进技术研究院 Navigation system and navigation method of orthopedic operation
CN105266897A (en) * 2015-11-25 2016-01-27 上海交通大学医学院附属第九人民医院 Microscopic surgical operation navigation system based on augmented reality and navigation method
CN106648077A (en) * 2016-11-30 2017-05-10 南京航空航天大学 Adaptive dynamic stereoscopic augmented reality navigation system based on real-time tracking and multi-source information fusion


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020181498A1 (en) * 2019-03-12 2020-09-17 上海复拓知达医疗科技有限公司 In-vivo navigation system and method
CN113509263A (en) * 2021-04-01 2021-10-19 上海复拓知达医疗科技有限公司 Object space calibration positioning method
CN113384361A (en) * 2021-05-21 2021-09-14 中山大学 Visual positioning method, system, device and storage medium
CN114224489A (en) * 2021-12-12 2022-03-25 浙江德尚韵兴医疗科技有限公司 Trajectory tracking system for surgical robot and tracking method using the same
CN114224489B (en) * 2021-12-12 2024-02-13 浙江德尚韵兴医疗科技有限公司 Track tracking system for surgical robot and tracking method using same
CN115396654B (en) * 2022-09-02 2023-08-08 北京积水潭医院 Navigation offset verification device, method, navigation equipment and storage medium

Similar Documents

Publication Publication Date Title
US11464575B2 (en) Systems, methods, apparatuses, and computer-readable media for image guided surgery
US11883118B2 (en) Using augmented reality in surgical navigation
EP1804705B1 (en) Aparatus for navigation and for fusion of ecographic and volumetric images of a patient which uses a combination of active and passive optical markers
EP2637593B1 (en) Visualization of anatomical data by augmented reality
JP2950340B2 (en) Registration system and registration method for three-dimensional data set
US8116848B2 (en) Method and apparatus for volumetric image navigation
JP7277967B2 (en) 3D imaging and modeling of ultrasound image data
CN109833092A (en) Internal navigation system and method
CN109998678A (en) Augmented reality assisting navigation is used during medicine regulation
JP2007531553A (en) Intraoperative targeting system and method
WO2012045626A1 (en) Image projection system for projecting image on the surface of an object
CA2963865C (en) Phantom to determine positional and angular navigation system error
Nagelhus Hernes et al. Computer‐assisted 3D ultrasound‐guided neurosurgery: technological contributions, including multimodal registration and advanced display, demonstrating future perspectives
JP2022517807A (en) Systems and methods for medical navigation
Bichlmeier et al. Evaluation of the virtual mirror as a navigational aid for augmented reality driven minimally invasive procedures
US11869216B2 (en) Registration of an anatomical body part by detecting a finger pose
Shahidi et al. Volumetric image guidance via a stereotactic endoscope
WO2020181498A1 (en) In-vivo navigation system and method
Kumar et al. Stereoscopic augmented reality for single camera endoscope using optical tracker: a study on phantom
Nakajima et al. Enhanced video image guidance for biopsy using the safety map
De Paolis et al. Visualization System to Improve Surgical Performance during a Laparoscopic Procedure

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190604