TWI839311B - Three-dimensional real-time positioning compensation method - Google Patents

Three-dimensional real-time positioning compensation method

Info

Publication number
TWI839311B
Authority
TW
Taiwan
Prior art keywords
image
marking devices
surgical
bounding box
compensation method
Prior art date
Application number
TW112139563A
Other languages
Chinese (zh)
Inventor
胡博期
林治中
陳杰華
黃文輝
陳彥廷
Original Assignee
財團法人金屬工業研究發展中心
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 財團法人金屬工業研究發展中心 filed Critical 財團法人金屬工業研究發展中心
Priority to TW112139563A priority Critical patent/TWI839311B/en
Application granted granted Critical
Publication of TWI839311B publication Critical patent/TWI839311B/en

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A three-dimensional real-time positioning compensation method comprises obtaining an initial pose, in the world coordinate system, of each of several marking devices appearing in a surgical image. Brightness normalization is performed on the surgical image to produce a normalized image. Several texture images with different resolutions are generated for all marks of the marking devices, and the texture image with the closest resolution is taken as a standard image. At least N sampling points are selected from the standard image, and the corresponding reference points are obtained in the normalized image. For each of the at least N reference points, an optimization point with the minimum brightness error is obtained and an error value is calculated. According to these error values, the initial pose is corrected to produce a compensated pose.

Description

Three-dimensional real-time positioning compensation method for surgery

The present invention relates to a three-dimensional real-time positioning compensation method for surgery, and in particular to one that increases the number of sampling points and corrects the pose of a tracking ball according to the difference between image brightness and actual brightness.

In recent years, the development of computer-assisted positioning technology has enabled medical personnel, during precision surgical procedures such as orthopedic or spinal surgery, to use imaging equipment to obtain a two-dimensional image of the patient's lesion site, have a computer reconstruct a three-dimensional image of the lesion from that two-dimensional image, and assign coordinates to the lesion. Guided by the computer, medical personnel can then place an implant accurately in the correct position, which greatly improves the precision of surgical positioning.

Republic of China (Taiwan) Patent Publication No. I708591 discloses a three-dimensional real-time positioning method for orthopedic surgery, in which a stereoscopic marker device is fixed to the surgical site and a camera is set up to capture an image. The captured image is then analyzed to obtain the two-dimensional corner positions of the marker device. However, the captured image is composed of pixels and appears granular at the microscopic level, producing image gradients at boundaries. Even when the marker device in the captured image is stationary, its two-dimensional corners still exhibit an error of about ±3 pixels, which easily leads to large errors when computing the pose of the marker device and results in poor accuracy.

In view of this, there is a need for a three-dimensional real-time positioning compensation method for surgery that solves the above problems.

The object of the present invention is to provide a three-dimensional real-time positioning compensation method for surgery that corrects the pose of the tracking ball by increasing the number of sampling points and comparing image brightness with actual brightness.

The term "mechanical corner points" in the present invention refers to the actual coordinates, relative to a central origin of the regular polyhedron of a marking device, of the four corners of the figure of each mark on that regular polyhedron.

To achieve the above object, the present invention provides a three-dimensional real-time positioning compensation method for surgery, comprising: obtaining a surgical image containing a plurality of marking devices, each marking device having a regular polyhedron with at least four geometric faces, each geometric face bearing a mark composed of a border and a figure, the figure being located inside the border and being identifiable to obtain a unique identification code; inputting the surgical image into an object detection model to detect first bounding-box information for each of the marking devices, second bounding-box information for the border of each marking device, and the identification code represented by the figure of each marking device together with third bounding-box information; obtaining the four corner points of the corresponding border according to the second bounding-box information, and performing a projection transformation calculation using the first bounding-box information of each marking device, the four corner points of its border, and the corresponding mechanical corner points, so as to obtain an initial pose of each marking device in the world coordinate system; performing brightness normalization on the surgical image to produce a normalized image; generating, for all marks of the marking devices, several texture images of equal aspect ratio but different resolutions; performing a blurriness calculation between the figure of each marking device, according to its identification code, and the texture images having the same identification code, and taking the texture image with the closest resolution as a standard image; selecting, according to the second and third bounding-box information, at least N sampling points other than the four corner points from the border or figure of the standard image, performing the projection transformation calculation on the at least N sampling points together with the corresponding mechanical corner points, and then calculating with a camera parameter so as to obtain a reference point corresponding to each of the at least N sampling points in the normalized image; taking each reference point as an origin and obtaining, within an allowable range, an optimization point having the minimum brightness error with respect to that reference point, calculating an error value between each reference point and its corresponding optimization point through an error-minimization function, and correcting the initial pose according to the error values to produce a compensated pose; and drawing and displaying the compensated pose of each marking device in the world coordinate system in real time on a display screen.

In some embodiments, 100≦N≦300.

In some embodiments, the allowable range is ±1 pixel.

In some embodiments, the object detection model is YOLOv5.

The three-dimensional real-time positioning compensation method for surgery of the present invention has the following feature: it increases the number of sampling points on the border or figure of the mark and corrects the pose of each marking device in the world coordinate system using the minimum error between the image brightness and the actual brightness at those sampling points. With accuracy verification performed according to ASTM 2554, the average error of the marking device decreases from 0.8547 to 0.1493, so the method improves and stabilizes the positioning accuracy of the marking device.

[The present invention]

1: Marking device

11: Regular polyhedron

11a: Geometric face

12: Nail-shaped body

13: Mark

13a: Border

13b: Figure

2: Camera

3: Computer host

4: Display screen

P: Sampling point

S: Spine

S1: Image acquisition step

S2: Object detection step

S3: Pose acquisition step

S4: Brightness normalization step

S5: Image generation step

S6: Blurriness comparison step

S7: Sampling step

S8: Error minimization step

S9: Pose compensation step

[FIG. 1] is a flowchart of the steps of the three-dimensional real-time positioning compensation method for surgery of the present invention; [FIG. 2] is a perspective view of the marking device used in the method; [FIG. 3] is a diagram showing a usage scenario of the method; [FIG. 4] is a schematic diagram of the sampling points of the method.

The embodiments of the present invention are described in detail below with reference to the drawings. The accompanying drawings are simplified schematic diagrams that illustrate the basic structure of the present invention only in a schematic manner. Therefore, only the components relevant to the present invention are labeled in the drawings, and the components shown are not drawn according to the number, shape, or size ratio used in actual implementation; the actual specifications and dimensions are a matter of design choice, and the component layout may be more complex.

The following description of the embodiments refers to the accompanying drawings to illustrate specific embodiments in which the present invention can be practiced. Directional terms mentioned in the present invention, such as "upper", "lower", "front", and "rear", refer only to the directions in the accompanying drawings; they are used to explain and facilitate understanding of the present application rather than to limit it. In addition, in the specification, unless explicitly described to the contrary, the word "comprising" will be understood to mean including the stated elements but not excluding any other elements.

Please refer to FIG. 1, a flowchart of the steps of the three-dimensional real-time positioning compensation method for surgery of the present invention, which includes the following steps. Referring also to FIG. 2, image acquisition step S1: obtain a surgical image containing several marking devices 1. In this embodiment, the marking device 1 is a tracking ball implemented as a rigid dodecahedron. Specifically, the marking device 1 has a regular polyhedron 11 and a nail-shaped body 12. The regular polyhedron 11 has at least four geometric faces 11a, each a regular pentagon, and each geometric face 11a bears a mark 13 composed of a border 13a and a figure 13b, wherein the figure 13b is located inside the border 13a and can be identified to obtain a unique identification code. For example, the mark 13 may be an AR-ToolKit marker, an ARTag marker, an ArUco marker, or an ALVAR marker; in this embodiment, an ALVAR marker is used as the mark 13 on the geometric face 11a. The nail-shaped body 12 is used to fix the marking device 1 to a surgical instrument or to at least one surgical site of a patient or a teaching aid.

Please refer to FIG. 3. When the surgical site is the spine S, the nail-shaped body 12 is a spinous-process pin, as will be understood by those of ordinary skill in the art. It is worth noting that no two of the marking devices 1 carry figures 13b with the same identification code. For example, the identification codes represented by the figures 13b of the marks 13 on the marking device 1 attached to the surgical instrument may be 0 to 8, while the identification codes of the three marking devices 1 attached to the spine may be 9 to 17, 18 to 26, and 27 to 35, respectively. It is also worth mentioning that when the marking device 1 is inserted into the surgical site, three of its geometric faces 11a face downward toward the surgical site and cannot be identified; therefore, nine marker numbers per marking device are sufficient for the identification codes represented by the figures 13b of the marks 13.

Further to the above, in the present invention at least one camera 2 is set up and aimed at the marking devices 1 to capture the surgical image. In this embodiment, the camera 2 is a high-definition camera whose pose has six degrees of freedom (DOF).

Object detection step S2: input the surgical image into an object detection model in a computer host 3 to detect first bounding-box information for each of the marking devices 1, second bounding-box information for the border 13a of each marking device 1, and the identification code represented by the figure 13b of each marking device 1 together with third bounding-box information. In this embodiment, the identification codes represented by the figures 13b of at least three marks 13 of each marking device 1 can be recognized, which improves positioning accuracy.

In the present invention, the object detection model can be obtained by downloading a deep learning model with preset parameters from the Internet to the computer host 3; the computer host 3 then, in a C++ development environment, trains the deep learning model on several previously captured surgical images to obtain the object detection model. In this embodiment, YOLOv5 is used as an example; it is common knowledge in the technical field of the present invention and is not described in detail here.
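The following is a minimal sketch, not code from the patent, of how a custom-trained YOLOv5 model could be run on one surgical frame and its detections split into the three bounding-box categories described above. The weight-file name, the input image path, and the class-index convention are assumptions made only for illustration.

```python
import cv2
import torch

# Load a custom-trained YOLOv5 model via the public ultralytics/yolov5 hub entry point.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='marker_yolov5.pt')  # hypothetical weights

frame = cv2.imread('surgical_frame.png')      # hypothetical surgical image
results = model(frame[:, :, ::-1])            # convert BGR to RGB before inference

# Each detection row: x1, y1, x2, y2, confidence, class index.
detections = results.xyxy[0].cpu().numpy()

marker_boxes, border_boxes, id_boxes = [], [], []
for x1, y1, x2, y2, conf, cls in detections:
    if cls == 0:              # assumed class 0: whole marking device (first bounding-box information)
        marker_boxes.append((x1, y1, x2, y2))
    elif cls == 1:            # assumed class 1: border of a mark (second bounding-box information)
        border_boxes.append((x1, y1, x2, y2))
    else:                     # assumed classes 2+: ID figure (identification code + third bounding-box information)
        id_boxes.append((int(cls) - 2, (x1, y1, x2, y2)))
```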

Pose acquisition step S3: obtain the four corner points of the corresponding border 13a according to the second bounding-box information. Then, perform a projection transformation calculation using the first bounding-box information of each marking device 1, the four corner points of its border 13a, and the corresponding mechanical corner points, so as to obtain an initial pose of each marking device 1 in the world coordinate system. In this embodiment, the computer host 3 can complete the calculation by calling the solvePnP() function of the OpenCV computer vision library, which is common knowledge in the relevant field of the present invention and is not described in detail here.
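A minimal sketch of this pose-recovery step, assuming a planar mark: the four detected border corners are paired with the corresponding mechanical corner points (3D coordinates measured relative to the centre origin of the regular polyhedron) and passed to OpenCV's solvePnP. All numeric values below are placeholders, not data from the patent.

```python
import numpy as np
import cv2

# Mechanical corner points of one mark in the marking device's own frame (mm) - placeholder values.
object_points = np.array([[-10.0,  10.0, 0.0],
                          [ 10.0,  10.0, 0.0],
                          [ 10.0, -10.0, 0.0],
                          [-10.0, -10.0, 0.0]])

# The same four border corners detected in the surgical image (pixels) - placeholder values.
image_points = np.array([[312.4, 205.1],
                         [398.7, 210.6],
                         [402.3, 295.9],
                         [308.8, 290.2]])

camera_matrix = np.array([[900.0, 0.0, 320.0],
                          [0.0, 900.0, 320.0],
                          [0.0,   0.0,   1.0]])   # assumed intrinsics
dist_coeffs = np.zeros(5)                         # distortion assumed negligible

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_IPPE)   # planar-target PnP solver
if ok:
    rotation, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix; together with tvec this is the initial pose
```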

Brightness normalization step S4: perform brightness normalization on the surgical image to produce a normalized image. For example, contrast stretching may be used, in which the brightness histogram of the surgical image is stretched uniformly to cover the 0 to 255 intensity range.
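A minimal sketch of the contrast-stretching variant described above; the patent does not fix an exact formula, so the linear stretch below is only one reasonable reading of the step.

```python
import cv2
import numpy as np

def normalize_brightness(gray):
    """Linearly stretch a single-channel image so its intensities span 0-255."""
    lo, hi = float(gray.min()), float(gray.max())
    if hi <= lo:               # completely flat image: nothing to stretch
        return gray.copy()
    stretched = (gray.astype(np.float32) - lo) * (255.0 / (hi - lo))
    return stretched.astype(np.uint8)

surgical_gray = cv2.imread('surgical_frame.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
normalized_image = normalize_brightness(surgical_gray)
```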

Image generation step S5: for all marks 13 of the marking devices 1, generate several texture images of equal aspect ratio but different resolutions. In this embodiment, the surgical image captured by the camera 2 has an original resolution of 640*640, and the computer host 3 applies image processing operations such as downscaling, upscaling, and blurring to generate texture images with resolutions of 320*320, 160*160, 80*80, 40*40, 20*20, and 10*10.
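A minimal sketch of generating the multi-resolution texture images for one mark by repeated downscaling of a 640*640 reference texture. The file name and the choice of INTER_AREA interpolation are assumptions; the patent only lists the target resolutions.

```python
import cv2

texture_640 = cv2.imread('mark_id_09_texture.png')   # hypothetical 640*640 texture of one mark
texture_640 = cv2.resize(texture_640, (640, 640))

pyramid = {640: texture_640}
for size in (320, 160, 80, 40, 20, 10):
    # Downscaling with INTER_AREA also blurs the texture, which is what the
    # blurriness comparison in the next step relies on.
    pyramid[size] = cv2.resize(texture_640, (size, size), interpolation=cv2.INTER_AREA)
```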

Blurriness comparison step S6: according to the identification code represented by the figure 13b of each marking device 1, perform a blurriness calculation against the texture images having the same identification code, and take the texture image with the closest resolution as a standard image. In this embodiment, the blurriness calculation uses an energy gradient function.
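A minimal sketch of the blurriness comparison, assuming the energy gradient function is the common focus measure that sums squared differences between neighbouring pixels; the per-pixel normalization is an added assumption so that patches of different sizes can be compared. The pyramid dictionary is the one built in the previous sketch.

```python
import numpy as np
import cv2

def energy_gradient(gray):
    """Energy-gradient focus measure, normalized by pixel count."""
    g = gray.astype(np.float64)
    dx = g[:, 1:] - g[:, :-1]    # horizontal neighbour differences
    dy = g[1:, :] - g[:-1, :]    # vertical neighbour differences
    return (np.sum(dx ** 2) + np.sum(dy ** 2)) / g.size

def select_standard_image(mark_patch_gray, pyramid):
    """Return the texture whose blurriness score is closest to the observed mark patch."""
    target = energy_gradient(mark_patch_gray)
    scores = {size: energy_gradient(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
              for size, img in pyramid.items()}
    best = min(scores, key=lambda size: abs(scores[size] - target))
    return pyramid[best]     # this texture image serves as the standard image
```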

Referring also to FIG. 4, sampling step S7: according to the second bounding-box information and the third bounding-box information, select at least N sampling points P other than the four corner points from the border 13a or figure 13b of the standard image; in this embodiment, 100≦N≦300. Perform the projection transformation calculation on the at least N sampling points P together with the corresponding mechanical corner points, and then calculate with the camera parameters of the camera 2 to obtain a reference point corresponding to each of the at least N sampling points P in the normalized image. The camera parameters of the camera 2 are common knowledge in the technical field of the present invention.
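A minimal sketch of projecting the sampling points into the normalized image: the 3D positions of the sampling points (expressed, like the mechanical corner points, relative to the marking device's centre origin) are projected with the initial pose and the camera parameters, giving one reference point per sampling point. The 3D coordinates, intrinsics, and pose below are placeholders.

```python
import numpy as np
import cv2

N = 200                                                   # the embodiment uses 100 <= N <= 300
sampling_points_3d = np.random.uniform(-10, 10, (N, 3))   # placeholder points on the mark (mm)
sampling_points_3d[:, 2] = 0.0                            # assume they lie on the planar mark face

camera_matrix = np.array([[900.0, 0.0, 320.0],
                          [0.0, 900.0, 320.0],
                          [0.0,   0.0,   1.0]])           # assumed intrinsics
dist_coeffs = np.zeros(5)
rvec = np.zeros(3)                                        # placeholder initial pose (rotation)
tvec = np.array([0.0, 0.0, 500.0])                        # placeholder initial pose (translation, mm)

reference_points, _ = cv2.projectPoints(sampling_points_3d, rvec, tvec,
                                        camera_matrix, dist_coeffs)
reference_points = reference_points.reshape(-1, 2)        # (N, 2) pixel coordinates in the normalized image
```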

Error minimization step S8: taking each reference point as an origin, obtain within an allowable range an optimization point having the minimum brightness error with respect to that reference point. The allowable range may be ±1 pixel, and the brightness error formula, which is common knowledge in the technical field of the present invention, may be as shown below.

[Brightness error formula: Figure 112139563-A0305-02-0009-1]

Subsequently, an error value between each reference point and the corresponding optimization point is obtained through an error-minimization function, and the initial pose is then corrected according to these error values to produce a compensated pose. The error-minimization function, which is common knowledge in the technical field of the present invention, may be as shown below.

[Error-minimization function: Figure 112139563-A0305-02-0009-2]
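A minimal sketch of the error-minimization step, assuming the per-point brightness error is the squared difference between the brightness expected from the standard image and the brightness observed in the normalized image; the patent's exact formulas are given only as equation images and are not reproduced here. Each reference point is compared against its 3*3 neighbourhood (the ±1-pixel allowable range) to find the optimization point with the smallest error, and the summed errors would then drive a standard nonlinear least-squares update of the six pose parameters to produce the compensated pose.

```python
import numpy as np

def min_brightness_error(expected, normalized_img, ref_point, radius=1):
    """Smallest squared brightness error within +/-radius pixels of a reference point."""
    u0, v0 = int(round(ref_point[0])), int(round(ref_point[1]))
    h, w = normalized_img.shape[:2]
    best = None
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            u, v = u0 + du, v0 + dv
            if 0 <= u < w and 0 <= v < h:
                err = (float(normalized_img[v, u]) - float(expected)) ** 2
                if best is None or err < best:
                    best = err
    return best

def total_error(expected_brightness, normalized_img, reference_points):
    """Sum of per-point minimum brightness errors; the quantity a pose update would minimize."""
    errors = [min_brightness_error(b, normalized_img, p)
              for b, p in zip(expected_brightness, reference_points)]
    return sum(e for e in errors if e is not None)
```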

Pose compensation step S9: the compensated pose of each of the marking devices 1 in the world coordinate system is drawn and displayed in real time on a display screen 4.

As described above, the three-dimensional real-time positioning compensation method for surgery of the present invention increases the number of sampling points on the border or figure of the mark and corrects the pose of each marking device in the world coordinate system using the minimum error between the image brightness and the actual brightness at those sampling points. With accuracy verification performed according to ASTM 2554, the average error of the marking device decreases from 0.8547 to 0.1493, so the method improves and stabilizes the positioning accuracy of the marking device.

The embodiments disclosed above merely illustrate the principles, features, and effects of the present invention and are not intended to limit the scope in which the present invention can be practiced. Anyone skilled in the art may modify and change the above embodiments without departing from the spirit and scope of the present invention. Any equivalent changes and modifications made using the content disclosed in the present invention shall still fall within the scope of the claims set out below.

S1: Image acquisition step

S2: Object detection step

S3: Pose acquisition step

S4: Brightness normalization step

S5: Image generation step

S6: Blurriness comparison step

S7: Sampling step

S8: Error minimization step

S9: Pose compensation step

Claims (4)

1. A three-dimensional real-time positioning compensation method for surgery, comprising: obtaining a surgical image containing a plurality of marking devices, each marking device having a regular polyhedron with at least four geometric faces, each geometric face bearing a mark composed of a border and a figure, the figure being located inside the border and being identifiable to obtain a unique identification code; inputting the surgical image into an object detection model to detect first bounding-box information for each of the marking devices, second bounding-box information for the border of each marking device, and the identification code represented by the figure of each marking device together with third bounding-box information; obtaining the four corner points of the corresponding border according to the second bounding-box information, and performing a projection transformation calculation using the first bounding-box information of each marking device, the four corner points of its border, and the corresponding mechanical corner points, so as to obtain an initial pose of each marking device in the world coordinate system; performing brightness normalization on the surgical image to produce a normalized image; generating, for all marks of the marking devices, several texture images of equal aspect ratio but different resolutions; performing a blurriness calculation between the figure of each marking device, according to its identification code, and the texture images having the same identification code, and taking the texture image with the closest resolution as a standard image; selecting, according to the second and third bounding-box information, at least N sampling points other than the four corner points from the border or figure of the standard image, performing the projection transformation calculation on the at least N sampling points together with the corresponding mechanical corner points, and then calculating with a camera parameter so as to obtain a reference point corresponding to each of the at least N sampling points in the normalized image; taking each reference point as an origin and obtaining, within an allowable range, an optimization point having the minimum brightness error with respect to that reference point, calculating an error value between each reference point and its corresponding optimization point through an error-minimization function, and correcting the initial pose according to the error values to produce a compensated pose; and drawing and displaying the compensated pose of each marking device in the world coordinate system in real time on a display screen.

2. The three-dimensional real-time positioning compensation method for surgery of claim 1, wherein 100≦N≦300.

3. The three-dimensional real-time positioning compensation method for surgery of claim 1, wherein the allowable range is ±1 pixel.

4. The three-dimensional real-time positioning compensation method for surgery of claim 1, wherein the object detection model is YOLOv5.
TW112139563A 2023-10-17 2023-10-17 Three-dimensional real-time positioning compensation method TWI839311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW112139563A TWI839311B (en) 2023-10-17 2023-10-17 Three-dimensional real-time positioning compensation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW112139563A TWI839311B (en) 2023-10-17 2023-10-17 Three-dimensional real-time positioning compensation method

Publications (1)

Publication Number Publication Date
TWI839311B true TWI839311B (en) 2024-04-11

Family

ID=91618621

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112139563A TWI839311B (en) 2023-10-17 2023-10-17 Three-dimensional real-time positioning compensation method

Country Status (1)

Country Link
TW (1) TWI839311B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020163845A2 (en) * 2019-02-08 2020-08-13 The Board Of Trustees Of The University Of Illinois Image-guided surgery system
TWI708591B (en) * 2019-12-06 2020-11-01 財團法人金屬工業研究發展中心 Three-dimensional real-time positioning method for orthopedic surgery
US11154378B2 (en) * 2015-03-25 2021-10-26 Camplex, Inc. Surgical visualization systems and displays
US20220087746A1 (en) * 2016-03-12 2022-03-24 Philipp K. Lang Augmented Reality Guided Fitting, Sizing, Trialing and Balancing of Virtual Implants on the Physical Joint of a Patient for Manual and Robot Assisted Joint Replacement
US20220287676A1 (en) * 2021-03-10 2022-09-15 Onpoint Medical, Inc. Augmented reality guidance for imaging systems


Similar Documents

Publication Publication Date Title
US20230072188A1 (en) Calibration for Augmented Reality
JP4976756B2 (en) Information processing method and apparatus
Yao Assessing accuracy factors in deformable 2D/3D medical image registration using a statistical pelvis model
Duan et al. 3D tracking and positioning of surgical instruments in virtual surgery simulation.
US20180307929A1 (en) System and method for pattern detection and camera calibration
EP4042374A1 (en) System and method for improved electronic assisted medical procedures
JP2023511315A (en) Aligning medical images in augmented reality displays
CN110807459B (en) License plate correction method and device and readable storage medium
CN109498156A (en) A kind of head operation air navigation aid based on 3-D scanning
CN115068110A (en) Image registration method and system for femoral neck fracture surgery navigation
CN114010314B (en) Augmented reality navigation method and system for endoscopic retrograde cholangiopancreatography
EP3543955A1 (en) Image processing device and projection system
Guéziec et al. Providing visual information to validate 2-D to 3-D registration
CN115153835A (en) Acetabular prosthesis placement guide system and method based on feature point registration and augmented reality
CN113786228B (en) Auxiliary puncture navigation system based on AR augmented reality
Li et al. A vision-based navigation system with markerless image registration and position-sensing localization for oral and maxillofacial surgery
CN113100941B (en) Image registration method and system based on SS-OCT (scanning and optical coherence tomography) surgical navigation system
CN112184807B (en) Golf ball floor type detection method, system and storage medium
TWI839311B (en) Three-dimensional real-time positioning compensation method
CN106504257B (en) A kind of radiotherapy head position attitude measuring and calculation method
CN116458904A (en) Flat C-arm calibration method and device, storage medium and electronic equipment
CN113786229B (en) Auxiliary puncture navigation system based on AR augmented reality
TWM605788U (en) Positioning system
Fuertes et al. Augmented reality system for keyhole surgery-performance and accuracy validation
TW202424996A (en) Three-dimensional real-time positioning method for orthopedic surgery using artificial intelligence