WO2023088275A1 - Method and apparatus for automatic positioning of a region of interest (ROI), surgical robot system, device and medium - Google Patents


Info

Publication number
WO2023088275A1
Authority
WO
WIPO (PCT)
Prior art keywords
roi
coronal
sagittal
image sequence
target
Prior art date
Application number
PCT/CN2022/132130
Other languages
English (en)
Chinese (zh)
Inventor
白全海
刘鹏飞
刘赫
Original Assignee
苏州微创畅行机器人有限公司
Priority date
Filing date
Publication date
Application filed by 苏州微创畅行机器人有限公司
Publication of WO2023088275A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches

Definitions

  • the present application relates to the technical field of image processing, and in particular to an automatic ROI positioning method and apparatus, a surgical robot system, a device, and a medium.
  • the main functions of human joints are connection and movement.
  • the medical image may be a magnetic resonance imaging (Magnetic Resonance Imaging, MRI) image, a computed tomography (Computed Tomography, CT) image, a positron emission tomography (Positron Emission Tomography, PET) image, an ultrasound image, or the like. The obtained medical images are processed and used as an auxiliary means for subsequent clinical diagnosis and treatment.
  • an automatic ROI positioning method, apparatus, surgical robot system, device, and medium are provided to reduce the operator's manual positioning of joints and other ROIs, improve the accuracy of ROI positioning, improve work efficiency, simplify the operation flow of the navigation software for related operations, and improve the versatility of the navigation software.
  • the present application provides an automatic ROI positioning method, comprising: acquiring original image data; preprocessing the original image data to obtain a coronal image sequence and a sagittal image sequence; locating the ROI in the images of the coronal image sequence and the sagittal image sequence; integrating the ROIs located in the coronal image sequence to obtain a coronal ROI integration area, and integrating the ROIs located in the sagittal image sequence to obtain a sagittal ROI integration area; and performing coordinate transformation on the coronal ROI integration area and the sagittal ROI integration area to obtain the three-dimensional coordinates of the target ROI in the original image data.
  • before locating the ROI in the images of the coronal image sequence and the sagittal image sequence, the method further includes: classifying the images in the sequences to determine whether each image in the coronal image sequence and the sagittal image sequence contains an ROI; and filtering out the images in the coronal image sequence and the sagittal image sequence that do not contain an ROI.
  • preprocessing the original image data to obtain the coronal image sequence and the sagittal image sequence includes: performing standardization processing on the original image data to obtain standardized three-dimensional image data; and obtaining the coronal image sequence and the sagittal image sequence from the standardized three-dimensional image data.
  • the standardization processing of the original image data to obtain standardized three-dimensional image data includes: obtaining parameters of the original image data; obtaining target image parameters and an image transformation interpolation algorithm; and performing standardization processing on the original image data according to the target image parameters and the image transformation interpolation algorithm to obtain the standardized three-dimensional image data.
  • the parameters of the original image data include at least one of the shooting direction angle, resolution, origin coordinates, and three-dimensional size of the original image; the target image parameters include at least one of the target shooting direction angle, target resolution, target origin coordinates, and target three-dimensional size.
  • the ROI automatic positioning method further includes: performing window width and window level processing on the images in the coronal image sequence and the sagittal image sequence.
  • before determining whether each image in the coronal image sequence and the sagittal image sequence contains an ROI, the method further includes preprocessing the images in the coronal image sequence and the sagittal image sequence.
  • locating the ROI in the images of the coronal image sequence and the sagittal image sequence includes: performing feature extraction on the images in the coronal image sequence and the sagittal image sequence; and predicting position information of the ROI contained in the images according to the extracted features.
  • the location information includes center point coordinates and size information of the ROI.
  • integrating the ROIs located in the coronal image sequence to obtain a coronal ROI integration area, and integrating the ROIs located in the sagittal image sequence to obtain a sagittal ROI integration area, includes: based on a non-maximum suppression algorithm, integrating the overlapping parts of the ROIs located in the coronal image sequence to obtain target coronal ROIs; based on the non-maximum suppression algorithm, integrating the overlapping parts of the ROIs located in the sagittal image sequence to obtain target sagittal ROIs; and performing clustering processing on the target coronal ROIs and the target sagittal ROIs respectively to obtain the coronal ROI integration area and the sagittal ROI integration area.
  • performing clustering processing on the target coronal ROIs and the target sagittal ROIs respectively to obtain the coronal ROI integration area and the sagittal ROI integration area comprises: performing clustering processing on the target coronal ROIs according to a k-means clustering algorithm to obtain the coronal ROI integration area; and performing clustering processing on the target sagittal ROIs according to the k-means clustering algorithm to obtain the sagittal ROI integration area.
  • performing clustering processing on the target coronal ROIs according to the k-means clustering algorithm to obtain the coronal ROI integration area includes: selecting a plurality of the target coronal ROIs as cluster centers; and clustering the target coronal ROIs according to the intersection over union between each cluster center and the other target coronal ROIs to obtain the coronal ROI integration area. Performing clustering processing on the target sagittal ROIs according to the k-means clustering algorithm to obtain the sagittal ROI integration area includes: selecting a plurality of the target sagittal ROIs as cluster centers; and clustering the target sagittal ROIs according to the intersection over union between each cluster center and the other target sagittal ROIs to obtain the sagittal ROI integration area.
  • the present application also provides an automatic ROI positioning device, including: a data acquisition module for acquiring original image data; a data preprocessing module for preprocessing the original image data to obtain a coronal image sequence and a sagittal image sequence;
  • a positioning module for locating the ROI of the images in the coronal image sequence and the sagittal image sequence;
  • an integration module for integrating the ROIs located in the coronal image sequence to obtain a coronal ROI integration area and integrating the ROIs located in the sagittal image sequence to obtain a sagittal ROI integration area; and a coordinate transformation module for performing coordinate transformation on the coronal ROI integration area and the sagittal ROI integration area to obtain the three-dimensional coordinates of the target ROI in the original image data.
  • the present application also provides a surgical robot system configured to execute any one of the methods for automatic ROI positioning described above.
  • the present application also provides a computer device, including a memory and a processor, the memory stores a computer program, and when the processor executes the computer program, any one of the above ROI automatic positioning methods is implemented.
  • the present application also provides a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by one or more processors, they cause the one or more processors to perform any one of the above automatic ROI positioning methods.
  • Fig. 1 is a schematic diagram of the joint position in the coronal view and sagittal view of the medical image manually marked by the operator based on the navigation software in the related art of the present application;
  • FIG. 2 is an application environment diagram of the ROI automatic positioning method according to an embodiment of the present application
  • FIG. 3 is a flowchart of an ROI automatic positioning method according to an embodiment of the present application.
  • Fig. 4 is the flowchart of the preprocessing of the original image data according to an embodiment of the present application.
  • FIG. 5 is a flow chart of standardization processing of original image data according to an embodiment of the present application.
  • FIG. 6 is a flowchart of determining whether an ROI is included in a coronal image sequence and a sagittal image sequence according to an embodiment of the present application
  • Fig. 7 is a flow chart of classifying coronal image sequences and sagittal image sequences according to an embodiment of the present application
  • FIG. 8 is a flow chart of image preprocessing in a coronal image sequence and a sagittal image sequence according to an embodiment of the present application
  • FIG. 9 is a flowchart of ROI positioning in an image according to an embodiment of the present application.
  • Fig. 10a is a heat map of a target area according to an embodiment of the present application.
  • Fig. 10b shows the prediction result of the offset of the center point of the target area according to an embodiment of the present application
  • Fig. 10c shows the prediction results of the length and width of the target area according to an embodiment of the present application
  • FIG. 11 is a flow chart of integrating multiple overlapping ROIs based on a non-maximum value suppression algorithm according to an embodiment of the present application
  • FIG. 12 shows a process of integrating multiple overlapping ROIs based on a non-maximum value suppression algorithm according to an embodiment of the present application
  • FIG. 13 is a flow chart of integrating multiple overlapping ROI coordinate frames using an NMS algorithm combined with a clustering algorithm according to an embodiment of the present application;
  • FIG. 14 is a schematic diagram of multiple ROI coordinate frames that integrate and overlap multiple ROIs using an NMS algorithm combined with a clustering algorithm according to an embodiment of the present application;
  • Fig. 15 is a schematic diagram of transforming the coordinates of the ROI in the coronal plane and the sagittal plane to the three-dimensional coordinates of the ROI in the original image;
  • FIG. 16 is a schematic structural diagram of an automatic ROI positioning device according to an embodiment of the present application.
  • FIG. 1 is a schematic diagram of an operator manually marking joint positions in the coronal and sagittal views of a medical image based on navigation software in the related art.
  • before joint surgery, the operator first manually marks the joint position in the coronal view and sagittal view of the medical image based on the navigation software, and then the navigation system automatically gives the three-dimensional coordinates of the joint.
  • This manual marking method is time-consuming and labor-intensive, which increases the workload of the operator.
  • moreover, the operator needs to locate different joints in different views. For example, in the navigation software the left and right knee joints and hip joints need to be marked in both the coronal view and the sagittal view, so the operator needs to perform eight operations, which greatly increases the operator's labor intensity.
  • This application provides a method for automatically locating a region of interest (region of interest, ROI), which can be applied to the application environment shown in FIG. 2 .
  • the application environment includes a surgical trolley 1, a mechanical arm 2, a tool target 21, a femoral target 22, a tibial target 23, a base target 24, a pointed target 241, an osteotomy guide tool 31, an oscillating saw 41, an NDI navigation device 51, an auxiliary display 52, a navigation trolley 61, a main display 62, a keyboard 63, and an operating bed 81.
  • the operator can perform preoperative operations such as key point marking in the corresponding three-dimensional model through the navigation software.
  • surgical treatment is performed using each device in the application environment diagram.
  • the ROI automatic positioning method may be implemented by an ROI positioning device, and its executing body may be a computer processor.
  • the operator imports the three-dimensional model described by the medical image sequence, and triggers the ROI automatic positioning method of the present application.
  • the operator can then view the position of each ROI in the imported three-dimensional model in the display interface.
  • the specific process of data processing from the operator importing the medical image sequence until triggering the ROI automatic positioning method may include: reading user data through navigation software; opening up shared memory and writing data; starting the ROI automatic positioning device through message notification.
  • Fig. 3 is the flowchart of the ROI automatic positioning method of an embodiment of the present application, as shown in Fig. 3, the ROI automatic positioning method provided by the present application includes the following steps:
  • Step S11: acquiring original image data.
  • Step S12: preprocessing the original image data to obtain a coronal image sequence and a sagittal image sequence.
  • Step S13: locating the ROI in the images of the coronal image sequence and the sagittal image sequence.
  • Step S14: integrating the ROIs located in the coronal image sequence to obtain a coronal ROI integration area, and integrating the ROIs located in the sagittal image sequence to obtain a sagittal ROI integration area.
  • Step S15: performing coordinate transformation on the coronal ROI integration area and the sagittal ROI integration area to obtain the three-dimensional coordinates of the target ROI in the original image data.
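Steps S11 to S15 can be sketched end to end as a toy pipeline. This is a minimal illustration, not the patented implementation: the thresholding "detector", the `[z, y, x]` index order, and all function names are assumptions.

```python
import numpy as np

def detect_center(img, thr=0.5):
    """Toy per-slice 'detector': center of above-threshold pixels, or None."""
    rows, cols = np.nonzero(img > thr)
    if len(cols) == 0:
        return None
    return (cols.mean(), rows.mean())  # (in-plane coordinate, z)

def locate_roi_3d(volume, thr=0.5):
    # S12: slice into coronal (fixed y) and sagittal (fixed x) sequences
    coronal = [volume[:, y, :] for y in range(volume.shape[1])]   # (Z, X) images
    sagittal = [volume[:, :, x] for x in range(volume.shape[2])]  # (Z, Y) images
    # S13/S14: detect per slice, keep only slices containing the ROI, average centers
    cor = [c for c in (detect_center(s, thr) for s in coronal) if c]
    sag = [c for c in (detect_center(s, thr) for s in sagittal) if c]
    cx, cz1 = np.mean([c[0] for c in cor]), np.mean([c[1] for c in cor])
    cy, cz2 = np.mean([c[0] for c in sag]), np.mean([c[1] for c in sag])
    # S15: combine the two 2-D results into one 3-D coordinate (x, y, z)
    return cx, cy, (cz1 + cz2) / 2

vol = np.zeros((10, 10, 10))
vol[4:6, 4:6, 4:6] = 1.0  # bright cube centered at (4.5, 4.5, 4.5)
print(locate_roi_3d(vol))  # ≈ (4.5, 4.5, 4.5)
```

The real method replaces the toy detector with a target detection network and the averaging with NMS plus clustering, as described below.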
  • step S11 the original image data is acquired.
  • the original image data in the embodiments of the present application is, for example, a three-dimensional medical image
  • the computer device can obtain the medical image by performing three-dimensional reconstruction on the data of the patient's part to be examined collected by the scanning device.
  • the medical image in this application is described by taking a computed tomography (Computed Tomography, CT) image as an example.
  • step S12 the original image data is preprocessed to obtain a coronal image sequence and a sagittal image sequence.
  • the preprocessing of the original image data in this application includes standardizing the original image data, so as to convert the original image data (such as a 3D model of the whole lower limb) into coronal and sagittal image sequences under unified image rules.
  • Fig. 4 is a flow chart of preprocessing the original image data according to an embodiment of the present application. As shown in Fig. 4, step S12, preprocessing the original image data to obtain the coronal image sequence and the sagittal image sequence, includes the following steps S121 and S122.
  • step S121 standardization processing is performed on the original image data to obtain standardized three-dimensional image data.
  • FIG. 5 is a flow chart of standardization processing of original image data according to an embodiment of the present application. As shown in FIG. 5 , step S121 performs standardization processing on the original image data to obtain standardized three-dimensional image data, including the following steps S1211 to S1213.
  • step S1211 parameters of the original image data are obtained.
  • the parameters of the original image data include at least one of shooting direction angle, resolution, origin coordinates and three-dimensional size of the original image.
  • step S1212 the target image parameters and image transformation and interpolation algorithm are acquired.
  • the target image parameters include at least one of target orientation angle, target resolution, target origin coordinates, and target three-dimensional size.
  • the image transformation interpolation algorithm may be nearest neighbor method, bilinear interpolation method or cubic interpolation method, which is not limited in this application.
  • step S1213 standardize the original image data according to the target image parameters and the image transformation and interpolation algorithm to obtain the standardized three-dimensional image data.
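As one concrete instance of the interpolation options mentioned below (nearest neighbor, bilinear, cubic), a nearest-neighbor resampling of a volume to a target three-dimensional size might look like the following sketch; the function name and shapes are illustrative, not from the patent.

```python
import numpy as np

def resample_nearest(volume, target_shape):
    """Resample a 3-D array to target_shape by nearest-neighbor lookup
    along each axis (a minimal standardization sketch)."""
    src = np.asarray(volume)
    # For each axis, pick the nearest source index for each target position.
    idx = [np.round(np.linspace(0, s - 1, t)).astype(int)
           for s, t in zip(src.shape, target_shape)]
    return src[np.ix_(*idx)]

vol = np.arange(8).reshape(2, 2, 2)
out = resample_nearest(vol, (4, 4, 4))
print(out.shape)  # (4, 4, 4)
```

A full standardization step would also apply the target direction angle and origin; only the size/resolution part is sketched here.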
  • the standardization process of the image data can also be min-max normalization, in which the original image data is linearly transformed so that the result values are mapped into the range 0–1. The conversion formula is: P_i* = (P_i − min) / (max − min), where max is the maximum signal value of the original image data, min is the minimum signal value of the original image data, P_i is the signal value of the i-th point in the original image data, and P_i* is the signal value of the i-th point in the normalized image data.
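The min-max formula can be applied directly; the function name and the sample values below are illustrative.

```python
import numpy as np

def min_max_normalize(data):
    """Linearly map signal values into [0, 1]: P* = (P - min) / (max - min)."""
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(), data.max()
    return (data - lo) / (hi - lo)

signal = np.array([-1000.0, 0.0, 1000.0])  # e.g. CT-style signal values
print(min_max_normalize(signal))  # [0.  0.5 1. ]
```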
  • step S122 the coronal image sequence and the sagittal image sequence are obtained according to the standardized three-dimensional image data.
  • a sequence of coronal images is obtained by extracting each 2D image section along the coronal direction in the 3D image data, wherein each 2D image section is regarded as a coronal image.
  • a sequence of sagittal images is obtained by extracting each two-dimensional image slice along the sagittal plane direction in the three-dimensional image data, wherein each two-dimensional image slice is regarded as a sagittal image.
  • a coronal view sequence and a sagittal view sequence are obtained.
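Under the assumption that the standardized volume is indexed `[z, y, x]` (an assumption, since the patent does not fix an axis order), the two sequences can be extracted by fixing one axis per slice:

```python
import numpy as np

# A standardized volume indexed [z, y, x]; the shape is arbitrary here.
volume = np.random.rand(64, 32, 16)

# Each fixed-y section is one coronal image; each fixed-x section is one sagittal image.
coronal_seq = [volume[:, y, :] for y in range(volume.shape[1])]   # 32 images of 64x16
sagittal_seq = [volume[:, :, x] for x in range(volume.shape[2])]  # 16 images of 64x32

print(len(coronal_seq), coronal_seq[0].shape)    # 32 (64, 16)
print(len(sagittal_seq), sagittal_seq[0].shape)  # 16 (64, 32)
```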
  • Fig. 6 is a flow chart of determining whether an ROI is included in a coronal image sequence and a sagittal image sequence according to an embodiment of the present application.
  • to improve the processing speed, steps S125 and S126 are further included before step S13 locates the ROI of the images in the coronal image sequence and the sagittal image sequence.
  • step S125 by classifying the images in the coronal image sequence and the sagittal image sequence, it is determined whether each image in the coronal image sequence and the sagittal image sequence contains an ROI.
  • some of the images in the coronal image sequence and the sagittal image sequence contain ROIs, and some do not.
  • an ROI pre-classification network is used to classify the images in the coronal image sequence and the sagittal image sequence, and whether each image in the two sequences contains an ROI is determined according to whether the classification label output by the pre-classification network indicates an ROI.
  • step S125, classifying the images in the coronal image sequence and the sagittal image sequence to determine whether each image contains an ROI, includes the following steps S1251 to S1253.
  • step S1251 feature extraction is performed on the images in the coronal image sequence and the sagittal image sequence respectively.
  • a backbone network is used to perform feature extraction on images in the coronal image sequence and the sagittal image sequence respectively.
  • the backbone network is any one of VGG series and Resnet series.
  • step S1252 the extracted features are mapped to a binary classification space.
  • a fully connected network is used to map the features extracted by the backbone network to a binary classification space.
  • the classification result is 0 or 1, for example, 0 represents an image containing only the background, and 1 represents an image containing ROI.
  • step S1253 according to the output classification result, it is determined whether the images in the coronal image sequence and the sagittal image sequence contain ROI.
  • the determination is made according to the classification results of 0 and 1: for example, coronal and sagittal images with a classification result of 1 are determined to be images containing an ROI, and coronal and sagittal images with a classification result of 0 are determined to be images that do not contain an ROI.
  • step S126 the images in the coronal image sequence and the sagittal image sequence that do not contain ROI are filtered.
  • the coronal images and sagittal images containing an ROI then respectively compose the coronal image sequence and the sagittal image sequence used for further ROI target detection.
  • Fig. 8 is a flow chart of image preprocessing in coronal image sequence and sagittal image sequence in an embodiment of the present application.
  • to meet the input image requirements, before the coronal image sequence and the sagittal image sequence are input into the pre-classification network, the images in the two sequences need to be preprocessed; that is, steps S123 and S124 are further included before step S125.
  • step S123 window width and window level processing is performed on the images in the coronal image sequence and the sagittal image sequence.
  • the original image is, for example, a CT image
  • the window width and window level of the CT images in the coronal image sequence and the sagittal image sequence can be set, and window width and window level processing is performed on the CT images based on these settings, so as to enhance the ROI data of the CT images in the two sequences.
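A common form of window width/level processing is to clip the CT values to the window and rescale them; this sketch assumes Hounsfield-style inputs, and the particular window values are illustrative, not taken from the patent.

```python
import numpy as np

def apply_window(hu, level, width):
    """Clip values to [level - width/2, level + width/2] and rescale to [0, 1]."""
    lo, hi = level - width / 2, level + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

slice_hu = np.array([-1000.0, 300.0, 2000.0])
# A bone-type window (level=300, width=1500) is used purely as an example.
print(apply_window(slice_hu, level=300, width=1500))
```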
  • this step may also include adjusting the size of the images in the coronal image sequence and the sagittal image sequence, for example, to 1024×512. There are two ways to adjust the image size: edge cropping and padding.
  • step S124 preprocessing is performed on the enhanced images in the coronal image sequence and the sagittal image sequence.
  • according to the requirements of the pre-classification network, the enhanced images in the coronal image sequence and the sagittal image sequence can also be normalized.
  • the normalization method can be a Z-score method: x* = (x − μ) / σ, where μ is the mean value of the image data in the enhanced coronal and sagittal image sequences, and σ is the standard deviation of the image data in the enhanced coronal and sagittal image sequences.
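The Z-score transform is a one-liner; the function name and sample values are illustrative.

```python
import numpy as np

def z_score(data):
    """Z-score normalization: x* = (x - mu) / sigma."""
    data = np.asarray(data, dtype=float)
    return (data - data.mean()) / data.std()

img = np.array([1.0, 2.0, 3.0, 4.0])
z = z_score(img)
print(z.mean(), z.std())  # ~0.0 and ~1.0 after normalization
```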
  • step S13 the ROIs of the images in the coronal view sequence and the sagittal view sequence are located.
  • a target detection network is used to locate the ROI of each coronal image in the coronal image sequence, and the ROI of each sagittal image in the sagittal image sequence, to obtain the desired targets.
  • FIG. 9 is a flow chart of ROI positioning in an image according to an embodiment of the present application. As shown in FIG. 9, step S13 locates the ROI of the images in the coronal image sequence and the sagittal image sequence, including the following Step S131 and Step S132.
  • step S131 feature extraction is performed on images in the coronal image sequence and the sagittal image sequence.
  • a feature extraction network is used to perform feature extraction on the images in the coronal image sequence and the sagittal image sequence respectively.
  • the feature extraction network can be ResNet50, ResNet101, HourglassNet, or MobileNet.
  • the feature extraction network uses the extracted features for target prediction.
  • the convolution operation is performed on the images in the coronal image sequence and the sagittal image sequence through a convolutional neural network to realize feature extraction.
  • the convolutional neural network uses a 3×3 filter; the filter is scanned rightward and downward in sequence, the value of each element of the output matrix is obtained, and the filtering of the image to be processed is thereby realized.
  • the calculation for one window (the black-line frame in the illustration) can be, for example: 1×1 + 0×1 + 0×1 + 1×0 + 1×1 + 1×0 + 0×1 + 1×0 + 1×1 = 3.
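The scan-and-sum operation can be reproduced directly. The 3×3 window and filter below are chosen so that the products match the worked sum above; the specific values are otherwise illustrative.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution without kernel flipping (cross-correlation, as in CNNs)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # Element-wise product of the current window with the filter, then sum.
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

patch = np.array([[1, 0, 0],
                  [1, 1, 1],
                  [0, 1, 1]])
kern = np.array([[1, 1, 1],
                 [0, 1, 0],
                 [1, 0, 1]])
print(conv2d_valid(patch, kern))  # [[3.]], matching the worked example
```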
  • a pooling layer is added between adjacent convolutional layers in the convolutional neural network.
  • the pooling layer for example, can use a 2 ⁇ 2 filter to perform a maximum pooling operation on a 4 ⁇ 4 image, and the result takes the corresponding maximum value in the 2 ⁇ 2 window, and finally obtains a 2 ⁇ 2 image.
  • maximum pooling provides a way to down-sample the convolutional matrix for subsequent network layers to continue processing, until the image features used to determine whether the input image contains an ROI are obtained.
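The 4×4 to 2×2 max-pooling example can be written compactly; the pixel values below are illustrative.

```python
import numpy as np

def max_pool_2x2(image):
    """2x2 max pooling with stride 2, as in the 4x4 -> 2x2 example."""
    h, w = image.shape
    # Group pixels into non-overlapping 2x2 blocks and take each block's maximum.
    return image.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.array([[1, 3, 2, 4],
                [5, 6, 1, 0],
                [7, 2, 9, 8],
                [0, 1, 3, 4]])
print(max_pool_2x2(img))  # [[6 4]
                          #  [7 9]]
```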
  • step S132 the position information of the ROI included in the images in the coronal image sequence and the sagittal image sequence is predicted according to the extracted features.
  • the position information of the ROI includes the center point coordinates and size information of the ROI.
  • the results shown in Figures 10a-10c can be predicted.
  • the heat map shown in Fig. 10a shows the probability that each image block in the image contains a joint.
  • Figure 10b shows the offset between the predicted center point of the joint area in the heat map and the actual center point; the vertical and horizontal coordinate values represent the offset angle and offset length of the predicted center point of the joint area relative to the actual center point of the joint.
  • Figure 10c shows the prediction of the length and width of the joint area. The predicted coordinate offset of the center point of the joint area is used to correct the predicted center point coordinates, and the joint area in the image can then be determined from the size information predicted by the target detection network (such as length and width) and the corrected center point.
  • the ROI of each image in each coronal image sequence and sagittal image sequence is obtained through detection by the target detection network.
  • to obtain ROIs covering the coronal-view and sagittal-view dimensions of the three-dimensional model, it is necessary to integrate the detected ROIs of the images.
  • step S14 the ROIs located in the coronal image sequence are integrated to obtain a coronal ROI integration area, and the ROIs located in the sagittal image sequence are integrated to obtain a sagittal ROI integration area.
  • in order to obtain the coronal ROI integration area and the sagittal ROI integration area, in an embodiment of the present application, multiple overlapping ROIs are integrated based on the non-maximum suppression (Non-Maximum Suppression, NMS) algorithm, to obtain the coronal ROI integration area covering the ROIs of the coronal images in the coronal image sequence, and the sagittal ROI integration area covering the ROIs of the sagittal images in the sagittal image sequence.
  • Fig. 11 is a flow chart of integrating multiple overlapping ROIs based on the non-maximum suppression algorithm in one embodiment of the present application.
  • as shown in Fig. 11, step S14, integrating the ROIs located in the coronal image sequence to obtain the coronal ROI integration area and integrating the ROIs located in the sagittal image sequence to obtain the sagittal ROI integration area, includes the following steps S1401 to S1403.
  • step S1401 based on a non-maximum value suppression algorithm, the overlapping parts of the ROIs positioned in the coronal image sequence are integrated to obtain a target coronal ROI.
  • step S1402 based on the non-maximum value suppression algorithm, the overlapping parts of the ROIs positioned in the sagittal image sequence are integrated to obtain a target sagittal ROI.
  • step S1403 cluster processing is performed on the target coronal ROI and the target sagittal ROI to obtain the coronal ROI integration area and the sagittal ROI integration area.
  • Figure 12 shows the process of integrating multiple overlapping joint areas based on the non-maximum value suppression algorithm in an embodiment of the present application.
  • in the example, the joint areas with non-maximum scores of 0.78, 0.80, and 0.86 are suppressed, and the joint area corresponding to the maximum score of 0.92 is retained, thereby removing redundant joint areas and keeping the best joint area.
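The suppression step can be sketched as follows. Only the scores (0.78, 0.80, 0.86, 0.92) come from the example above; the box coordinates, the IoU threshold, and the function names are illustrative assumptions.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thr=0.5):
    """Keep the highest-scoring box; suppress lower-scoring boxes that overlap it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thr]
    return keep

# Four heavily overlapping candidates, scored as in the example.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (2, 0, 12, 10), (0, 1, 10, 11)]
scores = [0.78, 0.86, 0.92, 0.80]
print(nms(boxes, scores))  # [2]: only the 0.92 box survives
```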
  • a clustering algorithm may also be used to eliminate redundant joint regions and retain the best joint regions.
  • the target coronal ROIs are clustered to obtain the coronal ROI integration area, and the target sagittal ROIs are clustered to obtain the sagittal ROI integration area.
  • The step of clustering the target coronal plane ROIs to obtain the coronal plane ROI integration area includes: selecting a plurality of the target coronal plane ROIs as cluster centers; and clustering the target coronal plane ROIs according to the intersection over union between each cluster center and the other target coronal plane ROIs, so as to obtain the coronal plane ROI integration area.
  • The step of clustering the target sagittal plane ROIs to obtain the sagittal plane ROI integration area includes: selecting a plurality of the target sagittal plane ROIs as cluster centers; and clustering the target sagittal plane ROIs according to the intersection over union between each cluster center and the other target sagittal plane ROIs, so as to obtain the sagittal plane ROI integration area.
  • In one embodiment, step S14 uses the NMS algorithm combined with a clustering algorithm to integrate multiple overlapping ROIs. In this case, integrating the ROIs positioned in the coronal image sequence to obtain the coronal plane ROI integration area and integrating the ROIs positioned in the sagittal image sequence to obtain the sagittal plane ROI integration area specifically includes the following steps S1411-S1418.
  • In step S1411, the multiple overlapping ROIs of each type of ROI are integrated based on the non-maximum suppression algorithm.
  • In step S1412, the initial value of the number n of cluster centers is set to 0.
  • In step S1413, a cluster center corresponding to the current ROI is selected as the current cluster center.
  • In step S1414, the number n of cluster centers is incremented by 1, and the intersection over union (IoU) between the current ROI and the current cluster center is calculated.
  • In step S1415, it is judged whether the IoU is greater than a given threshold.
  • In step S1416, if the IoU is greater than the given threshold, the current ROI is classified into the current cluster center.
  • In step S1417, if the IoU is less than or equal to the given threshold, it is judged whether the number n of cluster centers already compared is greater than or equal to k, where k is the total number of cluster centers set for each type of ROI; if not, a cluster center is reselected from the unselected cluster centers as the current cluster center, and the process returns to step S1414.
  • In step S1418, it is judged whether all ROIs have been traversed; if not, an ROI is selected from the unclustered ROIs as the current ROI, and the process returns to step S1412; if yes, the integration of the ROIs ends, and the coronal plane ROI integration area and the sagittal plane ROI integration area are obtained.
  • In step S1413, the cluster center corresponding to the ROI may be selected randomly; likewise, reselecting a cluster center from the unselected cluster centers may be done randomly.
  • For each type of joint, the above steps S1411-S1418 are performed respectively, so as to integrate the multiple overlapping joint areas of all joints.
  • By using the NMS algorithm combined with the clustering algorithm to integrate the multiple joint areas of a joint, accurate coronal and sagittal joint positions can still be obtained even when some boxes deviate from the cluster center after non-maximum suppression.
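  • The IoU-based clustering and integration of steps S1411-S1418 can be sketched as follows. This is a simplified, deterministic sketch: the patent selects cluster centers randomly, whereas here they are passed in explicitly, and the integration of each cluster is assumed to be a simple average of its member boxes (the patent does not specify the merge rule). All names are illustrative:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def cluster_rois(rois, centers, iou_threshold=0.5):
    """Assign every ROI to the first cluster center whose IoU with it
    exceeds the threshold, then integrate each cluster by averaging its
    member boxes; ROIs matching no center are dropped as outliers.
    """
    clusters = [[c] for c in centers]
    for roi in rois:
        if roi in centers:
            continue  # centers already seed their own clusters
        for idx, center in enumerate(centers):
            if iou(roi, center) > iou_threshold:
                clusters[idx].append(roi)
                break
    # Average each cluster's boxes into one integrated region
    return [tuple(sum(box[d] for box in members) / len(members) for d in range(4))
            for members in clusters]
```

Applied once to the coronal detections and once to the sagittal detections, this yields one integrated region per cluster center, which is how boxes that survive NMS but deviate from the cluster center are absorbed rather than kept as spurious extra regions.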
  • In step S15, coordinate transformation is performed on the coronal plane ROI integration area and the sagittal plane ROI integration area to obtain the three-dimensional coordinates of the target ROI in the original image data.
  • the three-dimensional coordinates of the ROI are calculated according to the obtained coordinates of the ROI on the coronal plane and the sagittal plane.
  • P_ct(X, Y, Z) represents the three-dimensional coordinates of the ROI in the original CT image; P_i(x_i, y_i) represents the coordinates of the ROI on the i-th coronal plane; P_j(x_j, y_j) represents the coordinates of the ROI on the j-th sagittal plane.
  • P_i(x_i, y_i) transforms into the original CT image as CT(x_i, Y, y_i), and P_j(x_j, y_j) transforms into the original CT image as CT(X, x_j, y_j); combining the two relations, the three-dimensional coordinates of the ROI in the original CT image can be deduced.
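  • A minimal sketch of this combination step follows. It assumes, per the relations above, that the coronal detection constrains the ROI to CT(x_i, Y, y_i) and the sagittal detection to CT(X, x_j, y_j), so identifying the shared axis gives P_ct = (x_i, x_j, y_i); the tolerance parameter is an assumption for illustration, not from the patent:

```python
def roi_3d_coordinates(coronal_pt, sagittal_pt, z_tol=2.0):
    """Combine a coronal-slice detection P_i(x_i, y_i) and a sagittal-slice
    detection P_j(x_j, y_j) into the 3D CT coordinates of the ROI.

    The two detections share the Z axis, so y_i and y_j should agree;
    z_tol rejects pairs whose Z estimates disagree by more than a few
    voxels.
    """
    x_i, y_i = coronal_pt
    x_j, y_j = sagittal_pt
    if abs(y_i - y_j) > z_tol:
        raise ValueError("coronal and sagittal Z estimates disagree")
    # Average the two Z estimates to smooth out voxel rounding
    return (x_i, x_j, (y_i + y_j) / 2.0)
```

For example, a coronal detection at (120, 88) and a sagittal detection at (200, 89) combine to the 3D point (120, 200, 88.5).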
  • The ROI automatic positioning method of the present application does not need to perform calculations on three-dimensional data, which reduces computational complexity and data processing time; it can automatically position ROI parts in three-dimensional space and detect the different ROI parts on a medical image at one time, which simplifies the operation process of the navigation software, improves its versatility, improves work efficiency, and reduces the workload of the navigation software operators.
  • The present application further provides an automatic ROI positioning device, as shown in FIG. 1.
  • the data acquisition module 101 is configured to acquire original image data.
  • The data preprocessing module 102 is configured to preprocess the original image data to obtain a coronal image sequence and a sagittal image sequence.
  • The positioning module 103 is configured to locate the ROIs of the images in the coronal image sequence and the sagittal image sequence.
  • The integration module 104 is configured to integrate the ROIs located in the coronal image sequence to obtain a coronal plane ROI integration area, and to integrate the ROIs located in the sagittal image sequence to obtain a sagittal plane ROI integration area.
  • The coordinate transformation module 105 is configured to perform coordinate transformation on the coronal plane ROI integration area and the sagittal plane ROI integration area to obtain the three-dimensional coordinates of the target ROI in the original image data.
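  • The five modules above can be sketched as a thin pipeline. This is a hypothetical skeleton for illustration only: the callables stand in for the patent's modules 101-105, and none of the names reflect an actual API:

```python
class RoiAutoPositioner:
    """Chains the five modules: acquisition, preprocessing, per-slice ROI
    positioning, integration, and coordinate transformation."""

    def __init__(self, acquire, preprocess, locate, integrate, to_3d):
        # Each argument is a callable standing in for modules 101-105.
        self.acquire, self.preprocess = acquire, preprocess
        self.locate, self.integrate, self.to_3d = locate, integrate, to_3d

    def run(self):
        volume = self.acquire()                          # module 101
        coronal, sagittal = self.preprocess(volume)      # module 102
        rois_c = [self.locate(img) for img in coronal]   # module 103
        rois_s = [self.locate(img) for img in sagittal]
        region_c = self.integrate(rois_c)                # module 104
        region_s = self.integrate(rois_s)
        return self.to_3d(region_c, region_s)            # module 105
```

Because each stage is a plain callable, the 2D locator or the integration rule can be swapped independently without touching the rest of the pipeline.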
  • This ROI automatic positioning device does not need to perform calculations on three-dimensional data, which reduces computational complexity and data processing time; it can automatically position ROI parts in three-dimensional space and detect the different ROI parts on a medical image at one time, which simplifies the operation process of the navigation software, improves its versatility, improves work efficiency, and reduces the workload of the navigation software operators.
  • Each module in the above ROI automatic positioning device can be realized in whole or in part by software, hardware, or a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can call and execute the corresponding operations of the above-mentioned modules.
  • the division of modules in the embodiment of the present application is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
  • the present application provides a surgical robot system.
  • the robot system is configured to execute the ROI automatic positioning method in each of the above embodiments.
  • The robot system does not need to perform calculations on three-dimensional data, which reduces computational complexity and data processing time, and can automatically position the ROI in three-dimensional space; the different ROI parts on a medical image can be detected accurately, which simplifies the operation process of the navigation software, improves its versatility, improves work efficiency, and reduces the workload of the navigation software operators.
  • the present application provides a computer device, including a memory and a processor, the memory stores a computer program, and the processor implements the ROI automatic positioning method in any embodiment of the present application when executing the computer program.
  • The present application provides a computer-readable storage medium on which computer-readable instructions are stored. When the computer-readable instructions are executed by one or more processors, the one or more processors execute the ROI automatic positioning method of any embodiment of the present application.
  • Nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

The present invention discloses an automatic ROI positioning method, comprising: obtaining original image data; preprocessing the original image data to obtain a coronal image sequence and a sagittal image sequence; positioning the ROI of the images in the coronal image sequence and the sagittal image sequence; integrating the ROIs positioned in the coronal image sequence to obtain a coronal plane ROI integration area, and integrating the ROIs positioned in the sagittal image sequence to obtain a sagittal plane ROI integration area; and performing coordinate transformation on the coronal plane ROI integration area and the sagittal plane ROI integration area to obtain the three-dimensional coordinates of a target ROI in the original image data.
PCT/CN2022/132130 2021-11-19 2022-11-16 Procédé et appareil de positionnement automatique de région d'intérêt (roi), système de robot chirurgical, dispositif et support WO2023088275A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111391566.8A CN114255329A (zh) 2021-11-19 2021-11-19 Roi自动定位方法、装置、手术机器人系统、设备及介质
CN202111391566.8 2021-11-19

Publications (1)

Publication Number Publication Date
WO2023088275A1 true WO2023088275A1 (fr) 2023-05-25

Family

ID=80791012

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/132130 WO2023088275A1 (fr) 2021-11-19 2022-11-16 Procédé et appareil de positionnement automatique de région d'intérêt (roi), système de robot chirurgical, dispositif et support

Country Status (2)

Country Link
CN (1) CN114255329A (fr)
WO (1) WO2023088275A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114255329A (zh) * 2021-11-19 2022-03-29 苏州微创畅行机器人有限公司 Roi自动定位方法、装置、手术机器人系统、设备及介质
CN116779093B (zh) * 2023-08-22 2023-11-28 青岛美迪康数字工程有限公司 一种医学影像结构化报告的生成方法、装置和计算机设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5371778A (en) * 1991-11-29 1994-12-06 Picker International, Inc. Concurrent display and adjustment of 3D projection, coronal slice, sagittal slice, and transverse slice images
CN110021053A (zh) * 2019-04-16 2019-07-16 河北医科大学第二医院 一种基于坐标转换的图像定位方法、装置、存储介质及设备
CN110427970A (zh) * 2019-07-05 2019-11-08 平安科技(深圳)有限公司 图像分类方法、装置、计算机设备和存储介质
CN111340780A (zh) * 2020-02-26 2020-06-26 汕头市超声仪器研究所有限公司 一种基于三维超声图像的病灶检测方法
CN114255329A (zh) * 2021-11-19 2022-03-29 苏州微创畅行机器人有限公司 Roi自动定位方法、装置、手术机器人系统、设备及介质

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI BIN, LEI XUE; DAI MENG; WANG RUNHUI; BI YANHUA; FAN ZHENZENG: "Utilizing Coordinate Transition of Imaging Data to Locate the Relationship Between the Intracranial Target and the Skull Surface Signs", JOURNAL OF HEBEI MEDICAL UNIVERSITY, vol. 33, no. 11, 1 November 2012 (2012-11-01), pages 1260, XP093067899 *
WANG JINCHUAN: "The Key Technology Research of3-D Reconstruction in Medical Image Based on MC Algorithm", MASTER'S THESIS, no. 12, 25 April 2012 (2012-04-25), CN, pages 1 - 63, XP009545685 *

Also Published As

Publication number Publication date
CN114255329A (zh) 2022-03-29

Similar Documents

Publication Publication Date Title
WO2023088275A1 (fr) Procédé et appareil de positionnement automatique de région d'intérêt (roi), système de robot chirurgical, dispositif et support
CN106682435B (zh) 一种多模型融合自动检测医学图像中病变的系统及方法
CN107480677B (zh) 一种识别三维ct图像中感兴趣区域的方法及装置
US7783094B2 (en) System and method of computer-aided detection
US11996198B2 (en) Determination of a growth rate of an object in 3D data sets using deep learning
US20080226145A1 (en) Image processing apparatus and computer readable media containing image processing program
CN114974575A (zh) 基于多特征融合的乳腺癌新辅助化疗疗效预测装置
JP5296981B2 (ja) アフィン変換を用いたモダリティ内医療体積画像の自動位置合わせ
CN110490841B (zh) 计算机辅助影像分析方法、计算机设备和存储介质
CN110211200B (zh) 一种基于神经网络技术的牙弓线生成方法及其系统
Jaffar et al. An ensemble shape gradient features descriptor based nodule detection paradigm: a novel model to augment complex diagnostic decisions assistance
US20060078184A1 (en) Intelligent splitting of volume data
KR20190090986A (ko) 흉부 의료 영상 판독 지원 시스템 및 방법
CN112464802B (zh) 一种玻片样本信息的自动识别方法、装置和计算机设备
CN109816665B (zh) 一种光学相干断层扫描图像的快速分割方法及装置
CN113723417B (zh) 基于单视图的影像匹配方法、装置、设备及存储介质
CN113160199B (zh) 影像识别方法、装置、计算机设备和存储介质
CN114334097A (zh) 基于医学图像上病灶进展的自动评估方法及相关产品
CN113962957A (zh) 医学图像处理方法、骨骼图像处理方法、装置、设备
US10699426B2 (en) Registration apparatus, registration method, and registration program
JPWO2020174770A1 (ja) 領域特定装置、方法およびプログラム、学習装置、方法およびプログラム、並びに識別器
CN111210414A (zh) 医学图像分析方法、计算机设备和可读存储介质
CN115148341B (zh) 一种基于体位识别的ai结构勾画方法及系统
CN112183618B (zh) 相似度确定方法和相似度确定装置
EP4040384A1 (fr) Procédé, dispositif et système permettant de déterminer la présence d'une appendicite

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22894808

Country of ref document: EP

Kind code of ref document: A1