CN114255329A - ROI automatic positioning method and device, surgical robot system, equipment and medium - Google Patents

ROI automatic positioning method and device, surgical robot system, equipment and medium

Info

Publication number
CN114255329A
CN114255329A (application CN202111391566.8A)
Authority
CN
China
Prior art keywords
roi
coronal
sagittal
sequence
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111391566.8A
Other languages
Chinese (zh)
Inventor
白全海
刘鹏飞
刘赫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Xiaowei Changxing Robot Co ltd
Original Assignee
Suzhou Xiaowei Changxing Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Xiaowei Changxing Robot Co ltd filed Critical Suzhou Xiaowei Changxing Robot Co ltd
Priority to CN202111391566.8A priority Critical patent/CN114255329A/en
Publication of CN114255329A publication Critical patent/CN114255329A/en
Priority to PCT/CN2022/132130 priority patent/WO2023088275A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computer Graphics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Probability & Statistics with Applications (AREA)
  • Architecture (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application discloses an ROI automatic positioning method, which comprises the following steps: acquiring original image data; preprocessing the original image data to obtain a coronal image sequence and a sagittal image sequence; locating the ROI in the images of the coronal view sequence and the sagittal view sequence; integrating the located ROIs in the coronal view sequence to obtain a coronal ROI integration region, and integrating the located ROIs in the sagittal view sequence to obtain a sagittal ROI integration region; and performing coordinate transformation on the coronal ROI integration region and the sagittal ROI integration region to obtain the three-dimensional coordinates of the target ROI in the original image data. The application also discloses an ROI automatic positioning apparatus, a surgical robot system, a computer device, and a medium.

Description

ROI automatic positioning method and device, surgical robot system, equipment and medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for automatically positioning an ROI, a surgical robot system, a device, and a medium.
Background
The main functions of human joints are articulation and motion. For medical treatment or medical research, medical images of internal joints are acquired non-invasively from the human body or a part of it. The medical image may be a Magnetic Resonance Imaging (MRI) image, a Computed Tomography (CT) image, a Positron Emission Tomography (PET) image, an ultrasound image, or the like. The obtained medical image is processed and used as an auxiliary means for subsequent clinical diagnosis, treatment, and the like.
Fig. 1 is a schematic diagram of an operator manually marking joint positions in a coronal view and a sagittal view of a medical image based on navigation software in the related art of the present application. As shown in fig. 1, in the related art, before a joint operation, the joint positions in the coronal view and the sagittal view of the medical image are first marked manually by an operator based on navigation software, and the three-dimensional coordinates of the joint part are then given automatically by the navigation system. This manual labeling is time-consuming and labor-intensive and increases the workload of the operator. The operator needs to position different joint parts under different views; for example, in the navigation software, the left and right knee joints and hip joints need to be marked in both the coronal view and the sagittal view, requiring 8 operations, which greatly increases the labor intensity of the operator. In addition, the joint positioning standards of different operators are not uniform, so the positions of the selected joint parts are not uniform, the joint positioning accuracy is low, and the three-dimensional reconstruction model of the joint is consequently inaccurate. Finally, because the joint positions must be marked manually by operators, there is no universal positioning method that can adapt to the positioning of different joints; different program code must be developed for different joint parts, which is not conducive to the platform construction of the navigation software.
Disclosure of Invention
Based on this, according to the embodiments of the present application, an ROI automatic positioning method and apparatus, a surgical robot system, a computer device, and a medium are provided to reduce the operations of an operator manually positioning an ROI such as a joint, improve the accuracy of ROI positioning, improve work efficiency, simplify the operation flow of related surgical navigation software, and improve the versatility of the navigation software.
The application provides an ROI automatic positioning method, which comprises the following steps: acquiring original image data; preprocessing the original image data to obtain a coronal image sequence and a sagittal image sequence; locating a ROI in images of the coronal view sequence and the sagittal view sequence; integrating the positioned ROI in the coronal map sequence to obtain an ROI integrated region in a coronal plane, and integrating the positioned ROI in the sagittal map sequence to obtain an ROI integrated region in a sagittal plane; and carrying out coordinate transformation on the coronal ROI integrated region and the sagittal ROI integrated region to obtain the three-dimensional coordinate of the target ROI in the original image data.
In an embodiment of the present application, before said locating the ROI in the images of the coronal map sequence and the sagittal map sequence, further comprising: determining whether each image in the coronal view sequence and the sagittal view sequence contains an ROI by classifying the images in the coronal view sequence and the sagittal view sequence; filtering images in the coronal map sequence and the sagittal map sequence that do not include the ROI.
In an embodiment of the application, the preprocessing the raw image data to obtain a coronal view sequence and a sagittal view sequence includes: standardizing the original image data to obtain standardized three-dimensional image data; obtaining the coronal view sequence and the sagittal view sequence from the normalized three-dimensional image data.
In an embodiment of the present application, the normalizing the original image data to obtain normalized three-dimensional image data includes: acquiring parameters of the original image data; acquiring target image parameters and an image transformation interpolation algorithm; and according to the target image parameters and the image transformation interpolation algorithm, carrying out standardization processing on the original image data to obtain the standardized three-dimensional image data.
In an embodiment of the present application, the parameter of the original image data at least includes one of a shooting direction angle, a resolution, an origin coordinate, and a three-dimensional size of the original image; the target image parameters at least comprise one of a target shooting direction angle, a target resolution, a target origin coordinate and a target three-dimensional size.
In an embodiment of the present application, before the determining whether each image in the coronal view sequence and the sagittal view sequence contains the ROI by classifying the images in the coronal view sequence and the sagittal view sequence, the ROI automatic positioning method further comprises: performing window-width window-level processing on images in the coronal image sequence and the sagittal image sequence.
In an embodiment of the application, before the determining whether each image in the coronal view sequence and the sagittal view sequence contains the ROI by classifying the images in the coronal view sequence and the sagittal view sequence, further includes:
and normalizing the images in the coronal image sequence and the sagittal image sequence.
In an embodiment of the present application, the locating the ROI in the images of the coronal view sequence and the sagittal view sequence comprises: performing feature extraction on the images in the coronal view sequence and the sagittal view sequence; predicting, from the extracted features, position information of the ROI contained within the images in the coronal and sagittal image sequences.
In an embodiment of the present application, the position information includes center point coordinates and size information of the ROI.
In an embodiment of the present application, the integrating the located ROIs in the coronal map sequence to obtain coronal ROI integration regions and integrating the located ROIs in the sagittal map sequence to obtain sagittal ROI integration regions comprises: integrating the overlapping parts of the ROIs positioned in the coronal graph sequence based on a non-maximum suppression algorithm to obtain a target coronal plane ROI; integrating the overlapping parts of the positioned ROI in the sagittal image sequence based on a non-maximum suppression algorithm to obtain a target sagittal plane ROI; and clustering the target coronal plane ROI and the target sagittal plane ROI respectively to obtain a coronal plane ROI integrated region and a sagittal plane ROI integrated region.
In an embodiment of the application, the clustering the target coronal plane ROI and the target sagittal plane ROI to obtain the coronal plane ROI integration region and the sagittal plane ROI integration region respectively includes: clustering the target coronal plane ROI according to a k-means clustering algorithm to obtain an integration region of the coronal plane ROI; and clustering the target sagittal plane ROI according to a k-means clustering algorithm to obtain the sagittal plane ROI integration region.
In an embodiment of the present application, the clustering the target coronal plane ROI according to a k-means clustering algorithm to obtain the coronal plane ROI integration region includes: selecting a plurality of target coronal plane ROIs as cluster centers; and performing clustering processing on the target coronal plane ROIs according to the intersection over union of each cluster center with the other target coronal plane ROIs, to obtain the coronal plane ROI integration region. The clustering the target sagittal plane ROI according to a k-means clustering algorithm to obtain the sagittal plane ROI integration region includes: selecting a plurality of target sagittal plane ROIs as cluster centers; and performing clustering processing on the target sagittal plane ROIs according to the intersection over union of each cluster center with the other target sagittal plane ROIs, to obtain the sagittal plane ROI integration region.
In an embodiment of the present application, the performing coordinate transformation on the coronal ROI integration region and the sagittal ROI integration region to obtain three-dimensional coordinates of the target ROI in the raw image data includes: performing coordinate transformation on the coronal ROI integration region and the sagittal ROI integration region according to the formula P_ct(X, Y, Z) = P_ct(x_i, x_j, (y_i + y_j)/2) to obtain the three-dimensional coordinates of the target ROI; wherein P_ct(X, Y, Z) represents the three-dimensional coordinates of a point of the target ROI in the raw image data, P_i(x_i, y_i) represents the coordinates of the point in the i-th coronal ROI integration region, and P_j(x_j, y_j) represents the coordinates of the point in the j-th sagittal ROI integration region.
The present application further provides a ROI automatic positioning device, including: the data acquisition module is used for acquiring original image data; the data preprocessing module is used for preprocessing the original image data to obtain a coronal image sequence and a sagittal image sequence; a localization module to localize ROIs of images in the coronal view sequence and the sagittal view sequence; the integration module is used for integrating the positioned ROI in the coronal map sequence to obtain an ROI integrated region in a coronal plane, and integrating the positioned ROI in the sagittal map sequence to obtain an ROI integrated region in the sagittal plane; and the coordinate transformation module is used for carrying out coordinate transformation on the ROI integrated region in the coronal plane and the ROI integrated region in the sagittal plane to obtain the three-dimensional coordinate of the target ROI in the original image data.
The application also provides a surgical robot system which comprises the ROI automatic positioning device.
The application also provides a computer device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes any ROI automatic positioning method when executing the computer program.
The present application further provides a computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by one or more processors, cause the one or more processors to perform any of the above-described ROI automatic positioning methods.
According to the ROI automatic positioning method and apparatus and the surgical robot system of the present application, no calculation is needed on three-dimensional data, which reduces the computational complexity and shortens the data processing time; the ROI can be positioned automatically in three-dimensional space and the different ROIs on a medical image detected in one pass, which simplifies the operation flow of the navigation software, improves its versatility, improves work efficiency, and reduces the workload of navigation software operators.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from the disclosed drawings without creative effort.
FIG. 1 is a schematic illustration of an operator manually labeling joint positions in coronal and sagittal views of a medical image based on navigation software in a related art of the present application;
FIG. 2 is a diagram of an application environment of the ROI automatic positioning method according to an embodiment of the present application;
FIG. 3 is a flowchart of a ROI automatic positioning method according to an embodiment of the present application;
FIG. 4 is a flow chart of raw image data preprocessing according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating an exemplary process for normalizing raw image data according to the present disclosure;
FIG. 6 is a flow chart for determining whether a ROI is included in a coronal view sequence and a sagittal view sequence according to one embodiment of the present application;
FIG. 7 is a flow chart of classifying a coronal view sequence and a sagittal view sequence in accordance with an embodiment of the present application;
FIG. 8 is a flowchart of image pre-processing in a coronal view sequence and a sagittal view sequence according to an embodiment of the present application;
FIG. 9 is a flow chart of the positioning of a ROI in an image according to an embodiment of the present application;
FIG. 10a is a heatmap of a target area according to an embodiment of the present application;
FIG. 10b shows the predicted offset of the center point of the target area according to an embodiment of the present application;
FIG. 10c shows the predicted length and width of the target area according to an embodiment of the present application;
FIG. 11 is a flow chart of integrating overlapping multiple ROIs based on a non-maximum suppression algorithm according to an embodiment of the present application;
FIG. 12 illustrates a process for integrating overlapping multiple ROIs based on a non-maxima suppression algorithm according to an embodiment of the present application;
FIG. 13 is a flowchart of the integration of overlapping ROI coordinate frames using NMS algorithm in combination with clustering algorithm according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a NMS algorithm combined with a clustering algorithm for integrating overlapping ROI coordinate frames according to an embodiment of the present application;
FIG. 15 is a schematic representation of the transformation of the coordinates of the ROI in the coronal and sagittal planes to the three-dimensional coordinates of the ROI in the original image;
fig. 16 is a schematic structural diagram of an ROI automatic positioning device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The present application provides a region of interest (ROI) automatic positioning method, which can be applied in the application environment as shown in fig. 2. The application environment comprises an operation trolley 1, a mechanical arm 2, a tool target 21, a femur target 22, a tibia target 23, a base target 24, a tip target 241, an osteotomy guiding tool 31, a swing saw 41, an NDI navigation device 51, an auxiliary display 52, a navigation trolley 61, a main display 62, a keyboard 63 and an operation table 81. By utilizing the automatically positioned ROI position, an operator can perform preoperative operations such as key point marking in a corresponding three-dimensional model through navigation software. During surgery, surgical treatment is performed using each device in the application environment map.
The application provides the ROI automatic positioning method without limiting its execution subject. Alternatively, the ROI automatic positioning method may be implemented by an ROI positioning apparatus, and its execution subject may be a computer processor. Before the operation, an operator imports a three-dimensional model described by a medical image sequence and triggers the ROI automatic positioning method of the application; by running the method, the operator can determine each ROI of the imported three-dimensional model in the display interface. The specific process from the operator importing the sequence of medical images to triggering the data processing performed by the ROI automatic localization method may include: reading in user data through the navigation software; opening up a shared memory and writing the data; and activating the ROI automatic positioning apparatus by a message notification.
Fig. 3 is a flowchart of an ROI automatic positioning method according to an embodiment of the present application, and as shown in fig. 3, the ROI automatic positioning method provided by the present application includes the following steps:
step S11, acquiring original image data;
step S12, preprocessing the original image data to obtain a coronal image sequence and a sagittal image sequence;
step S13, positioning the ROI in the images of the coronal map sequence and the sagittal map sequence;
step S14, integrating the positioned ROI in the coronal image sequence to obtain an ROI integrated region in a coronal plane, and integrating the positioned ROI in the sagittal image sequence to obtain an ROI integrated region in the sagittal plane;
and step S15, carrying out coordinate transformation on the coronal ROI integrated region and the sagittal ROI integrated region to obtain the three-dimensional coordinates of the target ROI in the original image data.
In step S11, raw image data is acquired.
The original image data in the embodiments of the present application are, for example, three-dimensional medical images, which can be obtained by a computer device through three-dimensional reconstruction of data of a part to be examined of a patient acquired by a scanning device. The medical image of the present application is described by taking a Computed Tomography (CT) image as an example.
In step S12, the raw image data is preprocessed to obtain a coronal view sequence and a sagittal view sequence.
Specifically, the preprocessing of the raw image data of the present application includes normalizing the raw image data to convert the raw image data (e.g., a three-dimensional model of the lower extremity) into a coronal and sagittal image series under unified image rules. Fig. 4 is a flowchart of preprocessing raw image data according to an embodiment of the present application, and as shown in fig. 4, the preprocessing step S12 for raw image data to obtain the coronal view sequence and the sagittal view sequence includes the following steps S121 and S122 according to the embodiment of the present application.
In step S121, the raw image data is normalized to obtain normalized three-dimensional image data.
Fig. 5 is a flowchart of a normalization process of raw image data according to an embodiment of the present application, and as shown in fig. 5, the step S121 performs the normalization process on the raw image data to obtain normalized three-dimensional image data, and includes the following steps S1211 to S1213.
In step S1211, parameters of the raw image data are acquired.
In an embodiment of the present application, the parameter of the original image data includes at least one of a photographing direction angle, a resolution, an origin coordinate, and a three-dimensional size of the original image.
In step S1212, target image parameters and an image transformation interpolation algorithm are acquired.
In an embodiment of the present application, the target image parameter includes at least one of a target direction angle, a target resolution, a target origin coordinate, and a target three-dimensional size. The image transformation interpolation algorithm may be a nearest neighbor method, a bilinear interpolation method, or a cubic interpolation method, which is not limited in this application.
In step S1213, the raw image data is normalized according to the target image parameter and the image transformation interpolation algorithm, so as to obtain the normalized three-dimensional image data.
In an embodiment of the present application, the normalization processing of the image data may also be min-max normalization, in which the original image data is linearly transformed so that the resulting values are mapped into the range [0, 1]. The conversion formula is:

P_i* = (P_i - min) / (max - min)

where max is the maximum signal value in the original image data, min is the minimum signal value in the original image data, P_i is the signal value of the i-th point in the original image data, and P_i* is the signal value of the i-th point in the normalized image data.
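As an illustration only, the min-max conversion above can be written in a few lines of Python (NumPy); the function name is ours, not the patent's:

```python
import numpy as np

# Min-max normalization sketch: P_i* = (P_i - min) / (max - min),
# mapping the raw signal values linearly into [0, 1].
def min_max_normalize(data: np.ndarray) -> np.ndarray:
    lo, hi = float(data.min()), float(data.max())
    return (data - lo) / (hi - lo)
```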
In step S122, the coronal view sequence and the sagittal view sequence are obtained from the normalized three-dimensional image data.
In an embodiment of the present application, the coronal view sequence is obtained by extracting two-dimensional image slices along the coronal plane direction from the three-dimensional image data, wherein each two-dimensional image slice serves as a coronal view.
In an embodiment of the present application, the sagittal image sequence is obtained by extracting two-dimensional image slices along the sagittal plane direction in the three-dimensional image data, wherein each two-dimensional image slice is used as a sagittal image.
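A minimal sketch of this slice extraction, assuming the normalized volume is a NumPy array with (Z, Y, X) axis order (the actual order depends on the normalization parameters chosen above):

```python
import numpy as np

# Extract the coronal and sagittal view sequences from a 3-D volume.
# Assumed axis order: axis 0 = Z (axial), axis 1 = Y (coronal),
# axis 2 = X (sagittal).
def extract_sequences(volume: np.ndarray):
    coronal = [volume[:, y, :] for y in range(volume.shape[1])]
    sagittal = [volume[:, :, x] for x in range(volume.shape[2])]
    return coronal, sagittal
```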
Through the above preprocessing of the original image data, a coronal view sequence and a sagittal view sequence are obtained. In order to reduce the amount of image data to be processed and increase the speed of image data processing, the coronal and sagittal views containing an ROI are screened from those containing no ROI in the two sequences. Coronal and sagittal views without an ROI do not enter the next data processing stage.
Fig. 6 is a flowchart of determining whether the ROI is included in the coronal view sequence and the sagittal view sequence according to an embodiment of the present application, and as shown in fig. 6, in an embodiment of the present application, to reduce the processing amount of the image and increase the image processing speed, steps S125 and S126 are further included before the step S13 locates the ROI of the image in the coronal view sequence and the image in the sagittal view sequence.
In step S125, it is determined whether each image in the coronal view sequence and the sagittal view sequence contains an ROI by classifying the images in the coronal view sequence and the sagittal view sequence.
Among the images in the coronal view sequence and the sagittal view sequence, some contain the ROI and some do not; whether each image contains the ROI can be determined by binary classification of the images. Specifically, in an embodiment of the present application, the images in the coronal view sequence and the sagittal view sequence are classified using an ROI pre-classification network, and whether each image contains an ROI is determined from the classification label output by the network. Fig. 7 is a flowchart of classifying a coronal view sequence and a sagittal view sequence according to an embodiment of the present application. As shown in fig. 7, the step S125 of classifying the images in the coronal view sequence and the sagittal view sequence and determining whether each image contains an ROI includes the following steps S1251 to S1253.
In step S1251, feature extraction is performed on the images in the coronal view sequence and the sagittal view sequence, respectively.
In an embodiment of the present application, feature extraction is performed on images in the coronal view sequence and the sagittal view sequence respectively by using a backbone network. The backbone network is any one of VGG series and Resnet series.
In step S1252, the extracted features are mapped to a binary space.
In an embodiment of the application, the features extracted by the backbone network are mapped to a binary space using a fully connected network. The classification result is 0 or 1, for example, 0 indicates an image containing only the background, and 1 indicates an image containing the ROI.
In step S1253, it is determined whether the images in the coronal view sequence and the sagittal view sequence contain the ROI based on the output classification result.
In an embodiment of the present application, the determination is made from the classification results 0 and 1: coronal and sagittal views with a classification result of 1 are determined to be images containing the ROI, and those with a classification result of 0 are determined to be images not containing the ROI.
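A minimal sketch of such a pre-classification network, assuming a PyTorch ResNet-18 backbone (the patent only requires a VGG- or ResNet-series backbone; the concrete choice and class name here are illustrative):

```python
import torch.nn as nn
from torchvision.models import resnet18

# Backbone feature extraction followed by a fully connected layer
# mapping the features to a binary space:
# class 0 = background only, class 1 = contains an ROI.
class SlicePreClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()   # keep the 512-d feature vector
        self.backbone = backbone
        self.fc = nn.Linear(512, 2)   # map features to the two classes

    def forward(self, x):
        return self.fc(self.backbone(x))
```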
In step S126, images in the coronal map sequence and the sagittal map sequence that do not contain the ROI are filtered.
The coronal and sagittal images comprising the ROI constitute a coronal image sequence and a sagittal image sequence, respectively, for further object detection of the ROI. By filtering the images without ROI in the coronal image sequence and the sagittal image sequence, the number of the images to be processed can be reduced, and the image processing speed is increased.
Fig. 8 is a flowchart of image preprocessing in a coronal view sequence and a sagittal view sequence according to an embodiment of the present application. As shown in fig. 8, in an embodiment of the present application, in order to meet the input requirements of the aforementioned pre-classification network, the images in the coronal view sequence and the sagittal view sequence need to be preprocessed before being input into the network; that is, steps S123 and S124 are further included before step S125.
In step S123, window width window level processing is performed on the images in the coronal image sequence and the sagittal image sequence.
In an embodiment of the present application, the original image is, for example, a CT image. A window width and a window level may be set for the CT images in the coronal image sequence and the sagittal image sequence, and based on these values the CT images are subjected to window width and window level processing, thereby enhancing the ROI data of the CT images in both sequences.
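A sketch of such window width/window level processing on a single CT slice; the bone-like window values are illustrative assumptions, since the patent does not fix concrete numbers:

```python
import numpy as np

# Clip the CT values to the window [level - width/2, level + width/2]
# and rescale to [0, 1], which enhances the tissue range of interest.
def apply_window(ct_slice: np.ndarray, level: float = 400.0,
                 width: float = 1800.0) -> np.ndarray:
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(ct_slice, lo, hi)
    return (clipped - lo) / (hi - lo)
```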
Further, in order to meet the requirement of the pre-classification network on the size of the input image, in an embodiment of the present application, the method further includes adjusting the size of the images in the coronal view sequence and the sagittal view sequence, for example, adjusting the size of the images to be 1024 × 512 pixels. Adjusting the image size includes two ways of edge cropping and filling.
In step S124, the images in the enhanced coronal image sequence and the enhanced sagittal image sequence are preprocessed.
In an embodiment of the present application, after performing window width window level processing on the images in the coronal image sequence and the sagittal image sequence, normalization processing may be further performed on the enhanced coronal image sequence and the sagittal image sequence according to a requirement of a pre-classification network.
Specifically, in order to further unify the distribution of data and accelerate network convergence, the enhanced coronal image sequence and the enhanced sagittal image sequence are normalized, and the normalization method may be a Z-score normalization method:
P_i* = (P_i - μ) / σ

where P_i is the signal value of the i-th point, μ is the mean of the image data in the enhanced coronal and sagittal image sequences, and σ is the standard deviation of the image data in the enhanced coronal and sagittal image sequences.
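The same formula as a one-function sketch:

```python
import numpy as np

# Z-score normalization sketch: P_i* = (P_i - mu) / sigma.
def z_score_normalize(data: np.ndarray) -> np.ndarray:
    return (data - data.mean()) / data.std()
```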
In step S13, the ROIs of the images in the coronal map sequence and the sagittal map sequence are located.
In an embodiment of the present application, the ROI of each coronal view in the coronal view sequence and the ROI of each sagittal view in the sagittal view sequence are located by a target detection network to obtain the desired targets.
Fig. 9 is a flowchart of positioning an ROI in an image according to an embodiment of the present application, where step S13 positions the ROI in the images in the coronal view sequence and the sagittal view sequence as shown in fig. 9, and includes the following steps S131 and S132.
In step S131, feature extraction is performed on the images in the coronal view sequence and the sagittal view sequence.
In an embodiment of the present application, the images in the coronal view sequence and the sagittal view sequence are respectively subjected to feature extraction by using a feature extraction network. The feature extraction network may be Resnet50, Resnet101, HourglassNet, or MobileNet. The feature extraction network uses the extracted features for target prediction.
In an embodiment of the present application, feature extraction is implemented by performing convolution operations on the images in the coronal image sequence and the sagittal image sequence through a convolutional neural network. The convolutional neural network adopts a 3 × 3 filter, which is scanned rightward and then downward in turn to obtain the value of each element of the output matrix, thereby filtering the image to be processed. The calculation proceeds, for example, as follows (each output element is the sum of the element-wise products of the filter and the image window it currently covers):

Dotted-line frame: 4 = 1×1 + 1×0 + 1×1 + 0×0 + 1×1 + 0×1 + 0×1 + 0×0 + 1×1

Black-line frame: 3 = 1×1 + 0×1 + 0×1 + 1×0 + 1×1 + 1×0 + 0×1 + 1×0 + 1×1
In one embodiment of the present application, a pooling layer is added between adjacent convolutional layers in the convolutional neural network in order to reduce the size of the image more efficiently, speed up image processing, and prevent overfitting. The pooling layer may, for example, perform a maximum pooling operation on a 4 × 4 image using a 2 × 2 filter, taking the maximum value within each 2 × 2 window to finally obtain a 2 × 2 image. Maximum pooling provides a way of downsampling the matrix after convolution for subsequent network layers to continue processing, until the image features used to judge whether the input image contains the ROI are obtained.
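A plain-NumPy sketch of the 3 × 3 convolution scan and the 2 × 2 maximum pooling described above (illustrative only; in practice these are standard layers of the feature extraction network):

```python
import numpy as np

# Slide a k x k filter rightward and downward over the image; each
# output element is the sum of element-wise products in the window.
def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# 2 x 2 max pooling: take the maximum of each 2 x 2 window, halving
# the spatial size (a 4 x 4 input becomes a 2 x 2 output).
def max_pool_2x2(feature: np.ndarray) -> np.ndarray:
    h, w = feature.shape[0] // 2 * 2, feature.shape[1] // 2 * 2
    f = feature[:h, :w]
    return f.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```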
In step S132, the positional information of the ROI contained within the image in the coronal and sagittal image sequences is predicted from the extracted features.
The position information of the ROI includes the coordinates of the center point and the size information of the ROI. In an embodiment of the present application, taking the ROI as a joint region in an image as an example, after the extracted image features are processed by the target detection network, the results shown in figs. 10a to 10c can be predicted. The heatmap shown in fig. 10a gives the probability that each image block contains a joint. Fig. 10b shows the offset between the center point in the heatmap and the actual center point of the joint region; the angle and length of the offset are represented by the vertical and horizontal coordinate values. Fig. 10c shows the predicted length and width of the joint region. The predicted center point coordinates of the joint region are corrected using the predicted center point offset, and the joint region in the image can then be determined from the size information (for example, length and width) predicted by the target detection network together with the corrected center point.
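A sketch of how the three predictions in figs. 10a to 10c could be decoded into an ROI box; the array shapes, the feature-map stride, and the score threshold are assumptions for illustration:

```python
import numpy as np

# heatmap: (H, W) joint probabilities; offsets: (2, H, W) center
# corrections; sizes: (2, H, W) predicted widths and heights.
def decode_roi(heatmap, offsets, sizes, stride=4, threshold=0.5):
    cy, cx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    if heatmap[cy, cx] < threshold:
        return None                  # slice judged to contain no ROI
    dx, dy = offsets[:, cy, cx]      # correct the coarse center point
    w, h = sizes[:, cy, cx]          # predicted ROI length and width
    x, y = (cx + dx) * stride, (cy + dy) * stride
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)
```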
According to the above embodiments of the present application, the ROI of each image in the coronal and sagittal image sequences is obtained through target detection. In order to obtain ROIs covering the three-dimensional model in the coronal and sagittal dimensions, the detected ROIs of the individual images need to be integrated.
In step S14, the ROIs located in the coronal map sequence are integrated to obtain a coronal ROI integration region, and the ROIs located in the sagittal map sequence are integrated to obtain a sagittal ROI integration region.
To obtain the coronal ROI integration region and the sagittal ROI integration region, in an embodiment of the present application, the overlapping ROIs are integrated based on a Non-Maximum Suppression (NMS) algorithm, resulting in a coronal ROI integration region covering the ROIs of the coronal map sequence and a sagittal ROI integration region covering the ROIs of the sagittal map sequence.
Fig. 11 is a flowchart of integrating multiple overlapped ROIs based on a non-maximum suppression algorithm according to an embodiment of the present application, and as shown in fig. 11, step S14 integrates the located ROIs in the coronal view sequence to obtain an ROI integration region in the coronal plane, and integrates the located ROIs in the sagittal view sequence to obtain an ROI integration region in the sagittal plane, including the following steps S1401-S1403.
In step S1401, overlapping portions of the ROI positioned in the sequence of coronaries are integrated based on a non-maximum suppression algorithm to obtain a target coronal plane ROI.
In step S1402, the overlapping portions of the ROIs located in the sagittal image sequence are integrated based on a non-maximum suppression algorithm to obtain a target sagittal plane ROI.
In step S1403, the target coronal plane ROI and the target sagittal plane ROI are clustered to obtain the coronal plane ROI integration region and the sagittal plane ROI integration region.
Fig. 12 shows a process of integrating a plurality of overlapping joint regions based on a non-maximum suppression algorithm according to an embodiment of the present application. As shown in fig. 12, the joint regions with the non-maximum scores 0.78, 0.80, and 0.86 are all suppressed, and the joint region corresponding to the maximum score 0.92 is retained; redundant joint regions are thus removed and the best joint region is kept.
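A standard NMS sketch over predicted ROI boxes (x1, y1, x2, y2) with confidence scores, matching the suppression shown in fig. 12; the helper names are ours:

```python
import numpy as np

def iou(a, b):
    # intersection over union of two boxes (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def nms(boxes, scores, thresh=0.5):
    order = list(np.argsort(scores)[::-1])   # highest score first
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)                    # keep the best box ...
        order = [i for i in order            # ... suppress its overlaps
                 if iou(boxes[best], boxes[i]) < thresh]
    return keep
```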
In an embodiment of the present application, a clustering algorithm may also be used to remove redundant joint regions and retain the best joint region.
In an embodiment of the present application, since there is a small probability of false detection during prediction, some boxes deviating from the cluster center may remain after non-maximum suppression, and the ROI positions in the coronal and sagittal planes need to be further obtained through a clustering algorithm. The method specifically includes the following steps:
clustering the target coronal plane ROI according to a k-means clustering algorithm to obtain an integration region of the coronal plane ROI; and clustering the target sagittal plane ROI according to a k-means clustering algorithm to obtain the sagittal plane ROI integration region.
The clustering the target coronal plane ROI according to the k-means clustering algorithm to obtain the coronal plane ROI integration region includes the following steps: selecting a plurality of target coronal plane ROIs as cluster centers; and performing clustering processing on the target coronal plane ROIs according to the intersection over union of each cluster center with the other target coronal plane ROIs, to obtain the coronal plane ROI integration region. The clustering the target sagittal plane ROI according to the k-means clustering algorithm to obtain the sagittal plane ROI integration region includes the following steps: selecting a plurality of target sagittal plane ROIs as cluster centers; and performing clustering processing on the target sagittal plane ROIs according to the intersection over union of each cluster center with the other target sagittal plane ROIs, to obtain the sagittal plane ROI integration region.
In an embodiment of the present application, as shown in fig. 13, the NMS algorithm is combined with a clustering algorithm to integrate the overlapping ROIs. In this case, the step S14 of integrating the located ROIs in the coronal view sequence to obtain the coronal ROI integration region and integrating the located ROIs in the sagittal view sequence to obtain the sagittal ROI integration region specifically includes the following steps S1411 to S1418.
In step S1411, for each type of ROI, the overlapping ROIs are integrated based on the non-maximum suppression algorithm.
In step S1412, the initial value of the number n of cluster centers is set to 0.
In step S1413, a cluster center is selected corresponding to the current ROI.
In step S1414, the number n of cluster centers is incremented by 1, and an Intersection Over Union (IOU) of the current ROI and the current cluster center is calculated.
In step S1415, it is determined whether the IOU is greater than a given threshold.
In step S1416, if the IOU is greater than a given threshold, the current ROI is classified to the current cluster center.
In step S1417, if the IOU is less than or equal to a given threshold, determining whether the number n of the calculated cluster centers is greater than or equal to k, where k is the total number of the cluster centers set corresponding to each type of ROI; if not, a cluster center is reselected from the unselected cluster centers as the current cluster center, and the step S1414 is executed.
In step S1418, it is determined whether all ROIs have been traversed, and if not, an ROI is selected from the non-clustered ROIs as the current ROI, and the process returns to step S1412; if so, finishing the integration of the ROI to obtain the coronal plane ROI integrated region and the sagittal plane ROI integrated region.
In an embodiment of the application, in the step S1413, a cluster center is selected corresponding to the ROI, and specifically, a cluster center is randomly selected corresponding to the ROI.
In an embodiment of the present application, the reselecting of a cluster center from the unselected cluster centers may be randomly reselecting a cluster center from the unselected cluster centers.
As shown in fig. 14, the above steps S1411 to S1418 are performed separately for each type of joint, thereby integrating the overlapping joint regions of all joints. By integrating the joint regions through the NMS algorithm combined with the clustering algorithm, even if some boxes deviating from the cluster center remain after non-maximum suppression, accurate joint positions in the coronal and sagittal planes can still be obtained through this integration method; a sketch of the clustering step follows.
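A sketch of the IOU-threshold clustering in steps S1411 to S1418, reusing the `iou` helper from the NMS sketch above; center selection is simplified (the patent selects centers randomly), and the merge step is an illustrative enclosing box:

```python
import numpy as np

def cluster_rois(boxes, k=2, iou_thresh=0.5):
    centers = boxes[:k]               # pick k boxes as cluster centers
    clusters = [[] for _ in range(k)]
    for box in boxes:
        for n in range(k):            # compare the ROI with each center
            if iou(centers[n], box) > iou_thresh:
                clusters[n].append(box)
                break                 # classified; move to the next ROI
    # merge each cluster into one integration region (enclosing box)
    regions = []
    for c in clusters:
        if c:
            c = np.asarray(c)
            regions.append(np.concatenate([c[:, :2].min(axis=0),
                                           c[:, 2:].max(axis=0)]))
    return regions
```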
In step S15, coordinate transformation is performed on the coronal ROI integration region and the sagittal ROI integration region to obtain the three-dimensional coordinates of the target ROI in the raw image data.
In an embodiment of the present application, the three-dimensional coordinates of the ROI are calculated from the resulting coordinates of the ROI in the coronal and sagittal planes, as shown in fig. 15. Suppose P_ct(X, Y, Z) represents the three-dimensional coordinates of the ROI in the original CT image, P_i(x_i, y_i) represents the coordinates of the ROI in the i-th coronal plane, and P_j(x_j, y_j) represents the coordinates of the ROI in the j-th sagittal plane. Then P_i(x_i, y_i) transforms into the original CT image as the three-dimensional coordinate CT(x_i, Y, y_i), and P_j(x_j, y_j) transforms as CT(X, x_j, y_j). From the above, the three-dimensional coordinates of the ROI in the original CT image can be derived as:

P_ct(X, Y, Z) = P_ct(x_i, x_j, (y_i + y_j)/2).

Due to errors in the calculation, y_i and y_j are not exactly equal, so the coordinate in the Z direction is taken as the mean of y_i and y_j.
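The transformation itself is a direct lookup; a sketch, with the in-plane coordinates as defined above:

```python
import numpy as np

# Coronal in-plane point (x_i, y_i) supplies X and one Z estimate;
# sagittal in-plane point (x_j, y_j) supplies Y and a second Z estimate;
# the two Z estimates are averaged: P_ct = (x_i, x_j, (y_i + y_j) / 2).
def roi_to_3d(coronal_pt, sagittal_pt):
    xi, yi = coronal_pt
    xj, yj = sagittal_pt
    return np.array([xi, xj, (yi + yj) / 2.0])
```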
According to the ROI automatic positioning method of the present application, no calculation is needed on three-dimensional data, which reduces the computational complexity and the data processing time; the ROI can be positioned automatically in three-dimensional space and the different ROIs on a medical image detected in one pass, which simplifies the operation flow of the navigation software, improves its versatility, improves work efficiency, and reduces the workload of navigation software operators.
It should be understood that, although the steps in the flowcharts in the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least a portion of the steps in each flowchart may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The present application provides an ROI automatic positioning apparatus, as shown in fig. 16, which includes a data acquisition module 101, a data preprocessing module 102, a positioning module 103, an integration module 104, and a coordinate transformation module 105.
A data obtaining module 101, configured to obtain original image data.
And the data preprocessing module 102 is used for preprocessing the original image data to obtain a coronal image sequence and a sagittal image sequence.
A localization module 103 that localizes the ROI of the images in the coronal view sequence and the sagittal view sequence.
And an integration module 104 for integrating the positioned ROIs in the coronal view sequence to obtain an ROI integration region in the coronal plane and integrating the positioned ROIs in the sagittal view sequence to obtain a ROI integration region in the sagittal plane.
And the coordinate transformation module 105 is used for carrying out coordinate transformation on the coronal ROI integrated region and the sagittal ROI integrated region to obtain the three-dimensional coordinates of the target ROI in the original image data.
The ROI automatic positioning apparatus provided by the present application requires no calculation on three-dimensional data, which reduces the computational complexity and the data processing time; it can automatically position the ROI in three-dimensional space and detect the different ROIs on a medical image in one pass, simplifying the operation flow of the navigation software, improving its versatility, improving work efficiency, and reducing the workload of navigation software operators.
For the specific definition of the ROI automatic positioning device of the present application, reference may be made to the above definition of the ROI automatic positioning method, which is not described herein again. The modules in the ROI automatic positioning apparatus can be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
The present application provides a surgical robot system including the ROI automatic positioning apparatus described above. In realizing the automatic positioning of the ROI of the human body, the robot system requires no calculation on three-dimensional data, which reduces the computational complexity and the data processing time; it can automatically position the ROI part in three-dimensional space and detect the different ROI parts on a medical image in one pass, simplifying the operation flow of the navigation software, improving its versatility, improving work efficiency, and reducing the workload of navigation software operators.
The application provides a computer device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes any ROI automatic positioning method when executing the computer program.
A computer-readable storage medium having computer-readable instructions stored thereon is provided. The computer readable instructions, when executed by one or more processors, cause the one or more processors to perform any of the ROI automatic localization methods of the present application.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered within the scope of this description.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (17)

1. An ROI (region of interest) automatic positioning method is characterized by comprising the following steps:
acquiring original image data;
preprocessing the original image data to obtain a coronal image sequence and a sagittal image sequence;
locating a ROI in images of the coronal view sequence and the sagittal view sequence;
integrating the positioned ROI in the coronal map sequence to obtain an ROI integrated region in a coronal plane, and integrating the positioned ROI in the sagittal map sequence to obtain an ROI integrated region in a sagittal plane;
and carrying out coordinate transformation on the coronal ROI integrated region and the sagittal ROI integrated region to obtain the three-dimensional coordinate of the target ROI in the original image data.
2. The ROI automatic positioning method of claim 1, further comprising, prior to said positioning the ROI in the images of the coronal map sequence and the sagittal map sequence:
determining whether each image in the coronal view sequence and the sagittal view sequence contains an ROI by classifying the images in the coronal view sequence and the sagittal view sequence;
filtering images in the coronal map sequence and the sagittal map sequence that do not include the ROI.
3. The method of claim 1, wherein the pre-processing the raw image data to obtain a coronal view sequence and a sagittal view sequence comprises:
standardizing the original image data to obtain standardized three-dimensional image data;
obtaining the coronal view sequence and the sagittal view sequence from the normalized three-dimensional image data.
4. The method for automatically locating an ROI according to claim 3, wherein the step of normalizing the original image data to obtain normalized three-dimensional image data comprises the steps of:
acquiring parameters of the original image data;
acquiring target image parameters and an image transformation interpolation algorithm;
and according to the target image parameters and the image transformation interpolation algorithm, carrying out standardization processing on the original image data to obtain the standardized three-dimensional image data.
5. The ROI automatic positioning method according to claim 4, wherein:
the parameters of the original image data at least include one of a shooting direction angle, resolution, origin coordinates and three-dimensional size of the original image;
the target image parameters at least comprise one of a target shooting direction angle, a target resolution, a target origin coordinate and a target three-dimensional size.
6. The automatic ROI positioning method of claim 2, further comprising, before the classifying of the images in the coronal image sequence and the sagittal image sequence:
performing window-width/window-level processing on the images in the coronal image sequence and the sagittal image sequence.
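Window-width/window-level processing is the usual CT display transform: clip intensities to a window of the given width centered at the level, then rescale. A sketch, with illustrative bone-window values that are not taken from the patent:

```python
import numpy as np

def apply_window(img, level, width):
    """Clip to [level - width/2, level + width/2] and scale to [0, 255]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    img = np.clip(img.astype(np.float32), lo, hi)
    return (img - lo) / (hi - lo) * 255.0

ct_slice = np.random.randint(-1000, 2000, (64, 64))   # synthetic HU values
windowed = apply_window(ct_slice, level=300, width=1500)
```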
7. The automatic ROI positioning method of claim 2, further comprising, before the determining of whether each image in the coronal image sequence and the sagittal image sequence contains an ROI:
normalizing the images in the coronal image sequence and the sagittal image sequence.
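Claim 7 does not fix a normalization scheme; a common choice before feeding slices to a classifier is per-image min-max scaling to [0, 1], sketched below:

```python
import numpy as np

def normalize(img, eps=1e-8):
    """Min-max normalize one slice to [0, 1]; `eps` guards constant images."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + eps)
```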
8. The automatic ROI positioning method of claim 1, wherein the locating of ROIs in the images of the coronal image sequence and the sagittal image sequence comprises:
performing feature extraction on the images in the coronal image sequence and the sagittal image sequence; and
predicting, from the extracted features, position information of the ROIs contained in the images of the coronal image sequence and the sagittal image sequence.
9. The automatic ROI positioning method of claim 8, wherein the position information comprises center point coordinates and size information of the ROI.
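Under claim 9, each detection is a (center x, center y, width, height) tuple. The IoU computations in the later merging steps are easiest on corner boxes, so one small helper (a representational convenience, not part of the claims) converts between the two:

```python
import numpy as np

def center_size_to_corners(boxes):
    """Convert (cx, cy, w, h) detections into (x1, y1, x2, y2) corner boxes."""
    boxes = np.asarray(boxes, dtype=np.float32)
    cx, cy, w, h = boxes.T
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
```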
10. The automatic ROI positioning method of claim 1, wherein the integrating of the ROIs located in the coronal image sequence to obtain the coronal plane ROI integrated region and the integrating of the ROIs located in the sagittal image sequence to obtain the sagittal plane ROI integrated region comprise:
merging the overlapping portions of the ROIs located in the coronal image sequence based on a non-maximum suppression algorithm to obtain target coronal plane ROIs;
merging the overlapping portions of the ROIs located in the sagittal image sequence based on a non-maximum suppression algorithm to obtain target sagittal plane ROIs; and
clustering the target coronal plane ROIs and the target sagittal plane ROIs respectively to obtain the coronal plane ROI integrated region and the sagittal plane ROI integrated region.
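Claim 10 names non-maximum suppression without specifying a variant; the textbook greedy form over corner boxes would look like this:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS over (x1, y1, x2, y2) boxes; returns kept indices."""
    boxes = np.asarray(boxes, dtype=np.float32)
    scores = np.asarray(scores, dtype=np.float32)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the highest-scoring remaining box against the rest.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter + 1e-8)
        order = order[1:][iou <= iou_threshold]
    return keep
```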
11. The automatic ROI positioning method of claim 10, wherein the clustering of the target coronal plane ROIs and the target sagittal plane ROIs respectively to obtain the coronal plane ROI integrated region and the sagittal plane ROI integrated region comprises:
clustering the target coronal plane ROIs according to a k-means clustering algorithm to obtain the coronal plane ROI integrated region; and
clustering the target sagittal plane ROIs according to a k-means clustering algorithm to obtain the sagittal plane ROI integrated region.
12. The automatic ROI positioning method of claim 11, wherein the clustering of the target coronal plane ROIs according to the k-means clustering algorithm to obtain the coronal plane ROI integrated region comprises:
selecting a plurality of the target coronal plane ROIs as cluster centers; and
clustering the target coronal plane ROIs according to the intersection over union between each cluster center and the other target coronal plane ROIs to obtain the coronal plane ROI integrated region;
and wherein the clustering of the target sagittal plane ROIs according to the k-means clustering algorithm to obtain the sagittal plane ROI integrated region comprises:
selecting a plurality of the target sagittal plane ROIs as cluster centers; and
clustering the target sagittal plane ROIs according to the intersection over union between each cluster center and the other target sagittal plane ROIs to obtain the sagittal plane ROI integrated region.
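Claim 12 reads as k-means over boxes in which overlap, not Euclidean distance, drives the assignment. A sketch under that reading, using (1 − IoU) as the distance and the coordinate-wise mean box as the updated center; the claim fixes neither the number of clusters nor the update rule, so both are assumptions here:

```python
import numpy as np

def iou(a, b):
    """IoU between two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-8)

def kmeans_boxes(boxes, k=1, iters=10, seed=0):
    """k-means over ROI boxes: a few boxes seed the cluster centers, each
    box joins the center it overlaps most, and centers are re-estimated
    as the mean box of their members."""
    rng = np.random.default_rng(seed)
    boxes = np.asarray(boxes, dtype=np.float32)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        assign = np.array([np.argmax([iou(b, c) for c in centers]) for b in boxes])
        for j in range(k):
            if np.any(assign == j):
                centers[j] = boxes[assign == j].mean(axis=0)
    return centers  # one integrated ROI region per cluster
```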
13. The automatic ROI positioning method of claim 1, wherein the performing of coordinate transformation on the coronal plane ROI integrated region and the sagittal plane ROI integrated region to obtain the three-dimensional coordinates of the target ROI in the original image data comprises:
performing coordinate transformation on the coronal plane ROI integrated region and the sagittal plane ROI integrated region according to the formula P_ct(X, Y, Z) = P_ct(x_i, x_j, (y_i + y_j)/2) to obtain the three-dimensional coordinates of the target ROI;
wherein P_ct(X, Y, Z) represents the three-dimensional coordinates of a point of the target ROI in the original image data, P_i(x_i, y_i) represents the coordinates of the point in the i-th coronal plane ROI integrated region, and P_j(x_j, y_j) represents the coordinates of the point in the j-th sagittal plane ROI integrated region.
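The fusion formula of claim 13 takes one in-plane coordinate from each view and averages the axis the two views share. A sketch under one plausible axis convention (the patent does not spell out which anatomical axis each coordinate maps to):

```python
def fuse_to_3d(coronal_point, sagittal_point):
    """Fuse a 2D point from each plane into 3D per the claim-13 formula:
    (X, Y, Z) = (x_i, x_j, (y_i + y_j) / 2)."""
    x_i, y_i = coronal_point   # coordinates in the coronal integrated region
    x_j, y_j = sagittal_point  # coordinates in the sagittal integrated region
    return (x_i, x_j, (y_i + y_j) / 2.0)

print(fuse_to_3d((120.0, 88.0), (64.0, 92.0)))  # -> (120.0, 64.0, 90.0)
```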
14. An automatic ROI positioning apparatus, characterized by comprising:
a data acquisition module, configured to acquire original image data;
a data preprocessing module, configured to preprocess the original image data to obtain a coronal image sequence and a sagittal image sequence;
a locating module, configured to locate ROIs in the images of the coronal image sequence and the sagittal image sequence;
an integration module, configured to integrate the ROIs located in the coronal image sequence to obtain a coronal plane ROI integrated region, and to integrate the ROIs located in the sagittal image sequence to obtain a sagittal plane ROI integrated region; and
a coordinate transformation module, configured to perform coordinate transformation on the coronal plane ROI integrated region and the sagittal plane ROI integrated region to obtain three-dimensional coordinates of a target ROI in the original image data.
15. A surgical robot system, characterized by comprising the automatic ROI positioning apparatus of claim 14.
16. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the method of any one of claims 1 to 13.
17. A computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by one or more processors, cause the one or more processors to perform the method of any one of claims 1-13.
CN202111391566.8A 2021-11-19 2021-11-19 ROI automatic positioning method and device, surgical robot system, equipment and medium Pending CN114255329A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111391566.8A CN114255329A (en) 2021-11-19 2021-11-19 ROI automatic positioning method and device, surgical robot system, equipment and medium
PCT/CN2022/132130 WO2023088275A1 (en) 2021-11-19 2022-11-16 Automatic roi positioning method and apparatus, surgical robot system, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111391566.8A CN114255329A (en) 2021-11-19 2021-11-19 ROI automatic positioning method and device, surgical robot system, equipment and medium

Publications (1)

Publication Number Publication Date
CN114255329A true CN114255329A (en) 2022-03-29

Family

ID=80791012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111391566.8A Pending CN114255329A (en) 2021-11-19 2021-11-19 ROI automatic positioning method and device, surgical robot system, equipment and medium

Country Status (2)

Country Link
CN (1) CN114255329A (en)
WO (1) WO2023088275A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5371778A (en) * 1991-11-29 1994-12-06 Picker International, Inc. Concurrent display and adjustment of 3D projection, coronal slice, sagittal slice, and transverse slice images
CN110021053B (en) * 2019-04-16 2020-05-15 河北医科大学第二医院 Image positioning method and device based on coordinate transformation, storage medium and equipment
CN110427970B (en) * 2019-07-05 2023-08-01 平安科技(深圳)有限公司 Image classification method, apparatus, computer device and storage medium
CN111340780B (en) * 2020-02-26 2023-04-07 汕头市超声仪器研究所股份有限公司 Focus detection method based on three-dimensional ultrasonic image
CN114255329A (en) * 2021-11-19 2022-03-29 苏州微创畅行机器人有限公司 ROI automatic positioning method and device, surgical robot system, equipment and medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023088275A1 (en) * 2021-11-19 2023-05-25 苏州微创畅行机器人有限公司 Automatic roi positioning method and apparatus, surgical robot system, device and medium
CN116779093A (en) * 2023-08-22 2023-09-19 青岛美迪康数字工程有限公司 Method and device for generating medical image structured report and computer equipment
CN116779093B (en) * 2023-08-22 2023-11-28 青岛美迪康数字工程有限公司 Method and device for generating medical image structured report and computer equipment

Also Published As

Publication number Publication date
WO2023088275A1 (en) 2023-05-25

Similar Documents

Publication Publication Date Title
CN106682435B (en) System and method for automatically detecting lesion in medical image through multi-model fusion
WO2023088275A1 (en) Automatic roi positioning method and apparatus, surgical robot system, device and medium
CN107480677B (en) Method and device for identifying interest region in three-dimensional CT image
CN110310287B (en) Automatic organ-at-risk delineation method, equipment and storage medium based on neural network
US5452367A (en) Automated method and system for the segmentation of medical images
CN110956635A (en) Lung segment segmentation method, device, equipment and storage medium
CN106887039B (en) Organ and focus three-dimensional imaging method and system based on medical image
CN110223279B (en) Image processing method and device and electronic equipment
CN111311655B (en) Multi-mode image registration method, device, electronic equipment and storage medium
CN113962976B (en) Quality evaluation method for pathological slide digital image
CN111080573A (en) Rib image detection method, computer device and storage medium
CN108510489B (en) Pneumoconiosis detection method and system based on deep learning
CN111292318A (en) Endoscope system, endoscope image recognition method, endoscope image recognition apparatus, and storage medium
CN113962959A (en) Three-dimensional image processing method, three-dimensional image processing device, computer equipment and storage medium
CA3111320A1 (en) Determination of a growth rate of an object in 3d data sets using deep learning
CN110490841B (en) Computer-aided image analysis method, computer device and storage medium
CN115511960A (en) Method and device for positioning central axis of femur, computer equipment and storage medium
CN111798410A (en) Cancer cell pathological grading method, device, equipment and medium based on deep learning model
CN110211200B (en) Dental arch wire generating method and system based on neural network technology
CN117036305A (en) Image processing method, system and storage medium for throat examination
CN112102327B (en) Image processing method, device and computer readable storage medium
CN111243026A (en) Anatomical mark point positioning method and device, computer equipment and storage medium
CN111339993A (en) X-ray image metal detection method and system
CN116525133A (en) Automatic collection method, system, electronic equipment and medium for nucleic acid
CN113962957A (en) Medical image processing method, bone image processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination