CN117547353B - Tumor positioning method and system for dual-source CT imaging


Info

Publication number
CN117547353B
Authority
CN
China
Prior art keywords
image
dimensional
representing
module
tumor
Prior art date
Legal status
Active
Application number
CN202410044891.4A
Other languages
Chinese (zh)
Other versions
CN117547353A (en)
Inventor
牛福永
欧阳春
谭福生
吴亚军
刘富春
惠瑜
Current Assignee
Zhongke Brilliant Robot Chengdu Co ltd
Original Assignee
Zhongke Brilliant Robot Chengdu Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongke Brilliant Robot Chengdu Co ltd
Priority to CN202410044891.4A
Publication of CN117547353A
Application granted
Publication of CN117547353B


Abstract

The invention relates to the technical field of medical image positioning, and in particular to a tumor positioning and robotic puncture method and system for dual-source CT imaging. The method comprises the steps of: acquiring 80 kV and 140 kV X-ray scan images through dual-source CT scanning; fusing the 80 kV and 140 kV X-ray scan images using wavelet transformation to obtain a two-dimensional CT image; constructing a three-dimensional CT image from the two-dimensional CT images using a three-dimensional reconstruction method based on a conduction function; outputting three-dimensional coordinate information of the tumor region in the three-dimensional CT image using a deep-learning-based image segmentation method; performing global path planning from the three-dimensional coordinate information with an A* search algorithm to find the robot's optimal puncture path from its initial position to the lesion; and having the robot execute the optimal puncture path according to motion control instructions. The invention thereby ensures the accuracy of positioning and puncture.

Description

Tumor positioning method and system for dual-source CT imaging
Technical Field
The invention relates to the technical field of medical image positioning, in particular to a tumor positioning method and system for dual-source CT imaging.
Background
In current tumor diagnosis and treatment, tumor localization and biopsy are needed to determine the nature and boundary of a tumor. Conventional CT images, however, display the detail characteristics of a tumor with a certain error and ambiguity, making it difficult to provide clear lesion information; at the same time, manual positioning and puncture carry subjective errors and cannot meet the requirements of accurate diagnosis. An accurate image-guided positioning and robotic puncture system is therefore required to significantly improve the positioning accuracy and biopsy effect for tumor lesions.
Disclosure of Invention
The invention provides a tumor positioning system for dual-source CT imaging, which aims to solve the problems that existing CT images display the detail characteristics of tumors with errors, making it difficult to provide clear lesion information, while manual positioning and puncture cannot meet the requirements of accurate diagnosis.
The invention effectively improves the definition and information content of the image through image fusion, compensating for the shortcomings of any single medical image by combining the complementary information of multiple images and thus enhancing lesion identification. The three-dimensional reconstruction technique, combined with ray casting, clearly presents the tissue structure in a three-dimensional scene and helps to understand the spatial relationships of human anatomy more accurately. On the robotics side, an image-guided robot enables high-precision automated puncture and positioning, effectively overcoming the subjective errors of manual operation. Combining these techniques organically yields a new scheme for accurate tumor positioning and puncture diagnosis and treatment. By building a comprehensive and accurate system from dual-source CT scanning, image fusion processing, three-dimensional reconstruction, target identification and positioning, and path planning and control, the invention provides efficient and accurate image guidance for tumor positioning and puncture, improves diagnostic precision, optimizes the treatment process, and offers a more reliable medical solution for tumor patients.
The invention is realized by the following technical scheme:
a tumor positioning method of dual source CT imaging, comprising the steps of:
s1, acquiring 80kV and 140kV X-ray scanning images through dual-source CT scanning;
s2, fusing the 80kV and 140kV X-ray scanning images by utilizing wavelet transformation to obtain a two-dimensional CT image;
s3, constructing a two-dimensional CT image into a three-dimensional CT image by adopting a three-dimensional reconstruction method based on a conduction function;
s4, outputting three-dimensional coordinate information of a tumor region in the three-dimensional CT image by using an image segmentation method based on deep learning;
and S5, carrying out global path planning according to the three-dimensional coordinate information by adopting an A* search algorithm, and finding the optimal puncture path of the robot from the initial position to the focus.
Further, in the step S1, X-ray scan images of 80 kV and 140 kV are generated through dual-source CT scanning, wherein the 80 kV X-ray scan image offers high contrast for soft tissues and low-density structures, and the 140 kV X-ray scan image shows clear bone tissue information; the X-ray scan images with the best display of soft tissue and bone tissue respectively are used as basic images for the image fusion processing.
Further, in the step S2, wavelet decomposition is performed on the 80kV and 140kV X-ray scanned images, a high-frequency level of the 80kV X-ray scanned image and a low-frequency level of the 140kV X-ray scanned image are extracted, the high-frequency level is used as a high-frequency portion of the fusion image, the low-frequency level is used as a low-frequency portion of the fusion image, and inverse wavelet transformation is performed on the high-frequency portion and the low-frequency portion to obtain the fusion enhanced two-dimensional CT image.
Further, the step S3 includes the following steps:
s301, calculating the path and light intensity attenuation of X-rays passing through a human body slice under each projection angle according to the obtained original projection data of CT scanning;
s302, establishing a radiation conduction equation model:describing the propagation law of X-rays in the human body, wherein +.>Representation of the position->Direction->Intensity of light at->Representation of the position->Linear attenuation coefficient of X-ray at +.>Representing the direction of propagation of the ray, +.>Representing +.>Is a partial derivative of (2);
s303, for each voxel, establishing a plurality of ray conduction equation models according to paths of the voxel in each projection data;
s304, carrying out iterative solution on a radiation conduction equation model established by each voxel by using a finite element method, and calculating an attenuation coefficient conforming to attenuation information of projection data;
s305, accurately reconstructing a three-dimensional CT image of the interior of the human body according to attenuation coefficients of all voxels.
Further, the step S4 includes the following substeps:
s401, collecting CT image data of the existing tumors of different types, and performing focus region segmentation labeling;
s402, establishing an image semantic segmentation network based on a convolutional neural network, wherein the image semantic segmentation network comprises:
the encoder: $F_i = H(F_{i-1}), \; i = 1, 2, \ldots, L$, wherein $F_i$ denotes the $i$-th layer feature map of the encoder, $H$ represents the convolution and pooling operations, and $L$ denotes the total number of encoder layers;
the decoder: $G_j = U(G_{j-1}), \; j = 1, 2, \ldots, M$, wherein $G_j$ denotes the $j$-th layer feature map of the decoder, $U$ represents the upsampling and convolution operations, and $M$ denotes the total number of decoder layers;
S403, performing network training with the loss function

$$\mathcal{L}(P, Y) = \sum_{i=1}^{N} \mathrm{CE}(p_i, y_i)$$

wherein $\mathcal{L}(P, Y)$ represents the loss, i.e. the error between the predicted result and the annotation, $P$ represents the pixel-level segmentation predicted by the segmentation network for the image, $Y$ represents the manually annotated pixel-level segmentation of the image, $\mathrm{CE}$ represents the cross-entropy loss function evaluating the prediction at a single pixel, $y_i$ represents the annotated class of the $i$-th pixel, $p_i$ represents the predicted class of the $i$-th pixel, and $N$ represents the total number of pixels in the image;
s404, inputting the three-dimensional CT image into an image semantic segmentation network for reasoning, and outputting a prediction segmentation result of a tumor region;
s405, calculating three-dimensional coordinate information of the three-dimensional outline, volume and space of the tumor according to the prediction segmentation result of the tumor area.
Further, in S5, based on the three-dimensional coordinate information of the three-dimensional contour, volume and spatial position, the optimal puncture path from the initial point to the target focus is searched by the A* algorithm; the position information of each anatomical obstacle is identified, and forbidden zones and a cost function are set from the three-dimensional coordinate information.
The invention also provides a tumor positioning system based on the above tumor positioning and robotic puncture method, which comprises a dual-source CT scanning module, an image fusion processing module, a three-dimensional reconstruction module, a target identification and positioning module, and a path planning and control module;
the dual-source CT scanning module is provided with two X-ray generating systems for simultaneously generating 80kV and 140kV X-rays for scanning and imaging;
the image fusion processing module performs image fusion processing on the two groups of scanned images through wavelet transformation, and retains high-frequency information of an 80kV CT image and low-frequency information of a 140kV CT image by utilizing an optimization strategy to generate a new two-dimensional CT image with enhanced fusion, and meanwhile retains clear information of soft tissues and skeleton structures;
the three-dimensional reconstruction module calculates a linear attenuation system of each voxel based on a three-dimensional reconstruction algorithm of a conduction function according to a two-dimensional CT image, simulates the transmission process of X rays in a human body through a ray conduction equation model, iteratively solves an equation set, calculates the CT value of each voxel so as to obtain a three-dimensional CT image of the interior of the human body, and restores three-dimensional distribution information of lesions and human anatomy structures;
The target recognition and positioning module builds a convolutional neural network to perform end-to-end image recognition and segmentation based on a deep learning image segmentation method, realizes intelligent recognition of different types of tumors through a training network, and calculates three-dimensional contour, volume and space coordinate information of the tumors according to segmentation results;
the path planning and control module performs global path planning by using an A search algorithm, and plans an optimal puncture path of the robot from the current position to the focus by combining image information, an anatomical structure and obstacle avoidance, performs inverse kinematics calculation, generates an accurate corner of a six-axis joint of the robot, realizes motion control instruction generation, compensates positioning errors by closed loop control, and enables the robot to accurately drive to the focus target position.
Further, the system also comprises a robot system execution module and a process monitoring and feedback module;
the motion control instructions generated by the control module are applied to the robot system execution module, which comprises a motion mechanism and a puncture execution device; the motion mechanism consists of a seven-joint robot with haptic and force feedback.
The puncture execution device is internally provided with a precision linear drive module and a dedicated biopsy needle; the linear module advances the needle through a screw drive and works with a force sensor to monitor the insertion force of the biopsy needle in real time. The biopsy needle has a retractable structure and automatically extends to perform the biopsy after reaching the target;
The process monitoring and feedback module integrates an ultrasonic imaging probe, acquiring ultrasonic tomographic images in real time during puncture and identifying the position of the tumor relative to the needle tip; it also acquires real-time joint positions and velocities from the robot control system to analyze the execution state of the needle, judge whether the puncture follows the planned path, and monitor for abnormal conditions.
The invention has the beneficial effects that:
The tumor positioning method and system for dual-source CT imaging provided by the invention acquire images with different display strengths through dual-source CT scanning, effectively fuse their information through wavelet transformation, accurately express the three-dimensional form of lesions using an advanced conduction-function three-dimensional reconstruction algorithm, and realize intelligent identification and quantitative positioning of lesions based on deep learning. Combined with the A* algorithm for safe and effective needle path planning, closed-loop control that drives the robot accurately and automatically to the target position, and a complete process monitoring and feedback module, the invention ensures the accuracy of positioning and puncture.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it will be apparent that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block flow diagram of a dual source CT imaging tumor positioning system according to an embodiment of the present invention;
FIG. 2 is an overall block diagram of a dual source CT imaging tumor positioning system according to an embodiment of the present invention;
FIG. 3 is a block diagram of a dual-source CT scanning module of a tumor positioning system for dual-source CT imaging according to an embodiment of the present invention;
FIG. 4 is a block diagram I of an image fusion processing module of a tumor positioning system for dual-source CT imaging according to an embodiment of the present invention;
FIG. 5 is a block diagram II of an image fusion processing module of a tumor positioning system for dual-source CT imaging according to an embodiment of the present invention;
FIG. 6 is a block diagram I of a three-dimensional reconstruction module of a tumor positioning system for dual-source CT imaging according to an embodiment of the present invention;
FIG. 7 is a block diagram II of a three-dimensional reconstruction module of a tumor positioning system for dual-source CT imaging according to an embodiment of the present invention;
FIG. 8 is a block diagram of a target recognition and localization module of a dual source CT imaging tumor localization system according to an embodiment of the present invention;
FIG. 9 is a block diagram of a path planning and control module of a tumor positioning system for dual source CT imaging according to an embodiment of the present invention;
FIG. 10 is a block diagram of a robot system execution module of a tumor positioning system for dual source CT imaging according to an embodiment of the present invention;
FIG. 11 is a block diagram of a process monitoring and feedback module of a tumor positioning system for dual source CT imaging according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a terminal device of a tumor positioning system for dual-source CT imaging according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of a readable storage medium of a tumor positioning system for dual-source CT imaging according to an embodiment of the present invention;
in the figure, 200-terminal equipment, 210-memory, 211-RAM, 212-cache memory, 213-ROM, 214-program/utility, 215-program modules, 220-processor, 230-bus, 240-external device, 250-I/O interface, 260-network adapter, 300-program product.
Detailed Description
For the purpose of making apparent the objects, technical solutions and advantages of the present invention, the present invention will be further described in detail with reference to the following examples and the accompanying drawings, wherein the exemplary embodiments of the present invention and the descriptions thereof are for illustrating the present invention only and are not to be construed as limiting the present invention.
Example 1
The embodiment provides a specific implementation mode of a tumor positioning system for dual-source CT imaging.
Referring to fig. 1-2, a tumor localization method for dual source CT imaging includes the steps of:
1. The X-ray scan images of 80 kV and 140 kV are obtained through dual-source CT scanning. In step 1, this embodiment uses a Siemens SOMATOM Force dual-source CT scanning system. The 80 kV X-ray scan image has better contrast for soft tissues and low-density structures, while the weaker attenuation of the harder 140 kV radiation displays bone tissue information more clearly than 80 kV, so dual-source CT scanning simultaneously obtains two sets of CT data with the best display of bone tissue and soft tissue respectively, laying a foundation for the subsequent image fusion processing and the accurate depiction of tissue structures. The X-ray scan images with the best display of soft tissue and bone tissue are used as the basic images for the image fusion processing.
2. After obtaining two groups of images of the dual-source CT, image fusion is needed to improve the identification capability of lesions, and step 2 mainly analyzes the information advantages of the two groups of CT images, and an image processing algorithm is adopted to perform effective information fusion so as to generate a new image with more abundant comprehensive information.
In this embodiment, wavelet transformation is adopted for the image fusion processing: wavelet decomposition is performed on the two groups of original 80 kV and 140 kV X-ray scan images to extract their low-frequency and high-frequency sub-bands respectively; following the optimization strategy, the high-frequency levels of the 80 kV CT image are retained as the high-frequency part of the fusion image and the low-frequency levels of the 140 kV CT image are adopted as its low-frequency part; finally, inverse wavelet transformation is performed on the high-frequency and low-frequency parts to obtain the fusion-enhanced two-dimensional CT image.
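A minimal sketch of this fusion strategy, using the PyWavelets library with a single-level decomposition and an illustrative db2 wavelet (the patent does not specify the wavelet basis or decomposition depth), is shown below; img_80 and img_140 stand in for two registered scan slices.

```python
import numpy as np
import pywt

def fuse_dual_energy(img_80: np.ndarray, img_140: np.ndarray,
                     wavelet: str = "db2") -> np.ndarray:
    # Single-level 2-D wavelet decomposition of both images: cA is the
    # low-frequency approximation, (cH, cV, cD) the high-frequency
    # horizontal/vertical/diagonal detail bands.
    cA_80, details_80 = pywt.dwt2(img_80, wavelet)
    cA_140, details_140 = pywt.dwt2(img_140, wavelet)

    # Optimization strategy from the text: keep the high-frequency details
    # of the 80 kV image (soft tissue) and the low-frequency approximation
    # of the 140 kV image (bone structure), then invert the transform.
    return pywt.idwt2((cA_140, details_80), wavelet)

# Example usage with synthetic data standing in for real CT slices.
img_80 = np.random.rand(256, 256)
img_140 = np.random.rand(256, 256)
fused = fuse_dual_energy(img_80, img_140)
print(fused.shape)
```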
Compared with a single CT image, the fused two-dimensional CT image simultaneously retains clear high-frequency information on soft tissues and clear low-frequency information on bone structures, can enhance the identification capability of lesions in the soft tissues, also more accurately describes the position relation of the lesions relative to the bone structures, and establishes a more accurate image foundation for subsequent lesion positioning and path planning.
2.1. On the basis of preserving the original wavelet-transform-based image fusion algorithm, this embodiment adds an image fusion method based on a deep convolutional neural network. The method takes the original 80 kV and 140 kV CT images as input, extracts feature maps through an encoder, performs upsampling reconstruction through a decoder, and then outputs a new image fusing the information of the two modalities. The deep-learning approach can deeply integrate the content of different CT images, automatically learning the inherent correspondence between image modalities through training to obtain a higher-quality fusion image, providing richer, optimized image input for the subsequent focus recognition and positioning module and improving the performance of the overall scheme. Specifically:
Let the image fusion network structure be $F$, and let the input 80 kV and 140 kV CT images be $I_{80}$ and $I_{140}$ respectively. The fusion process can be expressed as $I_f = F(I_{80}, I_{140})$, wherein $I_f$ is the output fusion image. The network structure $F$ consists of an encoder $E$ and a decoder $D$. The encoder performs feature extraction, i.e. $z = E(I_{80}, I_{140})$, wherein $E$ is a multi-layer convolutional encoder used to extract feature maps from the images; each convolution layer increases the feature dimension, and the encoder finally outputs the feature $z$, representing the high-level semantic features of the encoder output;
$I_f = D(z)$, wherein $D$ is the decoder, whose multi-layer upsampling and convolution layers gradually restore the image resolution and finally output the fusion image;
the loss function measures image quality:

$$\mathcal{L} = \alpha L_{\mathrm{SSIM}} + \beta L_{\mathrm{ent}}$$

wherein $L_{\mathrm{SSIM}}$ represents an image quality loss based on structural similarity, $L_{\mathrm{ent}}$ represents an image quality loss based on information entropy, and $\alpha$ and $\beta$ represent hyperparameters of the loss function. The network is trained end to end by minimizing this loss function containing structural similarity and information entropy; the loss constrains the network to output the characteristics of a high-quality image, and the trained network learns the internal relations between the contents of different CT images, performing effective fusion at the feature level and outputting CT images with richer information and higher quality.
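The following is a minimal PyTorch sketch of such an encoder-decoder fusion network. The layer sizes are illustrative assumptions, and the loss terms are simple stand-ins marking where the structural-similarity and entropy terms would plug in; the patent specifies only their roles and the weights alpha and beta.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder E: the two CT images enter as a 2-channel input;
        # each convolution raises the feature dimension.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Decoder D: restores a single fused image from the features z.
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, i80: torch.Tensor, i140: torch.Tensor) -> torch.Tensor:
        z = self.encoder(torch.cat([i80, i140], dim=1))  # z = E(I80, I140)
        return self.decoder(z)                           # I_f = D(z)

i80, i140 = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
net = FusionNet()
fused = net(i80, i140)

# Stand-in loss terms (assumptions): an L1 term toward each input crudely
# replaces the SSIM term, and negative variance marks where the
# information-entropy term would go.
alpha, beta = 1.0, 0.1
l_struct = F.l1_loss(fused, i80) + F.l1_loss(fused, i140)
l_ent = -fused.var()
loss = alpha * l_struct + beta * l_ent
loss.backward()
```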
3. After obtaining the fusion-enhanced two-dimensional CT image, step 3 uses a three-dimensional reconstruction algorithm to construct a digitized representation of the lesions and the three-dimensional human anatomy from the two-dimensional image sequence, providing three-dimensional spatial information for subsequent lesion positioning and robot path planning. A three-dimensional reconstruction method based on a conduction function is adopted to construct the three-dimensional CT image from the two-dimensional CT images: first, the linear attenuation coefficient of each pixel is calculated from the 80 kV and 140 kV CT images; then a ray conduction equation model is established from the propagation characteristics of X-rays in the human body, simulating the intensity attenuation of the X-rays as they pass through the various tissues; finally, the system of ray conduction equations is solved iteratively by a finite element method to calculate the CT value of each voxel, thereby recovering the three-dimensional distribution information of the lesions and the human anatomical structures. Specifically:
301. According to the obtained original projection data of CT scanning, calculating the path of X-rays passing through a human body slice and light intensity attenuation under each projection angle;
302. establishing a radiation conduction equation model

$$\frac{\partial I(\mathbf{x}, \boldsymbol{\omega})}{\partial s} = -\mu(\mathbf{x})\, I(\mathbf{x}, \boldsymbol{\omega})$$

describing the propagation law of X-rays in the human body, wherein $I(\mathbf{x}, \boldsymbol{\omega})$ represents the light intensity at position $\mathbf{x}$ in direction $\boldsymbol{\omega}$, $\mu(\mathbf{x})$ represents the linear attenuation coefficient of the X-ray at position $\mathbf{x}$, $\boldsymbol{\omega}$ represents the propagation direction of the ray, and $\partial/\partial s$ represents the partial derivative along the propagation direction;
303. for each voxel, establishing a plurality of ray conduction equation models according to paths of the voxels in the respective projection data;
304. carrying out iterative solution on a radiation conduction equation model established by each voxel by using a finite element method, and calculating an attenuation coefficient conforming to attenuation information of projection data;
305. accurately reconstructing a three-dimensional CT image of the interior of the human body according to attenuation coefficients of all voxels;
compared with the traditional Feldkamp algorithm, the three-dimensional reconstruction method provided by the embodiment can simulate the physical propagation process of X-rays in a human body more accurately, reconstruct more accurate three-dimensional images, restore the three-dimensional morphological characteristics of fine lesions, and provide reliable three-dimensional information support for subsequent target positioning and robot track planning based on the digitized three-dimensional human body model.
3.1. On the basis of maintaining the original iterative reconstruction algorithm based on the ray conduction equation, this embodiment adds a non-iterative reconstruction method based on compressed sensing theory. Exploiting the local sparsity of CT images, the method represents the image with an overcomplete dictionary containing various basis functions to obtain its sparse representation over the dictionary, then acquires compressed projection data of the image through a designed nonlinear sampling scheme, and constructs a sparsity-constrained optimization problem, thereby recovering the complete image from less projection data. To improve reconstruction quality, regularization terms based on prior knowledge of the image can also be introduced. This compressed-sensing reconstruction strategy can significantly reduce the X-ray radiation dose and the computational complexity while, combined with prior knowledge, obtaining higher-quality reconstruction results. The method therefore not only lightens the radiation burden on the patient but also provides better image support for focus extraction and positioning, improving the technical advantages of the overall scheme. Specifically:
The image is expressed as $x = \Psi s$, wherein $x$ represents a two-dimensional CT sectional image, $\Psi$ represents an overcomplete dictionary containing various basis functions, and $s$ represents the sparse representation of the image over $\Psi$. The sampling process is $y = \Phi x$, wherein $y$ represents the compressed projection data and $\Phi$ represents the nonlinear sampling matrix. The reconstruction process solves the following optimization problem:

$$\min_{s} \|s\|_0 \quad \text{s.t.} \quad y = \Phi \Psi s$$

wherein $\|s\|_0$ measures sparsity, reflecting that the reconstruction seeks the sparsest solution $s$ consistent with the measurements. Introducing a regularization term,

$$\min_{s} \|y - \Phi \Psi s\|_2^2 + \lambda R(s)$$

applies constraints to obtain a more accurate solution, wherein $R$ represents a regularization term encoding prior knowledge and $\lambda$ represents the regularization parameter.
The above formulas express the basic principle of the compressed-sensing reconstruction method: a dictionary represents the image to obtain its sparsity, nonlinear sampling acquires the compressed projection data, and finally the image is reconstructed by solving the sparse optimization problem, to which a regularization term can be added to introduce prior knowledge.
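A minimal sketch of such a sparse recovery, using iterative soft thresholding (ISTA) with the usual l1 surrogate for the l0 sparsity term, is given below; the random dictionary, sampling matrix, problem sizes and the choice of solver are illustrative assumptions rather than the patent's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                     # signal size, measurements, sparsity
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]  # orthonormal "dictionary"
Phi = rng.standard_normal((m, n)) / np.sqrt(m)      # sampling matrix
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ Psi @ s_true                   # compressed projection data

A = Phi @ Psi
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
lam = 0.05
s = np.zeros(n)
for _ in range(500):                     # ISTA: gradient step + soft threshold
    s = s - (A.T @ (A @ s - y)) / L
    s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)

print("relative recovery error:",
      np.linalg.norm(s - s_true) / np.linalg.norm(s_true))
```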
4. After the three-dimensional reconstructed image is obtained, the lesion targets in the image are identified and their accurate three-dimensional coordinate information is given. CT images of different types of tumors are collected and the focus regions annotated; a convolutional neural network model is trained to realize an end-to-end recognition and segmentation network using an encoder-decoder structure such as U-Net; the three-dimensional reconstructed image is input into the network, which outputs accurate segmentation results for the different tumor types through forward propagation; finally, key information such as the three-dimensional contour, position and extent of the focus is determined from the segmentation results. The steps are as follows:
401. CT image data of different types of tumors are collected, and focus area segmentation and labeling are carried out;
402. establishing an image semantic segmentation network based on a convolutional neural network, wherein the image semantic segmentation network comprises:
the encoder: $F_i = H(F_{i-1}), \; i = 1, 2, \ldots, L$, wherein $F_i$ denotes the $i$-th layer feature map of the encoder, $H$ represents the convolution and pooling operations, and $L$ denotes the total number of encoder layers;
the decoder: $G_j = U(G_{j-1}), \; j = 1, 2, \ldots, M$, wherein $G_j$ denotes the $j$-th layer feature map of the decoder, $U$ represents the upsampling and convolution operations, and $M$ denotes the total number of decoder layers;
403. performing network training with the loss function

$$\mathcal{L}(P, Y) = \sum_{i=1}^{N} \mathrm{CE}(p_i, y_i)$$

wherein $\mathcal{L}(P, Y)$ represents the loss, i.e. the error between the predicted result and the annotation, $P$ represents the pixel-level segmentation predicted by the segmentation network for the image, $Y$ represents the manually annotated pixel-level segmentation of the image, $\mathrm{CE}$ represents the cross-entropy loss function evaluating the prediction at a single pixel, $y_i$ represents the annotated class of the $i$-th pixel, $p_i$ represents the predicted class of the $i$-th pixel, and $N$ represents the total number of pixels in the image;
the above-described loss function expresses an accumulated cross entropy loss calculation for all pixel prediction performers in the entire image. The cross entropy loss can effectively evaluate the classification difference between the prediction result of the image segmentation network and the group trunk.
By minimizing the loss function, a more accurate image segmentation network can be trained that predicts for each pixel.
Prediction refers to the result the network generates when predicting on an image: for example, when the network performs pixel-level segmentation of an image, it outputs a predicted class for each pixel, and this output is called the predicted result.
Ground truth is the annotated true value, or reference standard, of the data. In the image segmentation task, for example, the result of manual pixel-level annotation is the ground truth; it reflects the true category of each pixel in the image and can be used to evaluate the correctness of the network's predictions.
In the image segmentation field, therefore, predicted result and ground truth respectively denote:
Predicted result: the segmentation obtained from the network's prediction on the image.
Ground truth: the manually annotated correct answer for the image segmentation.
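A minimal sketch of this pixel-wise cross-entropy computation, comparing softmax outputs against ground-truth labels on a toy image (shapes and values are illustrative assumptions), reads as follows:

```python
import numpy as np

def segmentation_ce_loss(pred_probs: np.ndarray, labels: np.ndarray) -> float:
    """pred_probs: (N, C) softmax outputs; labels: (N,) integer classes."""
    n = labels.shape[0]
    eps = 1e-12                          # numerical floor to avoid log(0)
    picked = pred_probs[np.arange(n), labels]   # probability of the true class
    return float(-np.sum(np.log(picked + eps))) # sum of CE over all N pixels

# Toy 2x2 image flattened to 4 pixels, two classes (0 = background, 1 = tumor).
pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]])
gt = np.array([0, 1, 0, 1])              # ground-truth labels
print(segmentation_ce_loss(pred, gt))
```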
404. Inputting the three-dimensional CT image into an image semantic segmentation network for reasoning, and outputting a prediction segmentation result of a tumor region;
405. and calculating three-dimensional coordinate information of the three-dimensional contour, volume and space of the tumor according to the prediction segmentation result of the tumor region.
Compared with the traditional method, the method can automatically adapt to different conditions based on deep learning identification, realize real-time accurate positioning of the focus, and finally output three-dimensional coordinate information of a tumor area, so as to provide important target guiding data for accurate puncture of a robot.
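A minimal sketch of step 405, deriving the volume, centroid (the three-dimensional target coordinate) and bounding extent from the binary tumor mask produced by the segmentation network, under an assumed voxel spacing:

```python
import numpy as np

def tumor_geometry(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """mask: 3-D boolean array; spacing: voxel size in mm along each axis."""
    idx = np.argwhere(mask)                       # voxel coordinates of tumor
    voxel_vol = float(np.prod(spacing))
    volume_mm3 = idx.shape[0] * voxel_vol         # volume = count * voxel size
    centroid = idx.mean(axis=0) * np.asarray(spacing)  # 3-D target coordinate
    lo, hi = idx.min(axis=0), idx.max(axis=0)     # bounding box of the contour
    return volume_mm3, centroid, (lo, hi)

mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:30, 22:31, 25:33] = True                  # synthetic "tumor" region
vol, c, box = tumor_geometry(mask, spacing=(2.0, 0.8, 0.8))
print(vol, c, box)
```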
4.1, this embodiment adds a two-stage recognition network based on an attention mechanism while preserving a target recognition algorithm based on template matching and feature extraction, so as to improve the detection capability of small lesions.
First, a lightweight fully convolutional network is used to generate lesion proposal regions and a saliency heatmap. In a target detection task, the heatmap encodes the probability that different areas of the image contain targets, indicating possible target positions. In the two-stage target detection network of this embodiment, the saliency heatmap serves the following functions:
(1) In the first stage, a fully convolutional network is used to generate region proposals that may contain lesions and to produce the corresponding saliency heatmap, identifying the probability that each pixel in these proposal regions contains a target.
(2) The heatmap highlights the areas more likely to contain targets, guiding the second stage's proposal-region feature extraction so that the attention sub-network can focus on the likely target areas.
(3) In the target detection task, the saliency heatmap visualizes the network's confidence that targets are contained at different positions, and is an important intermediate result of target recognition prediction.
(4) By comparing with ground truth heatmap, the quality of the network prediction result can be intuitively evaluated, and the network training is guided.
(5) The saliency heatmap can also provide density information for lesion recognition and positioning, making the output clearer and more intuitive.
The saliency heatmap therefore guides network training and parameter optimization by providing a visual result of target position prediction, improving the model's detection and positioning of small lesions. A small-region recognition sub-network employing an attention mechanism is then constructed: it focuses on and extracts multi-scale features of the proposal regions, enhances the discriminative representation of the target region by learning attention weights over features of different semantic levels, and finally integrates global information to classify and detect each local proposal region.
The two-stage design can better learn and utilize global and local information to enhance the recognition capability of small lesions, so that the two-stage attention network is added into the target recognition module, the adaptability of the model to lesions of different scales can be improved, the robustness of the system is enhanced, and key support is provided for the subsequent lesion positioning.
Mathematical formulation of the two-stage attention recognition network:
The first stage generates proposal regions: $R = f_{\mathrm{rpn}}(I; \theta_{\mathrm{rpn}})$, wherein $I$ is the input image, $R$ is the set of proposal regions, $f_{\mathrm{rpn}}$ is the region proposal network used to extract proposal regions from the image, and $\theta_{\mathrm{rpn}}$ are its network parameters; this expression states that the RPN generates proposal regions from the input image.
The second stage performs attention feature extraction: $A = f_{\mathrm{att}}(F_R; \theta_{\mathrm{att}})$, wherein $A$ represents the attention features, $F_R$ represents the feature representation of the proposal regions, $f_{\mathrm{att}}$ represents the attention sub-network, and $\theta_{\mathrm{att}}$ are the parameters of the attention network; this expression states that the attention sub-network analyzes the features of the proposal regions and outputs an attention-weighted feature representation.
Finally, recognition and classification: $\hat{y} = f_{\mathrm{cls}}(A; \theta_{\mathrm{cls}})$, wherein $\hat{y}$ represents the prediction result, $f_{\mathrm{cls}}$ represents the classification branch, and $\theta_{\mathrm{cls}}$ are the parameters of the classification branch; this expression states that the final classification and recognition use the attention features.
The above formulas express the workflow of the two-stage network: the first stage uses the RPN to generate proposal boxes, the second stage uses the attention sub-network to extract proposal-box features, and finally the classification branch performs recognition.
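A minimal PyTorch sketch of the second stage, attention weighting over pooled proposal features followed by the classification branch, is shown below; the first-stage RPN is replaced by random proposal features, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionClassifier(nn.Module):
    def __init__(self, feat_dim: int = 64, num_classes: int = 2):
        super().__init__()
        # Attention sub-network: learns per-feature weights in [0, 1].
        self.att = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        # Classification branch over the attention-weighted features.
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, f_r: torch.Tensor) -> torch.Tensor:
        a = self.att(f_r) * f_r        # A = f_att(F_R): weighted features
        return self.cls(a)             # y_hat = f_cls(A)

# F_R would come from pooling each RPN proposal's feature map; here a
# random tensor of proposal features stands in for that first stage.
f_r = torch.rand(5, 64)                # 5 proposal regions, 64-d features
logits = AttentionClassifier()(f_r)
print(logits.argmax(dim=1))            # per-proposal class prediction
```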
5. According to the tumor's three-dimensional coordinates obtained from recognition, an optimal puncture path from the robot's current position to the focus is planned. In this embodiment, anatomical structures such as fragile blood vessels and ribs must be avoided, and the advantages and disadvantages of needle insertion from different directions are weighed, so safe and accurate path planning is required.
Step 5 performs global path planning using an A* search algorithm: the position information of each anatomical obstacle is identified from the three-dimensional reconstruction result, forbidden zones and a cost function are set, and constraints such as the patient's position are taken into account; the A* algorithm can then search out the optimal feasible path from the initial point to the target focus.
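A minimal sketch of such an A* search on a 3-D occupancy grid, with obstacle voxels as forbidden zones, 6-connected unit-cost moves and a Euclidean heuristic (all illustrative choices that the patent leaves open), follows:

```python
import heapq
import itertools
import numpy as np

def a_star_3d(grid: np.ndarray, start: tuple, goal: tuple):
    """grid: 3-D bool array, True = forbidden zone; returns a voxel path."""
    moves = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    h = lambda p: float(np.linalg.norm(np.subtract(p, goal)))  # heuristic
    tie = itertools.count()                    # tie-breaker for the heap
    open_set = [(h(start), 0.0, next(tie), start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                   # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                        # reconstruct the path
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dx, dy, dz in moves:
            nxt = (cur[0] + dx, cur[1] + dy, cur[2] + dz)
            if not all(0 <= nxt[k] < grid.shape[k] for k in range(3)):
                continue                       # outside the volume
            if grid[nxt]:
                continue                       # forbidden zone (obstacle)
            ng = g + 1.0                       # unit step cost
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                heapq.heappush(open_set,
                               (ng + h(nxt), ng, next(tie), nxt, cur))
    return None                                # no feasible path

grid = np.zeros((32, 32, 32), dtype=bool)
grid[8:24, 16, :] = True                       # a rib-like slab as forbidden zone
path = a_star_3d(grid, (0, 0, 0), (31, 31, 31))
print(len(path), path[:3])
```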
6. The robot executes the optimal puncture path according to the motion control instructions: combined with the six-axis robot model, inverse kinematics calculation generates the accurate rotation angle required by each joint, forming complete motion control instructions so that the robot accurately follows the planned path; during motion, joint angle data are fed back in real time for closed-loop control, compensating positioning errors.
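A minimal sketch of this closed-loop compensation, applying a proportional correction between the commanded and fed-back joint angles on each control cycle (the gain and the simulated joint response are illustrative assumptions):

```python
import numpy as np

target = np.array([0.5, -0.3, 1.1, 0.0, 0.7, -0.2])  # planned joint angles (rad)
measured = np.zeros(6)                                # fed-back joint angles
kp = 0.4                                              # proportional gain

for cycle in range(50):
    error = target - measured        # positioning error from joint feedback
    command = kp * error             # proportional compensation
    measured = measured + command    # stand-in for the robot's joint response

print("residual error (rad):", np.abs(target - measured).max())
```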
The scheme realizes accurate guidance of the robot according to the image information by applying planning and control technology, and automatically drives to the tumor target position.
Example 2
Referring to fig. 3-11, the present embodiment proposes a tumor positioning system for dual source CT imaging based on embodiment 1.
A tumor positioning system for double-source CT imaging comprises a double-source CT scanning module, an image fusion processing module, a three-dimensional reconstruction module, a target identification and positioning module, a path planning and control module, a robot system execution module and a process monitoring and feedback module;
The dual-source CT scanning module is provided with two X-ray generating systems that simultaneously generate 80 kV and 140 kV X-rays for scanning and imaging. Its operating procedure is as follows: after the patient is positioned on the CT gantry, the technician selects the preset dual-source CT scanning mode, sets the corresponding scanning parameters including tube current, tube voltage and pitch, and starts the scan; the detectors receive the signals after the X-rays have passed through the patient's cross-section and convert them into digital projection data. Finally, two groups of 80 kV and 140 kV projection image sequences are obtained; since the two groups display lesion soft tissue and bone structure excellently, they provide key information for the subsequent identification and positioning of tumors.
The image fusion processing module performs image fusion processing on the two groups of scanned images through wavelet transformation, and retains high-frequency information of an 80kV CT image and low-frequency information of a 140kV CT image by utilizing an optimization strategy to generate a new two-dimensional CT image with enhanced fusion, and meanwhile retains clear information of soft tissues and skeleton structures;
the main function of the image fusion processing module is to analyze the information advantages of two groups of CT images, and an advanced image processing algorithm is adopted to perform effective information fusion so as to generate a new image with more abundant comprehensive information quantity;
On the basis of preserving the original wavelet-transform-based image fusion algorithm, this embodiment adds to the image fusion processing module an image fusion method based on a deep convolutional neural network: it takes the original 80 kV and 140 kV CT images as input, extracts feature maps through an encoder, performs upsampling reconstruction through a decoder, and then outputs a new image fusing the information of the two modalities. The deep-learning approach can deeply integrate the content of different CT images, automatically learning the inherent correspondence between image modalities through training to obtain a higher-quality fusion image, providing richer, optimized image input for the subsequent focus recognition and positioning module and improving the performance of the overall scheme. Specifically:
Let the image fusion network structure be $F$, and let the input 80 kV and 140 kV CT images be $I_{80}$ and $I_{140}$ respectively. The fusion process can be expressed as $I_f = F(I_{80}, I_{140})$, wherein $I_f$ is the output fusion image. The network structure $F$ consists of an encoder $E$ and a decoder $D$. The encoder performs feature extraction, i.e. $z = E(I_{80}, I_{140})$, wherein $E$ is a multi-layer convolutional encoder used to extract feature maps from the images; each convolution layer increases the feature dimension, and the encoder finally outputs the feature $z$, representing the high-level semantic features of the encoder output;
$I_f = D(z)$, wherein $D$ is the decoder, whose multi-layer upsampling and convolution layers gradually restore the image resolution and finally output the fusion image;
the loss function measures image quality:

$$\mathcal{L} = \alpha L_{\mathrm{SSIM}} + \beta L_{\mathrm{ent}}$$

wherein $L_{\mathrm{SSIM}}$ represents an image quality loss based on structural similarity, $L_{\mathrm{ent}}$ represents an image quality loss based on information entropy, and $\alpha$ and $\beta$ represent hyperparameters of the loss function. The network is trained end to end by minimizing this loss function containing structural similarity and information entropy; the loss constrains the network to output the characteristics of a high-quality image, and the trained network learns the internal relations between the contents of different CT images, performing effective fusion at the feature level and outputting CT images with richer information and higher quality.
The three-dimensional reconstruction module calculates the linear attenuation coefficient of each voxel from the two-dimensional CT image using a three-dimensional reconstruction algorithm based on a conduction function, simulates the transmission of X-rays through the human body with a ray conduction equation model, iteratively solves the equation set, and calculates the CT value of each voxel so as to obtain a three-dimensional CT image of the interior of the human body, restoring the three-dimensional distribution information of lesions and human anatomical structures;
the three-dimensional reconstruction module utilizes an advanced three-dimensional reconstruction algorithm to construct digital representation of lesions and human three-dimensional anatomical structures from the two-dimensional image sequence, and provides three-dimensional space information for subsequent lesion positioning identification and robot path planning;
On the basis of maintaining the original iterative reconstruction algorithm based on the ray conduction equation, this embodiment adds to the three-dimensional reconstruction module a non-iterative reconstruction method based on compressed sensing theory. Exploiting the local sparsity of CT images, the method represents the image with an overcomplete dictionary containing various basis functions to obtain its sparse representation over the dictionary, then acquires compressed projection data of the image through a designed nonlinear sampling scheme, and constructs a sparsity-constrained optimization problem, thereby recovering the complete image from less projection data. To improve reconstruction quality, regularization terms based on prior knowledge of the image can also be introduced. This compressed-sensing reconstruction strategy can significantly reduce the X-ray radiation dose and the computational complexity while, combined with prior knowledge, obtaining higher-quality reconstruction results. The method therefore not only lightens the radiation burden on the patient but also provides better image support for focus extraction and positioning, improving the technical advantages of the overall scheme. Specifically:
The image is expressed as $x = \Psi s$, wherein $x$ represents a two-dimensional CT sectional image, $\Psi$ represents an overcomplete dictionary containing various basis functions, and $s$ represents the sparse representation of the image over $\Psi$. The sampling process is $y = \Phi x$, wherein $y$ represents the compressed projection data and $\Phi$ represents the nonlinear sampling matrix. The reconstruction process solves the following optimization problem:

$$\min_{s} \|s\|_0 \quad \text{s.t.} \quad y = \Phi \Psi s$$

wherein $\|s\|_0$ measures sparsity, reflecting that the reconstruction seeks the sparsest solution $s$ consistent with the measurements. Introducing a regularization term,

$$\min_{s} \|y - \Phi \Psi s\|_2^2 + \lambda R(s)$$

applies constraints to obtain a more accurate solution, wherein $R$ represents a regularization term encoding prior knowledge and $\lambda$ represents the regularization parameter.
The above formulas express the basic principle of the compressed-sensing reconstruction method: a dictionary represents the image to obtain its sparsity, nonlinear sampling acquires the compressed projection data, and finally the image is reconstructed by solving the sparse optimization problem, to which a regularization term can be added to introduce prior knowledge.
The target recognition and positioning module builds a convolutional neural network to perform end-to-end image recognition and segmentation based on a deep learning image segmentation method, realizes intelligent recognition of different types of tumors through a training network, and calculates three-dimensional contour, volume and space coordinate information of the tumors according to segmentation results;
In this embodiment, on the basis of preserving the target recognition algorithm based on template matching and feature extraction, a two-stage recognition network based on an attention mechanism is added to the target recognition and positioning module to improve the detection of small lesions.
First, a lightweight fully convolutional network is used to generate lesion proposal regions and a saliency heatmap. In a target detection task, the heatmap encodes the probability that different areas of the image contain targets, indicating possible target positions. In the two-stage target detection network of this embodiment, the saliency heatmap serves the following functions:
(1) In the first stage, a fully convolutional network is used to generate region proposals that may contain lesions and to produce the corresponding saliency heatmap, identifying the probability that each pixel in these proposal regions contains a target.
(2) The heatmap highlights the areas more likely to contain targets, guiding the second stage's proposal-region feature extraction so that the attention sub-network can focus on the likely target areas.
(3) In the target detection task, the saliency heatmap visualizes the network's confidence that targets are contained at different positions, and is an important intermediate result of target recognition prediction.
(4) By comparing with ground truth heatmap, the quality of the network prediction result can be intuitively evaluated, and the network training is guided.
(5) The saliency heatmap can also provide density information for lesion recognition and positioning, making the output clearer and more intuitive.
The saliency heatmap therefore guides network training and parameter optimization by providing a visual result of target position prediction, improving the model's detection and positioning of small lesions. A small-region recognition sub-network employing an attention mechanism is then constructed: it focuses on and extracts multi-scale features of the proposal regions, enhances the discriminative representation of the target region by learning attention weights over features of different semantic levels, and finally integrates global information to classify and detect each local proposal region.
The two-stage design can better learn and utilize global and local information to enhance the recognition capability of small lesions, so that the two-stage attention network is added into the target recognition module, the adaptability of the model to lesions of different scales can be improved, the robustness of the system is enhanced, and key support is provided for the subsequent lesion positioning.
Mathematical formulation of the two-stage attention recognition network:
The first stage generates proposal regions: $R = f_{\mathrm{rpn}}(I; \theta_{\mathrm{rpn}})$, wherein $I$ is the input image, $R$ is the set of proposal regions, $f_{\mathrm{rpn}}$ is the region proposal network used to extract proposal regions from the image, and $\theta_{\mathrm{rpn}}$ are its network parameters; this expression states that the RPN generates proposal regions from the input image.
The second stage performs attention feature extraction: $A = f_{\mathrm{att}}(F_R; \theta_{\mathrm{att}})$, wherein $A$ represents the attention features, $F_R$ represents the feature representation of the proposal regions, $f_{\mathrm{att}}$ represents the attention sub-network, and $\theta_{\mathrm{att}}$ are the parameters of the attention network; this expression states that the attention sub-network analyzes the features of the proposal regions and outputs an attention-weighted feature representation.
Finally, recognition and classification: $\hat{y} = f_{\mathrm{cls}}(A; \theta_{\mathrm{cls}})$, wherein $\hat{y}$ represents the prediction result, $f_{\mathrm{cls}}$ represents the classification branch, and $\theta_{\mathrm{cls}}$ are the parameters of the classification branch; this expression states that the final classification and recognition use the attention features.
The above formulas express the workflow of the two-stage network: the first stage uses the RPN to generate proposal boxes, the second stage uses the attention sub-network to extract proposal-box features, and finally the classification branch performs recognition.
The path planning and control module performs global path planning using an A* search algorithm, planning the robot's optimal puncture path from its current position to the focus by combining image information, anatomical structures and obstacle avoidance; it performs inverse kinematics calculation to generate accurate rotation angles for the robot's six-axis joints, producing motion control instructions, and compensates positioning errors through closed-loop control so that the robot drives accurately to the focus target position.
The motion control instructions generated by the control module are applied to the robot system execution module, which comprises a motion mechanism and a puncture execution device; the motion mechanism consists of a seven-joint robot with haptic and force feedback.
The puncture execution device is internally provided with a precision linear drive module and a dedicated biopsy needle; the linear module advances the needle through a screw drive and works with a force sensor to monitor the insertion force of the biopsy needle in real time. The biopsy needle has a retractable structure and automatically extends to perform the biopsy after reaching the target;
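A minimal sketch of this force-monitored needle advance, stepping the screw drive while checking the sensed insertion force against a safety threshold, is given below; read_force(), step_needle() and the numeric limits are hypothetical stand-ins, since the patent does not specify the device interface.

```python
def advance_needle(read_force, step_needle, target_depth_mm: float,
                   step_mm: float = 0.5, max_force_n: float = 5.0) -> float:
    """Advance the needle in small steps, halting on excessive force."""
    depth = 0.0
    while depth < target_depth_mm:
        force = read_force()               # real-time insertion force (N)
        if force > max_force_n:            # abnormal resistance: stop advance
            break
        step_needle(step_mm)               # one screw-drive step forward
        depth += step_mm
    return depth

# Simulated device: force rises gradually with depth, as soft tissue
# gives way to firmer tissue near the target.
state = {"depth": 0.0}
reached = advance_needle(
    read_force=lambda: 0.2 + 0.05 * state["depth"],
    step_needle=lambda mm: state.__setitem__("depth", state["depth"] + mm),
    target_depth_mm=60.0,
)
print("stopped at depth (mm):", reached)
```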
the process monitoring and feedback module integrates an ultrasonic imaging probe, acquiring ultrasonic tomographic images in real time during puncture and identifying the position of the tumor relative to the needle tip; it also acquires real-time joint positions and velocities from the robot control system to analyze the execution state of the needle, judge whether the puncture follows the planned path, and monitor for abnormal conditions.
Example 3
Referring to fig. 12, based on embodiment 1, this embodiment proposes a dual-source CT imaging tumor positioning system terminal device, where the terminal device 200 includes at least one memory 210, at least one processor 220, and a bus 230 connecting different platform systems.
Memory 210 may include readable media in the form of volatile memory, such as RAM 211 and/or cache memory 212, and may further include ROM 213.
The memory 210 further stores a computer program, which may be executed by the processor 220 so that the processor 220 carries out the tumor positioning method for dual-source CT imaging described in the embodiments above; the specific implementation is consistent with the implementation and technical effects described in those embodiments, and some contents are not repeated here. Memory 210 may also include a program/utility 214 having a set (at least one) of program modules 215, including but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Accordingly, the processor 220 may execute the computer programs described above, as well as the program/utility 214.
Bus 230 may be a local bus representing one or more of several types of bus structures including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or using any of a variety of bus architectures.
Terminal device 200 can also communicate with one or more external devices 240, such as a keyboard, a pointing device or a Bluetooth device, as well as with one or more devices that enable interaction with the terminal device 200, and/or with any device (e.g., router, modem, etc.) that enables the terminal device 200 to communicate with one or more other computing devices. Such communication may occur through the I/O interface 250. Also, terminal device 200 can communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet, through network adapter 260. Network adapter 260 may communicate with other modules of terminal device 200 via bus 230. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with terminal device 200, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage platforms, and the like.
Example 4
As shown in fig. 13, this embodiment provides a readable storage medium for the dual-source CT imaging tumor positioning system. The computer-readable storage medium stores instructions which, when executed by a processor, implement the dual-source CT imaging tumor positioning system; the specific implementation is consistent with the implementation and technical effects described in the embodiments above, and some contents are not repeated here.
Fig. 13 shows a program product 300 provided by this embodiment for implementing the above method, which may employ a portable compact disc read-only memory (CD-ROM) that includes program code and may be run on a terminal device such as a personal computer. However, the program product 300 of the present invention is not limited thereto; in this embodiment, the readable storage medium may be any tangible medium that can contain or store a program for use by, or in connection with, an instruction execution system, apparatus, or device. The program product 300 may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer-readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including but not limited to electromagnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, or any suitable combination of the foregoing. Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the C language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
The foregoing has shown and described the basic principles, main features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the above embodiments and descriptions merely illustrate the principles of the invention, and various changes and modifications may be made without departing from its spirit and scope. The scope of the invention is defined by the appended claims and their equivalents.

Claims (7)

1. A tumor positioning method for dual-source CT imaging, characterized by comprising the following steps:
S1, acquiring 80 kV and 140 kV X-ray scan images through dual-source CT scanning;
S2, fusing the 80 kV and 140 kV X-ray scan images by wavelet transformation to obtain a two-dimensional CT image;
S3, constructing a three-dimensional CT image from the two-dimensional CT image by a three-dimensional reconstruction method based on a conduction function;
S4, outputting three-dimensional coordinate information of the tumor region in the three-dimensional CT image by a deep-learning-based image segmentation method;
S5, carrying out global path planning according to the three-dimensional coordinate information by the A* search algorithm, and finding the optimal puncture path of the robot from the initial position to the lesion;
in S2, wavelet decomposition is carried out on the 80 kV and 140 kV X-ray scan images respectively; the high-frequency sub-bands of the 80 kV scan image and the low-frequency sub-band of the 140 kV scan image are extracted, the former serving as the high-frequency part of the fused image and the latter as its low-frequency part, and the fusion-enhanced two-dimensional CT image is obtained by applying the inverse wavelet transform to the combined high-frequency and low-frequency parts.
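The fusion rule above admits a minimal illustrative sketch (assuming Python with NumPy and the PyWavelets package; the function name, wavelet choice, and decomposition level are hypothetical and not specified by the patent):

    import numpy as np
    import pywt

    def fuse_dual_energy(img_80kv: np.ndarray, img_140kv: np.ndarray,
                         wavelet: str = "db4", level: int = 3) -> np.ndarray:
        # Multi-level 2-D wavelet decomposition of both scans.
        c80 = pywt.wavedec2(img_80kv, wavelet, level=level)
        c140 = pywt.wavedec2(img_140kv, wavelet, level=level)
        # Low-frequency (approximation) band from the 140 kV scan,
        # high-frequency (detail) bands from the 80 kV scan.
        fused = [c140[0]] + list(c80[1:])
        # Inverse wavelet transform yields the fusion-enhanced 2-D CT image.
        return pywt.waverec2(fused, wavelet)

Taking the approximation band from the 140 kV scan preserves the clear bone information, while the 80 kV detail bands carry the soft-tissue contrast, matching the division described in S2.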
2. The tumor positioning method for dual-source CT imaging according to claim 1, wherein in S1, 80 kV and 140 kV X-ray scan images are generated by dual-source CT scanning, the 80 kV X-ray scan image being characterized by high contrast for soft tissue and low-density structures and the 140 kV X-ray scan image by clear bone tissue information, and the X-ray scan image with the optimal display effect for the soft tissue and bone tissue of interest is used as the base image for the image fusion process.
3. The tumor positioning method for dual-source CT imaging according to claim 1, wherein S3 comprises the following sub-steps:
S301, according to the acquired raw projection data of the CT scan, calculating the path and the intensity attenuation of the X-rays passing through the body slice at each projection angle;
S302, establishing a radiation conduction equation model, $\partial I(\mathbf{r}, \boldsymbol{\omega}) / \partial s = -\mu(\mathbf{r})\, I(\mathbf{r}, \boldsymbol{\omega})$, describing the propagation law of X-rays in the human body, wherein $I(\mathbf{r}, \boldsymbol{\omega})$ represents the light intensity at position $\mathbf{r}$ in direction $\boldsymbol{\omega}$, $\mu(\mathbf{r})$ represents the linear attenuation coefficient of the X-ray at position $\mathbf{r}$, $\boldsymbol{\omega}$ represents the propagation direction of the ray, and $\partial/\partial s$ represents the partial derivative of the intensity along the ray path $s$; integrating this equation along a ray yields the Beer–Lambert relation $I = I_0 \exp(-\int \mu \, \mathrm{d}s)$, which connects the model to the measured projections;
S303, for each voxel, establishing a plurality of ray conduction equation models according to the paths through that voxel in each set of projection data;
S304, iteratively solving the ray conduction equation models established for each voxel by a finite element method, and calculating the attenuation coefficients consistent with the attenuation information of the projection data (an illustrative sketch follows this claim);
S305, accurately reconstructing a three-dimensional CT image of the interior of the human body from the attenuation coefficients of all voxels.
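The iterative solve of S304 can be illustrated with a rough sketch (assuming Python with NumPy; the claim prescribes a finite element method, for which this algebraic row-action update is a simplified stand-in, and all names are hypothetical):

    import numpy as np

    def reconstruct_attenuation(A: np.ndarray, p: np.ndarray,
                                n_iter: int = 20, relax: float = 0.5) -> np.ndarray:
        # A[i, j]: intersection length of ray i with voxel j; p[i]: measured
        # log-attenuation along ray i. Iteratively solve A @ mu = p for mu.
        mu = np.zeros(A.shape[1])
        row_norms = (A ** 2).sum(axis=1)
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                if row_norms[i] == 0:
                    continue
                # Distribute the residual of ray i over the voxels it crosses.
                mu += relax * (p[i] - A[i] @ mu) / row_norms[i] * A[i]
            np.clip(mu, 0.0, None, out=mu)  # attenuation coefficients are non-negative
        return mu

The recovered per-voxel attenuation coefficients map directly to CT values, from which the three-dimensional volume of S305 is assembled.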
4. The tumor positioning method for dual-source CT imaging according to claim 1, wherein S4 comprises the following sub-steps:
S401, collecting existing CT image data of different tumor types, and annotating the segmented lesion regions;
S402, establishing an image semantic segmentation network based on a convolutional neural network, the network comprising:
an encoder, $F_l = f_{\mathrm{enc}}(F_{l-1})$, $l = 1, \dots, L$, wherein $F_l$ represents the feature map of the $l$-th encoder layer, $f_{\mathrm{enc}}$ represents the convolution and pooling operations, and $L$ represents the total number of encoder layers;
a decoder, $G_m = f_{\mathrm{dec}}(G_{m-1})$, $m = 1, \dots, M$, wherein $G_m$ represents the feature map of the $m$-th decoder layer, $f_{\mathrm{dec}}$ represents the upsampling and convolution operations, and $M$ represents the total number of decoder layers;
S403, training the network with the loss function $\mathcal{L}(\hat{Y}, Y) = \frac{1}{N} \sum_{i=1}^{N} \mathrm{CE}(\hat{y}_i, y_i)$, wherein $\mathcal{L}$ represents the error between the predicted result $\hat{Y}$ and the ground truth $Y$, $\hat{Y}$ represents the pixel-level segmentation predicted by the segmentation network for the image, $Y$ represents the manually annotated pixel-level segmentation of the image, $\mathrm{CE}$ represents the cross-entropy loss function evaluating the prediction for a single pixel, $y_i$ represents the annotated class of the $i$-th pixel, $\hat{y}_i$ represents the predicted class of the $i$-th pixel, and $N$ represents the total number of pixels in the image (an illustrative sketch follows this claim);
S404, inputting the three-dimensional CT image into the image semantic segmentation network for inference, and outputting the predicted segmentation result of the tumor region;
S405, calculating the three-dimensional contour, volume, and spatial three-dimensional coordinate information of the tumor according to the predicted segmentation result of the tumor region.
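A minimal sketch of the encoder-decoder of S402 and the pixel-wise cross-entropy loss of S403 (assuming Python with PyTorch; the two-layer depth and channel widths are illustrative placeholders, not the patented network):

    import torch.nn as nn

    class EncoderDecoder(nn.Module):
        def __init__(self, in_ch: int = 1, n_classes: int = 2, base: int = 16):
            super().__init__()
            # Encoder layers: convolution + pooling, F_l = f_enc(F_{l-1}).
            self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
            self.enc2 = nn.Sequential(nn.MaxPool2d(2),
                                      nn.Conv2d(base, 2 * base, 3, padding=1), nn.ReLU())
            # Decoder layer: upsampling + convolution, G_m = f_dec(G_{m-1}).
            self.dec1 = nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear",
                                                  align_corners=False),
                                      nn.Conv2d(2 * base, base, 3, padding=1), nn.ReLU())
            self.head = nn.Conv2d(base, n_classes, 1)  # per-pixel class logits

        def forward(self, x):
            return self.head(self.dec1(self.enc2(self.enc1(x))))

    # Pixel-wise cross-entropy over all N pixels, as in the loss of S403.
    loss_fn = nn.CrossEntropyLoss()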
5. The method according to claim 4, wherein in S5, the optimal puncture path from the initial point to the target lesion is searched by the A* algorithm based on the three-dimensional contour, volume, and spatial three-dimensional coordinate information, and the position information of each anatomical obstacle, the defined forbidden zones, and the cost function are identified from the three-dimensional coordinate information.
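A compact sketch of the claimed A* search over a 3-D occupancy grid (assuming Python with NumPy; the plain Euclidean step cost and heuristic stand in for the claimed cost function, which additionally weights anatomical obstacles and forbidden zones):

    import heapq
    import itertools
    import numpy as np

    def a_star_3d(grid: np.ndarray, start: tuple, goal: tuple):
        # grid[z, y, x] != 0 marks an anatomical obstacle or forbidden zone.
        h = lambda p: float(np.linalg.norm(np.subtract(p, goal)))  # admissible heuristic
        tie = itertools.count()  # tie-breaker so the heap never compares positions
        open_set = [(h(start), next(tie), 0.0, start, None)]
        came_from, g_best = {}, {start: 0.0}
        while open_set:
            _, _, g, cur, parent = heapq.heappop(open_set)
            if cur in came_from:
                continue  # already expanded at equal or lower cost
            came_from[cur] = parent
            if cur == goal:
                path = [cur]  # walk back to the entry point
                while came_from[path[-1]] is not None:
                    path.append(came_from[path[-1]])
                return path[::-1]
            for step in itertools.product((-1, 0, 1), repeat=3):
                if step == (0, 0, 0):
                    continue
                nxt = tuple(c + d for c, d in zip(cur, step))
                if any(c < 0 or c >= s for c, s in zip(nxt, grid.shape)):
                    continue  # outside the imaged volume
                if grid[nxt]:
                    continue  # blocked voxel
                ng = g + float(np.linalg.norm(step))  # 26-connected step cost
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
        return None  # no collision-free path exists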
6. A tumor positioning system for dual-source CT imaging, implementing the tumor positioning method according to any one of claims 1-5, characterized by comprising a dual-source CT scanning module, an image fusion processing module, a three-dimensional reconstruction module, a target identification and positioning module, and a path planning and control module;
the dual-source CT scanning module is provided with two X-ray generation systems that simultaneously generate 80 kV and 140 kV X-rays for scanning and imaging;
the image fusion processing module performs image fusion on the two groups of scan images through wavelet transformation, using an optimization strategy that retains the high-frequency information of the 80 kV CT image and the low-frequency information of the 140 kV CT image, to generate a new fusion-enhanced two-dimensional CT image that preserves clear information of both soft tissue and bone structures;
the three-dimensional reconstruction module, based on a three-dimensional reconstruction algorithm using a conduction function, calculates the linear attenuation coefficient of each voxel from the two-dimensional CT image, simulates the transmission of X-rays through the human body with the ray conduction equation model, iteratively solves the equation set, and calculates the CT value of each voxel so as to obtain a three-dimensional CT image of the interior of the human body, restoring the three-dimensional distribution of lesions and anatomical structures;
the target recognition and positioning module, based on a deep-learning image segmentation method, builds a convolutional neural network for end-to-end image recognition and segmentation, realizes intelligent recognition of different tumor types through network training, and calculates the three-dimensional contour, volume, and spatial coordinate information of the tumor from the segmentation results;
the path planning and control module performs global path planning using the A* search algorithm, planning the optimal puncture path of the robot from its current position to the lesion in combination with the image information, anatomical structures, and obstacle avoidance; it performs inverse kinematics calculation to generate accurate joint angles for the six-axis robot, generates the motion control instructions, and compensates positioning errors through closed-loop control, so that the robot drives accurately to the lesion target position.
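The closed-loop error compensation of this module might be sketched as follows (assuming Python with NumPy; read_joints and write_joints are hypothetical robot-controller callbacks, and the proportional correction is an illustrative stand-in for the actual control law):

    import numpy as np

    def servo_to_target(read_joints, write_joints, q_target,
                        kp: float = 0.5, tol: float = 1e-3, max_steps: int = 200) -> bool:
        # Repeatedly measure the joint angles, compare them against the target
        # from inverse kinematics, and command a proportional correction.
        q_target = np.asarray(q_target, dtype=float)
        for _ in range(max_steps):
            q = np.asarray(read_joints(), dtype=float)  # measured joint angles (rad)
            err = q_target - q                          # residual positioning error
            if np.linalg.norm(err) < tol:
                return True                             # target reached within tolerance
            write_joints(q + kp * err)                  # partial correction command
        return False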
7. The tumor positioning system for dual-source CT imaging according to claim 6, further comprising a robot system execution module and a process monitoring and feedback module;
the motion control instructions generated by the control module are applied to the robot system execution module, which comprises a motion mechanism and a puncture execution device; the motion mechanism consists of a seven-joint robot with haptic and force feedback;
the puncture execution device is internally provided with a precision linear drive module and a dedicated biopsy puncture needle; the linear module realizes needle advancement through screw transmission and, in cooperation with a force sensor, monitors the insertion force of the biopsy needle in real time; the biopsy needle has a retractable structure and automatically extends to perform the biopsy after reaching the target;
the process monitoring and feedback module integrates an ultrasonic imaging probe, acquires ultrasonic tomographic images in real time during the puncture, identifies the relative position of the tumor and the needle tip, and acquires the real-time joint positions and velocities from the robot control system, for analyzing the execution state of the needle, judging whether the puncture process follows the planned path, and monitoring abnormal conditions.
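The planned-path conformance check of the monitoring module might look like this minimal sketch (assuming Python with NumPy, millimetre units, and a hypothetical deviation threshold):

    import numpy as np

    def path_deviation(tip_pos, planned_path, threshold_mm: float = 2.0):
        # Distance from the needle tip measured by ultrasound to the nearest
        # waypoint of the planned puncture path; large deviations are abnormal.
        d = np.linalg.norm(np.asarray(planned_path, dtype=float)
                           - np.asarray(tip_pos, dtype=float), axis=1)
        deviation = float(d.min())
        return deviation, deviation > threshold_mm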
CN202410044891.4A 2024-01-12 2024-01-12 Tumor positioning method and system for dual-source CT imaging Active CN117547353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410044891.4A CN117547353B (en) 2024-01-12 2024-01-12 Tumor positioning method and system for dual-source CT imaging


Publications (2)

Publication Number Publication Date
CN117547353A (en) 2024-02-13
CN117547353B (en) 2024-03-19



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant